Improve project-to-plane functionality #3371
Comments
Hi @rbeyer, is what you're looking for here the ability to project the pixels of a target onto an arbitrary plane located anywhere in 3D space (inside, outside, or tangent to the body, with any orientation)? If so, how would you want to specify the plane to project onto? Also, what kind of projection would you like onto this arbitrary plane: point-perspective, or something else? If these questions seem like they're leading down the wrong path, how do you see this working?
I would advocate for the mathematical definition: a point on the plane and a normal vector (not necessarily a unit vector).
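A minimal sketch of that representation (illustrative only, assuming nothing about ISIS internals): the plane is any point p0 on it plus a non-zero normal n, and a 3D point x lies on the plane when dot(n, x - p0) == 0.

```python
# Hypothetical illustration, not ISIS code.
import numpy as np

p0 = np.array([0.0, 0.0, 0.0])       # any point on the plane (e.g., the target center)
n = np.array([1.0, 2.0, -0.5])       # any non-zero normal vector
n_hat = n / np.linalg.norm(n)        # normalize once for convenience

def on_plane(x, tol=1e-9):
    """True if the 3D point x lies on the plane defined by p0 and n_hat."""
    return abs(np.dot(n_hat, x - p0)) < tol
```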
@kberryUSGS,
Yes.
Great question. Not sure. :) This might be the hardest part: how to specify the plane. Ideally you would be able to express it in a completely arbitrary way, while also having some reasonable shortcuts. The use case this came up in was a plane centered on the target center and perpendicular to the view vector, and that case should be possible without the user doing a bunch of vector math, because there is always a "solution" that can be derived from the position and pose data you already have. Just doing that would satisfy the use case I was thinking of. But if you're going to enable this, you might as well provide the ability to designate an arbitrary plane. How to do that? Not sure. Maybe require a SPICE state vector or something for the purely arbitrary case, so someone who wants a completely arbitrary plane would use cspice or SpiceyPy to produce something that can be fed to this program.
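As a rough illustration of that shortcut (not a proposal for the actual interface), the target-centered, view-perpendicular plane can be derived from SPICE data alone; the meta-kernel, time, and body/observer names below are placeholders.

```python
# Hedged sketch, assuming SpiceyPy and placeholder kernel/body names.
import numpy as np
import spiceypy as spice

spice.furnsh("my_metakernel.tm")                  # hypothetical meta-kernel
et = spice.str2et("2017-09-01T12:00:00")

# Observer position relative to the target center (J2000, light-time corrected).
obs_pos, _ = spice.spkpos("CASSINI", et, "J2000", "LT+S", "SATURN")
obs_pos = np.asarray(obs_pos)

plane_point = np.zeros(3)                         # plane passes through the target center
plane_normal = obs_pos / np.linalg.norm(obs_pos)  # anti-parallel to the look vector
```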
Yes, I think this should just be point-perspective: take each ray that corresponds to a pixel, trace it out of the camera, and intersect it with the arbitrary plane.
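A minimal sketch of that per-pixel step, with placeholder inputs (a real implementation would take the camera position and per-pixel look direction from the instrument model):

```python
# Hedged sketch only: intersect one pixel's look ray with the arbitrary plane.
import numpy as np

def project_ray_to_plane(camera_pos, look_dir, plane_point, plane_normal):
    """Return the 3D intersection of a pixel ray with the plane, or None."""
    denom = np.dot(plane_normal, look_dir)
    if abs(denom) < 1e-12:
        return None                    # ray is (nearly) parallel to the plane
    t = np.dot(plane_normal, plane_point - camera_pos) / denom
    if t < 0:
        return None                    # intersection is behind the camera
    return camera_pos + t * look_dir
```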
Thanks for the information! We won't be able to tackle this work in the current sprint, but we now have enough information to scope the work and make a plan for moving forward when we pick this up again.
Sounds good.
Requirements:
MVP:
MVP (n+1):
Example Image Data:
Initial steps:
NOTE: We are working on this issue. If anyone has input, please post here. Preliminary questions:
Yes, that is the primary motivator, but there is also the possibility that this could be useful for an object before a detailed shape model is available.
I would think so, but if you have concerns about this, let's talk.
I'm not really sure that any metadata of this nature is possible off the limb. The original intent was to be able to measure the elevation of atmospheric layers from the limb, so this is really just about projecting the pixels to the right place.
I suspect not many, maybe qview and then possibly an isis2something. It is possible that pixel-based processing might be used, but I don't think you'd work with the resulting image and consider it any kind of true map projection.
Yes.
We had some additional questions after reading your responses:
I go into more detail in answers below, but I think the answers are "yes" to both.
If there is a cloud layer floating at some altitude, it will show up as a brighter line (curve) of pixels parallel to the limb, and a scientist wants to be able to measure its "elevation" from the surface. To do that, they need the pixels in the projected image to support reliable measurement.
Given that the "usual" pixels-on-the-ground measurement capabilities of qview (or any other GIS application) wouldn't know what to do with this projected image, my thought was that simply using qview to measure the number of pixels from the limb to the cloud layer would do it; if the "size" of the pixels is dependable, you can just multiply. This is effectively what is done now with the raw camera images, but as you can imagine, a scientist has to do a lot of geometry and accounting to get a good value, and the hope is that this project-to-plane capability would make that easier.
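A toy example of the measurement this should enable (all numbers made up), assuming the projected image has a dependable, uniform scale:

```python
# Made-up numbers; assumes a uniform scale in the projected plane.
pixels_limb_to_layer = 40      # counted in qview, for example
pixel_scale_km = 1.5           # kilometers per pixel of the projected image
layer_elevation_km = pixels_limb_to_layer * pixel_scale_km   # 60 km
```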
That's a good point: maybe we don't want a point projection but more of an orthographic projection onto the plane, to enable measurement?
Aside from the point/ortho projection issues you brought up above, even if point projection were still a good idea, it wouldn't work here, because we want to measure distances off the limb. At the very least, the "plane" in 3D space needs to include the center of the object, so that distances measured at the limb are perpendicular to the surface there. Attaching the plane to the point on the surface closest to the observer would not enable good distance measurements off the limb.
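For reference, a sketch of that orthographic alternative with a plane through the body center (placeholder inputs, not a design commitment): each 3D point is dropped along the plane normal onto the plane.

```python
# Hedged sketch: orthographic projection onto a plane through the body center.
# plane_point would be the body center; plane_normal is assumed to be unit length.
import numpy as np

def ortho_project(point, plane_point, plane_normal):
    """Drop `point` along the plane normal onto the plane."""
    offset = np.dot(plane_normal, point - plane_point)
    return point - offset * plane_normal
```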
@kberryUSGS @scsides Can we close? |
The changes have been merged, and there are no outstanding comments. The author can reopen after a release comes out with the new feature. |
Description
There is currently a planar map projection; allow the user to define an arbitrary plane instead of one centered at the body's center of mass and aligned equatorially.
The planar map projection was created to facilitate map projection onto a ring plane. However, for small irregular bodies, or even for exploring atmospheric layers visible on a planetary limb, the ability to specify an arbitrary plane (in some cases perpendicular to the look vector) for map projection has value.