Last week I presented a talk on Ecosynth at the American Society for Photogrammetry and Remote Sensing (ASPRS) conference in downtown Baltimore.  My talk presented aerial Ecosynth as a remote sensing sampling tool capable of mapping canopy structural and spectral traits at high spatial and temporal resolution.  I was in a session on LIDAR methods alongside several speakers whose work I knew from my research into LIDAR methods and quality analysis.  A link to my presentation slides (PPTX) is at the bottom of this post.

There was a lot of discussion about using UAS (Unmanned Aerial Systems) for remote sensing and about automated 3D modeling from photos using computer vision and photogrammetry.  I saw several presentations about generating large 3D point cloud models (>10 million points) from regular images using a technique called Semi-Global Matching (SGM; Hirschmüller 2005), which provides high-performance, per-pixel 3D modeling for sequential sets of stereo-pairs.  The results are pretty awesome: 3D point clouds ('info clouds') with densities >300 points m⁻² and multi-spectral photo color on each point.  However, I couldn't help but notice that the results are very different from the point clouds we generate in the Ecosynth project using Bundler, VSFM, or Photoscan computer vision Structure from Motion (SFM).
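For readers curious what per-pixel stereo matching looks like in practice, here is a minimal sketch using OpenCV's StereoSGBM implementation, a variant of Hirschmüller's method. The filenames and tuning parameters are illustrative placeholders of my own, not anything from the conference presentations:

    # Minimal SGM-style stereo sketch using OpenCV's StereoSGBM
    # (a semi-global matching variant). Filenames and parameters
    # are illustrative placeholders.
    import cv2

    left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,  # search range; must be divisible by 16
        blockSize=5,
        P1=8 * 5 ** 2,       # penalty for small disparity changes
        P2=32 * 5 ** 2,      # penalty for large disparity changes
    )

    # compute() returns fixed-point disparities scaled by 16
    disparity = sgbm.compute(left, right).astype("float32") / 16.0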

With stereo-pair photogrammetric SGM, the result is a point cloud whose density can closely match the pixel resolution of the input images, with color extracted for each point based on the known geometry of the images that observe it. The point cloud differs from the computer vision SFM (CV-SFM) point clouds we are used to looking at: because a point is placed for each image pixel, the scene looks more like a drape of points, reminding me of pin art toys (at right). Each point is placed based on precise knowledge of the geometry of the scene and of the cameras/images that viewed it. In other words, photogrammetric SGM attempts to place a 3D multispectral point at every single pixel of the input stereo-pair, and in general points will not sit 'on top' of each other.
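Continuing the sketch above, turning a disparity map into one colored 3D point per pixel is a single reprojection step, given the 4x4 disparity-to-depth matrix Q that stereo calibration produces (Q is assumed here rather than computed):

    import numpy as np

    # Q is the 4x4 disparity-to-depth matrix from stereo calibration
    # (e.g., an output of cv2.stereoRectify); assumed available here.
    points = cv2.reprojectImageTo3D(disparity, Q)  # one XYZ per pixel
    colors = cv2.imread("left.jpg")                # per-pixel BGR color
    valid = disparity > 0                          # drop unmatched pixels
    xyz_rgb = np.hstack([points[valid], colors[valid]])  # N x 6 'info cloud'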

CV-SFM point clouds, on the other hand, are not so uniform. Instead they appear as a non-uniform splattering of points, generally with more points in texturally complex areas and few or no points in texturally simple areas, and with points appearing above or below other points.  For example, the image at left shows an overhead view of the 3D-RGB point cloud at our Herbert Run site (bottom), produced using Photoscan and post-processed with our Ecosynth pipeline, along with the point cloud density of the same area (top). Point densities are very high in areas of complex land cover, e.g., the forest in the upper right and the line of rip-rap and weeds near the middle, and lower in texturally simple areas like the grassy areas at the top of the scene and along the road.  In some areas there are no points at all, such as on top of a smooth metal roof and in heavily shadowed parts of the canopy (blue at top).
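This non-uniformity is easy to quantify once a cloud is georeferenced: bin the points into a horizontal grid and count per cell. A minimal sketch, assuming an N x 3 NumPy array of point coordinates in meters (the array and the 1 m cell size are my own illustration, not the Ecosynth pipeline):

    import numpy as np

    def density_map(points, cell=1.0):
        """Points per square meter on a horizontal grid of size `cell`."""
        x, y = points[:, 0], points[:, 1]
        xbins = np.arange(x.min(), x.max() + cell, cell)
        ybins = np.arange(y.min(), y.max() + cell, cell)
        counts, _, _ = np.histogram2d(x, y, bins=[xbins, ybins])
        return counts / cell ** 2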

So what does all this mean?  From my point of view, while photogrammetric SGM and computer vision SFM can be used for the same objective, generating accurate 3D-spectral point clouds from images, we know a lot less about the properties and characteristics of the CV-SFM point clouds.  With the SGM method, or even with LIDAR, we have a good sense of the geometric properties of a point (e.g., it represents a pixel or a laser spot of relatively well-known spatial resolution).  The same cannot currently be said of CV-SFM point clouds.  I think (as does my committee!) that assessing these properties will help us better understand how to use these data, how they compare to similar data collected with different instruments, and, perhaps more importantly, how we can use computer vision SFM to study forests.

And with that, I need to get back to work writing and researching that very topic!

Link to my PPTX slides from the presentation: Dandois_Ellis_Ecosynth_ASPRS_2013_web.pptx (download)

References:

Hirschmüller, H. (2005). Accurate and efficient stereo processing by semi-global matching and mutual information. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), Vol. 2. IEEE.

ASPRS Conference image from program cover: http://www.asprs.org/Conferences/Baltimore-2013/Program/Preliminary...

Pin Art image: http://en.wikipedia.org/wiki/File:Pin_art,_Flickr.jpg