I've been looking at designing an SfM pipeline that's near-linearly scalable and can run in parallel. I have a background in programming and have worked on a number of distributed systems. I'm just relaying my thoughts here to see what could be achieved with little effort.

I have never implemented SIFT or anything related to SfM processing, not even on a single computer. So far I understand some basics of the processing pipeline.

My proposed architecture for parallelization is to redesign the pipeline as a set of operations on files on top of the Hadoop framework with a Ceph file backend, where the steps are designed as 100% parallelizable map/reduce steps, of course significantly subdividing the work. I'm not suggesting implementing each algorithm from scratch, but rather trying to use best-of-class library implementations for each substep and converting the intermediate output files to a format the next process understands.
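To make the map/reduce framing concrete, here is a minimal pure-Python sketch of the first two stages. Everything here is a placeholder: the "descriptors" are toy integers and the matcher is trivial equality; a real mapper would call a detector such as VLFeat's SIFT and a nearest-neighbour matcher with a ratio test.

```python
from itertools import combinations

# Stage 1 (map, one independent task per image): extract features.
def extract_features(image_id, image):
    # Placeholder: a real mapper would run a feature detector here;
    # we treat each "image" as a ready-made descriptor list.
    return image_id, list(image)

# Stage 2 (map, one independent task per candidate image pair): match.
def match_pair(pair, features):
    a, b = pair
    # Toy matcher: exact descriptor equality. A real step would do
    # nearest-neighbour matching with a ratio test.
    return pair, [d for d in features[a] if d in features[b]]

images = {"img1.jpg": [1, 2, 3], "img2.jpg": [2, 3, 4], "img3.jpg": [9]}

# On Hadoop these two loops would be independent map tasks over files
# on the shared (Ceph) file system, with the dicts as intermediate files.
features = dict(extract_features(i, img) for i, img in images.items())
matches = dict(match_pair(p, features) for p in combinations(features, 2))
```

The point of the sketch is only the shape of the computation: every task reads and writes files independently, so both stages scale with the number of workers.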

Benefits:

  • It can scale near-linearly.
  • There are clear intermediate milestones in processing.
  • It supports, and I think inspires, research on new processing techniques using the generated intermediate data, greatly reducing research time: the pipeline no longer runs as one single application, but breaks the processing open into steps simply by reading intermediate files.
  • It makes it easier to swap or extend algorithms and libraries in the process (on large data).
  • It also makes access easier for end users and other applications, as all data now appears on one single file system.
  • All open-source, non-GUI processing?
  • Multiple pipelines can run in parallel using different algorithms or settings to compare results.
  • Parts of the pipeline can be rerun (historically) if algorithms improve substantially.

Analysis:

SfM can be described as the following pipeline:

  1. Find features in images
  2. Match features to find matching images
  3. Estimate fundamental matrix
  4. Estimate essential matrix
  5. Recover projection matrices
  6. Triangulation
  7. Bundle adjustment

The result is a sparse point cloud with known projection matrices (the cameras). Undistortion of images is not performed here, because the pixel stretching it introduces would affect feature finding in the images.

Steps 1-5 are definitely parallelizable; the others should be evaluated in more detail.
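As an illustration of step 3, here is a sketch of the normalized eight-point algorithm on synthetic, noise-free data. The intrinsics, poses, and point cloud are made-up test values, and a production pipeline would use a robust RANSAC-wrapped estimator from one of the libraries listed below rather than this direct linear solve.

```python
import numpy as np

# Synthetic scene: 3D points seen by two cameras (toy values).
rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(-1.0, 1.0, (3, 24)), np.ones((1, 24))])
X[2] += 5.0  # keep the points in front of both cameras

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # camera 2

def project(P, X):
    x = P @ X
    return x / x[2]

x1, x2 = project(P1, X), project(P2, X)

def normalize(x):
    # translate the centroid to the origin, scale mean distance to sqrt(2)
    c = x[:2].mean(axis=1)
    s = np.sqrt(2.0) / np.linalg.norm(x[:2] - c[:, None], axis=0).mean()
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    return T @ x, T

def fundamental_eight_point(x1, x2):
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # each correspondence contributes one row of the system A f = 0
    A = np.stack([n2[0] * n1[0], n2[0] * n1[1], n2[0],
                  n2[1] * n1[0], n2[1] * n1[1], n2[1],
                  n1[0], n1[1], np.ones(n1.shape[1])], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # enforce rank 2: a valid fundamental matrix is singular
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1  # undo the normalization
    return F / np.linalg.norm(F)

F = fundamental_eight_point(x1, x2)
# epipolar constraint x2^T F x1 = 0 should hold for every correspondence
residual = np.abs(np.sum(x2 * (F @ x1), axis=0)).max()
```

Note that the step is embarrassingly parallel across image pairs: each pair's correspondences can live in its own intermediate file and be solved by an independent task.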

After these steps, the original images can be analyzed in high detail to extract more information and generate denser point clouds. An intermediate output at this point could be bundler.out and list.txt, together with undistorted images.

Libraries and applications that could be used for some of these steps:

  • OpenMVG
  • libmv?
  • Ceres Solver
  • VLFeat
  • code from MicMac

Additional optimizations:

  • At step 2: use GPS positions, altitude (distance to object?) and angles to construct a frustum and determine whether there is sufficient overlap and a sufficiently low angle of incidence for a good match to occur.
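A crude version of this pair-selection heuristic can be sketched as follows. The circular ground-footprint approximation, the default field of view, and the 35° incidence threshold are all assumptions for illustration, not values from any real photogrammetry tool.

```python
import math

def footprint_radius(altitude, fov_deg):
    # approximate the ground footprint of a nadir view as a circle
    return altitude * math.tan(math.radians(fov_deg) / 2.0)

def candidate_pair(cam_a, cam_b, fov_deg=60.0, max_angle_deg=35.0):
    """cam = (x, y, altitude, view_direction) in a local metric frame."""
    xa, ya, za, da = cam_a
    xb, yb, zb, db = cam_b
    # overlap test: do the two ground footprints intersect?
    ground_dist = math.hypot(xb - xa, yb - ya)
    overlap = ground_dist < (footprint_radius(za, fov_deg) +
                             footprint_radius(zb, fov_deg))
    # angle-of-incidence test: viewing directions must be similar enough
    dot = sum(u * v for u, v in zip(da, db))
    norm_a = math.sqrt(sum(u * u for u in da))
    norm_b = math.sqrt(sum(v * v for v in db))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b)))))
    return overlap and angle <= max_angle_deg

nadir = (0.0, 0.0, 100.0, (0.0, 0.0, -1.0))   # straight-down view at 100 m
near = (50.0, 0.0, 100.0, (0.0, 0.0, -1.0))   # 50 m away, same orientation
far = (300.0, 0.0, 100.0, (0.0, 0.0, -1.0))   # too far for overlap
```

Filtering pairs this way before matching turns the quadratic all-pairs matching stage into something much closer to linear in the number of images, which matters for the scalability goal above.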


Replies to This Discussion

Hi Gerard,

I see this is an old thread and I am dealing with a similar situation (the major difference is that it is simultaneous multi-camera rather than SfM). Did you end up finding a good solution?

Thanks,

Don

Hi Don,

Well, I worked on writing some software on my own, but that's a big job. In the end I just used Agisoft PhotoScan and let it run longer. If you really have that much data, with very large photos and many of them, then you should look into Acute3D. Obviously there's a price tag there. Look on YouTube for Aerometrex, and on their site, for some example videos.

This is just the processing stage, though. You also need to be clear about the viewing solution: what formats it accepts, and whether you need LOD to make the data viewable. Acute3D creates "cubes" of data covering very large areas at different levels of detail, so you can move through the data quite seamlessly. Other products may require intermediate steps or do monolithic processing, and even then you may run into memory issues. I believe PhotoScan allows you to 'chunk' the data up and then seam it together later, but I never got to projects of that size.

Thank you very much for the quick and helpful reply!


© 2017   Created by Erle Ellis. Content is made available under CC BY 4.0.