Warning: Lots of hi res images!

This is my Ecosynth scan of the UMBC campus (view in interactive 3D!).  I've been thinking about doing a large combined scan like this one for a while, and this past autumn we finally had the free resources and time to do it.  As far as I know, this is the first full-color 3D scan of everything inside the loop.  While the model is certainly a fascinating rendition of campus, there are also possibilities for performing scientific analysis on this model; measuring the percent green space inside the loop springs to mind as an easy one.  The following is my write-up of this mission, originally submitted for the DIYDrones T3 contest.
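As a sketch of how that green-space measurement might work on the colored point clouds below: classify each point by an excess-green index on its RGB color and take the green fraction. The point format, the threshold, and the ExG approach are my assumptions here, not the lab's actual method.

```python
# Sketch: estimate percent green space from a colored point cloud.
# The (x, y, z, r, g, b) tuple format and the ExG threshold are
# assumptions for illustration, not the lab's actual pipeline.

def green_fraction(points, threshold=20):
    """points: iterable of (x, y, z, r, g, b) tuples.
    Returns the fraction whose excess-green index
    ExG = 2g - r - b exceeds the threshold."""
    points = list(points)
    if not points:
        return 0.0
    green = sum(1 for (_, _, _, r, g, b) in points
                if 2 * g - r - b > threshold)
    return green / len(points)

# Toy example: two grass-colored points, two pavement-colored points.
cloud = [
    (0, 0, 0,  60, 140, 50),   # grass
    (1, 0, 0,  70, 150, 60),   # grass
    (2, 0, 0, 120, 120, 120),  # concrete
    (3, 0, 0, 200, 200, 200),  # white roof
]
print(green_fraction(cloud))  # 0.5
```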


The Rig & The Mission:

The rig is an Arducopter built on a Mikrokopter Okto frame.  Parts list available here.  The specs are as follows:

  • 12" APC Props
  • MK3638 Motors
  • jDrones 30A ESCs
  • jDrones Power Distro Ring
  • Mikrokopter Okto XL Frame
  • Mikrokopter Hilander Landing Gear
  • APM 2.5 running 2.9.1b
  • 3DR Telem Radio
  • Spektrum AR7000 + DX7S
  • Garmin Astro GPS Dog Tracker
  • Ziploc Tupperware Dome
  • Four Parallel 5000mAh 4S Lipos

With this setup it can fly safely for 30 minutes, covering a maximum linear distance of 8 kilometers at a target velocity of 7 m/s.
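A quick sanity check on those numbers, using only the figures from the text (the 8 km limit is below the theoretical constant-cruise range, which leaves margin for climb, turns, and reserve):

```python
# Endurance vs. mission-length sanity check; all inputs are from
# the write-up above.

ENDURANCE_S = 30 * 60   # 30 minutes of safe flight time
CRUISE_MPS = 7.0        # target ground speed

theoretical_range_m = ENDURANCE_S * CRUISE_MPS
print(theoretical_range_m / 1000)    # 12.6 km at constant cruise

mission_m = 6000                     # each of the three missions
print(mission_m / CRUISE_MPS / 60)   # ~14.3 minutes per mission
```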

The camera is a Canon PowerShot ELPH 520, mounted in a waterproof case.  The case is no longer waterproof because it has been lightened; its main function is to provide a stable and consistent mount for the camera.  The case is mounted to the underside of the frame using M3 plastic standoffs and rubber vibration dampers.  Because CHDK is not available for this model of camera, the shutter button is held down with a thin velcro strap.  In sequential shooting mode, this results in a constant 2 still frames/second.

Due to the distances involved, the campus had to be divided into three missions of approximately 6 km each.  The mission specs were 100 meters above ground level (well above rooftop level but also well below 400 ft), with tracks 39 m apart for 75% side overlap between photos.
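The 39 m spacing follows from the sidelap geometry. A small sketch of that calculation, where the cross-track field of view is an assumption chosen to reproduce the numbers above rather than a measured ELPH 520 spec:

```python
import math

# Sidelap geometry behind the 39 m track spacing. The ~76 degree
# cross-track FOV is an assumption, not a measured camera spec.

def track_spacing(agl_m, fov_deg, sidelap):
    """Distance between adjacent flight tracks for a given
    fractional side overlap between photos."""
    footprint = 2 * agl_m * math.tan(math.radians(fov_deg) / 2)
    return footprint * (1 - sidelap)

# 100 m AGL with a ~76 degree FOV gives a ~156 m ground footprint;
# 75% sidelap then calls for ~39 m between tracks.
print(round(track_spacing(100, 76, 0.75)))  # 39
```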

Download Mission Planner Files.

The flights went extremely well.  Each flight was fully automatic.  The only human intervention was switching to AUTO mode while on the ground, and disabling the copter once it had landed.  Line of sight was maintained on the copter at all times and I was standing by to take control if necessary.  Please note that the KML files do not show height properly: the height above the ground is shown as height above sea level.  So the tracks are missing about 61 m of height in the Google Earth representation.
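One way to compensate for that KML height issue is to shift every altitude in the coordinates blocks by the ground elevation. A minimal sketch, where the 61 m offset is from the text but the regex over simple lon,lat,alt triples is my assumption about the file contents:

```python
import re

# Sketch: the logged heights are above ground level, but KML
# absolute altitude mode reads them as above sea level, so the
# tracks sit ~61 m too low in Google Earth. Adding the ground
# elevation to every altitude compensates. The regex assumes
# plain lon,lat,alt triples inside <coordinates> blocks.

GROUND_ELEV_M = 61.0

def shift_kml_altitudes(kml_text, offset_m=GROUND_ELEV_M):
    def bump(match):
        lon, lat, alt = match.group(0).split(",")
        return "%s,%s,%.2f" % (lon, lat, float(alt) + offset_m)
    return re.sub(r"-?\d+\.\d+,-?\d+\.\d+,-?\d+\.?\d*", bump, kml_text)

sample = "<coordinates>-76.7060103,39.2553516,100.0</coordinates>"
print(shift_kml_altitudes(sample))
# <coordinates>-76.7060103,39.2553516,161.00</coordinates>
```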

Download the Google Earth File Shown Above.

Download the Raw Dataflash and Tlogs.

This is an example of a typical image captured by the camera.  In this photoset, I noticed that my pictures were somewhat motion-blurred, likely because the overcast lighting conditions triggered longer exposure times.  In bright sunlight, images are usually sharper in my experience.  Solutions include better vibration damping for the camera, and using a higher-quality camera.

In total, 5443 useful pictures were used in the scan.  Additional pictures from the ascent, landing, and going to and from home were discarded.


The Workflow:

The photos were georeferenced using a custom Python script and run through Agisoft Photoscan to produce 3D models.

1.  I manually discarded extraneous photos.  Due to the camera setup with the shutter button held down (for maximum fps), pictures were taken for the entire duration of all three flights.  I trimmed out the pictures from the takeoffs, landings, and going to and from the first and last waypoint.  This leaves only the pictures taken along the vertical tracks and the short horizontal connecting tracks.

2.  I batch renamed the photos to 0001 through 5443.
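A batch rename like that can be scripted in a few lines. This sketch assumes the camera's default IMG_xxxx.JPG names sort in capture order, which held for my sequential files:

```python
import os

# Sketch of the zero-padded batch rename in step 2. Assumes the
# camera's default IMG_xxxx.JPG names sort in capture order.

def batch_rename(folder):
    names = sorted(n for n in os.listdir(folder)
                   if n.upper().endswith(".JPG"))
    for i, name in enumerate(names, start=1):
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, "%04d.JPG" % i))
```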

3.  I used Python to convert my telemetry files into a text file with only GPS coordinates, altitude, and waypoint flags.  I downloaded the Ecosynth Aerial Pipeline, ran start_windows.bat, clicked Point Cloud Pre-Processing, clicked the telemetry conversion script, and ran my log files through it.

4.  I then manually trimmed my text files down to only the start and end of the tracks.  I did this using the waypoint flags and by double checking the GPS coordinates in Google Earth to make sure they were right on top of the places where my tracks start and stop.  I then deleted the waypoint flags leaving only GPS coordinates and altitudes.  It looked like this:

39.2553516 -76.7060103 100
39.2553536 -76.7060059 99.97
39.2553558 -76.706002 99.96
39.2553587 -76.7059985 99.97
39.2553619 -76.7059957 99.99
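The manual trim in step 4 could be scripted along these lines. The flag column and its "WP" value are a hypothetical format for illustration; the real output of the Ecosynth telemetry conversion may look different:

```python
# Sketch of the step-4 trim, assuming a hypothetical line format
# where waypoint events carry a trailing "WP" flag and the survey
# tracks are bracketed by the first and last waypoint hit.

def trim_tracks(lines):
    """Keep only fixes between the first and last waypoint flag,
    then strip the flag column, leaving 'lat lon alt'."""
    flagged = [i for i, ln in enumerate(lines) if ln.endswith("WP")]
    if len(flagged) < 2:
        return []
    kept = lines[flagged[0]:flagged[-1] + 1]
    return [" ".join(ln.split()[:3]) for ln in kept]

log = [
    "39.2550 -76.7065 2.1",        # climbing out, discard
    "39.2553 -76.7060 100.0 WP",   # first survey waypoint
    "39.2554 -76.7059 99.9",
    "39.2560 -76.7050 100.1 WP",   # last survey waypoint
    "39.2561 -76.7049 50.0",       # descending, discard
]
for ln in trim_tracks(log):
    print(ln)
```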

5.  Next I ran another Python script to assign GPS coordinates to each picture.  I made a folder for each flight, put the pictures into their corresponding folders, added the appropriate text file to each folder, and ran the script.  Please note that I believe this file only works for our 2 FPS Canon ELPH 520 setup.  It looked like this:

# <label> <x> <y> <z>
IMG_0001.JPG 39.2553536 -76.7060059 99.97
IMG_0002.JPG 39.2553587 -76.7059985 99.97
IMG_0003.JPG 39.2553653 -76.7059932 99.99
IMG_0004.JPG 39.2553733 -76.7059892 100.01
IMG_0005.JPG 39.2553829 -76.7059861 100.02
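A minimal sketch of how such a script can pair fixes with frames, assuming evenly spaced records: the camera fires at 2 frames/second and the flight log fixes arrive at some higher rate, so each image maps to the nearest-in-time fix. The rates and the linear mapping are my assumptions; the lab's actual script may differ.

```python
# Sketch: match 2 FPS images to GPS fixes by time index. The
# cam_fps and gps_hz values are assumptions for illustration.

def tag_images(fixes, n_images, cam_fps=2.0, gps_hz=5.0):
    """fixes: list of 'lat lon alt' strings in time order.
    Returns Photoscan ground-control lines, one per image."""
    out = []
    for i in range(n_images):
        t = i / cam_fps                       # seconds since first frame
        j = min(int(round(t * gps_hz)), len(fixes) - 1)
        out.append("IMG_%04d.JPG %s" % (i + 1, fixes[j]))
    return out

fixes = ["39.2553516 -76.7060103 100",
         "39.2553536 -76.7060059 99.97",
         "39.2553558 -76.7060020 99.96",
         "39.2553587 -76.7059985 99.97",
         "39.2553619 -76.7059957 99.99"]
for line in tag_images(fixes, n_images=2):
    print(line)
```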

6.  I then merged the resulting three text files into one large file for Photoscan ground control, and moved all the photos back into one large folder together.

7.  I added my photos to Photoscan, and used the ground control screen to import my GPS coordinates and heights.  I left the accuracy at 10m.

8.  I ran Photoscan!  Everything after this is just simple use of Photoscan according to the manual.  

9.  After the point cloud was processed, I ran both a height map and an arbitrary geometry (true 3D) mesh model.  Both models were very large, about 16 GB each.  I made several decimated and textured models for export.

10.  I exported directly from Photoscan to Sketchfab.  I also made some .ply files, as well as orthophotos.


The Goods:

Here's a small version of the orthophoto.  The full resolution version is 0.03 m resolution, meaning each pixel represents 3 centimeters.  I've never been super excited about orthophotos since I work mainly in 3D, but this was easy to make so I figured why not include it.
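As a back-of-envelope check on that resolution, the ground sample distance of a single pixel follows from altitude, sensor size, and focal length. The sensor and lens numbers below are nominal specs for a small 1/2.3" compact like the ELPH 520, used here as assumptions; they land in the same ballpark as the exported 3 cm orthophoto.

```python
# Back-of-envelope ground sample distance (GSD) check. Sensor
# width, image width, and focal length are assumed nominal specs,
# not measured values.

def ground_sample_distance(agl_m, sensor_width_mm, image_width_px,
                           focal_mm):
    """Ground footprint of one pixel, in meters."""
    pixel_mm = sensor_width_mm / image_width_px
    return agl_m * pixel_mm / focal_mm

gsd = ground_sample_distance(agl_m=100, sensor_width_mm=6.17,
                             image_width_px=3648, focal_mm=4.3)
print(round(gsd * 100, 1), "cm per pixel")  # roughly 4 cm
```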

Download the Full Resolution Orthophoto as Shown Above.

This is the sparse point cloud from Photoscan.  Point clouds are our bread and butter in our lab.  I think this one turned out rather nicely.

Download the Sparse Point Cloud (.ply) as Shown Above.

Download the Sparse Point Cloud (.las) as Shown Above.

Once you zoom in, you can see why it's called a sparse point cloud.  This is the same cloud as the previous image, but cropped down to just the library and zoomed in.  Obviously, roofs and lawns get a lot more points than the sides of buildings.

Here's that same view of the library, but with the dense point cloud.  A lot nicer!  I processed only the library at dense quality, because dense-cloud processing time is prohibitively long.  But I feel it needs mentioning: the entire campus could be processed to this level of detail given enough patience and a supercomputer.  Notice how the points on the tan roofs and the grass are so dense as to look like a solid, but the white roofs and the sides of the building are not as dense.

Download the Dense Point Cloud (.ply) as Shown Above.

Download the Dense Point Cloud (.las) as Shown Above.

Now it's time for some 3D meshes!  Obviously, the raw mesh product is a prohibitively large file.  So I performed decimations and cropped to smaller areas.  I have Sketchfab pages and .ply files!

This is the full campus!  It had to be decimated pretty heavily to fit onto Sketchfab.

Click Through to Sketchfab.

Download the .ply file.

Apartments.  Nice textures, looks picturesque when cropped to a circle.

Click Through to Sketchfab.

Download the .ply file.

The UMBC Physics building.  Lots of windows and structures on the roof.

Click Through to Sketchfab.

Download the .ply file.

Another nice one.  These apartments were easy work for Photoscan.

Click Through to Sketchfab.

Download the .ply file.

The Administration building.  One of the tallest buildings on campus, but from the air it doesn't look so big!

Click Through to Sketchfab.

Download the .ply file.

If you zoom in on the parking garage, you will see that a couple of cars look transparent and ghostly.  This is because those cars either left or pulled in between the copter's multiple passes.

Click Through to Sketchfab.

Download the .ply file.


The following series of images shows progressively more complex representations of the 3D model available from Photoscan.  They're from my most complex model, screenshotted straight out of Photoscan.  This model is so big that I cannot open it in any external program without decimating it first.


This is just the sparse point cloud, the most basic representation.  Notice some surfaces have no or few points, like the sidewalks and some roofs.  This is because plain white objects have few identifiable features.

This is the wire mesh representation.  I had to zoom in to make the individual polygons visible.

Now we have the solid mesh.  It is like the wire mesh, but with each polygon filled in.  This representation is good for examining the shape of your model without visual clues in the texture changing how the shapes appear.

Next is the shaded solid!  Photoscan assigns each polygon a diffuse color.  Since the polygons in this model are so small, this gives a decent representation.

The final textured model.  This is as realistic looking as it gets, for this scan.


And finally, I'd like to show some screenshots from the high quality model:


All in all, this project was a cool experience and I'm glad the T3 contest prompted me to do it.  I definitely learned a few things:

  • Plain white roofs make poor reconstructions, because they have very few identifiable features.  If you are trying to capture a bright white roof, try dialing down the camera's exposure in an attempt to make the roof less washed out.
  • If you are trying to accurately capture the texture on the sides of buildings, top down photos won't cut it.  Even though the camera has a wide field of view, all of the sides of buildings are photographed from a high angle.  Additional pictures from the sides of buildings will give you better side textures.
  • If you have enough pictures, moving objects simply disappear.  There was light foot traffic on campus during this scan, but in the 3D models the campus looks like a ghost town.
  • Tall thin objects like lamp posts cannot be captured from 100m up using this camera.
  • It would be a lot easier to tag these photos automatically using APM.  Unfortunately, this camera has no CHDK so I would need a servo to press the shutter, which is complicated.
  • And a bunch more I'll add if I can think of it!

Credit also goes to Jonathan Dandois for helping me with georeferencing the photos.


Comment by Gerard Toonstra on February 6, 2014 at 6:07am

Wow, this is impressive. So the "final textured model" is the model with the original photos applied as texture fragments on the final mesh?

What tool did you use to go from the point cloud representation to the mesh model? (poisson surface reconstruction?). 

Apparently Menci makes a streaming point cloud viewer. You can get a trial version at:


Would be interesting to see how that performs on your data.

Comment by Stephen Gienow on February 6, 2014 at 8:25am

Hi Gerard.  Everything you see is from inside of Photoscan.  I'm more of an aircraft guy than a photogrammetry guy, so sorry that's all I can tell you.  I'll check out the link when I get a chance.

Comment by Thorsten on February 16, 2014 at 2:36pm

Hi Stephen, as mentioned over at DIYDrones your work is really impressive! 

What was the overall flight time?

It would be interesting to see a comparison - not for the whole campus for sure but for some subsets - between Photoscan and Ecosynther. Maybe one forest area and a building. 

Comment by Erle Ellis on February 19, 2014 at 8:45am

That is a great idea- Stephen- are you up to try this out?  Could do as part of Ecosynth project activities...

Comment by Thorsten on February 20, 2014 at 5:46am

I suggested it over at DIYDrones, but got no response so far: It would be good to have a repository of free image datasets to compare algorithms and techniques. Maybe ecosynth.org is a good place for hosting such a repository?

Comment by Will Bierbower on February 24, 2014 at 3:30pm

Hi Thorsten, we've put up an 'acquisition' dataset that we've been using to test Ecosynther over on our data.ecosynth.org wiki here.  The dataset includes 511 images at over 1.3 GB, so it may take awhile to download from our server.  We're currently exploring other options for how we might share such data.  Thanks for the idea!

Comment by Gerard Toonstra on February 24, 2014 at 4:10pm

Hi Will,

The first line in the positions file, does that correspond to IMG_0027.JPG?

Comment by Will Bierbower on February 24, 2014 at 4:16pm

Hi Gerard, no those are just the GPS positions filtered from the copter's flight log file.  Our current setup uses the GPS data from the copter and just a standard camera.  However, the order of the GPS points in that file and the numbered images are correlated with respect to time.

Comment by Gerard Toonstra on February 24, 2014 at 4:25pm

Gotcha. I interpret this to mean the time in the photo and the actual GPS time.

One comment on the log file there. I notice CTUN and NTUN are left on. These consume quite a bit of bandwidth and can be turned off: http://code.google.com/p/arducopter/wiki/AC2_Datalogging

You can also turn on the "A9 relay" messages (enable cam) when these are triggered so you get "CAM" messages with position: http://diydrones.com/forum/topics/camera-triggering-message-through...

Of course, that's of no use if you use an interval based trigger. But anyway, thought I should mention it.

Comment by Will Bierbower on February 24, 2014 at 4:43pm

Yes, sorry, just to clarify, the GPS points and cameras photos are both ordered by the time they were taken. And they were captured at regular but different intervals of time.  I can try to get back to you on what those intervals were.

And thank you!  I'll be sure to pass that along.




© 2019   Created by Erle Ellis. Content is made available under CC BY 4.0.