I have put together this discussion thread to aid in planning Ecosynth work in Panama. Some of our discussions have happened on email, but I am going to try to start answering and posing all questions on this forum.
Helene had a few specific questions recently that I want to try to address:
1. Platforms: We think an octocopter (an 8-propeller multirotor) is best suited for the size of area you are interested in flying (50 ha in one mission). We are preparing a common parts list / budget for such a device now. We have a setup using the Arducopter system and several batteries that can fly for about 30 minutes.
2. Launch: It is best to launch from as close to the mission area as possible, ideally less than 100 m from the area of interest (AOI). One thing we have done in the past in closed-canopy forests is to launch from inside the forest near a gap, fly horizontally into the gap, then vertically up and out for the mission: see a video of yours truly doing that here.
3. Climate: The climate in Panama can be especially hard on electronics, including batteries. This is really new territory for us. Some of the most moisture-sensitive components are the electronic speed controllers (ESCs), and failures there can quickly lead to crashes. It will be worth testing 'conformal sprays' for waterproofing these sensitive electronics. Links to some examples: PDF guide for protecting Mikrokopter-style ESCs, forum chats about conformal spray, and a product link to All-Spec TechSpray.
As for the LiPo batteries, my biggest concern would be corrosion of the 'Deans Plug' terminals. The LiPos themselves are tightly sealed, but the terminals are exposed. A protocol might need to be developed to weatherproof the gear and prevent corrosion in a salty, humid climate.
You reported winds at around 15 kph (~9 mph), which should be OK for most normal flying but is right on the edge of our normal operating margin. We have been able to complete our 15-minute, 250 m x 250 m mission in winds up to 24 kph (15 mph), but in general such conditions should be avoided.
Now, I have some questions!
What is the terrain like in the AOI?
Do you have existing understory digital terrain models for this location? For example, as would be obtained from LIDAR or a ground survey?
Can you venture a guess at what the wind speed is like above the canopy, say 100-200 m up? It will be necessary to fly higher over the forest to capture the entire AOI in one flight, compared to the 40 m above the canopy at which we fly in Maryland.
Thanks for your thoughts on this.
The wind speeds are lower than 15 kph most of the year. In the wet season (May to December), the mean daily wind speed is less than 5 kph. In the dry season (January to April) the monthly average is also only 10 kph, but the problem is that there is little wind at night and a lot of wind during the middle of the day, so I believe at that time of year there might be a fairly narrow window in the early morning when one could take aerial photos without worrying about high winds.
The windspeed data I'm citing above are taken at 48 m above ground, and just a few m above the top of the canopy on a fairly large tower. There is another dataset taken higher above the ground and higher above the canopy and closer to our site; I will ask for a copy of this so we can take a closer look at the wind patterns. There are wind numbers for 40 and 48 m at the old site, if that helps in terms of extrapolating to winds at 100 m elevation.
The topography on the 50 ha plot itself is quite gentle. You can see a topo map at
We have 5-m resolution topo data for the 50 ha plot, and coarser resolution for the whole island from a ground survey that we can easily send you. There are also LIDAR data available for the whole island, and one of those datasets is freely available.
Some questions for you -
What is the flight time and flying speed of the Arducopter you propose to use? Is that flight time adequate to obtain 10 cm resolution data of the entire 50 ha plot in one flight, with some room to spare?
Do you have specific recommendations yet in terms of camera, sensors, and software for processing images (I realize this may take some time to work out)? Do you need additional info from us to move forward on those?
Thanks for your assistance!
Right now we are proposing an octocopter (8-propeller) assembly that combines a frame and motors from a Mikrokopter-brand system with Arducopter flight electronics. Based on our current work, this allows flights of up to around 30 minutes, or roughly 8500 m flying distance at 5 m/s.
The proposed flight plan would be to use this 'Octo', carrying four 5000 mAh LiPo batteries, at a minimum height of around 150 m above the 'maximum canopy height' - in other words, the tallest thing within the AOI. For the 1000 m x 500 m AOI, this would involve a flight distance of around 7500 m and would cover the entire area with images containing 50% side overlap. This includes flying over a buffer area around all sides to make sure there are no edge effects at the actual AOI.
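The coverage arithmetic behind that estimate can be sketched as a simple back-and-forth ('lawnmower') survey calculation. The 160 m swath and the straight-crossover turn approximation here are illustrative assumptions, not our measured numbers:

```python
import math

def lawnmower_distance(aoi_length_m, aoi_width_m, swath_m, side_overlap):
    """Total path length for a back-and-forth survey of a rectangular AOI.

    Track spacing is the image swath reduced by the desired side overlap;
    turns between legs are approximated as straight crossovers.
    """
    spacing = swath_m * (1.0 - side_overlap)
    n_legs = max(2, math.ceil(aoi_width_m / spacing))
    return n_legs * aoi_length_m + (n_legs - 1) * spacing

# Hypothetical numbers for the 1000 m x 500 m AOI: an assumed ~160 m image
# swath with 50% side overlap gives ~80 m track spacing and 7 legs.
total = lawnmower_distance(1000, 500, 160, 0.5)   # ~7480 m
```

That lands near the 7500 m figure above, with the remaining endurance used for climb, descent, and fighting wind.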
For sensors, we have been using a Canon SD4000 point and shoot camera and recently a Canon ELPH 520 HS. These shoot at around 2 frames per second at 10 MP. At a flying altitude of 150 m we can estimate the ground sampling distance of each pixel at around 7.3 cm x 7.3 cm and we would take about 3000 photos of the scene.
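For reference, the ground sampling distance (GSD) follows directly from the camera geometry. The sensor width, pixel count, and focal length below are illustrative values for a small-sensor compact camera, not confirmed SD4000 specs, so the result differs somewhat from the 7.3 cm figure above:

```python
def ground_sampling_distance(altitude_m, sensor_width_mm, focal_length_mm,
                             image_width_px):
    """Ground footprint of one pixel (in meters) for a nadir-pointing camera."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

# Assumed 1/2.3" sensor: ~6.17 mm wide, 3648 px across, ~4.9 mm focal length.
gsd = ground_sampling_distance(150, 6.17, 4.9, 3648)   # roughly 5 cm/pixel
```

The exact GSD depends on the lens; a wider field of view pushes the footprint toward the 7.3 cm estimate.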
For processing, we have used several structure-from-motion packages, including Bundler, VSFM, and Photoscan. Photoscan will produce results fastest of the three, probably taking an estimated 2-5 days on a high-end workstation with 24-48 cores and at least 24 GB of RAM. Photoscan has some recommendations for computer power based on the number of photos: http://www.agisoft.ru/wiki/PhotoScan/Tips_and_Tricks#Memory_Require...
Photoscan can then be used to produce an ortho mosaic, but with this many photos I do not have a good estimate of processing time, perhaps 4 days or more per scene with that workstation.
We have developed an open software package in Python for processing the 3D point cloud into georeferenced LIDAR-like models and gridded maps of height, density, and even color at a user-specified grid size. It runs best in Linux, and I would estimate about a day of computer time to produce fine-resolution models at the 10 cm pixel size.
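As a minimal sketch of the gridding idea (not the actual package), binning points onto a regular grid and keeping the maximum height per cell yields a simple canopy-height-style raster:

```python
import math

def grid_max_height(points, cell_size):
    """Bin (x, y, z) points onto a regular grid, keeping the maximum z per
    cell -- a bare-bones canopy-height raster like the one described above."""
    grid = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        if key not in grid or z > grid[key]:
            grid[key] = z
    return grid

pts = [(0.2, 0.1, 5.0), (0.3, 0.2, 7.0), (1.6, 0.1, 2.0)]
chm = grid_max_height(pts, 1.0)   # 1 m cells: {(0, 0): 7.0, (1, 0): 2.0}
```

The same binning pass can keep per-cell point counts for density or average RGB for color maps.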
Our primary undergraduate mechanical engineering student, Stephen, is working out a parts-list and budget and we plan to get you something early next week.
Hmm, having a system that can do 8500 m flying distance for a flight plan that would take 7500 m sounds to me like it is cutting things pretty close, especially as battery life degrades over time. And do winds cut the flight time? Is there a good option that would be able to do about 2x as much flying?
So you can process a 3D point cloud even with just 50% side overlap? I thought I had heard that every point had to appear in 3 images to generate these point clouds. Maybe I don't understand the term "50% side overlap", though. What exactly does that mean?
The camera and processing plan sound good. Neat that you have an open software package for processing the 3D point cloud.
I agree that this flight plan is cutting it close. Our engineering students have been looking at other options for increasing flight time, including longer arms and bigger props. From what we have read from other bloggers online, this can lead to substantial improvements in flight time. Wind will indeed increase flight time, as it makes it more difficult for the copter to reach its waypoints.
As for the overlap, we use the terms side and forward overlap. Side overlap refers to how much overlap there is between photos from parallel flight tracks. Forward overlap refers to the overlap between subsequent photos in the sequence. At the 5 m/s speed of the UAVs and the roughly 2 fps framerate of the camera, we get greater than 90% forward overlap all the time, except at altitudes below around 20 m. Side overlap is determined by the flight plan. In our experience, the 3D reconstruction can work with low amounts of side overlap, even less than 40%, but this comes at the increased risk that small amounts of wind or variation in the flight path can lead to gaps in coverage. We have not done comprehensive testing on the effects of such parameters on reconstruction, however.
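The forward-overlap figures follow directly from the exposure spacing and the image footprint. The along-track footprint lengths below are illustrative assumptions, not measured values:

```python
def forward_overlap(speed_ms, fps, footprint_along_track_m):
    """Fraction of each image that overlaps the next one in the sequence."""
    spacing = speed_ms / fps              # ground distance between exposures
    return max(0.0, 1.0 - spacing / footprint_along_track_m)

# At 5 m/s and 2 fps the exposures are 2.5 m apart on the ground.
high = forward_overlap(5.0, 2.0, 100.0)  # assumed ~100 m footprint: 0.975
low = forward_overlap(5.0, 2.0, 13.0)    # assumed ~13 m footprint near 20 m altitude
```

This is why forward overlap is essentially free at survey altitudes but collapses at very low ones: the footprint shrinks with altitude while the exposure spacing stays fixed.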
I have read your correspondence with Helene and appreciate you setting up this site so we can all follow the conversation. I have a few questions.
Navigation: What system do you use for navigation, and how do you prepare flight plans for different weather conditions?
Image processing: I saw your information about image processing. Do you do that, or do we? Does your system require that ground control points be set up? Georeferenced, mosaicked images within 1-2 meters accuracy would be ideal, especially because we want repeated images in which we can track individual crowns linked to data on the tagged tree stems. What horizontal accuracy do you expect?
Glad you can join in the conversation.
For navigation, the systems use the built-in autopilot and fly across the study area along a predetermined flight path. In the past we have used the Mikrokopter brand of UAVs, but more recently we have switched to the Arducopter system. We find the Arducopter's path-based route following keeps it on track much better than the Mikrokopter's, which is more point-based; this allows the Arducopter to stay on path more consistently under windy conditions. We typically plan out the flight paths in ArcGIS and then export them as a list of waypoints to be imported into either the Mikrokopter or Arducopter mission control programs.
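The ArcGIS-to-mission-control handoff is essentially a flat waypoint list; a minimal sketch of that export step might look like the following. The column layout and the coordinates are hypothetical, and each mission-control program has its own import format:

```python
import csv

def write_waypoints(path, waypoints):
    """Write (lat, lon, alt_m) tuples to a simple CSV for import into a
    mission-planning program (exact column layout varies by program)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["lat", "lon", "alt_m"])
        writer.writerows(waypoints)

# Two illustrative waypoints (hypothetical coordinates and altitude).
write_waypoints("mission.csv", [(9.1521, -79.8465, 190.0),
                                (9.1530, -79.8465, 190.0)])
```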
For image processing, we do not offer a processing service at this time. We use the Agisoft Photoscan software for its speed and ease of use in making 3D reconstructions and ortho-mosaics. We have two approaches to georeferencing the 3D model data, one with and one without ground control. We achieve accuracies in the 1-2 m horizontal RMSE range when ground control is used, but in the 3-5 m horizontal RMSE range when it is not.
Perhaps if a permanent plot is to be established for repeated collection, it would be possible to set up a small number of ground control points (for example 5 distributed across the scene as on a die, one in each corner and one in the center) to aid in georeferencing down to the 1-2m horizontal accuracy range.
I'm hoping we can use individual dead trees or the like as effective ground control points. If we identify some candidate dead trees on the images, distributed as you suggest (or more), we could go out in the field and get precise coordinates for them based on the plot grid. I say dead trees because I expect these would sway less in the wind than live trees and have more distinct, consistent points to identify. The ground grid is mapped out every 5 m; I'm not sure of the error in that, but to some degree that would be beside the point, as we are interested in mapping the crowns to the ground grid for association with tree stems already mapped to that grid. Of course dead trees do not last forever, so we might need to establish some new points once or twice a year, but I think this would be a lot easier than trying to maintain ground control points. Does this sound feasible?
I'm the lab technician here working on Ecosynth, and I handle the vast majority of the point cloud/image processing we do in the labs.
We've processed flights ranging from 600 to 2400 photos at 10 MP resolution. Our lab workstation has 192 GB of RAM, 2x Xeon X5675 CPUs, and an NVIDIA GTX 580 GPU. The current method we've been using has been to construct a point cloud at the highest setting and then build 3D geometry for ortho generation at a lower setting. It is also important to note that we are using an older version of Photoscan: version 0.8.4, build 1289.
The time it takes to reconstruct a point cloud depends strongly on the number of photos. With 600 10 MP photos it takes 4 hours or less; with 2200 photos it takes 3 days or so on our workstation.
The parameters we use for geometry building are displayed on the left. I have yet to try geometry building with a 2000+ photoset. The largest I have ever tried it on was a set of 1450 photos and it took maybe 10 hours or so to build the scene geometry.
I have uploaded an orthophoto to our FTP server here. I've also attached the ortho as a .kmz file in case you can't view the TIFF. The orthophoto was created using the settings on the left with 800 images. I have no idea how dramatic the difference would be between what we create and what you would get with the highest settings, but I think some compromise will be needed.
I did once experiment with building geometry at the highest possible settings on a 1400-photo dataset. After a day of running, the job was still at 1% complete.
This would have been at about 30 m above the average maximum canopy height for the scene. I'll let Andrew comment on the Photoscan settings. We noted recently that scans which ran OK in 0.8.4 had reconstruction errors with the latest build, 0.9.0.
As for the endurance question from earlier: we have had success with reconstructions using an aerial platform down to about 20 m altitude. Any lower than that, and the speed of the camera and UAV are not enough to achieve the forward overlap needed for the automated reconstruction, which seems to be at least 75% or so, though we have not fully tested that. That altitude is simply not an option for these large areas.
I have been putting together estimates based on our best understanding of the equipment we are most familiar with. It will almost certainly require some testing on site to get the flight-plan parameters just right for the project goals. I don't think it would be unreasonable to fly an Octo in two adjacent missions (2x 1000 m x 250 m) and then supply all of the images to the photo-processing software. Even with that plan, the entire scene could probably be imaged in under 35 minutes, taking into account time to land, swap batteries, and re-launch. We did this to cover a 10 ha area of about 400 m x 300 m (10 ha plus overlap). I think we could probably optimize the flight plan to around 21 minutes for a single acquisition, which would leave about 9 minutes of safe time. This would be at an altitude of about 160-170 m above the ground, with resolutions in the 5 cm x 5 cm range.
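The timing behind the two-mission option can be roughed out from path length and cruise speed. The per-mission path length and the overhead allowance below are assumptions for illustration, not measured figures:

```python
def mission_minutes(path_length_m, speed_ms, overhead_min=0.0):
    """Rough airborne time for one survey mission: path length at cruise
    speed, plus a fixed allowance for climb, descent, and turns."""
    return path_length_m / speed_ms / 60.0 + overhead_min

# Assume the 1000 m x 500 m AOI splits into two ~3750 m missions flown at
# 5 m/s, each with ~4 minutes of overhead.
per_mission = mission_minutes(3750, 5.0, overhead_min=4.0)   # 16.5 minutes
```

Two of those plus a battery swap lands in the same ballpark as the 35-minute estimate above.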
What kind of temps are we talking about? At temps around 38 C (101 F) we have experienced problems with the cameras overheating and shutting down. As for longevity in humid, salty conditions - that is new for us. As I noted at the start of this thread, we will likely want to look at using conformal spray to waterproof exposed electronics - but we have never done that.
Air temperatures don't get terribly high in Panama. Monthly mean high temperatures range from 29.7 to 30.7 C. Of course, items in the sun can heat up quite a bit above air temperature - but I assume the UAV would stay pretty close to air temperature because it would be air-cooled by its movement, right?
I'm not sure what kind of cooling we get from the movement. We had a few very high-humidity, high-temperature days last summer when it was about 38-39 C with the heat index. Temps in the 30 C range should be OK, and I cannot recall a time here when flying in those temps caused a problem.
In my experience, medium quality does appear to be the best tradeoff for geometry building. As I said above, I tried 'Ultra' quality and it basically didn't do anything after a day. I have experimented with high, but mainly to see how pretty a 3D model and how large a file I could get. I never generated orthos from high.
I can do some test runs this weekend with our 1450 photo set and see how long it takes to generate a 3D model at high quality, and create a new ortho. That might give us a better measure of the quality vs processing time tradeoff.