Hi all,

I've mapped out some regions here and there and noticed that buildings come out nearly perfect, but vegetation is usually rather sketchy because of the inherent complexity of trees and branches, which gets worse in higher winds.

The Ecosynth browser, however, was designed to view point clouds of forests. So my question is: what do you use to create point clouds of forests? I've submitted batches of photos to commercial (online) tools, but these make assumptions about how the terrain is mapped. If foliage is sparse, for example, you get huge differences in how features are recognized.


Replies to This Discussion

Hi Gerard, 

We use a number of applications and recently released our own Amazon EC2 instance of the free and open source Ecosynther package, our GPU-enhanced implementation of structure from motion for generating 3D point clouds from photos. It was developed as a free and open source alternative to the popular Photoscan software.

The original blog post is here: http://ecosynth.org/profiles/blogs/ecosynther-v0-7

While the code is available as source, we highly recommend trying the EC2 instance first, as we have had lots of problems with users trying to configure their CUDA drivers. For a small number of runs, EC2 can be a cost-effective alternative to buying a high-end graphics workstation.

Jonathan

Hi John,

Ok, I thought it was only a point cloud viewer; I didn't know it did the SfM as well. In these SfM processes, trees are really difficult to get right because they are so much more complex than buildings. Does the software specifically focus on 3D vegetation reconstruction?

So far I've used the commercial offerings, like Pix4UAV, and had a look at Menci. So if Ecosynth has some special processing for trees, I'd really like to give that a try.

That is a great question, Gerard.  While the Ecosynther SfM tool has been tested on aerial image sets of forests, I can't say there is anything in the algorithm specifically tailored to vegetation.  We have found, though, that it works well for capturing forest and tree 3D structure for the purpose of taking measurements of the trees.  What do you mean by 'really difficult to get right'?

Error on my part: I'm used to looking at reconstructed surface meshes with Poisson surface reconstruction applied, not the bare point clouds. I'll have to inspect my own source point clouds to see how the trees look there. I was just thinking one step too far.
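For reference, that post-processing step looks roughly like the following. This is a minimal sketch assuming the Open3D Python library and a hypothetical tree_scan.ply export from the SfM run, not part of any particular pipeline:

import numpy as np
import open3d as o3d

# Load the bare SfM point cloud (hypothetical file name)
pcd = o3d.io.read_point_cloud("tree_scan.ply")

# Poisson reconstruction needs oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

# Fit a surface; higher depth gives a finer (and noisier) mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trim vertices supported by very few points (typical for sparse canopy)
mesh.remove_vertices_by_mask(np.asarray(densities) < np.quantile(densities, 0.05))
o3d.io.write_triangle_mesh("tree_mesh.ply", mesh)

On sparse canopy points a watertight Poisson surface tends to smooth over the gaps, which is part of why the mesh can look worse than the underlying cloud.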

It would be awesome, though, if a reconstruction algorithm could recognise vegetation and make an effort to reconstruct the forest ground separately.

Yeah, I think an algorithm that could generate a mesh from a point cloud of a tree in such a way that the tree 'looks right' would be very interesting indeed.  But I think there is a big difference between modeling a tree from a point cloud and modeling a building, or anything smooth really, like rocks or grassy fields.

I have seen some researchers use the SfM point cloud model of the tree as a hull within which a 'tree-growing' algorithm fills out a modeled tree shape.  Also, groups in Finland have had great success modeling the 3D geometry of single trees from LIDAR point clouds.  Are you looking for something like the tree view in Google Earth?

What brought me here in the first place was the LE hexa, because of a potential collaboration with the environmental protection agency in this area. It has now progressed to the point where we're going to do a small, month-long collaborative project to analyse the utility of the LE hexa for three specific scenarios.

So I'm just trying to find ways to maximise value for that tiny project, because I think it's going to help them immensely with their activities.

This environmental protection unit covers all the protected woods in the state of Pernambuco. As you may know, some coastal forest regions are under stress due to growing populations near the seaside, but also industrial expansion and 'invaders' who build houses, clear trees for agriculture or illegal pot plantations, and dump sewage illegally. So there's lots of work to be done, like establishing protection regions to get better state support, and surveying and surveillance to spot illegal activities or measure their impact.

Just two weeks ago I was watching a video in which a luxury resort hotel was dumping its sewage into an area called a "manguezal", a mangrove swamp right behind the hotel with plenty of wildlife, but under stress since it's squeezed between the road, the beach and the ever-growing resort area.

Other protected areas get cut down due to resort expansion. Right behind those areas there is a sloth protection area. What happens is that houses in these resorts triple in value in 5-6 years, so the fines for illegal deforestation are simply absorbed into the construction budget.

Interesting, that project sounds like it could do more with 3D model analysis than just visualizations.  We put out a paper last summer where we used 3D point cloud data, in this case generated via Photoscan, to estimate metrics of canopy structure similar to what is done with LIDAR: http://www.sciencedirect.com/science/article/pii/S0034425713001326

All of our current tools and development are built around that work, automating the workflows to produce products and ultimately to get away from Photoscan.  So something like generating canopy digital surface models and analyzing change over time might be a great application for your purposes.
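As a rough illustration of the canopy-DSM idea (not the Ecosynth pipeline itself), here is a minimal sketch that grids a georeferenced point cloud, loaded as an (N, 3) NumPy array of x, y, z in metres, into a surface model by keeping the highest point per cell:

import numpy as np

def canopy_dsm(points, cell_size=1.0):
    """Grid an (N, 3) array of georeferenced points into a canopy
    digital surface model, keeping the highest point per cell."""
    xy_min = points[:, :2].min(axis=0)
    n_cells = np.ceil((points[:, :2].max(axis=0) - xy_min) / cell_size).astype(int) + 1
    dsm = np.full((n_cells[1], n_cells[0]), -np.inf)   # rows = y, cols = x
    ix = ((points[:, 0] - xy_min[0]) / cell_size).astype(int)
    iy = ((points[:, 1] - xy_min[1]) / cell_size).astype(int)
    np.maximum.at(dsm, (iy, ix), points[:, 2])
    dsm[np.isinf(dsm)] = np.nan                        # cells with no points
    return dsm

# Change over time (assuming both flights cover the same extent):
# delta = canopy_dsm(cloud_2014) - canopy_dsm(cloud_2013)

Subtracting two such grids from flights of the same area on different dates gives a simple canopy surface change map.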

Hmm... this is really interesting. There's a video from CloudCompare showing how to do a cloud-to-cloud comparison:

http://www.youtube.com/watch?v=MQiD4HjhpAU

Notice that in this case he captured point cloud data in two sessions and shovelled out some dirt in between. The difference point cloud clearly shows the changes that have taken place in the area. The application also lets you play with the scale, so that minute differences don't show up as much.

There are some challenges in lining up the point clouds, but I reckon that if you have data over there taken at different times, you could give it a shot with CloudCompare yourself and study the results. It would probably be a really interesting tool for understanding vegetation growth (or loss) between seasons or over time, and for identifying trees that may have been illegally removed or have fallen over.
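For anyone who prefers to script it, the core of a cloud-to-cloud comparison is just a nearest-neighbour distance from one epoch to the other. A minimal sketch, assuming SciPy and two clouds already registered in the same coordinate system:

import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(reference, compared):
    """Nearest-neighbour distance from each point in `compared`
    to the `reference` cloud; both are (N, 3) arrays. This is the
    same basic idea as CloudCompare's C2C distance."""
    tree = cKDTree(reference)
    distances, _ = tree.query(compared, k=1)
    return distances

# Example: flag points that appeared or moved by more than 0.5 m
# d = cloud_to_cloud_distance(cloud_2013, cloud_2014)
# changed = cloud_2014[d > 0.5]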

That is very cool.  I wonder if it is detecting change based on little 3D voxels?  I can see how this could be used to see areas of change in tree cover; I wonder whether it is possible to extract change data from this, for example the total area or volume of change.

I don't think you can measure volume in CloudCompare, but it should be able to output difference clouds. The results can then be analysed in Rapidform Explorer (now Geomagic). Of course, the clouds have to be georeferenced and in a known unit scale to get meaningful output.
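Picking up the voxel idea from above: a rough, back-of-the-envelope change-volume estimate can be scripted by voxelizing both epochs and counting voxels occupied in one but not the other. A minimal sketch, assuming georeferenced clouds in metres loaded as (N, 3) NumPy arrays:

import numpy as np

def occupied_voxels(points, voxel_size=0.5):
    """Set of voxel indices occupied by at least one point."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def change_volume(cloud_a, cloud_b, voxel_size=0.5):
    """Rough change volume: voxels occupied in one epoch but not the
    other, times the voxel volume (cubic metres)."""
    a = occupied_voxels(cloud_a, voxel_size)
    b = occupied_voxels(cloud_b, voxel_size)
    return len(a.symmetric_difference(b)) * voxel_size ** 3

The result is only as good as the registration and the point density, but it gives a first number for "how much canopy changed".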

http://www.rapidform.com/products/xov/explorer-free-viewer/

For CloudCompare, here's Daniel's page about it:

http://www.danielgm.net/cc/

CloudCompare is a 3D point cloud (and triangular mesh) processing software. It was originally designed to perform comparisons between two 3D point clouds (such as those obtained with a laser scanner) or between a point cloud and a triangular mesh. It relies on a specific octree structure that gives it great performance in this particular function. It was also designed to handle huge point clouds (typically more than 10 million points, and up to 120 million with 2 GB of memory).

More detailed info on "registration" of point clouds:

http://www.danielgm.net/cc/doc/wiki/index.php5?title=How_to_compare...

-----------------

There's another point cloud viewer I saw recently, by Menci, which uses a streaming method to access the point cloud. So if you ever face a huge point cloud, this should help to chunk it up:

http://www.menci.com/more-products-hidden/remote-sensing-cloud-view

--------------

I had a look at one of my point clouds and noticed that the tree canopy in it is really sparse. I basically get the top leaves or the floor, but not much of anything in between. I reckon this may be related to the algorithm requiring >=3 matches per point (i.e. the point has to be seen in three different photos). For trees, that may be too strict, given their complexity and how easily features get occluded.

There's another dataset of the same area built with other tools. I'll have a look tomorrow to see if that's any better.
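To put numbers on the "top leaves or the floor, but not much in between" impression when comparing the two datasets, a quick vertical profile of the cloud helps. A minimal sketch, assuming the cloud is loaded as an (N, 3) NumPy array with heights in metres:

import numpy as np

def vertical_profile(points, bin_size=1.0):
    """Point count per height bin, to show how sparse the mid-canopy
    is compared to the treetops and the ground."""
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=bins)
    for lo, hi, n in zip(edges[:-1], edges[1:], counts):
        print(f"{lo:7.1f} - {hi:7.1f} m: {n} points")
    return counts, edges

A cloud that only captures treetops and ground should show two spikes with a near-empty middle.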

I had another look at the standard Ecosynth browser video and I see a (relatively) good distribution of points around the canopy, branches and trunk of the tree there, so I'm using that as a reference for how well the software I'm using is doing. I reckon I still have the original dataset, so I'll look into accessing the EC2 service myself and see how it performs. If anything interesting comes out of that, I'll post the results in a blog post here showing the differences.

What kind of photographic overlap are you operating at?  Our forward overlap is >90%, and we try to keep >50% side overlap, commonly 75% if we can afford it.  The low match count may be an overlap effect: when the apparent angle between views of the same point on the canopy is too great, the matching algorithm cannot easily find and match the same points.  Increasing the overlap decreases the angle between two views and increases the chances of a match.

That probably did contribute a lot to this. Planar surfaces come out all right, but the general accuracy of the points isn't that great. I flew that flight with 60% forward overlap and 70% side overlap. What's your forward speed set to?  You're using APM, I think, so in m/s?
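Since forward overlap is set by the interplay of speed, altitude and trigger interval, here is a minimal sketch for working out the numbers. The focal length and sensor size below are made-up example values, not anyone's actual camera:

import math  # not strictly needed, kept for further geometry calcs

def forward_overlap(altitude_m, speed_ms, trigger_s,
                    focal_mm=5.0, sensor_along_track_mm=4.6):
    """Forward overlap fraction for nadir photos: distance flown
    between triggers vs. ground footprint along track."""
    footprint = altitude_m * sensor_along_track_mm / focal_mm
    baseline = speed_ms * trigger_s
    return 1.0 - baseline / footprint

def max_speed_for_overlap(target_overlap, altitude_m, trigger_s,
                          focal_mm=5.0, sensor_along_track_mm=4.6):
    """Fastest forward speed that still achieves the target overlap."""
    footprint = altitude_m * sensor_along_track_mm / focal_mm
    return (1.0 - target_overlap) * footprint / trigger_s

# With these example values, 90% forward overlap at 80 m altitude
# and a 2 s trigger interval allows roughly 3.7 m/s forward speed:
# max_speed_for_overlap(0.90, 80, 2.0)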
