Previously, I wrote about building a slippy map using orthorectified imagery from DroneDeploy. One of the nice features of DroneDeploy is that it will generate point clouds and textured 3D models from your photographs. It was a simple matter of downloading the model and uploading it to Sketchfab to share.

DroneDeploy

Scroll through the model. The planar textures are on point, but relying entirely on overhead imagery has distorted the vertical textures; the trees in particular look rather strange. This is a limitation of the technique rather than the platform, and DroneDeploy now allows us to incorporate oblique imagery. For future models I'll make sure to include additional oblique photos captured after the initial overflight.

DroneDeploy was simple to use. There was a pair of firmware updates to apply, and then it was a matter of plugging a tablet into the Phantom 3's controller, drawing out our coverage area, and pressing go. Seven minutes later we had high-resolution overhead imagery ready to upload for processing.

PhotoScan

This past May, Faine and I used her Phantom 2 to build a 3D model of my parents' place in Vermont. We did this manually, using a GoPro and our best guesses vis-à-vis overlapping flight paths. The process was haphazard at best, and not georeferenced. We took the resulting photographs and built a pair of 3D models: one in VisualSFM and Meshlab, and another in Agisoft PhotoScan. The results were interesting, with treetops suspended above the landscape and melting structures. For the sake of comparison, I wanted to run the DroneDeploy photographs through PhotoScan.

PhotoScan is a beast. It will happily eat all the processing power and RAM you throw at it. Last spring I stuck with low- and medium-resolution models, and was able to chew through a model on my lowly Mac Mini with 8 GB of RAM and Intel graphics in a couple of days. This time around I wanted to put the two services on even footing. I also wanted to familiarize myself with Amazon's EC2 offerings.

A major drawback of Agisoft PhotoScan Standard Edition is the lack of command-line tools; the only way to use the program is via the GUI. This wasn't a complete barrier to using it in the cloud, but it did complicate the process. I spun up an Ubuntu instance and installed XFCE4 and xrdp to give myself access to a remote desktop. From there I downloaded the photographs from Dropbox and installed PhotoScan.
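Getting a desktop onto the instance only takes a few commands. A rough sketch of the provisioning steps, assuming a stock Ubuntu AMI (package names may vary slightly by release):

```shell
# Provisioning sketch for a desktop-capable Ubuntu EC2 instance.
# xrdp is the RDP server; XFCE4 is a lightweight desktop environment.
sudo apt-get update
sudo apt-get install -y xfce4 xrdp
echo "xfce4-session" > ~/.xsession    # have xrdp launch XFCE on connect
sudo service xrdp restart
```

After that, any standard RDP client can connect to the instance's public address.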

EC2 Environment

A frustrating flaw in this approach became apparent later: the desktop session doesn't seem to persist between RDC connections. The terminal command top revealed that PhotoScan was still happily chewing away, but it wasn't something I cared to debug while I was burning dollars on processing. My bodged-together workaround was simple: process the model in batch mode and instruct PhotoScan to save upon completing each task. Every so often I would check the last-modified date of my save file via ls -l.
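That progress check reduces to a one-liner. A minimal sketch, assuming the batch job writes to a save file named project.psx (the filename is illustrative, and the touch is only a stand-in so the snippet runs anywhere):

```shell
# Report when PhotoScan last wrote its save file.
SAVEFILE=project.psx
touch "$SAVEFILE"                  # stand-in for the real save file
mtime=$(stat -c '%y' "$SAVEFILE")  # last-modified timestamp (GNU stat)
echo "last saved: $mtime"
```

If the timestamp keeps advancing between checks, the batch job is still making progress.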

PhotoScan made short work of photo alignment and sparse and dense cloud generation. It wasn't until it tried to build a mesh that the next problem revealed itself: RAM. PhotoScan is a memory hog, and I had set it to do everything at the highest possible resolution. I had deployed to Amazon's c3.4xlarge instance, which gave me 16 cores, 32 GB of RAM, and 2x160 GB of disk. Watching top, I would see PhotoScan slowly increase its share of memory until it hit 100% and the process was killed. Perhaps optimistically, I redeployed to a c3.8xlarge instance, with 32 cores, 64 GB of RAM, and 2x320 GB of disk, and saw the same result. Taking no chances, I decided to complete the project on one of Amazon's memory-optimized instances. The r3.8xlarge gave me 32 cores and a ridiculous 244 GB of RAM. It also costs $2.80/hr.

It worked. Five or so hours later I had a completed model, mesh, and textures, ready to push back up to Dropbox and down to my local machine: a 1.9 GB model with 31 million faces, 15 million vertices, and a lot of holes in the mesh.

Agisoft Complete

Agisoft includes tools for repairing the mesh, so I started there. Next, I decimated the mesh significantly, down to 2.1 million faces and 1 million vertices. After compressing the textures using pngquant, I had something akin to a usable, not to mention uploadable, model.
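The compression pass is just pngquant looped over the exported texture files. A sketch, assuming the textures were exported as texture_*.png (the filenames are illustrative):

```shell
# Quantize each texture to a 256-color palette, overwriting in place.
# --ext .png together with --force tells pngquant to replace the original file.
for tex in texture_*.png; do
  pngquant --force --ext .png 256 -- "$tex"
done
```

Palette quantization is lossy, but on photographic textures viewed from a distance the savings usually outweigh the visible difference.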

Takeaway

Where it didn't distort the surfaces, Agisoft appears to have done a slightly better job rendering vertical textures. It doesn't get the trees right either, although it produces a different set of errors. I suspect these differences lie in the modeling process: I'm guessing that DroneDeploy used a 2.5D height-field approach, whereas in Agisoft I selected a true 3D model. The inherent limitations of each are such that, without oblique imagery, they produce two different sets of errors.

In the end I think both have their use cases. With the right imagery, Agisoft can produce some incredibly high-resolution models; it was used, for example, to generate the base model and textures for the Citadel in Mad Max: Fury Road. DroneDeploy's primary focus is deploying your drone over a landscape and generating accurate high-resolution maps, NDVI analyses, and digital elevation models. The 3D modeling aspect is a nice addition, and whatever special sauce they're using to build a model without holes is definitely a big bonus. Ultimately it has a home in the toolbox no matter what your 3D modeling workflow is.

After I pay my Amazon bills, I plan to experiment with OpenDroneMap in the EC2 environment. Stay tuned.
