Microsoft Releases Photosynth: Just a Novelty?

Ever since Blaise Aguera y Arcas demoed Photosynth last year at the TED Conference, most people who saw it have wanted to get their hands on the software. The demo was, as the TED web site called it, simply jaw-dropping. Photosynth was able to take hundreds of photos of the Notre Dame Cathedral gathered from Flickr and use them to create a virtual 3D environment by recognizing specific aspects of the cathedral in each picture and how they related to the other photos. Each photo was placed in space at a position accurate to where it was taken, allowing viewers to zoom around the cathedral and get an accurate picture of what it looks like from all angles.
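The core idea can be sketched with a toy example in Python. Everything below is invented for illustration — the filenames and feature IDs are hypothetical string tags, whereas the real system matches photometric feature descriptors and reconstructs camera positions in 3D — but it shows the basic trick: index each photo by its detected features, then link photos that share enough of them.

```python
# Toy illustration (NOT Microsoft's actual pipeline): index photos by
# their detected features, then link photos that share enough features.
# Feature IDs and filenames here are invented stand-ins; real systems
# use SIFT-like descriptors and full 3D reconstruction.
from itertools import combinations

# Hypothetical features "detected" in five photos of a cathedral facade.
photos = {
    "front_left.jpg":  {"rose_window", "left_tower", "portal"},
    "front_right.jpg": {"rose_window", "right_tower", "portal"},
    "left_side.jpg":   {"left_tower", "buttress"},
    "right_side.jpg":  {"right_tower", "buttress"},
    "interior.jpg":    {"altar", "nave"},
}

def linked_pairs(photos, min_shared=1):
    """Return photo pairs that share at least min_shared features."""
    pairs = []
    for (a, feats_a), (b, feats_b) in combinations(photos.items(), 2):
        shared = feats_a & feats_b
        if len(shared) >= min_shared:
            pairs.append((a, b, shared))
    return pairs

# Photos linked by two or more common features are strong candidates
# for being placed next to each other in the reconstructed scene.
for a, b, shared in linked_pairs(photos, min_shared=2):
    print(f"{a} <-> {b} via {sorted(shared)}")
```

Run on the toy data, only the two front-facade shots link strongly (they share the rose window and the portal), while the interior shot links to nothing — which is roughly why Photosynth can cluster facade photos together while an unrelated photo floats off on its own.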

Last week, Microsoft released Photosynth as a free, Windows-only browser plugin and application, along with 20GB of free online storage to store your “synths.” The plugin supposedly runs on Firefox 3 and IE 7, but I could only get it working under Internet Explorer. Photosynth fits into Microsoft’s philosophy of software + services, in which desktop software is augmented by web-based components.

Once I got it working, Photosynth was exceptionally easy to use. Just choose your photos, add them to the Photosynth creator app, and click a button to make your synth. The process takes a few minutes and the photos are uploaded to your account on Photosynth.net. The software grades the “synthiness” of your creations — which I gather is a numerical representation of how well your photos cover the subject and how well they stitch together. My first attempt — the entire first floor of the SitePoint offices in about 40 pictures — was a complete failure, achieving just 32% “synthiness.” My second attempt, a depiction of just the break area in the office (embedded below), scored a better 61% “synthiness” using 45 photos.

Ideally, synths should use 150-200 photos, and no fewer than 20.

While the speed and ease of use of Photosynth is indeed impressive, I unfortunately found the actual synths to be a little underwhelming. Part of that might be because my subject matter for creating test synths wasn’t the best fodder for this technology — it excels at spatially relating photos taken from multiple angles of a single object, whereas my attempts at “synthing” used photos of a mostly empty space taken from essentially the same spot.

However, even the demos on the site from Microsoft employees and National Geographic weren’t as impressive as the original TED demo of Notre Dame. Perhaps that’s because of inferior photo sets, or perhaps it is due to slight changes in the viewer (I found it harder to zoom out and get the overall “3D” picture, for example, even with synths that should have been perfect for that, such as the Taj Mahal and the Sphinx).

Microsoft’s TED demo scraped hundreds of images from Flickr and was able to relate them spatially to create a 3D representation of an object. But right now, Photosynth is a personal technology — there’s no easy way to collaborate and gather multiple photos from outside sources. You can only create synths using pictures from your computer, and most people are unlikely to have the comprehensive photo sets necessary to create high-quality synths. Microsoft hinted that more collaboration tools are coming, though. When that happens, the technology will be less of a novelty, as people will (hopefully) be able to contribute photos to public synths and keep making them better.

Have you tried Photosynth? What were your thoughts? What sort of things have you synthed?

  • locomotivate

    Openphotovr has been online for quite some time and is a Flash-based alternative to Photosynth: http://openphotovr.org/
    I haven’t tried Photosynth because I’m on a mac without a windows installation handy.

  • Ryan Stewart

    I played with it for the first time today and was impressed. I didn’t upload any synths, just browsed some. You’re right about some of the synths being a little underwhelming, but some of them are really good. I think as more people upload and get used to how things are stitched together, we’ll see some cool stuff.

    Also, is there a way to get a feed of just your blog posts, Josh?

    =Ryan
    ryan@adobe.com

  • Roy

    This is one of those things where Microsoft’s rep hurts them.

    This is a pretty cool tool, and if it was released by Apple or Google everyone would eat it up, but when the super evil corporation releases it…it’s “just a novelty.”

    I’m not a MS supporter, but I think people at least need to stop being biased against the product just because of the company that made it.

  • jordan

    microsoft is lame.
    they don’t know who they are anymore and they’re floundering. when you start trying all this random stuff that doesn’t really work, and when you have to resort to hidden-camera blind-consumer testing (i.e. the Mojave Experiment) as a marketing plan to convince people Vista doesn’t suck, you know you are losing market share.

  • honeymonster

    AFAICT openphotovr requires you to manually place/stitch the photos. With Photosynth this is a completely automated process. The software looks for “characteristic” features in each photo. By indexing the photos by these features it can locate other photos in the set with a number of the same features. By comparing features and their placement, the software calculates where each photo was shot in 3D space – and arranges them accordingly in the synth.

    I think Photosynth is awesome. Although impressive, it is still a bit rough – if the synth isn’t perfect you can easily get backed into a corner of the synth with no obvious way out, short of starting over.

    Thinking of the possible applications of this technology is mind-boggling. The obvious ones are realtors, tourists, tourist agencies, museums, department stores, etc. You can get a “tour” of public buildings before even going there – all that’s needed is for someone to shoot a synth and upload it.

    But also imagine a football arena with multiple cameras constantly shooting images of the event. You could then “synth” a point in time and do an orbit of the interception.

    Google Streetview is awesome, but this technology has the potential to go inside the buildings as well. Instead of expensive equipment operated by a single global entity (Google) this will be de-centralized, much like the web is now.

    The “indexing” algorithm can also be used to find totally unexpected links. In the synth of the Notre Dame Cathedral the software actually found a poster and placed it into the synth. If you have an image of a rare sculpture or of a painting, the same algorithms can conceivably be used to find images of the same/similar objects.

    It would be nice to be able to view synths in Flash, though. Once the synths are generated, the viewer is comparatively simple. Although I suspect that eventually Silverlight and not Flash will be required…

  • pfitz

    Doesn’t run on Mac OS X. Yes, just a novelty for Windows users.
    Try using http://openphotovr.org/ instead, please.