By Kevin Yank

Flickr, Zooomr and API Parity

The following is republished from the Tech Times #141.

An issue I touched on in my editorial on Web 2.0 Connectedness bubbled to the surface this week with news that Flickr had denied a request from competitor Zooomr for access to the Flickr API so that Zooomr could import users’ Flickr photos and metadata (e.g. tags) for them.
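To make that concrete: pulling a user’s public photos and their tags out of Flickr takes only a couple of calls to its REST API. The sketch below (in Python) uses the real flickr.people.getPublicPhotos method, but the API key, user ID, and helper names are placeholders of my own, and a production importer would also need to page through results and make authenticated calls for private photos.

```python
# Minimal sketch of a Zooomr-style importer talking to the Flickr REST API.
# The API key and user ID below are hypothetical placeholders.
import json
from urllib.parse import urlencode

API_ENDPOINT = "https://api.flickr.com/services/rest/"

def build_photo_list_url(api_key, user_id):
    """Build a flickr.people.getPublicPhotos request that also asks for tags."""
    params = {
        "method": "flickr.people.getPublicPhotos",
        "api_key": api_key,
        "user_id": user_id,
        "extras": "tags,url_o",   # include each photo's tags and original URL
        "format": "json",
        "nojsoncallback": "1",
    }
    return API_ENDPOINT + "?" + urlencode(params)

def extract_photos(response_text):
    """Pull (title, tags) pairs out of a Flickr-style JSON photo list."""
    data = json.loads(response_text)
    return [(p["title"], p.get("tags", "").split())
            for p in data["photos"]["photo"]]

# The URL from build_photo_list_url would be fetched over HTTP; here we
# parse a canned response instead, so no network call is needed:
sample = '{"photos": {"photo": [{"title": "Sunset", "tags": "beach sunset"}]}}'
print(extract_photos(sample))  # [('Sunset', ['beach', 'sunset'])]
```

The point of the sketch is how little work this is for the importing service once API access is granted — which is exactly why the decision to grant or deny it matters.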

Now, before you go on the warpath, I should point out that Flickr has approved API access requests from other competitors, such as Riya and Tabblo, in the past. Something about the directness of the competition Zooomr represented tipped the scales, however, and Flickr made the call. In an email to Zooomr:

…we choose not to support use of the API for sites that are a straight alternative to Flickr.

Founder Stewart Butterfield even shared his reasoning on the FlickrCentral forums:

With respect to granting a commercial API license to a direct competitor: we might not. […] In the case of a truly direct competitor (and, so far, we have very few), we probably wouldn’t. And I don’t see that as malicious on our part: why should we burn bandwidth and CPU cycles sending stuff directly to their servers?

This, understandably, sparked some serious discussion. When responsible companies like Yahoo! claim that their services are open, allowing users to retrieve whatever data they put into them, do they have the right to make it less convenient when what you want to do with your data is move it to a competing service?

After some internal discussion of the issue, Butterfield reconsidered his position:

I actually had a change of heart and was convinced by Eric’s position that we definitely should approve requests from direct competitors as long as they do the same. That means (a) that they need to have a full and complete API and (b) be willing to give us access.

The reasoning here is partly just that “fair’s fair” and more subtly, like a GPL license, it enforces user freedom down the chain. I think we’ll take this approach (still discussing it internally).

This proposal that API openness between competing services should be a two-way street was applauded by O’Reilly’s Marc Hedlund, and dubbed “API Parity”.

While the symmetry and built-in fairness of the arrangement looks attractive at first glance, and should resolve the Flickr/Zooomr case neatly to the benefit of the users, I’m not convinced it is a conclusive solution to this issue in general.

I’m not going to argue that all web-based services should be forcibly compelled to open the content created by their users for easy export. I certainly believe that it’s the right thing to do, and will ultimately contribute to user confidence in the service, but as Nicholas Carr points out, vendor lock-in is a proven business strategy that is unlikely to die anytime soon. But there will also always be a segment of users that insists on their data being openly available to them, and services like Flickr will cater for such users.

What I will argue against is any such open service intentionally putting up roadblocks for certain uses of a user’s data. Returning to the Flickr/Zooomr case, Flickr’s objection to providing unilateral API access to a competitor on the basis that it "burns bandwidth and CPU cycles" just doesn’t hold water. Flickr’s servers will run just as hot whether a user downloads the data and then uploads it to the competitor manually, or performs a direct transfer to the competitor via Flickr’s API. The only difference Flickr makes by denying its competitors API access is to arbitrarily reduce the convenience of its service when users want to take their data elsewhere.

So while so-called API Parity is certainly something to strive for, it should not be a condition for an open service like Flickr to provide API access. It’s not fairness to its competitors that Flickr should be concerned with: it’s fairness to its users. A service just isn’t open unless that openness is provided uniformly and without bias.

  • This is just crazy. Flickr’s nothing special – just an idea that someone reached before anyone else.

    But now that people are catching up, they’re going to find it difficult to keep innovating to stay ahead, to keep their users… who now have no real reason to stay loyal to the brand as long as they can up-and-leave.

    The logic here is all wrong.

    Users should have their own APIs, a set of microformats for their data, and then use flickr/zoomr/whatevr to analyse and structure it.

    Obviously, this isn’t going to happen anytime soon, as nobody can use their desktop as a webserver… but eventually when machines are turned on all the time, and we have massive pipes to push data down, this will become less of an issue.

    But still, an interesting topic to follow, thanks for the post!

  • Dr Livingston

    Well, I know I’m probably going to be shot down, but I think Flickr are fair enough to stand up and protect not only their own interests but those of their users as well.

    So if they want to prevent their competition from accessing their software, they have that god damn right to do so. That is commercialism for you. It’s nothing personal – it’s just business :)

    And just because that prevention is in place doesn’t mean there is going to be an impact on the users and how they use the service in relation to other available services, either.

    If you ask me, if Flickr actually do give the go-ahead for this, then that in itself restricts and prohibits not only competition but innovation as well.

    Anyway, my message to Flickr is to stand by your convictions and don’t give any ground. You are right and just in this case, and to give Zooomr (crap name, btw) access would be selling yourself short…

    all that hard work and for what?

  • I agree with Dr L here. Flickr are a company, whose objective must only be to be a financial success. To release an API to a direct rival that allowed them to compete more effectively would be business suicide, and I say well done to them for standing up and saying no!
    If I came up with a great idea and made a success of it, then you can bet that I wouldn’t do anything to make life easier for someone ripping off my idea.

  • Web 2.0 is driven by generosity. Video and photo hosting services are all suddenly free-of-charge. Digg just sends people *away* as quickly as possible. Etc, etc.

    However, this generosity is rightly directed at the end user.

    I’ve read much about this Flickr / Zooomr debacle and have concluded that, in my humble opinion, it’s nothing but exploitation of this wave of generosity.

    Giving away everything an end user could possibly desire in exchange for market penetration is one thing. Assisting rip-off ideas (come on, stop ignoring the fact that we can all see Zooomr is just that, even in name) is another bag of chips entirely.

  • Stewart Butterfield

    You’re missing the point. The path to true data portability does not start with the implementation of vendor-specific APIs.

    That may introduce a small amount of openness into the market, depending on how many services implement each other’s APIs, but the only way to ensure broad interoperability is to have an open *standard* (as opposed to API) for import/export.

    Most users organize their photos on their local computer, and neither Microsoft nor Apple is ever going to implement the Flickr APIs so that the metadata makes it back to the file system. Nor are the dozens of desktop photo management applications going to start coming bundled with web servers so that online services can hook up to them. It’s just not a realistic approach.

    On the other hand, IPTC is doing fairly well, and with some extensions to match the functionality of modern online services, it could be just the ticket.

    Finally, there are all kinds of reasons *not* to use the API for this: it opens the provider of the API to all kinds of risks and requires more monitoring, oversight and protection of users’ data and privacy. A malicious third party can’t do any damage when importing exported data, but they can through the API.

  • chrisb

    Agreed… if you start insisting all sites must share data with competitors, we’ll just end up with patent issues rising again – if they can’t stifle competition by making it a little bit more difficult to move, they’ll just find ways to stop the competition getting off the ground in the first place, and suddenly we’re back where we started and the user is worse off!

  • Nate Kirby

    I think the point about “burning CPU cycles” is misunderstood. It takes time for engineers to build this import/export facility. Those engineers could be adding user-requested features instead.
