
DRM: Cutting Off Your Prose to Spite Your Face

By Alex Walker

Zebra text selection at MISAustralia

How far should you go to protect your copyright?

Yesterday I clicked through to an anti-Twitter rant (ironically, via a retweet). While you can make your own call on the content, the thing that really caught my eye was the body font. Why on earth would a large, professional content site choose to display its content in an ugly, unreadable monospaced font?

Absent-mindedly, I drag-selected some of the text and got another surprise — a pretty checkerboard pattern across the selection area.


Hmmm… interesting. What’s going on here?…

Viewing the source, my jaw nearly hit the desk.

The insane scientists at MISAustralia appear to have built a content management system that automatically shuffles each paragraph into two piles, letter by letter.

Each pile is then dumped into its own DIV and padded out with non-breaking spaces, before the two are precisely overlaid to make the text readable again.

Of course, this means copying and pasting the text ONLY touches the uppermost DIV, which explains both the zebra patterning and their choice of monospaced font (the two layers only line up if every character is the same width).
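The letter-shuffling trick can be sketched in a few lines of JavaScript. To be clear, this is my own hedged reconstruction, not MISAustralia's actual code; the function name splitForDrm and the simple even/odd alternation scheme are assumptions for illustration.

```javascript
// Hypothetical sketch of the two-pile shuffle described above.
// Each character goes into one of two layers; the other layer gets a
// non-breaking space in that position, so the two DIVs line up when
// overlaid (which is why a monospaced font is required).
const NBSP = "\u00A0"; // non-breaking space

function splitForDrm(text) {
  let top = "";
  let bottom = "";
  for (let i = 0; i < text.length; i++) {
    if (i % 2 === 0) {
      top += text[i];
      bottom += NBSP;
    } else {
      top += NBSP;
      bottom += text[i];
    }
  }
  return { top, bottom };
}

// Drag-selecting the page grabs only the top layer, hence the zebra pattern:
const layers = splitForDrm("Protected prose");
console.log(layers.top);
console.log(layers.bottom);
```

Stacking the two resulting strings in absolutely positioned DIVs reproduces the effect described above: readable to the eye, gibberish to anything that reads one layer at a time.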

Clearly their motivation is Digital Rights Management (DRM): making it more difficult to copy-and-paste or screen-scrape their content. In fact, their HTML source comments refer to it as a ‘DRM Viewer’.

View Source: The DRM Viewer showing what Google sees.

This seems astonishing to me on so many levels.

  1. Firstly, it makes their content present as utter gobbledygook to screen readers and other assistive technologies. I am not a lawyer, but I’d suspect there’s the basis of a robust discrimination lawsuit in there.
  2. Secondly, it makes their content unreadable in any RSS reader and prevents them from even offering an RSS feed.
  3. Thirdly, it necessitates the use of a monospaced font that further erodes the value of their content.
  4. Finally, and most importantly, it makes their body copy (i.e. the heart and soul of their site and business) completely invisible to Google, Yahoo and every other search engine on the planet.

That last point is mind-boggling to me.

An entire industry (SEO) has evolved for no other reason than to ensure Google sees and values your content. Companies live or die on their ability to make their content visible. Here is a company going to great time, effort and expense to actively obscure their work from the web’s largest traffic provider.

As far as Google is concerned, this isn’t just lowly rated content, it is ‘non-content’. It simply doesn’t exist. It was never written.

As a quick example, take this recent article Nine loses EPG battle.

Search Google for the non-DRMed article title and it comes up first. Perfect! Google clearly knows about the site and visits it.

However, let’s step inside the article and search for a highly specific phrase, “Ice TV general manager Matt Kossatz said the ruling was timely”.

Result: Nothing. Blip. Nothing to see here, people, move along.

Of course that’s no surprise. How WOULD Google know what it was looking at?

I’m not even going to start on the huge accessibility issues, for fear of turning this into a 10-page post.

Before and After Greasemonkey

The Final Irony

Now, even if this were a foolproof solution to their copyright dilemmas, it would still be highly debatable whether it’s worth inconveniencing 99.99% of your everyday readers to stop the 0.01% of your visitors who are infringers.

Unfortunately this DRM is anything but foolproof.

Anyone running Greasemonkey and the MISaustralia text selector userscript (hats off to Gustav Axelsson) can not only cut and paste their little hearts out, but gets to read the text in a comparatively luxurious Verdana, Arial, or Helvetica typeface.

In fact, if you’re a Firefox user who reads this site, you’d be almost silly NOT to use this script, just for readability reasons.

Equally, writing an application that automatically parsed and republished every new article elsewhere would be just as trivial. And the cool bit is that the republisher wouldn’t even have to compete with the original authors: they would ‘own’ that content as far as Google was concerned.
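Reversing the obfuscation really is that trivial: zip the two padded DIVs back together, keeping whichever character at each index isn’t the non-breaking-space padding. A minimal sketch of the idea (mergeLayers is my own hypothetical name; this is not the code from their site or from the userscript):

```javascript
// Undoing the two-layer shuffle: walk both padded strings in step and
// keep whichever character is not the non-breaking-space padding.
const NBSP = "\u00A0";

function mergeLayers(top, bottom) {
  let text = "";
  const len = Math.max(top.length, bottom.length);
  for (let i = 0; i < len; i++) {
    const t = top[i] ?? NBSP;    // treat missing positions as padding
    const b = bottom[i] ?? NBSP;
    // Prefer the top layer's character; fall back to the bottom layer;
    // if both positions are padding, assume an ordinary space.
    text += t !== NBSP ? t : (b !== NBSP ? b : " ");
  }
  return text;
}
```

A scraper would just grab the two DIVs for each paragraph, run them through something like this, and republish clean, indexable text.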

Now, these guys are part of a large, generally tech-savvy company (Fairfax).

Is this nuts?

  • Malendariel

    Every now and again I think the depth of human stupidity can no longer shock me – then I read something like this.

    Someone working on this site made a very stupid decision – I wonder if they even realise the impact of having their content obfuscated like this.

  • Anonymous

    Erm, yes!

  • cwd

    Ha! And they have meta keywords and descriptions while they are obscuring the content – the biggest factor in SEO!

  • It’s more than just nuts. It’s insane.

  • markfiend

    I can imagine the meeting: “hey, guys, I’ve got this great idea to hide our content from Google”.

    What the FUDGE were they thinking?


  • Outrageous!

    Good way to advertise that you know so little about how the web functions. Both at a technical, and user level.

    How can you maintain any authority in your online publication when it bucks so many of the basic rules, and when you show such contempt for your readers.

  • I think it’s funny that it’s all probably backfiring. Like you mentioned toward the end of your article – anyone who does steal the content (which I agree, it’s not difficult — at the very least, simply reverse-engineer it) will get listed in Google, while they do not!
    They might as well just turn it into a newsletter and simply mail it to everyone… or use a captcha in order to see it (but again becomes invisible to Google).
    I’m surprised they wouldn’t offer an executive summary at least for Google to eat up (why not combine that with a captcha idea?)
    This idea was clearly not thought through — it’s a frustrated man’s attempt to protect what is his (and, ironically, to destroy it in the process).

  • Tarh

    “I’m sick of people stealing our content; we need a DRM solution. Any ideas?”
    “We could use a script to put all of the text inside of an image so it couldn’t be copied.”
    “Yeah but that has too many accessibility issues and can still be easily circumvented. What you guys need is our patent-pending obfuscation technology!”

  • Hilarious! If this was April 1, I would suspect an elaborate joke. The frightening thing is that I can imagine clients insisting on this and the poor web developer having a complete breakdown…

  • Xylex

    So you stop casual cutting and pasting, but then your site looks horrible with a monospaced font and search engines can’t find anything?
    Stuff like this reminds me of my retail loss prevention days. If we lock it up really securely, the theft rate goes down, but then sales also stop.

  • Simon Grae

    The final insult here must be how easily the two can be combined… how can an enterprise so large, with such resources and obvious management ability, fail to catch this gross oversight by its employees? DRM is a big issue, particularly with content, and their handling of it smacks of excess and laziness.

  • Oh wow this is just beyond stupid. It makes my head hurt ><

  • Several points of this article aren’t necessarily true.

    Nothing about this stops the site from offering an RSS feed; the text of their articles is no doubt stored in a database, and there’s no reason the RSS reader has to format its output the same as their CMS formats it for the website. They can put it in normal plaintext in the feed.

    Similarly, nothing says that they are producing the same output for the search spiders as they are for you. They can easily identify the search spiders by IP and user agent, and serve them the raw text instead of the DRM’d version.

  • Let me get this straight: I can write a cURL script to scrape the content onto my website, replacing their div tags and such. Since I’ve removed their markup, I end up with nicely formatted, unique content.

    Defeats the purpose.

  • “Nothing about this stops the site from offering an RSS feed; the text of their articles is no doubt stored in a database, and there’s no reason the RSS reader has to format its output the same as their CMS formats it for the website. They can put it in normal plaintext in the feed.”

    Dan, certainly there’s nothing technically stopping them having an RSS feed, any more than there’s anything stopping them putting machine readable text on the site. It’s their philosophical position on making their text available that would seem to prevent RSS.

  • Shadow Caster

    I’m quite impressed by the method used. I wonder about the person who thinks of these things.

  • @Dan Grossman, if you look at the Google text cache of a MIS article, you’ll see the text is not being presented in full.

    You’re right, they could serve something else to a spider, but it doesn’t seem that way from here.

  • If you stopped for a moment and thought about it, you’d probably guess that whoever implemented something like this doesn’t realize the negative effects of their approach. And if they don’t realize the negative effects, it’s doubtful they would do anything special to correct it elsewhere (or would want to, since they’d figure people would just steal it from there).

    This is definitely a ridiculous case and hopefully an exception to the rule. However, I fear this type of thing is all too common. Typically I would never refuse to do a website for a client, but if a client approached me with this type of request, it’d definitely be a coin toss as to whether I’d keep them (and if I did, I’d likely raise my rate if they insisted on this).

  • Foo

    Wow, surely there is an easier and simpler way to not have your info known. Don’t tell anyone about it in the first place. That way all of the above energy would not have been wasted for a “nothing” outcome :)
