Internet “Brownouts” Expected From 2010

By Craig Buckler

Consumer demand for bandwidth will start to exceed supply as early as next year, according to a new report from Nemertes Research. Internet traffic is growing at around 60% per year, and that does not include the expected exponential rise in demand from India and China. The problem could lead to connectivity disruption and “brownouts”, where internet devices go offline for several minutes at a time.

The popularity of bandwidth-hungry websites such as YouTube is partly to blame. The video-sharing website now generates more traffic than the whole of the internet did in the year 2000. More people are also working from home or using their PCs for cheap entertainment: by 2012, the internet will be 75 times larger than it was in 2002.

US telecoms companies are currently spending almost $40 billion per year upgrading cables and network infrastructure to increase capacity. However, demand keeps increasing, and monthly traffic has already reached 8 exabytes (a quintillion, or a million-trillion, bytes – a single exabyte is equivalent to 50,000 years of DVD-quality video).
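As a back-of-envelope check on the figures quoted above (the 60% annual growth rate, the 75-fold increase and the 8 EB monthly total come from the report as cited; the decade span and round numbers are assumptions for illustration):

```python
# Sanity-check the growth figures quoted in the article.
# Assumption: 60% annual growth, compounded over the decade 2002-2012.
growth_rate = 0.60
years = 10
multiplier = (1 + growth_rate) ** years
print(f"Traffic multiplier at 60%/yr over {years} years: {multiplier:.0f}x")
# Prints ~110x, so the quoted "75 times larger" actually implies an
# average growth rate slightly below 60% per year.

# Scale of the quoted monthly traffic: 8 exabytes, in bytes.
EXABYTE = 10 ** 18  # one quintillion bytes
monthly_traffic_bytes = 8 * EXABYTE
print(f"8 EB = {monthly_traffic_bytes:.1e} bytes")
```

The two figures are consistent in order of magnitude, which is about all a projection like this can claim.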

The report warns that an unreliable internet would be devastating for business, and that Web 2.0 applications could be rendered useless. It estimates that $140 billion must be spent worldwide to bolster network capacity – an amount of investment that is unlikely, especially in the current economic climate.

The internet has transformed every aspect of our lives, and there is too much at stake to let it become unstable. In the short term, we can expect:

  • slower net access
  • strictly-metered bandwidth charges
  • ISP bandwidth management, where individual websites are charged for improved connectivity (a move that could compromise net neutrality).

An unreliable internet would also attract fewer users, which could naturally reduce bandwidth demand.

Do you believe the Nemertes report? Is the internet approaching critical mass? What can be done to prevent the problem?

  • dotwebs

    Why not create a second internet? One could be for business and academia, the other for social networking. A huge operation, I’m sure – but if it can be done once in a relatively short period of time, it could be done again. Or just create separate channels for business so that side of things doesn’t get affected by the ‘brownouts’.

  • I think I’m going to have to wait and see before I believe it.
    To think that a global network that has been running for the past few decades will all of a sudden grind to a halt seems a bit hard to believe, especially considering the vast amount of improvements that the networks are undergoing, and will continue to undergo in the coming years.
    I’m not saying that it’s impossible of course, I just find it hard to believe.

  • This sounds alarmist to me…

    The reality is there are many, many organisations involved in how the internet works. It isn’t as though a single person needs to spend $140 billion to “fix it”. One company will notice that a single piece of hardware is getting overloaded and will smooth it out. This sort of fixing on demand will continue. The telcos will incrementally improve areas of the network over time.

    The internet didn’t start at multi-gigabits per second it grew to that capacity. It will continue to grow. Advancements will be made, solutions will be found, improvements will be made by many and the network will function.

    Will there be issues? Sure. Will they be sorted out? Yes. Alarmists…

  • calbak

    It’s not as simple as “make another internet”. The internet was not made in a short period of time; it has slowly grown from something very basic into what it is today, and it runs over the existing established phone network.

  • 1prodev

    I agree with soregums. There is too much at stake to let this happen. This is just scaremongering.

    $140 billion? That’s nothing in these crazy economic times!

  • Anonymous

    I agree. This is just scaremongering (or maybe you just couldn’t find anything else to post about). But it’s definitely not going to happen. $140 billion worldwide… that’s nothing.

    # strictly-metered bandwidth charges
    # ISP bandwidth management, where individual websites are charged for improved connectivity (a move that could compromise net neutrality).

    Not possible. Is the whole world going to agree on how much each company/person/country receives in bandwidth? I’m not hopeful that ISPs and hosting companies are going to voluntarily restrict bandwidth to improve global performance.

    And btw, Australia is spending billions on its internet infrastructure… India and China already have. You can see that the trend is to increase performance.

  • michael

    It’s not April 1… that makes you the fool.

  • busynic

    With such a fantastic growth rate (which translates to customer demand), why wouldn’t people invest in new and growing technologies?

    The real issue is “control” and “barrier tests.” Articles like this will become more and more prevalent. They will be shaped and disguised as different topics (a bit like split testing). Each controlling entity (governmental and large market influencers) is trying to understand how it can “disrupt the level playing field” and tilt the edge back into its domain.

    The business/economic/community landscape has changed forever and we are not even close to understanding its full ramifications, nor its opportunities. There has never been a period in history where so many opportunities (of significant proportion) have been presented to the masses. Historically it was secret handshakes and power brokers who tilted the future (and “anointed” their heirs).

    Now we have the internet – or what some fear – the “millennium anti-cartel” (MAC)

    Is it possible that brown-outs will occur? Of course, but they won’t be caused by a lack of willingness to invest or by a lack of new technology…
    They will be driven by greed.

  • kumarei

    While this article sounds a little alarmist, I don’t think it’s totally off base. They are completely correct that user demand is outstripping the capacity of the cabling to carry it. And it’s obviously a worry for me, since my continued employment currently depends somewhat on a functioning Internet. I don’t really think there’s much I can do about it, though, so I mostly ignore it. If it happens, it happens.

  • Anonymous

    I think of it more as a move to panic the net user and easily pass website-restriction rules that would badly harm our freedom as internet users – just another way to manipulate the free flow of information to the masses. Let’s face it: the technology behind the web is old, and it still handles the massive traffic and the 75-fold expansion. I think it’s better to look for upgrades than for ways to limit users. The main idea behind the web is that people can share and collaborate with no limits. Putting limits on it would be insane, and only big corporations like Google, Yahoo, etc. would own the web, making it virtually impossible for new companies to be born, ruining competition and thus making services worse – because it’s competition that drives the need for perfection.

  • mr_than

    I remember reading all about the impending collapse of the internet when we were going to run out of IPv4 addresses. Just as in that case, intelligent and adaptive solutions will emerge. Nothing to fear here, methinks.

  • Anonymous

    Does Craig Buckler work for Time Warner? Cablevision is about to roll out 101 Mbps high-speed internet in the NY-NJ area with no bandwidth cap whatsoever. The sky is surely falling. Oh wait…

  • Tarh

    It’s just an excuse to kill net neutrality.

  • Chapindad

    This is pure fear-mongering. I would be interested in who paid for the report – probably the telcos or media outlets. The internet is a distributed network, so there is no set finite capacity on it: the capacity is distributed across the network. Telcos will invest in the upgrades or be left behind when consumers leave their network for one that has the capacity.

    Besides, there is an Internet2 that only select organisations are a part of now; it is much faster and uses IPv6, so expect this to be opened up if capacity becomes a problem.

  • Anonymous

    $140 billion is nothing in the global economy. This situation will come to a head, and the telecoms will be able to catch up without resorting to drastic measures. But alas, they will still resort to drastic measures. It’s articles like this that will make people feel the telecoms are justified in raising their prices and think that creating a two-tiered internet is a great idea.

    The Telecoms need better ideas and better technology to help them with their bandwidth troubles. They have the money to make this happen.

  • Chris

    Looks like John Sidgmore of UUNet and Bernie Ebbers of WorldCom had it correct back in the day. We need more bandwidth.

  • TRitter

    I’d like to refer everyone to the actual report. The article in the London Times Online took great liberty to sensationalize the story. As you’ll read, Nemertes projects the fiber, backbone and metro layers of the Internet will scale to meet all projected demand. The problem is at the edge – the last mile in the US. This is where we project demand growing much faster than capacity. We also address the issue of IPv4 vs. IPv6. The bandwidth issues can be solved by running more fiber, deploying WiMAX and upgrading copper plant. As you’ll read, the IPv4/IPv6 issues are far more challenging to solve.


  • JdL

    What a ridiculous article. Craig, you demonstrate a certain amount of immaturity by even discussing this type of article, let alone reading it.

    You need to do a better job of qualifying your sources and the content you read / report on.

    “Nemertes Research” makes its living by selling research reports etc. In this article and many others like it on their site, they are “predicting” as though the market-makers and major industry influencers are static forces, while the challenges they face increase. This type of “reporting” is indicative of a company doing whatever it can to increase readership — and thereby revenue — by telling half-truths.

    A realistic and objective report would show that the industry has and will continue to be a dynamic force, adjusting and enhancing as demand increases.

    There is no looming crisis. This is pure Fear-Uncertainty-Doubt (FUD).

  • It is without a doubt that infrastructure will be improved to handle higher loads in the near future, but the big talk now is that bandwidth companies plan to pass the cost of this improvement along to the direct consumers (that’s us). Meaning your monthly unlimited bandwidth will soon be capped, and you will be charged for over-usage.

    So if they need $140 billion to future-proof the integrity of the internet, I’d rather everyone in the world who uses the internet for business in one form or another pitch in $100 each. Added up, it would be more than enough. This certainly beats the monthly bandwidth cap.

  • SpacePhoenix

    It would probably have an effect on the many online games (both browser-based and ones that use a client). The main cost in avoiding the brownouts will probably be the upgrading of the old copper telephone lines to fibre-optic lines.

  • fproof

    Just like years ago, when some reports were published stating that the world was going to run out of storage capacity pretty soon, while one year later flash storage was introduced to the market…
    Besides, there is too much money being made by everything related to and depending on the internet for me to believe no investments will be made to increase bandwidth sufficiently.

  • unamused

    Everything has a breaking point, sure, but I agree with Tarh: sounds like propaganda.

  • “Web2.0 applications could be rendered useless”

    Sounds like we’ll be able to finally go into web 3.0, ROCK ON!

  • @sinthus:
    Love the positive outlook (and the assumption we don’t skip straight to Web 4.0 :P)

  • I have wondered about this regarding the Australian National Broadband scheme. The talk of fibre to the premises with enormous bandwidth makes me wonder how the sites and servers supplying the data will keep up.
    How will a distant island like Australia get enough data into the continent if every user wants to access it at fibre optic speeds?

  • The interpretations of the report are totally overblown, as usual. There’s been a report like this every year for many years.

    Internet usage has increased steadily but not at an amazing rate each year, and the backbone has grown with capacity at an even faster rate. Everything but the last mile has been fiber for a long time, and that fiber is capable of pushing data at far faster rates than we’d ever need.

    And the edges are fine too. The big cable operators have finished rolling out DOCSIS 2.0 in most major areas of the country, and are already rolling out DOCSIS 3.0. The upgrades cost them something like $20-120 per customer on average, which is nothing when they make that back in a month or two in subscriber fees. With DOCSIS 2 and 3, they’ve got more capacity on the edge than any of the plans they sell, no issue there.

    All the while, the ISPs’ costs are pretty much fixed as far as transit. Most of it is peered with other ISPs anyway.

    There’s simply nothing to worry about here…

  • The other thing: if you host a website, you pay for people to access it. If your site is slow and you are making money off it, then you sort things out so the site is fast.

    I think somewhere along the line people are forgetting that it costs money to host a site. If you want a dedicated 100 Mbps connection that you can utilise 100% of, that is going to cost you a fair amount of coin. The network providers will supply it if you pay for it.

    Then we go to the other side, the consumers of these sites. If I want to have highspeed/lots of data, then I pay for it. The telco supplying me figures out how much to charge and works everything out so the network is able to deliver what they are selling.

    Everywhere in the network someone is being charged. The network doesn’t have “free spots”. The only problem I see is the US model of $xx/month = unlimited downloads as fast as you can get; this is what isn’t sustainable and is going to cause issues. The problem is that the revenue generated from customers is not greater than the cost to supply the service. AU’s data limits make sense: I’m paying for data + speed, and the ISP needs to make sure it is operating properly to maintain the product.

    It is all about supply and demand. I want 100GB @ up to 24 Mbps, then I pay $149/month. Now, if the ISP is counting on me not using what I pay for, it is not going to work out in the long run. Products will be revised, and the ISP will do what it needs to in order to fit with what is sustainable.

  • @SoreGums: The ISPs on the consumer end, and the datacenters on the website end, lay lines with certain capacities; they don’t pay by how many bits pass over the lines. Their expansion cost is in running more lines or upgrading routers to higher throughputs.

    There is nothing wrong with US ISPs or any others offering “unlimited” plans, as they are only unlimited up to a certain speed. That’s the same way the ISPs themselves are billed, so it makes perfect sense to bill the customer that way. An $x/mo cost to run a 1000 GB/sec line becomes a $y-per-customer charge for each 8 MB/sec portion of it. It doesn’t matter how many gigabytes are pushed over it over the course of the month, only that there’s enough capacity for each customer to use it when they want it.

    Your 24 Mbps line should cost $150 per month because you’re getting a 24 Mbps portion of their network, which has a certain capacity in bits per second, not because you won’t transfer more than 100GB over that line per month. The cost is the same to everyone involved whether you transfer 100GB, 200GB or 0GB.

  • @DanGrossman

    Yeah, I’m aware that is how it works in reality.

    So, thinking about this a little more: is the real issue that ISPs pay for 10 lines’ worth of capacity but sell 20 lines’ worth of access, hoping that at any one point in time there will only be 10 lines’ worth of utilisation?

    I get that a lot of data going over a network costs the same as none, and that what is being paid for is the physical hardware in buildings and in the ground. There is a practicality issue that starts to pop up, though. If the sum of an ISP’s customers’ connection bandwidth = 10,000 and the ISP made sure its network could handle 10,000, that would be a large bill and completely impractical to implement.

    So then ISPs came up with another way to make it cost effective, give people a data limit. That way they don’t need to supply 100% bandwidth 100% of the time and are able to offer a great product at significantly reduced costs.

    Now combine my previous post with this one :)

  • You say it’s completely impractical, yet that’s what they’re doing now, and you don’t hear Comcast, the largest cable provider in the US, complaining about it. It’s only these absurd sensational reports that come out each year about there not being enough capacity at some point a few years out (it’s always a few years out!).

    My point is that data-transfer caps and bandwidth caps are equivalent. They really are: just compute the number of gigabytes it’s possible to transfer per month by multiplying the MB/sec rate by the number of seconds in a month (60*60*24*30) and dividing by 1000.

    A little overselling is good business and not an issue. The ISPs can manage how much each customer can transfer by limiting their speed in MB/sec. If they only have capacity for 3 GB/sec in a neighbourhood with 500 customers, they can sell 6 MB/sec unlimited service and never utilise it all, or sell it as 8 or 10 MB/sec service like they actually do, and occasionally a few people might not get the full 10 MB/sec if everyone uses their lines at once.

    Everyone’s happy and nobody has to worry about how much they’ve downloaded that month, or if using YouTube that day is going to lead to service being cut off.

    Data-transfer-based caps retard the evolution of the web, while bandwidth-based caps, as long as they’re in the broadband range, have much less of an effect.
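The cap-equivalence arithmetic discussed in the last few comments can be sketched as follows (the plan figures are the commenters’ hypothetical examples, not real ISP products):

```python
# Convert a sustained line rate into an equivalent monthly transfer,
# and work out the oversubscription ratio from the comments above.

SECONDS_PER_MONTH = 60 * 60 * 24 * 30  # ~2.59 million seconds

def monthly_transfer_gb(rate_mb_per_sec: float) -> float:
    """Maximum GB transferable in a 30-day month at a sustained MB/sec rate."""
    return rate_mb_per_sec * SECONDS_PER_MONTH / 1000

# A 3 MB/sec line saturated all month:
print(f"{monthly_transfer_gb(3):,.0f} GB/month")  # 7,776 GB/month

# Oversubscription: 3 GB/sec of neighbourhood capacity shared by 500 customers.
capacity_mb_per_sec = 3000  # 3 GB/sec expressed in MB/sec
customers = 500
fair_share = capacity_mb_per_sec / customers
print(f"Guaranteed share per customer: {fair_share:.0f} MB/sec")  # 6 MB/sec

# Selling 10 MB/sec plans on that same capacity gives the oversell ratio:
plan_rate = 10
oversell = plan_rate * customers / capacity_mb_per_sec
print(f"Oversell ratio: {oversell:.2f}x")  # 1.67x
```

This is why a speed cap and a transfer cap are two views of the same constraint: the speed cap bounds the monthly volume, and the modest oversell ratio only bites when everyone transmits at once.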