Bit caps, consolidation, and Clearwire

The news that Comcast, Time Warner, and AT&T are all considering capping use of their networks – so that “overuse” would trigger a charge – has prompted intense discussion of just why these network operators are moving in this direction.  One camp suggests that these operators have to do *something* to manage congestion, and because any protocol-specific discrimination plan raises howls of protest from the Net Neutrality side of the fence, adopting bit-usage discrimination schemes is inevitable.  On this view, it’s the least-bad approach.

The Net Neutrality side, for its part, points out that (1) each of us will fall into the 5% of “over-users” at some point or another, (2) the operators want to make sure that they remain the chief sources of video content, rather than allowing internet access to video to undermine their business plans, and (3) it seems odd to manage to scarcity rather than invest in improved access for everyone.  It’s as if the operators would prefer to keep internet access expectations at 2003 levels.  And if you really wanted to manage congestion, you’d charge differently for usage at different times.  (Meanwhile, Korea.)
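To make the time-of-day point concrete, here’s a minimal sketch of what a congestion-sensitive tariff might look like.  The peak window and per-gigabyte rates are invented for illustration; no operator has published these numbers.

```python
# A minimal time-of-use tariff sketch. The peak window and per-GB rates
# are invented for illustration; they are not any real operator's plan.
PEAK_HOURS = range(18, 23)   # assume 6-11 pm is the congested window
PEAK_RATE = 0.10             # assumed $/GB during peak hours
OFF_PEAK_RATE = 0.01         # assumed $/GB off-peak

def monthly_bill(usage_by_hour):
    """usage_by_hour maps hour-of-day (0-23) to GB transferred that month."""
    return sum(
        gb * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
        for hour, gb in usage_by_hour.items()
    )

# The same 30 GB costs ten times more at peak than overnight -- a price
# signal a flat monthly cap never sends.
print(monthly_bill({20: 30}))  # 30 GB at 8 pm -> 3.00
print(monthly_bill({3: 30}))   # 30 GB at 3 am -> 0.30
```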

People in countries with experience in volume limits (e.g., Australia) tell us that it’s miserable having fixed caps and overage charges.  In Japan, they began with expensive metered access and left it as soon as they could move towards an unbundled/separated regime – now costs are low (and flat) and speeds are very high.

Bit caps are portrayed as similar to familiar cellphone models – getting a “bucket of minutes” for a fixed price.  But the history of internet usage hasn’t proceeded that way, and it should be hard to force users into these plans.  Should be – but may not be, both because users here in the US don’t expect to be able to access enormous amounts of video online at high speeds, and because users don’t have a lot of choices for network provision.  If most of the big ones move in the bit-cap direction, there will be few opportunities for users to vote with their subscription fees and escape.
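For contrast, here is the “bucket” model reduced to arithmetic.  The base fee and overage rate below are hypothetical; the 250 GB cap echoes the figure reportedly under consideration at Comcast.

```python
# A hypothetical capped plan, cellphone-bucket style. The base fee and
# overage rate are invented; the 250 GB cap echoes the figure reportedly
# under consideration at Comcast.
BASE_FEE = 45.00        # assumed flat monthly fee, dollars
CAP_GB = 250            # monthly cap
OVERAGE_PER_GB = 1.50   # assumed charge per GB over the cap

def bit_cap_bill(total_gb):
    """Monthly bill under a cap-plus-overage plan."""
    overage = max(0, total_gb - CAP_GB)
    return BASE_FEE + overage * OVERAGE_PER_GB

print(bit_cap_bill(40))    # light user:  45.00
print(bit_cap_bill(400))   # heavy user:  45.00 + 150 * 1.50 = 270.00
```

Notice that only total volume enters the bill; the time of day the traffic moves – the thing that actually causes congestion – is invisible to the formula.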

Speaking of “most of the big ones,” the big ones may get bigger.  Verizon’s purchase of Alltel will mean that two companies – AT&T and Verizon – will “control 150 million of the 260 million wireless customers in the US.”  (From Public Knowledge.)  Verizon alone will have 80 million of those customers.  All around the world, wireless providers are consolidating as they seek to “become national players in next-generation mobile networks.”

So here we are: a retrograde move towards metered pricing, increased consolidation, and no necessary link between any of this activity and better internet access for everyone.

The Sprint/Clearwire transaction seems like a possible work-around (thanks to those of you who sent me the filing from last week – I still can’t link to it but I will when it’s available).  Their claim is that they’ll create “a new nationwide advanced wireless broadband network that will increase competition across the country and vault the US into a leadership position in the broadband innovation and deployment.”  They’re planning to provide speeds that are five times as fast as current wireless speeds, as they roll out the “world’s first nationwide WiMAX network.”  (It’s always good to appeal to our national pride.)  They’re planning to allow wholesale access – although perhaps only by the cable investors in the plan, Comcast and Time Warner.  (There’s a vague mention of other unaffiliated firms, but no assertion that just anyone will be allowed to re-sell their network.)  Now, they’re not giving up on “reasonable network management” or “no devices that harm our network.”  But they’re asserting that wireless access is the future, that it’s the fastest-growing segment of the US telecom industry, and that a new competitor is needed.

There are worries – will the $3.2 billion from the investors be enough to make a nationwide network possible?  Will the technology actually work – penetrate walls, go through anything?  What about backhaul problems?  Backhaul may be the big unexplored issue here – without a line taking all of those WiMAX communications somewhere, they won’t succeed, and those lines are controlled by incumbents who don’t have any incentive to charge market prices.  Because there isn’t a market.
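To see why backhaul looms so large, here’s a rough back-of-the-envelope sketch.  Every input (subscribers per tower, busy-hour activity, per-user demand) is an assumption chosen for illustration – none of these figures come from the Sprint/Clearwire filing.

```python
# Rough backhaul estimate for one WiMAX base station. Every input is an
# assumption chosen for illustration -- none of these figures come from
# the Sprint/Clearwire filing.
SUBSCRIBERS_PER_TOWER = 500    # assumed subscribers sharing one tower
BUSY_HOUR_ACTIVE_SHARE = 0.10  # assume 10% are active in the busiest hour
AVG_DEMAND_MBPS = 2.0          # assumed average demand per active user

def backhaul_needed_mbps(subs, active_share, demand_mbps):
    """Busy-hour backhaul a single tower would need, in Mbps."""
    return subs * active_share * demand_mbps

print(backhaul_needed_mbps(SUBSCRIBERS_PER_TOWER,
                           BUSY_HOUR_ACTIVE_SHARE,
                           AVG_DEMAND_MBPS))  # 100.0 Mbps under these assumptions
```

Even under these modest assumptions a single tower needs on the order of 100 Mbps of backhaul – far beyond a 1.5 Mbps T1 – so the price of incumbent-controlled circuits goes straight to the bottom line.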

So that’s today’s picture.  Head-scratching about bit caps, intense consolidation, and the glimmer of a possibility that a “third pipe” might emerge.  It all ties together, because without the cooperation of the incumbent network operators (the people who feel bold enough, market-powerful enough, to float metered pricing), the third pipe may not have a realistic chance of succeeding.

Meanwhile, the rest of the world watches US wireless policy closely.

4 thoughts on “Bit caps, consolidation, and Clearwire”

  1. reed

    Although I like and respect David Gross, I would add: other countries don’t emulate the USA; they were required by the WTO Telecom Treaty to create independent regulators. Many now copy the policies of the FCC of the 1990s (see Korea, France, Japan), but hardly any copy the policies of the current FCC. Indeed, in Canada, for example, the auction of 700 MHz spectrum included a specific set-aside for new entrants; hardly the FCC policy. It’s not helpful to thinking in America to continue to believe we are the Middle Kingdom, when the mantle of leadership has passed, is passing, and will continue to pass, until and unless we obtain new government.

  2. barry payne

    threats of congested and interrupted service have been used for years in the utility industry to intimidate customers, regulators and legislators, to jack up prices higher than necessary to provide firm service, as recently demonstrated in the deregulation of electricity at the generation level

    broadband providers created the congestion – not the customers – with oversold capacity and artificial shortages, which would normally push prices higher, but given the degraded service, forced bundling and existing high prices, they’re so far up the demand curve, with so much market power, that they could see drop-offs and fewer new subscribers at the point where continued price increases turn demand elastic and reduce revenue

    metered pricing of gigabytes is a crude indicator and a poor control of congestion, ignoring peak/off-peak differences altogether except by coincidence – to abandon existing pricing of bandwidth tiers in mbps – which does signal congestion cost – in exchange for metered pricing clearly indicates a strategy less concerned with congestion than with maintaining the current low-use subscriber base in terms of capped gigabytes, which will force some to higher prices and service grades while restricting others from substituting internet media for other sources

    as more of the total connection capacity as marketed, sold and made available is used to the point of triggering congestion, it could easily be downgraded to match available network capacity, priced for peak/off-peak use, or network capacity could be expanded to match connection capacity – but any of these solutions threatens the current business model with net cost increases, so they’ve come up instead with metered gigabytes to sustain it, which also fits nicely with net neutrality since all unblocked use up to the cap goes through a dumb, agnostic pipe from the user’s perspective (however the bandwidth is assigned and allocated internally)

    for some contrast, check japan, where congestion itself differs across competing providers and varies inversely with price, from very likely to very unlikely – these days, there are plenty of u.s. consumers who would be more than happy to pay lower prices for broadband service subject to congestion, just to have it at all

  3. […] models | Tags: Broadband, business models, wireless |   Susan Crawford’s posting on ‘Bit caps, consolidation, and Clearwire’ makes some interesting […]

  4. Susan

    “And if you really wanted to manage congestion you’d charge differently for usage at different times” turns out not to match any empirical evidence on costs. Except for the problem of cable upstream before DOCSIS 3.0, the cost of providing plenty of bandwidth nearly all the time (setting aside Katrina-like emergencies) is so low that the rest of the question – on wired networks – is irrelevant. Bell Canada just provided detailed data on “congestion” because the regulator demanded it. It turns out the “problem” was an almost unnoticed issue: the connections out the back of the DSLAM were too small. Even then, the “problem” was that 5% of DSLAMs failed one in 200 tests to the 70-90% level. So the 1 in 4,000 packets that might have been affected almost all went through, because the remaining 10%-30% of the bandwidth was still available.
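    For what it’s worth, those two figures multiply out to exactly the 1-in-4,000 number – a quick sanity check, nothing more:

    ```python
    # Sanity check on the Bell Canada figures above: 5% of DSLAMs failing
    # 1 in 200 tests works out to roughly 1 in 4,000 packets affected.
    dslam_share = 0.05           # 5% of DSLAMs showed the problem
    per_test_failure = 1 / 200   # those DSLAMs failed one test in 200

    overall = dslam_share * per_test_failure
    print(overall)       # 0.00025
    print(1 / overall)   # 4000.0 -> "1 in 4,000"
    ```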

    Having written a book on DSL, …, I was able to estimate that the cost of upgrading the system to virtually eliminate that problem was $2 to $20 per customer – less than half a month’s charge gets you a virtually non-blocking network.

    The details were fact-checked by Bell before I published.

    So no: on wireline networks, only in very particular cases is the right answer to throttle, degrade, differentially charge, or put in a cap below perhaps 150 gig. (Bandwidth isn’t free, but it is cheap enough that Comcast is thinking about a 250 gig cap and NTT just proposed 900 gig a month upstream, unlimited downstream.)

    db
