Response to Barbara van Schewick on Zero Rating, “users must have access to the whole Internet”

State of the Net featured a panel discussion on zero rating on January 25, 2016, where I debated the merits of zero rating with Barbara van Schewick of Stanford Law School's Center for Internet and Society. See the video: Fireside Chat: Zero Rating & the App-Accessed Internet – Can It Be Squared with Principles of Net Neutrality?

I addressed three of the statements that Barbara proffered regarding zero rating.  See Three Myths About Zero Rating and here.

Economist Bronwyn Howell, general manager of the New Zealand Institute for the Study of Competition and Regulation and a faculty member of Victoria Business School, Victoria University of Wellington, New Zealand, watched the panel and had the following to say about van Schewick's comments, particularly the notion that "users must have access to the entire Internet," with the further implication that all access must comport with a one-size-fits-all, treat-all-bits-the-same model.

What occurred to me while watching it is that it is impossible to address the zero rating issue rationally without first acknowledging that internet users are extremely heterogeneous, and that the purchase of a connection is a derived demand based upon users' valuations of the applications they use, not on the connection in its own right.  The argument about having access to the 'whole internet' without any limitations implies that all content should be treated as if it is equally valued by all consumers.  This is blatantly fallacious.

If what is considered is access to applications that are perfectly homogeneous – e.g. cloud storage – then there is a risk that one provider could come to dominate all others, and it would be hard for any provider offering a perfectly homogeneous replica product to enter the market. Only then would 'perfect competition' be the relevant way of thinking about what is occurring.  Perfect competition assumes no firm (or application) has any power to influence the market – invoking anti-zero-rating rules to impose this condition on the content market is chasing a pipe dream that flies in the face of the reality of internet markets.

The vast majority of internet content is not homogeneous – it is almost perfectly heterogeneous!  Even more importantly, content is an 'experience good' – we can't tell how good an app is, or how we value it relative to others, until we've tried it. Different consumers value different content differently.  That's why 50% of users of the 'free' services don't subscribe – in fact, only 5% continue to use them without 'upgrading', because they value the content but can't afford to pay, and 45% don't value the content even when it is free!  This simply confirms that we are not dealing with a perfectly competitive market – content is not truly 'free', because we have to spend scarce time viewing it, and everyone's time is valued differently.  Time spent viewing is in effect a 'search cost'.

The relevant competitive model is monopolistic competition with product variety and customer self-selection.  The starting position for this sort of competition is one of (limited) market power for each variant on offer – at least in respect of those consumers who have already tried and selected their preferred variant(s). The problem here is that consumers must incur search costs (akin to ‘brand preference’) to try something different. 

The 'new applications' here are like a new brand of ice cream.  Customers like their existing one, so they have to be induced to try the new one with a free sample.  Simply putting a new ice cream in the shop at the same price as all the other ice creams is not sufficient to get people to buy it. Hence, out of gazillions of online sites, most people routinely view only 15 – we all stick to our favourite ice cream brand and won't try a new one at the same price, because we don't want to risk paying the same amount for something that turns out to be worse than our existing favourite (i.e. our welfare is reduced). The only way to overcome these established preferences is to pay the search costs – by giving a taste for free.  In this economic context, regulating to try to remove the market power of any particular variant is next to hopeless, as consumers will still have their preferred brands – except there will be fewer of them, because new entrants can't get 'air time'.
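The free-sample logic above can be made concrete with a toy expected-value calculation. All numbers here are hypothetical, chosen only to illustrate the search-cost mechanism, not drawn from any actual market data:

```python
# Hypothetical illustration of the search-cost argument.
# A consumer's current favourite is worth v_known per purchase.
# A new variant is an experience good: with probability p it turns
# out better (v_new_good), otherwise worse (v_new_bad).
# Both cost the same price.

p = 0.5
v_known = 10.0
v_new_good, v_new_bad = 12.0, 6.0
price = 8.0

# Sticking with the favourite: guaranteed surplus.
surplus_stick = v_known - price  # 2.0

# Trying the new variant blind, at the same price: expected surplus.
surplus_try = p * v_new_good + (1 - p) * v_new_bad - price  # 1.0

# At equal prices, trying the unknown variant lowers expected surplus,
# so the rational consumer never samples the entrant.
assert surplus_try < surplus_stick

# With a free sample, the consumer learns her valuation at no cost,
# then buys whichever variant she now knows she prefers.
surplus_informed = p * (v_new_good - price) + (1 - p) * (v_known - price)  # 3.0
assert surplus_informed > surplus_stick
```

Under these assumed numbers, the free taste is what makes trial rational: it converts a negative-expected-value gamble into an informed choice that leaves the consumer better off, which is the role the text assigns to zero rating for new entrants.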

Indeed, the fact that new ISPs have railed against anti-zero-rating rules simply confirms that even here the market is not one of homogeneous providers offering homogeneous products – we all know that, despite LLU and equivalence in Europe, incumbents still have brand-preference advantages over new entrants.

So, in a nutshell, Barbara's anti-innovation arguments pertain to a world that simply does not exist – even in places where there has never been internet access before – because content is not homogeneous.

The only way to deal with this is with competition law – taking each case on its merits, defining the market(s), establishing the extent of market power, evaluating whether it has been exerted, and then assessing whether the harms caused are outweighed by the benefits accrued – just as the FCC has opted to do.  This does not mean that ongoing observation and analysis are unnecessary – indeed, it is prudent to do such analysis to come to grips with how these markets are operating!