Complementary and Contradictory Economies of Scale

The odd thing about economies of scale is that they operate on both the revenue and the cost side, and the two do not have to coincide.
I’ve been digging into some of the literature on network effects, which can drive positive economies of scale on the revenue side – e.g., Facebook is a lot more valuable to you because all your friends are on it.  This is common in current-generation software companies built around networks, and it’s the promise pitched to innumerable venture capital investors.  Many network-based competitive spaces are natural monopolies for this reason: the largest player’s value proposition is naturally stronger than its competitors’.  The rich get richer.
In traditional industries, positive economies of scale are ubiquitous on the cost side. As your brand gets bigger, you can, for example, shift from expensive contract manufacturing to less flexible but lower-cost in-house manufacturing.  For many traditional industries the economies of scale are highly positive on the cost side but next-to-nothing on the revenue side – the value of Tide is unaffected by whether your neighbor uses Tide.  For consumer brands that get big enough, the revenue economies of scale can even turn negative because of market saturation, fashion, and taste.
Businesses with the opposite problem – strongly positive economies of scale on the revenue side and negative economies of scale on the cost side – are a bit of a novelty.  Consider a two-sided marketplace where an app mediates physical activities.  As the business gets bigger, issues like liability (legal and PR) grow commensurately.  And where the profit model depends on externalizing internal expenses – e.g., pushing insurance and maintenance costs off the firm’s books and onto the people actually performing the service – that becomes less practical as the firm grows.
It strikes me that many of the current-generation sharing-economy companies fall into this category of wrong-way-round economies of scale.  When there are network effects to drive growth but difficult physical realities to work around, it could turn out to be easy for a company to outgrow its economics.  For example, if there are positive cost economies of scale for operating a car-dispatch service, why were there no globe-straddling car-service companies beforehand?  The contradictory economies of scale suggest a natural boom-and-bust cycle where operating cash flow plummets inexorably while revenue (and need for operating cash!) explodes.
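To make the dynamic concrete, here is a toy model of a marketplace whose per-ride costs creep up with scale while volume compounds. Every number in it – the take rate, the cost levels, the growth rate – is invented purely for illustration.

```python
# Toy model of wrong-way-round economies of scale: revenue per ride is flat,
# network effects keep volume compounding, but the cost per ride creeps up
# with scale (insurance, liability, compliance). Every number is invented.
def year_results(rides, take_per_ride=2.00, base_cost=1.50, cost_creep=1e-8):
    revenue = rides * take_per_ride
    cost_per_ride = base_cost + cost_creep * rides  # negative cost economies
    operating_cash = revenue - rides * cost_per_ride
    return revenue, operating_cash

rides = 10_000_000
for yr in range(1, 7):
    revenue, cash = year_results(rides)
    print(f"year {yr}: rides={rides:>14,}  revenue=${revenue / 1e6:9.1f}M  "
          f"operating cash=${cash / 1e6:10.1f}M")
    rides *= 3  # network effects: volume keeps compounding
```

Revenue grows geometrically while operating cash flow swings hard negative – the firm outgrows its own economics.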
Also a fun question – on a conceptual (not accounting) level, is paying to acquire a competitor in a marketplace business a capital or marketing expense?

Time After Time – Why Time & Uncertainty Make Legislating Difficult

In my last post on the GOP healthcare debate, I mentioned “time consistency,” and I want to delve a little deeper into that question and the problem facing GOP leadership.  I’d suggest this time-consistency issue is actually a major problem for the GOP in putting together an ambitious policy agenda on more than just healthcare.

Cox & McCubbins coined possibly the driest and most boring name for a theory in the already-dry field of Congress studies – the “procedural cartel.”  This theory of Congress suggests that party coalitions basically act like law firms or investment banks: “senior partners” (Paul Ryan, Mitch McConnell) issue orders to “associates” (the rank and file) with the expectation that loyalty will be rewarded and success shared.  Passing legislation is a classic collective action problem – legislators want outcomes, e.g., Obamacare repeal, but are only willing to cast a risky vote if the party will be on their side.  Parties choose leaders to provide coordination for this and related problems.

This all hinges on trust.  Legislators need to trust that the tough votes they cast today will result in support – political cover, funds for reelection, etc. – from the party machinery tomorrow.  Time consistency is that trust: the confidence that leaders won’t turn around and screw the legislators later.  I think the Republicans face two big time-consistency problems: one is the character of the President; the other is the nature of partisan media, and that one will likely bedevil Democrats as well when they take back Congress.

First and most obviously – the President is clearly untrustworthy.  He’s not an ideological conservative and has a short time horizon, and legislators should have little confidence he will expend political capital on them next November in exchange for a vote they’re taking today.  He may well see more short-term political benefit in turning on the AHCA – and on them.  Even on the most basic risk-assessment level – the man doesn’t pay his contractors. He’s a walking, talking, tweeting time-consistency problem.

The other, more subtle issue is that partisan media now controls a lot of the influence once held by party leaders.  Republicans have no idea whether Breitbart will carry the water for them on a tough vote, and many suspect it won’t.  Partisan media’s incentives are poorly aligned: it sells to an extremist (and small) audience and doesn’t necessarily care about the party’s fortunes every other November.  If the bill is received poorly, or if it collapses, members who took a tough vote may find themselves taking fire from every angle, and it won’t particularly matter if Paul Ryan offers them $1M from the NRCC for reelection.

If Republicans want to pass an agenda that will involve some very tough votes, they need to figure out a way to solve this issue.  Ultimately it comes down to the President – he needs to give some sort of credible signal of commitment to the agenda and to having the party’s best interest in mind.*  What would constitute such a credible signal? That’s a question for another day.


*: The President may not care about passing legislation – I suspect he does not.

Theories of Congress and the Rocky Road Ahead for Repeal-and-Replace

The House is currently debating healthcare reform, and I think it’s worth revisiting the current legislative process from a political science view.  The real fight has not yet been joined – there’s relatively little hope for liberal activists to stop the bill in the House, where intra-conservative discord is the main potential stumbling block.  The action will happen in the Senate, and a few different theories from political science shed light on the difficult path ahead for the Republicans.

  • Home-state concerns drive votes:
    • The framework: Senators vote on parochial concerns for their state.
    • The takeaway: In this light, the bill is in serious trouble.  A wide variety of GOP Senators – ranging from Senator Cotton (AR) to Senator Paul (KY) – represent home states that have benefited a great deal from Obamacare.  A diverse coalition of groups will oppose the bill, and opposition will intensify as various stakeholders (hospitals, etc.) lean on Senators.
    • The path forward: It is difficult to see a way around the sticking point – the Medicaid expansion.  Repealing the Medicaid expansion drives opposition from both moderate and conservative senators, but without the Medicaid expansion repeal the cost of the bill balloons, ruining Speaker Ryan’s plan to pass permanent tax cuts through reconciliation.
  • Politics is unidimensional and ideological: 
    • The framework: Senators vote on a single left-right ideological spectrum.
    • The takeaway: In this light, the bill is virtually doomed – the GOP Senate Caucus has actually moved leftwards since 2012, contrary to popular wisdom.  The pivotal Senator here is probably Dean Heller, already an avowed opponent of Medicaid expansion repeal, and there are indications it could be difficult to round up even 45 Republicans to vote for Medicaid expansion repeal.
    • The path forward: Repeal-and-replace is DOA. Again, repealing the Medicaid expansion is the blocking issue – it’s a fairly far-right position and may even be to the right of the pivot point in the House, depending on whether the GOP moves the repeal deadline up to 2018 as has been discussed. (A toy version of this pivot calculation is sketched just after this list.)
  • Politics is multidimensional and ideological:
    • The framework: Senators vote on a left-right economic spectrum and a second axis that seems to encompass social issues (e.g. abortion).
    • The takeaway: The bill isn’t dead!  A relatively right-wing healthcare bill can be moderated in the second dimension in order to move it into acceptable range for passage, potentially by, say, removing provisions that defund Planned Parenthood.
    • The path forward: Unfortunately for the GOP, there’s probably limited room to address these sorts of issues through the reconciliation process – however, leadership could potentially throw other bones to moderate members in follow-up bills.  The difficulty here is twofold: one is a time-consistency issue, where moderates would have to trust leadership to follow up, and the other is that anything else leadership promises would have to pass through regular order and would be subject to a filibuster. Also, incidentally, the second dimension of politics appears to be collapsing in Congress, so this may all be moot.
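Here is the toy pivot calculation referenced above – a minimal sketch of the unidimensional spatial model. The ideal points, status-quo location, and vote threshold are all invented for illustration; they are not real NOMINATE scores or a real whip count.

```python
# Minimal one-dimensional spatial-voting sketch. The ideal points below are
# invented stand-ins for NOMINATE-style scores (negative = left, positive =
# right) -- they are NOT real senators' scores.
CAUCUS = [0.05, 0.12, 0.20, 0.28, 0.35, 0.42, 0.50, 0.58, 0.66, 0.75]
STATUS_QUO = -0.40   # the existing law, sitting well to the caucus's left
VOTES_NEEDED = 9     # a narrow majority that can afford only one defection

def yes_votes(ideal_points, bill, status_quo):
    """Each member votes yes iff the bill is closer to her ideal point
    than the status quo is."""
    return sum(abs(x - bill) < abs(x - status_quo) for x in ideal_points)

for bill in (0.45, 0.55, 0.70):
    votes = yes_votes(CAUCUS, bill, STATUS_QUO)
    verdict = "passes" if votes >= VOTES_NEEDED else "fails"
    print(f"bill at {bill:+.2f}: {votes} yes votes -> {verdict}")
```

The further right the bill drifts, the more moderates it sheds; once it crosses the pivotal member’s indifference point, it dies.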

In short – most schools of analysis suggest that repeal-and-replace is in trouble. The legislative strategy GOP leaders have chosen is extremely difficult, especially since the rest of their agenda depends on quick passage of this repeal-and-replace bill.  If I were a Republican, I would be concerned that Speaker Ryan does not appear to have considered the complexity of this endeavor – if timely passage is a necessity, he should have come to the table with a bill already acceptable to the pivotal members in both the House and Senate.  This oversight does not bode well for the rest of the party’s ambitious legislative agenda.

For Digital Analytics, MDEs > Power

One of the most important concepts in designing and running experiments is statistical power – the probability that a test will reject the null hypothesis at a desired significance level, given that an effect of a specified size really exists.  In more colloquial language, statistical power is “how likely am I to get a statistically significant finding, provided that the effect is real?”  That colloquial language is not particularly colloquial, which is a shame, because power is an incredibly useful tool for an experimenter to wield.  It lets you figure out ahead of time which tests are worth doing and which are not, as well as calculate the minimum sample size required to run a test.  The problem is that it’s very difficult to explain the concept to people without a solid statistics background, and a good analytics practitioner is an interpreter and a salesperson as much as an analyst.

I’ve realized that a much better way to frame the importance of sample size is the minimum detectable effect, or MDE.  The minimum detectable effect is, colloquially, “given a sample of size n, how large would an effect have to be for us to detect it reliably?”  There are a few virtues that make this particularly useful in the context of a Web business:

  • It doesn’t rely on assumptions about effect size: Statistical power takes effect size as an input, but we don’t know the effect of an experiment until we try it.  Instead, we work from what we already have – the natural rate of variation in whatever we’re measuring, and the sample size. That points to another strength:
  • Sample size is an input, so you can work with what you have: In many business contexts the sample is fixed rather than something we can influence – the number of people visiting our homepage is out of our control, and we can’t just increase the size of our email list.  The MDE takes the sample as it is and asks, “Given our existing constraints, how large would an effect have to be?”, which gets to the biggest strength:
  • It is expressed in terms a practitioner can understand: We take whatever sample we have, plug in the variance, and get out a result that’s comprehensible.  “Well, we’ll probably only see an effect if the new creative bumps open rates by 20%” is a sentence that a marketer or product manager can understand, and they can decide on their own whether that’s plausible.
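To make this concrete, here is a minimal sketch of an MDE calculation for an email A/B test. It uses the standard normal approximation for a two-sample proportion test, and the list size and 20% baseline open rate are made-up numbers.

```python
from math import sqrt
from statistics import NormalDist

def mde_two_proportions(n_per_arm, baseline_rate, alpha=0.05, power=0.80):
    """Smallest absolute lift in an open/conversion rate that an A/B test
    with n_per_arm observations in each arm could detect reliably.

    Normal approximation for a two-sample proportion test; the default
    alpha and power are the conventional 0.05 and 0.80.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = z.inv_cdf(power)          # quantile for the desired power
    # Pooled-variance approximation around the baseline rate.
    se = sqrt(2 * baseline_rate * (1 - baseline_rate) / n_per_arm)
    return (z_alpha + z_power) * se

# Example: a 50,000-address list split in half, 20% baseline open rate.
lift = mde_two_proportions(n_per_arm=25_000, baseline_rate=0.20)
print(f"Detectable lift: {lift:.2%} absolute ({lift / 0.20:.1%} relative)")
# Roughly a 1 percentage-point absolute lift (~5% relative), so a test
# promising only a 2-3% relative improvement probably isn't worth running.
```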

Analytics is ultimately a service for the rest of the organization, and designing good experiments is one of the most important things we can do.  Making that message heard is a crucial part of the task, and I think MDEs should occupy a bigger part of the communication toolbox.

The unsexy threat to government data

I don’t think it’s quite appreciated how much economic and civic life in the United States is underpinned by government data – from the Census to the Weather Service to the Bureau of Labor Statistics, the government produces a ton of data that is used on an everyday basis.  Businesses use ACS data to locate stores and markets, banks rely on the unemployment rate to make forecasts, courts rely on Census data to look for disenfranchisement, and so on.  This is an area that scares me.

The President is relatively indifferent to the needs of wonks and bureaucrats, and as a result much of this data will be under threat.  It is highly unlikely, given this indifference, that he will order meddling to juice the unemployment rate – but political appointees will be under a lot of pressure to produce good results, and this means giving him good news.  A lot of this data can be degraded through spontaneous efforts in the middle management layer of the government without a real “plan” from the top.

Researchers should keep an eye on official government data products – particularly looking out for markers of fraud such as violations of Benford’s Law or data that looks suspiciously normal.  It would be fairly easy to develop a battery of tests to monitor official data outputs for evidence of manipulation – this could be a good project for enterprising nerds out there.
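As a sketch of what one such test might look like – the sample figures below are invented, and a real battery would need checks matched to each series, since Benford’s Law only makes sense for data spanning several orders of magnitude – here is a simple leading-digit monitor:

```python
import math
from collections import Counter

# Benford's Law: the expected share of leading digit d is log10(1 + 1/d).
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(values):
    """Chi-square statistic comparing observed leading-digit frequencies
    against the Benford distribution (zeros are skipped)."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum(
        (counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items()
    )

# Invented example figures; a real monitor would pull published series
# (CES, QCEW, ACS tables, etc.) on a schedule and track the statistic.
sample = [1832, 2921, 1104, 7450, 3310, 1205, 9932, 1570, 2440, 1998,
          4105, 1230, 8700, 1450, 3055, 2710, 1985, 6120, 1110, 2300]
stat = benford_chi_square(sample)
print(f"chi-square = {stat:.1f} (critical value at p = 0.05, 8 df: 15.51)")
```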

The sad fact is that if the government is caught meddling with the data, a lot of it is most likely ruined for use forever.  Institutional trust is easy to destroy but difficult to rebuild, and data integrity will be warred over by rival groups of nerds wielding statistical instruments and arguments impossible for everyday citizens to adjudicate.  Once the government’s data integrity is called into question, the mere fact of the back-and-forth dispute – even if it is eventually fully resolved – will serve as an argument in itself.  We’ll likely have fallen into a low-trust equilibrium, which is more stable than a high-trust one.

The knock-on effects aren’t good.

The Terror of Strategic Incoherence

At the hearings for Rex Tillerson and James Mattis last week, the incoming Secretaries of State and Defense had some things to say.  Most interestingly, both endorsed a hard line on Russian adventurism diametrically opposed to President Trump’s vocal enthusiasm for Russia and Putin.  Does this mean that they will be edged aside in favor of Trump’s proposed US-Russia alignment, or conversely that they will be the “adults in the room” setting real policy?  I certainly don’t know, but the disconnect is frightening in itself.

The late Thomas Schelling conceived of war as a process of diplomatic bargaining.  By committing troops and enduring suffering in a conflict, nations can assess by trial just how committed their adversaries are to their stated aims.  If country A pushes country B for a concession B does not want to make, A can escalate from polite negotiation all the way up to all-out war as a way of communicating to B just how important the concession is.  A would only escalate up to the point where it thinks B will back down.

Conflict happens and escalates when countries misjudge each other.  If an aggressor badly underestimates another country’s commitment, that can lead to terrible conflicts.  A posture of strategic ambiguity – e.g., the Trump Administration’s mix of pro- and anti-Putin views – exacerbates this concern.  Foreign countries can choose to see whatever they want in the noise machine emanating from the White House.  Some adversaries might wrongly believe they can push further than they really can – and since no one knows what our real red lines are, it’s easy to imagine conflicts spinning out of control.

Why your city is bankrupt

Would you believe a story about municipal finance can be deeply disturbing? This one is – it’s the story of why Lafayette, Louisiana has no money and yours doesn’t either.  In short: the built infrastructure is so extensive that maintenance and upkeep have outstripped the tax base of the city.  The key passage (emphasis added):

All of the programs and incentives put in place by the federal and state governments to induce higher levels of growth by building more infrastructure has made the city of Lafayette functionally insolvent. Lafayette has collectively made more promises than it can keep and it’s not even close. If they operated on accrual accounting — where you account for your long term liabilities — instead of a cash basis — where you don’t — they would have been bankrupt decades ago. This is a pattern we see in every city we’ve examined. It is a byproduct of the American pattern of development we adopted everywhere after World War II.

As the authors point out, this is a merely human weakness rooted in temporal discounting – people are bad at accounting for the present value of future cash flows, whether the flows are income or expenses.  The piece also does a great job of illustrating a key principle of institutional design – rules should be designed to beat human failings.
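As a toy illustration of that cash-versus-accrual gap – all the dollar figures, the lifespan, and the discount rate below are invented, not taken from the piece – consider a single mile of new road:

```python
# Toy cash-vs-accrual comparison for one mile of new road. All figures
# (tax yield, rebuild cost, lifespan, discount rate) are invented.
def road_segment(annual_tax=10_000, rebuild_cost=800_000, life_years=25,
                 discount_rate=0.03):
    # Cash basis: for life_years the segment looks like pure surplus.
    cash_view = annual_tax * life_years
    # Accrual basis: recognize the replacement liability as it accrues.
    annual_liability = rebuild_cost / life_years
    accrual_view = (annual_tax - annual_liability) * life_years
    # Present value of the eventual rebuild, which the cash view ignores.
    pv_rebuild = rebuild_cost / (1 + discount_rate) ** life_years
    return cash_view, accrual_view, pv_rebuild

cash, accrual, pv = road_segment()
print(f"cash-basis 'surplus' over 25 years:   ${cash:>10,.0f}")
print(f"accrual-basis result over 25 years:   ${accrual:>10,.0f}")
print(f"present value of the year-25 rebuild: ${pv:>10,.0f}")
```

On a cash basis the segment looks like a quarter-million-dollar surplus; recognize the replacement liability as it accrues and it becomes a half-million-dollar hole.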

The idea that principals (e.g., local governments) are poorly incentivized when setting their own decision rules is one of the better arguments against American-style federalism.  Governance arguments about federalism are rare, but they’re much more convincing than the traditionalist or consequentialist ones.