
Code is the New Capital

Albert Wenger points out that in the 21st century, “capital” is increasingly information.  I’d agree with that, and I’d look specifically at code.  What is the value of Facebook?  Its real estate and server farms are real enough, but they form a trivial portion of its market capitalization.  Its user base is real enough, but it doesn’t have users without a product (and vice versa).  Its core productive asset is its codebase.  That is the productive capital Facebook has.

Code is quite different from traditional capital, but it has some similarities.  The biggest similarity, believe it or not, is depreciation.  While there’s no wear-and-tear on a codebase, anyone who’s worked at a software company will tell you that the value of a codebase decays over time – databases exceed their scalability limits, incompatibilities with newer software arise, and the accumulated volume of quick “fixes” eventually makes finding errors or building new functionality nearly impossible.  Maintenance is another commonality, for the same reasons.

One neat aspect of “code-as-capital” is infinite scalability, which really is different.  A given piece of computer code can be replicated easily at zero marginal cost.  This doesn’t matter so much if you’re Facebook or another web app, because it doesn’t make sense to compete against Facebook by directly ripping off their codebase.  However, it matters a lot if you’re either buying or selling software!  As a producer, once your capital is in place, you can produce more at zero marginal cost.  As a consumer, it’s fabulous – because prices tend to drop toward the marginal cost of production, software is getting cheaper.  If you’ve ever thought even casually about launching a business, you know this fact well – using off-the-shelf software and Amazon Web Services, you can acquire a full suite of business capabilities, including accounting and logistics software, for basically nothing.

One under-appreciated aspect of “information as capital” is that, thanks to the zero marginal cost of production, it massively increases the capital stock available to everybody.  As an individual, you can easily acquire a tremendous amount of productive capital that people a hundred years ago had to buy at great expense.  Actual machines for mass production aren’t getting any cheaper, but the costs of mass production have always involved much more than just the expense of buying machines.  And the cost of producing goods (other than heavy machinery) is dropping every day thanks to the falling price of informational capital.  It’s not immediately clear to me whether this will be a force for greater or lesser inequality.  But it’s worth noting that informational capital, which has some very big differences from traditional capital, is only becoming more important in the 21st century.

Mechanical Turk Is Not the Future of Human Labor

This isn’t the first time I’ve seen this critique, and while I am sympathetic to the impulse, I think it’s extremely off-base:

As Miriam Cherry, one of the few legal scholars focusing on labor and employment law in the virtual world, has explained: “These technologies are not enabling people to meet their potential; they’re instead exploiting people.” Or, as CrowdFlower’s Biewald told an audience of young tech types in 2010, in a moment of unchecked bluntness: “Before the Internet, it would be really difficult to find someone, sit them down for ten minutes and get them to work for you, and then fire them after those ten minutes. But with technology, you can actually find them, pay them the tiny amount of money, and then get rid of them when you don’t need them anymore.”

Outside of direct personal services like nursing or exploitative rent-seeking jobs in finance, this sort of small-scale machine assistance job is where the labor market will increasingly trend over the next couple of decades.

And it’s a completely unregulated mess. Just another sign of a broken, 19th-century economic system utterly inappropriate for a 21st-century world of globalization, mechanization, flattening and deskilling.

This is based on a misunderstanding of what these services are for.  Mechanical Turk and the like aren’t directly competing with more highly-paid labor – they’re competing with more expensive capital.  In this day and age, the dominant solution is definitely not to have highly-paid professionals doing routine and repetitive tasks.  The dominant solution is to have computers do them.  Mechanical Turk fills in the niche of “things that would be nice to automate but haven’t been automated yet”.

One major use for MT is actually to provide the means to get rid of it.  Firms track the inputs and outputs and use them as a “training set” for machine learning – algorithms that, by design, don’t directly accomplish the task at hand but simply match like to like.  So instead of designing an incredibly smart software program to mark content as “NSFW” (not safe for work), you take a huge set of content that has already been flagged as NSFW and feed it to a dumb algorithm that simply figures out what that set has in common.
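To make that concrete, here’s a minimal sketch in Python of how a pile of human-labeled examples – exactly the kind of output Mechanical Turk produces – becomes the training set for a “dumb” classifier.  It uses scikit-learn, and the texts and labels are invented purely for illustration.

```python
# Minimal sketch: human-labeled examples (e.g., from Mechanical Turk workers)
# become the training set for a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels produced by human workers: 1 = NSFW, 0 = safe.
texts = [
    "totally explicit content here",
    "quarterly sales report attached",
    "graphic adult material",
    "meeting notes from Tuesday",
]
labels = [1, 0, 1, 0]

# The model doesn't "understand" the task; it just learns what the
# flagged examples have in common (word frequencies, in this case).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Once trained, the software can flag new content without a human in the loop.
print(model.predict(["another explicit post", "agenda for the budget meeting"]))
```

Once enough labeled examples pile up, the humans who produced them are no longer needed for that task – which is exactly the point.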

Mechanical Turk is not the future of work – by nature, it is marginal to the larger economy.  Anything Mechanical Turk is used for heavily ends up generating exactly the data needed to move to a purely software-based solution.

Marx on Enterprise Software

This is just a guess, but most Silicon Valley entrepreneurs are probably not particularly well-versed in their Marx.  Marx may not seem relevant.  But Marx wasn’t a socialist first and foremost (well, okay, maybe) – he was a student of capitalism.  And Marx has one theory that stuck with me – the horrifically phrased “tendency for the rate of profit to fall”, or the much better “declining rate of profit”.  In short, profits are vulnerable to competition – competitors attempt to produce more efficiently, investing in efficiency and (temporarily) increasing their own profits, but in so doing cutting into the total amount of profit available in a given sphere.  Then their competitors respond, and the total amount of profit available goes inexorably downwards.

Investments in enterprise software seem to fit this description.  It’s a fixed capital investment (software) that improves the capital efficiency of existing assets.  Small changes in, say, shopper conversion at a retailer can drive huge improvements to the bottom line.  Costs at a retailer are mostly fixed, so incremental margin often goes straight to the bottom line.  I’ve run the numbers many times for our clients – with certain cost structures, a 3-5% increase in marginal revenue can drive a 20-50% increase in total profits.  But that’s for a single retailer, and that’s the issue.  Really good software can’t expand the total amount of retail spending in the United States; it can only steal market share.  And if your competitors invest more than you, you’re pretty hosed.  This has a name, by the way – a Red Queen’s Race.  Marx would project that enterprise software would eat up a greater and greater share of retailer profit.
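To see how that leverage works, here’s a toy back-of-the-envelope calculation in Python.  The cost structure is made up to illustrate the mechanism, not drawn from any client.

```python
# Illustrative operating-leverage arithmetic (numbers are invented, not client data).
revenue      = 100.0   # baseline revenue
fixed_costs  = 90.0    # rent, labor, overhead - largely unchanged by a small sales bump
variable_pct = 0.05    # variable cost per incremental dollar of revenue

baseline_profit = revenue - fixed_costs                       # 10.0
lift            = 0.04                                        # a 4% revenue increase
new_revenue     = revenue * (1 + lift)
new_profit      = new_revenue - fixed_costs - (revenue * lift * variable_pct)

print(f"Revenue up {lift:.0%}, profit up {new_profit / baseline_profit - 1:.0%}")
# -> Revenue up 4%, profit up 38%
```

Because almost all of the incremental revenue drops through to profit, a single-digit revenue lift translates into a double-digit profit lift – for the one retailer that captured it.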

I was reminded of this by reading famous business-school professor Clayton Christensen’s interview in Wired today.  He’s clearly stumbled upon the same insight, which is that within an industry the natural tendency (absent clear barriers to entry) is for the rate of profit to fall.  He approaches it from a different perspective – “disruptive innovation”, the term we all love in technology, wherein a fundamentally new approach solves the same problem.  For example, Redbox killing Blockbuster was disruptive – it devised a fundamentally new way to get DVDs into customers’ hands, rather than competing with Blockbuster on the metrics of the established rental industry.

A Marxist would refer to this as a large and permanent gain in capital efficiency, a fancy term for “doing more with less”.  Mature industries can’t compete with disruptors – they start with too high an existing capital burden.  Enterprise software helps businesses compete with others sharing the same basic business model.  But against a competitor starting with vastly better capital efficiency – think Amazon vs. a mom-and-pop shop – there’s not much to be done.  The existing capital burdens on the legacy competitor are too high.

Another way to view this is how Marc Andreessen famously phrased it – “Software is eating the world.”  Intelligence in the form of code can render a lot of capital useless.  Marx, Christensen, and Andreessen just have different takes on the same phenomenon.

What’s the point?  Well, there’s nothing really prescriptive to be learned.  Analytics technology can be very effective at what it does, which is improving your fixed capital efficiency through better shopper conversion, better inventory management, supply chain optimization, and so on.  It can improve your business a lot, and it’s not your problem if in so doing you’re cutting into the profitability of the industry as a whole.  That’s kind of the defining characteristic of a Red Queen’s Race – you have to either participate or find a way to play a new game.  Just a reminder that there’s nothing new under the sun.

Except maybe there is – the defining characteristic of the Marxist/Christensenian model is capital efficiency.  Code has some maintenance costs (so do servers), but it doesn’t depreciate the way fixed-asset capital does.  Perhaps the disruption of capital by software really only happens once.  And then we really would be seeing something new under the sun.

Of Panzers and Excel

The ongoing debate over the sequester has gotten me thinking of software design.  Some in the United States military have argued that the next war we need to prepare for is a small war in the Middle East, others a massive naval conflict with China.  Failing to prepare for either would be disastrous, and so our military budget must forever expand as we prepare for both.  Given the perceived threat environment, I can understand why the Pentagon is yelping so much at some relatively mild trims to their planned budgets.  Yes, the sequester cuts are poorly planned and will incur financial/operational costs in terms of sudden contract disruptions – but the real reason the Pentagon is complaining is that they want this money in order to prepare for both these threat environments.  Plus Iran, and rapid humanitarian deployments, and pirates off Somalia, and God knows what else.

Here’s a provocative counterargument for the Pentagon – stop trying to plan for the next war, because we know you’ll get it wrong.  That’s okay – you’re only human.  Everyone always gets the plan for the next war wrong.  That doesn’t mean you have to stop running simulations and wargames – though sorry, our enemies will always come up with something we didn’t think of.  Again – they always do.  But stop trying to plan for any particular war, because preparing for everything is impossible.  Instead of preparing for everything, prepare for anything.

Preparing for anything means prioritizing a flexible infrastructure for response.  The US’s greatest military challenges have been the Civil War and World War II, and we triumphed in both because we had the material and human infrastructure to develop appropriate responses to the threat environment and scale those responses really big, really quickly.  It’s hard to know exactly what that means in the modern context – that’s the real utility of running those wargames, in the hopes that across all the scenarios some common patterns start to emerge.  We’ll get it wrong, as we always do.  But by prioritizing flexibility over optimization we’re less likely to be disastrously wrong.

A parable from World War II – the T-34 versus the Tiger tank.  The Tiger was a massive piece of wonderful German engineering: the most armor, the most powerful engine, the most destructive main gun.  The T-34 was much smaller, and in a confrontation the Tiger would win every time.  Not even 1-on-1 – there are documented engagements where a single Tiger would wipe out 20+ T-34s without taking a scratch.  Within the engagement, the Tiger was the unquestioned superior solution.  Yet the Tiger never made a dent in the course of the larger war, and the T-34 is regarded as the highest achievement in tank design in the history of warfare.  Why?

A T-34

The T-34 was spectacularly well-suited to the actual problem at hand, whereas the Tiger was incredibly poorly suited.  The powerful engine of the Tiger was precision-made by hand, limiting production speed and capacity enormously.  It was also unreliable – and since there were so few Tigers and every part was handmade, spare parts were pretty darn scarce too.  The T-34’s engine wasn’t built for raw power; it was built for reliability.  Oh, and it was “precision-engineered” too – Soviet engineers worked tirelessly to reduce the number of parts and decrease the precision of machining required.  In the tough conditions of a Russian winter, guess which one was the dominant approach?  Speaking of which, there wasn’t much in the way of roads out there.  Well, having the best damn tank in the world doesn’t do you any good when it’s so heavy it sinks instantly in soft ground and breaks 90% of the bridges it crosses.  The Germans built for the dominant solution, and the Russians built for the one useful in the most contexts.

A metaphor, stuck in some mud.

I’m speaking in software terms here because they’re extremely relevant.  We’ve all encountered the over-engineered solution many times.  For any given mathematical task, there are plenty of specialized tools that are the dominant solution for the exact problem you want to tackle right now.  But if you asked everyone who uses numbers in their work for the one program they couldn’t live without, the answer would be Microsoft Excel.  Imagine a map of America where Excel sits in the middle of Kansas and your desired use case may sit anywhere on the map.  Other solutions may be closer to your particular use case, but Excel has the shortest average distance to the destination.
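For the geometrically inclined, here’s a toy Python simulation of that argument.  The tool positions and use cases are invented coordinates, nothing more – but they show why the tool sitting “in the middle of Kansas” wins on average distance even though it’s rarely the closest to any single use case.

```python
# Toy illustration of the "shortest average distance" argument: a central,
# general-purpose tool beats specialists when the use case is uncertain.
import random

random.seed(0)
tools = {
    "general-purpose": (0.5, 0.5),   # sits "in the middle of Kansas"
    "specialist A":    (0.1, 0.9),
    "specialist B":    (0.9, 0.1),
}

# Use cases scattered uniformly across the "map".
use_cases = [(random.random(), random.random()) for _ in range(10_000)]

def avg_distance(pos):
    """Average straight-line distance from a tool to every possible use case."""
    return sum(((pos[0] - x) ** 2 + (pos[1] - y) ** 2) ** 0.5
               for x, y in use_cases) / len(use_cases)

for name, pos in tools.items():
    print(f"{name}: average distance {avg_distance(pos):.3f}")
# The central tool has the lowest average distance across all use cases,
# even though a specialist is closer for the use cases near its own corner.
```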

This is just a long-ass way of saying that when the military tries to simultaneously prepare for a war in China with fancy stealth fighters and a war in the Middle East with COIN tactics, it basically guarantees building itself a Tiger.  A procurement budget with unlimited money looks for the dominant solution to every single use case, building the ultimate versatile toolbox out of a million purpose-built tools.  To mix terms from two worlds, we don’t get to define our own use case – the enemy does.  If we assume we’ll be wrong about the use cases we face, then the “build many Tigers” approach doesn’t guarantee failure, but it does guarantee a lot of wasted money and a strong possibility of failure.

Not all is lost, however – there is one way to guarantee we’ll be less wrong, which is to lessen the universe of potential use cases.  The relevant area here is political, not technological – the more America tries to do all things in all places, the larger the universe of things we can screw up in our preparation and the more wildly wrong we can be.