According to internal eBay testing, no. Randomized controlled trials of Google search engine marketing showed little to no effect, and it seems like a large portion of eBay’s advertising budget is simply wasted: it mostly goes to showing ads to people who would have ended up at eBay anyway. As Tim Fernholz notes, there are a few gigantic caveats:
- Your mileage may vary. eBay is a gigantic retailer, and if somebody has not heard of eBay in the year 2014, they are not a potential eBay customer – they’re also probably not Googling things. This isn’t true if you’re a nobody, where effective search-engine marketing is often the single best way to spread awareness of the product.
- Long-term effects might be important. The tests show only immediate results – whereas in the long term, cutting advertising might hurt sales a great deal. But running even very short-term RCTs at a company like eBay is difficult enough; it’s unlikely they could sustain one for a year.
I would add a further point: eBay’s advertising makes competing with eBay more costly. Partly there is just the opportunity cost – your customers see ads that aren’t yours. More importantly, not advertising helps your competitors’ bottom lines. Search engine ads are allocated via a competitive auction, with firms bidding against one another to place advertisements. Every bid that eBay doesn’t place makes the marketplace less competitive and allows their competitors to place ads more cheaply, which in turn makes those competitors’ advertising more cost-effective. In many areas, for example online auctions, eBay is such a huge player that the spot prices for ads would probably dive a great deal, juicing their competitors’ return on investment. In the long term, that’s a serious competitive threat, and one it is entirely within eBay’s power to avoid. Even if the return on advertising is low or negative, it might still make sense for eBay to spend billions on search engine marketing simply because they are the biggest and can best afford the cost.
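The auction mechanics can be sketched in a few lines. This is a minimal second-price auction model – the mechanism search ad marketplaces are built on – and all the bidder names and bid values below are hypothetical:

```python
# Minimal second-price auction: the highest bidder wins the slot but pays
# the second-highest bid. All bidders and bid values are made up.

def second_price(bids):
    """Return (winning bid, price the winner actually pays)."""
    top_two = sorted(bids, reverse=True)[:2]
    return top_two[0], top_two[1]

bids_with_ebay = [2.50, 1.80, 1.20]     # eBay bids $2.50, two rivals below it
bids_without_ebay = [1.80, 1.20]        # eBay exits the auction

_, price_with = second_price(bids_with_ebay)
_, price_without = second_price(bids_without_ebay)

print(price_with)     # 1.8 – with eBay present, the runner-up's bid sets the price
print(price_without)  # 1.2 – with eBay gone, the rival wins and pays less
```

The point of the sketch: when the biggest bidder exits, the clearing price for everyone remaining drops, which is exactly the subsidy to competitors described above.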
Search engine marketing is an asymmetric weapon in a number of ways. It is most useful to the new and weak, and can drive growth very quickly. But when competing with an incumbent in a large category, it may well be too expensive. This suggests that new consumer-facing startups competing with digital-native incumbents (e.g., eBay) will face systematically high marketing costs that require massive amounts of capital. Interestingly, despite this logic and some high-profile news to the contrary, there does not appear to be a long-term upward trend in the size of venture capital funding rounds. This might be an issue of incomparability – for example, perhaps in recent years staff-heavy enterprise startups have been supplanted by thinly-staffed consumer startups that plug a greater share of their money into advertising. It’s impossible to say with the publicly available data.
If you are trying to strategically decide what kind of company to start or what market to enter, the takeaway seems clear – the idea of easily scaling up to competitive size with an established incumbent through SEM is probably an illusion. You will face systematically higher costs than you expect, and will need to deploy more of your capital than you think to advertising instead of staffing and product. As for Google, it doesn’t seem like they should be that concerned. The logic of the situation clearly suggests that even if advertising doesn’t work, the money should keep flowing in for the foreseeable future, either from established firms or heavily leveraged VC-backed startups but mostly both.
The startup world: an extremely elaborate mechanism to redistribute teachers’ retirement money to Google.
Today the Washington Post ran a story that should (but won’t) finally make government spying a household issue. Under the name PRISM, the NSA has had a direct line into the servers of leading internet companies – Google, Facebook, Skype, and others. For years, they have been able to tap into virtually all the information that these companies have collected about people, using cross-connections and logins to track people across the entire internet. The Post is somewhat unclear on whether the actual content is being collected, or metadata – for example, an email’s timestamp and destination is metadata, whereas the actual subject line and text are the content itself.
This is not only unconstitutional, but very obviously and blatantly unconstitutional. The Fourth Amendment to the US Constitution reads in full:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
Emails and Facebook data very clearly count as “papers and effects”, reading people’s data without their consent is obviously a “search”, and pulling indiscriminately from all web traffic is an extremely unreasonable search. Somehow I doubt that the NSA got warrants either. Their justification is that certain statistical signifiers are used to indicate at least 51% certainty that a target isn’t American – though of course even when they’re spying on foreigners they end up pulling tons of data on Americans as well (e.g., emails sent from Americans to the targets).
To state the obvious: this is illegal behavior from the NSA and horrifyingly shameful behavior from Silicon Valley. With all their self-righteous talk of privacy and user protection, this is craven and disgusting behavior from companies that aspire to be trusted partners for all Americans. As for the NSA, those responsible should be fired and preferably jailed.
On the bright side, it’s kind of funny that all the conspiracy theories about the NSA have turned out to be correct. For many years, kooky nuts have insisted that the NSA has been watching every electronic communication in America. The theories generally focused on the ECHELON system (the NSA sure seems to be fond of all-caps names, incidentally), but the real program turned out to be called PRISM. Responsible adults generally responded by pointing out that such a vast conspiracy would be impossible to keep secret, and furthermore would be so obviously illegal that the NSA’s lawyers would steer clear. Well, the responsible adults were wrong and the kooks were right.
Amazon is taking on Dropbox in an aggressive way, with a cloud-based storage service at about half the price. Dropbox has survived Apple’s iCloud, but this is much more in Amazon’s wheelhouse than iCloud was in Apple’s; Amazon has revolutionized cloud-based Internet-delivered infrastructure services with Amazon Web Services (AWS), and it can definitely offer this service for a substantially lower internal cost than Dropbox. However, this is a consumer web product and Dropbox just works, insanely well – we’ll see if that can be enough to beat Amazon, but I personally have no idea. Interesting times ahead for Dropbox.
The big irony here: Dropbox runs on Amazon Web Services. It uses Amazon’s S3 storage cloud to do the actual storing of all user data, rather than having a dedicated data center handling all this stuff. This isn’t unusual these days – in fact, for new cloud service offerings launching these days, especially computation/data-heavy ones, running on AWS is the default. It can scale up or down instantly, it doesn’t require buying real estate or servers, and you pay only for what you actually need. It’s a lot cheaper to use AWS rather than build out a data center for an enterprise-scale application, and a lot easier to use it for a new product than have to worry about setting up your own server infrastructure and scaling it with growth.
Scott Weiss succinctly explains why AWS is the easiest and cheapest option for outsourcing infrastructure:
The new data center designs use only commodity “vanity free” components procured directly from the original design manufacturers (ODM) — the current incumbent’s suppliers. For easy serviceability, components are velcro’d together versus mounted in a box. All bells and whistles are stripped off and the hardware is purpose built for a specific application and therefore carefully tuned. As compute utilization rates skyrocket from virtualization and parallel processing, the CPUs are running harder and hotter and therefore the new expense bottleneck is all about power and cooling.
Locating in cold climates and next to super-cheap hydro power has become de rigueur. Power distribution, cooling and building layouts have been redesigned from the ground-up to maximize mechanical performance and electrical efficiency of the datacenter. And unfortunately for Intel, the relentless march of Moore’s Law no longer affords them differentiation, as customer needs have shifted from performance to power efficiency, an area where they lag rival ARM processors.
Hence you have the weird spectacle of Dropbox and Amazon competing on a consumer service, while each attempts to make more efficient use of Amazon’s own backend data center engineering.
Amazon’s dominance of backend data and processing is emerging as a serious anti-competitive concern. This may seem dull and prosaic in the flashy world of cloud computing, but Amazon has pretty serious market power over what is increasingly becoming an essential part of technological infrastructure. If Amazon continues to expand in both consumer web services and backend web services, it will be walking a pretty precarious tightrope. It’s a position of enormous market power, and those positions are usually abused – unless Amazon is a great and altruistic exception, I think we can expect to see greater and greater anti-competitive action out of Seattle.
In an awesome but perhaps over-mathematized post by “Peter C”, the answer is simple: employees who learn too much become irreplaceable. A standard piece of career advice is to become irreplaceable at work for exactly this reason: the harder you are to replace, the more of the enterprise’s economic surplus you can capture. Bosses prefer to be managing people in McJobs – jobs where everything can be learned very quickly – because that means everyone can be replaced. It’s a good model and makes a lot of sense. However, I think it does a poor job of explaining how employers actually behave toward the people with extremely long learning curves.
Namely: the best way to create McJobs for most of your employees is to have a small number of extremely highly-skilled workers operating with relatively little supervision. You absolutely need the people who automate the tasks, who design the processes that deskill jobs effectively, and so on. Peter C’s model treats all jobs equivalently, and suggests huge rewards for getting rid of people whose jobs take a long time to master.
Working in a small software company, I can go ahead and tell you that this model doesn’t accurately reflect the desired staffing mix. It probably does reflect the desired staffing ratio for back-office functions like Finance, but there’s huge demand for genuinely irreplaceable people in the functions where success is very unevenly distributed, like Sales and Engineering. Managers don’t want replaceable talent in those areas, because replaceable talent in Sales and Engineering has negative expected value. The key off-point assumption here is that all employees are actually replaceable for the right price. The market for skilled labor, especially in technological sectors, doesn’t just have “transition costs” – it is sharply discontinuous.
If you want a skilled data scientist with experience in predictive modeling, analytics engineering and pre-sales, fluent in Python/C++/R and Hadoop engineering, most companies cannot find that person at any price. Google could. A hot new Sequoia-backed startup could. But basically any company would love to have them, an insight backed up both by common experience and the well-known “10x rule” – a top programmer is more productive than 10 mediocre ones, and a bad programmer has negative productivity. So, in short – Peter C’s model is interesting and makes sense, but the more specialized the skillset involved, the more illiquidity becomes the defining factor of the employment market and the less relevant the model is.
Yesterday it was leaked that Google plans to release a same-day delivery competitor to Amazon Prime. Even better, they’re going to undercut it on subscription price – $69 buys you all the delivery you can eat. The main operational difference is that Google won’t be provisioning the goods: Google will be the payments and logistics platform, and local/chain merchants will actually supply the goods. This is pretty neat – that way Google doesn’t have to invest an insane amount of money in troublesome things like “holding inventory”. Plus, once Google combines “Google Shopping Express” with the Google Car, it will be amazing – if I were Google, I’d be perfectly willing to burn cash on this for a few years to establish a market presence in order to prepare for the amazingness that will be Google Car delivery.
The aggressive undercutting of Amazon Prime smells like a price war, which is great. Normally it’s only great for consumers, but in this case it doesn’t seem like any of the main parties involved will suffer too badly. Google has $48B in cash-equivalents just, you know, sitting around. Amazon doesn’t, but has never really been in striking range of solid profitability and investors don’t seem to care.
As for merchants, this will give all of them the ability to have a meaningful ecommerce channel where that just wasn’t possible before. Some goods will take margin hits, but this allows them to have a dual-tier pricing structure. They’ve always been competing with Amazon on price, though many of them don’t see it as such. With Google Shopping Express they can sell retail-price goods in the store and Amazon-competitive goods online. And by outsourcing the logistics to Google, retailers might actually be able to beat Amazon on price.
Pricing power comes down to volume. In a pure price-competition environment, businesses that spend more on their goods have more power to move prices. There’s a ton more that goes into “Cost of Revenue” (distribution, etc.), but in retail, goods are the biggest part by a long shot. Amazon’s cost of revenue in 2012 was $46B. Wal-Mart’s was $352B. Target’s was $51B. Walgreens’ was $51B. Of course, it hasn’t been a pure price-competition environment, since none of these retailers have been able to put together an ecommerce operation that rivals Amazon. But if somebody else does, it’s far from clear that Amazon will win the resulting price war – Amazon’s pricing power pales next to Wal-Mart’s, and there are a number of other retailers with the scale to give Amazon a very serious fight.
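To put those 2012 figures on one scale, here is a quick sketch comparing each retailer’s purchasing volume to Amazon’s (a rough comparison of the numbers above, nothing more):

```python
# 2012 cost-of-revenue figures from the paragraph above, in billions of dollars.
cost_of_revenue_2012 = {
    "Amazon": 46,
    "Wal-Mart": 352,
    "Target": 51,
    "Walgreens": 51,
}

amazon = cost_of_revenue_2012["Amazon"]
for retailer, cogs in cost_of_revenue_2012.items():
    # Ratio of each retailer's goods spending to Amazon's.
    print(f"{retailer}: {cogs / amazon:.1f}x Amazon")
# Wal-Mart buys roughly 7.7x what Amazon does.
```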
As I wrote the other day, even with driverless cars costing multiple hundreds of thousands of dollars, it’s fairly easy to see them becoming a reasonable option for certain enterprise functions pretty soon. After all, a half-million-dollar driverless car might seem prohibitively expensive compared to paying people, but if it can last a few years you can amortize the cost. Yet in a column about how driverless cars are 10 years away, Richard Nieva really buries the lede: the equipment for the Google Car now costs only about $150,000 plus the cost of the car. Given that it can operate 24 hours a day and doesn’t draw a salary, that seems like a win for the logistics operators of the world.
If I were running a logistics company and I could get my hands on one of those for $150,000 I’d have it on the road testing as soon as humanly possible. The only things standing in the way are the lack of production vehicles and regulatory issues, and resolving the latter will probably resolve the former very quickly. In short, I think the main piece of evidence cited for the self-driving car being far off demonstrates just how close they might be.
Incidentally, I would be terrified if I were FedEx. Their ability to take advantage of this new technology is pretty limited – even testing it would probably cause a massive revolt among their employees, for obvious reasons. Their infrastructure and brand position them well to pilot the driverless car, but total internal opposition will be rough to overcome. Once someone with a fair amount of capital and no employees decides to take them on, they will eat FedEx’s lunch.
As I mused on the other day, the advent of driverless cars has the potential to completely upend a lot of things about the political economy of America. A particularly astute reddit commenter, in a post being shared around, lays out the implications pretty well – as automated vehicles become cost-effective and widespread, they will simply destroy large portions of the economy. One aspect that I didn’t really dwell on but probably should have is that this transition has the potential to be pretty murderous for rural and even suburban dwellers. Why? Let’s follow the train of logic a little further.
The key insight here is that the driverless car substitutes capital for labor, like all automation. This means that driverless cars first become economical as part of a Lyft-like service in a big city, because the more a car is used, the faster the capital investment is amortized. Right now, the Google Car is bespoke and costs $600K+, but depending on its durability it’s possible to imagine that the Glyft might make sense in a big city. Well, maybe – let’s do some back-of-the-envelope calculations.
The standard taxi rate in San Francisco is $2.75 per mile, and let’s ignore pickup/dropoff charges for now. Let’s assume that the Glyft gets 25 mpg, and that gas in SF is around $3.70 (spot-quoted on 2/2/13), or $0.15 per mile. Maintenance on cars tends to run about 5.3 cents per mile, but these are complex machines so let’s figure double that, or $0.11. If we assume a fair amount of congestion and an average speed of around 20 mph, that translates to $49.80 in profit per hour. Not bad, right? So, is that economical for a $600,000 investment in a Glyft?
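As a sanity check, the per-hour profit figure works out like so (every input is an assumption stated above, not a measured number):

```python
# Back-of-the-envelope check on the per-hour profit of a hypothetical
# driverless taxi, using the assumptions from the text.
fare_per_mile = 2.75                 # standard SF taxi rate, ignoring pickup/dropoff charges
gas_per_mile = round(3.70 / 25, 2)   # $3.70/gal at 25 mpg ≈ $0.15/mile
maintenance_per_mile = 0.11          # roughly double the typical 5.3 cents/mile
avg_speed_mph = 20                   # assumed congested-city average speed

profit_per_mile = fare_per_mile - gas_per_mile - maintenance_per_mile
profit_per_hour = profit_per_mile * avg_speed_mph
print(f"${profit_per_hour:.2f} per hour")  # $49.80 per hour
```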
It all depends on a combination of two factors – the lifespan of the car and the amount of utilization it is able to get. The former is simple – the longer-lived a car, the more time it has to make back its costs. Utilization is the reason you want to focus on big cities: the denser the population, the fuller the utilization you’re able to get and the more profit a car can generate. You can look at it from either angle, but I would suggest looking at utilization because it’s easier to estimate. Let’s assume, conservatively, that there’s enough demand every 24-hour period for 8 hours of vehicle-miles-traveled (VMT) in San Francisco – that’s around $400 per day in profit. Which seems pretty good…that’s a yield of about 25% on your $600,000 investment.
But that means it takes 4 years to earn back your investment, which is quite a while and I’d be surprised if it didn’t fall apart long before then with 8 hours per day of continuous use. I would guess that 14-16 hours per day in a dense city is probably more realistic, and a two-year recovery horizon makes it look far more possible. With a $600k driverless car, we’re talking about something which I think is an infeasible investment today…but not crazy.* With a $300K driverless car, which will come very very soon, I think it makes sense (independent of all the regulatory nonsense). As economies of scale grow and car prices drop, cost per VMT will plummet.
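The payback math above can be sketched as a small function of utilization and vehicle price, keeping the $49.80/hour profit assumption from earlier:

```python
# Payback horizon for a hypothetical driverless taxi, under the assumptions
# in the text: $49.80/hour of profit while in service, 365 days per year.
def payback_years(vehicle_cost, hours_per_day, profit_per_hour=49.80):
    """Years needed to earn back the vehicle's purchase price."""
    annual_profit = profit_per_hour * hours_per_day * 365
    return vehicle_cost / annual_profit

print(f"{payback_years(600_000, 8):.1f} years")   # 4.1 years: $600K car, 8 h/day
print(f"{payback_years(600_000, 16):.1f} years")  # 2.1 years: $600K car, 16 h/day
print(f"{payback_years(300_000, 16):.1f} years")  # 1.0 years: $300K car, 16 h/day
```

The same function makes the final point obvious: halving the vehicle price or doubling utilization roughly halves the recovery horizon.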
I think the math demonstrates that this is probably going to happen much sooner than we think.
The flipside? Transport for those who live outside of cities will become substantially more expensive in relative terms. Since utilization rates in rural areas will be so low, cost per VMT for TaaS (transport as a service) will stay much higher than within cities; it likely won’t be an economic option. But buying a driverless car of your very own will, most likely for a while, be too expensive for all but the quite wealthy. That’s not an issue as long as people can continue to buy and own their own human-driven cars. Then there are two possibilities: either human operation is eventually banned (unlikely in the US for various cultural reasons), or the rural-urban divide keeps widening. I think the net effect would be to push greater migration to the cities, with the rural developed West continuing its slide into relative poverty.
*: I could see depreciation allowances pushing it into realistic territory.