
Spectacles and $SNAP's $20B Valuation

Some quick numbers:

2016 Revenue: $515M (up about 6x year over year, largely because Snap artificially restricted advertising prior to 2016)

2016 Expenses: $919M

2016 Operating Loss: $404M

To justify a $20B valuation, the market is suggesting that, at some point in the future, Snap would need to generate financials that look something like this (assuming no discount for risk/time):

Revenue: $10B

Operating margin: 20%

Operating profit: $2B

P/E multiple: 10x

Growing revenues and profits by about 10% / year each

I don’t see any way Snap’s current business can achieve these numbers. Revenue needs to grow 20x, and margins must expand dramatically. I won’t dive into cost structure in this blog post, but let’s think through how Snap could grow revenue 20x.
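These figures can be sanity-checked with quick arithmetic, using only the numbers above:

```python
# Back-of-envelope check of the hypothetical financials above.
market_cap = 20e9          # $20B valuation
pe_multiple = 10           # assumed P/E
operating_margin = 0.20    # assumed operating margin
revenue_2016 = 515e6       # Snap's 2016 revenue

implied_profit = market_cap / pe_multiple            # operating profit needed
implied_revenue = implied_profit / operating_margin  # revenue needed
required_growth = implied_revenue / revenue_2016     # multiple of 2016 revenue

print(f"Implied profit:  ${implied_profit / 1e9:.0f}B")   # $2B
print(f"Implied revenue: ${implied_revenue / 1e9:.0f}B")  # $10B
print(f"Revenue growth needed: {required_growth:.1f}x")   # ~19.4x, i.e. about 20x
```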

Snap’s revenue is a function of two numbers: number of users, and average revenue per user (ARPU). In order to achieve 20x growth, Snap needs to grow both of those metrics 4–5x. Let’s look at each figure.

User Growth

Snap’s user growth came to a near stop after Instagram launched a direct clone called Stories in September of 2016:

Snap’s daily active users grew just 3% in Q4 2016. At its current growth rate, it will take Snap 47 quarters = 141 months = 11.75 years to grow its user base 4x. It’s possible that growth could accelerate in the future, but given the law of large numbers and fierce and unrelenting competition from Facebook — read this excellent anecdote from a Snapchat influencer — this seems unlikely. Snap’s first earnings report as a public company will be telling. User growth will be, by far, the most watched number.
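The 47-quarter figure follows from compounding 3% per quarter, which can be checked in a couple of lines:

```python
import math

quarterly_growth = 0.03  # Q4 2016 DAU growth rate from above
target_multiple = 4.0    # the 4x user-base growth needed

# Solve (1 + g)^n = 4 for n
quarters = math.log(target_multiple) / math.log(1 + quarterly_growth)
print(f"{quarters:.0f} quarters ~ {quarters * 3:.0f} months ~ {quarters / 4:.1f} years")
```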

ARPU

Next let’s look at Snap’s ARPU growth. See the chart below from Stratechery; ARPU is the grey line, plotted on the right-hand axis.

ARPU is growing quickly. Looking at this graph, one can see how ARPU could grow 4–5x in the next few years, maybe more.

However, there are fundamental limits at play that aren’t visible in that line. ARPU is a function of time spent in the app: the more time people spend in Snapchat, the more ads Snap can show them. But Snap is competing for a fixed pool of attention. People still have to eat, work, and sleep, so there’s a natural ceiling on time spent in app, and therefore a fundamental maximum ARPU.
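To see why time spent caps ARPU, consider a toy model. Every number below is an illustrative assumption, not Snap data; the only anchor is that daily social app usage seems to top out somewhere around the ~50 minutes/day Facebook reports:

```python
# Toy ARPU model: all inputs are made-up illustrative numbers, not Snap data.
def quarterly_arpu(minutes_per_day, ads_per_minute, revenue_per_ad):
    """ARPU for a quarter (~90 days) as a function of time in app and ad load."""
    return minutes_per_day * ads_per_minute * revenue_per_ad * 90

# Hypothetical today: 25 min/day, 3 ads per hour, a penny of revenue per ad
today = quarterly_arpu(minutes_per_day=25, ads_per_minute=0.05, revenue_per_ad=0.01)

# Holding ad load and pricing fixed, time in app can only roughly double
# before hitting the attention ceiling (~50 min/day):
ceiling = quarterly_arpu(minutes_per_day=50, ads_per_minute=0.05, revenue_per_ad=0.01)

print(f"time-driven upside: {ceiling / today:.1f}x")
```

In this toy model, time in app alone delivers only ~2x; the rest of the needed 4–5x ARPU growth would have to come from higher ad load or higher ad prices, both of which face their own limits.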

This raises the question: how much time do users spend in Snapchat today compared to other social networks, and how will that number change over time?

Facebook just reported that users spend on average 50 minutes per day (United States only) across the Facebook apps — Facebook, Instagram, Messenger, and WhatsApp — and that this number is growing: it’s up from 40 minutes in July 2014. However, it’s premature to suggest that Snapchat will follow a similar usage model. Even by July 2014, Facebook’s core application had effectively saturated the US market, with more than 80% of US adults on Facebook. It seems very likely that the growth in time spent in app per user was driven not by the early majority, but by the late majority and laggards (see the chart below from the famous book Crossing the Chasm on how to think about which customers are adopting a given product).

Facebook doesn’t break out usage by time-driven cohort, but intuitively, this makes sense. Among my millennial peers, I don’t think usage has grown by 50% in the last few years. But I can certainly see that Facebook usage has grown among my mother’s and grandmother’s peers. Two years ago my grandmother didn’t have Facebook. Now she’s on it every day!

Moreover, Facebook’s reported daily engagement numbers include Instagram. Had Facebook not purchased Instagram, Facebook’s aggregate numbers likely would have dipped as millennials have largely abandoned Facebook for Instagram and Snapchat. So there’s real risk to Snap that the innovators and early adopters may not maintain their engagement in the future.

My point is this: as Snap grows its user base, it’s unlikely that the early and late majorities will spend as much time in the app as the innovators and early adopters. That means Snap’s average time spent in app per user is likely to decrease, which will put significant strain on Snap’s total ARPU. Facebook was able to buck this trend, but only because it purchased Instagram. Snap has no guarantee that its users won’t migrate elsewhere, or that it’ll be able to purchase its own analogous Instagram. As Snap grows, it’s likely to find that the next 150M users simply will not use the app as much as the first 150M users.

In summary: Snap’s current business doesn’t justify a $20B valuation. Snap’s user growth has nearly stopped, and although ARPU is growing at a healthy rate, there are very real risks that could slow or stop ARPU growth. And to top it all off, Snap isn’t offering voting rights to public market investors, which should discount the stock price further.

How can one justify a $20B valuation for Snap?

Innovation at Snap

Directly from Snap’s S-1:

“We invest heavily in future product innovation and take risks to try to improve our camera platform and drive long-term user engagement. Sometimes this means sacrificing short-term engagement to introduce products, like Stories, that might change the way people use Snapchat. Additionally, our products often use new technologies and require people to change their behavior, such as using a camera to talk with their friends. This means that our products take a lot of time and money to develop, and might have slow adoption rates. While not all of our investments will pay off in the long run, we are willing to take these risks in an attempt to create the best and most differentiated products in the market.”

Snap has introduced a panoply of well-regarded features over the years: Stories, face and location filters, Memories, Discover channels, etc. Their track record in product innovation has been superb:

All of these features have been designed to drive engagement in the current line of business, which ultimately exists to increase time-spent-in-app, which drives ARPU, which has natural limits as discussed above. These innovations have been great, but within the confines of the current business line, Snap is going to push up against natural ceilings that will ultimately suffocate growth.

Snap is asking investors to bet on its ability to innovate its way into revenue. The product that could most likely justify Snap’s $20B valuation is Spectacles.

These are exactly what you think they are — sunglasses with a camera that automatically uploads footage to Snapchat by connecting to your phone via Bluetooth. That’s it. The functionality is intentionally limited.

The Massive Market Opportunity For Spectacles

Apple CEO Tim Cook recently told the media:

“I regard [augmented reality] as a big idea like the smartphone. The smartphone is for everyone, we don’t have to think the iPhone is about a certain demographic, or country or vertical market: it’s for everyone. I think AR is that big, it’s huge. I get excited because of the things that could be done that could improve a lot of lives.”

Global smartphone revenue is about $420B. The CEO of Apple is suggesting that the opportunity in AR is on the order of $400B. Many tech pundits think the market opportunity in AR could be even larger than that.

What’s The End State For Augmented Reality?

The end-state for desktop computing was achieved in the early 2000s: a folding laptop with a screen, keyboard, and trackpad that controls files and applications using a virtual desktop metaphor. The end state for smartphones is a multi-touch piece of glass with a grid of icons and a rich notification system.

The likely end state of augmented reality, in raw hardware terms, is conceptually simple: a set of glasses or contact lenses that can render any virtual 2D or 3D object or text in 3-dimensional space, with or without physics, interacting with physical or virtual objects. If you were wearing legit augmented reality glasses, you should be able to blend the virtual and physical worlds seamlessly… or “break” the laws of physics if you wanted to.

But we’re a long way away from this dream. Although the rendering engines and computer vision are rather mature, we’re still years away from packaging all of this in a sleek, glasses-like form factor. The primary bottlenecks are battery life and heat dissipation (CPU/GPU work can be offloaded to the cloud, and 5G should be fast enough for real-time, cloud-driven computer vision).

To be clear, I can’t forecast the details of how an augmented reality OS should work. How should Google search results appear, how should you navigate them, or how should you read a CNN article versus a Kindle book? I don’t know. But I can say with confidence that the augmented reality OS of the future will have to incorporate everything outlined above as those are some of the key experiences that are unique to augmented reality glasses.

The details of how an augmented reality OS should work are incredibly complex — far more than desktop or mobile computing. With an effectively infinitely large canvas and almost no restrictions, there are far more ways to deliver a crappy user experience. This actually works in Snap’s favor. I’ll touch on this more in a bit.

Snap’s Path To AR Dominance

Controlling the underlying OS will be paramount to capture value. As users look around and interact with the world, the OS vendor will dictate the rules in which applications and cloud services can interact with the real world and the user.

To capture any material part of the $400B that Cook is forecasting, Snap will need to control the underlying operating system on which the glasses function. There are two ways Snap can do this: manufacture their own glasses and bundle their own operating system (a la Apple), or offer an OS to other manufacturers (a la Microsoft and Google). They could in time transition from one model to the other; just because they’re manufacturing their own hardware today doesn’t mean they have to in the future.

In short, Snap faces an extremely tough battle: they are under-resourced and must play catch up on many fronts. But they may have one strategic “ladder-up” path that could allow them to vault past the competition.

Apple is obviously working on augmented reality in earnest per Cook’s comments. It’s been widely reported that Apple has had hundreds of engineers working on AR and VR for some time.

Google obviously is as well with the recent launch of Daydream VR. Google also likely has the best computer vision experts, datasets, and algorithms on the planet.

Microsoft has been working on this for about a decade. The first real implementation is the Hololens.

The same can be said of Facebook and its acquisition of Oculus, along with its recent full-frontal assault against Snapchat AR filters and lenses. And again, Facebook has extensive image and computer vision data and expertise.

These companies will invest 10–100x the resources that Snap has to commercialize augmented reality in pursuit of hundreds of billions of dollars of revenue.

Snap is not only out-resourced, but they must also play catch up. Although they’ll likely fork Android, which is open source, for basic OS components, Snap doesn’t have anywhere near the level of cloud services footprint that Apple, Google, Microsoft, and Facebook do. Snap will need to build foundational cloud service APIs that tech giants have been building for 5–10 years such as maps, identity, authentication, etc.

Snap’s Ladder Up Strategy

But Snap does have one major asset that the giants lack: a more clear ladder-up strategy to get there. What is a ladder-up strategy? From Stratechery:

Netflix started by using content that was freely available (DVDs) to offer a benefit — no due dates and a massive selection — that was orthogonal to the established incumbent (Blockbuster). This built up Netflix’s user base, brand recognition, and pocketbook

Netflix then leveraged their user base and pocketbook to acquire streaming rights in the service of a model that was, again, orthogonal to incumbents (linear television networks). This expanded Netflix’s user base, transformed their brand, and continued to increase their buying power

With an increasingly high-profile brand, large user base, and ever deeper pockets, Netflix moved into original programming that was orthogonal to traditional programming buyers: creators had full control and a guarantee that they could create entire seasons at a time

Each of these intermediary steps was a necessary prerequisite to everything that followed, culminating in yesterday’s announcement: Netflix can credibly offer a service worth paying for in any country on Earth, thanks to all of the IP it itself owns. This is how a company accomplishes what, at the beginning, may seem impossible: a series of steps from here to there that build on each other. Moreover, it is not only an impressive accomplishment, it is also a powerful moat; whoever wishes to follow has to follow the same time-consuming process.

Amazon has pursued a similar strategy as it built out the Everything Store and AWS.

Amazon started off selling just books: books were easy to ship, easy to search for, and because shipping was slow in those days, readers were OK with slower delivery of books versus other goods. The next categories were CDs, DVDs, VHS tapes, and video games because they fit the same general criteria. As Amazon built out its logistics and server infrastructure, it systematically moved into one new vertical at a time: shoes, kitchen appliances, etc.

By the early 2000s, Amazon had built such massive server capacity for the holiday season that it had massively under-utilized server assets for 85% of the year. So it began selling that excess capacity in the form of Amazon Web Services, which is today far more profitable than Amazon’s retail operations even though AWS is about 13 years younger than the retail business.

It would have been impossible for Amazon to launch as the Everything Store on day 1. And it would have never made sense to build out a data center as large as Amazon’s just to rent out server space back in the early 2000s. Amazon has continually laddered up.

Back to Snap. Snap knows it’s under-resourced and materially behind its competition in raw technological development. But Spectacles could offer the unique ladder-up strategy that lets it control the augmented reality glasses market.

There are a few steps between Spectacles and the end-state of AR. It makes sense that these will progress through the lens that Snap CEO Evan Spiegel has described: as a camera company.

2016/2017 Spectacles — capture video on the glasses. Manipulate, share from the phone.

Another layer — add a transparent screen that just layers Snapchat-like geo and face filters (possibly other broader “life” filters) into Spectacles. Move some basic image/video manipulation functions from the phone to the glasses.

Another layer — hand detection / finger control / ring control for more rich interactions with filters.

Another layer — full blown general purpose computer vision in which you can manipulate and interact with any digital object in a physical world.

The layers I’ve described are vague, but give you a sense for how the product could evolve. However Spectacles end up evolving, they’re likely to be extremely camera focused. Spiegel has repeatedly described Snap first as a camera company, not as an ephemeral social network.

It’s likely that in the early days, glasses will be inferior to smartphones for most computing tasks that smartphones currently perform: reading, messaging, capturing images/videos, sharing content. It’s very clear that Spectacles today are just focused on the latter 2 jobs.

Similarly, although the iPhone was a general purpose computing device, it launched as just a really nice phone. Apple could never have forecast how people would use apps like Instagram, Uber, Flipboard, or any of the myriad games. The key was getting the general model for touch and mobile computing in front of people, then iterating from there and unlocking more value over time through software and hardware tweaks. Spectacles are taking the same evolutionary approach: get the product out there with a very specific use case — capturing and sharing videos — and iterate from there.

This is a problem that the other tech giants have. Apple needs to find 1–2 super compelling use cases to drive people to purchase their glasses. Perhaps that use case is capturing images/videos and sharing them on any social network other than Snapchat. Recall that this was one of a few things Google really emphasized with Google Glass. And that’s exactly Snap’s opportunity — people share many more images/videos on Snapchat than on any other social network due to the fundamental nature of the app (camera first, not text first, ephemeral) and its network.

Google Glass faced exactly this problem. Although the device was broadly capable, it wasn’t excellent at anything, and because it looked strange, consumers never wore it. Snapchat really understands the image/video capture use case better than anyone, and will optimize smart glasses around that first, and then add general purpose compute later. I spent a long time thinking about consumer applications for Google Glass, and watched hundreds of people try on Glass for the first time. By far, the most compelling use case for most people was frictionless image capture. Snap has a massive lead on this front, and will likely double and triple down on it to pioneer the smart glasses revolution.

$SNAP, The Call Option On Spectacles

If Snap nails Spectacles and can control the augmented reality OS for a significant fraction of the market, $20B will be a bargain. But if it can’t, it’s going to take Snap a long time to achieve the financial performance necessary to sustain a $20B+ valuation.

In the interim, the most important number to watch is user growth. If user growth doesn’t re-accelerate, it seems extremely unlikely that Snap will ever grow its audience to be large enough to justify its current valuation.

Why Are Tech Giants Investing in AR and VR?

All of the tech giants are investing in virtual reality (VR) and augmented reality (AR). For players like Apple, the target business model is obvious: sell high-end, polished consumer experiences at healthy margins. The same is generally true of other hardware players such as Samsung and HTC. Sony and Microsoft are investing not to profit on hardware, but to profit on the ecosystem around the hardware.


But why are advertising companies like Facebook and Google investing in AR and VR? And how will direct response (Google’s dominant revenue stream) and brand advertising (Facebook’s dominant revenue stream) work in AR and VR?


Facebook and Google are investing in AR/VR as defense. The historical precedent is clear: Google bought Android as a defense mechanism to reduce the risk that Microsoft would dominate mobile. Although Android itself isn’t directly materially profitable to Google, Google leverages Android as a source of control over the technology ecosystem to ensure unfettered access to Google’s revenue engine: search.


Mark Zuckerberg has stated that he wishes that Facebook controlled a mobile OS. Why? Because he would prefer that the OS support social sharing through Facebook’s social networks as effortlessly as possible. Zuckerberg wants to create a social OS. This would make Facebook’s products even stickier, draw more engagement to Facebook properties, and ultimately generate more profit.


Given the power that Apple and Google exert over their respective mobile ecosystems, it’s natural that both advertising-based tech giants are investing heavily to control the AR and VR ecosystems of the future. What revenue opportunities do VR and AR offer Google and Facebook?


AR and VR present the greatest advertising canvases conceivable. AR and VR experiences will offer, on a per person basis, orders of magnitude more ad inventory that can be hyper-targeted more precisely than ever before. That is the perfect combo for advertisers: scale, precision, and context.


OS makers dictate the rules of the game for their respective hardware form factors. The OS explicitly allows and disallows certain actions by 3rd party apps. Beyond supporting modal, foreground applications, iOS and Android define how apps can interact with the lock screen, home screen, notifications, hardware controls, and silicon components. Android is far more extensible than iOS, but even Android places explicit limits on developers. For example, 3rd party developers can’t replace the notification engine.


Mobile has eaten the world because smartphones have come to consume the white spaces in our lives: people turn to their phones to tweet, Snapchat, and check Instagram/Facebook while waiting at traffic lights and subway stations, at restaurants while waiting for a friend, and even in the bathroom. This has created an enormous opportunity to profit: attention is the world’s most valuable commodity. This is why Facebook has absolutely crushed it on mobile. Facebook controls a significant majority of attention for most users in a new advertising canvas (the white space of people’s lives), and Facebook is selling access to that attention for enormous profit.

But mobile has, on a relative basis, hardly touched the active moments of our consumer lives: driving, eating, playing sports, watching movies, socializing with friends, etc. Yes, people play with their phones intermittently while doing all of the above, but one cannot read an actor’s bio and watch a movie at the exact same moment in time. Although Google Maps and Uber have transformed how all of us get around, none of us need to actually interact with our phones while we’re driving or Ubering (and in fact, we shouldn’t as a safety precaution). Audio cues are sufficient. No one uses their phone while playing sports.


AR and VR present an opportunity to layer in ads contextually into active parts of our lives. I’ll cover VR first, then AR.


Technically, VR is a modal activity. You can’t be doing something in VR and the physical world concurrently. But there will be virtual worlds that people can explore for hours on end with all kinds of virtual activities — games, Major League Drone Racing, virtual white boarding spaces, movies, porn, etc. These virtual environments will represent the ultimate advertising canvas for brand and direct response advertising.


For example, between drone races, Facebook/Google will present ads to buy similar drones and register for drone racing lessons. This represents the perfect combination of brand advertising — knowing who you are and creating purchase intent — with direct response advertising — efficiently finding and buying what you know you want.


Or while you’re drawing out the next UI in a VR whiteboarding space with a colleague who lives 500 miles away, you’ll see an ad for an app that helps you build better wireframes. The ad will show how you can literally drag and drop wireframe elements with your hands in virtual space, and interact intelligently with your whiteboard.


AR presents myriad awesome advertising opportunities. As you drive down the highway at 3PM, Google/Facebook will show you an ad to pull into McDonald’s in the next 15 seconds for a 15% discount on a chicken salad. Google/Facebook know you haven’t eaten lunch today based on health tracking and that you’re on a low-carb diet, and McDonald’s knows it’s slow between lunch and dinner and will be glad to generate lower-margin revenue during off-peak hours. AR is the perfect advertising medium for the physical world, and the ultimate medium for gaming human psychology around scarcity. The opportunities for limited-time offers are infinite!


Despite the huge opportunity mobile has presented, AR and VR represent advertising canvases that are orders of magnitude larger. The limitations that iOS and Android impose on 3rd party developers will seem insignificant to advertisers relative to the limitations that AR/VR OSes will impose on 3rd party apps and advertisers. There will no longer be a lock screen or home screen. Literally the entire world, virtual or physical, will be the “home screen.” The advertising opportunities are nearly infinite, and as such Facebook, Google, and the other tech giants are going to duke it out to dictate the rules of that experience.

The TV OS Of The Future Will Decouple Video And Software

I was at the Formula 1 US Grand Prix a few weeks ago. It was pretty awesome, even though I couldn’t see the cars travel the entirety of the 3-mile track due to the hills. To mitigate this, I used the F1 iOS app to track driver standings in real time. While I repeatedly shifted my glance between the track and my iPhone, I realized that there should be drones hovering over the entire track, and that the camera feeds from the drones should be selectable from the app so that I could focus on a particular hairpin turn with replay controls. And I should be able to follow a particular driver through the track from the various drone cameras.

It was a cloudy day, so I thought about enjoying that same experience at home on my couch. And then it occurred to me that the new Apple TV should enable the experience I had just imagined. Indeed, Apple is envisioning exactly the type of experience I just described as evidenced by their close work with the MLB. They recognize the opportunity of interjecting software into a video feed.

The MLB demo, as awesome as it is, demonstrates the single greatest problem with the Apple TV: content providers need to decouple video feeds from apps. To be fair to Apple, they can’t control this, but it will adversely impact app quality and innovation on the platform, ultimately to the detriment of consumers.

So what specifically is wrong with the MLB app that Apple showcased? It’s great to be able to see the scores for all of the games, and then jump into a game. I’m sure the product management team for the MLB recognized this as low hanging fruit for baseball fans. This isn’t the problem. Rather, the problem is that I may want my MLB experience to be highly integrated with Twitter and the fantasy league(s) that I’m in. As a hypothetical consumer who pays for MLB all access, I probably follow quite a few baseball-related Twitter accounts, and I probably play a lot of fantasy sports across a few leagues.

Or maybe I want to see the exact flight path of the ball in slow motion compared against the last 3 balls thrown by the same pitcher. Or the guy on 1st base creeping off to steal 2nd base, even if the main camera isn’t showing it.

The possibilities are endless. The MLB could theoretically build all of these functions, but they won’t. Depending on the “use case” for watching baseball, viewers will want to layer in different content and watch the game in different ways. The MLB will never accommodate everyone’s desires.

The solution is to decouple the video feed from the MLB app. If ABC/NBC/FOX/CBS were to provide all of the camera feeds from a game as an API for Apple TV, 3rd party developers could build on those feeds to create unique experiences for every kind of fan - the data junkie, the selfie-aholic, the fantasy nut, etc.
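To make the decoupling concrete, here is a hypothetical sketch (in Python for brevity) of what such a feed API might expose. None of this exists today; every class and field name is invented for illustration:

```python
# Hypothetical sketch of a decoupled broadcast-feed API. Nothing here
# exists today; all names are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CameraFeed:
    feed_id: str
    label: str       # e.g. "main broadcast", "1st base line"
    stream_url: str

@dataclass
class GameBroadcast:
    game_id: str
    feeds: List[CameraFeed] = field(default_factory=list)

    def feed_by_label(self, label: str) -> CameraFeed:
        """Let a 3rd party app pick any camera, not just the director's cut."""
        return next(f for f in self.feeds if f.label == label)

# A "data junkie" app chooses its own camera angle, independent of
# whatever the official app happens to be showing:
game = GameBroadcast("hypothetical-game-id", [
    CameraFeed("cam1", "main broadcast", "https://example.com/cam1"),
    CameraFeed("cam2", "1st base line", "https://example.com/cam2"),
])
print(game.feed_by_label("1st base line").feed_id)  # cam2
```

The point of the sketch is that once the network publishes the raw feeds, the camera choice, replays, and overlays all become app-level decisions rather than broadcaster decisions.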

Just imagine the endless possibilities to make live football smart. I should be able to switch to any camera in the stadium, rewind any play, draw the flight path of the ball and the running paths of the receivers, mark up the play with my own X’s and O’s, etc. As the NFL layers sensors into players’ pads and helmets, I should be able to see the force exerted in a particular hit, and Tweet a video of my buddy sitting next to me spilling his beer in awe, captured from the camera that’s built into my TV.

The official apps from the NFL and the broadcasters will never provide the ultimate experience for any fan. The only way to accommodate everyone’s “use case” for watching football is to decouple the content from the app itself. This will empower 3rd party app developers to layer software into the video-consumption process in a way that accommodates any use case.

Although the new Apple TV has launched with official apps from most of the major networks and sports leagues, the apps are universally primitive UIs to select particular video segments and highlight reels. I expect that this will be the case for a while. But at some point, it will become clear that content and software should be separate.

Thoughts on Drones

I’ve been thinking about drones a lot lately. Obviously the technology component is really cool, but more importantly, drones shatter some fundamental assumptions about business. Below are some thoughts on drones in no particular order:

Amazon will unveil Amazon Drone Services (ADS) that will allow anyone to ship anything (within ADS size/weight constraints) anywhere in a given geography in X minutes. This will change local commerce, and will have huge implications for the current crop of food focused startups (Favor, Doordash, Postmates, etc). Aggregators such as Instacart will be better positioned to manage this transition as they provide specialized value beyond raw delivery.

There will be a convergence in “form factor” for the most important use case: delivery. There will certainly be drones of various sizes for carrying different payloads varying distances, but the underlying designs will likely converge, and the software that powers the drones will be shared.

There will be unique use cases - such as oil pipeline inspections and maintenance - that may require unique hardware that ADS will not accommodate. Niche markets will emerge to serve these use cases. The technology vendors building these solutions will offer a unique combination of integrated hardware and application-layer software for specific industry verticals. For these niche use cases, hardware and software will be provided by a single vendor, though the vendor will probably use open source technology like AirWare.

Once drones converge in form factor, there’s no reason why Amazon or Google won’t become the dominant infrastructure providers of most drone-based intra-city logistics. It seems very unlikely that any startup today - even the largest drone manufacturer, DJI - could muster the capital and resources to compete with Amazon or Google head-on. Given the roaring success of AWS, Amazon’s very public work on drones, and Google’s aggressive robotics research and acquisitions, it’s clear that Amazon and Google are going to fight to the death to become the dominant drone-logistics providers for cities. Startups simply won’t be able to compete.

This also means that the tech giants will drive most of the innovation in drone operating systems - computer vision, collision detection, wind management, electronic virtual highways, etc. Amazon and Google will open source their drone OSes to drive standardization in hardware around their respective software defined ecosystems, just as Google has done with Android. Startups won’t have much of an opportunity to drive innovation here in the long run, though there could be some significant early exits to tech giants. AirWare could be a notable example of this.

There are certain parts of the supply chain that Amazon probably won’t get into, such as on-demand medical rentals (rent an X-ray machine when you break your arm for a consult via DoctorOnDemand), fresh-baked bread (will Amazon build a bunch of ovens?), and pharmacies (too much regulation for Amazon). There will be unique opportunities to reshape many of these businesses by leveraging ADS.

The businesses most likely to be reshaped are those that have historically been very local and fragmented - such as bakeries and meat delivery. There could be an enormous opportunity to disrupt Sysco by rethinking the SMB supply chain through drones.

Because drones will reduce the cost of transportation by 10-100x, restaurants may choose to order ingredients “just in time” rather than every 1-2 days. This can improve profit margins by reducing waste, and can free up significant working capital. Drones will reshape the entire food supply chain, and in the process will create opportunities to launch new companies at every layer of the food value chain. By enabling just-in-time everything, drones reduce the need for working capital.
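To make the waste and working-capital claims concrete, here is a minimal back-of-the-envelope sketch. Every figure in it - the $1M revenue, the 40% food-cost ratio, the order cadences, and the spoilage rates - is an invented illustrative assumption, not data about any real restaurant:

```python
# Hypothetical restaurant: $1M annual revenue, 40% of revenue spent on ingredients.
annual_revenue = 1_000_000
food_cost = 0.40 * annual_revenue  # $400k/year on ingredients

# Inventory on hand is roughly (days between orders) worth of ingredients.
daily_food_cost = food_cost / 365

inventory_every_2_days = daily_food_cost * 2     # ordering every 2 days
inventory_jit = daily_food_cost * 0.25           # several small drone orders per day

working_capital_freed = inventory_every_2_days - inventory_jit

# Assume spoilage falls from 8% to 3% of food cost with fresher, smaller orders.
waste_savings = (0.08 - 0.03) * food_cost
margin_improvement = waste_savings / annual_revenue

print(f"working capital freed: ${working_capital_freed:,.0f}")
print(f"annual waste savings:  ${waste_savings:,.0f}")
print(f"margin improvement:    {margin_improvement:.1%}")
```

Even with these toy numbers, the two effects are distinct: freed working capital is a one-time balance-sheet benefit, while reduced waste is a recurring margin improvement.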

Bitcoin and drones go hand in hand. By reducing the cost of physical transportation 10-100x, there will be a need for a low friction, instant, low-transaction fee payment system for small transactions. Bitcoin is the obvious solution.

Drones will open new opportunities in “space arbitrage,” just as the Internet has enabled labor arbitrage by allowing workers in India to take on white-collar American tasks. The use case that most immediately comes to mind is an infinitely large, virtual closet for people who live in big cities with small closets - think NYC or SF. I can see a future in which you pick out your clothes for the day and have them delivered in the morning, freshly washed or dry cleaned. Or better yet, you’ll be able to rent clothes at 8pm on a Friday and have them in hand by 8:15pm. This naturally lends itself to interesting businesses in clothing rental. There will be other opportunities in “space arbitrage” built on this idea.

There are an enormous number of drone applications in construction, architecture, agriculture, and oil and gas. These applications are fairly obvious and offer near-immediate ROI, which is why most application-layer drone investment today is focused on these verticals.

Cost is King

I recently listened to an a16z podcast about disruption theory. I’m a big fan of the theory and have been thinking a lot about it lately. The most important facet of the theory is cost.

Incumbents are disrupted because their cost structure - typically measured by gross margin - is too high. When a smaller company devises a way to deliver the same value as an incumbent with a lower cost structure, disruption can occur. Thus the key to disruption is reducing cost.
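A toy comparison makes the mechanic visible. The numbers below are invented for illustration: an incumbent and a disruptor sell the same product, but the disruptor’s lower cost structure lets it remain profitable at a price where the incumbent loses money on every sale:

```python
def gross_margin(price, cogs):
    """Gross margin as a fraction of price."""
    return (price - cogs) / price

# Hypothetical: same product, different cost structures.
price = 100.0
incumbent_cogs = 60.0
disruptor_cogs = 30.0

print(f"incumbent margin at $100: {gross_margin(price, incumbent_cogs):.0%}")
print(f"disruptor margin at $100: {gross_margin(price, disruptor_cogs):.0%}")

# The disruptor can cut price below the incumbent's unit cost and still profit.
cut_price = 50.0
print(f"disruptor margin at $50:  {gross_margin(cut_price, disruptor_cogs):.0%}")
print(f"incumbent margin at $50:  {gross_margin(cut_price, incumbent_cogs):.0%}")
```

At $50 the disruptor still earns a healthy 40% margin while the incumbent’s margin is negative - which is why the incumbent cannot simply match the price.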

Although we like to think about innovation in terms of novelty - think electricity, airplanes, or computers - the history of technological development can be more cohesively understood as the systematic removal of cost. Very few innovations were truly novel along a dimension other than cost.

For example, airplanes dramatically reduced the cost of long distance travel. However, humans didn’t need airplanes to travel long distances. Millions of people travelled across large swaths of land and sea over the course of human history without planes. Airplanes just made long distance travel much more economical. I don’t mean to understate the power of this particular cost reduction - planes substantially changed how frequently humans could travel long distances, which has shaped every facet of business and personal life across the planet.

Even computers can be described through the same lens. Travel agents did what Kayak does. Telephone operators did what computers do. Analysts do what SaaS dashboards do. Uber drivers do what computers will soon do. Because computers are programmed by humans, computers can only do what humans instruct computers to do. Through this lens, computers can be recognized as infinitely cheap “humans.” Electricity, silicon, and software costs pale in comparison to paying people’s wages - and thus mortgages, cars, meals, clothes, etc.
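The “infinitely cheap humans” framing can be put in rough numbers. The wage, throughput, and server-cost figures below are invented for illustration only; the point is the orders-of-magnitude gap, not the specific values:

```python
# Hypothetical cost-per-task comparison: a human analyst vs. an automated job.
# All figures are illustrative assumptions, not measured costs.

human_hourly_wage = 30.0          # fully loaded $/hour
tasks_per_hour_human = 20         # e.g. records reviewed per hour
human_cost_per_task = human_hourly_wage / tasks_per_hour_human

server_hourly_cost = 0.10         # $/hour for a small cloud instance
tasks_per_hour_machine = 100_000  # the same review, automated
machine_cost_per_task = server_hourly_cost / tasks_per_hour_machine

print(f"human:   ${human_cost_per_task:.2f} per task")
print(f"machine: ${machine_cost_per_task:.7f} per task")
print(f"ratio:   {human_cost_per_task / machine_cost_per_task:,.0f}x cheaper")
```

Under these assumptions the machine is over a million times cheaper per task, which is the sense in which compute behaves like infinitely cheap labor.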

It thus stands to reason that the best businesses are those that methodically remove cost. The more significant an innovation’s financial impact, the more compelling the business case for adopting it. For most companies, the largest source of cost is human labor. Thus the largest opportunities in business will be those in which technology can automate what humans used to do.

This notion that cost is the central lever of business is perhaps best captured by Jeff Bezos, founder and CEO of Amazon: “There are two kinds of companies, those that work to try to charge their customers more and those that work hard to charge their customers less. We will be the second.”

Cost is indeed king.