Passwords be Gone!

This post was originally featured on HIStalk

One of the least-talked-about aspects of Google Glass is the proximity sensor. It’s extremely powerful and will change healthcare workflows.

Glass knows when the user puts the device on or takes it off, meaning that the user has to authenticate only once per wear. Contrast once per wear with traditional computing platforms, which require users to type in two or three passwords to unlock the device and access the EHR. Every clinical professional I know types a password at least 30 times per day. On Glass, they will log in once per day, or perhaps twice if they decide to take Glass off for lunch.

Of course this raises the question: how does one type in a password on Glass? Glass doesn’t exactly provide the most robust input options, and people won’t type passwords at all. A lot of Glass security developers are experimenting with asking users to take a picture of an instantly generated QR code on the user’s phone. They are modeling this authentication mechanism after the way users configure Glass to join Wi-Fi networks.

This is silly and overcomplicated. Why not just ask the user to repeat a programmatically generated sentence (one that couldn’t have been pre-recorded) and cross-reference the user’s voice? Children’s diaries have implemented primitive forms of this technology for over a decade, and much more robust voice-authentication libraries exist. I’m sure developers such as Silica Labs are building voice-driven authentication solutions that other Glass startups can implement. This is a significant problem with an obvious solution that every Glass app must deal with, which screams that the function should be outsourced and licensed. Sell pickaxes during a gold rush.
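The challenge-phrase half of this idea is simple to sketch. Below is a minimal, hypothetical illustration in Python: generate a random phrase that couldn’t have been pre-recorded, then check the speech-to-text transcript against it. The word pool and function names are my own assumptions; a real system would additionally run speaker verification (a voiceprint model) on the audio itself, which this sketch does not attempt.

```python
import secrets

# Hypothetical word pool; a production system would use a much larger vocabulary.
WORDS = ["apple", "river", "orange", "candle", "tiger",
         "window", "planet", "silver", "garden", "rocket"]

def generate_challenge(n_words=4):
    """Build a random phrase that could not have been pre-recorded."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def normalize(text):
    """Lowercase and collapse whitespace so transcripts compare cleanly."""
    return " ".join(text.lower().split())

def transcript_matches(challenge, spoken_transcript):
    """Liveness check: did the user actually say the challenge phrase?

    This only defeats replay attacks; identity would still need to be
    confirmed by matching the audio against the user's stored voiceprint.
    """
    return normalize(challenge) == normalize(spoken_transcript)
```

The random phrase is the anti-replay half; the voiceprint comparison is the identity half, and that’s the part worth licensing from a specialist.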

The power of authentication isn’t limited to Glass itself. Once a user is logged in, Glass can authenticate other devices on behalf of the user via Bluetooth. That means that an authenticated user on Glass could auto-authenticate against any other service on any other platform. Glass — the most secure, personal, and intimate computer — would become the authentication tool for all other computers. That use alone is a compelling enough reason for hospitals to purchase Glass units for every employee.

There are lots of companies selling proprietary single sign-on authentication hardware, but their products will be just as expensive as Glass, if not more so. Glass will be cheaper than almost everyone expects. Coupled with Glass apps that deliver real clinical and operational value, Glass will streamline virtually every human-computer interaction in healthcare.

It’s a good time to be a healthcare Glass startup. There are incredible opportunities for Glass. The iPhone App Store just hit its fifth birthday, and at least 75 percent of doctors are using smartphones at the point of care. Glass will change healthcare in ways that we can’t even think of yet.

Healthcare Glass startups, make yourselves known. Join Stained Glass Labs, list yourself with the Google Glass tag on AngelList, and engage with the healthcare Glass startup community.

The Challenge of Developing Engaging Consumer Apps on Glass

This post was originally featured on the Pristine Blog

In my presentation, Glass Insights, I make the case that it will be very difficult for consumer Glass apps to make any money. There are a number of compounding reasons:

1) Low hardware volumes. Glass volumes will be, at best, a few percent of smartphone volumes, and consumer apps require large volumes.

2) The cost of wearing Glass, coupled with an inherent lack of use cases. Glass competes not with non-consumption but with smartphones, and smartphones are quite good at what they do.

3) The inability of Glass apps to create addictive, regular behaviors.

I haven't talked much about the third reason. It will be the focus of this post.

Massively successful mobile consumer apps have to be sticky and addictive. Without exception, every major consumer-focused technology startup has thrived on a large community that's dedicated to using the app regularly. In most cases, regularly means more than once per day, if not a dozen times per day. See Dropbox, Facebook, Foursquare, Mailbox, Pinterest, Snapchat, and Twitter as examples. Users of these services interact with them every day. These apps derive value from interaction that creates addictive, repeatable behaviors. Interactivity is key. That's exactly why Fred Wilson of Union Square Ventures passed on the opportunity to invest in Pandora. Pandora isn't conducive to interactivity; users simply turn on Pandora and let it run in the background.

Glass is an inherently passive form factor. Apps can't be very interactive because Glass simply lacks the input mechanisms that provide the foundation for interactive apps. To recap, the input mechanisms are:

1. Trackpad (what a useless piece of unnatural, frustrating shit)

2. Voice

3. Camera

4. Accelerometer

5. Proximity sensor (wink!)

No combination of these input methods can create compelling, engaging experiences. Glass wasn't designed to create engaging experiences. Per Google, Glass is "there when you need it, and out of sight when you don't." Glass isn't "a cool platform that you can play with on your face all day."

Moreover, humans love touching things, and Glass eschews touch in favor of voice. Voice is the best input method on Glass today, and will remain so until there's a 20x leap in battery technology that supports recording video all day. I think most would concur that voice is intrinsically less personal than touch, creating another barrier to addictive apps.

So Glass apps can't be engaging or interactive. Can they at least be used regularly? There's no reason they couldn't be, but I've yet to find a single regular use case for Glass that more than 2% of society would find useful. Glass presents opportunities for millions of trivial, one-off consumer use cases. But Glass simply isn't useful for 98% of people's daily activities. I recognize that there are uses for Glass in every one of the activities listed below, but no more than 1-2% of people will actually find Glass useful in these scenarios:

Wake up, get ready for the day

Eat

Go to work / school

Go somewhere for a meeting / meal

Go to a bar / coffee shop / lounge after hours

Go to a restaurant for dinner

Go to a park / movie theatre / bowling / fun place

Come home, watch tv, talk with family / friends, eat

Prepare for sleep

I stand by my prediction: in Glass's current incarnation (a screen floating in the corner of your eye, no eye tracking, no hand tracking), not a single Glass-native consumer app will exit for more than $100M.
The Challenge of Developing Contextual Consumer Apps on Glass

This post was originally featured on the Pristine Blog.

I've argued that context is king in eyewear computing. I'd like to take that one step further and clarify when apps should and shouldn't exist on Glass. The operative term in that sentence is "when."

Most of the major consumer web services on Glass (Facebook, Twitter, Gmail, CNN, NYT, etc.) are content services. These services thrive because they provide users with an enormous amount of fresh content every day. Glass isn't a content-driven form factor (unlike the iPad, which is very content-centric). Glass is a contextual form factor. There's a mismatch between the major consumer web services and Glass. None of them are Glass-centric. Phrased differently, would any of the services listed above have been written Glass-first?

I like to phrase things in salient terms that Google never would, but probably should: if your Glass app isn't relevant to what the user is physically doing at a given moment in time, your app shouldn't be in view at all. My stern language is actually a superset of Google's more friendly-worded Glass development guidelines. The window of time in which relevant, useful information can be presented is narrow, usually just a few seconds. This is an inherent problem for all of the traditional consumer web services, and the Glass Mirror API exacerbates it. These services have no way of knowing what you're doing RIGHT NOW; they push information to users that has nothing to do with what the user is physically doing.

I understand Google's thinking behind the design of the Mirror API. The Mirror API makes it extremely easy to develop apps that send bundles of HTML-encoded information to the user. The problem is that the Mirror API pushes information without enough context. Neither the Mirror API nor natively written Android apps on Glass have any way of knowing what the user is physically doing at a given point in time, which means that the information being pushed can't be all that contextual. Yes, Glass supports geo-fencing, which provides some location-based context. Even still, location-based context rarely correlates precisely with what the user is physically doing within a given five-second window. When information is pushed to Glass, it's only immediately viewable for a few seconds. Given the intrinsic latency of the Mirror API and the lack of specificity provided by geo-fencing, it's practically impossible to push truly contextual information to Glass.
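To make the context gap concrete, here is a minimal sketch of building a Mirror API timeline card. The `html`, `speakableText`, and `isPinned` fields are real Mirror API timeline schema; the helper function is my own, and the actual authorized `timeline.insert` call (via google-api-python-client) is omitted. Notice that nothing in the payload can describe what the wearer is physically doing when the card arrives.

```python
def make_timeline_card(html, speakable_text=None, is_pinned=False):
    """Build the JSON body for a Mirror API timeline.insert request.

    The schema carries content (HTML, speakable text, pinning), but no
    field for the wearer's current physical activity -- the context gap
    discussed above.
    """
    card = {"html": html, "isPinned": is_pinned}
    if speakable_text is not None:
        card["speakableText"] = speakable_text
    return card

card = make_timeline_card(
    "<article><p>Breaking news headline</p></article>",
    speakable_text="Breaking news headline",
)
# The actual push (requires OAuth credentials, not shown):
#   service.timeline().insert(body=card).execute()
```

However the card is dressed up, it lands on the timeline whenever the server sends it, not when the user is in a position to act on it.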

I've spoken with quite a few individuals who have app ideas for Glass. Glass is a unique platform with specialized marginal value. Successful Glass apps must be contextually driven and must take advantage of these unique traits. Context is king in eyewear computing. Developers who would like to make a significant sum of money must hold themselves to that standard.

The Marginal Value of Google Glass, Continued

I've been traveling the country presenting about Google Glass from the perspective of a Glass startup. As part of the presentation (available here), I present a thesis on how to think about usability on Glass, use cases, and monetization.

The first step to thinking about usability and use cases is to define the fundamentally unique characteristics of the Glass platform. The best Glass apps will take advantage of the unique characteristics of the form factor.

The first post I ever wrote about Glass, back on February 25th, was titled The Marginal Value of Google Glass. I identified 3 unique characteristics of Glass:

1. Hands free

2. Heads up display

3. Friction free (always there)

Well, I've since identified a 4th, and it's probably one of the most obvious.

4. First person camera + microphone

Somehow I had missed it for months. I still think there may be 1-2 other unique traits that I've missed. If you can think of any, please let me know.