The Cost of Wearing Glass

This post was originally featured on the Pristine blog.

I started the Austin Google Glass Meetup. For our opening meeting this past week, I gave this presentation. It opens with an introduction to Glass, both the hardware and software, and then delves into usability, use cases, and how to think about Glass app markets and monetization.

My goal when building the presentation was to take all of the insights I had gleaned and written about Glass and gather them into a single coherent presentation. While giving the live presentation, I arrived at perhaps the most profoundly simple, concise, and powerful insight yet about monetizing Glass apps. There's an explicit cost of wearing Glass: apps have to be compelling enough to justify actually wearing the device. No major computing platform before Glass has carried an explicit, direct cost just to use it (electricity doesn't count, since it isn't paid for immediately).

People don't like putting stuff on their faces. They will if there's material utility, but they won't just because Google (or Apple) made it. Do people wear glasses to be stylish, or because they're functional? Sure, people want their glasses to be stylish, but no one wears empty frames. Glasses have a purpose: to help people see. People have been called "4 eyes" for years, and they've dealt with it because the cost of not wearing glasses (and not being called "4 eyes") is being blind. Being called "4 eyes" is better than being blind.

So for all of the consumer-focused Glass app developers out there: please don't waste your time writing trivial apps. If you intend to make any real money, your apps need to be so good that they justify the cost of wearing the device itself. That's no small feat. There are certainly consumer niches that will use Glass for specific hobbies, but there are only so many hobbies to be tapped. If your app assumes the user is wearing Glass just to wear Glass, you're guaranteed not to make any significant sum of money.

 

How Does One Herd a Few Hundred Thousand Sheep?

This post was originally featured on HIStalk.

Medicine is one of the least standardized industries. Pricing varies by carrier, region, and procedure, often by an order of magnitude. Before EHRs, every physician designed their own paper templates, and even in the EHR era, many doctors still use highly customized digital templates. Most laymen assume that medicine is a repeatable science, where there's a best way to do things. Apparently not.

Although complete standardization is bad, the status quo is 0.1 percent standardized. Every doctor practices his or her own unique flavor of medicine. The ideal lies somewhere in between the two extremes. The benefits of more harmonious and coordinated documentation would be felt throughout the healthcare system: more effective training for residents, better communication among care providers, more efficient back-office work (i.e. coding and health information management), simpler audits, and maybe eventually patient readability.

How on earth are clinicians going to be trained to adopt better, more standardized documentation practices? They aren't. I would pity the poor souls whose job it is to tell hundreds of thousands of doctors and nurses how to do something the new "right" way (which implies they've been documenting the wrong way).

But what if there were a different way? What if clinicians didn’t have to be taught new documentation standards from an overlord? Could a change in daily behavior be driven through a bottom-up approach instead of top-down? What would the bottom-up approach look like? How would it work?

Peer pressure is perhaps the most effective behavioral change mechanism of all time. It has proven to be the single most effective driver of lifestyle change for helping people lose weight and keep it off. What if clinicians pressured one another into better, more consistent documentation practices?

Richard Vaughn, MD recently posted a brilliant idea on the listserv for the American Medical Directors of Information Systems (AMDIS): let doctors rate the quality of other doctors’ clinical notes in the EHR on a five-point scale.

Every doctor would have a "documentation quality" score viewable by all the other clinicians at the hospital. This would be a sensitive issue. It would need to be designed and presented in such a way that it's clearly a rating of documentation, not of clinical care ability or quality. The score should be visible only to peers, not to people in other job roles or to the public.

Then again, the system could be gamed. It would be an interesting experiment nonetheless. Hospital management would learn a lot about bottom-up behavioral change mechanisms that could be applied to future initiatives. Perhaps companies that try to drive quality improvement, such as KaiNexus, could tap into it.
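Just to show how simple the underlying mechanics could be, here's a purely hypothetical sketch of a per-physician documentation score built from five-point peer ratings and shown only to clinicians in the same job role. None of this comes from Dr. Vaughn's proposal; the names and the visibility rule are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a documentation-quality score averaged from five-point peer ratings,
// visible only to clinicians who share the same job role.
public class DocumentationScore {
    private final String role;                      // e.g. "physician"
    private final List<Integer> ratings = new ArrayList<>();

    DocumentationScore(String role) { this.role = role; }

    void addRating(int stars) {
        if (stars < 1 || stars > 5) throw new IllegalArgumentException("ratings run from 1 to 5");
        ratings.add(stars);
    }

    /** Average rating, or null if the viewer isn't a peer or no ratings exist yet. */
    Double visibleTo(String viewerRole) {
        if (!role.equals(viewerRole) || ratings.isEmpty()) return null;
        return ratings.stream().mapToInt(Integer::intValue).average().getAsDouble();
    }

    public static void main(String[] args) {
        DocumentationScore score = new DocumentationScore("physician");
        score.addRating(4);
        score.addRating(5);
        System.out.println(score.visibleTo("physician"));     // 4.5
        System.out.println(score.visibleTo("administrator")); // null: not a peer
    }
}
```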

The Pristine Story: Peeking out of the Shadows

This post was originally featured on the Pristine blog.

It's been a busy couple of weeks for us. We can't believe how fast things are moving, or how many people have been offering their help and support. It's been really incredible. I can't possibly express my gratitude enough. Thank you thank you thank you to everyone who's helped!

We're finally in a position where we can talk about what we're actually developing… well, kind of. We can't provide any product details, but we can say that we're building Google Glass apps for surgery to improve patient safety and efficiency. We're developing a suite of apps to support the entire surgical flow (pre-op, intra-op, and post-op) across all major job functions in the OR: surgeons, anesthesiologists, and nurses. We've built demos for two of the most compelling use cases in the OR. We're only demoing to investors and surgeons right now; we'll talk more publicly about what our apps do in a couple of months, after we've proven efficacy with a pilot site. Speaking of which...

We're currently in talks with at least 8 hospitals and care providers about piloting Pristine; most are in Austin, though we're also talking with organizations in Phoenix and NYC. We're hoping to finalize those details and sign pilot contracts in the next few weeks, and begin pilots in August.

I was in New York last week meeting with investors, partners, and doctors. We secured our first $100k investment from an angel! We still need another $300k, but we're off the ground. At least we can feed ourselves now :). I hate to beat a dead horse, but if you know anyone that would be interested in this kind of an investment opportunity, please put us in touch. We've got an investor package ready to go out the door.

As I mentioned in our last Pristine Story update, we're hiring. We've been frantically interviewing people every day between meetings. We've extended our first offers, and expect our first non-founder employee, perhaps even two, to start on July 8th. We're still looking for more top-tier Android developers, device integration specialists, and healthcare-integration developers, though. If you know any developers looking for crazy new opportunities, please put us in touch!

This past week, two healthcare IT blogs reached out to interview me about Google Glass in healthcare. I'm expecting those interviews to be posted within the next few weeks. Additionally, I was invited to present at the Converge conference in Philadelphia July 9-10. We're also working on a segment with a local TV station in Austin to get the word out about what we're doing in the local community. We're hoping the local TV exposure can bring in a flood of new applicants.

I started the Austin Google Glass Meetup. Our first meeting will be Tuesday July 2nd at 7PM at Capital Factory. We're expecting it to be pretty crowded, so get there early if you can. I'll be presenting all of our insights on Google Glass - strengths, weaknesses, development strategies, use cases, and how to think about Glass app markets. After the presentation, we'll open up the floor for everyone to try on the 7-8 Glass units that we expect to be there. You're invited! Just RSVP at the link above.

I flew out to Minneapolis this weekend for the American Society of Echocardiography annual conference. I'm helping Dr. Partho Sengupta with a small piece of the keynote presentation of the conference. He'll be receiving the most prestigious award in cardiology research, the Feigenbaum Lecturer Award, for his work. We're extremely honored to help.

We've got a busy travel schedule ahead of us. I'm going to Philadelphia for the Converge Conference July 9-10. Patrick and I are going to Phoenix for a major presentation on July 17. I'll also be going to San Francisco a week later, July 24 - 29 for meetings and a family wedding. And then to NYC July 31 - August 5 to take more meetings.

And last but not least, I've been blogging:

HIStalk

Learning from the Signs

(International) White Collar Healthcare

What if Google Does it?

The Pristine Blog

Glass Insights: Input-Output

My Blog

Understanding Social Responses to Glass

Flirting with Glass

Samsung@Home

Smile

 

The Pristine Story: Launch!

This post was originally featured on the Pristine blog.

Every few weeks I send out an email called "The Pristine Story" to everyone we're talking with, to provide updates on what we've done and where we're going. I sent out the first one a few weeks ago, so this is a bit dated. The rest will be more timely moving forward.

It's official: my cofounder and CTO Patrick Kolencherry and I launched Pristine! We’re trying to redefine doctor-computer interaction on Google Glass. Check out the teaser video on our website.

We have our hands on Glass here in Austin. We're expecting a second unit from Google soon, and looking for more. Patrick has been developing for almost a month now, and the progress has been amazing. We've built out a robust back end and over two dozen screens on the front end.

I’ll be coming to NYC June 18-24 to meet with investors and partners, and will try to squeeze in a little R&R and time with friends too ☺.  Please reach out so we can catch up if you’re in the city. Also, Patrick and I will both be going to Phoenix on July 17th for a meeting with a very prominent group of doctors that want to pilot Pristine.

Patrick and I are all in, and paycheck-free. At least for now. We're raising a $350k seed round from angels. If you know any investors who would be interested in this kind of venture, please put us in touch. We have an investor package ready to go out. Glass apps are hot, and we've been receiving a lot of interest. We're doing our best to accommodate everyone who wants to invest.

We’re hiring! We’re looking for top-tier Android developers who want to change how doctors practice medicine. We’re also looking for developers that know the Mirth interoperability engine. We’re offering great compensation, benefits, developer perks (Retina Macbook Pros for everyone!), and an opportunity to work on some of the most incredible technologies in the world. We want to find the best Austin has to offer. Please, if you know anyone that would be interested, send them our way.

To succeed, we need to pilot Glass in clinical environments with doctors who are just as crazy as we are. We're in the process of talking with doctors and hospitals who want to pilot our apps on Glass. We've found a few interested parties, and are courting them now. Again, if you know any doctors who would be interested, please put us in touch.

And of course, I haven't stopped blogging. I'm now blogging across three outlets: HIStalk, the Pristine blog, and my personal blog. Healthcare-focused posts will go to HIStalk, Glass development posts to the Pristine blog, and everything else to mine. In the past few weeks, I've written:

HIStalk

What if Google does it?

Bringing the principles of couch surfing to healthcare

The third screen revolution in healthcare is before us

Where is the Aereo of the hospital EHR industry?

Pristine

Choice be damned

You don’t need Instagram on your face

My blog

What if I could?

In defense of email, subjects, and threads, and a follow up: Google, thanks for listening

On stepping outside of your comfort zone

Eyeware computers circumvent logging in

Context is king in eyeware computing

Glass Insights: Input-Output

This post was originally featured on the Pristine blog.

This is the first in a series of posts that will illustrate how Glass is different from all of its computing predecessors: PCs, smartphones, and tablets. The series will cover every aspect of developing for Glass: programming and technical details, UX and ergonomics, use cases, and more. Glass is a unique platform, and everyone is still trying to understand the nuances of its strengths, limitations, and opportunities. We'd like to contribute to that open and ongoing conversation.

To kick things off, we're going to discuss what is perhaps the most fundamental aspect of Glass: how the user interacts with the device.

Glass, like any computer, takes input and delivers output. The input options for Glass are extremely limited:

1. Trackpad on the side

2. Voice

3. Accelerometer/gyro

4. Camera

5. Winking (hacked only; not supported out of the box)

Glass is not conducive to interactivity. The more interactive the application, the less desirable it will be to use. Physically "using" Glass is simply a pain. Try connecting to Wi-Fi, and you'll know exactly what I mean.

Why is Glass so painful to use? Swiping along a trackpad on the side of your head is unnatural. In the pre-Glass era, how many times did you rub your temple?

But what about voice? Google's voice-to-text technology is by all accounts the best in the world; most technology enthusiasts and bloggers agree that it's quite accurate. The problem with relying on voice - especially for any command longer than 2-3 words - is that the opportunities for error compound with every word. Every word is a potential point of failure. If Google's transcription service messes up one word, the entire command can be rendered effectively useless (that's why Siri attempts to account for transcription errors). When a voice command is rendered useless, it takes at least a few seconds to reset and try again, depending on the exact context. Per Google's Glass development guidelines, content on Glass must be timely. One of the defining characteristics of Glass is that you don't have to spend 5 seconds to reach into your pocket and unlock your phone. If it takes longer than 5 seconds to initiate an action, then you might as well have pulled your smartphone out of your pocket. There are exceptions - surgeons in the OR wouldn't be able to use their hands - but generally speaking, failing a voice command means that you could've and should've used your phone instead.
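To make that concrete, here's a back-of-the-envelope sketch. The 95% per-word accuracy is an assumed figure, not a published one, and it treats word errors as independent, but it shows how quickly the odds of a clean transcription decay with command length:

```java
// Illustrative arithmetic only: if each word is transcribed correctly with probability p
// and errors are roughly independent, an n-word command survives intact with probability p^n.
public class VoiceCommandOdds {
    static double commandSuccessRate(double perWordAccuracy, int words) {
        return Math.pow(perWordAccuracy, words);
    }

    public static void main(String[] args) {
        double p = 0.95; // assumed per-word accuracy
        for (int n : new int[] {1, 3, 5, 10}) {
            System.out.printf("%2d-word command: ~%.0f%% chance of a clean transcription%n",
                    n, 100 * commandSuccessRate(p, n));
        }
    }
}
```

At those assumed numbers, a ten-word command only comes through clean about 60% of the time, which is exactly why short commands matter so much.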

The accelerometer and gyro are useful to wake Glass from sleep, but I'm having a tough time visualizing apps making use of those sensors for any form of meaningful engagement. The human neck isn't designed to move and bend all that much; accelerometer- and gyro-based movements need significant triggers. Glass defaults to a 30-degree head tilt to wake from sleep, to prevent ambient waking. Developers can use the accelerometer and gyro, but they must do so conservatively. They cannot be used interactively.
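For illustration, a conservative tilt trigger can be as simple as estimating a pitch angle and comparing it to a threshold. This is a hypothetical sketch, not Glass's actual wake logic, and the axis convention is assumed:

```java
// Hypothetical sketch of a conservative head-tilt trigger. Assumes a gravity-dominated
// accelerometer reading (ax, ay, az) in m/s^2, with y pointing "down" when the head is level.
public class HeadTiltTrigger {
    private static final double WAKE_ANGLE_DEGREES = 30.0; // mirrors Glass's default wake angle

    /** Returns true when the estimated head tilt exceeds the wake threshold. */
    static boolean shouldWake(double ax, double ay, double az) {
        double pitch = Math.toDegrees(Math.atan2(az, Math.sqrt(ax * ax + ay * ay)));
        return pitch > WAKE_ANGLE_DEGREES;
    }

    public static void main(String[] args) {
        System.out.println(shouldWake(0.0, 9.81, 0.0)); // level head: false
        System.out.println(shouldWake(0.0, 7.5, 6.0));  // tilted back roughly 39 degrees: true
    }
}
```

A large threshold like this is what "conservative" means in practice: small, ambient head movements never fire the trigger.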

The camera provides by far the most raw input data, and thus holds the most potential. However, given Glass's screen size, its positioning relative to the human eye, and the challenges of implementing intelligent, dynamic object recognition, the camera is probably a long way off from becoming the defining input mechanism for most Glass apps. Trulia's real estate Glass app uses the camera in conjunction with GPS and the compass to show you data about the real estate you're looking at. This is a methodology I'm sure dozens of other apps will employ: using camera + GPS + compass to overlay data from a database. Because of those same screen and positioning constraints, though, the camera can't deliver a lot of interactivity. It can feed lots of data to a database in the cloud, but it can't provide for interactive apps, yet.
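The GPS + compass half of that approach is simple to sketch: compute the bearing from the wearer to a point of interest and check whether it falls within the camera's horizontal field of view. This isn't Trulia's implementation; the coordinates and the 54-degree field of view below are assumptions for illustration:

```java
// Sketch of the GPS + compass overlay check: is a known point of interest roughly
// in front of the wearer right now? Field of view and coordinates are assumed values.
public class HeadingOverlay {
    /** Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees from north. */
    static double bearingDegrees(double lat1, double lon1, double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1), phi2 = Math.toRadians(lat2);
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(phi2);
        double x = Math.cos(phi1) * Math.sin(phi2) - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    /** True if the target's bearing sits within half the field of view of the compass heading. */
    static boolean isInView(double headingDegrees, double targetBearingDegrees, double fovDegrees) {
        double diff = Math.abs(((targetBearingDegrees - headingDegrees + 540.0) % 360.0) - 180.0);
        return diff <= fovDegrees / 2.0;
    }

    public static void main(String[] args) {
        double bearing = bearingDegrees(30.2672, -97.7431, 30.2700, -97.7400); // two points in Austin
        System.out.println(isInView(40.0, bearing, 54.0)); // facing northeast: true
    }
}
```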

Winking will provide for lots of fun apps. It's a great trigger event. Coupled with other types of context - voice, camera, and location - it could provide for some unique forms of interactivity, though I'm not exactly sure what they'd look like. No matter the app, I don't think anyone wants to wink all day.

In the near term, voice will be the most compelling and useful input mechanism. Most commands will be brief to reduce the chance of failure. Glass devs who are hacking Glass to run native Android APKs are already using voice to navigate their apps. We are too. It works quite well. "Next", "previous", and "lookup [x]" work 99% of the time in reasonably controlled environments. At a bar, forget about it, but in a clinic or hospital, even with people talking nearby, voice is a compelling input.
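To show how little machinery short commands need, here's a minimal sketch of routing transcribed phrases like those. This is not our actual code; the handler behavior is made up for illustration:

```java
import java.util.Locale;

// Minimal, hypothetical router for short transcribed phrases. Unknown phrases are ignored
// rather than guessed at, so a misheard word can't trigger the wrong action.
public class VoiceCommandRouter {
    void next()              { System.out.println("advancing to the next card"); }
    void previous()          { System.out.println("returning to the previous card"); }
    void lookup(String term) { System.out.println("looking up: " + term); }

    /** Routes one transcribed utterance to a handler. */
    void route(String utterance) {
        String text = utterance.trim().toLowerCase(Locale.US);
        if (text.equals("next")) {
            next();
        } else if (text.equals("previous")) {
            previous();
        } else if (text.startsWith("lookup ")) {
            lookup(text.substring("lookup ".length()));
        }
    }

    public static void main(String[] args) {
        VoiceCommandRouter router = new VoiceCommandRouter();
        router.route("next");
        router.route("lookup metoprolol");
    }
}
```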

Longer term, I expect the camera to become the most powerful input. It provides an incredible amount of context and data. If Google decides to implement a larger screen that's more aligned with the human eye, the camera could become the defining input mechanism for most apps. Meta-View and AtheerLabs are already working on that dream. We'll see if Google decides to go that route. Given that Google has been positioning Glass as a consumer device, I have my doubts, but perhaps we'll see a shift in Google's product strategy, or an eyeware computing hardware portfolio for different use cases.

The other very promising input mechanism for Glass is the MYO armband. It can mitigate one of Glass's greatest weaknesses: limited inputs. MYO delivers a very elegant input solution that complements two of Glass's three unique traits: hands-free and always there. We're excited to integrate the MYO armband into Pristine's apps.