Friction-Free Computing

I've previously outlined the 3 fundamentally unique characteristics of Glass:

1) Hands free

2) Heads up display

3) Omnipresent/friction-free (it's always there)

I think #3 is by far the most profound. As I've outlined before, every new computing platform reduced the friction between the user and the computer. Glass is the next step after smartphones.

Computers are phenomenally good at looking up and displaying information extremely quickly; Google search is the prominent example. However, feeding real-world input into a smartphone (or any other existing computing device) in real time is too cumbersome. Who's going to walk around all day holding their phone out in front of them? That's where Glass comes into play. Glass provides an incredible new opportunity to compute on an enormous amount of data - a raw audio/video feed of what you're seeing - and to display contextual information in real time. Myriad augmented reality apps have been released for smartphones, but most failed to gain any significant traction because there was too much friction between the hardware platform and the user; the software was dead on arrival. Glass removes that friction, and in doing so creates enormous new opportunities.

The best Glass apps will be those that take advantage of this unique trait. The implications are incredible. In the not-too-distant future, you'll be able to contextually pull up information about almost anything in front of you. There will be opportunities in manufacturing, healthcare, education, and a host of other industries.

Market Segmentation in Computing

Today I reserved a rental car, and I noticed something interesting. According to Enterprise, the automobile market is segmented into at least 27 different sub-markets. Wow, that's incredible. I didn't know there were that many kinds of motor vehicles to choose from.

[Image: Enterprise's rental car categories]

Then I remembered this Steve Jobs quote from March 2010: "PCs are going to be like trucks. They are still going to be around… they are going to be one out of X people." And I remembered this post and picture that I read just over a year ago:

Note that this graph only looks at extensible OS platforms where 3rd party developers can easily write applications for the platform. This excludes all "dumb" platforms, including traditional cellphones and embedded devices. Dumb platform units have numbered in the billions for years.

We are witnessing a remarkably rapid segmentation of the computing market. Over the decade spanning 2007-2016, the world will have completely shed itself of the Microsoft monopoly. And even within Microsoft's traditional domain of laptops and desktops, Windows 8 is being adopted in a variety of form factors. It's still early days, so it's hard to know which form factors will fail and which will succeed, but there will surely be a larger variety of form factors than ever before. Some examples: touchscreen laptops, detachable tablets, slate tablets, fold-around tablet/laptops, dual-screen laptops, and more. Each of these form factors caters to the needs of different customers.

The old platforms could not adapt to the new usage models. They were simply unfit. Per Steve Jobs: "When we were an agrarian nation, all cars were trucks because that’s what you needed on the farms. Cars became more popular as cities rose." We are witnessing the same thing in computing. Smartphones and tablets are good enough for most computing most of the time. Why would you drive a truck (PC) when you could drive a car (tablet) instead?

Given my fascination with Google Glass, you might ask how Glass fits into this analogy. It kind of does, and it kind of doesn't. Glass is not good enough to replace many of the most common computing functions: email, browsing the web, reading, music, and video. Glass is inherently a passive and complementary computing experience. It can excel where PCs, smartphones, and tablets fail, but it cannot replace any of them; it can only function alongside them. Glass and its competitors will grow to take their own chunk of the computing market, but they will not directly compete with modern computing form factors.

Looking forward, I expect the computing market to continue to fragment. As CPUs shrink, computers are showing up in all kinds of new places. 50 years from now, I wouldn't be surprised if there are more than 27 different segments in the computing market.

A Balanced EHR Copy Forward Solution

This post was originally featured on HIStalk.

There’s been a recent wave of media coverage surrounding the topic of EHR copy forward functionality. Many have suggested that this function should be banned outright. The reasons vary, but in general, most of the problems cited relate to the fact that the copy forward function in EHRs creates garbage and bloat in the patient’s record.

As someone who has experience designing and programming EHRs, who has deployed an EHR in inpatient and outpatient (PCPs and specialists) environments, and who has talked to hundreds of doctors about the subject in various presentations, I have a unique perspective to offer.

Lyle Berkowitz, MD, CMIO of Northwestern Memorial Hospital in Chicago, recently posted on the subject. He’s right: EHR copy forward is a great tool if used correctly. The problem is that EHRs make it too easy to abuse. Most copy forward functions in EHRs look at the last note and quite literally copy every field forward into the current note. This is problematic because full-note copy forward lets the doctor pull in far more information than can be digested and understood.

There are easily dozens, if not hundreds, of data points in a given note. Doctors shouldn’t be encouraged to copy hundreds of data points into the current note before having a chance to complete the current assessment. It’s too much, too early in the examination process. The EHR should instead make it easy to copy information forward in manageable pieces.

I led the original design of a function in my company’s EHR, called Copy to Present, in the latter part of 2011. It’s similar to the copy forward feature in most modern EHRs. The primary difference is that it doesn’t copy the entire note forward, just the active area of focus. The function is available, in conjunction with a date dropdown, on all major sections of the chart.

[Image: the Copy to Present date dropdown in the EHR]

For example, the physical exam page contains a date dropdown at the top of the page. When a doctor visits the physical exam page, the dropdown defaults to the current date. Doctors can quickly review an old physical exam summary by selecting a date from the dropdown, which is populated with the dates of previous physical exams for the active patient. When looking at an old date, the Copy to Present button appears. Clicking it copies the selected physical exam forward into the current note. The Copy to Present button doesn’t affect any part of the chart other than the physical exam; all other areas are left intentionally untouched. After clicking the button, the physical exam data is editable as if the doctor had entered it by hand.
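
To make the mechanics concrete, here’s a minimal sketch of that behavior in Python. The record structure and function names are hypothetical illustrations of the idea, not our EHR’s actual code:

from dataclasses import dataclass, field
from datetime import date

# Hypothetical model: each physical exam is stored as its own dated record.
@dataclass
class PhysicalExam:
    exam_date: date
    findings: dict = field(default_factory=dict)

def previous_exam_dates(exams, today):
    # Dates shown in the dropdown: every prior exam for the active patient.
    return sorted((e.exam_date for e in exams if e.exam_date < today), reverse=True)

def copy_to_present(exams, selected_date, today):
    # Copy only the selected physical exam forward into today's note.
    # Nothing else in the chart is touched, and the copied findings are
    # editable afterward, as if the doctor had entered them by hand.
    source = next(e for e in exams if e.exam_date == selected_date)
    return PhysicalExam(exam_date=today, findings=dict(source.findings))

The important design choice is the scope: the function operates on one section of the chart at a time, never on the whole note.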

A video demonstration of Copy to Present accompanies this post.

Copy to Present and the date dropdown are useful for data points that need to be collected and updated during every examination. Examples include chief complaints, physical exams, review of systems, and assessments and plans. In these scenarios, Copy to Present allows the doctor to review what they recorded last time before copying it forward into the current note. It provides the quick copy forward function doctors want and need, while still allowing fine-tuned control over what’s copied forward.

However, Copy to Present is irrelevant for other types of information: allergy lists, medication lists, problem lists, lab results, medical history, and surgical history. The most up-to-date versions of these data points should always be shown, regardless of who last updated them or in which care setting (inpatient, outpatient, ED). EHRs should understand (though most don’t) that these pieces of information aren’t part of a particular note so much as they are relatively static facts about the patient. Once labs and allergies are recorded, they should be available to any clinician who needs access to them, and they should always be up to date, independent of any clinical note.
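
As a rough sketch of that distinction (the type names here are mine, purely illustrative, and the example lists come straight from the paragraph above):

from dataclasses import dataclass, field
from datetime import date

@dataclass
class NoteScopedData:
    # Collected fresh, or deliberately copied forward, at each encounter:
    # chief complaint, physical exam, review of systems, assessment and plan.
    note_date: date
    content: dict = field(default_factory=dict)

@dataclass
class PatientScopedData:
    # Lives with the patient, independent of any note: allergies, medications,
    # problem list, labs, medical and surgical history. The EHR should always
    # surface the latest version, regardless of author or care setting.
    last_updated: date
    entries: list = field(default_factory=list)

Copy to Present applies only to the first kind; the second kind should never need copying at all.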

EHRs need to understand the kind of information they’re handling. Different pieces of information should be handled differently depending on what the information is, who is accessing it, and what that person needs to do with it. EHR vendors have a responsibility to provide tools that ensure clinicians can get what they need, when they need it, and understand it as quickly as possible.

The Major Shortcoming of Google Glass

So far I've only written about the opportunities for Glass. Everything I've written has been overwhelmingly positive. But Google Glass has many shortcomings.

The greatest shortcoming is the lack of a significant UI that the user can actively engage with. The point of Glass is to make the technology go away, not put it in your face (no pun intended). Unlike PCs, tablets, and smartphones, Glass is intended to be as close to invisible as possible. Thus, it's much more difficult to provide complex user input signals.

For the 1.0 launch, Glass will recognize the following input methods. It is unclear if and how developers will have access to some of them, but the hardware can recognize:

1. Taps/swipes on the side of Glass

2. Audio/Voice

3. Image/Picture

4. Video

5. Accelerometer

Unfortunately, none of these input mechanisms are particularly useful for detailed, powerful user interaction. Voice does provide some granularity, but the more complicated the voice command, the more natural language processing (NLP) is required, and NLP technologies are still in their infancy.

Google has said that there will be iOS and Android SDKs to go along with Glass. Based on my conversations with Google's employees yesterday at SXSW Interactive, I don't think these SDKs will be ready to go on day 1. Thus, people won't be able to use smartphones as a remote control for Glass on day 1. Given that Glass already accepts taps and swipes on the side panel, it would be nice to accept those same inputs via smartphone as a complementary UI mechanism.

Perhaps the most obvious form of granular control with Glass is pointing with the human finger. This would require the camera to be on, significant processing power, and sophisticated video/image-recognition algorithms. I sincerely doubt Google will have these APIs ready to go on day 1, if ever. Perhaps 3rd party developers will develop such algorithms and make them available to other developers via APIs.
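
To give a sense of what even crude finger tracking involves, here's a minimal sketch in Python using OpenCV (assuming OpenCV 4 and a naive skin-color threshold; this is nowhere near the robustness Glass would actually need):

import cv2
import numpy as np

def find_fingertip(frame_bgr):
    # Return the (x, y) of the most likely fingertip in a frame, or None.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Crude skin-color range; heavily lighting-dependent, purely illustrative.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 1000:  # ignore small noise blobs
        return None
    # Heuristic: treat the topmost point of the hand contour as the fingertip.
    x, y = hand[hand[:, :, 1].argmin()][0]
    return int(x), int(y)

Even this toy version has to threshold, denoise, and analyze contours on every frame; doing it reliably, continuously, and on head-mounted battery power is a different problem entirely.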

I think Google knows the UI opportunities of Glass are limited. And they also see a proliferation of new UI mechanisms - see the Leap Motion Controller and the MYO Armband as examples. I find the MYO Armband + Glass combination phenomenally compelling. Unlike the Leap Motion Controller, the MYO Armband maintains one of the key characteristics of Glass: hands free. You could start or stop the camera by simply rubbing your fingers together. You could snap a picture any time you clap your hands or snap your fingers. You could initiate a live stream without even taking your hand out of your pocket. You could manipulate the camera imagery by waving your hands and fingers in the air.

Vitamins and Pain Pills

Every company attempts to solve a problem. Some problems cause enormous pain; others do not. To the problem-ridden customer, the product is either a vitamin or a pain pill. Unfortunately, many entrepreneurial companies build vitamins, not pain pills.

In my experience talking with a few hundred people about Glass at SXSW 2013, the immediate application opportunities people think of are vitamins: ChatRoulette for Glass, driving with Glass, finding strangers in a crowd with Glass. Unfortunately, 99% of the ideas people have for Glass are vitamins. And that's true not because the ideas are bad, but because of the nature of Glass itself.

The #1 sales challenge for any application developer that wants to write for the Glass platform is the cost of Glass itself - $1,500 - especially given that everyone already has iPhones and Androids in their pockets. In order for a Glass application to be successful, the application needs to be a pain pill, not a vitamin.

As Glass becomes more popular, more novel vitamin applications will spring up. But in the first year the device is on the market, the only successful applications will be those that solve a big problem that iPhones and Androids cannot. I will venture to guess that at least 90% of the successful Glass apps in Glass's first year of availability will be commercial applications. Companies will pay to make their employees more effective and efficient at their jobs.

Glass Developers: please, ignore the silly consumer applications. If you want to do anything on that platform, write something that companies will pay for. In the first year Glass is on the market, consumers won't.