Market Segmentation in Computing

Today I reserved a rental car, and I noticed something interesting. According to Enterprise, the automobile market is segmented into at least 27 different sub-markets. Wow, that's incredible. I didn't know there were that many kinds of motor vehicles to choose from.

[Image: Rental Cars]

Then I remembered this Steve Jobs quote from March 2010: "PCs are going to be like trucks. They are still going to be around… they are going to be one out of X people." And I remembered this post and picture that I read just over a year ago:

Note that this graph only looks at extensible OS platforms where 3rd party developers can easily write applications for the platform. This excludes all "dumb" platforms, including traditional cellphones and embedded devices. Dumb platform units have numbered in the billions for years.

We are witnessing a remarkably rapid segmentation of the computing market. Over the decade spanning 2007-2016, the world will have completely shed itself of the Microsoft monopoly. And even within Microsoft's domain, which used to consist of laptops and desktops, Windows 8 is being adopted in a variety of form factors. It's still early days, so it's hard to know which form factors will fail and which will succeed, but there will surely be a larger variety of form factors than ever before. Some examples: touchscreen laptops, detachable tablets, slate tablets, fold-around tablet/laptops, dual-screen laptops, and more. Each of these form factors caters to the needs of different customers.

The old platforms could not adapt to the new usage models. They were simply unfit. Per Steve Jobs: "When we were an agrarian nation, all cars were trucks because that’s what you needed on the farms. Cars became more popular as cities rose." We are witnessing the same thing in computing. Smartphones and tablets are good enough for most computing most of the time. Why would you drive a truck (PC) when you could drive a car (tablet) instead?

Given my fascination with Google Glass, you might ask how Glass fits into this analogy. It kind of does, and it kind of doesn't. Glass is not good enough to replace many of the most common computing functions: email, browsing the web, reading, music, and video. Glass is inherently a passive and complementary computing experience. It can excel where PCs, smartphones, and tablets fail, but it cannot replace any of these devices. It can only function in addition to these other devices. Glass and its competitors will grow to take their own chunk of the computing market, but they will not directly compete with modern computing form factors.

Looking forward, I expect the computing market to continue to fragment. As CPUs shrink, computers are showing up in all kinds of new places. 50 years from now, I wouldn't be surprised if there are more than 27 different segments in the computing market.

A Balanced EHR Copy Forward Solution

This post was originally featured on HIStalk.

There’s been a recent wave of media coverage surrounding the topic of EHR copy forward functionality. Many have suggested that this function should be outright banned. The reasons vary, but in general most of the problems cited are related to the fact that the copy forward function in EHRs creates garbage and bloat in the patient’s record.

As someone who has experience designing and programming EHRs, who has deployed an EHR in inpatient and outpatient (PCPs and specialists) environments, and who has talked to hundreds of doctors about the subject in various presentations, I have a unique perspective to offer.

Lyle Berkowitz, MD, CMIO of Northwestern Memorial Hospital in Chicago, recently posted on the subject. He’s right. EHR copy forward is a great tool if used correctly. The problem is that EHRs make it too easy to abuse. Most of the copy forward functions in EHRs look at the last note and quite literally copy every field forward into the current note. This is problematic because full-note copy forward allows the doctor to copy forward too much information before all of it can be digested and understood.

There are easily dozens if not hundreds of data points in a given note. Doctors shouldn’t be encouraged to copy hundreds of data points into the current note before having a chance to complete the current assessment. It’s too much, too early in the examination process. The EHR should make it easy to copy forward information in manageable pieces.

I led the original design of a function in my company’s EHR called Copy to Present in the latter part of 2011. It’s similar to the copy forward feature in most modern EHRs. The primary difference is that it doesn’t copy the entire note forward, just the active area of focus. The function is available in conjunction with a date dropdown on all major sections of the chart.


For example, the physical exam page contains a date dropdown at the top of the page. When a doctor visits the physical exam page, the date dropdown defaults to the current date. Doctors can quickly review an old physical exam summary by selecting a date from the dropdown, which is populated with the dates of previous physical exams for the active patient. When looking at an old date, the Copy to Present button appears. Clicking it copies forward the selected physical exam to the current note. The Copy to Present button doesn’t affect any part of the chart other than the physical exam; all other areas are left intentionally untouched. After clicking the Copy to Present button, the physical exam data is editable as if the doctor had entered the data by hand.

A video demonstration of Copy to Present is above and here.
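For readers who think in code, here’s a minimal sketch of that interaction in TypeScript. Every type and function name below is a hypothetical illustration of mine, not the actual product’s code.

```typescript
// Minimal sketch of the Copy to Present behavior described above.
// All names here are hypothetical, not the real EHR's API.

interface PhysicalExam {
  date: string;                      // ISO date of the exam
  findings: Record<string, string>;  // e.g. { heart: "RRR", lungs: "CTA" }
}

interface ChartSectionState {
  selectedDate: string;   // value of the date dropdown
  currentDate: string;    // today's encounter date
  exams: PhysicalExam[];  // prior exams for the active patient
  draft?: PhysicalExam;   // editable exam for the current note
}

// The button only appears when the clinician is viewing an old exam.
function showCopyToPresentButton(state: ChartSectionState): boolean {
  return state.selectedDate !== state.currentDate;
}

// Clicking the button copies only this section's data into the current note;
// every other part of the chart is deliberately left untouched.
function copyToPresent(state: ChartSectionState): ChartSectionState {
  const source = state.exams.find(e => e.date === state.selectedDate);
  if (!source) return state;
  return {
    ...state,
    selectedDate: state.currentDate,
    // the copied data is now editable, as if the doctor had entered it by hand
    draft: { date: state.currentDate, findings: { ...source.findings } },
  };
}
```

The key design choice is that the copy is scoped to one section at a time, so the doctor reviews and pulls forward exactly one manageable piece of the chart per click.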

Copy to Present and the date dropdown are useful for data points that need to be collected and updated during every examination. Examples include chief complaints, physical exams, review of systems, and assessments and plans. In these scenarios, the Copy to Present function allows the doctor to understand what they recorded last time before copying forward to the current note. It provides the quick copy-forward function doctors want and need, while still allowing fine-tuned control over what’s copied forward.

However, Copy to Present is irrelevant when dealing with other types of information: for example, allergy lists, medication lists, problem lists, lab results, medical history, and surgical history. The most up-to-date versions of these data points should always be shown, regardless of who last updated the list or in which care setting (inpatient, outpatient, ED). EHRs should understand (but most don’t) that these pieces of information aren’t part of a particular note so much as they are relatively static pieces of data about the patient. Once labs and allergies are recorded, they should be available to any clinician who needs access to them, and they should always be up to date, independent of any clinical note.
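To make the distinction concrete, here’s a rough TypeScript sketch of how the two kinds of data might be modeled. The names are illustrative assumptions of mine, not how any particular EHR actually structures its records.

```typescript
// Note-scoped sections are copied forward explicitly; patient-level lists
// live outside any note and are always read in their latest state.
// All names are illustrative only.

interface NoteSection {
  noteId: string;   // belongs to a specific encounter note
  kind: "chiefComplaint" | "physicalExam" | "reviewOfSystems" | "assessmentAndPlan";
  content: string;
}

interface PatientLevelList {
  patientId: string;  // belongs to the patient, not to any note
  kind: "allergies" | "medications" | "problems" | "labs" | "medicalHistory" | "surgicalHistory";
  entries: string[];
  lastUpdated: string;  // whoever touched it last, in any care setting
}

// Note-scoped data is pulled forward only when the clinician asks for it...
function copySectionForward(prior: NoteSection, currentNoteId: string): NoteSection {
  return { ...prior, noteId: currentNoteId };
}

// ...whereas patient-level lists are simply read in their current state.
function currentList(
  lists: PatientLevelList[],
  patientId: string,
  kind: PatientLevelList["kind"],
): PatientLevelList | undefined {
  return lists.find(l => l.patientId === patientId && l.kind === kind);
}
```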

EHRs need to understand the kind of information they’re handling. Different pieces of information should be handled differently depending on what the information is, who is accessing it, and what that person needs to do with it. EHR vendors have a responsibility to ensure they provide the tools to make sure clinicians can get what they need, when they need it, and understand it as quickly as possible.

The Major Shortcoming of Google Glass

So far I've only written about the opportunities for Glass. Everything I've written has been overwhelmingly positive. But Google Glass has many shortcomings.

The greatest shortcoming is the lack of a significant UI that the user can actively engage with. The point of Glass is to make the technology go away, not put it in your face (no pun intended). Unlike PCs, tablets, and smartphones, Glass is intended to be as close to invisible as possible. Thus, it's much more difficult to provide complex user input signals.

For the 1.0 launch, Glass will recognize the following input methods. It is unclear if and how developers will have access to some of them, but the hardware can recognize:

1. Taps/swipes on the side of Glass

2. Audio/Voice

3. Image/Picture

4. Video

5. Accelerometer

Unfortunately, none of these input mechanisms are particularly useful for detailed, powerful user interaction. Voice does provide some granularity, but the more complicated the voice command, the more NLP it requires. NLP technologies are still in their infancy.

Google has said that there will be iOS and Android SDKs to go along with Glass. Based on my conversations with Google's employees yesterday at SXSW Interactive, I don't think these SDKs will be ready to go on day 1. Thus, people won't be able to use smartphones as a remote control for Glass on day 1. Given that Glass already accepts taps and swipes on the side panel, it would be nice to accept those same inputs via smartphone as a complementary UI mechanism.

Perhaps the most obvious form of granular control with Glass is pointing with the human finger. This would require the camera to be on, significant processing power, and sophisticated video/image-recognition algorithms. I sincerely doubt Google will have these APIs ready to go on day 1, if ever. Perhaps 3rd party developers will develop such algorithms and make them available to other developers via APIs.

I think Google knows the UI opportunities of Glass are limited. And they also see a proliferation of new UI mechanisms - see the Leap Motion Controller and the MYO Armband as examples. I find the MYO Armband + Glass to be a phenomenally compelling combination. Unlike the Leap Motion Controller, the MYO Armband maintains one of the key characteristics of Glass - hands free. You could start or stop the camera by simply rubbing your fingers together correctly. You could snap a picture anytime you clap your hands or snap your fingers. You could initiate a live stream without even taking your hand out of your pocket. You could manipulate the camera imagery by waving your hands and fingers in the air.
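Purely as a thought experiment, here’s what that gesture-to-action mapping might look like in TypeScript. Every gesture name and action below is an assumption of mine; neither the Glass nor MYO SDKs expose anything like this today.

```typescript
// Speculative sketch only: a hypothetical armband SDK that emits named
// gesture events, mapped onto the Glass camera actions imagined above.

type Gesture = "fingerRub" | "handClap" | "fingerSnap" | "fistClench" | "handWave";
type GlassAction = "toggleCamera" | "takePicture" | "startLiveStream" | "panImage";

const gestureMap: Record<Gesture, GlassAction> = {
  fingerRub: "toggleCamera",     // start or stop the camera
  handClap: "takePicture",       // snap a photo
  fingerSnap: "takePicture",
  fistClench: "startLiveStream", // hand can stay in your pocket
  handWave: "panImage",          // manipulate the camera imagery
};

// Dispatch whatever action is bound to the recognized gesture.
function onGesture(g: Gesture, dispatch: (a: GlassAction) => void): void {
  dispatch(gestureMap[g]);
}
```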

Vitamins and Pain Pills

Every company attempts to solve a problem. Some problems cause enormous pain. Others do not. For the problem-ridden customer, the product is either a vitamin or a pain pill. Unfortunately, many entrepreneurial companies are vitamins, not pain pills.

In my experience talking with a few hundred people about Glass at SXSW 2013, the immediate application opportunities people think of are vitamins. For example, ChatRoulette for Glass, driving with Glass, finding strangers in a crowd with Glass. Unfortunately, 99% of the ideas people have for Glass are vitamins. And that's true not because the ideas are bad ideas, but because of the nature of Glass itself.

The #1 sales challenge for any application developer that wants to write on the Glass platform is the cost of Glass itself, $1500, especially given that everyone already has iPhones and Androids in their pockets. In order for a Glass application to be successful, the application needs to be a pain pill, not a vitamin.

As Glass becomes more popular, more novel vitamin applications will spring up. But in the first year the device is on the market, the only successful applications will be those that solve a big problem that iPhones and Androids cannot. I will venture to guess that at least 90% of the successful Glass apps in Glass's first year of availability will be commercial applications. Companies will pay to make their employees more effective and efficient at their jobs.

Glass Developers: please, ignore the silly consumer applications. If you want to do anything on that platform, write something that companies will pay for. In the first year Glass is on the market, consumers won't.

On Being Wrong and Learning

SXSW is going on this weekend. I've spent the past 2 nights out on the town mingling and socializing. I woke up early this Saturday morning to catch up on life, then head out and do it all over again. And while I was in the shower, I recalled a great TED video I watched well over 6 months ago.

You are never wrong. You can have been wrong. But you never actively go about your day knowing and accepting "I am wrong." There's a pivotal moment between the state of being wrong and not being wrong. That sliver in time is special. For some, it's a painful moment. Many of those people never experience it. I love it. I find it to be fascinating. Because once you learn to deal with the initial discomfort of being wrong, you can learn to create an enormous number of new opportunities.

The single most prominent characteristic that affects your ability to remember things is vividness. There's a reason why you can't remember what you ate for breakfast yesterday, but you can remember your wedding and graduations from 20 years ago so clearly.

Synthesizing the 2 ideas above effectively explains the proverb "learn from your mistakes."

I love being wrong. Because being wrong creates a distinct memory that you can learn from. The more wrong you are, the more likely you are to internalize and learn from your mistakes. The more adamant you were, the more humbled you should become after being wrong.