The Resurgence of the Command Line UI

This is the 3rd post in a 3-post series examining physical UIs. The first 2 posts covered the UIs that dominate the PC environment: the first focused on the keyboard, detailing the ins and outs of Alfred, and the second on the trackpad, illustrating the extensibility of BetterTouchTool (BTT).

In the BTT post, I asserted that BTT is the end-all-be-all of trackpad-based UIs given the hardware constraints of modern trackpads. In short, BTT allows you to map any gesture that the hardware trackpad can recognize to any software function. In input-output terms, BTT can map any trackpad-recognizable human input to any desired software output. Nothing more can be done on this front.

However, in the Alfred post, I concluded by stating that Alfred is not the end-all-be-all of "command line" user interfaces. There is still significant room for improvement in command line UIs. This post will explore what that end-all-be-all might look like. But first, let's review what command line UIs are, and look at how they've evolved over the past 30 years.

What is a command line UI?

The definition of a command line UI is quite broad. Command line UIs are not just traditional command lines; they can take many forms and shapes. A simple definition: if you can command the computer to do something in a character-driven format (i.e. via the keyboard), then it's a command line UI. That means the mouse and trackpad are out of the question, since they don't utilize characters. Thinking a bit more broadly, command line UIs would also exclude reading brain signals, Kinect-like motion controls, or any other input form that cannot be translated into unambiguous human-readable characters. It's important to note that command line UIs aren't limited to the keyboard - a command line UI can manifest whenever the computer can take another form of input and translate it into unambiguous characters.

A history of command line UIs

The first computers utilized the most low-level form of command line UIs. You told the computer exactly what you wanted it to do: which memory addresses to put data into, which memory addresses to read data from, and how to manipulate data. It doesn't get much lower-level than that. Of course, laymen could never utilize such an interface to get anything done. The requisite knowledge to use this kind of computer was beyond daunting. With time, computers evolved so that the command lines could understand higher-level commands, such as reading and saving files and launching applications within a filesystem, as opposed to memory addresses. They abstracted away the complexity of memory addresses and other low-level system functions so the user didn't have to worry about them. Even still, they were hardly friendly. Could you imagine teaching your grandmother to type "launch Windows.exe" to boot up her computer? That's exactly how I booted into Windows 95 on my first computer.

In 1984, Apple unveiled the Macintosh, with the promise to banish the command line and all of its unfriendliness and ugliness. And it did. Except that it didn't.

As graphical UIs (GUIs) grew through the 80s and into the 90s, the command line of yore was relegated to a simple OS-level file search. And it was a slow and ugly search hidden behind multiple UI layers in both Windows and Mac OSes. Do you remember how many clicks it took to access the search function in Windows XP? Three. That's horrific.

Then, Google happened. Google re-ignited the concept of using a single text box as the entry point to any content you could conceivably want to find on the Internet. There are 999,435,454,456,876,655,345,945,888,234,457,962,235,654 pages on the internet, and Google realized that there's no way you could ever browse through that volume of data using a mouse. After all, the mouse is slow and inefficient. Luckily, the keyboard makes it extremely easy to solve the problem of sifting through hordes of data. Here's a simple way to think about how the keyboard does that: let's say there are 37 keys on the keyboard - one for every letter, number, and the space bar. Under this assumption, there are 37^10 = 4.8 * 10^15 possible combinations of 10-character queries. You can enter your desired character combination, and Google can cross-reference your character combination against every publicly available document on the Internet to give you 10 blue links.
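To make that arithmetic concrete, here's a quick back-of-the-envelope check (a minimal Python sketch, using the simplified 37-key assumption from above):

```python
# Back-of-the-envelope check of the query-space math above, assuming a
# simplified 37-key keyboard: 26 letters + 10 digits + the space bar.
keys = 26 + 10 + 1
query_length = 10

combinations = keys ** query_length
print(f"{combinations:,}")    # 4,808,584,372,417,849
print(f"{combinations:.1e}")  # ~4.8e+15 distinct 10-character queries
```

A mere 10 keystrokes can address quadrillions of distinct queries, which is why a single text box scales to the web in a way the mouse never could.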

As Google grew in size and prominence, Microsoft grew jealous of Google. In Windows Vista, Microsoft completely revamped search and brought it directly to the Start menu. And it was a pretty good search, spanning not only local OS files, but applications and system settings. Apple did the same in one of the first few releases of OSX and called it Spotlight. However, both Microsoft and Apple failed to plug their native OS search engines into web search engines.

As Google grew through the 2000s, so did its competition. Everyone wanted a piece of Google's search revenue, not just Microsoft and Yahoo. Thousands of smaller vertical search engines, such as Amazon, eBay, Zappos, Wikipedia, Yelp, Mapquest, MySpace, Facebook, ZocDoc and Quora knew they could search specific domains far better than the generic Google could. As each of these companies grew, they iterated and developed their searches. Some became quite good within their respective fields, but no matter how good they became, they all faced the same accessibility problem: you have to navigate to their websites before you can search them. What a pain!

In the mid 2000s, a developer recognized this growing accessibility problem and created Quicksilver to solve it. Quicksilver, like Alfred, made it very easy to invoke any web search engine from anywhere in OSX. Unfortunately, the Quicksilver developer stopped developing the app even though its users loved it. In Quicksilver's absence, Alfred emerged to take its place as the prominent 3rd party search utility for OSX, doing everything that Spotlight should have done.

And so that brings us into the 2000-teens. What's next for the command line?

Search engine overload

Today, there are thousands of vertical search engines. Each is suited to its particular field. In many cases, these vertical search engines can do a better job than Google can within their respective areas of expertise. So why don't people stop using Google and use these other search engines that can deliver better results?

The beauty of Google is that it's incredibly easy to remember and to use. If you need to find something online, just Google it. Or you could remember Quora for questions, Wolfram Alpha for computations, Zappos for shoes, IMDB for movies, Amazon for random merchandise, Kayak for airplane tickets, CNN for the news, TechCrunch for tech news, Economist for economic opinion pieces, etc. Computer geeks and power users like me won't have a tough time remembering which engines to use for which purposes, but most people will. Google solves the accessibility and memorization problem, even if Google delivers inferior search results compared to an optimized domain-specific search engine. For that incredibly simple reason, Google wins while everyone else loses.

Unfortunately, Alfred doesn't solve the search-engine-memorization problem. It reduces the friction between you and the search engine that you'd like to use, but it still requires that you know which one you want to use, and that you specify that detail to Alfred every time you search. In this sense, Alfred isn't particularly intelligent. It can't plug into anything beyond an inherently limited set of pre-configured search engines.

The technology that solves the memorization problem is the future of command-line interfaces. And such technology is commercially available today, sort of. Google, Apple, and Microsoft all recognize this problem and the tremendous value they can create by solving it. But they're employing very different strategies to solve it. Let's start with Google, the command-line incumbent.

Google's solution to this problem is "Google Now". Google Now allows you to speak to it in human language (your speech is translated into text in the background, hence making it a command line UI). If Google Now can make sense of your question, it will attempt to answer it succinctly. Google Now is paradigmatically asymmetric to the Google we've all come to know and love. The original Google didn't answer your questions - it sent you elsewhere for someone else to answer your question. Google Now is fundamentally different: it attempts to understand the meaning of your question and answer it directly, without returning 10 blue links. Of course, Google Now isn't that smart; you can stump it pretty easily, and when you do, Google Now reverts back to the old Google and gives you 10 blue links. But there's a second aspect to command-line interfaces beyond looking up information, and that's "doing stuff," such as sending emails, creating reminders, and sharing content. Google Now only attempts to answer your questions. Unlike Alfred, Google Now doesn't "do" much. It can't send emails, it can't create reminders, and it can't add songs to playlists; but it's great at looking up flight information.

Apple's solution to the search engine overload problem is Siri. As far as the user is concerned, Siri tries to do the same thing as Google Now - it takes your query (your command) and tries to give you an answer, or do what you tell it to do. Unlike Google Now, Siri is actually quite good at "doing stuff", like playing a song, sending something to a friend, or creating a reminder. However, for everything Siri does right with regards to "doing", Siri fails when it comes to searching. Apple hasn't indexed the entire Internet and built an enormous Knowledge Graph on top of it. Right now, Siri is very limited in the information it can look up. Siri relies on a variety of vertical knowledge bases to look up information, including Yelp for restaurants, OpenTable for reservations, Yahoo for sports, IMDB for movies, and Wolfram Alpha for computations. Ironically enough, when Siri knows that none of these vertical search engines is sufficient for the task at hand, it reverts to Google, which gives you 10 blue links.

Siri is also limited in what it can do: Siri today only works with a select few services that Apple integrated into iOS. However, I suspect that iOS 7 (and if not iOS 7, then iOS 8) will usher in Siri APIs so that developers can plug into Siri. When that happens, Siri will determine which app is best suited for a given command, then hand off the search or action parameters to the app so it can search or "do" using context-specific criteria. This has never been done, but it's the logical progression of Siri technology; the foundation is in place and it will happen, it's just a question of when and how well it will work.

Make no mistake about it: Siri is Apple's Google-competitor. Apple is smart enough to know that it can't beat Google head-on. Apple has watched Microsoft try to compete with Google head-on: Bing isn't as good at search, has never been close to profitable, and has no viable path to profitability given its current trajectory and strategy. Apple knows that the only way to take down a giant like Google is to compete asymmetrically (see The Innovator's Dilemma and The Innovator's Solution by Clayton Christensen for more on asymmetric technological competition). Siri asymmetrically competes with Google by attempting to make Google irrelevant.

And last, and most certainly least, is Microsoft. Microsoft hasn't yet released a search- or action-driven solution to compete with Google Now or Siri, though it has taken steps in Bing to give you the "answer" you're looking for instead of 10 blue links. However, Bing only tries to return answers for common searches, such as celebrities. I'm certain Microsoft is working on a consumer-facing product to compete in this space with Siri and Google Now; it's just not out yet. I would guess that whatever Microsoft does, it will leverage Facebook's new Graph Search to better tailor the answer. Microsoft knows it's the underdog and latecomer relative to Google and Apple on this front, and it needs all the help and data it can get to compete. Given Microsoft's technological and financial relationship with Facebook, utilizing Facebook's Graph Search seems even more likely.

All for one, or one for all?

Apple and Google are approaching the same problem - how to implement the best command line user experience - from opposite directions. And it shows. Today, Siri excels where Google Now fails - doing stuff - and Siri fails where Google Now excels - looking stuff up. This makes sense given the historical strengths of each company.

As Google Now and Siri mature, they will add features where they're currently lacking. However, Apple and Google are going about this in radically different ways. Google is attempting to determine the answer to your question, and do whatever it is that you need it to do, all on its own. Apple sees itself as a middle man, whose job is to determine which of its developers is best qualified to perform a given task, be it search or doing stuff. Once Apple has made that determination, it will hand off the relevant parameters to that application so that it can deliver the results that you want.

My first blog post examined this topic in quite a bit more depth, looking at the problem in the historical light of "monolith vs an army of developers." Historically, the army of developers has won all the wars that I'm aware of. However, given how dominant Android is today, and how limited iOS is because of physical distribution and price-point limitations, it's highly likely that both strategies continue to co-exist for quite some time, even if one method proves to be materially better than the other. When Steve Jobs declared "thermonuclear war" on Android in early 2010, he really just meant that Apple was buying Siri, with hopes that Siri could eventually destroy Google. Apple bought Siri in April 2010.

Siri and Google Now represent the end-all-be-all of command line UIs. There's literally no faster way for a human to generate the kind of nuanced queries that computers can parse into a desirable result than talking in your native tongue. Sure, you can swipe your trackpad far more quickly than you can speak "go to the next tab", but can you wave at your Kinect and have it search for the name of the general of the Confederacy during the American Civil War? Even technologies like Google Glass, even if perfected, wouldn't provide a faster UI mechanism than speaking for complex queries (though Google Glass will most certainly open new UIs for certain types of queries, namely image-driven queries); in fact, in all likelihood, one of the headline features of Google Glass 1.0 will be seamlessly integrated Google Now. Google Glass will perpetuate Google Now, not replace it. There's nothing better than asking a question at any time, and hearing the answer spoken back by a computerized female voice immediately…

Unless, of course, the computer can deliver the result you want before you explicitly tell it anything at all. Serendipity anyone? To be continued…

The Best Desktop UI, Part 2

In my first post of this 3-part series, I detailed why Alfred extends the PC keyboard into the ultimate productivity tool. This post will focus on another free OSX app, BetterTouchTool (BTT). BTT allows you to map any recognizable gesture to any system command or keyboard shortcut in OSX, at the global or application-specific level. But before we get into the details of BTT, let's look at the evolution of the computer mouse.

Gestures > point and click

How do most 1-year-old children express that they want something? They point. It's a beautifully simple concept. It's fundamental to human interaction and communication. If you want stuff, you point at it. If you see something of interest, you'll point. If you want to show your friends something cool, just point.

The original conception of a computer mouse was based on this principle. Just point (and click).

This worked well when you couldn't do much with a computer, and when you didn't spend all day sitting at a computer. But today, hundreds of millions of people spend all day sitting in front of a PC. Every few seconds, people demand something new of their computer: open a new email, send an email, open the file browser, select a folder, select another folder, select another folder, copy, switch to another application, paste. I Googled to find the average number of mouse clicks per person per day. I couldn't find any well-crafted studies, but the preliminary Google results suggest 472. I probably click 4-5x as much as the average user, putting me over 2,000 clicks per day.

To make matters worse, you often have to click at opposite ends of the screen. In both OSX and Windows, the 3 primary window management controls (red/yellow/green on OSX; minimize, restore, and X on Windows) are at the top of the screen, and the primary application management tools (the Dock and the Task Bar) are at the bottom of the screen by default. So you have to incessantly move the mouse across the entire screen, and make sure that it lands on a small target every time.

This UI paradigm is stupid. How many times have you clicked on the button next to the one you wanted, such as "Don't Save" instead of "Save"? How many times have you clicked "X" instead of Minimize? How many times have you clicked on Chrome instead of Mail?

Because the mouse offers 1 pixel of accuracy (the human finger offers 50-70 pixels of accuracy), PC OSes and applications have been designed with UI elements that are only a few pixels apart. Although the mouse offers the resolution to differentiate between these UI elements, humans don't have nearly the dexterity to quickly move the mouse and land on small targets in quick succession. The mouse, coupled with modern desktop OSes and applications, promotes frustration.

BTT makes all of these problems go away. It solves every problem that the mouse created, and it does so much more. BTT accomplishes all of this by circumventing the entire concept of the mouse as you knew it.

Back in the dark ages of computing when mice were still a novel concept, they couldn't do more than point-and-click. At any point in time, mice tracked a single data point - which direction do I move next: up, down, left, or right?

Contrast the mouse of the 70s and 80s with the modern Apple trackpad, which can track up to 10 fingers at once. It doesn't just capture a single position at a single point in time; it can understand and recognize 10 different fingers moving across a fixed x-y plane with absolute coordinates, at different speeds and angles. It captures raw input data across 2 dimensions, location and time, that the original mouse never could.

So what can you do with a trackpad and OS that can make sense of all of this data? Only 1 thing, actually. Make beautiful art using IOGraph. I crafted this spectacular piece while writing this blog post.

IOGraph.png

Just kidding.

With BTT, you can perform just about any software function on your Mac using your trackpad. BTT lets you map any gesture that your Apple trackpad can recognize to any keyboard shortcut or system command. These commands can be global or application-specific.

That was quite a bit to digest, so let's walk through BTT step-by-step. After that, I'll offer a few examples that really illustrate the power of BTT. Remember, BTT operates in the background and never appears on the screen, unlike Alfred. The graphical BTT UI exists to configure how BTT behaves in the background.

Here's a screenshot of BTT. The red numbers and lines are courtesy of my exceptional Photoshop skills.

BTT.JPG

For now, ignore the red 1. Although that radio control is the highest organizational level of the BTT UI, we'll come back to it later.

The red 2 shows a list of apps running down the left side of the screen. These are apps that I've configured BTT to work with. Note that "Global" is listed at the very top.

The red 3 shows a list of gestures that I'm using for the selected app in the red 2 list. I've implemented global commands for the gestures: four finger click, four finger swipe down, three finger click, and three finger tap. Some of those gestures are repeated because the repeated entries use modifiers. We'll revisit modifiers in a bit.

The red 4 shows the commands that I've mapped to each of the gestures in red 3. So for the "four finger click" gesture, if I click the trackpad with four fingers at any time while using my Mac, it will open a new Finder window, and if Finder is already the foreground app, BTT will open a second Finder window. For the second gesture, if I swipe down with 4 fingers at any time, BTT will simulate pressing Control+Space. On my Mac, Control+Space invokes a window-management app, Divvy, that I frequently use for granular window management. For the fifth gesture, if I click the trackpad with 3 fingers at any time, BTT will simulate pressing Command+W, which is the global OSX command to close the active window.

Lastly, the red 5 shows modifiers for a given gesture. To understand modifiers, look at the third gesture in the screenshot above. If I hold Command while clicking the trackpad with 3 fingers, BTT will maximize the active window. Although modifier functionality technically breaks the trackpad-only designation that I gave BTT, it still falls in line with the thesis of "you don't have to move your mouse."

Next, I'd like to show you my BTT configuration for Chrome, where you can see BTT really shine.

BTT Chrome.JPG

In Chrome, if I rotate right, BTT simulates pressing Command+R = page refresh. What an intuitive gesture! If I swipe right with 3 fingers, BTT switches to the tab to the right of my current tab. If I swipe left with 3 fingers, BTT switches to the tab to the left of my current tab. These 2 commands alone make BTT incredible. I switch tabs dozens of times every day. Moving the mouse onto a tab, then moving back into the content view of Chrome to click on links and content is horribly inefficient. BTT is 90-95% more efficient in this use case. In fact, given that BTT gestures never take more than .1-.2 seconds, BTT is almost always 90-95% more efficient than the alternative, if not more.

If I swipe up with 3 fingers, BTT opens tab-Expose/Mission Control within Chrome. This is fantastic, because it's completely intuitive given that OS-level Mission Control is a four-finger swipe up at the global level. Three fingers up shows all my open Chrome tabs; four fingers up shows all my apps. It's an elegant and intuitive setup.
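To make the configuration model concrete, here's a conceptual sketch in Python of the kind of lookup table BTT maintains. This is an illustrative model, not BTT's actual configuration format; the gesture names and actions mirror the global and Chrome setups described above.

```python
# Conceptual model of BTT's dispatch: (app scope, gesture, optional
# modifier) -> action. Illustrative only, not BTT's real config format.
from typing import Dict, Optional, Tuple

bindings: Dict[Tuple[str, str, Optional[str]], str] = {
    ("Global", "four finger click", None): "open new Finder window",
    ("Global", "four finger swipe down", None): "keystroke Ctrl+Space (Divvy)",
    ("Global", "three finger click", None): "keystroke Cmd+W (close window)",
    ("Global", "three finger click", "Cmd"): "maximize active window",
    ("Google Chrome", "three finger swipe right", None): "next tab",
    ("Google Chrome", "three finger swipe left", None): "previous tab",
}

def dispatch(app: str, gesture: str, modifier: Optional[str] = None) -> str:
    """App-specific bindings take precedence over Global ones, as in BTT."""
    return (bindings.get((app, gesture, modifier))
            or bindings.get(("Global", gesture, modifier))
            or "no binding: pass gesture through untouched")

print(dispatch("Google Chrome", "three finger swipe right"))  # next tab
print(dispatch("TextEdit", "three finger click"))             # Global Cmd+W
print(dispatch("TextEdit", "three finger click", "Cmd"))      # maximize window
```

The key design point is the fallback order: the foreground app's bindings are checked first, then the Global list, which is why the same gesture can do different things in different apps.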

Lastly, I'd like to revisit the red 1 from the first BTT screenshot. That radio control exists as a parent to the app list of red 2. BTT allows you to customize gestures per application per input device. I don't have any configurations saved for anything other than the native trackpad, but for people who utilize other input methods, this feature is a blessing.

When you first begin using BTT, you might not find it to be all that useful. Your brain has been hardwired to the point-and-click mentality. But that can be rectified. With a little practice, your brain will discover the potential of the modern trackpad. As you develop gesture fluency, you can swipe your way through your OS and apps with ease. You'll find that your productivity increases, especially when you're in the zone.

I attribute a healthy proportion of my raw-computing productivity to Alfred and BTT. Together, these apps reduce the friction of modern OSes across every interaction between the human and the computer.

??? > gestures > point and click

In my last post, I concluded by suggesting that the Alfred command line wasn't the end-all-be-all of "command line" computing UI. It isn't. On the other hand, BTT is, within the context of the modern multi-touch trackpad. BTT has fleshed out everything that modern trackpad hardware can recognize. To see why this statement is true, think about what BTT does: BTT allows you to map any gesture that the hardware trackpad can recognize to any software function. In input-output terms, BTT can map any trackpad-recognizable human input to any desired software output. I'm not suggesting we won't see new physical UIs, but within the world of the trackpad, BTT is the end-all-be-all. BTT is still being actively developed, but the upcoming features won't change the nature of the beast.

Stay tuned for my last post in this 3-post-series on PC UIs, in which I'll discuss the intermingling of the keyboard and trackpad, and consider what's next for "command line" UIs.

The Best Desktop UI, Part 1

One of the centerpieces of the age-old Mac vs PC debate has been UI. Who creates the best UI in the desktop/laptop environment, Microsoft or Apple? Most people who know me would say that I would say Apple, given my self-proclaimed "Apple enthusiast" (not fanboy) status.

If the only choices to the question above are Microsoft and Apple, then I choose Apple in a heartbeat. But if I'm not restricted to a binary response, then I choose Apple + Alfred + BetterTouchTool.

This blog post is the first in a 3-part series. This post will focus on the keyboard, the second on the mouse/trackpad, and the third on the future of UIs. The first 2 posts will lay out use cases that demonstrate why the Apple + Alfred + BetterTouchTool trifecta is the perfect UI in the desktop/laptop environment. The third post will discuss the intermingling of these apps and possibilities for future UI improvements.

You only interact with a PC in 2 fundamental ways

PCs have 2 physical forms of user interface - the keyboard and the mouse/trackpad. Sure, there are ancillary ways to interact like the webcam and microphone, but would you really say that your primary form of interaction with your computer is your webcam? Or microphone? Exactly.

Thus, we're left with 2 forms of physical interaction. The keyboard, and the mouse/trackpad. Alfred and BetterTouchTool are apps for OSX that extend the functionality of the keyboard and mouse tremendously. Each of these apps optimizes its UI mechanism to the point of perfection. This post will focus on Alfred, and my next post on BetterTouchTool.

Alfred: Command line > software UI

Alfred is Spotlight++. It's everything that the built-in search function on modern OSes should be. Both Microsoft's and Apple's built in search functions pale in comparison to Alfred. Even Google's desktop search for Windows is lacking in comparison. This is particularly ironic given that Google is the mother of all search companies.

So let's start with the basics of Alfred. What does it do? How does it function?

At any point in time, you can invoke Alfred by pressing a user-programmable keyboard shortcut. By default, OSX uses CMD+Space to invoke Spotlight. Both of these keys occupy prime real estate on the keyboard, so I disabled Apple's Spotlight search and mapped Alfred to CMD+Space. When you launch Alfred, you get this simple search box.

Alfred.JPG

On screen, the search box is automatically horizontally centered, and a little off-center vertically. You can easily drag-and-drop the floating search box anywhere, and it will remember the position for future searches. Of course, Alfred searches local files on your computer. In the settings, you can use a simple software UI to specify which folders Alfred should index to optimize performance. If your Mac has an SSD, you can index pretty much everything other than system files, and Alfred should run without a hitch.

Of course, as soon as you start typing, Alfred starts searching your computer for the string of characters you've entered. Alfred will show you a list of matching results below the search box, as shown below. You can quickly use the arrow keys or the keyboard shortcuts shown on screen to navigate to the file you'd like, and press Enter to open it. Nothing special so far, right?

Alfred Results.JPG

Well, let's say you want to find a file buried deep in your folders, and email it. Just type "find " then the file you're searching for. If you want to open the file in a new Finder window, press Enter; or if you want to email the file, press the Right Arrow key and you're presented with a few actions you can perform on the file, including email. If you select email, Alfred prompts you for a contact's name, and when you press Enter, Alfred opens a new compose window in your default mail program with the recipient specified, file attached, and subject inserted. Doesn't sound very special, right? Well, maybe it is. To find out, let's conduct an objective comparison between a purely mouse-driven approach and an Alfred-driven approach.

Mouse:

1. Open Finder

2. Navigate to file (this may be many clicks)

3. Right click and copy

4. Switch to Mail

5. Compose a new email

6. Specify a recipient

7. Specify a subject

8. Paste file

9. Send email

Alfred:

1. Press CMD+Space

2. Type "find " + file name

3. Press Right Arrow

4. Select email

5. Specify recipient (after this, your mail application opens up with a new compose window with the recipient, file, and subject already set)

6. Press CMD+Enter to send email.

That's a 33% reduction in steps, but probably an 80% reduction in actual time. Given that I type over 100 WPM, I see a 90% time reduction = 1 order of magnitude.

Of course, this quick comparison doesn't even consider all of the other options that Alfred presents along the way, or all of the other services that Alfred could plug into along the way instead of email - that's right, Alfred is an open platform that other apps can plug into, so you can use Alfred with your favorite services to expedite almost any workflow on your computer.

Ok, so Alfred is great for emailing files. So what? Well, it can do a few other things too.

Last week I was reading Tim Ferriss's The 4-Hour Chef in my Kindle app on my Mac, which I always keep in full-screen mode so I can read without distractions. Ferriss suggested that I should buy a probe thermometer. So naturally, I wanted to search Amazon to find a probe thermometer to purchase. Let's conduct another step-by-step comparison, using the Kindle app in full-screen mode as a starting point.

Mouse:

1. Switch to Chrome

2. Open new tab

3. Navigate to Amazon.com

4. Search for probe thermometer

Alfred:

1. Press CMD+Space

2. Type "amazon probe thermometer" and press Enter

After completing step 2, Alfred switches to your default web browser, opens a new tab, and searches Amazon for the product in question. This time around, Alfred reduces the number of steps by 50%, and the time required by 80-90%.

Alfred comes with about 2 dozen predefined web-searches spanning a host of web services. All of these web searches act identically: type in a pre-defined token (i.e. "amazon ") and your search term. Alfred switches to your default browser, opens a new tab, and generates a URL including your search term. The defaults are quite good. They include Google, Google Images, Google Maps, Gmail, Google Docs/Drive, Google Reader, Yahoo, Bing, Facebook, Wikipedia, Amazon, LinkedIn, and IMDB, to name a few.

On top of the built-in web-searches, Alfred lets you create your own custom web searches. This tool is extremely powerful.

As it turns out, when you search Google Finance for a stock, the URL for the ticker is formulaic. For Google Finance, the URL for any ticker is always http://www.google.com/finance?q={TICKER}. Alfred lets you take that formulaic URL and plug anything you want into it. You also define your own token so that Alfred knows which search engine to query. I set my Google Finance token to "$ ". So if I want to look up a particular company's financials, I just hit CMD+Space to invoke Alfred, then type "$ aapl", and Alfred takes me to Chrome, opens a new tab, and goes straight to Apple's stock page. Awesome.
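Under the hood, this is just string substitution into a URL template, plus URL-encoding of the search term. Here's a minimal Python sketch of that mechanic; the "$" token is from my own setup, while the Amazon URL shape is an assumption for illustration.

```python
# Minimal sketch of an Alfred-style custom web search: match a token,
# URL-encode the rest of the query, splice it into a formulaic URL.
from urllib.parse import quote_plus

# Token -> URL template. "$" is my Google Finance token; the Amazon
# template is an assumed URL shape, for illustration only.
custom_searches = {
    "$": "http://www.google.com/finance?q={query}",
    "amazon": "http://www.amazon.com/s?field-keywords={query}",
}

def build_url(command: str) -> str:
    token, _, query = command.partition(" ")
    return custom_searches[token].format(query=quote_plus(query))

print(build_url("$ aapl"))
# http://www.google.com/finance?q=aapl
print(build_url("amazon probe thermometer"))
# http://www.amazon.com/s?field-keywords=probe+thermometer
```

Alfred then hands that URL to your default browser in a new tab, which is why a custom search feels instant compared to navigating to the site first.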

I have defined custom searches for a suite of other web services I frequently use, including Quora, Google Finance, Bugzilla (the bug-tracking system we use at work), RottenTomatoes, torrentz, and IGN, to name a few.

Using Alfred daily

If you couple Alfred with OS-wide text-expansion tools, such as the excellent and free Dash Expander (OSX only; there are many clones available for Windows too), you increase your raw computing productivity substantially. I probably invoke text expansion shortcuts 30-40 times/day, many of which feed into Alfred to create awesome workflows.

Although I have only presented 2 use cases, Alfred does quite a bit more. I find that these 2 use cases are the most important and useful, and they have really changed how I use my computer, and how I think about the keyboard as a UI mechanism for humans. Alfred also offers an iTunes player, system commands, a calculator, a dictionary, contacts search (this function is superb), and more, without ever leaving your active app or taking your hands off the keyboard. And developers have built all sorts of plug-ins for various file operations and integration with a number of popular web services. In short, Alfred can do almost anything on your computer with a quick keyboard command. It makes your keyboard the fastest and most effective tool for doing almost anything on your computer.

I regularly use Alfred 40-50 times/day while at work. I have on some occasions broken 100 uses in a single day = 10 times/hour = once every 6 minutes. Alfred is that good. Alfred even tracks how much you use it. As you can see, I've used Alfred over 4,000 times since I got my 15" Retina Macbook Pro in August 2012. Here's my usage over the last few months. The dips represent weekends and travel.

Alfred Usage.JPG

I hope this post has convinced you that it's worth exploring Alfred, or other free Alfred alternatives such as Quicksilver. Alfred is free on the Mac App Store, and there's a bonus Powerpack available for £15. It's really quite superb, with an enormous number of features that make your keyboard the ultimate UI tool. And if you ever need any support, just tweet to the developer @alfredapp, and he usually responds within an hour. If you find Alfred to be valuable, please purchase the Powerpack from the developer. Developers of this caliber need to be rewarded for their excellent work.

??? > command line > software UI

To wrap things up, I'd like to revisit the assertion that I made in a sub-header earlier in this post: command line > software UI. Although that's not technically true, once you start using Alfred, you'll find it to be true with regards to your productivity on your PC. It's extremely ironic that today, the general public bemoans anything that resembles the command line interface from the early days of computing. Everyone praised Steve Jobs for transforming computers away from the command line and into a mouse- and GUI-driven interface. Although the mouse and GUI revolution was outstanding and necessary to bring computing to the masses, the mouse and the GUI are very inefficient for getting things done. If you abstract the traditional command line up to a much higher-level language, it becomes an extremely powerful tool. But of course, this raises the question: what happens when Alfred reaches the highest-level language, human language?

I'll answer this question in the 3rd and final post of this series.

Long live the command line!

New Year's Resolutions 2013

Leading up to NYE, I decided upon 2 distinct New Year's resolutions. I chose these particular goals because I felt they were areas of my life that were materially lacking, and were negatively impacting my ability to kick ass at life.

Cook at least 1 new meal per week

Going into college 5 years ago, my cooking skills were laughable at best. I could successfully operate a microwave, sometimes. Junior year of college, while living at 45 Wall Street with my good friends Jimmy Li and Pranav Kanade, I dipped my toes in the water. Jimmy had done some cooking while living at home, and he helped me get over my initial fears and hesitation. As it turns out, it's pretty easy to 1) order food from freshdirect.com, 2) cut it up, and 3) throw it in a skillet for 10 minutes with some sauce and stir. All it took was a little outside push, and I had quickly made it over the initial hump.

After finishing junior year of college at NYU, I moved back to Austin, where I lived with my parents for 13 months. During that time, I hardly cooked. But once again, when I moved out on my own, I had to cook. For the past 6 months, while living with my best friend TJ, I have increasingly taken to cooking. At first, cooking was a real chore, and I shopped for foods based primarily on convenience and the cooking effort required (though I never resorted to living off of microwavable meals, thankfully). But as I cooked more and more, I found cooking to be increasingly therapeutic. As it turns out, I LOVE cutting things. There's something particularly satisfying about using a sharp object to cut things up. Over the past 6 months, I've developed my cutting skills quite a bit - now I know how to cut up most fruits and vegetables, with a few notable exceptions such as pineapples. Unfortunately, I've only developed my cutting skills. I've been cooking most items on the George Foreman because it's convenient - too convenient, in fact. I have explored a bit by baking pizzas (no cheese!) and making whole wheat pancakes on the weekends, but I haven't really gone outside of my comfort zone. Going into 2013, my cooking primarily consisted of cutting things up and throwing them on the George Foreman.

As New Year's approached, something revolutionary dawned on me. I'm going to eat every day for the rest of my life, and I'm not even 23. If I live to be 90, that means I will eat approximately 67 * 365 * 3 = 73,365 more meals in my life, plus snacks. That means the ROI of learning to cook is enormous.

A few days before NYE, I purchased Tim Ferriss's new book, The 4-Hour Chef. I haven't read any of his other books, though I've heard great things. I checked out a few summaries of The 4-Hour Chef, and it turns out that the book isn't really about cooking; it's about a methodology for learning new skills as quickly as possible, using cooking as the primary example. I'm sure I'll write more posts about this methodology further down the line. The fundamental premise of this methodology is the 80/20 rule - a general rule that 20% of use-cases/knowledge is enough to get you to 80% of your desired outcomes in a particular field, while the remaining 80% of use-cases/knowledge helps achieve the last 20% of desired outcomes. TJ and I have been calling this the "S curve" (see below for a picture) for some time, which encapsulates the same principle as the 80/20 rule. Ferriss offers 14 recipes that help readers learn all of the foundational cooking skills. The 14 recipes are specially selected and sequenced to help readers master the core 20%.

I'm about 20% through the book now (see what I did there?!). And I'm loving it. TJ and I made the first recipe yesterday, pot roast, in our recently purchased crock pot. It was the best tasting food I've ever made in my life, and it took all of 2 minutes to prepare. It's really easy to throw a bunch of stuff into a big pot and let it sit for 10 hours. We have ingredients for recipe #2 in the fridge. We'll make it this week.

Blogging 3x weekly

Most major technology figures that I admire manage their own blogs. See Fred Wilson, Marc Andreessen, Larry Lenihan (my former professor at NYU!), and Chris Dixon as a few examples. Why don't I? Well, I do now. And I'm going to penalize myself if I don't blog 3x weekly, courtesy of an amazing self-improvement platform, stickk.com. Stickk is quite simple - you set a goal, create a penalty, give them your credit card, and if you fail to achieve your weekly milestones, they charge your credit card and give the money to charity. Given how strongly I dislike the NRA, I decided to pledge $10 for every week that I fail to write 3 blog posts.

A blog is a phenomenal way to let the world know what you think and who you are, and to develop your writing skills. Before I started this blog, I hadn't written anything of substance in close to a year. This is my 4th post, and I can already see a difference in the quality of my writing. It's true: if you don't use it, you lose it. And lastly, blogging is a way to discover and engage with new career opportunities. After all, I can't work for my dad forever...

At age 22, I've had many opportunities that virtually no one my age has had, including the opportunity to interview over 200 people, every single one of whom has been older than me. Being on the employer side of the table, I've come to realize that resumes are simply a filtering mechanism for employers. Conversely, as an applicant, resumes only exist to get you in the door. Once you're in the door, the only thing that matters is what comes out of your mouth.

I believe that I have a unique knowledge base and skill set; if I were to venture a guess, there are only a handful of people in the country with my collective knowledge and know-how spanning healthcare, technology, communication, organization, leadership, and analytical ability. It may sound arrogant, but I don't think many employers will be able to find someone who can intelligently discuss the operational challenges of most major job functions in a hospital, speak to programmers in their terms, read and write large and complex SQL queries spanning dozens of tables, perform 8-hour all-day demos spanning all hospital job functions (our competitors split up all-day demos across 4-6 people), manage 2 hospital deployments as project manager while simultaneously acting as product manager for 20 developers across 2 code bases (to be fair, 1 code base is written on top of the other, so they're not entirely separate), and critique and review training documentation with our training and documentation teams.

And then something occurred to me. There's no way that I can ever express the preceding paragraph in a 1-page resume, especially given my age. It's simply impossible. No sane person would read the above paragraph and believe me; if anything, they'd throw me out because they'd think I'm lying.

So I decided to start blogging. This blog is my showcase to let the world know who I am, how I think, and what I think about.

This blog is my real resume. Please, render judgement.

A sample S Curve


Predictions for 2013

Technology and 3D printing

1. 3D Systems ($DDD) and Stratasys ($SSYS) both see 50% growth in stock price, if not significantly more. The reason: an increasing number of investors are realizing the revolutionary potential of 3D printing with regards to the manufacturing and distribution of plastic (and eventually all) goods. Revenue and earnings growth has been about 25% for both companies. Even if neither company sees a strong ramp in growth, I still expect both of their stocks to outperform significantly as investor demand grows.

2. Apple substantially reduces the cycle time between product launches. Apple's greatest competitive weakness today is the annual cycle time between product launches. During the first 6 months after a product launch, the device sells extremely well. In the latter 6 months, sales tend to slow because people are concerned about buying a product that they know will be obsolete soon, and because competitors time their product launches to take advantage of the aging 6-month-old product. I'm not sure if Apple can cycle down to 6 months for all of its products, but I do expect to see an acceleration across the board. This would also provide a unique competitive advantage that almost no one else can achieve, because Apple is backwards-integrating into chip design, production, and tooling (see the monumental cap-ex growth over the past 2 years).

3. RIM's BB 10 doesn't take off. If Microsoft, the mother of all software companies, couldn't pull off a platform refresh starting in October 2010, there's no way that the beleaguered RIM can do it in 2013. They are simply too little, too late. Although early impressions are positive, the platform is decidedly non-revolutionary, just as Windows Phone has been. Some hardcore BlackBerry fans may upgrade to BB 10 proudly, but that market is quickly eroding, and developers won't jump onboard.

4. Windows Phone 8 doesn't take off in the US.  It may begin to gain some traction in other markets where the iPhone is less prevalent because Windows Phone can serve as a viable alternative to Android.

5. Chrome moves firmly ahead of Firefox as a % of people who browse the web. (I made this prediction for 2012. Chrome ended up flatlining to be about equal with Firefox for the latter half of 2012. In 2013, Chrome will move firmly ahead.)

6. Chrome continues its upward march past 60% of all pages viewed on the internet.  Although Chrome is only used by 20-25% of the internet-browsing population, Chrome users consume significantly more web content, enabling Chrome to jump past 50% of desktop/laptop web traffic in 2012. I expect this trend to continue into 2013 as Chrome continues to grow.

7. Apple Television is released. I had predicted this for 2012.  I am almost certain it will happen in 2013.  See my post on the Apple Television for more details on this subject.

8. Intel-based smartphones begin to make a dent in the smartphone market. Although Intel was able to prove its power-efficiency prowess in 2012 with 32nm-based Atom processors, it failed to garner the attention of OEMs because of the lack of integrated 4G/LTE. Intel is expected to release an updated 22nm-based Atom SoC with significant power savings (putting it further ahead in the performance/watt race), along with reference designs that OEMs can use to reduce R&D costs. Although the 2013 models will lack integrated 4G/LTE, Intel will still garner a few notable OEM wins with its raw computing performance and power-efficiency lead. I expect the first devices featuring this SoC to receive heavy promotion and sell quite well.

9. AMD continues to flounder. They may file for bankruptcy if no one wants to acquire them.

10. US passes 65% smartphone adoption.

11. Netflix continues to struggle, with little hope for change. Their business model is fundamentally worthless because they are a middle man with virtually no valuable assets, and thus no pricing or negotiating power. Because Netflix is a public company, its financials are public. Its key suppliers, the 5-6 major movie studios, will continue to collude and raise their prices in accordance with Netflix's subscriber and revenue growth. Credit to TJ for laying out this thesis in fall 2011. It still holds true today.

12. Repeat number 11, but change "Netflix" to "Spotify" and change "movie studios" to "record labels". Although Spotify is private, it receives enough public attention that the record labels have a pretty realistic sense of Spotify's revenue, subscriber numbers, and growth rates.

13. Apple's stock passes $800. Full disclosure: I am invested in Apple.

14. Android passes 1B cumulative activations.  I expect Android to pass Windows in total actively used devices in 2015.

15. The consumer web bubble pops. VCs shift focus to enterprise-oriented startups, because enterprises are actually willing to pay for valuable services.

16. Square becomes the widely recognized leader in the race to become the dominant xRM service for small retail merchants across the country. The other primary competitor in this space is Groupon. However, Groupon's biggest challenge is that it doesn't have data on non-Groupon retail sales. I expect to see Square eventually encroach on Groupon's territory (marketing and customer acquisition), but I don't think Groupon will successfully be able to encroach on Square's turf (payment processing, cookie-cutter sales data analytics for small merchants). This may not happen in 2013, but I expect to see Square compete more directly with Groupon by utilizing its vast trove of consumer purchasing data. I will write another post on this soon.

17. Zynga continues to struggle. It, like Netflix, has no reason to exist. Its games are not special, and are too prone to being quickly copied. It delivers no fundamental value, and has no unique and valuable assets. With increasing competition in the social games space, Zynga is struggling to find new players and keep them meaningfully engaged for an extended period of time. I don't foresee any fundamental changes that will reverse this trend. The only potential saving grace for Zynga is online gambling, which may sustain the company, but that growth will stall quickly.

Healthcare IT

1. Athena sees strong subscriber growth as more providers realize that neither they nor their staff can keep up with increased scrutiny and regulations, especially considering the upcoming shifts to ICD-10 and capitation-based payment models. I will write a whole post about Athena's business model at a later time.

2. Epic continues its dominance. From what I understand, they are re-writing their back-end in .NET.  I don't think the re-write will be publicly released in 2013. I think that will happen in 2014.

3. Allscripts continues to flounder.  Their inpatient solution simply isn't competitive, and they don't have a good track record.

4. HIEs continue to pop up, and fall down a short time later due to sustainability issues, barring any major regulatory changes. The #1 driver of this phenomenon is that none of the parties connecting to an HIE have an incentive to contribute to its sustainability. They all enjoy reaping the benefits, but none of them care enough to pay. This is a classic public goods problem. One of the most effective solutions to this problem is government-mandated taxes...

But, as health systems further integrate and evolve closer to the integrated delivery network (IDN) model, they will have an incentive to effectively own and manage HIEs. The large health systems and IDNs will never be able to bring all hospitals and providers onto a single database anytime in the near future, so they will need an HIE platform to share information within the health system. Perhaps, as this infrastructure is built out, they will also develop interoperability between HIEs, as the marginal cost will be significantly less than setting up the HIE to begin with. I will write a whole blog post on this subject at a later time.

5. Small EHR vendors begin to die or are acquired in increasing numbers. As healthcare providers continue to consolidate and the big EHRs get better, there's simply no reason for many of the smaller vendors to exist.

6. Personal Health Records (PHRs) and other patient-engagement-driven HIT products continue to fail to gain any substantial traction. Most people don't want to manage their healthcare. They just want to be healthy at all times. When they get sick, they may care enough to engage with their doctor briefly, but after they're better, they quickly forget about their health until they fall ill again (I've read that 1/3 of new prescriptions are never filled). I do recognize that I am generalizing about people, especially Americans. There is a significant number of people, perhaps 10% of the US population, that does care. Within this 10% crowd, the vast majority of people are already in good shape and don't need additional patient-engagement tools to help manage their health, though the apps do provide some value even to that 10%. For the 10% that care today, the current tools are good enough.

7. Most patient-facing healthcare IT companies continue to fail to establish real business models.  Of those that currently exist, only a few have devised a scalable revenue model, and even those are hardly "engagement" tools (for example, ZocDoc).  Although lots of patient-facing healthcare startups are being founded, very few will establish a scalable revenue model. In my view, almost all of the money in healthcare IT is to be made by improving healthcare delivery and reducing administrative overhead costs, not through patient engagement tools.

8. Larger hospitals and health systems continue to experiment with cloud technologies for smaller applications, but virtually no progress will be made in moving the ERP and EHR systems of larger organizations into public-cloud, multi-tenant databases. In fact, I don't think that larger organizations will ever move to this model, because their vendors (Epic and Cerner) won't. They will, at most, transition to private clouds to take advantage of economies of scale, infrastructure-as-a-service (IaaS) cash flow, and outsourced IT management, but never to Amazon Web Services (AWS)-esque public clouds. However, I don't think private-cloud migration will happen in 2013. It may begin to pick up steam in 2014, but I expect the market to really begin large-scale transitions to remotely hosted private clouds in 2015 and beyond, once EHRs and meaningful use stage 3 are firmly understood and completed.

9. Cerner experiences at least 3 major publicly reported cloud outages. If AWS had 4 major outages in 2012, I don't expect Cerner to do much better in 2013.

10. The HIT market in the UK lights up as NHS trusts ink deals with Epic and Cerner. Virtually no other American Hospital Information System (HIS) companies will see any success in markets outside the US. Internationalization of software and deployment across time zones and language barriers are enormously difficult (even Cerner has failed at this many times), and these 2 companies are the only ones that large international organizations can trust, especially after witnessing the national NHS project that went awry and proved almost entirely wasteful.