Sprinting

I've been sprinting recently. It's been a blast. I haven't sprinted since Spring 2006, when I ran the 400M in track during my sophomore year of high school. Back then, I was 5'11" and 160lbs. Now I'm 6'1" and 210lbs. Wow, 50lbs really makes a difference. It's far more difficult to accelerate now than it was then.

I'm not yet at a point where I can run a 400M at sprint speed. But I'm getting somewhat comfortable running 200Ms. In high school, I ran a 59 second 400M. I'm hoping to get to 65 seconds in the coming months. I'll update my blog as I progress.

I had forgotten how intense sprinting feels. It's very different from the intensity I've experienced while lifting weights or performing gymnastics over the past 2-3 years. Although those activities are extremely intense in their own right, neither involves the rapid pump and force that sprinting demands. Moreover, sprinting means giving everything you've got, 100%, for the entirety of the run. Most other forms of exercise require more control, tempo, and flow.

It's been amazing re-learning the subtleties of sprinting form and focus. For the past few weeks, I had forgotten to select a visual focus point before the sprint. Yesterday morning I remembered, and what a difference it made. My breathing has improved; I'm now syncing my breaths with my steps. And I can feel my knees lifting higher and increasing my distance per step, which improves speed and endurance. It's all coming back.

I'm pretty excited looking forward. I was able to push myself to respectable heights in weights, and subsequently on the rings. Now it's time to swing to the other extreme and push to new heights in ultimate frisbee and sprinting.

Off to the races!

The Value of Not Thinking

I try to think as little as possible. Humans are bad at thinking. I think a lot, but I actively think about how to think as little as possible.

One of the key tenets of computer science is abstraction. Modern programmers don't think in terms of 0s and 1s. They think in terms of objects and relational models. Each of those objects and models may translate into millions, perhaps billions, of 0s and 1s. No one can possibly think through all of those low level details every time they think about a programming problem. Luckily, they don't have to. Our programming forefathers abstracted that complexity away so that modern programmers can spend their time solving more important problems.
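
To make that concrete, here's a minimal sketch in Python (the example and function names are mine, purely for illustration): the one-line version lets you think in terms of "count the words," while the hand-rolled version hints at the layers the standard library quietly handles for you.

    # A toy illustration of abstraction: counting word frequencies.
    # The one-liner hides hashing, memory management, and string
    # handling -- details we never have to think about.
    from collections import Counter

    def word_counts(text):
        # "Object-level" thinking: split the text, count the words.
        return Counter(text.lower().split())

    def word_counts_by_hand(text):
        # Roughly what Counter does for us underneath -- and even this
        # is still many layers above the 0s and 1s.
        counts = {}
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1
        return counts

    print(word_counts("the quick brown fox jumps over the lazy dog"))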

That is just one example of not thinking. But there are infinitely more. In particular, I like to set up structures and processes in my life so that I don't have to think. Once I set up a process that removes a layer of thinking, I can either turn off my brain and let it rest (the brain, like a muscle, needs rest) or devote that time elsewhere.

A few examples in my life:

Alfred, and the resurgence of the command line UI

Pre-packed travel bag, including toiletries, chargers, and adapters

Rubik's Cube algorithms - it would be impossible to solve a cube in 30 seconds if I had to think about how the algorithms work. I just know that they do, and spin through upwards of 10 moves per second knowing that my algorithms will work (see the sketch below).
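
Here's that idea as a rough Python sketch (the case names, move sequences, and apply_move stand-in are illustrative placeholders, not my actual solving system): at solve time it's just a lookup and a replay, with zero reasoning about why the sequence works.

    # A toy sketch of "not thinking" while cubing: recognize a case,
    # then replay a memorized move sequence without reasoning about it.
    MEMORIZED_ALGORITHMS = {
        # Illustrative sequences in standard cube notation.
        "corner_and_edge_swap": "R U R' U' R' F R2 U' R' U' R U R' F'",
        "opposite_edge_swap": "M2 U M2 U2 M2 U M2",
    }

    def apply_move(move):
        # Stand-in for physically turning the cube.
        print(move, end=" ")

    def execute(case_name):
        # Look up the stored sequence and spin through it blindly.
        for move in MEMORIZED_ALGORITHMS[case_name].split():
            apply_move(move)
        print()

    execute("corner_and_edge_swap")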

Of course, there are many examples that we take for granted in our daily lives. We don't have to think about how car or plane engines function when traveling; we don't have to think about the structural integrity of a building before walking inside. We can take for granted millions of little details in life that our ancestors could not. And that's why we're so much more advanced than they were. We don't have to expend mental energy, physical energy, or time to accomplish what used to take enormous amounts of all three. And because of that, we can focus on other problems.

I think this can be summed up in a Persian proverb that my father tells me. Translated into English: "You can't think on an empty stomach."

Why is Microsoft Moving to a 1 Year Windows Cadence?

Microsoft just announced the formal preview of the successor to Windows 8, which is rumored to be named Windows Blue. Blue will be unveiled at the Microsoft BUILD developers conference in June 2013. Rumor has it that Microsoft will release Blue to the public in 2013, just 1 year after shipping Windows 8. This stands in stark contrast to the 3 year release cadence for the preceding 2 iterations of Windows.

Why is Microsoft transitioning from a 3 year OS cadence to 1 year? And why is this transition happening now? Why didn't it happen years ago?

A 1 year OS cadence will substantially reduce the competitive risk to Microsoft's cash cow Windows and Office monopolies. The greatest threat Microsoft faces is the cost of being wrong. Microsoft's old OS cadence was 3 years. If Microsoft's OS is wrong in any significant way, competitors will have at least 3 years to gain ground. In fact, the gap could be closer to 5 years, depending on when the competitor's product ships relative to Microsoft's OS release cycle. Given the timelines required to plan, develop, test, release, and support an OS, Microsoft can easily fall into a position where it cannot release a competitive OS until 5 years after its competitors.

When and how did Microsoft realize this? Microsoft learned this lesson the hard way in the mobile computing market. They were wrong about the iPhone, and the cost of being wrong has been astronomically large. That cost has also created opportunities for Microsoft's chief rivals - Apple and Google - to chip away at Microsoft's core businesses - Windows and Office. In order to stay competitive, Microsoft needs to keep its planning and development cycles lean enough that even if it's wrong, it can quickly catch up. Windows Phone was released in October 2010, 3.5 years after Apple released the original iPhone in June 2007. However, Windows Phone didn't reach feature parity with iOS and Android until 2012, 5 years after the iPhone was released.

Apple has adopted a similar OS cadence. iOS has been on an annual cadence since launch, and Apple has stated that they're transitioning OS X to an annual cadence as well. Google updates Chrome every 6 weeks, and has been upgrading Android 2-3 times per year. Google has said they plan to slow Android releases to 1-2 times per year as the OS matures.

It would appear that all of the major tech giants are afraid of being wrong.

The Power of Accessibility: Fruit Slicers

I love fruit slicers. They're super awesome. I've purchased every fruit slicer that I've seen on Amazon:

Apple

Banana (I find this one to be mostly useless since I blend bananas into my protein shakes)

Mango!

Pineapple!

Strawberry (excellent to put sliced strawberries into cereal when in a rush in the morning)

Most people love fruit, yet few actually go out of their way to purchase it from the grocery store and eat it. And of course, the harder a fruit is to eat, the fewer people are willing to eat it.

Mangos and pineapples are two of the most delicious and hardest-to-eat fruits. Despite the fact that most people love them, very few people eat them at home. Their respective slicers reduce the friction between you and the fruit by an order of magnitude. The mango slicer and pineapple corer make it incredibly easy to experience the delight of superbly delicious fresh fruit. Everyone who wishes to eat more fruit should invest in fruit slicers. At about $5 each, they're a fantastic investment.

The Value of Meaningful Use

There's been a recent round of headlines reflecting on the efficacy of the Meaningful Use program in pushing physicians and hospitals to adopt electronic health records (EHRs). A RAND study conducted in 2005 was one of the major arguments that prompted Congress to allocate tens of billions of dollars to care providers (and, really, to healthcare IT vendors). The study suggested that adopting healthcare IT would save billions, pay for itself, and help push medicine further, faster.

There are a million ways to criticize the meaningful use program. They are probably all valid. Some of the most prominent arguments against meaningful use have been: 1) that the nature of the program rewards legacy vendors and discourages innovators; 2) that meaningful use's failure proves that healthcare IT can never work; 3) that we're spending enormous sums of taxpayer dollars without having proved the efficacy of healthcare IT; 4) that the program is measuring the wrong metrics; and 5) that the program doesn't take into account the nuances of each medical specialty.

All of these arguments may be valid. But even collectively, they do not provide the impetus to halt the meaningful use program. They all fail to recognize the enormous cost of stopping the healthcare IT push. Modern computing platforms have transformed every industry in the world. Healthcare is not special. Yes, it has its nuances, but fundamentally, healthcare is no different than any other industry - highly specialized individuals need to share information to make decisions and exercise their best judgement, and that highly specialized work needs to tie back into generic administrative processes (managing accounts receivable, accounting, payroll, inventory, etc). Computers have revolutionized these processes in every industry except healthcare (and education, and to some extent, retail). Computers can revolutionize healthcare too.

50 years from now, we will all look back on the late 2000-aughts and the 2000-teens as the decades of experimentation in healthcare IT. There are hundreds of thousands, perhaps millions, of extremely smart and driven people trying to implement healthcare IT solutions to improve efficiency and outcomes. Everything we do during these years may be 100% wrong. And that's ok, because that's the cost of learning.

We are learning because we are wrong. We will never get to the futuristic sci-fi medicine of 2060 without making all of the mistakes that we're making today. We need to learn how to effectively design, build, and implement large and complicated administrative and clinical systems spanning hundreds of stakeholder groups and millions of care providers. Hospital management, administrators, and clinicians are all learning incredible amounts about how technology works, and how information can and should flow in different environments and situations. No academic environment can teach the future of healthcare delivery because no one knows what it will look like yet. We have to figure it out.

Although I'm frustrated with many of the details of the meaningful use program, the calls to repeal it and stop healthcare IT installations are laughable. They take such a short-sighted, limited view of where medicine is today, where it could be, and how we'll get there. No country of scale has implemented a decentralized healthcare IT environment (it's relatively easy when you operate at 1/10th the scale of the US, as Denmark or Finland does, and force every doctor onto a monolithic platform that is mediocre or sub-par in every way). As Americans, we're the pioneers, and we're going to make plenty of mistakes along the way. And that's why we generally out-innovate everyone else.

Americans try, fail, and learn faster and more frequently than everyone else. We always have. And that's why we have kicked, and will continue to kick, everyone else's asses in healthcare delivery. There are good reasons why almost every major technology company in the world was founded and is based in the USA. Americans take risks and figure shit out. Americans will be the leaders in informatics, data interoperability, medical research, and efficient, effective care delivery because they spend tens of billions of dollars making mistakes. It may take 10 or even 20 years to pan out, but the meaningful use impetus has changed the course of healthcare delivery in this country for the better.