designers + engineers + empathy = greatness


designers and engineers need a deeper understanding of each other’s craft to create truly great products. i’m convinced that engineers need to understand the experiences designers aim to create, even as designers need to understand how engineers will bring those experiences to life. when the two groups interact for the greater good, they build phenomenal products with minimal time and resources.

it’s all a matter of empathy — loosely defined as understanding the feelings and thoughts of others. during my time leading engineering and design on webOS, and later at Twitter, i’ve learned that empathy is core to a product team’s ability to move quickly from designers’ “what” to engineers’ “how.” said differently, a designer knows what to make, and an engineer knows how to make it. when they overcome the communication barrier that separates the what and the how, good things are certain to come.

at palm, for example, we had to deliver a complete reset of webOS, moving the entire platform to a web-centric model. to do that, we put together a unified group of four teams: one team on the kernel, another on the apps, a third on infrastructure and the fourth on design. by working as a unified group, the engineers could empathize with what designers wanted the experience to be, while the designers understood the constraints of the OS. and because of that empathy, we delivered an entirely new webOS in less than a year. we achieved a virtuous cycle of product design, the goal of every product company.

the notion of deep, cross-discipline understanding isn’t limited to software development. it can be just as effective when developing hardware, hardware/software systems, and even for manufacturing. it’s not even new. design for manufacturability and assembly methodologies – where designers actually consider whether their designs can be easily assembled and built – have been around for decades.

but without empathy — where the different roles innately understand each other’s goals, assumptions and constraints — those cross-discipline development teams are still prone to misunderstandings and delays. recognizing that there is craft in both the what and the how is key – as is the leader’s role in helping design and engineering teams seamlessly understand each other.

which brings up the question: how can people with different mindsets and goals understand each other’s thinking? my colleague john maeda, who also happens to live at the intersection of design and technology, suggests early stage companies let designers code and engineers design. while not everyone can make this crossover, those who do will bridge the groups and help accelerate development. i’ve lived in the valley long enough to see these kinds of hybrid designer/engineers make a huge difference in companies and now in the startups that they are founding.

for even slightly more mature companies, i believe it’s the leaders who have to cross over and interact with other teams. they become the bridges who make sure everyone’s on the same page, with the same understanding of goals and constraints. and they also make sure design comes at the beginning of the process. without design at the start, empathy becomes a one-sided proposition, and that just won’t work. so it’s not enough for a leader to keep the “why” in focus for everyone anymore – they’re going to have to get their hands dirty in the what and how, or at least serve as a solid communication bridge.

great execution: balancing order and chaos


it’s human nature to prefer order over chaos. as a general rule, people want everything to be calm and predictable. as a species, we are generally uncomfortable with turmoil.

ironically, in the tech industry upheaval is valued – we love turmoil. as entrepreneurs and venture capitalists, we’re on a constant hunt for new ways to disrupt markets and upend the status quo. the companies that we launch and fund are in a nonstop race for more engineers, more customers, and never-ending revisions that challenge the edges of the CEO’s sanity. working at these places can feel like riding a one-way ticket to crazy-town: an absolute sh*tshow of disorganization with no panic button to be found.

yet here’s the thing: if the environment at a startup isn’t crazy, then something’s wrong. it may seem counter-intuitive, but chaos is an essential ingredient in a startup – it is what catalyzes innovation. think of it as a necessary state of brownian motion where ideas collide with other ideas, fueled by deadlines and desperation. there’s a limit to the chaos, however, and you’ll know immediately when it’s been crossed, because products stop shipping.

the key to a great startup environment is finding balance, as steve jobs did at apple, jon rubinstein did at palm, and elon musk is doing at tesla motors. this special kind of leader knows how to orchestrate the simultaneous demands of time, scope and quality. consider what happens when any of those elements go out of balance:

  • time/ fall too far behind a deadline, and you could miss a critical service level agreement.
  • scope/ allow unchecked scope creep, and you could end up with a bloated mess satisfying no one.
  • quality/ let quality decline, and your company’s reputation could get permanently damaged.

great leaders know how to keep the chaos driven by these three factors in balance, and constantly modulate the demands to make the chaos manageable for their company leaders and employees.

seeing is believing


today, i’m going to riff on a topic i haven’t written about before, but has interested me for years: computer vision on mobile phones. i believe advances in computer vision — combined with the compute power we now take for granted on our cell phones — could improve people’s lives in ways most of us haven’t imagined. i’d better explain how i reached this conclusion.

scientists have known for years that increased blink rates are a great predictor of fatigue. that fact came in handy when i wanted to figure out if my 4-year-old daughter would go to bed at 8pm or at 9pm, since that usually meant a big difference in her bedtime routine. i decided to build a mobile app that would record her face and let me count how often she blinked, helping me predict when she would fall asleep — and making a happier evening for parents and child. that’s a pretty simple example of what i mean.
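the counting itself is the easy part once you have a per-frame eye-openness score (the eye aspect ratio computed from facial landmarks is one common choice, though any such signal works). a minimal sketch, assuming such a signal and an illustrative 0.2 threshold:

```python
def count_blinks(ear_series, threshold=0.2):
    """count blinks in a series of per-frame eye-openness values.

    a blink is one contiguous run of frames where the score dips
    below `threshold`. the eye-aspect-ratio signal and the 0.2
    cutoff are assumptions for illustration, not app internals.
    """
    blinks = 0
    closed = False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1        # falling edge: eye just closed
            closed = True
        elif ear >= threshold:
            closed = False     # eye reopened
    return blinks


def blinks_per_minute(ear_series, fps):
    """convert a blink count into a rate, given the camera frame rate."""
    seconds = len(ear_series) / fps
    return count_blinks(ear_series) * 60.0 / seconds
```

feed it a minute or two of frames and compare the rate against a baseline for the kid in question — rising blink rate, approaching bedtime.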

cardiio is a more-sophisticated app that leverages mobile phones’ cameras and compute power. hold up your iPhone to your face in a well-lit area, and cardiio uses the front-facing camera to look at the capillaries on your cheeks. the app then measures the light that’s being reflected to determine your heart rate — useful for tracking fitness levels, calorie burn, and even estimating your life expectancy.
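cardiio hasn’t published its exact method, but the general technique is photoplethysmography: each heartbeat changes the blood volume in the skin, which slightly modulates the reflected light the camera sees. a rough sketch of the signal-processing idea, assuming we already have the average frame brightness over time:

```python
import math

def estimate_bpm(brightness, fps):
    """estimate heart rate from average frame brightness over time
    (photoplethysmography). a bare-bones sketch: remove the mean,
    then count upward zero crossings of the pulsatile component.
    a real app would band-pass filter (~0.7-4 Hz) and find the
    dominant frequency with an FFT instead.
    """
    mean = sum(brightness) / len(brightness)
    centered = [b - mean for b in brightness]
    crossings = sum(
        1 for a, b in zip(centered, centered[1:]) if a < 0 <= b
    )
    seconds = len(brightness) / fps
    return crossings * 60.0 / seconds

# synthetic 72 bpm pulse (1.2 Hz) sampled at 30 fps for 10 seconds
signal = [100 + math.sin(2 * math.pi * 1.2 * t / 30 + 0.5)
          for t in range(300)]
# estimate_bpm(signal, 30) recovers 72.0 on this clean signal
```

real camera data is far noisier than this synthetic pulse, which is exactly why the filtering and spectral analysis matter in practice.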

i should mention i don’t have any investments in cardiio or any other mobile computer vision app. i just find the whole space really interesting. MIT, for example, has developed computer-vision algorithms that can tell the difference between frustrated and pleased smiles. now imagine mobile apps that interpret shoppers’ smiles and help retailers fine-tune their merchandising. retailers could also use mobile apps to analyze foot traffic for optimum cross-selling and impulse buys. and thanks to community efforts like PubFig and Labeled Faces in the Wild, computer vision software can recognize faces — with a high degree of confidence — across a wide variety of poses, expressions and conditions (recent NYT article on the advances). it won’t be long before that capability shows up in commercial-grade mobile apps.

mobile computer vision can also help us model our environment and improve crop yields. for years, scientists have been finding new ways to use near-infrared reflectance spectroscopy to detect crop mold, fungal contamination and insect infestation. it’s easy to imagine drones fitted with infrared cameras detecting early signs of infestation.
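a common way to quantify plant health from this kind of imagery is a vegetation index such as NDVI, which exploits the fact that healthy vegetation reflects strongly in near-infrared while absorbing red light. a sketch of per-pixel scoring, with an illustrative 0.3 stress cutoff that a real survey would calibrate per crop and per sensor:

```python
def ndvi(nir, red):
    """normalized difference vegetation index for one pixel:
    (NIR - red) / (NIR + red). healthy vegetation scores near +1;
    bare soil and stressed plants score much lower.
    """
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)


def flag_stressed(nir_band, red_band, cutoff=0.3):
    """return indices of pixels whose NDVI falls below `cutoff`.
    the 0.3 cutoff is an assumption for illustration only.
    """
    return [
        i for i, (n, r) in enumerate(zip(nir_band, red_band))
        if ndvi(n, r) < cutoff
    ]
```

run that over each frame of a drone pass and the low-scoring patches become the map of where to look first.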

and then there’s augmented reality — potentially giving humans a sixth sense for understanding the world around us. Google Glass may be the best example so far, as developers continually add new apps that overlay information on what the wearer sees. but i wonder about the effect this sort of enhanced vision has on us. if you wear Oculus for seven hours, does it rewire your brain? for better or worse, mobile computer vision could have a dramatic impact on us and the world we live in.

here is a fun application of a convolutional neural net that i set up with caffe last weekend.
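whatever the application, the workhorse operation in any conv net — caffe’s included — is sliding a small learned filter over an image. a toy sketch of that primitive (real layers add many channels, strides and padding on top):

```python
def conv2d(image, kernel):
    """valid-mode 2d cross-correlation: slide `kernel` over `image`
    and take the elementwise dot product at each position. this is
    the primitive a convolutional layer applies with learned kernels.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
            for j in range(ow)
        ]
        for i in range(oh)
    ]

# a hand-written vertical-edge detector: strong response where
# brightness changes left-to-right. a trained net learns kernels
# like this (and far stranger ones) from data.
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]
```

stack enough of these layers with nonlinearities in between and you get the feature hierarchies that make the recognition feats above possible.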