The case for people: why startups should leave humans in the loop


As technology marches on, many, myself included, believe we’re entering a new and scary phase in which wide swathes of humanity will become ZMPs: zero-marginal-product workers who have lost the race against machines and are no longer capable of being productively employed at prevailing wages. Animals reaching a point where demand for their labor approaches zero is not unprecedented: consider horses, now relegated to a few edge cases where they still hold advantages over inorganic modes of transportation. Humans have a vastly wider variety of use cases than horses ever did, but advances in automation raise the prospect that many of us will find our jobs obsolete within our lifetimes. Because machines are moving into knowledge work and taking on increasingly human tasks, this wave feels intrinsically different from prior waves of innovation, and some are starting to imagine a future in which the role of humans in the workplace is rapidly reduced and technological change leads to mass, perpetual unemployment. It is fair and proper to wonder whether this time might be different, with Luddite instincts vindicated after centuries of being overridden by economics.

I could go on about the political, philosophical and economic factors and ramifications of this debate for hours, but I’m fortunate to work in a field where there are even more practical implications. Being part of this transition to human-less work is a seductive idea, and I see many startups building toward the idealized end state, creating applications intended to ultimately cut humans out of the loop and fully automate tasks. In this piece, I’m going to present a case for the importance of people in the future of work, even within the very class of products intended to replace them. My thesis is that startups that design to take advantage of humanity will have an edge over those that idealistically drive toward complete automation.

What people have to offer

1) Human attention in and of itself is valued by other humans

We see this at play in the real world in lots of contexts where part of the value add of a person is simply being a person, rather than any objective value creation in the moment. One clear example is street fundraisers: a robot asking for funds would be cheaper, but it would surely be less effective and easier to ignore than a living, breathing human. I would argue that some rank-and-file employees in big companies draw some of their value from a similar effect: consider a cold-caller who is given a list of prospects, a telephone and an A/B-tested script. One of their key benefits is that they aren’t a robo-call.

2) There’s a long runway for people in information problems.

In the 1990s, computers started beating the best humans at chess. For the better part of two decades, though, human + machine teams could beat the best machines, as humans proved capable of adding enough insight to influence games and beat unaided computers. Machines may fully dominate chess soon, but this is still a remarkable outcome worth considering as we think about algorithms replacing humans in other areas. Navigation seems to be largely solved by Google Maps, but Google Maps operated by an expert is far superior to Google Maps alone. Even if Google Maps itself is better than any human navigator by herself, there is still a place for humans in the equation.

Assembly-lines for thought work: how humans can help software

Instead of full automation, then, the ideal white-collar production modality of the near future is something resembling an intellectual assembly line, with tasks best left to machines done by machines and humans filling in the gaps with either their pure humanity or the set of cognitive skills machines can’t yet replicate. Consider how Uber takes all the work out of running a car service except the part that is hardest to automate: driving. As more work is handed off to machines, people will naturally specialize into particular tasks. At Matrix we’re increasingly seeing startups drive toward this vision, whether consciously or not. When they do, they run into a design problem: designing humans into technology is hard.
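As a loose illustration only (the names here are hypothetical and not any particular startup’s stack), the assembly-line framing might look like a pipeline in which each step declares whether software can handle it, and anything it can’t is routed to a person:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    """One station on a hypothetical 'intellectual assembly line'."""
    name: str
    automated: Optional[Callable[[dict], dict]] = None  # machine handler, if one exists

def run_pipeline(task: dict, steps: list[Step],
                 ask_human: Callable[[str, dict], dict]) -> dict:
    """Route each step to software when a handler exists, otherwise to a person."""
    for step in steps:
        if step.automated is not None:
            task = step.automated(task)        # machines do what machines do well
        else:
            task = ask_human(step.name, task)  # people fill the gaps
    return task

# Example (hypothetical): an invoice-processing line where field extraction
# is automated but exception handling goes to a person.
# steps = [Step("extract_fields", automated=parse_invoice),
#          Step("approve_exception")]          # no handler, so a human decides
```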

Put another way, people aren’t great at learning new software, nor are we predictable in how we use it. We use interfaces in hard-to-predict ways that often baffle designers. The supply of us is unpredictable, driven by a set of unobserved characteristics and incentives. We get sick, our performance varies at random, we quit. Many companies have seen fit to respond to these flaws by cutting humans out of the loop entirely, designing solutions that are imperfect but scale infinitely without a fleshy bottleneck. As tempting as this is, it is a cop-out: it leaves products worse by running from a hard problem, even when solving that problem has obvious pay-offs for the customer experience.

There are two angles of attack on this problem. One is training, which is a well-understood approach. Another, seemingly less discussed, is a new way of thinking about user interface design that recognizes both the shortcomings people have and the value they can provide. Increasingly, humans will be part of the toolkit for software, not the other way around. Just as the industrial revolution replaced craftspeople with assembly-line workers, the information revolution will turn many workers into specialized software-assisters. This will flip the UX paradigm: rather than people asking questions of software, software needs to learn how to ask questions of, and interact with, people. We may not have anything approaching a reliable API, but we have tons of value to add if software can learn the right questions to ask and how to ask them.
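To make the flip concrete, here is a minimal sketch, again with entirely hypothetical names (`ask_human`, `_dispatch_to_worker_pool`) rather than any real library, of software questioning a pool of people while treating them as the flaky, unreliable service we are:

```python
import random
import time
from typing import Optional

class HumanUnavailable(Exception):
    """No worker answered within the allotted attempts."""

def ask_human(question: str, timeout_s: float = 300.0, retries: int = 2) -> str:
    """Put a question to a pool of human workers, treating them like a flaky service.

    Responses may be slow, absent or inconsistent, so the caller bounds its
    waiting, backs off and retries rather than assuming a reliable API.
    """
    for attempt in range(retries + 1):
        answer = _dispatch_to_worker_pool(question, timeout_s)
        if answer is not None:
            return answer
        time.sleep(2 ** attempt)  # back off before re-asking
    raise HumanUnavailable(f"no answer to {question!r} after {retries + 1} attempts")

def _dispatch_to_worker_pool(question: str, timeout_s: float) -> Optional[str]:
    # Stand-in for a real queue or notification system; here it just simulates
    # a worker who sometimes answers and sometimes doesn't.
    return "a human answer" if random.random() < 0.7 else None
```

The design point is that the retries, timeouts and fallbacks live in the software, which is exactly the kind of accommodation that gets skipped when teams cut people out of the loop instead.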

So will we all become “meat robots” in the gig economy?

Just as when the assembly line arrived, many fret about the disruption that will come with these changes. Some worry that we’ll find the new work tiresome and mundane, that serving as carefully monitored cognitive microservices will be less rewarding than today’s more vertically integrated work, which often resembles intellectual craftsmanship. The same hand-wringing accompanied the assembly line, and yet we now lionize assembly-line jobs as a facet of the “good old days.” On the surface, assembly-line work looked like a come-down from being a craftsman, but in practice it generated a whole class of good-paying jobs that people came to value. We should also be careful to avoid the trope that a “job” must be something so engaging and stimulating that it becomes a lifestyle in and of itself: only a small sliver of people have that, and many don’t find it desirable at all. Finally, we must take care not to confuse “flexibility” with the loss of “stability.” In areas where heightened connectivity lowers transaction costs enough that the traditional Coasean firm dynamic collapses, work is always available at the market-clearing price, and the measurability of output gives us a chance to build more meritocratic systems. None of this is to say that the future will be without painful disruption, but I find it hard to believe that such a transition is intrinsically harmful to people or the economy once the dust settles.

Final thoughts

I firmly believe that the benefits of incorporating human attention and insight into products will outweigh the costs. If we fail to do so, not only will we contribute to technological unemployment, we’ll be settling for products that are worse than they could be. As part of the transition, I expect we’ll see software cross the chasm from tool to user, and more and more humans work effectively as tools with which software accomplishes its ends. Like the industrial revolution, this will be messy and imperfect, but the result will be higher productivity and jobs we’ll be nostalgic for one day when automation finally finishes the job and totally replaces us.

Follow us on Twitter: Jared Sleeper, @JaredSleeper, and Matrix Partners, @MatrixPartners.