Let’s talk about corn.
Corn and how it gets from growing in fields onto your table.
Below is a video of a corn harvesting machine:
And here is a video of people gathering corn:
So, I hear you say, what all does this have to do with machine learning?
A lot, as it so happens.
I’m a skeptic. A cynic. And for all the hype about AI, machine learning, cognitive computing (pick your flavour and associated nomenclature, whatever you prefer) doing everything short of literally curing cancer, I can’t help but think that perhaps sometimes we’re getting a little caught up in it all. That we’re tackling the right problems the wrong way. That we are in love with complexity for its own sake.
That maybe we should take a step back and go “whoa, whoa, whoa – what is it we’re really trying to do here?” before we start building models and training them on massive datasets and creating 1000s of APIs and plugging them all into each other in the hopes that one day the whole mess will eventually become sentient.
Machine learning can do amazing things. It can solve very difficult problems. Problems which were once thought impossible, or at the very least so difficult for a computer as to be insurmountable. In particular, I’m thinking about this XKCD comic, which someone more clever than me has already pointed out did not age well.
So let’s get back to corn. Imagine you’re trying to automate the process of picking corn. You want to build a corn gathering machine. Naively, if you wanted to automate that process, maybe you’d say – “AHA – I’m a brilliant inventor, I’m going to create a beautiful machine that can pick corn just as well as any person”. So you think about how people gather corn, how they walk up and down the rows of stalks and pull the ears off of them, and then shuck the corn later, removing the husk and silk. So you start to create a robot that looks just like a person, with arms to reach out and grab the ears, and pull them off and then put them in a bag or the back of a truck, just like a person would.
And then you realize: this is a very, very difficult thing to do.
Because now, not only do you have to create a robot that can walk like a person, but also one that has a way of identifying ears of corn, and that has arms that can reach out and grasp the corn, and can apply enough force to pull the corn off the stalks without damaging it, and then understand how to put it in the bag or the back of a truck. And it also has to know when the bag or truck is full, how to identify which ears are ready to be picked and which aren’t, when it’s reached the end of a row, etc.
In short, you have to create a corn-gathering android.
But hold on a second. Why are you doing this in the first place? Why would you assume that if you wanted to automate the process of gathering corn that the device that gathers it would, should, or could do so the same way you do? The only reason you pick corn like that is that you’re a person with arms and legs and eyes and ears.
Now watch the first video again and think about how different that machine is from one that gathers corn the way a person does.
So again, you’re saying, what does this have to do with machine learning?
Think about how the usage of the telephone spread. Or how the different continents had roads and railroads and superhighways built across them, to let us travel from A to B. Think about how hard it would be if we never built all those roads. If we just tried to solve the problem without changing anything. Without redefining the parameters of that problem. Reshaping it.
Think about the way a human gathers corn and the way a piece of farm machinery does. Or about the difference between washing clothes by hand and the way a washing machine works.
So in our modern urban digital wilderness, why do there seem to be these problems where we won’t invest that same amount of effort and technology and engineering to reframe them, but instead try to solve them with ever more complexity?
So I ask you:
- Should we: teach a car to identify stop signs under any driving condition – day or night, clear day or rainstorm or blizzard – or change our infrastructure to incorporate self-driving cars?
- Should we: build complex robots with complicated software and teach them how to walk and run and climb stairs and remain upright, or should we just build robots with wheels?
- Should we: create chatbots we can talk with, to book a trip, buy groceries, or shop online, or should we just provide more relevant search results on a website or an app with a user experience that doesn’t suck?
- Should we: use speech recognition (powered by machine learning) to recognize people saying the names of options when they call customer support, or just let them press numbers on the keypad?
- Should we: build sophisticated models and code to score customers and predict the next best action, or just ask them what they really want?
I can’t help but feel like sometimes we’re looking at things the wrong way. That just because machine learning and data can solve a lot of the hard problems doesn’t mean it can be, or should be, applied to everything. Sometimes when all you have is a hammer, everything looks like a nail. And sometimes knowing how to solve a problem is mostly about how you frame it.
Just a thought. Let me know what you think. Tell me I’m wrong or tell me I’m right in the comments.