One way of thinking about the Internet is as a giant matching machine. You have a question, it finds you an answer. You want a flight, it finds you a good deal. You want a date, it can find that for you too. Lots of them, in fact.
But is this the whole story? Not exactly.
A fairly simple problem/solution scenario is how things worked in the days of Web 1.0, a pre-data-collection web that hadn’t yet developed, let alone mastered, micro-targeting by attributes such as demographics, psychographics, and location. And before you cry “surveillance!”, bear in mind that it is the advertising-supported, data-slicing-and-dicing web that brings so much to all of us each day in the form of news, entertainment, and productivity tools. Not to mention that the systems that optimize online marketing also help filter out what could be called ‘noise’: if I don’t have kids, I won’t get daycare ads on Facebook; if I don’t have a dog or a cat, I won’t get coupons for kibble popping up alongside the YouTube videos I watch.
Is this all for the better? As with many things, it depends how you look at it, and it depends whom you ask. If you ask mathematician Cathy O’Neil, author of Weapons of Math Destruction, the answer would be no. At a recent talk held at Microsoft Research, O’Neil began by describing what an algorithm is: “It’s something we build to make a prediction about the future…and it assumes that things that happened in the past will happen again in the future.” O’Neil explained that algorithms use structures such as decision trees, built from if/then and yes/no statements, and then apply historical information, pattern matching, and machine learning to build models that can make thousands to millions of predictions, in a fraction of the time it would take a human being with a calculator and a scratch pad.
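The decision-tree idea O’Neil describes can be made concrete with a toy sketch. Everything here is invented for illustration: the attribute names, the thresholds, and the rule that past repayment patterns predict future ones; a real model would learn these splits from historical data rather than have them hand-coded.

```python
def predict_will_repay(applicant):
    """Toy decision tree: a chain of if/then, yes/no splits that turns
    assumed past patterns into a prediction about a new applicant.
    All attributes and cutoffs are hypothetical."""
    if applicant["income"] >= 50_000:
        # historical pattern (assumed): higher earners with stable jobs repaid
        return applicant["years_employed"] >= 2
    else:
        # historical pattern (assumed): lower earners repaid when debt was low
        return applicant["existing_debt"] < 5_000

print(predict_will_repay({"income": 60_000, "years_employed": 3, "existing_debt": 0}))   # True
print(predict_will_repay({"income": 40_000, "years_employed": 1, "existing_debt": 9_000}))  # False
```

The point of the sketch is that every branch encodes an assumption that the past will repeat, which is exactly the property O’Neil flags.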
So what’s not to like? The problem, according to O’Neil, is that the agenda of the algorithm is decided by the builder of the algorithm. What goes into an algorithm is necessarily ‘curated’, and when some variables are selected while others are left out, then a value system is embedded in the algorithm. These value systems in turn can affect decisions that used to be made by humans and are now made by machines, such as hiring, creditworthiness, professional evaluation, and insurance eligibility. If you’ve ever wondered why you often spend a half hour on hold when you call customer support while your friends say they get through right away, the explanation may be more than “we’re experiencing larger than normal call volumes.” Maybe they are, but maybe it’s something else. O’Neil cites the example of how common it is for customer service lines to have predetermined whether you’re a high-value or low-value customer based on information cross-referenced with your phone number, and, well, you can figure out who gets put through to a real live human operator and who has to listen to extended musical accompaniments of flutes and vibraphones.
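The call-routing filter O’Neil describes might look something like the following sketch. The score table, threshold, and queue names are all hypothetical; the one real idea, taken from the text, is that a precomputed “customer value” keyed to your phone number decides which queue you land in before a human ever picks up.

```python
# Hypothetical scores a company might precompute per phone number
# (e.g. from purchase history). All values here are invented.
CUSTOMER_SCORES = {
    "555-0100": 0.92,  # frequent big spender
    "555-0199": 0.18,  # low predicted lifetime value
}

def route_call(phone_number, threshold=0.5):
    """Route a caller based on a precomputed value score.
    Unknown numbers default to the lowest score."""
    score = CUSTOMER_SCORES.get(phone_number, 0.0)
    return "priority_queue" if score >= threshold else "hold_queue"

print(route_call("555-0100"))  # priority_queue
print(route_call("555-0199"))  # hold_queue
```

Note the value judgment hiding in two builder-chosen numbers: the score assigned to you, and the threshold that separates the flutes-and-vibraphones queue from the human one.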
O’Neil calls such processes “systematic filtering” and is concerned that machine learning, a key component of artificial intelligence (which is said to be the next revolution in computing), “automates the status quo” and in turn creates “pernicious feedback loops”.
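A toy simulation shows how “automating the status quo” can compound into a feedback loop. The setup is invented: one group starts with 90% of historical approvals, the model reproduces that history while shaving the other group’s approvals by 20%, and each round’s decisions become the next round’s training data.

```python
share_a = 0.9  # group A's share of past approvals in the training data (assumed)

for round_num in range(5):
    approvals_a = share_a              # model mirrors the historical share
    approvals_b = (1 - share_a) * 0.8  # group B's approvals penalized by 20%
    # the model's own output becomes next round's "history"
    share_a = approvals_a / (approvals_a + approvals_b)
    print(f"round {round_num + 1}: group A share = {share_a:.3f}")
```

The initial skew is never corrected; it is re-learned and amplified each round, with group A’s share climbing from 0.90 toward 1.0, which is the “pernicious” part.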
Approaches based on big data and machine learning are among the shiniest objects in the marketer’s toolkit today, but practitioners ought to be aware that when an organization relies on pattern recognition for activities such as consumer insights or new product development, new learning can actually be impeded rather than enhanced. This is because the use cases that make up the rules fed into the system can prematurely shut down ideas that fall outside its parameters. The caveat for marketers, therefore, is to realize, as O’Neil points out, that machine learning-based systems perpetuate past practices, and the past is not where insights and innovation tend to reside.