Anyone who’s ever fiddled with the Yoast SEO widget on WordPress will know that making sense of your words to an algorithm like Google’s is all about labelling your content in a way that gives the algorithm some context about its meaning. Context (or the lack of it) is the thing that makes inspirational quotes so meaningless. Context changes the meaning of words, creating both jargon and sense from the exact same sequences of letters. Context (and its sublime cousin, nuance) lies at the core of our brain’s ability to navigate the complex physical and social world we live in.
Context is also going to be huge for the global economy over the next decade. Context lies at the heart of the bleeding edge generation of computers and software, creating a world of AI (artificial intelligence) which is essential for automation, robotics and the Internet of Things (or IoT). But understanding the role of context and details is hard for us. We live in a world where little snippets of data are seen as trivial. We’re always under pressure to reduce things down to big, simple, bald statements. We define most of our rational decisions by simplicity and logic, not endless attention to detail.
Which makes context, and understanding it, a thinking problem. So naturally, ManVsBrain is here to put our thinking problem about context into context, contextually speaking…
Your brain needs context…
In all the essays on this site, the theme of the relationship between our conscious, rational thoughts and our irrational, unconscious emotional thinking processes features a lot. But it’s not always easy to make a connection between the conscious and unconscious thinking we do, because the unconscious bit is something that, by definition, we’re not aware of. Which also means that when we come to think about things like context, which are often recognised unconsciously, we miss their importance.
First, let’s consider context as far as your brain is concerned. Try this little context test:
Next time you are walking down the street, put in a pair of headphones and crank up some loud music. Then close your eyes. Try to keep walking. How far do you get?
Of course, you won’t keep walking at all. You’ll stop. That is really interesting… after all, is the decision to keep walking a rational, logical choice your brain is making over your body? Yes. You decide to walk; your body doesn’t just unconsciously walk places without your conscious say-so.
Okay, so if that’s the case, why do you instinctively stop walking when you can’t see or hear the world around you? Well, duh, that’s obvious right? You don’t want to have an accident.
But be honest about that process, did you simply choose to stop walking as a logical, rational thought?
Your decision to stop was instinctive. You will have felt emotions that made you stop. Fear. Anxiety. Common sense. Call it what you like, the choice to stop walking was unconscious. It’s only when you try to rationally justify it to yourself that you’ll find perfectly logical reasons for not walking down the road with your eyes closed listening to loud music.
But you stopped walking before you consciously worked out it’s a stupid idea. It was an unconscious decision made because your brain had lost its context.
What’s happening in this experiment is very simple. All the time you’re walking, your brain’s unconscious processes are measuring the world around you. Sights, objects, distances, sounds, feelings against your skin. Your senses. Now, you couldn’t consciously measure all that stuff; it would make walking impossible.
Imagine having to remember the complex sequence of muscle movements to move your legs (hundreds of different movements for each step). Add to that mapping your environment and collision detection by focusing on everything that might hit you on a busy main road. Add to that remembering the details of your route to work like you’re looking at it on Google Maps. Throw in remembering how to operate a mobile phone, and all the motor control to do that with your left hand whilst carrying a hot drink in your right. That’s thousands of precise, fine-detail decisions. It would be a huge mental effort if we did it deliberately.
But we don’t. Our brain develops neural networks that automatically do that kind of essential, repetitive stuff. Which is why crossing the road, on the way to work, drinking a coffee and thumbing your mobile phone looking at emails doesn’t appear to be a remarkable feat of brainpower. That complexity is the role of context in our lives. The little stuff we don’t even know we’re doing but, quite literally, keeps us alive and functioning within human society.
In fact, the 2014 Nobel Prize in Physiology or Medicine was won by the researchers who painstakingly mapped a whole part of your brain that’s dedicated to making maps of your surroundings, or in other words, your physical context.
Artificial intelligence is all about context
Our brains need the unconscious context of the physical world for us to think effectively, and now the next generation of computers and machines are doing something very similar. Like us, if you give computers more context, they become more expert. In many respects, what we think of as expertise is actually defined by context and the process of using contextual data analytically, called inference.
The best way to explain the role of context in expertise is to consider cooking. Good cooks all share a sense of context when it comes to the relationships between ingredients. That ability to taste something and decide “it needs salt” or “saffron would be good in that” is an expression of contextual decision making. It explains why everyone (when they first start to cook) follows a recipe to the letter but the result isn’t always what they were expecting. Things have a habit of going wrong, without any obvious reasons why.
In that scenario, we see things like the performance of the specific oven, or the freshness of the eggs, or the properties of a specific brand of butter (and so on) coming into play. The contextual factors that define the outcome of the recipe won’t be apparent to an inexperienced cook. However, adapting the recipe to function within the context of the real world cooking environment you find yourself in, is what defines an expert cook.
In the last couple of decades, the desire to capture context has been the holy grail of computer development, and it sits at the heart of the commercial movement behind artificial intelligence. Now when it comes to AI, people immediately think of cyborgs ruling the planet, which is absurd but makes for a decent movie (and some dreadful ones, come to think of it). But in reality, you might consider AI to mean something else, namely contextual computing.
A couple of weeks back, I was lucky enough to catch up with AI expert Dom Davis from Rainbird AI. Davis is working at the leading edge of AI systems, having been part of the team that created Rainbird, one of the first easily accessible AI systems for adding context to online systems, specifically, in the realm of creating artificially intelligent customer service systems. As he explained in his talk (paraphrased a bit):
“Let’s say you’re serving a meal with a miso soup starter, a fish course and a roast beef main. What wine do you choose to accompany it? Most people will say ‘white’ with the fish or ‘red’ with the beef, but beyond that? What grape? What Chateau? There’s a layer of expert knowledge that’s missing. In the past you might explain your menu to a wine merchant and they could suggest specifics based on expert knowledge, but what about online? Will automated help do it? Usually not. It will just make basic connections between keywords. If you call customer services, that isn’t usually much good either, because the people in the call centre aren’t normally wine experts; experts are too expensive to field customer service calls, because the reasons people call customer services are so varied. They’ll put you on hold whilst they find you an expert. By now, you’ve gone from the web, to phoning one person, to being passed to another. It’s slow and expensive, and the customer might well just click the ‘people who bought this also bought that’ button and not get any advice at all, or the result they wanted. Expert systems solve an economic problem: making relevant, contextual online expertise economically viable. Which means you need AI.”
Rainbird does that. It connects together multiple data sources to add context to a query. The computers aren’t experts, they don’t know anything, but they can process data. What AI does is process data to define the context of a question in order to offer an expert solution. The process is challenging, as Davis explains:
“Some years ago they did an experiment where they fed huge amounts of Wikipedia data into an AI system. The result it came back with? All humans are famous.”
It’s a logical conclusion too – all the historical records of humanity are defined by famous people and famous events. We don’t record everyone’s activity, so to an AI, based on the available data, every human is a significant historical character or a celebrity of some sort. However, if you define your data sources more precisely for specific tasks, like expert knowledge of a topic, AI can be very effective.
Rainbird’s AI systems work because they recognise the importance of context as the defining characteristic of expertise. By digesting the relationships between different kinds of data, their AI system can offer expert advice by inference: by linking together all the context that makes a choice relevant, their systems can make a judgement about the answer that’s right for the person asking the question.
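To make the idea concrete, here’s a toy forward-chaining inference sketch in Python. It is purely illustrative: the rules, fact format and wine suggestions are invented for this example, and it bears no resemblance to Rainbird’s actual engine. Each rule links facts about the meal to an expert conclusion, and the engine keeps applying rules until nothing new can be inferred:

```python
# Toy rule base: (set of required facts, fact that can be inferred).
# The facts and wine pairings here are invented for illustration.
RULES = [
    ({"course:fish"}, "wine:white"),
    ({"course:roast beef"}, "wine:red"),
    ({"wine:white", "starter:miso soup"}, "grape:dry riesling"),
    ({"wine:red", "course:roast beef"}, "grape:cabernet sauvignon"),
]

def infer(facts):
    """Forward chaining: apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire the rule if all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

menu = {"starter:miso soup", "course:fish", "course:roast beef"}
suggestions = infer(menu)
print(sorted(f for f in suggestions if f.startswith("grape:")))
```

Run on the menu from Davis’s example, the engine chains from the courses through the broad wine colours to specific grape suggestions, something no single keyword match would produce. Real systems use far richer knowledge representations, but the chaining principle is the same.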
“Which dog breed is best?” is a meaningless question because it lacks context, but people ask it anyway. Try Googling it yourself. What you’ll see is a lot of content that lists dog breeds, but it lacks context. You have to work out which breed is best for yourself. So in many respects, the system can’t give you any advice; it just points you at the data sources you need to make a decision for yourself. But if you could make a decision for yourself, you wouldn’t be asking the question in the first place, would you?
Systems like Rainbird can connect multiple doggy data sets – size, playfulness, shedding, temperament, exercise needs – and relate those back to the person asking the question. By gathering a couple of data points about you, for example your age or whether you have a garden or live in a tower block, it can use your own context to define the context of the dog recommendation it makes back to you. Its intelligence and expertise are expressed in the remarkable achievement (for a computer system) of not recommending a greyhound to an arthritic old lady who lives on the tenth floor in a one bedroom flat, even if she really likes lean, fast, energetic dogs.
What makes that really important for the future of computing is simple: computers can process a lot more information than you can, faster and with reliable memories that aren’t distracted by the radio or whether they’re feeling tired or not. Once the AI understands the context of your question, and your circumstances, it can digest huge volumes of data and make suggestions that appear to be expertise – but of course, they’re not really human expert suggestions at all, just highly contextual data processing. Which is, at a programmatic level, what human expertise is as well. Judgement, instinct, gut feelings, experience – they’re all just our organic equivalent of highly advanced contextual data processing.
The Internet of Things economy is all about context too…
Context computing will soon lie at the heart of the global economy, because the Internet of Things (the world of connected devices) is projected to produce between 25% and 50% of global economic output by the 2020s. It’s an astonishing claim, because the Internet of Things excludes connected devices like mobile phones and computers – the things we think of now as connected devices. This vast new marketplace will be (as the name implies) everyday things. Washing machines. Tyres. Shoes. Cartons of milk… and so on.
And what’s the economic gold dust that makes it so valuable? Context.
Context, in the world of things, is very exciting. Over the last few years, the dramatic increase in component miniaturisation, wifi connectivity, battery life and wireless charging has meant that the technology to make a device connect to big computer systems (in the cloud) and share data about what it’s doing is relatively simple. Small connected chips the size of a sticky label are all it takes. This new ability to make everything log where it is, how it’s moving and what physical forces (like heat or acceleration) are acting upon it, means we can foresee a future where almost everything can gather context about how it’s being used.
Tyre giant Michelin is already experimenting with tyres that know when they need to be changed, and the context in which they are being used… which means Michelin can offer the tyre user products that are best suited to their needs, and learn about the environments in which tyres wear, enhancing the durability and safety of tyres in different kinds of applications. Soon we’ll see clothes that know when they’re being worn, where they’re being worn, how often they get washed and so on. It sounds a bit creepy but, in fact, that information would let clothing manufacturers produce much more durable and useful clothing.
It will be a world where street lamps are never out, because the lamp post will know when the bulb is flickering and about to die, and someone will fix it before it gets dark again (which is already happening in Holland). Medical equipment will be able to identify how it’s being used and, based on the data from all the other machines out there, make suggestions on how to improve treatment for each specific user. The possibilities are endless, and although they all sound like science fiction, they’re not; they’re all just systems based on capturing context.
There’s a company called Estimote that has pioneered the use of “nearables” (thanks to tech consultant Jo Vertigan for the steer). Nearables are small stickers containing electronics that can be detected when a phone or computer comes near them (using Bluetooth Low Energy, or BLE).
By connecting the computer with the sticker, you see something remarkable happen.
Let’s say the nearable stickers are placed on objects in a store. By monitoring each object’s movements (when it gets picked up), or by monitoring when a mobile phone passes the object, the store can learn precisely which objects are looked at the most, handled the most, and where people walk as they browse in the store. That context enables all sorts of things, like optimising the layout of the shop to match people’s natural browsing habits, or the positioning of the items on display and the items around them. With that kind of context, everything in our physical world can be adjusted to be more usable.
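A minimal sketch of that kind of analytics (the event format and item names below are invented for illustration; a real deployment would consume a live BLE event stream): aggregate the “picked up” events per sticker to rank which items shoppers handle most.

```python
from collections import Counter

# Simulated stream of nearable events: (sticker_id, event_type).
# In reality these would arrive from the stickers via a BLE gateway.
events = [
    ("mug-red", "picked_up"), ("lamp-01", "picked_up"),
    ("mug-red", "picked_up"), ("mug-red", "put_down"),
    ("lamp-01", "picked_up"), ("mug-red", "picked_up"),
]

# Count only the pick-up events per item.
pickups = Counter(sid for sid, etype in events if etype == "picked_up")

# Rank items by how often shoppers handled them.
for item, count in pickups.most_common():
    print(f"{item}: handled {count} times")
```

Trivial as the counting is, this is exactly the raw material the store needs: a ranked picture of attention that no till receipt could ever show.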
Take that idea out of a shop and into the world and suddenly, we have traffic signals that can adjust themselves to ease congestion, or make crossings safer. We’ll see street lamps that switch on and off as people need them, saving power. We’ll have washing machines that know what they’re washing and never shrink a delicate jumper or dye our whites pink again. Ultimately, we’ll have robots in our house that clean and tidy, take out the trash and make dinner; transforming notions of convenience as well as social care.
Okay, maybe they’ll take over the world as well, enslaving humanity in the process. (Actually, that will never happen… see below.) But apart from that “we’ll all be going to work by jetpack” factor that surrounds Hollywood AI and robotics, you can see how adding a little context to the things around us transforms the usefulness of computers and everyday objects. It also transforms the kinds of products and associated services we use and buy, synthesising them into a world where the company that used to sell tyres starts selling you safe distance travelled, where the guys who used to sell light bulbs now sell illumination in dark places, and where the people who used to sell refrigerators now sell a fridge that restocks itself when it’s running low on context-aware packages of food.
As products and packaging transmit more data about what they’re doing, we learn more about their context and the businesses that make them can build enhanced, even completely new models based on how they supply and service their customers. That’s why the IoT (and product context) is so economically important. It could even mean the end of advertising and marketing as we know it… read more about that here.
The context of the future – and killer robots?
It’s easy to get carried away with excitement when it comes to the possibilities of AI and IoT. However, at the root of all that enthusiasm is something inherently organic and natural, namely our brain’s ability to process data and make decisions instinctively. Which of course, leads us to the cyborg elephant in the room, namely… AI powered robots enslaving humanity, like in Terminator or The Matrix (the only ones worth mentioning).
But ultimately, even creating giant killer robots that enslave mankind would just be a result of making machines more like living things. And therein lies the rub, because using contextual computing and AI to make things more like living things will never create machines that can think for themselves, at least, not think in a way that makes them decide to do something we haven’t designed them to do in the first place.
Unlike the contextual nature of living things, AI and IoT devices are made, not evolved. So whereas we can’t design people not to enslave mankind or commit crime, we can design AI and context-aware devices exactly that way. Or to be more accurate: unless we build AI robots with a button labelled “Take over the world”, they won’t. They can’t. They don’t think.
Which ultimately shows the power of context. Our sense of intelligence is defined by it, which means, of course, our perception of what is and isn’t a threat is defined by it too. For example, a world-leading physicist might appear intelligent in the context of adult society, but to a child, seem quite the reverse if he can’t get a high score on Flappy Bird. So that person might be intellectually intimidating to me, but appear like a loser to my kids. In reality, our physicist is neither, or both; it’s all a matter of perception and context for the people around him. That power of perception to warp the reality of a person or thing’s context is powerful stuff.
If you’ve ever seen the 1936 teen shock movie Reefer Madness, you’ll probably laugh at the main theme of the story, which is that when teenagers smoke marijuana, they become violent, sex-mad psychopaths. Contrast that with the reality of smoking marijuana, which generally makes people sit around watching crap TV and stuffing their faces with chocolate. That’s perception (or a lack of it, depending on your opinion) for you.
By the same token, we might imagine that a computer that can detect when the fridge has run out of milk is an AI capable of world domination. But to the people who deliver your groceries, it’s just a fridge and a milk carton with wifi connectivity.
Similarly, if your robot butler takes delivery of the groceries and puts the carton of milk in the fridge, it might appear to be one step away from annihilating humanity with its sheer potency. But as far as Robot Jeeves is concerned, that process is neither a delivery, nor a carton, nor even a fridge – just a contextual relationship between objects and a process that it’s been designed to perform.
And that’s the whole point that kills off the power mad AI killer robot story. It’s a deeply human assumption to think that an AI understands the world in terms that make sense to us. They don’t understand anything, they simply make calculations and do things. So unless the robot, or the fridge is also designed to take over the world and enslave humanity, mankind is safe.
Phew. Of course there’s nothing to stop a super rich evil genius from designing an army of AI robots to do just that… but that’s a whole different problem. And unlikely. If you were that good at making AI robots, you’d probably make them useful and productive and make a fortune with your own tech company, right?
Context is everything. And more importantly for the business of technology and your brain, those little things you barely even notice are, on aggregate, a lot more valuable than you think.