By Chris Rowell and Marcus de Courtenay
Intuition is regarded by some as the summit of human judgment. Studies have shown how chess masters are able to recognise patterns almost instantaneously, and that firefighters intuitively know when a burning house is about to collapse. In these moments, our “gut” is invaluable and even lifesaving.
And yet at the same time, we are also influenced by hidden biases that often lead us to make irrational decisions.
So, where do these valuable intuitions come from? And when are we likely to have them, as opposed to misdirected ones? And as the age of AI overtakes us, could computers become better than we are at these “gut” decisions?
Human Intuition: More Than A Feeling?
Malcolm Gladwell devoted an entire book to instant pattern recognition, in his New York Times bestseller, Blink. In it, Gladwell provides a rich account of how several experts intuitively sensed that a statue, ostensibly from the 6th century BC, was fake, even after analysts had erroneously confirmed its authenticity. What was especially interesting here was that the experts could not clearly explain how they knew this; they just knew. This instinctive knowing is what we have come to label “intuition.”
So how is this happening?
It’s believed that intuition accumulates from constant exposure to patterns that we internalise and instinctively recognise, removing the need for conscious reasoning. These repetitious experiences embed into what psychologist Daniel Kahneman calls our System 1 thinking.
System 1 produces emotions and inclinations based on deep prior learning, often driving actions that must subsequently be justified by System 2, our slow-thinking, deliberate, and ‘rational’ brain. But because System 1 operates beneath the surface, its processing is not easily put into words, which is why Gladwell’s experts could not articulate why the statue was fake but unconsciously knew that it must be.
This unconscious knowing has been described as manifesting in a wave of “intuitive repulsion.” Our System 1 intuitions have the potential to provide virtually immediate and remarkably accurate insights.
The Guts of The Issue: Where Intuition Fails...
Sadly, it is this very same System 1 that leads us to stumble over a laundry list of cognitive biases when we make intuitive judgments that are not grounded in skill and experience. For instance, in areas outside our expertise, we have a tendency to seek out and pay attention to the wrong information, anchor on irrelevant details, and change our preferences based on how information is framed.
The result? We overlook important information, restrict our creativity and critical thinking, irrationally chase our losses, and systematically underestimate the time and resources needed when planning new projects.
Even without these biases, the limits of our intuition are laid bare as soon as the rules change. The same chess masters who can instantly recall where pieces sit in a game-like configuration have no advantage in recalling the positions of pieces strewn randomly across the board. The reliability and usefulness of expert intuition is therefore limited to narrow and predictable areas, with clear feedback loops, in which a person has built up thousands of hours of repeated experience. Scientists have referred to these as “kind” learning environments.
Accumulating 10,000 hours of repeated experience is the conventional advice for making our System 1 intuitions in a particular domain airtight. In practice, our brains do this by deeply learning thousands of repeated patterns (called “chunking”), which, once recognised, give us rapid access to a set of accurate predictions and potential responses.
However, cognitive biases can creep into even “kind” learning environments where humans are the experts. For instance, studies have found that upon recognising a familiar sequence of moves, expert chess players anchored on it and found it incredibly difficult to recognise other moves that were superior.
The AI Advantage
We know that computers also do very well at making predictions in narrow domains with clear feedback. In fact, we are now seeing the ascendancy of machines across a wide range of domains where actions are repetitive and can be codified and tracked as digital data. Examples include accurately reviewing legal contracts, scanning radiology images for fractures, efficiently linking riders to drivers in a mobility service, and detecting credit card fraud within a sea of information.
Even in areas that were once exclusively the domain of human intuition, such as playing chess and authenticating artwork, we have been surpassed by machines. What each of these examples has in common is that they are narrow areas with repetitive actions that generate vast amounts of data. In matters of domain-specific human intuition, it seems computers now have all our strengths (and more), and none of our weaknesses.
When considering the implications of this, it’s helpful to think about how much of what we do resembles a game of chess, with clear rules, finite dimensions, and immediate feedback. Probably not much!
Far from “kind” learning environments, much of our business and social lives resemble “wicked” learning environments, with missing information, ambiguity around patterns of cause and effect, and delayed, sporadic or even non-existent feedback. Human intuition and machines alike face huge limitations in wicked learning environments, where historical patterns mean little for future prediction.
As we close out the year, wicked learning environments in business readily come to mind. For instance, COVID-19 profoundly shifted consumer attitudes and behaviour in ways that rendered predictive models built on pre-pandemic data obsolete.
And in an ever-more complex and connected world, “incremental” innovations and environmental shifts can result in complex and multifaceted challenges that render narrow historical data and patterns largely irrelevant.
Going beyond external events, we should also realise that wicked learning environments are not a bug, but rather a feature of the business world. Leaders inevitably face multiple competing goals that cannot easily be reconciled. Inherent trade-offs manifest between short-term and long-term goals, between financial and environmental objectives, and between optimising current operations and investing in innovation.
Fortunately, we have the benefit of a second system of thought. System 2 thinking, although less dynamic than our intuitions, has the capacity to meet wicked learning environments on their own messy and complex terms. And, while it does not always produce perfect decisions, it lets us reach informed conclusions from diverse data sets by applying, and adapting, broad critical models. Imagination, analogical reasoning, creativity, morals, and empathy, as features of our System 2 thinking, also come to the fore in wicked learning environments, enabling us to conceptualise complex adaptive challenges and posit new causal relationships.
What Does This Mean For Us?
The good news, then, is that we think with more than our guts. And despite machines becoming experts in matters of narrow intuition, we are still much better at seeing the “bigger picture” and at dealing with multifaceted areas where past patterns do not help predict future outcomes.
We can fall back on our System 2 thinking to perform reasoned and deliberative decision-making, taking into account complex data sets. And there is no argument that in these areas of more “general” intelligence, we’ll have the edge over our digital counterparts for some time yet.
Need More Help?
Keen to find out more about different systems of thinking and how to leverage both to see the bigger picture? Performance Frontiers are experts in helping organisations undertake the fundamental shifts required. Speak to Chris about how we can partner with you to draw upon the combined powers of artificial intelligence and human intuition to enable dynamic decision-making today.