Making Decisions in the Age of AI – why we can’t outsource all our critical thinking

Updated on 23rd September 2024

7-minute read

Every technology humans have previously created can be classed as a tool: predictable, explainable and, most importantly, under our control. But now, for the first time in our history, this is no longer the case.

Popularised by the likes of ChatGPT, large language models (LLMs) have become an irresistible and unavoidable part of our lives. But despite their incredible capabilities, they lack the uniquely human ability to think critically. When they output responses, they aren’t reasoning about their answers as we do: they are simply predicting the next token (a word or word fragment) that is statistically likely to follow. This means that every time we outsource our thinking to these seemingly all-knowing chatbots, our decisions lose some of that human edge.
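
To make the ‘next token’ point concrete, below is a deliberately over-simplified sketch in Python. It is not how a real LLM works internally (real models use neural networks over enormous vocabularies), and every word and probability in it is invented purely for illustration. It shows only that the generation loop amounts to ‘pick a statistically likely continuation’, not ‘reason about an answer’.

```python
# Toy illustration only: a hand-built "next token" predictor.
# Real LLMs use neural networks over vast vocabularies, but the basic
# loop is the same idea: pick a likely continuation, not a reasoned
# answer. All words and probabilities below are made up.

import random

# Hypothetical probabilities of which token follows which.
next_token_probs = {
    "the": {"market": 0.4, "team": 0.35, "answer": 0.25},
    "market": {"is": 0.6, "will": 0.4},
    "team": {"should": 0.7, "is": 0.3},
}

def predict_next(token: str) -> str:
    """Sample the next token from the (made-up) probability table."""
    options = next_token_probs.get(token, {"...": 1.0})
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Generate a short continuation starting from "the".
sequence = ["the"]
for _ in range(3):
    sequence.append(predict_next(sequence[-1]))
print(" ".join(sequence))  # e.g. "the team should ..."
```

Real systems bring vastly more context and sophistication to that loop, but the underlying operation is still prediction rather than deliberation.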

Critical thinking is the ability to analyse information objectively and make reasoned judgements. In other words, critical thinking is inseparable from the thinker: it is a person engaging with an idea in a particular way and with a particular mindset.

From this understanding, it’s easy to realise that there are some critical thinking skills that LLMs simply can’t replicate. Now, more than ever, leaders need to put a premium on uniquely human qualities in the workplace.

Human Qualities in Decision Making

  • Creativity: While there is a place for LLMs in the ideation process, ultimately, they predict from the past and will struggle to generate truly original ideas. Leaders must, therefore, celebrate human ideas and allow for blue-sky thinking. In making space for people to embrace their own creative process, and celebrating when they produce results, leaders can stop their teams funnelling every idea through an AI and converging on the same predictable suggestions.
  • Ethics: An LLM does not have the ability to reflect on its thinking process or exercise ethical judgement. Leaders must have a clear understanding of their own ethics and nurture a team that aligns with their values. For instance, if a leader in a financial services company prioritises fairness and integrity (as they should!), they must ensure their team reviews all AI-generated investment advice to avoid conflicts of interest and prevent biased recommendations.
  • Context: LLMs are blind to the world they exist in and bound by what they have been told, or ‘fed’ as data. On the other hand, humans have rich real-world experience and can draw upon their subjective experiences, intuitions, and emotions. Leaders should pair human insight with AI tools in a complementary way, rather than replacing people with AI. Consider a marketing team tasked with launching a new product in a specific cultural market. An LLM can analyse data, identify trends, and even generate content ideas. However, it cannot fully grasp the cultural nuances and local sentiments that might influence the campaign’s success. This is where a human marketer, with lived experience and a deep understanding of the local culture, can interpret the data provided by the LLM and make contextually relevant decisions.
  • Emotional Intelligence: LLMs don’t have the ability to empathise with different perspectives and experiences. They have no emotions and, therefore, no emotional intelligence. A leader in human resources may collaborate with AI ahead of an employee’s quarterly performance review, but quantitative metrics alone will not form the foundation of a constructive conversation. Approaching the interaction with empathy and emotional intelligence is more important than reducing the person to a handful of performance metrics.

And even everyone’s favourite chatbot was quick to concede that LLMs don’t have these critical thinking abilities. When we asked ChatGPT about its critical thinking capabilities, it responded:

‘Large language models like me can mimic some aspects of critical thinking, but we don’t think critically. I operate without genuine understanding, self-awareness, or the ability to reflect on moral or philosophical concerns.’ – ChatGPT.

Chatbots have certainly made it easy to anthropomorphise LLMs. This technology no longer sits in the background of our lives; instead, it interacts with us, outputting strikingly human-like answers. But that doesn’t mean we’ve suddenly made the leap from machine learning to machine thinking.

None of this is to say we should do away with them. A recent Swedish study revealed the powerful potential of marrying human ingenuity with AI. The study pitted pairs of doctors diagnosing breast cancer on their own against colleagues who were working with AI; the AI-assisted doctors accurately diagnosed 20% more cases, and did so in less time.

So, instead of thinking of LLMs as tools, try treating them like a coworker who is extraordinarily capable, eager to please, and sometimes spectacularly wrong. Have them crunch the numbers and spot the trends while you work out what matters and how to arrive at a novel solution. Don’t just use them: duet with them. When you challenge their responses, they can help fortify your weak points, and you can compensate for their flaws.

Troubleshooting AI

Next time you find yourself in the chat window of your favourite LLM, consider the following questions:

  • Does this answer seem correct? Always be ready to question an LLM, especially when its suggestions touch on areas that require a deep understanding of human emotions and/or ethics.
  • What is the impact of using this information on my decision? Reflecting on how your decisions affect yourself and others forces you to engage critically.
  • Am I combining the LLM’s output with my own thought process? Bringing the emotional intelligence, context, or creativity that an LLM can’t provide will always lead to more well-rounded decisions.
  • Have I considered the potential biases or limitations in the LLM’s responses? LLMs are trained on historical data and may perpetuate biases or have gaps in their knowledge. Be aware of these limitations and think about how to account for them.
  • Am I using the LLM to enhance my own creativity, or am I letting it do all the creative work? While LLMs can generate ideas, true originality and innovation require a human touch. Use the LLM as a tool to spark your imagination, but don’t let it replace your own creative process.

While LLMs represent a significant technological advancement, they should only be used to complement – not replace – human critical thinking. The capabilities of LLMs in data analysis and pattern recognition are remarkable, but they lack the essential human qualities of creativity, ethics, contextual understanding, and emotional intelligence. As leaders navigate the integration of AI in the workplace, it is crucial to prioritise these uniquely human attributes.


While every effort has been made to provide valuable, useful information in this publication, this organisation and any related suppliers or associated companies accept no responsibility or any form of liability from reliance upon or use of its contents. Any suggestions should be considered carefully within your own particular circumstances, as they are intended as general information only.

Do you need help creating change in your organisation?

Speak to our team today about how we can partner with you to imagine the possibilities, avoid the pitfalls, and perfect a plan to move worlds, together.
