A fight-or-flight response to AI is understandable, but unwise.

In his seminal book Thinking, Fast and Slow, Daniel Kahneman tells the story of a fire commander who acted on instinct to save the lives of his team. The firefighters had entered a building to douse a kitchen blaze. Instinctively, their commander heard himself shout: “Let’s get out of here.” In that instant, he had no idea why he had issued the order. But his team followed his instruction and evacuated.

Only afterwards did the commander realize what had spooked him. The kitchen fire was unusually quiet, and his ears unusually hot. It transpired that the core of the fire was not in the kitchen but in the basement – directly below where he and his team had been standing. Minutes after he issued the order, the floor collapsed; by then, the firefighters were safely outside.

The story illustrates how so-called Type 1 thinking – the instinctive response that is sometimes framed as intuition – can be invaluable in familiar situations. Having fought hundreds of fires, the commander had compiled, over a lifetime of experience, myriad reference points that together formed a heuristic: a way of problem-solving that may be neither perfect nor immediately explicable, but that nevertheless provides an instant path towards a short-term goal.

Intuition is useful when you are working in a recognizable context, like a firefighter fighting a fire. Yet its potency decays quickly in a new context, or when experimenting with novel applications. The advent of generative AI has shortened the shelf life of what we know about the world. It has accelerated change on a previously unimaginable scale. It has so enhanced connectivity between ideas, individuals and institutions that networks are now unmappable. The generative AI era is the epitome of a situation where our instincts are no longer fit for purpose. They can be actively detrimental: Type 1 thinking can lead us to assume the basement is burning when it is not.

When faced with a perceived threat, humans’ Type 1 heuristics go into overdrive. They reduce our portfolio of responses to ‘fight’ or ‘flight’. This leads to squandered alliances. Imagine that horses were unknown to us: they would induce fear. We would try to escape them or disable them. The idea that we would befriend them – and put them to work as transport, laborers and athletes – would be alien to our Type 1 brain.

The AI age requires, instead, Type 2 thinking: the cognitive system that is unhurried and deliberate. This is our rational brain, responsible for the slow thinking referenced in the title of Kahneman’s book. It draws on reason, evidence and calculation to consider options, draw conclusions and – crucially – identify opportunities. Type 2 thinking requires more effort than Type 1 instinct. Yet it allows us to tackle intricate challenges, override biases and make sound decisions in complex situations. If ever we needed to draw more on that part of our brain, it is in the age of AI.

Generative AI has the potential to be a powerful workhorse for leaders: a formidable companion that tackles the laborious and routine, allowing leaders to focus their efforts on the creative and innovative – the segments of the business landscape where human endeavor adds most value. But Type 1 thinking sees generative AI not as an ally, but as an adversary: a threat which should be attacked or escaped.

The advent of AI has ushered in an era of fearfulness. Yet the dread of the new is instinctive, and irrational. Look around you. The building is probably not on fire. The floor is unlikely to fall in. Leaders are probably not going to perish amid the technological shift.

Our instinctive Type 1 brain thinks we are in danger. If we allow ourselves to be led astray by its fears, it is likely to be proved right.