What is the impact of automation on our cognitive abilities?

When it comes to technological change, there’s no shortage of pessimistic thinkers. One of their recurring themes sounds fairly familiar: are new technologies making us dumber?

As with many received ideas, the reality is more complicated than that.

While relieving us of key cognitive tasks, most automation solutions are also expanding our cognitive capabilities.

In truth, it all depends on the mental tasks these artifacts leave us with. Let’s see how we can assess automation tools according to whether they complement us or compete with us.


Enhancing vs complementary vs competitive automation


Since the dawn of civilization, we humans have invented and used tools to extend our capabilities. We designed windmills to grind flour, steam engines to transport ourselves, and washing machines to handle the laundry. All these inventions helped us get a lot more done with minimal human input. And that’s what we mean when we speak about technological progress.

But when it comes to our mental capabilities, the story of automation becomes much more complicated. What UX pioneer Donald Norman calls cognitive artifacts are devices designed to serve a representational function: tools like written notes, to-do lists, and computers that assist us in our cognitive tasks.

Because they take over some of our cognitive tasks, these cognitive artifacts are double-edged swords. They can either enable us to focus on deeper cognitive processes or free us from any mental effort at all.

In that sense, not all automation tools are created equal. Depending on how they solicit our cognitive capabilities, they can be classified into three categories:

  • Enhancing artifacts: some tools enhance our cognitive capabilities by enriching our mental representations. One iconic example is the abacus, which lets us visualize complex calculations and helps our minds learn to compute numbers on their own. What’s specific about enhancing artifacts is that, after some training, we no longer need them: they have taught us mental models we can rely on to crack hard problems.
  • Complementary artifacts: like enhancing artifacts, complementary artifacts augment our mental representations. Whiteboards, for example, help us sustain long and complex calculations: they help us remember our reasoning, get an intuitive overview of it, and move on to the next step. But once we stop using them, we lose the mental clarity they gave us. We need to keep relying on these whiteboards or other written supports to carry out our calculations.
  • Competitive artifacts: competitive artifacts keep all the cognitive work for themselves. They help us handle complex mental problems while relieving us of any mental effort. Calculators fall into this category: they give us a fast answer to a mathematical problem without making us do the math ourselves.

Thinkers like David Krakauer especially worry about this last category of automation tools. As GPS apps, recommendation systems, and autonomous devices (all forms of competitive artifacts) become widespread, we will no longer need to figure out problems by ourselves. We won’t need to continuously strengthen our cognitive abilities. For Krakauer, that could mean the degeneration of what makes us human.

Yet, as compelling as this argument is, the reality is more complex. Cognitive artifacts are complementary or competitive to varying degrees. It all depends on how they affect our cognitive representations.


Automation and cognitive affordances


David Krakauer is not the first to express fears about cognitive tools.

You might know Plato’s critique of writing. Through the voice of Socrates, Plato blamed writing for removing the need to remember things. He accused it of harming our memorization abilities and discouraging us from coming up with original ideas.

Technology pessimists all rely on the same line of reasoning: as new technologies make tasks easier to perform, they also remove the need to do the work and solve problems by ourselves. They take away our autonomy.

What these thinkers get right is that new cognitive artifacts are indeed fundamentally shifting the way we think.

What they miss is that these artifacts also leave us with more cognitive power to think through complex subjects. By taking over heavy mental processes, they let us spend more time thinking with new cognitive representations. It’s from the nature of these “mental affordances” that we can assess the net benefits of automation solutions.

For example, writing relieves us of the task of remembering every idea and thought that passes through our minds. It replaces the mental effort of holding our past and current thoughts in our heads while carrying out complex reasoning. As a result, it lets us sustain longer, more consistent lines of reasoning and share them easily. And because our minds get a quick overview of all our ideas, it becomes even easier to think productively. Switching from purely mental reasoning to pen and paper is a net gain.

Let’s take a different and more equivocal use case. GPS technologies seem at first glance to greatly help us find our way. They free us from the cognitive tasks of figuring out our current location, the path to our destination, and the direction to take. They free us from relying on our often unreliable sense of orientation.

In return, they demand different cognitive abilities from us, such as on-screen map reading and direction-following. When you compare 3D orientation skills with 2D map-reading skills, it’s clear that they call on different mental abilities. GPS makes us think less in spatial terms and more in terms of following precise instructions. Moreover, it makes us depend on it to orient ourselves. Without GPS, we struggle even more to find our way home. It has made us poor spatial readers of our environment.

Still, it’s not all black and white. GPS lets us focus more on the road, expand our horizons, and, with services like Google Maps, discover unexpected places. It makes us think more about the journey than the logistics.

What about automation solutions that seek to fully replace humans, like self-driving cars and auto-land systems? When these technologies become widespread, professional drivers and pilots won’t need to control the vehicle in real time anymore. They will just have to monitor these systems and intervene in case of emergency.

That dramatically alters their responsibilities and duties on board. On the one hand, they no longer need to focus permanently on driving, and they have more time to make the trip an enjoyable experience. On the other hand, they can lose the feeling of control and mastery that the job of driving or flying was all about.

That example leads us to a worst-case scenario. What will happen when these machines definitively outcompete us? One original answer would be to merge with the very technology that is replacing us.


How automation could augment and fuse with our cognitive abilities

Neil Harbisson and his light-translating antenna

In 1957, the Soviets launched the first successful Earth-orbiting satellite. This accomplishment compelled American authorities to press ahead with their space initiatives.

Manfred Clynes and Nathan Kline were among the scientists enlisted to meet this new challenge. To help American astronauts get into space, they sought to address a specific problem: how humans could biologically survive there.

Clynes and Kline realized that humans outside the Earth’s atmosphere cannot breathe and are exposed to dangerous solar radiation. That led them to a new approach: altering human physiology by designing technological extensions of our innate biology. They soon called humans augmented by these additive technologies “cyborgs.”

But cyborgs aren’t only a way to extend our capabilities in hostile environments. They can also be a way to integrate competitive technologies into ourselves and preserve our human agency. In other words, a way to fuse with our cognitive artifacts and form a single entity.

How can you be sure that you’re merging with competitive technologies and not merely being overtaken by them? One framework, inspired by the extended mind thesis, offers an answer.

It argues that two independent systems, A and B, need constant non-linear interactions to create a merged system, C.

When you’re manipulating a tool and this tool is shaping your mental representations, you can say that you’re effectively fusing with it. It no longer makes sense to distinguish yourself from the machine; you become an integral part of a new, augmented entity.

What does this mean concretely? Suppose you use your phone to check a friend’s birthday. In that case, the interaction is purely linear: your phone provides information without changing the way you think. You could still remember the date on your own, without the phone.

However, when you rely on Google Maps to find your way home, the application adapts to your behavior. Depending on your position, it recommends various places and updates you on your progress toward your destination. So while Google Maps shifts its suggestions according to your location, you change your plans according to its recommendations, which in turn affects what it recommends next. That means you’re in a kind of cyborg relationship with Google Maps.
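To make the distinction more tangible, here is a minimal Python sketch of the two kinds of interaction. It is not drawn from any real application; the function names, the contact list, and the routing values are invented purely for illustration. The first function stands in for the linear case (the tool just hands back a stored fact), the second for the non-linear case (your position and the tool’s suggestions keep reshaping each other).

# Toy sketch: linear lookup vs. feedback-loop interaction.
# All names and values below are hypothetical, chosen only to illustrate the idea.

def linear_lookup(contacts: dict, friend: str) -> str:
    # Phone-as-database: the answer flows one way and leaves your thinking unchanged.
    return contacts[friend]

def feedback_navigation(position: str, steps: int = 3) -> list[str]:
    # GPS-like loop: each suggestion changes your position,
    # and your new position changes the next suggestion.
    suggestions = {"home": "cafe", "cafe": "park", "park": "office"}  # pretend routing table
    route = [position]
    for _ in range(steps):
        position = suggestions.get(position, position)  # the tool reacts to you
        route.append(position)                          # you react to the tool
    return route

print(linear_lookup({"Alice": "March 3"}, "Alice"))  # one-way flow of information
print(feedback_navigation("home"))                    # mutually shaping loop

In the first call, nothing about the answer feeds back into your behavior; in the second, every output becomes the next input, which is the kind of constant, non-linear exchange the extended mind framework points to.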

This complementary interaction has its limits, though. The flow of information from Google Maps is modest, short-lived, and touches only a small part of our cognitive abilities. There are better examples of cyborg-like interaction.

You might know Neil Harbisson. This self-proclaimed cyborg has lived with color blindness all his life. To be able to perceive color, he built an antenna that translates light into sound. As a result, by moving the antenna, he can hear the light rays surrounding him. It’s a perfect case of technological interdependency.

While Harbisson’s antenna converts light into audible signals, Harbisson’s brain translates these signals back into color. At the same time, Harbisson moves the antenna to better capture light and thus shapes his own perception.

This interdependency runs deep: Harbisson’s entire perception depends on this constant exchange of information. He can even imagine artworks at the crossroads of sound and light.

As a result, by fusing with his cognitive artifact, Harbisson augments his sensing capabilities while preserving his human agency. The antenna is no longer competing with him; it is effectively an enhancing part of him.


It’s hard to say whether that’s the future of human-machine interaction. But one thing is sure: we need to better assess whether our technological artifacts are complementing us or competing with us.

Subscribe to our newsletters to learn how automation is going to impact your job, and how to make it work for you.
