An algorithm is a set of unambiguous instructions followed in order to complete a task. Algorithmic aversion is the tendency for people to avoid using algorithms or automated decision-making systems when given the choice, often driven by concerns about the lack of transparency in these systems or the fear of being replaced by a machine.
People innately resist change to the status quo. Understanding and addressing the functional and psychological barriers behind this resistance is critical to the successful development and implementation of new technologies.
Humans like to think of themselves as rational beings who make decisions by sifting through sources and ultimately choosing the relevant information. However, social scientists such as Herbert A. Simon have argued that individuals operate under bounded rationality: the volume of alternatives one must evaluate is so large that a genuinely rational assessment is nearly impossible.
With the rise of Big Data, organisations have embraced artificial intelligence (AI), changing the way we make decisions in the workplace. AI systems are far more capable of scanning sources and learning by themselves, and consequently of acting more rationally – in the words of Dirk Lindebaum, AI systems can become “supercarriers of formal rationality”. However, many of us have an innate aversion to this technology, widely known as algorithmic aversion.
Why does algorithmic aversion matter?
Algorithmic aversion – a form of innovation resistance – is a cognitive bias that refers to our tendency to avoid using algorithms or automated processes that identify, interpret and learn from data to suggest decisions or answers to a problem. This tendency to resist innovation carries a significant cost, limiting development and growth worldwide. A wide range of empirical research has examined why most humans exhibit at least some degree of algorithmic aversion.
Causes of algorithmic aversion:
This research splits the causes of algorithmic aversion into two groups. Functional barriers arise when a user perceives significant practical change from the adoption of an innovation, such as AI. Psychological barriers arise when the adoption of an innovation conflicts with the user’s prior beliefs.
1. Functional barriers:
Usage barrier – regarded as the most common cause of innovation resistance, this occurs when users perceive an innovation as a divergence from the status quo. In the context of algorithmic aversion, this is where AI implementation creates radical change within existing workplace practices. The impact is compounded if users view the innovation as unnecessarily complex or harder to use.
Risk barrier – the higher the perceived risk of an innovation, the more averse individuals will be to its adoption. Previous research suggests that organisations abandon even the best possible AI solutions if the environment they are applied in is deemed too risky or volatile.
2. Psychological barriers:
Tradition barrier – this arises when implementing an innovation requires change to an individual’s behaviours rather than their practices. If the implementation of AI within an organisation requires significant behaviour change (i.e. a change in the status quo), then the likelihood of aversion increases.
Image barrier – images are the mental impressions that entities in our physical world leave on our minds, and humans draw on these images when evaluating newly implemented technology. If users are left with negative images of a new technology, such as AI, they are more likely to be algorithmically averse to it.
Developing technology that users embrace – how to avoid algorithmic aversion:
To prevent users becoming algorithmically averse to AI, we must focus on developing technologies that work with users rather than replace them. At AutogenAI, we have developed cutting-edge natural language processing solutions that save organisations time in the production of bids while increasing their quality.
Our language engine helps bid writers create first drafts in a few clicks. Removing the heavy lifting of structuring responses allows writers to spend more time injecting creativity into the response, so they can add maximum value to each bid. This type of technology helps users overcome both functional and psychological barriers, because the practical and behavioural change required is positive: less time in the drafting stage, more time to create a unique bid.
Our three-step transformational model ensures that users are involved from the first day. During the Discovery phase, we work with end users to identify their biggest challenges and reveal where language technology will make the biggest impact. In the Configuration stage, we collate previous text, such as winning bids, annual reports and company website content, to train a unique model in an organisation’s voice and culture. Finally, in the Deployment stage, our Change Experts help deploy the bespoke language engine, working with users to continuously iterate it. This process ensures that end users – Bid Writers – understand both the value and the limited risk associated with the technology, so that their psychological needs are met.
Although some Bid Writers may initially fear that this technology will replace their jobs, the three-step transformation model demonstrates how it can elevate the work of Bid Writers – not replace them. The best accountants in the world know how to use Excel – and we believe the best bid writers in the world should know how to use Genny.
All humans have innate barriers to changes to the status quo, especially when a change is perceived as a potential threat to one’s job. Whilst these concerns may be valid for some artificial intelligence software, developing technology that users can embrace offers a way to balance the psychological needs of employees with the objective goals of an organisation.