In the ever-evolving world of artificial intelligence, new products, developments and news stories appear daily, drawing curiosity from people of all walks of life. Playing around with AI is fun, sexy and immediately engaging: the unparalleled capacity and speed, the excitement of key players, billionaires and tech gurus, the promise of revolutionising productivity and eliminating monotonous labour. And yet there are many layers to this new technology that people find, en masse, unsettling. Exploring every psychological dimension of that fear would take volumes. One notable aspect of this instinctual response, however, may resonate particularly with those who have experienced marginalisation.

Diversity and Bias in AI

Big Tech companies like Google, Meta and Amazon are set to gain the most from AI’s success, and the bulk of private investment in AI comes from the US. Meanwhile, much of the sector’s time-consuming labour is outsourced to overseas gig workers earning wages as low as $2.83 per hour, so it is no surprise that its relationship with diversity and ethics can seem off-putting. Global experience with AI has also revealed grave problems in what it produces and how it analyses. Generative image models have proven unlikely to produce images of non-white people. Facial recognition technology used by police to identify suspects is more error-prone on darker skin tones and on women than on images of white men. Women experimenting with AI photo-editing have found hyper-sexualised results and asked: “Why do all my AI avatars have huge boobs?”. Researchers at Carnegie Mellon found women far less likely to be shown targeted ads for high-paying jobs. And the LLM GPT-3 was prone to spitting out sexist, racist and violent rhetoric.

People are asking, entirely fairly: who has control of my data, and how are they wielding it? Are those working in and shaping AI diverse? Do they care about social impact, mitigating bias and protecting public wellbeing as much as they care about money, power and relentless innovation?

Furthermore, as AI becomes more powerful, so do corporate monopolies, since AI increasingly shapes global communication, surveillance and data wealth. Further questions grow in relevance: do those creating these systems acknowledge the threat to democracy and the rule of law? Fundamentally, do these key players realise who any negative outcomes will affect, and do they care?

AI Inputs

Crucially, AI in its current form is a tool which can only reflect the influence of its creators, the data it is trained on, and the user operating it. It learns its moral code from all of us: from the data it consumes and from those who teach it. Its outputs therefore carry the blindspots of the teams that developed it. As a technology with the capacity to further increase hyper-connectivity, AI is likely to perpetuate and amplify any societal problems we fail to solve ourselves. Yet technology also has the power to restructure our physical and social worlds, and proactivity can allow new technologies to produce hugely positive societal impacts. Ultimately, AI is a mirror: we will get out of it what we put into it. The diversity of AI workers, the people who control its data, capacity and reach, is therefore paramount.

Most critically, the AI Now Institute reports that a chiefly white, male coding workforce is driving algorithmic bias. A lack of diversity in any sector stifles innovation, growth and positive development, for both the company and wider society. Given the inevitable and rapidly approaching reach of AI into intimate areas of life like health, education and safety, this threat becomes more pressing still. It is unlikely these creators set out to build a harmful, homogeneous, biased product, but can anyone reliably anticipate problems that lie entirely outside their own personal experience? Great power in the hands of a privileged few… it is a tale as old as time, and one we know rarely ends well for most.

Cause for Positivity

This feels suitably daunting, but these ideas have not gone unnoticed. AutogenAI’s Chief Data Scientist, James Huckle, for example, explores how human intervention is used to train LLMs on a clean corpus of data, neutralise word bias, and evaluate and re-train models to avoid toxic or offensive language. OpenAI’s CTO, Mira Murati, acknowledges what many of us have considered: “There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.” She dismisses the idea that regulation at this point would slow innovation, stressing the importance of policymakers and regulators getting involved now. Governments and international organisations are taking action too: the UK government has launched a training scheme to get women into STEM jobs, and the EU has begun regulating with the AI Act, banning applications such as the Chinese government’s social scoring system and restricting those likely to perpetuate bias, like systems that scan and rank CVs. This regulation is not sufficient, and regulating a technology as fast-moving and mutable as AI is hard. But it signals a global interest in protecting democracy, and with it the importance of public pressure on governments and corporations to pursue ethical AI development and effective regulation.
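To make “neutralising word bias” a little more concrete, here is a minimal Python sketch of one well-known technique, projection-based debiasing of word embeddings (in the spirit of Bolukbasi et al.): estimate a bias direction from definitional word pairs, then remove that component from words that should be neutral. The tiny vectors below are toy placeholders for illustration, not AutogenAI’s actual pipeline.

```python
import numpy as np

# Toy 4-dimensional embeddings; a real system would load vectors
# from a trained model (e.g. word2vec or an LLM's embedding layer).
emb = {
    "he":       np.array([ 0.8, 0.1, 0.3, 0.0]),
    "she":      np.array([-0.8, 0.1, 0.3, 0.0]),
    "man":      np.array([ 0.7, 0.2, 0.1, 0.1]),
    "woman":    np.array([-0.7, 0.2, 0.1, 0.1]),
    "nurse":    np.array([-0.4, 0.5, 0.2, 0.3]),
    "engineer": np.array([ 0.5, 0.4, 0.3, 0.2]),
}

# 1. Estimate the bias direction from definitional pairs.
pairs = [("he", "she"), ("man", "woman")]
diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
bias_dir = diffs.mean(axis=0)
bias_dir /= np.linalg.norm(bias_dir)

def neutralise(vec: np.ndarray) -> np.ndarray:
    """Remove the component of `vec` that lies along the bias direction."""
    return vec - np.dot(vec, bias_dir) * bias_dir

# 2. Neutralise words that should carry no gender signal.
for word in ("nurse", "engineer"):
    before = np.dot(emb[word], bias_dir)
    emb[word] = neutralise(emb[word])
    after = np.dot(emb[word], bias_dir)
    print(f"{word}: projection on bias axis {before:+.2f} -> {after:+.2f}")
```

Real systems work with embeddings of hundreds of dimensions and pair this geometric step with data curation and re-training, but the principle is the same: measure the unwanted direction, then project it out.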

Doubtless, some AI systems will be built which are limited, biased and greedy. But with enough public interest, we can have far more which are ethical and inclusive. We are already building systems that are improving healthcare, personalising education and increasing crop productivity. Among a host of potential applications, AI could help tackle climate change and improve communication for non-native speakers.

Whilst AI can produce bias at scale, it can also power systems that detect bias at scale. Large language models, for example, can be used to pick out nuanced biases in phrasing, sentence structure and word choice. Similarly, fears of AI destroying education as we know it have become widespread, but as that discussion has intensified, early-stage detectors that distinguish AI-generated from human-written text have been built. The power of AI is immense, but we currently have the power of demand and social pressure to influence its development.
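As a flavour of what detecting problematic language at scale can look like, here is a minimal sketch using the Hugging Face transformers library. It screens a batch of sentences with unitary/toxic-bert, a publicly available toxicity classifier standing in here for a purpose-built bias detector; a production system would use a model fine-tuned on the subtler biases discussed above.

```python
from transformers import pipeline

# Load a publicly available toxicity classifier as a stand-in
# for a dedicated, fine-tuned bias detector.
detector = pipeline("text-classification", model="unitary/toxic-bert")

documents = [
    "Our engineers, and their wives, are invited to the launch party.",
    "The team shipped the release on schedule.",
]

# Score each document and flag anything the model considers problematic.
for doc in documents:
    result = detector(doc)[0]
    print(f"{result['label']:>10} ({result['score']:.2f}): {doc}")
```

The same pattern scales to millions of documents, which is exactly the point: the machinery that amplifies bias can also be turned around to audit for it.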

Conclusion

The AI company I work for is diverse, and there is a focus on regulation. But I can account for only one organisation. With Big Tech and an overrepresentation of privileged men owning key processing power, holding billions of parameters’ worth of global data and monopolising AI innovation, there is valid cause for concern about the dangers of this lack of diversity. However, the very fact that fear of dangerous AI outputs is so widespread suggests that this concern, the movement for regulation, the desire for diversity and the care for thoughtful innovation can be harnessed to spur demand for diverse, unbiased, ethical AI.

AI is inevitable. It is powerful. This should be exciting. It should create huge benefits, and it could transform life for the better by creating opportunities for human connection and innovation. Whilst the power, and the onus, to do good, regulate effectively and innovate ethically lies chiefly with the big corporations funding and developing AI, and with governments and international bodies, we should remember, optimistically and rationally, that as a public we still hold democratic power: we can lobby, boycott, and push for AI to be a positive force for change.