{"id":3033,"date":"2023-03-08T15:45:42","date_gmt":"2023-03-08T15:45:42","guid":{"rendered":"https:\/\/autogenai.dsstaging2.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/"},"modified":"2025-03-26T11:52:59","modified_gmt":"2025-03-26T11:52:59","slug":"bias-in-bias-out-a-closer-look-at-diversity-in-ai","status":"publish","type":"post","link":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/","title":{"rendered":"Bias In, Bias Out: A Closer Look at Diversity in AI"},"content":{"rendered":"<p>In the ever-evolving world of artificial intelligence, new products, developments and news articles are released daily, captivating a growing curiosity among people from all walks of life. Playing around with AI is fun, sexy, and immediately engaging: its unparalleled capacity and speed; the excitement of key players, billionaires and tech gurus; the ability to revolutionise productivity and eliminate monotonous labour. And yet, there are many layers to the new technology which people find, en masse, unsettling. Exploring the depths of each psychological manifestation of this fear would require volumes. 
However, one notable aspect of this instinctual response may resonate particularly with those who have experienced marginalisation.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#Diversity_and_Bias_in_AI\" >Diversity and Bias in AI<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" 
href=\"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#AI_Inputs\" >AI Inputs<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#Cause_for_Positivity\" >Cause for Positivity<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#Conclusion\" >Conclusion<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Diversity_and_Bias_in_AI\"><\/span>Diversity and Bias in AI<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Big Tech companies like Google, Meta and Amazon are set to gain the most from AI\u2019s success, and private investment in AI overwhelmingly comes from the US. 
However, the majority of time-consuming labour in AI is consigned to overseas gig workers earning wages as low as\u00a0<a href=\"https:\/\/2022.internethealthreport.org\/facts\/\">$2.83 per hour<\/a>, so it\u2019s no surprise that the sector\u2019s relationship with diversity and ethics can seem off-putting. Global reception to AI has also revealed some grave problems in the way it produces and analyses content. Generative AI models have been found unlikely to produce images of\u00a0<a href=\"https:\/\/slate.com\/technology\/2023\/02\/dalle2-stable-diffusion-ai-art-race-bias.html\">non-white people<\/a>. Face recognition technology used by police to identify suspects is\u00a0<a href=\"http:\/\/proceedings.mlr.press\/v81\/buolamwini18a.html?mod=article_inline\">more error-prone<\/a>\u00a0at detecting darker skin tones and women than at detecting white men. Women have played around with AI-editing to find hyper-sexualised results and asked: \u201c<a href=\"https:\/\/www.thecut.com\/2022\/12\/ai-avatars-lensa-beauty-boobs.html\">Why do all my AI avatars have huge boobs?<\/a>\u201d. Carnegie Mellon found women far\u00a0<a href=\"https:\/\/www.cmu.edu\/news\/stories\/archives\/2015\/july\/online-ads-research.html\">less likely<\/a>\u00a0to be shown targeted ads for high-paying jobs. Additionally, the LLM GPT-3 was prone to spitting out sexist, racist, and violent\u00a0<a href=\"https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/\">rhetoric<\/a>.<\/p>\n<p>People are asking &#8211; entirely fairly &#8211;\u00a0who\u00a0has control of my data, and how are they\u00a0wielding\u00a0it? 
Are those working in and shaping AI diverse? Do they care about social impact, mitigating bias, and protecting public wellbeing as much as they care about money, power, and relentless innovation?<\/p>\n<p>Furthermore, as AI becomes more powerful,\u00a0so do corporate monopolies, as AI increasingly influences and prevails over global communication, surveillance, and data wealth. Additional questions grow in relevance: do those creating these systems acknowledge the threat to democracy and the validity of law? Fundamentally, do these key players realise who any negative outcomes will affect, and do they care?<\/p>\n<h2><span class=\"ez-toc-section\" id=\"AI_Inputs\"><\/span>AI Inputs<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Pivotally, AI in its current form is a tool which can only reflect the influence of its creators, the data it is trained on, and the user operating it. It is learning its moral code from all of us &#8211; reflected in the data it is fed by those who teach it.\u00a0Thus, it outputs bias based on blind spots in the teams that developed it.\u00a0AI, as a new technology with the capacity to further increase hyper-connectivity, is likely to perpetuate and amplify any societal problems that we fail to solve ourselves. However, technology has the power to\u00a0<a href=\"https:\/\/link.springer.com\/chapter\/10.1057\/9781137349088_4\">restructure our physical and social worlds<\/a>, and proactivity can allow new technologies to produce hugely positive societal impacts. Ultimately, AI is a mirror, and we will get out of it what we put into it. The diversity of AI workers is thus paramount: they control the data input, the capacity, and the reach of AI.<\/p>\n<p>Most critically, the AI Now Institute reports that a chiefly white male coding workforce is\u00a0causing algorithmic bias. 
A lack of diversity in any sector leads to stagnation in innovation,\u00a0<a href=\"https:\/\/www.managers.org.uk\/knowledge-and-insights\/listicle\/the-five-business-benefits-of-a-diverse-team\/\">growth<\/a>\u00a0and positive development, for both the company and wider society. Considering the inevitable and rapidly approaching reach of AI in intimate areas of life like health, education and safety &#8211; this threat becomes more pressing. It\u2019s unlikely these creators seek to develop a harmful, homogeneous, biased product, but can someone regularly and effectively anticipate issues and problems completely outside of their own personal experience? Great power in the hands of a privileged few\u2026 (it\u2019s a tale as old as time &#8211; and one we know rarely ends well for most).<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Cause_for_Positivity\"><\/span>Cause for Positivity<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>This feels suitably daunting, but these ideas have not gone unnoticed. For example, AutogenAI\u2019s Chief Data Scientist, James Huckle,\u00a0explores how human intervention is used to train LLMs on a\u00a0<a href=\"https:\/\/arxiv.org\/abs\/2104.08758\">clean<\/a>\u00a0corpus of data, neutralise word bias, and evaluate and re-train models to avoid toxic or offensive language. CTO of OpenAI, Mira Murati, acknowledges what many of us have considered: \u201cThere are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it\u2019s important that we bring in different voices, like philosophers, social scientists, artists, and people from the\u00a0<a href=\"https:\/\/time.com\/6252404\/mira-murati-chatgpt-openai-interview\/\">humanities<\/a>.\u201d She disregards any idea that regulation at this point would slow innovation, noting the importance of policymakers and regulators in getting involved now. 
Governments and international organisations are taking action too, with\u00a0<a href=\"https:\/\/www.gov.uk\/government\/news\/more-women-to-be-supported-back-into-stem-jobs-in-government-backed-training?dm_i=2OYA,1D61Y,8P5KTK,5COB2,1\">the UK government<\/a>\u00a0launching a training scheme to get women into STEM jobs, and\u00a0<a href=\"https:\/\/artificialintelligenceact.eu\/\">the EU<\/a>\u00a0beginning regulation with the AI Act, for example banning applications such as the Chinese government&#8217;s\u00a0<a href=\"https:\/\/www.aiplusinfo.com\/blog\/chinas-social-credit-system\/\">social scoring system<\/a>, and regulating those that might particularly perpetuate bias, like applications which scan and rank CVs. This regulation alone is not sufficient, and regulating AI is hard &#8211; the technology is fast-moving and ever-changing. However, it nods to a global interest in protecting democracy, and to the importance of public pressure on governments and corporations to pursue moral AI development and effective regulation.<\/p>\n<p>Doubtless, there will be AI systems created which are limited, biased and greedy. However, with enough public interest, we can have far more which are ethical and inclusive. We are already building systems which are\u00a0<a href=\"https:\/\/www.pwc.com\/gx\/en\/industries\/healthcare\/publications\/ai-robotics-new-health\/transforming-healthcare.html\">improving healthcare<\/a>,\u00a0<a href=\"https:\/\/hbr.org\/2019\/10\/how-ai-and-data-could-personalize-higher-education\">personalising education<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.enterpriseappstoday.com\/news\/artificial-intelligence-in-agriculture-market-growth-usd-13-33-bn-by-2032-at-26-7-cagr-global-analysis-by-market-us.html\">increasing crop productivity<\/a>. 
Amongst a host of potential applications, AI can help\u00a0<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3485128#d1e4323\">tackle climate change<\/a>, and improve communication for non-native language speakers.<\/p>\n<p>Whilst AI can produce bias at scale, it can also produce systems that\u00a0detect\u00a0bias at scale. For example, large language models can be used effectively to pick out nuanced biases in phrasing, sentence structure, and choice of words. Furthermore, fears of AI destroying education as we know it have become widespread, but as this discussion has intensified, early-stage\u00a0<a href=\"https:\/\/hai.stanford.edu\/news\/human-writer-or-ai-scholars-build-detection-tool?utm_source=linkedin&amp;amp;utm_medium=social&amp;amp;utm_content=UComm_linkedin_Stanford-University_202302151523_sf175174497&amp;amp;utm_campaign=&amp;amp;sf175174497=1\">AI-or-human-writer-detectors<\/a>\u00a0have been built. The power of AI is immense &#8211; but we currently have the power of\u00a0demand\u00a0and social pressure to influence its development.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Conclusion\"><\/span>Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The AI company I work for is diverse, and there is a focus on regulation. But I can account for only one organisation, and with Big Tech and an overrepresentation of privileged men owning key processing power and vast stores of global data, and monopolising AI innovation, there is valid cause for concern about the dangers of this lack of diversity. However, the very fact that fear of dangerous AI outputs is so pervasive suggests that this concern, movement for regulation, desire for diversity, and care for thoughtful innovation can be used to spur demand for diverse, unbiased, ethical AI.<\/p>\n<p>AI is inevitable. It is powerful. This should be exciting. This should create huge benefits. 
This could have an endlessly positive impact, and transform life for the better by creating opportunities for human connection and innovation. Whilst the power and onus to do good, regulate effectively, and innovate ethically lies resoundingly on big corporations funding and developing AI, and governments and international bodies, we should remember, optimistically and rationally, that as a public we do still have some democratic power that we can harness to lobby, boycott, and encourage AI to be a positive force for change.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the ever-evolving world of artificial intelligence, new products, developments and news articles are released on a daily basis, captivating a growing curiosity among people from all walks of life. Playing around with AI is fun, sexy, and immediately engaging: its unparalleled capacity and speed; the excitement of key players, billionaires and tech gurus; the&#8230;<\/p>\n","protected":false},"author":174,"featured_media":3034,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","inline_featured_image":false,"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3033","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-category-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Bias In, Bias Out: A Closer Look at Diversity in AI | AutogenAI<\/title>\n<meta name=\"description\" content=\"We offer a concise introduction to the expansive and complex topics of diversity, bias, and ethics in AI. 
Let\u2019s dive deeper into diversity.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Bias In, Bias Out: A Closer Look at Diversity in AI | AutogenAI\" \/>\n<meta property=\"og:description\" content=\"We offer a concise introduction to the expansive and complex topics of diversity, bias, and ethics in AI. Let\u2019s dive deeper into diversity.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"AutogenAI APAC\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-08T15:45:42+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-03-26T11:52:59+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/autogenai.com\/apac\/wp-content\/uploads\/sites\/5\/2025\/02\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Mairi Bruce\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Mairi Bruce\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/\"},\"author\":{\"name\":\"Mairi Bruce\",\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/#\\\/schema\\\/person\\\/900b088b4b56f0184c40e04efcf35c0c\"},\"headline\":\"Bias In, Bias Out: A Closer Look at Diversity in AI\",\"datePublished\":\"2023-03-08T15:45:42+00:00\",\"dateModified\":\"2025-03-26T11:52:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/\"},\"wordCount\":2541,\"image\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/wp-content\\\/uploads\\\/sites\\\/5\\\/2025\\\/02\\\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg\",\"articleSection\":[\"AI\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/\",\"url\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/\",\"name\":\"Bias In, Bias Out: A Closer Look at Diversity in AI | 
AutogenAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/wp-content\\\/uploads\\\/sites\\\/5\\\/2025\\\/02\\\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg\",\"datePublished\":\"2023-03-08T15:45:42+00:00\",\"dateModified\":\"2025-03-26T11:52:59+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/#\\\/schema\\\/person\\\/900b088b4b56f0184c40e04efcf35c0c\"},\"description\":\"We offer a concise introduction to the expansive and complex topics of diversity, bias, and ethics in AI. Let\u2019s dive deeper into diversity.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/#primaryimage\",\"url\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/wp-content\\\/uploads\\\/sites\\\/5\\\/2025\\\/02\\\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg\",\"contentUrl\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/wp-content\\\/uploads\\\/sites\\\/5\\\/2025\\\/02\\\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg\",\"width\":1920,\"height\":1080,\"caption\":\"Bias In, Bias Out: A Closer Look at Diversity in 
AI\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Bias In, Bias Out: A Closer Look at Diversity in AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/#website\",\"url\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/\",\"name\":\"AutogenAI APAC\",\"description\":\"Win more business\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/#\\\/schema\\\/person\\\/900b088b4b56f0184c40e04efcf35c0c\",\"name\":\"Mairi Bruce\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4f7ea6723a58daf13eaf3c94b1b2c12bc2a345019bdc83a6fb27b65fac412ae2?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4f7ea6723a58daf13eaf3c94b1b2c12bc2a345019bdc83a6fb27b65fac412ae2?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4f7ea6723a58daf13eaf3c94b1b2c12bc2a345019bdc83a6fb27b65fac412ae2?s=96&d=mm&r=g\",\"caption\":\"Mairi Bruce\"},\"url\":\"https:\\\/\\\/autogenai.com\\\/apac\\\/blog\\\/author\\\/mairi_bruce_989ed621-ee41-43a6-b2d8-443d18de19ee\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Bias In, Bias Out: A Closer Look at Diversity in AI | AutogenAI","description":"We offer a concise introduction to the expansive and complex topics of diversity, bias, and ethics in AI. 
Let's dive deeper into diversity.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/","og_locale":"en_US","og_type":"article","og_title":"Bias In, Bias Out: A Closer Look at Diversity in AI | AutogenAI","og_description":"We offer a concise introduction to the expansive and complex topics of diversity, bias, and ethics in AI. Let's dive deeper into diversity.","og_url":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/","og_site_name":"AutogenAI APAC","article_published_time":"2023-03-08T15:45:42+00:00","article_modified_time":"2025-03-26T11:52:59+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/autogenai.com\/apac\/wp-content\/uploads\/sites\/5\/2025\/02\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg","type":"image\/jpeg"}],"author":"Mairi Bruce","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Mairi Bruce","Est. 
reading time":"12 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#article","isPartOf":{"@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/"},"author":{"name":"Mairi Bruce","@id":"https:\/\/autogenai.com\/apac\/#\/schema\/person\/900b088b4b56f0184c40e04efcf35c0c"},"headline":"Bias In, Bias Out: A Closer Look at Diversity in AI","datePublished":"2023-03-08T15:45:42+00:00","dateModified":"2025-03-26T11:52:59+00:00","mainEntityOfPage":{"@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/"},"wordCount":2541,"image":{"@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/autogenai.com\/apac\/wp-content\/uploads\/sites\/5\/2025\/02\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg","articleSection":["AI"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/","url":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/","name":"Bias In, Bias Out: A Closer Look at Diversity in AI | AutogenAI","isPartOf":{"@id":"https:\/\/autogenai.com\/apac\/#website"},"primaryImageOfPage":{"@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#primaryimage"},"image":{"@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/autogenai.com\/apac\/wp-content\/uploads\/sites\/5\/2025\/02\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg","datePublished":"2023-03-08T15:45:42+00:00","dateModified":"2025-03-26T11:52:59+00:00","author":{"@id":"https:\/\/autogenai.com\/apac\/#\/schema\/person\/900b088b4b56f0184c40e04efcf35c0c"},"description":"We offer a 
concise introduction to the expansive and complex topics of diversity, bias, and ethics in AI. Let's dive deeper into diversity.","breadcrumb":{"@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#primaryimage","url":"https:\/\/autogenai.com\/apac\/wp-content\/uploads\/sites\/5\/2025\/02\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg","contentUrl":"https:\/\/autogenai.com\/apac\/wp-content\/uploads\/sites\/5\/2025\/02\/Bias-In-Bias-Out-A-Closer-Look-at-Diversity-in-AI.jpg","width":1920,"height":1080,"caption":"Bias In, Bias Out: A Closer Look at Diversity in AI"},{"@type":"BreadcrumbList","@id":"https:\/\/autogenai.com\/apac\/blog\/bias-in-bias-out-a-closer-look-at-diversity-in-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/autogenai.com\/apac\/"},{"@type":"ListItem","position":2,"name":"Bias In, Bias Out: A Closer Look at Diversity in AI"}]},{"@type":"WebSite","@id":"https:\/\/autogenai.com\/apac\/#website","url":"https:\/\/autogenai.com\/apac\/","name":"AutogenAI APAC","description":"Win more business","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/autogenai.com\/apac\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/autogenai.com\/apac\/#\/schema\/person\/900b088b4b56f0184c40e04efcf35c0c","name":"Mairi 
Bruce","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/4f7ea6723a58daf13eaf3c94b1b2c12bc2a345019bdc83a6fb27b65fac412ae2?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/4f7ea6723a58daf13eaf3c94b1b2c12bc2a345019bdc83a6fb27b65fac412ae2?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/4f7ea6723a58daf13eaf3c94b1b2c12bc2a345019bdc83a6fb27b65fac412ae2?s=96&d=mm&r=g","caption":"Mairi Bruce"},"url":"https:\/\/autogenai.com\/apac\/blog\/author\/mairi_bruce_989ed621-ee41-43a6-b2d8-443d18de19ee\/"}]}},"_links":{"self":[{"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/posts\/3033","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/users\/174"}],"replies":[{"embeddable":true,"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/comments?post=3033"}],"version-history":[{"count":2,"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/posts\/3033\/revisions"}],"predecessor-version":[{"id":3777,"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/posts\/3033\/revisions\/3777"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/media\/3034"}],"wp:attachment":[{"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/media?parent=3033"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/categories?post=3033"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/autogenai.com\/apac\/wp-json\/wp\/v2\/tags?post=3033"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}