Is AI ethically correct?
By Helio Junior
Recently, the Home Office in London inaugurated a new tech department called the Office for Artificial Intelligence. According to its recently published "National AI Strategy", the UK needs to strengthen its AI technology to maximize the country's presence in global standards. An estimated 1.3 million UK companies are expected to adopt AI, contributing approximately £200 billion to the economy by 2040. However, the UK's AI strategy is still a "child" compared to those of China and the US, which are in an AI race for supremacy.
One of the issues is the shortage of diversity (in both staff and testing) in national tech firms. Manchester Digital found that approximately 86% of employees in the technology sector (across the public and private sectors) in the North West of England are White, 6% Asian, 2% Black, 2% Mixed-race and 1% other ethnic groups. Because AI models learn from the behaviour of their software developers, such a tremendous racial imbalance in the tech industry means AI will be programmed according to the characteristics, cultural perspectives and political concerns of white individuals.
As a consequence, AI algorithmic decision-making prioritises a certain ethnic group and amplifies discriminatory ideas about vulnerable communities through negligent design and production processes. For example, Stephen Cave and Kanta Dihal found that the popular imagery of AI is consistently portrayed as white (in both colour and ethnicity). The well-known human-like robot Sophia is a telling physical example of how AI humanoids fail to be multi-racially represented.
Personally, I have never seen a Caribbean, Latino or even Black AI humanoid. I can tell from my experience assisting at an immigration NGO that people with non-European features, specifically Black individuals, struggle to be analysed, identified and categorized by AI systems when applying for pre-settled status or a visa. In most cases, facial scan tests do not recognize the gender and features of the applicant. Even the Home Office admitted that its AI tool was making immigration processes uncomfortable for certain racial groups applying for documents, after being sued by Foxglove, an independent non-profit founded by Cori Crider.
This "White racialisation" of AI automatically neglects multi-cultural representation, causing racial biases in algorithms. As an illustration, the United Nations reported that an AI facial recognition system in the state of Michigan (US) falsely accused an African American man, who was arrested for a shoplifting crime that he did not commit. The local authorities trusted this AI programme; however, the results showed that the captured citizen was the wrong one. It was also noted that the AI tool was not well programmed to identify the facial differences of Black individuals because it was trained by white developers and tested only on white people.
As a KPMG survey demonstrated, the COVID-19 pandemic accelerated the implementation of artificial intelligence in politics and finance. However, problems involving surveillance, job security and digital diversity and inclusion were magnified by that same AI automation. It seems that government regulators expected AI to be totally accurate without causing any ethical implications for citizens' lives.
Since governments in the US, Germany, China and now the UK are accelerating AI technology in all sectors, offering a more realistic and up-to-date image of AI in society is not merely a requirement, it is a must! I mean, it is already 2022: the world is interconnected, and people from different cultural backgrounds are present in all social spheres. Thus, by embracing all ethnicities and consulting a balanced mix of representatives from the public sector, academia and the private sector, I believe that AI development will benefit all ethnic communities and significantly reduce the discrimination provoked by AI algorithms.