And if AI turns against us?
By Helio Junior
One of the reasons for the recent rise in cyberattacks is the breakdown of social connection in digital accessibility, which disproportionately affects vulnerable communities (European Parliamentary Research Service). Digital illiteracy marginalizes minority groups and deepens social polarization, as AI-driven digital services tend to categorize people by race, demographics, and income level. Notably, research by Robert Bartlett found that digital financial services reduce the biases of in-person services (discrimination or racism) by approximately 40%. Nevertheless, as I showed in my previous post on the "Whiteness of AI", AI algorithmic systems can still discriminate against non-white individuals through algorithmic decision-making.
One example of this digital disintegration is in rural regions of the US, where only 27% of the population uses online banking apps; usage is lowest among Black (18%) and Asian (28%) residents (Terri Friedline). In addition, approximately 90% of developing countries suffer from inadequate internet infrastructure (WEF). Unequal internet access thus fuels digital illiteracy and, in turn, maximizes cyber risks: vulnerable groups lacking basic cyber awareness become easy targets for hackers.
Nowadays, most key sectors are harnessing the potential benefits of Artificial Intelligence (AI) to streamline processes, and AI investment is expected to surpass $100 billion across all industries by 2025 (UNICRI). But have you ever wondered whether this evolving technology can be used for malicious purposes? In general, technology is supposed to promote freedom and democracy, yet AI is already becoming a powerful instrument for cybercriminals, who take advantage of algorithmic systems to launch more effective attacks on government databases, financial corporations' reports, and healthcare institutions' confidential information. What can these bad guys get? These criminal groups, usually hacktivists, nation states (state-sponsored actors), corporate spies, and individual hackers, seek economic gain, media notoriety, espionage intelligence, or the disruption of a country's social order. Another key point to remember is that the stolen databases (private information) can be sold on the Deep Web or to corrupt governments.
Why do hackers use AI? Essentially, by exploiting Machine Learning, a form of AI that allows machines (computers or smart gadgets) to learn independently from the data fed into them (Javatpoint). As clarified by the UK Office for Artificial Intelligence (2021), Artificial Intelligence (AI) is a software system able to imitate human intelligence by processing data through algorithms to produce outcomes (results). In other words, machine learning predicts outcomes, while AI coordinates the action and determines how a given task can best be completed.
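To make "learning independently from data" concrete, here is a minimal toy sketch: a classifier that is never told the rule, but infers a decision boundary from labelled examples. The scenario, the data, and the scoring scheme are all invented for illustration; real machine learning models are vastly more complex.

```python
# Toy supervised learning: infer a decision boundary from labelled examples.
# Hypothetical scenario: classify messages as phishing (1) or legitimate (0)
# based on a count of suspicious keywords. Data is invented for this sketch.

def train_threshold(examples):
    """Learn a score threshold separating the two classes.
    examples: list of (score, label) pairs, label in {0, 1}."""
    ones = [s for s, y in examples if y == 1]
    zeros = [s for s, y in examples if y == 0]
    # Place the boundary midway between the two class averages.
    return (sum(ones) / len(ones) + sum(zeros) / len(zeros)) / 2

def predict(score, threshold):
    """Classify a new, unseen score using the learned threshold."""
    return 1 if score >= threshold else 0

# Labelled training data: (suspicious-keyword count, label).
data = [(0, 0), (1, 0), (2, 0), (6, 1), (7, 1), (9, 1)]
t = train_threshold(data)
print(predict(8, t))  # a new message with 8 suspicious keywords -> 1
```

The point is that the program never contains an explicit "phishing rule"; the rule emerges from the data, which is exactly why the same mechanism can be turned to malicious ends when attackers control the training data.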
But how do these attacks work? AI-powered hacking can automate intrusion processes through mimicry techniques. For instance, deepfakes are AI-generated media that manipulate a person's features or voice, or an object, within a video or audio clip. Deep learning algorithms replace facial features or voice patterns (by analysing large data sets), making the result look authentic. In addition, the World Economic Forum has warned that with deepfakes, AI-powered attacks can capture a significant share of confidential data, which can then be used for financial exploitation or to destroy someone's reputation. As an illustration, the CEO of a UK energy firm was scammed by a voice deepfake, losing around 220,000 euros (Penta Security). Percentage-wise, the WEF estimated approximately 900% growth in deepfakes online, "from 14,678 in 2019 to 145,277 by June of the following year."
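The "approximately 900%" figure can be checked directly from the two counts the WEF quote provides:

```python
# Verifying the WEF growth figure cited above: deepfakes online
# rose from 14,678 (2019) to 145,277 (June 2020).
before, after = 14_678, 145_277
growth_pct = (after - before) / before * 100
print(round(growth_pct, 1))  # -> 889.8, consistent with "approximately 900%"
```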
Furthermore, the United Nations Interregional Crime and Justice Research Institute (UNICRI) has found that AI is now being used in terrorist attacks. Regional and international rivalries (over reputation and influence) generate conflicts in the virtual world. We can see this, for example, in the China-US and Russia-Ukraine tensions: in both cases, technology is being used to threaten the opposing nation. Recently, Ukraine's digital transformation ministry said: "Moscow continues to wage hybrid war and is actively growing its information and cyber space capabilities". One result was intimidating warnings in Ukrainian, Russian, and Polish posted on the official website of the Ukrainian Foreign Ministry after a cyber-attack, which also planted malware (malicious code) designed to destroy devices from within. It is strange, almost futuristic, that even conflicts seem to be modernizing from physical disputes into virtual confrontations.
Moreover, in 2020, US hospitals such as St. Lawrence Health System in New York and Sonoma Valley Hospital in California suffered repeated ransomware attacks between September and October, a period in which such attacks grew by an estimated 71% (MITtech). In these cases, conventional ransomware was amplified with AI, causing a general interruption of healthcare services and risking patients' lives. Incidentally, roughly 30% of all AI cyberattacks in 2022 are expected to involve training-data poisoning and model theft.
As seen above, illegal organizations and hostile governments can use AI's extrapolative power for nefarious purposes. One core problem, as I see it, is the shortage of national and international regulation of AI, partly because AI is still a "child" in its learning process. Not to mention that digital illiteracy greatly facilitates cyber-attacks through a lack of basic cyber awareness. So I believe this is the right time for governments first, and then intelligence services, to start regulating the limits of AI before it becomes an uncontrollable tool. Cyber-awareness programmes for minority groups and for individuals who struggle to access the internet would also be a great initiative for reducing these levels of cyber-attacks.