Privacy invasion and reduction of autonomy: How can tech regulations prioritise citizens?
Digital illiteracy, discriminatory algorithms and AI-powered attacks share a common root: the lack of a solid regulatory system for technology. In many cases, personal data is processed in opaque ways to benefit financial entities and reinforce governmental surveillance, which can reduce human autonomy and leak PII (Personally Identifiable Information). Digital technologies are becoming faster, more complex and harder to recognise, able to examine someone's life with just a scan. Moreover, AI is present everywhere, in monitoring, surveillance and security systems that collect data and indirectly influence people's privacy.
The Information Commissioner's Office (ICO) says:
This may make it harder to understand when and how individual rights apply to this data, and more challenging to implement effective mechanisms for individuals to exercise those rights.
The International Monetary Fund (IMF) states that the digitalization of everything is one of its global goals. However, the IMF also expresses concern about regulation during this transition: as the world becomes financially, socially and politically digitalized, technical glitches and privacy erosion will occur, because AI systems systematise personal details such as names, locations and dates of birth, and increasingly biometrics and fingerprints. Preventing unauthorized access by cybercriminals, or misuse by the programmers themselves, is therefore a challenge, and legal and ethical considerations must be integrated into digital services that still face a traditional regulatory structure and AI anonymisation techniques that are not entirely reliable at protecting data. Consequently, the current regulatory system cannot prevent malicious activities, mitigate digital inclusion disparities or punish individuals who mismanage data.
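To make the anonymisation point concrete, here is a minimal sketch of pseudonymisation, one common protective technique: direct identifiers are replaced with keyed hashes so records stay linkable without exposing the raw PII. The field names and key handling are illustrative assumptions, not a prescribed standard.

```python
import hashlib
import hmac
import os

# Secret key: illustrative only. In practice it must be stored
# separately from the data, e.g. in a key management service.
SECRET_KEY = os.urandom(32)

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    Records stay linkable (same input -> same token) but the raw
    PII is no longer stored. This is pseudonymisation, not full
    anonymisation: whoever holds the key can rebuild the mapping.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record with common PII fields.
record = {"name": "Ana Silva", "dob": "1990-04-12", "city": "Lisbon"}

safe_record = {
    "name_token": pseudonymise(record["name"]),
    "birth_year": record["dob"][:4],  # generalise date of birth to year only
    "city": record["city"],
}
print(safe_record)
```

Even with such techniques, combinations of quasi-identifiers (birth year plus city, for example) can still re-identify people, which is exactly why the IMF's doubt about the accuracy of anonymisation is well founded.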
Financially speaking, Forbes reports that in 2021 leaks of PII (Personally Identifiable Information) affected a large share of online users, costing an average of $3.86 million per breach. Gartner estimates privacy technology investment at approximately $8 billion in 2022, and predicts that by 2023 around 40% of privacy technology will rely on AI.
From a legal perspective, more than 60 international legislators are drafting "updated" privacy and data protection laws to deal with AI's extrapolative mechanisms and to guarantee that PII is not used without a person's consent (Gartner). Personally, it is unsettling to think that my health details, biometrics (facial recognition and fingerprints) and online activities could be used without my permission for financial or governmental purposes. It seems that AI-powered tools can predict and indirectly steer my behaviour, reducing my autonomy of choice.
Additionally, Chinese authorities in Beijing are working to tighten AI regulation around big data and cloud computing (remote storage of information) in order to prevent emergencies and increase security. The goal is to prepare China to be the first technocratic country in the world, with cities connected via 5G and managed by AI across sectors including the digital economy (the total end of cash), urban governance (social credit) and the food supply (technologically safe production) (Yew Lun Tian).
The ICO published guidance, a "pseudo" framework focused on data protection principles, to help corporations reduce the privacy implications of Artificial Intelligence handling personal data. However, the duty to process data transparently applies not only to companies but also to governmental institutions, since confidential data is being marketed and delivered to governments without permission, feeding AI systems with massive amounts of data used to predict citizens' behaviour. Essentially, it is like living in a reality show where you are monitored 24 hours a day. Therefore, data protection officers (DPOs), tech policymakers and machine learning experts should work together to develop a solid regulatory system that prioritises basic human rights.
Accordingly, a Data Protection Impact Assessment (DPIA) is a relevant step to ensure that private information is handled in line with the legislation, to highlight potential risks in data management, and to legally require organizations and firms to deal transparently and ethically with AI systems that process confidential data; a rough sketch of what such an assessment might record is shown below.
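As an illustration only, and not the ICO's actual DPIA template, an assessment can be thought of as a structured record of each processing activity, its risks, and the planned mitigations. The field names and the escalation rule below are assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class DpiaEntry:
    """One row of a hypothetical DPIA risk register (illustrative fields)."""
    processing_activity: str    # what the AI system does with the data
    data_categories: list[str]  # e.g. biometrics, location, health
    lawful_basis: str           # e.g. consent, legal obligation
    risk: str                   # harm to individuals if things go wrong
    likelihood: str             # low / medium / high
    severity: str               # low / medium / high
    mitigations: list[str] = field(default_factory=list)

    def needs_escalation(self) -> bool:
        # Illustrative rule: high residual risk is escalated to the
        # regulator before processing starts.
        return self.likelihood == "high" and self.severity == "high"

entry = DpiaEntry(
    processing_activity="Facial recognition for building access",
    data_categories=["biometrics"],
    lawful_basis="consent",
    risk="Re-identification and function creep beyond access control",
    likelihood="high",
    severity="high",
    mitigations=["on-device matching", "strict retention limits"],
)
print(entry.needs_escalation())  # True -> consult the regulator first
```

The value of writing risks down this way is that transparency stops being a slogan: each processing activity has a named lawful basis, a named risk and a named mitigation that an auditor, or a citizen, can inspect.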
By reflecting on the above perspectives, we can notice that AI regulation is not just laws but individual rights. In other words, human beings are managing a tool (AI) that can promote convenience but at the same time can reduce people's autonomy and maximize invasion of privacy. This way, AI regulation needs to be ethically\transparently implemented to mitigate digital discrimination, reduce digital illiteracy, promote inclusion of all communities, and prioritise human autonomy.