Applications of Artificial Intelligence (AI) are becoming increasingly tangible and accessible to the general public.
From digital personal assistants and autonomous cars to smart homes and robotic surgery, AI-powered technologies have permeated various aspects of our lives.
However, as AI advances at an exponential pace, concerns about potential threats and dangers loom large. Renowned scientists and influential figures in the AI sector, such as Elon Musk, Bill Gates, and the late Stephen Hawking, have openly expressed their apprehensions.
The Future of AI and Real Concerns:
Stephen Hawking's warning that the development of full artificial intelligence could spell the end of the human race has captured widespread attention. However, this does not imply the sudden emergence of malevolent AI reminiscent of a post-apocalyptic movie. Rather, the crux of the matter lies in the rapid growth and sophistication of AI, which now encompasses sensitive sectors ranging from weapons production to healthcare and social interactions.
Responsible design and comprehension of the potential consequences are the focal points of this evolving debate, which draws together sociologists, anthropologists, scientists, and AI experts.
Exploring Risks Associated with AI Advancements:
1. Deepfakes, Fake News, and Political Security:
We are a visual society, and the digital transformation has played a key role in turning us into consumers and creators of images.
From a neurological point of view, the human brain acquires more information through images than through any other input.
According to the Visual Teaching Alliance, 90% of the information transmitted to the brain is visual, and visuals are processed 60,000 times faster than text.
In short, images, videos, and other visual content are powerful tools in our society. What if there were a way to manipulate them?
Today, images and videos can easily be manipulated using AI and machine learning technologies. With this software, users can alter an image for commercial purposes or remove and replace objects and people in photos and videos.
At first, these tools were used purely for entertainment. In recent years, however, the political, social, and media implications of image manipulation have increasingly alarmed the scientific community, especially since deepfakes emerged.
Deepfakes are based on machine learning, usually deep learning techniques, used to produce or alter video and images; a common way to explain the approach is sketched below.
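The following is a minimal toy sketch in PyTorch of the shared-encoder, two-decoder autoencoder idea often used to describe face-swap deepfakes. The layer sizes, the 64x64 input, and the untrained "swap" at the end are illustrative assumptions, not a production architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                 # 64x64 RGB face crop -> vector
            nn.Linear(64 * 64 * 3, 512),  # compress into a shared latent
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(512, 64 * 64 * 3),  # latent -> pixels
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()    # shared: learns identity-agnostic features (pose, expression)
decoder_a = Decoder()  # would be trained to reconstruct person A's faces
decoder_b = Decoder()  # would be trained to reconstruct person B's faces

# After training each (encoder, decoder) pair on its own face set,
# the "swap" is simply routing a face of A through B's decoder:
face_a = torch.rand(1, 3, 64, 64)    # stand-in for a real aligned face crop
fake_b = decoder_b(encoder(face_a))  # A's expression rendered with B's appearance
print(fake_b.shape)                  # torch.Size([1, 3, 64, 64])
```

The design point is that the shared encoder captures what the two faces have in common (pose, lighting, expression), while each decoder learns one person's appearance, so decoding one person's features with the other's decoder produces the swap.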
In 2017, manipulated videos appeared that superimposed celebrities' faces onto the bodies of women in pornographic movies. This event sparked a larger debate about the danger of deepfakes, which can be weaponized to 'skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.'
Deepfakes could be even more dangerous when spread in the domain of politics and political discourse. Politicians increasingly rely on social media to disseminate their messages and create effective political propaganda. We have already seen the effects of deepfakes during electoral campaigns: they can influence votes and erode citizens' confidence in the trustworthiness of information, effects that are potentially destructive for a democracy.
2. Autonomous Weapons:
Drones have been a key part of warfare for years, but they have generally been remotely controlled by humans. However, current technological developments and AI-powered systems, such as image recognition and autopilot software, are changing the game.
Autonomous weapon systems (AWS) can be defined as 'weapons that process data from on-board sensors and algorithms to select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention' (ICRC, 2021).
In other words, AWS work by themselves, without meaningful human control, as they are simply triggered by an object or a person. Some AWS are already used for specific, essentially defensive tasks, such as the air-defense systems mounted on military bases and tanks to intercept incoming missiles or munitions. However, the potential for these weapons to become 'the third revolution in warfare' raises serious political and moral questions.
The main risk concerns the unpredictability of AWS: the user has no control over the timing, the location, or the specific target of the attack. This raises concerns about risks to civilians, the possibility of conflict escalation, and the intensity and destructive potential of an attack.
Second, AWS contribute to creating an emotional distance from the brutality of war. Remote warfare risks hiding war behind a false curtain of bloodlessness and irresponsibility by giving machines the power to choose who dies and who does not.
International organizations and human rights activists are already calling for fair regulation and, in some cases, a pre-emptive ban on the development of fully autonomous weapon systems. The reason is that these weapons would not be capable of 'meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity.' (HRW, 2021)
3. Invasion of Privacy, Surveillance, and Social Control:
Intelligent technology can affect some of our fundamental rights, such as privacy, freedom of expression, and personal security.
One example of how AI already affects our privacy is the Intelligent Personal Assistant (IPA). Devices and assistants such as Amazon's Echo, Google's Home, and Apple's Siri are already in wide use, and the trend is expected to accelerate. These assistants learn users' interests and behaviors, and predict future ones, by storing their data, collecting personal information, and always listening in the background.
Cameras are nearly everywhere, and facial recognition technologies are already sophisticated enough to perform real-time profiling of individuals; the sketch after this paragraph shows how accessible the first stage of such a pipeline has become. In addition, governments, businesses, and enterprises have access to large pools of data collected through social media or the web. Digital records can be used to predict future behaviors, political preferences, and religious beliefs, and in some cases to monitor and predict potential troublemakers.
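As a minimal illustration of how low the barrier has become, the sketch below runs real-time face detection on a webcam using OpenCV's bundled Haar cascade. Note the hedge: detection only locates faces in the frame; identifying whose faces they are would require an additional recognition model, so this is just the first stage of a profiling pipeline.

```python
import cv2

# OpenCV ships pretrained Haar cascades; this one detects frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Returns a bounding box for every face found in the frame.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Everything here uses stock opencv-python; no special hardware or proprietary service is required, which is part of why surveillance applications scale so easily.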
4. Misalignment Between Human Goals and AI's Actions:
During a TED talk, AI researcher Janelle Shane presented new ice cream flavors that an AI came up with. The algorithm, fed more than 1,600 existing ice cream flavor names, proposed Peanut Butter Slime, Strawberry Cream Disease, and Pumpkin Trash as its solutions.
With the ice cream example, Janelle Shane introduced one of the dangers of working with AI: it will do exactly what you ask it to do. If the designer is not clear about the goal, or accidentally chooses the wrong problem to solve, the AI can find destructive solutions.
For example, if an AI autopilot is not designed to recognize other vehicles and pedestrians on the highway, or to respect the rules of the road, it will carry out your command, taking you from point A to point B, but leaving accidents and potentially fatal mistakes in its wake. The toy example below makes the same point in miniature.
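Here is a tiny, self-contained Python illustration of that failure mode. The routes, travel times, and the "pedestrian zone" flag are invented values; the point is only that an optimizer faithfully minimizes whatever objective it is handed, not what the designer meant.

```python
# Two hypothetical routes from A to B; all numbers are made up.
routes = {
    "through_crosswalk": {"minutes": 8,  "crosses_pedestrian_zone": True},
    "around_the_block":  {"minutes": 11, "crosses_pedestrian_zone": False},
}

def naive_objective(route):
    """What was literally asked for: minimize travel time."""
    return route["minutes"]

def intended_objective(route):
    """What was actually meant: minimize time without endangering anyone."""
    penalty = 1_000 if route["crosses_pedestrian_zone"] else 0
    return route["minutes"] + penalty

print(min(routes, key=lambda r: naive_objective(routes[r])))     # through_crosswalk
print(min(routes, key=lambda r: intended_objective(routes[r])))  # around_the_block
```

The naive objective happily sends the vehicle straight through the crosswalk because nothing in it says not to; only when the designer's real constraint is encoded as a penalty does the optimizer pick the safe route. Real-world goals are far harder to specify exhaustively, which is exactly the misalignment risk described above.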
5. Socio-Economic Inequalities:
In the multifaceted debate on AI and economic inequality, the most common fear is that robots and algorithms will progressively replace humans in performing certain tasks, causing unemployment and thus socio-economic asymmetries.
Yet, there is another more insidious side-effect of AI: the process of labor market exclusion and (re-)inclusion.
The digital transformation will generate enormous profits for the companies that incorporate AI technologies into their production processes. This profit will be directly linked to the overall benefits AI offers in terms of efficiency, productivity, and production time. As a result, a few hands (top management and stakeholders) will capture enormous wealth, while unskilled workers will be left out of work.
This scenario implies a class-based division between a small enclave that owns the algorithms, the highly skilled workers who design and develop them, and everyone else. Due to automation, demand for low- and medium-skill jobs is declining, and the unemployment rate is rising along with the income gap between low/middle-skill and high-skill workers.
The expected gap is even more alarming when compared with developing countries' labor markets. The digital divide, together with economic, educational, and infrastructural barriers, contributes to a digital bias that risks deepening the inequalities between the economic giants and the developing countries. The 'Global South', in fact, seems to play a marginal role in the digital revolution, acting as beneficiary or subject rather than as an active player.
Conclusion
As AI continues to advance, it is vital to remain cognizant of the challenges and concerns it presents.
Striving for responsible development, ethical considerations, and comprehensive regulation will enable us to harness the potential benefits of AI while addressing the associated risks.
By understanding the evolving landscape and actively shaping the future of AI, we can ensure its alignment with human well-being and societal progress.
FAQ
1. What are deepfakes, and why are they a concern in the age of AI?
Deepfakes are videos or images manipulated using AI technologies. They raise concerns because they can be used to spread misinformation, manipulate beliefs, and potentially disrupt democratic processes by influencing public opinion.
2. How do autonomous weapons powered by AI raise ethical and political questions?
Autonomous Weapon Systems (AWS) utilize AI algorithms to select and attack targets without human intervention. The lack of meaningful human control raises concerns about unpredictable target selection, civilian risks, and the moral implications of warfare being carried out by machines.
3. How does AI impact privacy and personal security?
AI technologies, such as Intelligent Personal Assistants and facial recognition systems, can infringe upon privacy rights. These technologies collect and analyze personal data, posing risks to individuals' privacy, freedom of expression, and personal security.
4. What is the potential risk of misalignment between human goals and AI actions?
AI systems rely on clear instructions from human designers. If the goals are not accurately specified or unintentional errors occur, AI may carry out actions that lead to unintended and potentially harmful consequences. This highlights the importance of precise goal setting and proper programming.
5. How does AI contribute to socio-economic inequalities?
AI's impact on the labor market can result in job displacement and the concentration of wealth among a few stakeholders. The decline in demand for low and medium-skill jobs can lead to socio-economic disparities, widening the income gap between highly skilled workers and those in less skilled positions.