Pros and Cons of Artificial Intelligence in the Future


Pros and Cons of Artificial Intelligence - What are They?
Keywords: AI / Threat / Humanity
The many applications of Artificial Intelligence (AI) are increasingly becoming tangible and accessible to the general public. Almost everyone knows and/or uses AI-powered technologies such as digital personal assistants, autonomous cars, smart homes and cities, robotic surgery, online shopping and social media, to name a few.
However, as AI is growing at a very fast pace and constantly changing in an almost uncontrollable way, fears and concerns about future threats and dangers are growing in parallel. Prominent scientists and public figures involved in the AI sector, such as Elon Musk, Bill Gates and Stephen Hawking, have expressed their concerns bluntly.
One of the most threatening warnings came from Stephen Hawking. The great scientist once said that ‘the development of full artificial intelligence could spell the end of the human race’, which would be ‘superseded’.
This does not mean that AI will suddenly turn evil and take control over mankind, like in a post-apocalyptic movie. Nonetheless, AI is growing more sophisticated day by day, covering a wide range of sensitive sectors, from weapons production to healthcare and social interaction. The possible and current consequences of these developments, as well as the responsibility involved in designing AI technologies, are the real issues.
The present and future dangers of AI are currently a highly debated topic that brings together sociologists, anthropologists, scientists, and AI experts. At the moment, this debate revolves around two focal points:
· The first one highlights the risk of designing an AI-powered technology incorrectly.
· The second one concerns the possibility of programming a technology with the deliberate aim of causing harm.
Let’s look at some of the common risks that experts believe could materialize as AI advances.
1. Deepfakes, Fake News, and Political Security
We are a visual society, and the digital transformation played a key role in turning us into consumers and creators of images.
From a neurological point of view, the human brain acquires more information through images than through any other input. According to the Visual Teaching Alliance, 90% of information transmitted to the brain is visual, and visuals are processed 60,000 times faster than text.
In short, images, videos and visual contents are important and powerful tools in our society. What if there was a way to manipulate them?
Today, images and videos can be easily manipulated using AI and machine learning technologies. With such software, users can manipulate an image for commercial purposes or remove and replace objects and individuals in photos and videos.
At first, these tools were used purely for entertainment. In recent years, however, the political, social, and media implications of image manipulation have increasingly alarmed the scientific community, especially since deepfakes emerged.
Deepfakes rely on machine learning, usually deep learning techniques, to produce or alter video content.
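To make the mechanism concrete, here is a minimal sketch, in PyTorch, of the shared-encoder, dual-decoder autoencoder idea behind early face-swap deepfakes. All names and sizes are hypothetical, and the networks are deliberately simplified to linear layers (real systems use convolutional architectures): one encoder learns features common to both faces, and decoding with the other person's decoder produces the swap.

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder idea
# behind early face-swap deepfakes (an illustration, not any specific
# tool's code). FaceSwapAE and IMG_SIZE are hypothetical names.
import torch
import torch.nn as nn

IMG_SIZE = 64  # hypothetical working resolution

class FaceSwapAE(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # One encoder learns features shared by both faces...
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * IMG_SIZE * IMG_SIZE, latent_dim),
            nn.ReLU(),
        )
        # ...while each decoder learns to reconstruct one specific face.
        def make_decoder() -> nn.Module:
            return nn.Sequential(
                nn.Linear(latent_dim, 3 * IMG_SIZE * IMG_SIZE),
                nn.Sigmoid(),
                nn.Unflatten(1, (3, IMG_SIZE, IMG_SIZE)),
            )
        self.decoder_a = make_decoder()  # trained only on person A
        self.decoder_b = make_decoder()  # trained only on person B

    def swap_a_to_b(self, face_a: torch.Tensor) -> torch.Tensor:
        # Encoding a frame of person A and decoding it with B's decoder
        # renders B's appearance with A's pose and expression: the "swap".
        return self.decoder_b(self.encoder(face_a))
```

Trained frame by frame and stitched back into video, this is, in essence, how a face can be convincingly transplanted from one body to another.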
In 2017, altered videos were released showing celebrities’ faces superimposed on the bodies of women in pornographic movies. This event sparked a larger debate about the danger of deep fake news, which can be weaponized to ‘skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.’
Deep fake news could be even more dangerous if spread in the domain of politics and political discourse. Politicians increasingly rely on social media to spread their messages and create effective political propaganda. We have already seen the effects of deep fake news during electoral campaigns: it can influence votes and undermine citizens’ confidence in the trustworthiness of information, both potentially destructive for a democracy.
2. Autonomous Weapons
Drones have been a key part of warfare for years, but they have generally been remotely controlled by humans. Current technological developments and AI-powered systems, such as image recognition and autopilot software, are changing the game.
Autonomous weapon systems (AWS) can be defined as ‘weapons that process data from on-board sensors and algorithms to select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention’ (ICRC, 2021).
In other words, AWS operate by themselves, without meaningful human control, as they are simply triggered by an object or a person. Some AWS are already used for specific, essentially defensive tasks, such as air-defense systems on military bases and tanks that intercept incoming missiles or munitions. However, the potential for these weapons to become ‘the third revolution in warfare’ raises serious political and moral questions.
The main risk concerns the unpredictability of using AWS: the user has no control over the timing, the location, or the specific target of an attack. This raises concerns about risks to civilians, the possibility of conflict escalation, and the intensity and destructive potential of attacks.
Second, AWS create an emotional distance from the brutality of war. Remote warfare risks hiding war behind a false curtain of bloodlessness and unaccountability, by giving machines the power to choose who dies and who does not.
International organizations and human rights activists are already calling for fair regulation and, in some cases, a pre-emptive ban on the development of fully autonomous weapon systems. The reason is that these weapons would be incapable of ‘meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity’ (HRW, 2021).
3. Invasion of privacy, surveillance and social control
Intelligent technology can affect some of our fundamental rights, such as privacy, freedom of expression, and personal security.
One example of how AI is already affecting our privacy is the Intelligent Personal Assistant (IPA). Assistants such as Amazon's Echo, Google's Home and Apple's Siri are already widely used, and the trend is expected to accelerate. These assistants learn the interests and behaviors of their users, and predict future ones, by storing their data, collecting personal information, and constantly listening in the background.
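The "always listening" behavior follows a simple pattern: audio is buffered continuously, but the device only acts, and ships data upstream, once it detects a wake word. The sketch below simulates that loop; all names are hypothetical, and real assistants run an on-device neural wake-word model rather than string matching over transcripts.

```python
# Simulation of the "always listening" pattern behind voice assistants.
# Hypothetical names throughout; a real device consumes a microphone
# stream, not a list of strings.
WAKE_WORD = "hey assistant"

def handle(utterance: str, profile: dict[str, int]) -> None:
    # Every handled request also enriches the user's interest profile --
    # the learning/prediction behavior described above.
    for word in utterance.split():
        profile[word] = profile.get(word, 0) + 1

def listen_loop(transcript_stream, profile: dict[str, int]) -> None:
    awake = False
    for chunk in transcript_stream:   # stands in for the microphone feed
        if WAKE_WORD in chunk.lower():
            awake = True              # start acting on what follows
            continue
        if awake:
            handle(chunk, profile)
            awake = False             # return to passive listening

profile: dict[str, int] = {}
listen_loop(["background chatter", "hey assistant", "order running shoes"], profile)
print(profile)  # {'order': 1, 'running': 1, 'shoes': 1}
```

Even in this toy form, the privacy tension is visible: the loop must hear everything in order to decide what counts as a command.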
Cameras are nearly everywhere, and facial recognition technologies are already sophisticated enough to profile individuals in real time. Moreover, governments, businesses, and other organizations have access to large pools of data collected through social media and the web. Digital records can be used to predict future behaviors, political preferences, and religious beliefs, and in some cases to monitor and flag potential troublemakers.
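As a rough illustration of how such profiling works under the hood, the sketch below shows the embedding-matching step at the heart of facial recognition: each face is reduced to a vector, and identities are matched by comparing vectors. The threshold and function names are hypothetical, not any vendor's API.

```python
# Minimal sketch of embedding matching in facial recognition: a detected
# face is encoded as a vector and compared to watchlist vectors by
# cosine similarity. MATCH_THRESHOLD and identify() are hypothetical.
import numpy as np

MATCH_THRESHOLD = 0.6  # hypothetical; real systems tune this carefully

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, watchlist: dict[str, np.ndarray]) -> str | None:
    """Return the watchlist identity whose stored embedding best matches
    the probe face, or None if no match clears the threshold."""
    best_name, best_score = None, MATCH_THRESHOLD
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy usage with random vectors standing in for real face embeddings:
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
print(identify(watchlist["person_a"] + 0.01 * rng.normal(size=128), watchlist))
```

Run against a continuous camera feed, this same comparison turns every passer-by into a query against a database, which is what makes real-time surveillance at scale feasible.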
4. Misunderstanding between our goals and the machine’s
During a TED talk, AI researcher Janelle Shane presented new ice cream flavors an AI came up with. The algorithm, fed with more than 1,600 existing ice cream flavors, produced names such as peanut butter slime, strawberry cream disease and pumpkin trash.
With the ice cream example, Janelle Shane introduced one of the dangers you can encounter while working with AI: it will do exactly what you ask it to do. If the designer is not clear about the goal, or accidentally chooses the wrong problem to solve, AI can find destructive solutions.
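A toy version of Shane's experiment makes the point tangible. The character-level Markov model below (a drastic simplification of the neural network she used, with a deliberately tiny corpus) optimizes exactly what it was asked to optimize, namely "produce letter sequences that look like flavor names", and nothing else, which is why the results can be fluent and absurd at the same time.

```python
# Toy character-level Markov model in the spirit of the flavor-name
# experiment: it only learns which letters tend to follow which, so it
# satisfies the objective we specified ("looks like a flavor name"),
# not the one we meant ("tastes good"). The corpus is illustrative.
import random
from collections import defaultdict

corpus = ["vanilla", "strawberry", "peanut butter", "pumpkin spice", "chocolate"]

# Count which character follows each character across the corpus.
transitions: dict[str, list[str]] = defaultdict(list)
for name in corpus:
    padded = "^" + name + "$"          # ^ marks start, $ marks end
    for current, nxt in zip(padded, padded[1:]):
        transitions[current].append(nxt)

def sample_flavor(max_len: int = 20) -> str:
    out, ch = [], "^"
    while len(out) < max_len:
        ch = random.choice(transitions[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

print(sample_flavor())  # e.g. "peanutterry": statistically plausible, semantically absurd
```

The model did its job perfectly; the failure lives entirely in the gap between the objective we wrote down and the outcome we actually wanted.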
For example, if an AI autopilot is not designed to recognize other vehicles and pedestrians, or to respect the rules of the road, it will still carry out your command, taking you from point A to point B, but leaving accidents and potentially fatal mistakes in its wake.
5. Socio-economic inequalities
In the multifaceted debate on AI and economic inequality, the most common fear is that robots and algorithms will progressively replace humans in performing certain tasks, causing unemployment and thus socio-economic asymmetries.
Yet there is another, more insidious side effect of AI: the process of labor-market exclusion and (re-)inclusion.
The digital transformation will generate enormous profits for companies that incorporate AI technologies into their production processes, profits directly linked to the gains AI offers in efficiency, productivity, and production time. As a result, a few hands (top management and stakeholders) will accumulate enormous wealth, while unskilled workers are pushed out of work.
This scenario foresees a class-based division between a small enclave that owns the algorithms, the highly skilled workers who develop and design them, and everyone else. Due to automation, demand for low- and medium-skill jobs is declining, and unemployment is rising along with the income gap between low/middle-skill and high-skill workers.
The expected gap is even more alarming when compared with the labor markets of developing countries. The digital divide, together with economic, educational, and infrastructure barriers, contributes to creating digital biases that risk deepening the social inequalities between the economic giants and the developing countries. The ‘Global South’, in fact, seems to play a marginal role in the digital revolution, being a beneficiary or a subject rather than an active player.