
Can Artificial Intelligence Cause Ethical Issues in the Future?


Subul | October 10, 2021

In recent decades, artificial intelligence (AI) and machine learning have experienced rapid development, becoming increasingly essential to a wide range of industries. Digital marketing, healthcare, banking, retail, manufacturing, administration, online and in-store shopping, and the automotive industry are only a few of the fields where AI is showing its potential.

At the same time, more consumers are becoming aware of the power of these technologies and now consider them essential to daily life. Nearly everyone benefits from online platforms such as Amazon, Google, and Facebook, which are harnessing the power of AI to expand their services far beyond mere communication.

As the global digital transformation and AI technologies turn into reality what was once visible only in sci-fi thrillers, questions and worries concerning ethical issues are rising in parallel.

The effects of AI are certainly appealing. The complex systems on which AI technology relies bring benefits to almost every field of application by improving efficiency, cutting costs, and accelerating research and development. As a result, more industries are adopting AI in order to survive the fourth industrial revolution.

According to the International Data Corporation (IDC), global spending on AI is forecast to accelerate over the next few years, growing from $50.1 billion in 2020 to more than $110 billion in 2024, as part of the global effort to remain competitive in the digital economy.

However, the question remains: at what cost? Will AI cause more societal harm than economic good? The concerns grow larger around sensitive issues such as health and medicine, employment, privacy and data gathering, criminal justice, and intellectual property. The opacity of these systems and the lack of proper legislation have encouraged a lively debate about the ethical, regulatory, and policy implications of AI and its developments.

Here are five main ethical dilemmas raised by artificial intelligence technologies.


 

Creativity, Copyright and AI

Intellectual creation and inventiveness have been purely human activities for centuries, and as such have been protected and regulated by specific copyright and patent laws. With the advent of AI and the use of technology as a tool of expression and content creation, the world of human intellectual property has been challenged.

AI systems do not have a personality and cannot be considered persons or individuals, which places them outside the traditional concept of authorship. This raises an important question: is it right to grant legal personality to AI technologies?

The correct legal framework for protecting AI-generated inventions is still in question. However, international organizations and other actors are increasingly aware of the need to protect such inventions in order to encourage research and technological progress.



 

AI systems deliver biased results

AI-powered technologies are built on specific features, metrics, variables, indicators, and analytics structures, all decided by a developer. This means that when designers feed data into a system, they can unintentionally transfer their preconceptions and biases. This is particularly likely in systems that rely on machine learning and on data reflecting only certain demographic groups.

Several cases of discrimination and societal bias perpetuated through AI have already occurred. For instance, COMPAS, a machine-learning-based software used in the US to predict a criminal's likelihood of reoffending, was found to be strongly biased against Black Americans. Other examples are AI-based recruitment systems, some of which favor men over women.
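To make the mechanism concrete, here is a minimal, purely illustrative Python sketch (the data, groups, and thresholds are all invented for the example): a "model" that faithfully fits biased historical hiring decisions ends up reproducing the discrimination, even though no one explicitly programmed it to discriminate.

```python
import random

random.seed(0)

# Toy historical hiring data: each record is (score, group, hired).
# Past decisions favored group "A": at the same score, "A" candidates
# were hired more readily, so the bias is baked into the labels.
def make_history(n=1000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        score = random.uniform(0, 100)
        threshold = 50 if group == "A" else 70  # biased past decisions
        data.append((score, group, score > threshold))
    return data

# A naive "model": learn one hiring threshold per group from history.
# By fitting the past faithfully, it learns the discrimination too.
def fit_thresholds(history):
    thresholds = {}
    for g in ("A", "B"):
        hired = [s for s, grp, h in history if grp == g and h]
        thresholds[g] = min(hired) if hired else 100
    return thresholds

history = make_history()
model = fit_thresholds(history)

# Two equally qualified candidates receive different outcomes,
# depending only on which group they belong to.
candidate_score = 60
print({g: candidate_score > model[g] for g in ("A", "B")})
```

The point of the sketch is that the model is never told to treat the groups differently; it simply minimizes disagreement with past decisions, and the past decisions carry the bias.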

Moreover, the lack of transparency and, in some cases, the outright opacity of machine learning models make it harder to recognize and address questions of bias and discrimination. Machine learning can, in fact, create patterns and models that are not understandable even to their designers, making it more difficult to predict and detect any discriminatory practice related to the technology.


 

Surveillance, privacy, and human rights 

Intelligent technology can impact some of our fundamental rights like privacy, freedom of expression and personal security. 

One example of how AI is already affecting our privacy is the Intelligent Personal Assistant (IPA). Devices such as Amazon's Echo, Google Home, and Apple's Siri are already in widespread use, and the trend is expected to accelerate. Forecasts suggest that by 2024 the number of digital voice assistants will reach 8.4 billion units, more than the world's population, while in the US alone more than 132 million people were expected to use a voice assistant device by 2021.

These assistants learn users' interests and behaviors, and predict future ones, by storing their data, collecting personal information, and always listening in the background.

Another aspect of how AI affects privacy concerns Big Data. Governments, businesses, and enterprises have access to large pools of data collected through social media or via the web. Digital records can be used to predict future behaviors, political preferences, religious beliefs, and in some cases to monitor and predict potential troublemakers. 

Unemployment and new (un)balances 

Discussions and worries concerning the displacement of workers by technology are nothing new. Every industrial and technological revolution over the centuries has represented a substantial shift in the labor market.

Nowadays, there is widespread concern that artificial intelligence and related technologies will create massive unemployment at a global level. AI is, in fact, spreading into almost every sector, progressively replacing humans in a growing range of jobs.

Economists' opinion is not unanimous but can be summarized in two main perspectives. The first argues that robots and computers will replace significant numbers of both blue- and white-collar workers, triggering spillover effects such as sharp rises in unemployment and income inequality, along with social disorder.

The second perspective sees a balance between the number of jobs AI will displace and the number it will create. In other words, some economists believe that robotics and digital agents will take over some of the jobs now performed by humans but will, at the same time, create new ones.

Nevertheless, it is clear that some sectors will be affected more than others, and consequently some categories of workers, including but not limited to women, young people entering the labor market, those without high-skill training, and workers in less developed countries, will be more at risk than others.

From this perspective, AI can create inequalities of resources, opportunity, and power in the business world, as well as perpetuate historic injustices.


 

Robot ethics

The issues concerning ethics and robots lead to a broader debate on accountability, liability, and legislation. To put it simply: if a robot acts, will it be responsible for its actions, or will the responsibility fall on its developer?

These questions grow more urgent and delicate when applied to specific areas of robotics, especially those where robots interact with humans, such as elder care, medical robotics, military robotics, and entertainment robots. In these cases, security schemes need to be rethought, along with the long-term psychological and emotional effects of forming a relationship with a robot.


 

To Conclude

Questions and dilemmas about how to act in the face of the ethical issues raised by AI remain open. However, considering the large impact and growing presence of AI in our lives, a global effort to set boundaries and provide proper legislation is under way. As Brad Smith, Microsoft's President, said in 2018:

‘[…] technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.'


 
