May 14, 2021

[AI Experience] Building Ethics Into Artificial Intelligence

Six icons introducing the key themes of the AIX Exchange report with an image of an AI robot displayed in the background.

In the second episode of the AI Experience series, we delve into the role of ethics in the evolution of artificial intelligence.

The benefits of AI notwithstanding, concerns persist about the ethical implications of a technology that could potentially know more about its users than they do about themselves. And for as long as there have been intelligent machines, there has been skepticism and distrust. While some of this wariness can be traced back to the way artificial intelligence has been portrayed in science fiction books and movies, it’s no exaggeration to say that suspicion toward such technology is widespread. In fact, the late Stephen Hawking once said, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate.”

The acronym AI encircled by the stars of the European Union flag. Photo Credit: European Commission

To prevent such scenarios from ever coming true, the European Union recently proposed regulations to ensure that “good ethics” are built into all AI technologies. Regarding the new rules, Margrethe Vestager, executive vice president of the European Commission for A Europe Fit for the Digital Age, said, “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”1

An illustration of a man being influenced by AI and lectures to represent the theme, 'AI ethics'

Before we delve into ethical AI, it is important to understand how AI learns – and what (and whom) it is learning from – in the first place. Because AI gains insight from data collected from existing societal structures, based on parameters set by human beings – such as researchers and developers – and the companies they work for, it is inevitable that the technology will reflect at least some of the biases, tendencies and preconceptions that exist within those separate but intimately related elements.
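To see how this plays out in practice, consider a minimal, purely illustrative sketch (not from the report): a model trained on historically biased decisions will faithfully reproduce that bias in its own predictions. The hiring scenario, the synthetic data and the scikit-learn calls below are all assumptions made for the sake of the example.

```python
# Hypothetical illustration: a model trained on biased historical
# decisions learns and reproduces that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a "skill" score and a demographic group flag.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # group 0 or group 1

# Historical hiring labels: driven by skill, but group 1 was
# systematically disadvantaged by past human decision makers.
past_hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on the biased record, then score two equally skilled
# applicants who differ only in group membership.
model = LogisticRegression().fit(np.column_stack([skill, group]), past_hired)
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
# The group-1 applicant receives a lower score: the historical bias
# has been absorbed into the model.
```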

A photo of Yoshua Bengio, the 2019 Turing Award-winning AI researcher and founder of Mila

“Human centric means to take into consideration the human aspect of how the tools are going to be used, for what purpose, and what’s the consequence for humans who use the tool,” said Yoshua Bengio, 2019 Turing Award-winning AI researcher and founder of Mila (originally Montreal Institute for Learning Algorithms). “It’s important because those tools are becoming more and more powerful, and the more powerful the tool is, the more we need to be careful about how to use it.”

So now let’s take a look at five areas that are integral to the ethical development and use of AI going forward: inclusivity, values, governance, data privacy and purpose.

An illustration of a robot typing on a laptop with a hologram of a human brain and data hovering above

Inclusivity

Acknowledging and factoring in diversity is central to producing AI systems that meet the needs of a diverse global population. However, a study published by the AI Now Institute in 2019 concluded that there is an alarming lack of diversity in the AI field and that this is perpetuating all kinds of gender, race and religious biases.2

In response to this state of affairs, groups such as the African Institute for Mathematical Sciences – which launched courses to train young Africans in machine learning and its applications in order to diversify the talent pipeline – are calling for initiatives to achieve a more equitable future for AI. But more consistent and far-reaching efforts are needed if there is to be better cultural and gender representation in the lab and in the industry.

A photo of a gavel with the scales of justice in the background

“If you don’t have diversity among the people who are doing the designing and the people doing the testing, the people who are involved in the process, then you’re all but guaranteed to have a narrow solution,” said Charles Isbell, dean of computing at Georgia Tech and a strong advocate for increasing access to and diversity in higher education.

Values

To a great extent, a nation’s values determine the philosophy behind its AI development. Technological decoupling between countries is often driven by ethical considerations rooted in fundamental differences in ideologies and values. For example, how far governments can intrude upon people’s private lives varies widely from country to country.

To help bridge these differences, private entities should consider universal human values when designing AI systems and take responsibility for products that strongly impact society.

A photo of Dr. Yuko Harayama, executive director of International Affairs at RIKEN

“That’s why it’s not just about maximizing their profits… but taking the responsibility, in the way that the action will have an impact on society,” explains Dr. Yuko Harayama of the Japanese scientific research institute RIKEN, a former executive member of Japan’s Council for Science, Technology and Innovation, Cabinet Office. “It’s up to us because we are all human beings and that means you are responsible for your action, including your action within your company.”

Governance

According to a list of 20 AI-enabled crimes put together by researchers at University College London, the biggest threat to civil order comes not from the technology itself, but from humans using it to their own illegal ends. Using driverless vehicles as weapons was among the possible crimes presented in the study.

Historically, it has been the role of governments to ensure public safety through regulation and oversight. But with AI, lawmakers are faced with the difficulty of legislating a technology that is constantly evolving and challenging to comprehend. Rather than leaving this matter exclusively in the hands of government, a broad, interdisciplinary effort is required so that any legislation encompasses a wider range of viewpoints and is built on a deeper fundamental understanding of AI.

Someone using a laptop with a holographic image representing data privacy

Data Privacy

AI systems for consumers have relatively few safeguards compared to those designed for industrial or military use, making them more susceptible to personal data breaches or misuse. This is why fostering trust is so important when it comes to human-centric AI systems – trust that users’ data is safe and protected, and that it isn’t being used for any purpose without the owner’s consent.

A photo of Alexandra Zafiroglu, deputy director at the 3A Institute

“I think the biggest things we need to consider is what data is being collected, who is collecting it, where it is staying, and how it is being used and reused,” said Alex Zafiroglu, deputy director at the 3A Institute of The Australian National University, emphasizing the need for transparency in the use and collection of data for consumer AI solutions.

Purpose

To deliver relevant services and maximum convenience, AI-based systems require users to share certain pieces of personal data. The more personalized the experience provided, the more private data the user has to share.

A line of red head silhouettes with one of them blue and displaying a keyhole

It is critical, then, that the purpose for which this data is used is clearly defined at the outset and strictly adhered to by the service providers and manufacturers concerned. If AI employs collected data only for its stated purpose, end users will feel less concerned about sharing their personal information. This, in turn, would enable intelligent products and services to deliver more value and the companies that produce them to better fine-tune their offerings.

A computer chip with the image of a human brain inscribed

As its presence and uses continue to expand in all areas of life, artificial intelligence presents human society with the potential for incredible advancement. While significant risks exist, these can be effectively mitigated by placing ethics at the center of all AI development.

Visit www.AIXexchange.com to discover the perspectives of AI experts and leaders in various fields such as design, anthropology, policy, and consumer and employee advocacy. The LG ThinQ platform (thinq.developer.lge.com) is also available to external partners and developers to accelerate the development of AI technology.

# # #

1 https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682
2 https://ainowinstitute.org/AI_Now_2019_Report.html

 
