[AI Experience] Inspiring Trust in AI
In part three of the AI Experience series, we discuss the importance of transparency in the evolution of artificial intelligence.
Trust is the foundation of any good relationship. It is what allows us to feel confident that we can rely on someone, or something, without reservation. In our relationships with government institutions, companies, and other people, trust depends on knowing that any personal information they hold about us remains confidential. This is especially important in building public acceptance of AI.
Trust takes years to build, seconds to break.
Data breaches and the misuse of sensitive information in the name of business have eroded consumers’ trust in AI solutions as the nascent industry attempts to find its footing.* In a recent study published by Capgemini, 75 percent of respondents said they wanted more transparency from services powered by AI, and 76 percent felt there should be more regulation on how companies use AI. The challenge for AI developers, then, is how to earn and strengthen consumers’ trust, not only in the technology itself, but in their own intentions, motivations and policies.
According to Dr. Christina J. Colclough, who advocates for global workers’ rights in the digital age, AI developers have an obligation to “introduce or enable trust in these systems.” The way to do that, she says, “is by having demands for transparency, fairness, and auditability, so that humans don’t feel that they are controlled by this algorithmic system, which knows more about me than I do.”
Transparency in AI can be broken down into five key areas – explainability, communication, purpose, data privacy and interface – each of which we’ll take a closer look at below.
Explainability
With machines now making more and more decisions that affect human lives, it’s important to understand how those decisions are reached. For this to happen, AI developers must be more open about their systems so that consumers and regulators can better understand what’s going on under the hood. This openness lets users and other key stakeholders judge whether the recommendations or findings of AI-based systems come from processes that reliably prioritize fairness and accuracy.
“We need to build explainability into the process,” said Sri Shivananda, senior vice president and CTO of PayPal. “A customer should be able to see why something happened on a product or an experience and platforms need to be able to explain why any choice was made.”
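To make this concrete, here is a minimal sketch of one form explainability can take in practice. The scenario and feature names (a hypothetical loan-approval model) are our own illustration, not any company’s actual system: a small decision-tree classifier is trained on toy data and then reports which inputs most influenced its decisions, so a reviewer can check that it relies on legitimate factors.

```python
# A minimal, self-contained sketch of explainability: train a small
# decision tree on toy data and report which inputs drove its decisions.
# The loan-approval setting and feature names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["income", "credit_history_years", "existing_debt", "age"]

# Toy data standing in for a real, audited training set.
rng = np.random.default_rng(seed=42)
X = rng.random((200, len(feature_names)))
y = (X[:, 0] > X[:, 2]).astype(int)  # approve when income outweighs debt

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Rank features by how much the model relied on them, so a reviewer can
# confirm decisions rest on legitimate factors (income, debt) rather
# than a sensitive one (age).
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
```

A production system would use richer attribution methods, but the principle is the same: surface the basis for each decision in a form a customer or regulator can inspect.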
Communication
End users will place more trust in AI – and have more realistic expectations for the technology – when they have a better understanding of how it works, making clear, accessible communication essential. AI developers have a responsibility to portray accurately how their products do what they do, being upfront about any and all uses of personal data and addressing potential customer concerns in a straightforward, factual manner.
“You have to have an honest, authentic conversation with your consumers so that they know exactly what’s going on,” said Jeff Poggi, Co-CEO of the McIntosh Group, who sees simpler, more forthright communications as being critical to the wider adoption of AI.
Purpose
A clearly defined purpose is also essential, both for the AI system in question and for any personal information the user must provide to take advantage of its features. Only then can users be confident their data and habits aren’t being used for unapproved reasons, or to provide recommendations and functionality that serve commercial interests rather than their own.
A full disclosure of purpose helps consumers assess whether they’re comfortable sharing their information, allowing them to determine whether it is being used to enhance the capabilities of the AI or merely to gather data about their behaviors and preferences for marketing purposes. Developers must also enact effective oversight to ensure that the AI systems and services they build continue to operate on the premise of providing value to the customer.
Data Privacy
For over a decade now, consumers have provided their personal data in exchange for access to apps and online services, and more recently, to make use of AI solutions. The exchange has become so routine that many are probably not fully aware of the potential consequences of having their information stolen by bad actors. In fact, in a recent survey by PwC, 55 percent of respondents said they would continue to use or buy services from companies even after a breach.
“I think the majority of ordinary citizens and ordinary workers cannot even imagine the power and potential of these technologies,” said Dr. Colclough. “We don’t know what the threats to our privacy and human rights are.”
As AI systems become more prevalent in all areas of daily life, the opportunities for sensitive information to be compromised multiply. Companies will need to invest ever more resources in data security if they are to foster trust and ensure the future success of AI technology. Equally, investment is needed to educate consumers about the inherent risks of sharing their personal information.
Interface
Making user-AI interactions as intuitive and seamless as possible will help companies boost the uptake of their intelligent offerings. A system that reads and interprets one’s behaviors and predicts actions ahead of time can be viewed as more invasive than helpful, discouraging people from sharing accurate data or enabling recognition and learning features. Interfaces that can speak and comprehend conversational language, and recognize needs without being asked, will make the human-machine relationship more personal and natural, helping to bring about the next level of the AI experience.
# # #
* https://www.edelman.com/news-awards/trust-technology-continues-erode-2020