
Trust and Risk Resilience in the Age of AI

Jennifer Zeifman


Artificial intelligence (AI) is exciting: it pushes boundaries and changes the game. For organizations that use it, AI also introduces new dimensions of reputational risk.

Reputational risk isn’t new. In healthcare communications, where I focus, there’s long been a struggle against Hollywood’s negative portrayal of “big pharma.” Add to this the growing expectation, and scrutiny, around whether and how businesses and leaders should take public positions on issues like global conflict, climate change, and social inequality, and challenges abound. AI, however, escalates these risks to a new level.

In healthcare, AI’s need for large amounts of patient data raises significant privacy and security concerns. AI systems can also make mistakes, such as diagnostic errors or flawed drug-development plans, especially if their training data is not representative of all patient populations. These risks are heightened by societal mistrust in AI.

Artificial Intelligence Has a Trust Problem

At Proof Strategies, we’ve studied trust in AI for six years. Our 2024 CanTrust Index shows a steady decline in Canadians’ trust that AI will positively contribute to the economy, down to 33% in 2024 from 39% in 2018. Only 27% trust AI’s competence in healthcare, despite hopes that AI can help cure diseases like cancer. This makes AI a reputational risk multiplier: any problem that can be linked to it starts with very little trust in the bank. Big businesses, too, begin from a trust deficit.

(Mis)trust in Big Business  

Our CanTrust Index reveals that fewer than one-third of Canadians trust large corporations, and only one-quarter trust their executives. Similarly, fewer than half of Canadians trust their boss to be competent and ethical, and employees give their employers a C grade on building external trust. In other words, if something goes wrong, many customers and employees are not ready to forgive and forget. With AI added to this cocktail of mistrust, organizations must understand what drives trust and apply those factors to their use of AI.

Applying the Science of Trust Building to AI 

Trust isn’t automatic. Rather, it can be deliberately built, rebuilt, and protected by nurturing its three ingredients: ability (competence), benevolence (kindness), and integrity (doing the right thing). Applying the ABI formula to the use of AI, organizations should take the following approach: 

Ability: Demonstrate competent AI use. Show understanding of AI’s capabilities and limitations, such as its inability to make moral or ethical judgments.

Benevolence: Address low trust in AI by showing empathy and kindness. Use clear, transparent communication about privacy and security, and build feedback loops for stakeholder concerns. 

Integrity: Ensure ethical AI use. Develop a code of conduct covering honesty, accountability for mistakes, and safeguards like human oversight. 

AI Risk Resilience 

Applying ABI to AI helps create a solid foundation, but organizations must also prepare for worst-case scenarios. We’ve developed an AI Risk Resilience process to safeguard clients. This includes: 

Benchmark Research: Establishing a baseline understanding of organizational trust and risks. 

Rapid Response Protocols: Developing strategies to address AI-related crises swiftly. 

Spokesperson Training: Preparing representatives to communicate effectively about AI issues. 

Listening Tools Powered by Predictive Analytics: Monitoring conversations and sentiment around crises in real time.

Trust Recovery and Rebuilding Strategies: Implementing plans to restore trust after an AI-related incident. 

The pace of change driven by AI is unprecedented, and its future impact is uncertain. Organizations must take deliberate steps to build trust and mitigate the risks associated with AI.

In an era where change is the only constant, proactive trust-building and risk mitigation efforts are not just beneficial—they’re essential. Are you ready to build trust in your AI initiatives? Contact Proof Strategies for a consultation. 

A version of this article appeared on Healthy Debate