


Building Consumer Trust In The Age Of Generative AI Personalization

Forbes Technology Council

Chief Information Officer at TELUS International, a global customer experience provider powered by next-gen digital solutions.

Imagine a world where generative AI (GenAI) creates personalized customer experiences based on consumers’ unique preferences, needs and emotions. Amazing, right? But what if, to accomplish that, companies are collecting, analyzing and using their customers' personal data without their knowledge or consent? What if the outputs discriminate against certain groups or lead to decisions that are inaccurate or biased? It's not so amazing anymore.

That’s why U.S. President Biden’s October 2023 Executive Order on artificial intelligence is so important. It sets standards for the ethical and responsible development of AI that help safeguard society against fraud and deception. The order also challenges companies developing GenAI solutions to ensure they respect consumer privacy, promote equity and provide transparency.

Even if the actions detailed by President Biden in the order aren’t yet law, companies leveraging the power of AI to enhance their customer experience (CX) through personalization should proactively incorporate its best practices into their AI governance framework. In doing so, they're taking steps to mitigate harm while simultaneously building consumer trust.

Maintaining Trust In The Age Of GenAI

In the digital age, trust has become imperative as AI—and GenAI in particular—presents both unprecedented opportunities and novel, intricate challenges. Today’s GenAI-powered chatbots can create convincing fake content, including text, images, videos, voice clips and “deepfakes” that seem real but can be used to spread misinformation and tarnish reputations.

Recent deepfakes that have made headlines involve financial scams and celebrity impersonations. With the upcoming U.S. election, there's already real concern that GenAI could be used by adversaries to undermine a fair political process. Deepfakes can be potent tools used to make disinformation campaigns more believable, undermining the reliability of visual content. Without proper oversight, these bots could erode voter trust by spreading harmful lies and propaganda.

The large language models (LLMs) that power GenAI also raise copyright concerns. These models are typically trained on content scraped from the internet—which often includes copyrighted and protected works used without permission. For example, there have been recent debates around AI-generated art and literature, highlighting the need for robust mechanisms to protect intellectual property.

Not surprisingly, if consumers think a company’s chatbot is plagiarizing, it can quickly erode trust. This can be mitigated in a number of ways, including obtaining proper licensing for any copyrighted training data, screening for plagiarism and allowing users to flag AI-generated content that violates these policies so that it can be quickly removed. As technology advances, the potential for copyright infringement raises complex questions, necessitating a proactive approach to protect artists and content creators.

Moreover, the issue of hallucinations continues to plague GenAI platforms, whereby a model confidently fabricates content that doesn’t align with reality. These hallucinations often stem from insufficient or biased training data, a lack of common sense or a model’s tendency to prioritize a fluent answer over a truthful one. Although the responses might seem plausible, these imaginary answers harm consumer trust. Continuous monitoring and refinement are key to reducing hallucinations and boosting customer confidence.
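As one simplified illustration of what "continuous monitoring" for hallucinations can look like in practice, the sketch below flags generated sentences whose wording barely overlaps the source documents the answer was meant to be grounded in. This is a toy word-overlap heuristic for illustration only; production systems rely on far more robust methods (entailment models, citation verification), and the threshold here is an arbitrary assumption.

```python
def grounding_score(sentence: str, sources: list[str]) -> float:
    """Fraction of the sentence's words that appear anywhere in the sources.

    A crude proxy for groundedness, purely for illustration.
    """
    words = set(sentence.lower().split())
    if not words:
        return 0.0
    source_words = set(" ".join(sources).lower().split())
    return len(words & source_words) / len(words)


def flag_ungrounded(answer: str, sources: list[str],
                    threshold: float = 0.5) -> list[str]:
    """Return sentences whose overlap with the sources falls below threshold."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if grounding_score(s, sources) < threshold]


sources = ["the store opens at 9 am on weekdays"]
answer = "the store opens at 9 am. our ceo invented the telephone"
flags = flag_ungrounded(answer, sources)
# The second sentence shares almost no words with the sources and is flagged
```

A monitoring pipeline could route flagged responses to human review, feeding the corrections back into model refinement.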

A Solid Governance Framework

To ensure GenAI systems adhere to truth and trustworthiness, companies need to create a comprehensive framework for ethical AI governance. This framework should cover the entire life cycle of GenAI, from data collection and analysis to content generation, delivery and evaluation. In addition to aligning with a company’s values, it should also follow the ethical principles and best practices outlined by the Executive Order on AI and other relevant global standards and regulations.

Key components of an ethical AI governance framework include:

• Embedding privacy-by-design principles and practices into GenAI systems through data minimization, anonymization and other privacy-enhancing techniques.

• Investing in privacy-enhancing technologies that enable GenAI systems to analyze data and generate content without compromising customer privacy.

• Complying with data regulations and continuous monitoring, which involves conducting third-party audits, reviews and monitoring of data practices to detect violations.

• Incorporating reinforcement learning from human feedback (RLHF), which uses human ratings, reviews and corrections as reward signals to improve GenAI quality, fairness and accountability.
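To make the first of these components concrete, the sketch below shows one way data minimization and pseudonymization might be applied before customer records ever reach a GenAI personalization system: fields the model doesn't need are dropped, and the direct identifier is replaced with a salted one-way hash. The field names and the salt-handling here are assumptions for the example, not a real schema or a recommended production setup.

```python
import hashlib

# Hypothetical allowlist: the only fields the personalization model needs.
REQUIRED_FIELDS = {"customer_id", "preferences", "purchase_category"}


def pseudonymize(value: str, salt: str = "rotate-per-deployment") -> str:
    """Replace a direct identifier with a salted one-way hash (truncated)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]


def minimize_record(record: dict) -> dict:
    """Keep only allowlisted fields and pseudonymize the customer ID."""
    minimized = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    if "customer_id" in minimized:
        minimized["customer_id"] = pseudonymize(minimized["customer_id"])
    return minimized


record = {
    "customer_id": "C-1042",
    "email": "jane@example.com",    # direct identifier: dropped
    "home_address": "123 Main St",  # not needed by the model: dropped
    "preferences": ["hiking", "jazz"],
    "purchase_category": "outdoor",
}
safe = minimize_record(record)
# Only the pseudonymized ID, preferences and purchase category remain
```

The point of the design is that sensitive fields never leave the collection layer, so the downstream GenAI system cannot leak what it never received.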

The Impact Of Transparency On Trust

Trust and transparency are essential in GenAI development to promote ethical practices, build consumer confidence and strengthen brand reputation, especially when applied to CX. A recent survey of 1,000 Americans familiar with GenAI conducted by my firm, TELUS International, found that more than seven in 10 (71%) expect companies to be transparent in how they’re using GenAI. By providing clear and reliable information, companies can establish credibility, accountability and responsibility with their customers, strengthening brand reputation and trust.

Companies should inform consumers about the presence and purpose of GenAI in their platforms and how they’re using it to enhance the customer experience. They should also disclose how they collect, use and protect customer data, as well as the benefits and risks of data sharing. Clear and easy-to-understand privacy policies that explain customers’ rights and choices regarding their data should also be communicated. Additionally, brands should enable customers to easily access and review their personal data and to correct any inaccuracies, letting them have some control over how their data is collected, stored and used.

The age of GenAI offers unprecedented opportunities for creating personalized CX that generates loyalty and connections with consumers. By adopting proactive, responsible AI principles and diligent governance, following best practices and working with industry experts to stay ahead of the curve on evolving standards and regulations, companies can ensure they're proceeding thoughtfully and leveraging GenAI ethically, ultimately building lasting consumer trust.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

