10 Strategies For Biz Teams To Prevent Bias In AI Data Results

Forbes Business Development Council

As reliance on artificial intelligence grows across industries, companies and regulators must ensure that compliance standards are in place and kept up to date if they want to build trust and transparency with stakeholders.

Business leaders and their teams are responsible for monitoring data that can produce harmful outcomes for underrepresented communities, which are often marginalized from the start by flawed internal systems. Below, 10 experts from Forbes Business Development Council each share one best practice managers can use to address bias in AI and uphold ethical data usage.

1. Investigate Current Datasets For Existing Bias

First, look at the bias that may already exist in current datasets that are less than representative. This is especially essential for categories such as race, gender and sexual orientation. Don’t assume that AI necessarily introduces bias; instead, be mindful that AI may amplify, institutionalize and render invisible the bias that already exists. - Hari Suthan, Constellation Software Inc.
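One practical way to begin such an investigation, offered purely as an illustration rather than anything Suthan prescribes, is to compare how each demographic group is represented in a training set against a benchmark. The sketch below assumes a pandas DataFrame loaded from a hypothetical training_data.csv with an illustrative gender column and made-up benchmark shares.

```python
import pandas as pd

# Hypothetical file and column names; swap in your own dataset.
df = pd.read_csv("training_data.csv")

# Illustrative benchmark shares (e.g., census or customer-base figures).
benchmarks = {"gender": {"female": 0.50, "male": 0.50}}

for column, expected in benchmarks.items():
    # Observed share of each group in the dataset.
    observed = df[column].value_counts(normalize=True)
    for group, expected_share in expected.items():
        observed_share = observed.get(group, 0.0)
        gap = observed_share - expected_share
        print(f"{column}={group}: dataset {observed_share:.1%}, "
              f"benchmark {expected_share:.1%}, gap {gap:+.1%}")
```

Large gaps flag groups that are under- or overrepresented before any model is trained on the data.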

2. Make Data Transparent And Comprehensive

The ability to access relevant data and contextual information in real time supports compliance with regulations and industry standards. Make this data transparent, and make your AI explainable to avoid the “black box” that people naturally distrust. - Elizabeth Kiehner, Nortal

3. Highlight AI's Benefits And Comply With Industry Standards

To address AI bias and promote ethical data use, leaders must foster a culture of transparency and continuous learning. Incorporating ethics into AI from design to deployment is key here. Engaging with stakeholders openly about AI's impact builds trust. To navigate fear, leaders should offer education on AI's benefits and demonstrate their commitment to compliance standards. - Eddy Vertil, Vertil & Company

4. Address AI Bias And Keep All Stakeholders Accountable

Addressing AI bias starts with acknowledging that debiasing is not a finite process, for AI or for humans. A diverse team of experts will keep one another in check when selecting varied data sources that follow ethical data collection principles. Transparency in data sourcing, honest communication about the risks of bias, and constant corrections to mitigate them will earn stakeholder trust. - Tomas Montvilas, Oxylabs


5. Establish Clear Guidelines For Data Handling And AI Application

To mitigate AI bias and uphold ethical data usage, leaders must prioritize transparency, regularly audit algorithms and diversify training datasets. Establishing clear guidelines for data handling and AI application ensures regulatory compliance. Engaging stakeholders through open communication about AI policies and practices fosters trust and reinforces our commitment to ethical innovation. - Rahul Saluja, Cyient
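As a minimal sketch of what a recurring algorithm audit could look like in practice (an illustration, not Saluja's own method), the snippet below computes the approval-rate gap between two hypothetical groups from a made-up decision log and flags it when it exceeds an illustrative tolerance.

```python
import pandas as pd

# Made-up decision log; column names and values are illustrative assumptions.
decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   0,   1,   0,   0,   1,   0,   1],
})

# Approval rate per group and the gap between the best- and worst-served groups.
rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap: {gap:.1%}")

# An illustrative tolerance a scheduled audit could alert on.
if gap > 0.10:
    print("Gap exceeds tolerance; review the training data and model.")
```

Run on a schedule against real decision logs, a check like this gives the clear data-handling guidelines a concrete, measurable trigger.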

6. Check In Regularly And Welcome Diverse Perspectives

Build a diverse and objective task force to lead all AI efforts internally and externally. When crafting and socializing norms around gathering unbiased datasets and using data ethically, keep compliance standards in mind. Transparency and trust can be built with regular check-ins and by including diverse perspectives across the organization. This is key to empowering and authorizing the task force to maintain standards. - Archana Rao, Innova Solutions

7. Continue To Update Ethical Guidelines

It’s key to constantly monitor and evaluate the data and algorithms used, regularly engage with diverse stakeholders, and have strict policies for ethical data collection and usage. Leaders can ensure compliance with regulatory standards by consulting with legal professionals and implementing measures such as disclosing datasets. Trust can be achieved through accountability and by updating ethical guidelines. - Luke Boddis, Checkout.com

8. Form An ‘AI Ethics Steering Committee’

The first step would be to set up an “AI Ethics Steering Committee” with full autonomy and authority. The committee typically comprises a diverse group of stakeholders, including ethicists, legal experts, technologists and representatives from affected communities. The team addresses bias and the ethical use of AI through compliance standards, transparency and trust. This contributes positively to humanity. - Saurabh Choudhuri, SAP

9. Implement An AI-Backed, Blockchain-Based Survey

Try using an AI-backed, blockchain-based survey. An anonymous survey is essential to achieving the organization's long-term goals because change management often brings about conflicts, which a survey can help reduce. You can gather trustworthy insights and results using the algorithm within the AI-backed blockchain framework. - Gyehyon Andrea Jo, MVLASF

10. Remember That AI Must Be Directed With Human Insight

AI isn't perfect and never will be. The first step to addressing bias is simply to understand that AI isn't human; it doesn't understand ethics or race, and it isn't inherently evil. Creating company-focused guardrails and putting them in place for your AI endeavors now, though, will ultimately lead to favorable long-term results. - Brandon Batchelor, ReadyCloud Suite
