
The State Of AI In Production In 2021

Forbes Technology Council

Ofer Razon is the CEO of superwise.ai, a fast-growing SaaS company pioneering the AI Assurance space.

2020 has been an outlier year in many respects. We've seen widespread disruption, change and uncertainty in every sphere of business. Yet chaotic, unstable times also tend to bring great leaps forward in technology and innovation. 2020 is the year that accelerated digital transformation, pushed remote working and highlighted the value of AI. 

In 2020, I've seen numerous enterprises discover just how much AI and ML tools can help their organizations remain stable and even continue to grow despite the turmoil rolling through the markets. But this growth comes with the necessity of assuring the health of ML models in production to avoid drift, bias and anomalies. While AI adoption has taken a giant leap forward, we've learned that ML models need to be adaptable and robust. 

The question now is: What happens next? What will 2021 look like, and what will it mean for AI practitioners? 

Here are my predictions and expectations based on what I have witnessed over the last 12 months. 

AI Is Growing More Agile

Agile methodologies are spreading out from software development into every business use case, including the development and deployment of AI models. Enterprises increasingly understand that if they wish to unlock the true value of AI, they need to establish an agile, flexible data culture that is constantly learning and improving. 

Yet ML behaves differently from traditional software because it relies on live data. Moreover, when AI fails, it fails silently, which demands a robust monitoring strategy. Data science teams are learning to fine-tune the CI/CD approach to support more frequent model changes, faster cycles and improved model monitoring that can detect drift, anomalies and blips in the system earlier than before. 
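To make the monitoring idea concrete, here is a minimal sketch of one common drift check: comparing a feature's training-time distribution against its live distribution with a two-sample Kolmogorov-Smirnov test. The function name, threshold and synthetic data are illustrative assumptions, not any vendor's actual implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_values, live_values, alpha=0.05):
    """Flag distribution drift between training-time and live feature values
    using a two-sample Kolmogorov-Smirnov test. Returns (drifted?, p-value)."""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, p_value

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # feature distribution at training time
shifted = rng.normal(0.8, 1.0, 5000)   # live distribution after the world changed
drifted, p = detect_drift(baseline, shifted)
print(drifted)  # True: the live data has drifted away from the baseline
```

In practice a monitoring pipeline would run a check like this per feature and per model output on a schedule, alerting the team before silent failures reach end users.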

On The Path To Pervasive AI

2020 is the year that moved AI out of the corners and into the mainstream. The "early adopter" advantage is petering out, with a Deloitte survey revealing that only 27% of enterprises now rank as "starters" in terms of AI adoption, while 47% are considered "skilled" and 26% "seasoned." 71% of participants plan to increase their investment in 2021, and IDC forecasts that AI spending will grow to $97.9 billion by 2023, 2.5 times the 2019 level. 

As AI becomes ubiquitous, the most mature organizations are looking for more use cases. While AI is still primarily utilized for IT, cybersecurity, and engineering and production, it's seeping into more business-critical functions like marketing, legal, HR and procurement. 

Advanced enterprises are moving on from using AI to automate processes and optimize efficiency and are becoming more creative. AI is slowly being used to develop new products and services, with seasoned adopters rating this as the second most desired outcome, while starter adopters still rank it a distant fourth. It's clear that we're moving swiftly toward pervasive AI as it becomes integrated into the fabric of business. 

New Roles In The AI Field

I'm also seeing the emergence of new roles within the field of AI, notably that of AI validator. AI validators perform the same role for AI models that QA assessors play for software development — namely, to challenge AI models and do their best to break them, find adversarial examples and understand how they function. 

AI validators are connected to another important and growing issue within AI: explainability. When a validator understands how the models function, the models are no longer a black box, which increases transparency and explainability for the enterprise. Overall, and especially as AI becomes more pervasive, AI is no longer the responsibility of data science teams alone; it belongs to subject matter experts and operational teams alike. For many of our customers, scaling AI is a matter of empowering both their data science teams and the operational teams who rely on the predictions for their day-to-day activities. 

Responsible AI Is Coming

AI models can suffer from model drift and flawed training data, leading them to make decisions that are not just incorrect, but can be unfair, unethical and discriminatory. Black-box models are frequently impenetrable even to the data science team that built them, producing decisions that can't be understood or traced. 

The recent Netflix documentary The Social Dilemma raised general awareness of algorithmic decision-making; in areas like medicine, finance and education, opaque AI decisions can actively harm people's lives. Enter explainable AI, or XAI. 
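One simple, model-agnostic explainability technique is permutation importance: shuffle one feature at a time and measure how much model quality drops. The sketch below is an illustrative toy, assuming a hypothetical model that only uses its first feature; it is not a production XAI tool.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Estimate each feature's importance as the average drop in accuracy
    when that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    base_acc = np.mean(model(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's signal
            drops.append(base_acc - np.mean(model(X_perm) == y))
        importances.append(np.mean(drops))
    return np.array(importances)

# Toy "model": predicts 1 when feature 0 exceeds a threshold; feature 1 is noise.
model = lambda X: (X[:, 0] > 0.5).astype(int)
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(1000, 2))
y = (X[:, 0] > 0.5).astype(int)
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 scores high; feature 1, being unused, scores ~0
```

Probes like this are what let a validator say which inputs actually drive a model's decisions, turning a black box into something the enterprise can explain and defend.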

Technology executives, like Google CEO Sundar Pichai, recognize the advantages of AI regulation. Corporations are acting independently to improve transparency and traceability, especially in verticals that deploy AI for significant decisions, like accepting insurance applications or approving or denying healthcare claims. 

Enterprises are wise to preempt top-down regulations. Governments and international bodies are drafting legislation to regulate AI, similar to GDPR and other standards for data use. The EU's High-Level Expert Group on Artificial Intelligence produced ethics guidelines for trustworthy AI. The U.S. issued 10 principles for government bodies to use in developing AI regulations for the private sector. And China's New Generation Artificial Intelligence Development Plan (AIDP) includes eight principles for using AI to aid humanity. 

2021 Is The Year Of AI Assurance

We're entering 2021 with AI expanding across all business ecosystems, industries and use cases. It's not surprising, given humanity's long-standing fear of a robot takeover, that one of the biggest trends underpinning AI's evolution is the need for better AI assurance — assurance that covers not just the moment a model reaches production but how it behaves over time. 


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

