
    View: AI regulation must be more nuanced than set out in GoI’s half-baked advisory

    Synopsis

    A recent advisory says that under-tested or unreliable AI models available to users on the 'Indian internet' require GoI's explicit permission, and can be deployed only after appropriate labelling of the possible and inherent fallibility or unreliability of the output they generate. But this is problematic.

    Nikhil Pahwa

    The writer is founder, Medianama

    For every idea you can think of, there's probably an AI model being developed. At last count, the Hugging Face community hosted as many as 547,898 AI models, for use cases ranging from architecture to legal research, online hate-speech detection, speech recognition and video classification.

    Alongside this, MLC LLM, a project still in its early stages, allows us to deploy AI models natively on our mobile phones or laptops. This is also the promise of open-sourcing AI. In the future, we might all have our own AI deployments on our phones, which we can train and develop to serve our needs.

    With AI being deployed in the military and in scams, the growing risk of deepfakes, and GoI's sensitivity towards chatbot outputs - such as the recent controversial output from Google's Gemini about India's PM - there is a case for regulating AI. However, we need a more thoughtful approach than India's recent half-baked and ham-handed attempt, which deservedly drew ridicule from technologists.

    The advisory says that under-tested or unreliable AI models available to users on the 'Indian internet' require GoI's explicit permission, and can be deployed only after appropriate labelling of the possible and inherent fallibility or unreliability of the output they generate. But this is problematic.

    What is 'Indian internet'?
    There is no legal definition of 'Indian internet'. The internet is a global network of networks. This move left AI companies wondering whether they would have to block Indian users until they receive GoI's approval.

    Reliable kya?
    Most popular AI models are unreliable, their outputs probabilistic. This is why almost every publicly available AI chatbot delivers a different response to the same query each time. Each word it generates is a prediction of what the next word is most likely to be, based on its statistical modelling and on your input context and query.

    Unsurprisingly, this advisory was criticised as India's attempt to 'regulate mathematics'. If a model's sampling settings favour creativity over accuracy - a high 'temperature' - it gives wildly creative but potentially nonsensical replies, much like a poet wandering through a maze of metaphors. You can't rely on AI chatbots for facts - they often fabricate information.
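    To see what that setting does, here is a toy sketch of next-word sampling - illustrative only, not any actual model's code; the words and scores are made up:

        import math
        import random

        def sample_next_word(scores, temperature):
            """Pick one word from model scores. A low temperature makes the
            top-scoring word dominate (predictable); a high temperature
            flattens the distribution (creative, but prone to nonsense)."""
            # Softmax with temperature: weight of each word ~ exp(score / T)
            exps = {word: math.exp(score / temperature) for word, score in scores.items()}
            return random.choices(list(exps), weights=list(exps.values()))[0]

        # Hypothetical scores for the word after "The capital of France is ..."
        scores = {"Paris": 5.0, "beautiful": 2.0, "Lyon": 1.0}

        for t in (0.2, 1.0, 2.0):
            print(f"temperature={t}:", [sample_next_word(scores, t) for _ in range(8)])

    At a low temperature, the model says 'Paris' almost every time; turn the temperature up and 'beautiful' or 'Lyon' start appearing - the same mechanism that makes a chatbot's answers vary between runs.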

    Context fix
    The advisory says AI platforms must also ensure they don't permit users to host, display, publish, upload or share any unlawful content. A chat with a chatbot is a private conversation, not a communication to the public. But how can a platform prevent a user from taking a screenshot and publishing it online?

    The advisory is also issued under a provision of the IT Act that specifies due-diligence requirements, and that doesn't allow platforms to act on content unless there's a government or court order directing them to censor it.

    Intermediaries?
    The advisory appears to presume that AI platforms are 'intermediaries' - that is, mere conduits between users. However, if I'm chatting with a chatbot, who is the other user? If the chatbot is to be treated as a distinct person, will it be liable for the output it generates? Any output from a chatbot depends on the data it is trained on, its training weights and its transformation into a language model, and, subsequently, on fine-tuning, which affects the quality of output.

    Every output also depends on the inputs the chatbot receives from the user as context and query, and, finally, on the probability setting that enables creativity in output. As such, there's no law or legal precedent for holding a chatbot responsible for its output. Also, whether AI chatbots are intermediaries or publishers remains an unanswered legal question.

    Comply or...
    The advisory also demanded impossible feats, like ensuring AI tools don't permit bias, discrimination or electoral-process interference. While it is possible to modify AI tools to prevent discriminatory outputs, ensuring 100% compliance is next to impossible, as the sketch below suggests. And how will this advisory apply once AI deployments and fine-tuning move from the web to personal devices?
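    Consider a minimal, purely hypothetical guardrail of the kind platforms layer on top of models - every name and term here is made up for illustration:

        BLOCKED_TERMS = {"example_slur", "example_banned_claim"}  # hypothetical denylist

        def guardrail(model_output):
            """Withhold a model's reply if it contains a blocked term."""
            lowered = model_output.lower()
            if any(term in lowered for term in BLOCKED_TERMS):
                return "[response withheld by safety filter]"
            # Passes the filter - yet may still be biased or unlawful in context
            return model_output

        print(guardrail("... example_slur ..."))   # caught: response withheld
        print(guardrail("... examp1e_slur ..."))   # a trivial respelling slips through

    Any finite filter misses paraphrases, misspellings, other languages and context-dependent harms - which is why 'ensure no unlawful output' is an obligation no platform can fully meet.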

    IT MoS Rajeev Chandrasekhar clarified on X that this advisory applies only to large platforms, not startups. But there appears to be no such mention in the advisory. And, surely, a tweet can't be a mechanism to modify a government advisory. Later, IT minister Ashwini Vaishnaw said at a press conference that the advisory is non-binding. But this is contradicted by the advisory itself, which states that platforms are required to ensure compliance with immediate effect and submit an action-taken report (ATR) to the IT ministry by today, March 15. It's also, unfortunately, becoming common for GoI departments to regulate via non-binding advisories and FAQs.

    Despite these flaws, discordant tweets and press statements, the advisory is still operational and hasn't been withdrawn. The intent of both ministers is sound: to ensure unlawful content isn't generated, and that AI output doesn't harm reputations or the electoral process. But a multi-stakeholder approach - a public consultation involving citizens, technologists, academia, lawyers and tech companies - could better serve India's goal of becoming an AI superpower while ensuring trust and safety for Indian users.
    (Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com.)