New Delhi, India – The Indian government is asking tech companies to seek its explicit permission before publicly releasing “unreliable” or “untested” generative AI models and tools. It has also warned companies that their AI products must not generate responses that “threaten the integrity of the electoral process” as the country prepares for a national vote.
The Indian government’s effort to regulate artificial intelligence marks a departure from its previous hands-off approach: in April 2023, it informed parliament that it was not considering any legislation to regulate AI.
The advisory was issued by India’s Ministry of Electronics and Information Technology (MeitY) last week, shortly after Google’s Gemini faced right-wing backlash for its response to the question, “Is PM Modi a fascist?”
The chatbot responded that the Indian prime minister had been “accused of implementing policies that some experts characterize as fascist”, citing his government’s “crackdown on dissent” and “use of violence”.
In response, Rajeev Chandrasekhar, India’s junior minister for information technology, accused Google’s Gemini of violating Indian law. Google reportedly apologized, saying the response was the result of an “unreliable” algorithm and that it was working to address the issue and improve its systems. “‘Sorry but unreliable’ does not exempt you from the law,” Chandrasekhar retorted.
In the West, big technology companies often face accusations of liberal bias. These allegations of bias have also spilled over into generative AI products such as OpenAI’s ChatGPT and Microsoft Copilot.
Meanwhile, in India, the government’s advisory has raised concerns among AI entrepreneurs that the nascent industry could be suffocated by overregulation. With national elections about to be announced, the advisory would also allow the Modi government to pick which AI applications to permit and which to ban, effectively giving it control over the online spaces where these tools have influence. Some worry that it is an attempt to do exactly that.
The feel of ‘license raj’
The advisory is not a law that automatically binds companies. However, violations could be prosecuted under India’s Information Technology Act, lawyers told Al Jazeera. “This non-binding advisory looks more like political posturing than a serious policy decision,” said Mishi Chaudhary, founder of India’s Software Freedom Law Center. “We’re going to see more serious efforts after the elections. This gives us a glimpse into how policymakers are thinking.”
But the advisory is already sending signals that could stifle innovation, especially at startups, said Harsh Chaudhary, co-founder of Bangalore-based AI solutions company Centra World. “If every AI product requires approval, that seems like an impossible task for the government,” he said. “We might need another GenAI [generative AI] bot just to test these models,” he added with a laugh.
Several other leaders in the generative AI industry have also criticized the advisory as an example of regulatory overreach, among them Martin Casado, general partner at US-based investment firm Andreessen Horowitz, who pushed back against it on social media.
Bindu Reddy, CEO of Abacus AI, wrote that with the advisory, “India has just kissed its future goodbye!”
Amid that backlash, Chandrasekhar issued a clarification on social media, adding that the advisory applies only to “critical platforms” and not to startups.
But a cloud of uncertainty remains. “This advisory is full of terms such as ‘unreliable’, ‘untested’ [and] ‘Internet of India’. The fact that clarifications were needed to explain its scope, application and intent is a clear sign that the work was done in a hurry,” Mishi Chaudhary said. “Ministers are competent people, but they do not have the necessary capacity to evaluate AI models and issue licenses to operate.”
“It’s no wonder it [has] evoked the license raj sentiments of the 1980s,” she added, referring to the bureaucracy that required government permits for business activity, which prevailed until the early 1990s and stifled India’s economic growth and innovation.
At the same time, exempting select startups from the advisory could be problematic. They, too, can produce politically biased responses and hallucinations, in which an AI generates false or fabricated output. As a result, the exemption “raises more questions than it answers,” Mishi Chaudhary said.
Harsh Chaudhary said he believed the government’s intention behind the regulation was to hold companies that monetize AI tools accountable for those tools’ missteps. “However, a permission-first approach may not be the best way to do that,” he added.
The shadow of deepfakes
India’s move to regulate AI content will also have geopolitical implications, argues Shruti Shreya, senior program manager for platform regulation at technology policy think tank The Dialogue.
“With a rapidly growing internet user base, India’s policies could set a precedent for how other countries, especially developing countries, approach AI content regulation and data governance,” she said.
Analysts say that regulating AI will be a difficult balancing act for the Indian government.
Millions of Indians are expected to vote in national elections likely to be held in April and May. With the rise of easily available, often free generative AI tools, India has already become a playground for manipulated media, a scenario that casts a shadow over the integrity of its elections. India’s major political parties continue to deploy deepfakes in their election campaigns.
Kamesh Shekhar, a senior program manager specializing in data governance and AI at the same think tank, said the recent advisory should be seen as part of the government’s ongoing efforts to draft comprehensive generative AI regulations.
Prior to this, in November and December 2023, the Indian government had asked Big Tech companies to remove deepfakes within 24 hours of a complaint, to label manipulated media, and to make proactive efforts to tackle misinformation, though it mentioned no explicit penalties for failing to comply.
But Shekhar also said that policies requiring companies to obtain government approval before launching products inhibit innovation. “The government could instead consider creating sandboxes, live testing environments where AI solutions and participating entities can test a product without a large-scale roll-out to determine its reliability,” he said.
Not all experts, however, agree with the criticism of the Indian government’s advisory.
As AI technology evolves at a rapid pace, governments often struggle to keep up. Still, governments need to step in and regulate, said Hafiz Malik, a professor of computer engineering at the University of Michigan who specializes in deepfake detection. It would be foolish to leave it to companies to self-regulate, he said, adding that the Indian government’s advisory is a step in the right direction.
“Regulations need to be introduced by governments, but they should not be introduced at the expense of innovation,” he said.
Ultimately, though, what is needed is to raise public awareness, Malik added.
“Seeing something and believing it is now off the table,” Malik said. “The deepfake problem cannot be solved unless the public is aware. Awareness is the only tool to solve a very complex problem.”