India's Ministry of Electronics and Information Technology (MeitY) announced last Friday that any AI technology still under development or testing must receive government approval before being made available to the public.
Developers must also label such technologies to flag possible errors or unreliability in their outputs before deploying them.
Additionally, the advisory proposes a “consent popup” that will warn users about potential mistakes or flaws in AI outputs, and it requires that deepfakes be clearly labeled to prevent misuse.
Finally, the advisory requires all platforms to ensure that their AI products, such as large language models, do not introduce bias or discrimination or threaten the integrity of the electoral process.
Some industry leaders believe India’s new AI rules are too strict.
The advisory requires developers to comply with its guidelines within 15 days of its release. After complying and applying for permission to launch, they may need to demonstrate their products to government officials or undergo stress testing.
While the advisory is not legally binding at present, it signals the government’s expectations and suggests where AI regulation may be heading.
IT Minister Rajeev Chandrasekhar stated, “We are issuing this as an advisory today, asking AI platforms to follow it,” indicating that these guidelines will eventually become law.
Chandrasekhar further emphasized, as reported by local media, that AI platforms operating on the internet must take full accountability for their output and cannot claim immunity on the grounds of being in a testing phase.