Challenges & approaches in fostering tech progress, social welfare

As AI becomes increasingly woven into the societal fabric, India’s regulatory stance on AI must evolve to strike a delicate balance between fostering technological progress and safeguarding citizen welfare, where the onus is not lopsided, for developers and users alike.

With the increasing deployment of artificial intelligence (AI) and machine learning solutions, particularly generative AI, across sectors over the last 18 months or so, it is evident that the technology’s scope of impact on business and society is practically limitless.

While classic ‘boon or bane’ debates about AI have been plentiful, recent attention on the use of deepfakes and the alleged role of generative AI systems in spreading misinformation has brought forth tangible concerns. These concerns highlight the challenges associated with the development and use of generative AI, as well as the difficulties in regulating it.

Need for Regulation
Creators of the technology and policymakers around the world alike are seeking regulation of AI, especially since the advent of generative AI models and tools accessible to the masses. Yet this remains challenging, with diverse and, at times, conflicting regulatory approaches being proposed.

For legislators, the task is particularly challenging: it involves imagining the extent of the technology’s application, identifying areas where regulation is necessary, and determining the degree of regulation required. Although the technology is still in its early stages, its rapid evolution and proliferation across sectors have created a demand for regulation sooner rather than later.

The European Union has already taken a bold step in this direction by adopting its AI Act, and many other countries, including India, are making serious efforts to formalize a suitable AI regulatory framework.

MeitY Advisories
In March 2024, the Ministry of Electronics and Information Technology (MeitY) in India issued a set of advisories to address immediate concerns related to the use of AI, placing emphasis on due diligence by intermediaries and platforms as required under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Intermediary Rules).

The first advisory, circulated on March 1, 2024, among other things, mandated that intermediaries and platforms seek explicit permission from the government before making ‘under-testing / unreliable Artificial Intelligence model(s) / LLM / Generative AI, software(s) or algorithm(s)’ (AI Tools) available to Indian users. However, the government later attempted to clarify that the permission requirement applied only to ‘significant’ or ‘large’ platforms and not to start-ups, before doing away with the requirement entirely in its updated advisory issued on March 15, 2024.

The updated advisory broadly required intermediaries and platforms to ensure that (a) the use of AI tools on or through their computer resources does not permit users to deal with any unlawful content or violate any provisions of the Information Technology Act, 2000 or other laws; (b) their computer resources do not permit bias or discrimination, or threaten the integrity of the electoral process, through the use of AI tools; (c) under-tested or unreliable AI tools are made available to users in India only after labeling the possible inherent fallibility or unreliability of the output generated; and (d) synthetic content that may potentially be used as misinformation or a deepfake is labeled or embedded with permanent unique metadata or identifiers.
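To make requirement (d) concrete, the sketch below shows one plausible way a platform might embed a synthetic-content label and a unique identifier into an AI-generated image. It is illustrative only: the advisory does not prescribe any particular mechanism, and the function name, metadata keys, and identifier format here are all hypothetical, using Python’s Pillow library.

```python
# Illustrative only: the MeitY advisory prescribes no specific mechanism.
# This sketch embeds a synthetic-content label and a unique identifier
# into a PNG's metadata using Pillow; all key names are hypothetical.
import hashlib
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, model_name: str) -> str:
    """Write a labeled copy of an AI-generated PNG and return its identifier."""
    image = Image.open(src_path)

    # Unique identifier: a hash of the pixel data plus a random UUID.
    content_hash = hashlib.sha256(image.tobytes()).hexdigest()
    identifier = f"{model_name}:{content_hash[:16]}:{uuid.uuid4()}"

    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")
    metadata.add_text("synthetic-content-id", identifier)
    image.save(dst_path, pnginfo=metadata)
    return identifier
```

Note that PNG text chunks are trivially stripped by re-encoding, so a truly ‘permanent’ identifier would in practice call for robust watermarking or provenance standards such as C2PA, which is precisely the kind of technical detail that regulation would need to settle.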

Challenges with Regulation
While the legal effect of the MeitY advisories is crucial in the present context and raises questions about the applicability of existing legal frameworks to generative AI, there are more fundamental questions that require further deliberation.

Generative AI models learn through rules set by humans, interpret data generated by humans, and cater to humans by providing specific outputs based on requirements and user-specific inputs. Each step of this process is naturally exposed to human involvement and influence. As humans tend to be biased, often subconsciously, AI can inherit these biases when generating content. For instance, a model developed in one part of the world using primarily local datasets would generate results tuned to the preferences of the local population.

However, such results may not be acceptable, or may even seem biased, when accessed by a user based in a different part of the world. Similarly, for a diverse country like India, these issues may hold true across different regions within the country. In such situations, unless the user is given a transparent account of the background to the output, such as the datasets used for training the AI model and the methodology used for its evaluation and processing, the output may be tagged as misinformation, especially where it is difficult to separate facts from opinions.

Part of the near-term solution lies in placing accountability on proponents of the technology in the form of (a) minimum standards for training, transparency, and verifiability; (b) mandated disclaimers for unreliable or under-tested systems that make users aware of the limits of the system’s reliability; and (c) labeling of synthetic content such as deepfakes, some of which already form part of the MeitY advisories. However, some onus must also be placed on users who choose to rely on such systems beyond what is declared by the provider, or who ignore disclaimers and fail to conduct their own due diligence on the outputs generated.
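As a rough illustration of points (a) and (b), the sketch below shows how a provider might attach transparency disclosures and a reliability disclaimer to every output of an under-tested system. All field names, disclaimer wording, and the model name are hypothetical; no current regulation mandates this exact shape.

```python
# Hypothetical sketch of attaching disclosures and a disclaimer to model
# output; field names and wording are illustrative, not mandated anywhere.
from dataclasses import dataclass, field

@dataclass
class LabeledOutput:
    text: str                      # the model's response
    model_name: str
    under_tested: bool             # provider's own reliability assessment
    training_data_summary: str     # transparency: what the model was trained on
    evaluation_summary: str        # transparency: how it was evaluated
    disclaimer: str = field(init=False)

    def __post_init__(self) -> None:
        if self.under_tested:
            self.disclaimer = (
                f"{self.model_name} is under testing; outputs may be unreliable "
                "and should be independently verified."
            )
        else:
            self.disclaimer = "Outputs may contain errors; verify before relying on them."

# Usage: every response ships with its provenance and disclaimer attached.
response = LabeledOutput(
    text="...generated text...",
    model_name="example-llm",                                  # hypothetical
    under_tested=True,
    training_data_summary="Public web text up to 2023 (illustrative).",
    evaluation_summary="Internal red-teaming only (illustrative).",
)
print(response.disclaimer)
```

Pairing each output with its training and evaluation summaries is one way to operationalize the transparency discussed above, while leaving the user’s own due diligence obligations intact.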

Longer-term regulatory solutions should encourage the use of technology rather than relying primarily on human intervention and, more importantly, would also need to ensure that responsibility is distributed appropriately among developers and users.

General or Sector-Specific Regulation
One of the main challenges policymakers around the world are grappling with is creating an effective system of regulation for AI: one that does not hinder its development but effectively prevents its abuse. Opinions differ on whether a universal set of regulations should apply to all AI applications, or whether each sector should frame its own rules based on its particulars.

As AI is being employed by users across the spectrum, leading to its integration into the mainstream, each sector may require its own unique set of regulations. For example, an LLM for general purposes trained on publicly available datasets will be subject to greater concerns about the integrity of data and the spread of misinformation, requiring tighter regulation in the public interest.

This will not be as critical for a specific tool created to improve product design in the automobile sector. The use of AI tools in healthcare, on the other hand, requires even closer scrutiny of input data to ensure that unreliable or misclassified data is excluded from consideration for drug discovery or disease detection. The manner of training the models and their subsequent scrutiny may also need to be different for different sectors.

Therefore, sector-specific regulation may be better suited to generative AI, so that concerns specific to one sector do not impede the development of others where they are not relevant. However, all sectors should adhere to certain common guiding principles to protect the larger interests of public policy, national security and integrity, and human rights.

While the EU has taken the path of introducing a generalized framework under its Artificial Intelligence Act, classifying standards of regulation based on the risk level of an AI system, operationalizing its implementation across various sectors may require tweaks to better suit individual use cases.

In the Indian context, after identifying high-risk sectors or use cases, ideas such as a regulatory sandbox for generative AI products could be deployed specifically for those use cases or sectors to ensure responsible development of the technology before its introduction to the public. This would also give policymakers a better hands-on understanding of the challenges faced by developers, while making developers aware of the do’s and don’ts from a regulatory standpoint.

Currently, with regulations yet to crystallize in a rapidly evolving technological landscape, users are often left to discern the reliability of AI-generated data, a task that may be beyond their capacity or, sometimes, beyond their intent.

This overreliance on AI, devoid of necessary checks and balances, has profound implications, potentially influencing public opinion and impacting decision-making in critical areas. As AI becomes increasingly woven into the societal fabric, India’s regulatory stance on AI must evolve to strike a delicate balance between fostering technological progress and safeguarding citizen welfare, where the onus is not lopsided, for developers and users alike.

(The authors are Partner, Principal Associate, and Associate at Saraf and Partners. Views are personal.)

Published on Apr 20, 2024 at 08:03 AM IST
