“AI is the new electricity.” - Andrew Ng (Tech Entrepreneur)
“AI will be the best or worst thing ever for humanity.” - Elon Musk (Tech Entrepreneur)
These quotes from two renowned technology entrepreneurs sum up the state of affairs we are in. Today, there is no aspect of our lives and jobs that has not been touched by Artificial Intelligence (AI). 2023 will perhaps go down as the year of ChatGPT. Generative AI (GenAI) models built on Large Language Models (LLMs), led by ChatGPT and including Microsoft Copilot, Google Gemini, Llama, and Claude, became ubiquitous and pervasive across all aspects of human endeavor. Any type of multimodal and multimedia data can now be mined for insights and predictions, as well as for summarization, question answering, classification, and other out-of-the-box tasks. However, there is always the risk of inaccuracies and falsehoods being propagated, further complicated by bias and copyright violations arising from this technology. Generative AI has disrupted every sector and vertical: healthcare, skilling, manufacturing, supply chain management, transportation, governance… AI is everywhere. And, interestingly, we have only scratched the surface…
However, the widespread appearance of deepfakes amidst global concerns about election interference, along with issues such as bias, copyright infringement, job losses, energy consumption, misinformation, and low-quality content, has exposed the many ways in which AI can be misused. Fears of dangerous capabilities enabling cyberattacks and cyberterrorism are also doing the rounds. There are also concerns about monopolistic practices, as the field is presently dominated by a few high-technology giants such as Google, OpenAI, Microsoft, Nvidia, Baidu, Amazon, Meta, and Apple.
There is, no doubt, a need for Ethical AI, Responsible AI, Sustainable AI, Trustworthy AI, human-centric AI, and, in the larger context, AI for Good. The underlying principle of AI regulation rests on ethical considerations in AI, which have been actively debated since 2016. These include issues such as bias, fairness, accountability, privacy, machine ethics, safety, morality, transparency, trust, autonomy, freedom, and sustainability. Biases could be algorithmic, gender-based, linguistic, or political in nature.
Today, governments are clamoring for the regulation of AI. It is estimated that the annual number of AI-related laws passed across countries jumped from one in 2016 to 37 in 2022. These efforts are complemented by supranational agencies such as the International Telecommunication Union (ITU), a United Nations (UN) agency, and the Organization for Economic Cooperation and Development (OECD), as well as professional societies such as IEEE and ACM. Regulation involves developing policy frameworks and enforcing laws governing AI. Worldwide, AI regulation and policy are, at best, a work in progress. One reason is the immense possibilities of this technology, as well as its ever-evolving landscape of applications, domains, and use-cases. There are also concerns that overly strong regulation may stifle innovation among companies engaged in AI.
The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020 by the OECD, subsuming earlier discussions on the topic by various nations dating back to 2017. India was a founding member of this partnership, along with the EU and the US. Working groups have been formed around themes such as Responsible AI, Data Governance, Future of Work, and Innovation & Commercialization. These efforts have been complemented by UNESCO as well as the G20, which have also deliberated on AI in their summits and working group discussions. AI for Good, an initiative of the ITU in partnership with 40 UN sister agencies, is a global platform that aims to identify practical applications of AI to advance the UN Sustainable Development Goals (SDGs) and scale those solutions for global impact. Simultaneously, various countries are working on their own AI policy frameworks and regulations. Some, like the US, have followed a market-driven approach. On expected lines, China has a state-driven approach, whereas the EU follows a rights-driven approach.
The AI Safety Summit hosted by the United Kingdom in 2023 was a landmark event with respect to the regulation of AI. At this summit, major powers such as the US, UK, China, India, and the EU broadly agreed on a roadmap defined in the Bletchley Declaration, a global agreement among national governments on the regulation and human oversight of AI. Signed by 28 nations, the Bletchley Declaration affirms that AI should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy, and responsible. A subsequent AI summit in Seoul in May 2024 and the AI Action Summit in Paris in February 2025 reaffirmed these concerns. The Paris summit saw around 100 countries participate and culminated in a declaration on the promotion of inclusive and sustainable AI.
Some of the guiding principles of the Bletchley declaration for AI regulation are as follows:
- Protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy, and data protection need to be addressed.
- General-purpose AI models at the frontier of AI, including foundation models, as well as relevant specific narrow AI that could exhibit harmful capabilities, need to be addressed, each in its own way. Frontier AI applications that warrant careful examination include those in biotechnology and cybersecurity.
- International cooperation is necessary for AI regulation, and collaboration is needed between all stakeholders, such as countries, international fora, industry, civil society, and universities.
- Frontier AI risk shall be addressed as follows:
- Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
- Building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognizing our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
Many discussions, white papers, and high-level consultations on AI have taken place in the USA. Some states are now working on legislation, and top AI companies, most of which are based in the USA, have also made voluntary commitments with respect to AI risks and safety. The executive order on AI issued by the previous US administration was scrapped by the present one, whose policy favors deregulating AI, prioritizing innovation over safeguarding against risks.
The EU's AI Act came into force in August 2024 and is arguably the most comprehensive AI regulation in the current context. It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research, and non-professional purposes. As a piece of product regulation, it does not confer rights on individuals but regulates the providers of AI systems and entities using AI in a professional context. The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, and minimal – plus an additional category for general-purpose AI such as ChatGPT.
- Applications posing unacceptable risks are banned, such as those that manipulate human behavior, use biometric identification, or perform social scoring.
- High-risk applications – those concerning the health, safety, and fundamental rights of citizens – must comply with security, transparency, and quality obligations and undergo conformity assessments.
- Limited-risk applications have only transparency obligations. This tier includes AI applications that can generate or manipulate images, sound, or video, such as deepfakes.
- Minimal-risk applications are not regulated, which covers common applications such as those for video games or spam filters.
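The Act's four-tier scheme can be pictured as a simple lookup from risk tier to compliance obligations. The sketch below is an illustrative simplification only: the tier names and obligations come from the summary above, but the example applications and obligation labels are hypothetical shorthand, not legal categories from the Act's text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations + conformity assessment
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # not regulated

# Obligations per tier, paraphrasing the risk-based scheme described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["security", "transparency", "quality", "conformity assessment"],
    RiskTier.LIMITED: ["transparency"],
    RiskTier.MINIMAL: [],
}

# Hypothetical example applications mapped to tiers for illustration.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "medical diagnosis assistant": RiskTier.HIGH,
    "deepfake image generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(application: str) -> list[str]:
    """Look up the illustrative compliance obligations for an example application."""
    return OBLIGATIONS[EXAMPLES[application]]

print(obligations_for("spam filter"))               # minimal risk: no obligations
print(obligations_for("deepfake image generator"))  # limited risk: transparency only
```

The point of the lookup structure is that the Act regulates by tier, not by individual application: once an application is classified, its obligations follow mechanically.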
At present, India does not have a dedicated law for the regulation of AI. However, deliberations and discussions are happening at various levels, reflecting a cautious approach. The focus is on responsible development and deployment rather than strict, one-size-fits-all laws. The Indian government is implementing a series of initiatives, guidelines, and advisories to manage AI-related risks and ensure ethical AI practices. Broadly, the government's approach favors promoting innovation and startups leveraging AI. India is one of the favored destinations for deep-tech startups, and overregulation could stifle these emerging ventures.
The Ministry of Electronics and Information Technology (MeitY) of the Government of India is anchoring various efforts on behalf of the government. Existing laws like the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, play a role in overseeing AI activities. MeitY has issued advisories and guidelines to address specific concerns like deepfakes, privacy risks, and cybersecurity threats, such as the need for companies to obtain permission before deploying certain AI models. Sectoral regulators like the Reserve Bank of India (RBI) and the Telecom Regulatory Authority of India (TRAI) are also developing their own perspectives on AI risks and regulation.
An upcoming Digital India Act by MeitY, currently under discussion and deliberation, will subsume the earlier legislation. It is expected to introduce AI-specific provisions, including those related to deepfakes, data protection, misinformation, cybersecurity, algorithmic accountability, and regulatory oversight of high-risk AI systems. Nevertheless, the Indian government is betting big on AI. The massive IndiaAI Mission, launched in March 2024 with an initial budget of Rs. 10,300 crores ($1.3 billion) over five years, spans strategic initiatives across compute, foundational models, datasets, skilling, and safe and trustworthy AI. To sum up, the Indian government's vision is "AI for All," emphasizing that AI should benefit all sections of society while driving innovation and growth.
No doubt, ‘AI for Good’ is the need of the hour.
About the Author
Dr. Prashant R. Nair, Head, Internal Quality Assurance Cell (IQAC), Amrita Vishwa Vidyapeetham, Coimbatore, has over twenty-five years of academic and administrative experience. He has taught in academic programs in the USA and Europe at the University of California, San Diego; Sofia University, Bulgaria; and the University of Trento, Italy, as an Erasmus fellow. He has written 6 books, 2 edited books, 1 book chapter, and over 60 publications in reputed journals, books, and conferences, and is active as a science writer and columnist. He has served as the Chairman/Vice-Chairman/Coordinator of Amrita University’s IQAC since 2009, coordinating 3 cycles of NAAC accreditation, 2 cycles of NBA accreditation, NIRF rankings, and Swachh campus rankings, and also serving as UGC nodal officer for the university, with achievements including an A++ grade from NAAC and recognition as the top private university in India in both national and international university rankings. A sought-after speaker, he has, by conservative estimates, addressed 150,000 students and trained 15,000+ faculty on technology, innovation, professional bodies, and quality aspects of higher education in India, the USA, Thailand, Russia, Italy, Bulgaria, Hong Kong, Singapore, Nepal, Bhutan, etc. His awards and recognitions include multiple CSI best faculty and academic excellence awards, the IEEE Education Society global chapter achievement award, IETE and IE fellowships, and selection as a Fulbright program reviewer.