How Governments are Regulating Artificial Intelligence

One of the most transformative changes in recent years has been the arrival of Artificial Intelligence (AI) in many sectors, from healthcare to finance. This rapid change brings new efficiencies as well as ethical dilemmas. As AI makes its way into daily life, policymakers around the globe are establishing rules for how best to navigate these shifts.

Governments need to foster innovation while mitigating risks such as algorithmic bias, data privacy concerns, misinformation, and job loss. Countries prioritise different aspects of AI regulation: while the European Union (EU) has strict rules, the United States focuses on innovation. This article examines how leading countries are approaching AI governance and the major global trends shaping it.

United States: A Dynamic Approach to AI Governance

Executive Orders Shaping AI Policy

The United States approaches AI regulation flexibly, allowing policies to shift with the technology. Rather than a single comprehensive statute, federal oversight rests primarily on executive orders and agency initiatives.

In October 2023, President Joe Biden signed Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order was intended to promote competition in the AI industry, protect civil liberties and national security, and keep America competitive. It also mandated that federal agencies appoint “chief artificial intelligence officers” to oversee AI use in their departments.

In January 2025, a policy shift occurred when President Donald Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The order encourages innovation by relaxing regulatory constraints while pursuing AI for national security and economic growth. Unlike Biden’s order, it favours an open-market approach with less government oversight.

National Security and AI Oversight

AI is also vital for U.S. national security. In October 2024, a National Security Memorandum was released to strengthen U.S. leadership in AI. It focused on developing safe and ethical AI for cybersecurity, intelligence, and military use, emphasising its strategic importance.

Although the U.S. has no comprehensive AI law comparable to the EU’s AI Act, many federal and state initiatives help shape AI governance. For example, the Federal Trade Commission (FTC) and the Department of Commerce are involved in investigating AI-related fraud and protecting consumer rights.

European Union: Pioneering Comprehensive AI Legislation

The EU AI Act: A Landmark in AI Governance

The European Union leads in AI regulation with its Artificial Intelligence Act (AI Act), one of the most comprehensive legislative frameworks. The law classifies AI systems into four risk tiers, summarised below and sketched in the short example that follows the list:

  • Unacceptable risk: Harmful AI applications, like social scoring systems, are banned.
  • High risk: AI in critical areas like healthcare and law enforcement faces strict compliance.
  • Limited risk: Tools like chatbots must disclose their AI nature but have fewer restrictions.
  • Minimal risk: Applications like spam filters are primarily unregulated.
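To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how an organisation might map its AI use cases onto these four tiers. The tier names follow the Act, but the example systems and the helper function are hypothetical and greatly simplified, not part of the law itself.

```python
# Illustrative only: a toy mapping of AI use cases onto the EU AI Act's
# four risk tiers. Tier names follow the Act; the example systems and
# this helper are hypothetical.

RISK_TIERS = {
    "unacceptable": "banned outright (e.g. social scoring systems)",
    "high": "strict compliance: risk management, documentation, human oversight",
    "limited": "transparency duties (e.g. disclose that users are talking to a chatbot)",
    "minimal": "largely unregulated (e.g. spam filters)",
}

# Hypothetical inventory of systems an organisation might operate.
EXAMPLE_SYSTEMS = {
    "social_scoring_engine": "unacceptable",
    "medical_triage_model": "high",
    "customer_support_chatbot": "limited",
    "email_spam_filter": "minimal",
}


def obligations_for(system_name: str) -> str:
    """Return the (simplified) regulatory treatment for a named system."""
    tier = EXAMPLE_SYSTEMS.get(system_name)
    if tier is None:
        raise KeyError(f"No risk assessment recorded for {system_name!r}")
    return f"{system_name}: {tier} risk -> {RISK_TIERS[tier]}"


if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(obligations_for(name))
```

In practice, of course, assigning a system to a tier is a legal judgement rather than a dictionary lookup; the sketch only shows how the obligations scale with the assessed risk.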

Spain’s Crackdown on AI-Generated Misinformation

Individual countries are adding extra safeguards on top of the EU’s AI regulations. In March 2025, Spain passed a law imposing heavy fines on companies that fail to label AI-generated content. The law aims to tackle deepfakes and AI-driven misinformation, which threaten democracy and public trust. Companies not disclosing AI-generated media could face fines of up to €35 million or 7% of their annual global revenue. The law also bans subliminal AI messages aimed at vulnerable groups and enforces strict rules on biometric data use.
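For a sense of scale, here is a small illustrative calculation of that penalty ceiling, assuming (as in comparable EU regimes such as the GDPR) that the applicable cap is whichever of the two figures is higher; the company revenue used is invented for the example.

```python
# Illustrative only: ceiling on fines for undisclosed AI-generated content,
# assuming the cap is the higher of a fixed amount and a share of annual
# global revenue (the revenue figure below is invented for the example).

FIXED_CAP_EUR = 35_000_000   # €35 million
REVENUE_SHARE = 0.07         # 7% of annual global revenue


def max_fine(annual_global_revenue_eur: float) -> float:
    """Return the maximum possible fine for a given annual global revenue."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)


# A hypothetical company with €2 billion in annual global revenue:
print(f"Maximum fine: €{max_fine(2_000_000_000):,.0f}")  # €140,000,000
```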

Spain’s new Artificial Intelligence Supervision Agency (AESIA) will oversee these regulations, ensuring compliance and investigating violations.

United Kingdom: A Balanced Approach to AI Regulation

AI in Government Operations

The UK is taking a balanced approach to AI governance, integrating AI into government while exploring regulations. In early 2025, Peter Kyle, the UK’s Technology Secretary, shared his use of OpenAI’s ChatGPT for research on policy and media. While this shows the UK government’s openness to AI, critics warn that overreliance on AI could introduce biases and ethical issues.

Proposed AI Copyright Reforms

The UK government has also suggested controversial changes to AI copyright laws. These changes would allow tech companies to use copyrighted works without explicit permission unless creators opt out. The proposal has faced backlash from artists, writers, and filmmakers who fear it may harm the UK’s £125 billion creative industry. Legal experts warn that it might violate international copyright treaties, such as the Berne Convention, which protects authors’ rights worldwide.

China: Accelerating AI Development Amid Regulatory Constraints

AI Innovation and Global Competition

China is rapidly advancing its AI capabilities, investing heavily in research and infrastructure. The country has launched AI models like DeepSeek R1, a cost-effective competitor to Western systems. However, these advances raise concerns about China’s AI governance standards and the risks of uncontrolled AI deployment.

AI Censorship and Ethical Challenges

Unlike Western democracies, China’s AI regulations are closely linked to state control. The government monitors AI content to suppress sensitive political material and enforce censorship. In 2023, China mandated security assessments for AI models before deployment, ensuring compliance with data protection and stability guidelines.

India: Fostering AI Development Through National Initiatives

INDIAai: A Hub for AI Innovation

India has taken a development-focused approach to AI, balancing innovation and ethics. In May 2020, the government launched INDIAai, the National AI Portal, as a central hub for AI resources, policy updates, and educational content. The platform offers:

  • AI news and expert insights
  • Investment opportunities for AI startups
  • AI training programs and courses

AI for Social Good

India uses AI for public sector projects, like improving healthcare access and boosting agriculture. The government has started AI initiatives to monitor crop health, predict weather, and enhance disaster response.

Global Collaborations: Towards a Unified AI Governance Framework

While countries have unique AI policies, there’s a growing focus on global AI governance. Organisations like the United Nations (UN) and the Organisation for Economic Co-operation and Development (OECD) are working to create international AI standards that promote ethical use and interoperability.

For example, the Global Partnership on AI (GPAI) includes the EU, the U.S., the UK, Canada, and India. It aims to align AI governance globally and focus on ethics, fairness, and transparency.

The Future of AI Regulation

As AI reshapes society, governments must balance innovation and regulation. The diversity of approaches in the U.S., EU, UK, China, and India shows the complexities involved in regulating AI. While some countries favour minimal constraints to encourage growth, others apply heavy regulation to mitigate risks.

Moving forward, international collaboration will play a vital role in ensuring that AI technologies align with humanity’s needs while mitigating ethical and security challenges. As AI capabilities improve, governments must also adapt their regulations to address new challenges while maintaining public trust in AI systems.

What are your thoughts on AI regulation? Are stricter rules necessary, or should governments take a more flexible approach? Let us know what you think in the comment section!
