Vietnam Enacts Sweeping AI Law, Testing Data Control and Digital Sovereignty

HANOI, Vietnam — Vietnam on Friday put into force a sweeping law to regulate artificial intelligence, making it the first country in Southeast Asia to impose a comprehensive, cross-sector AI regime and testing how far governments can go in tying AI oversight to tighter control over data and digital infrastructure.

The Law on Artificial Intelligence 2025, known as Law No. 134/2025/QH15, took legal effect March 1, less than three months after the National Assembly approved it in Hanoi on Dec. 10 with an overwhelming majority of delegates in favor.

The statute borrows heavily from Europe’s risk-based approach to AI, but is being rolled out in a very different political and regulatory environment. It combines rules on algorithmic transparency, safety testing and content labelling with an explicit push to secure what officials describe as Vietnam’s “national sovereignty regarding AI.”

A European-style framework on a compressed timetable

The new law applies to the research, development, provision, deployment and use of AI systems by organizations and individuals in Vietnam. It also covers foreign entities whose AI systems are used in Vietnam, with an exemption for defense, security and cryptography systems governed under separate laws.

Like the European Union’s AI Act, Vietnam’s statute groups systems into tiers of risk. The final text defines three categories — high, medium and low — and separately lists types of AI-related activities that are outright prohibited.

  • High-risk systems are those that could cause “significant harm” to life, health, legitimate rights and interests, national interests or security.
  • Medium-risk systems are defined as applications that may confuse, influence or manipulate users who cannot recognize that they are interacting with AI or with AI-generated content.
  • Low-risk systems include all other applications.

Providers must classify their systems before deployment. Those offering medium- and high-risk AI must notify the Ministry of Science and Technology via a new national AI portal before the systems go into operation, and maintain supporting technical documentation. Regulators say oversight will scale with risk, from periodic inspections of high-risk systems to sample checks and incident-based supervision for lower-risk tools.

The timing stands out. While the EU plans a phased rollout of its AI Act over several years, Vietnam allowed under three months between passage and entry into force. Several law firms and industry groups have warned that secondary regulations and technical standards are still being drafted, leaving companies uncertain about what compliance will require in practice.

What is banned — and what must be labelled

Although the law does not formally use the EU’s term “unacceptable risk,” it sets out a list of AI-related activities that are prohibited in all cases.

These include using AI systems to commit acts already banned by other laws; deploying AI-generated images, video or audio of real people or events in ways that are intended to systematically deceive or manipulate people and cause serious harm; and exploiting the vulnerabilities of children, the elderly, people with disabilities, ethnic minorities or others with limited capacity.

The statute also bars creating or spreading fabricated content that can seriously damage national security, social order or safety, and prohibits misusing data in violation of Vietnam’s laws on personal data protection, cybersecurity and intellectual property. Developers and deployers are forbidden from disabling human oversight mechanisms or concealing required information and labels.

To address deepfakes and misinformation, the law requires that AI-generated audio, images and videos be marked in a machine-readable format. When AI is used to simulate a person’s appearance or voice, or to recreate real events, deployers must clearly label the content so viewers can distinguish it from genuine material.

“For images and videos created or edited using artificial intelligence that can be confused with real ones, organizations and individuals must provide a clear notification that they are AI-generated,” the law states.
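The statute mandates machine-readable marking but does not itself prescribe a format; the technical standards are expected to come in implementing decrees. As a purely hypothetical sketch of what a machine-readable disclosure could look like, the function name and JSON fields below are illustrative assumptions, loosely inspired by existing provenance schemes such as C2PA, not anything the law specifies:

```python
import json
from datetime import datetime, timezone

def make_ai_content_label(tool: str, model: str) -> str:
    """Build a hypothetical machine-readable provenance label for
    AI-generated media. The schema here is illustrative only; the
    law does not define one."""
    record = {
        "ai_generated": True,  # the core disclosure the law requires
        "generator": {"tool": tool, "model": model},
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

label = make_ai_content_label("demo-pipeline", "example-model-v1")
print(label)
```

In practice such a record would be embedded in the file's metadata or carried alongside it, in whatever format the forthcoming decrees adopt.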

Separate provisions cover AI systems that interact directly with people, requiring that users be able to recognize they are engaging with a machine.

These rules build on the Law on Digital Technology Industry, which took effect Jan. 1 and already requires identification markings for certain AI-generated digital products and disclosure when users are dealing with AI systems rather than humans.

Digital sovereignty through data and infrastructure

Vietnam’s AI statute does not introduce sweeping new data-localization mandates on its own. But it sits on top of a dense set of existing rules that already make the country one of the more restrictive data environments in the region.

The 2018 Cybersecurity Law requires certain online service providers to store data on Vietnamese users in the country and maintain a local presence. A 2023 decree on personal data protection tightens consent requirements and mandates impact assessments for cross-border data transfers. A separate Law on Data, in force since mid-2025, classifies some datasets as “important” or “core,” with risk-based controls on exporting them based on national defense and security considerations.

In parallel, a national data strategy adopted by the government emphasizes building out domestic data centers, national and regional data hubs, and shared open-data platforms as a way to “ensure national digital sovereignty over Vietnam’s data.”

The AI law is explicitly linked to this agenda. It calls for the development of national AI infrastructure, including supercomputers and GPU-based cloud systems, a national database on AI systems and a state-backed National AI Development Fund to support research, startups and talent programs. The Ministry of Science and Technology will run a single-window portal and manage a centralized registry of AI systems operating in the country.

Draft implementing decrees reviewed by analysts have suggested that “important” AI applications, such as those in critical infrastructure or public services, should prioritize state-invested or domestic infrastructure. Later revisions appear to soften that into a preference rather than a strict mandate, but foreign providers of cloud services and foundation models are still expected to detail how they would reduce reliance on overseas infrastructure for sensitive uses.

Industry urges delay as companies scramble

Global tech firms and local startups are now trying to work out how to comply.

The Computer & Communications Industry Association, which represents major platforms and cloud providers, wrote to the Vietnamese government on Feb. 27 urging it to “recalibrate” the implementation timeline. The group called for an 18-month moratorium on certification and enforcement to give time for detailed regulations to emerge and for conformity-assessment bodies to be accredited.

“Vietnam’s AI Law is highly prescriptive in nature,” the association said. It warned that enforcing requirements before secondary rules and technical standards are finalized could “create uncertainty for developers and deployers and risk slowing down digital innovation.”

Legal practitioners in Hanoi and Ho Chi Minh City have issued similar cautions. Advisory notes circulated by regional firms urge companies to inventory where they use AI in Vietnam, map those systems to the three risk tiers and flag overlaps with personal data, financial services, health care or other heavily regulated sectors.

While the law contains provisions for regulatory sandboxes and emphasizes support for small and medium-sized enterprises, some lawyers argue that compliance costs could fall hardest on smaller players without in-house legal teams or local infrastructure.

Citizens’ rights and state control

Vietnamese officials and state media have framed the AI law as a way to protect citizens from fraud, disinformation and opaque automated decision-making, while ensuring that people can benefit from AI-enabled services.

Government publications highlight new rights for users to know when they are interacting with AI systems, to see clear labels on AI-generated images and videos, and to access AI-driven tools in education and social services. The explicit ban on systems that exploit vulnerable groups has been welcomed by some legal scholars as a rights-protective element.

At the same time, rights advocates say the combination of AI rules with existing cybersecurity, state secrets and data laws gives authorities broad discretion over online content and digital systems. Provisions against “fabricated content” that harms national security or social order are not clearly defined in the AI statute and could, critics argue, be applied to a wide range of speech, including politically sensitive material.

The creation of a national database of AI systems — while intended to aid transparency and oversight — also raises questions about how much visibility the state will have into the tools used by companies, universities and civil society groups.

A regional test case

Vietnam’s move puts it ahead of regional peers such as Singapore and Indonesia, which so far have relied more on non-binding guidelines and sectoral rules than on dedicated AI legislation. Policymakers and businesses across Asia are watching closely to see whether Hanoi’s approach becomes a model for other middle-income countries seeking to regulate AI while asserting control over data and infrastructure.

The outcome may hinge less on the law’s broad principles than on how the government writes and enforces the detailed decrees that will follow. Those will determine which applications are classified as high risk, how demanding conformity assessments become, what technical standards apply to watermarking and labelling, and how strictly authorities interpret the law’s security-related clauses.

For now, Vietnam’s AI ecosystem, from multinational cloud providers to local startups building chatbots and vision systems, is confronting a new reality. Every system must be classified, many must be reported to the state, and any AI-generated image or video that could be mistaken for reality will need a label.

Whether that framework ultimately boosts public trust and responsible innovation, or constrains development and digital trade, will become clearer in the months and years after the law’s rapid debut.

Tags: #vietnam, #artificialintelligence, #regulation, #datasovereignty, #deepfakes