It’s Up to Industry to Regulate AI: The White House’s AI Action Plan is long on ambition, but short on guardrails

On July 23, the White House released President Trump’s AI Action Plan, striking a welcome tone of both optimism and urgency. The optimism stems from the administration’s view of AI’s potential to ignite, in its words, a new industrial revolution, a new information revolution, and a cultural renaissance. The urgency arises from global competition: American companies are racing to make the United States the world’s leading supplier of AI-powered technology.

While these sentiments will undoubtedly spur innovation, they also introduce risks. As we’ll see, some of the Plan’s prescriptions could hinder AI developers and adopters from ensuring their models’ trustworthiness. Consequently, the burden of governance will fall on industry. More than ever, organizations working on or with AI will need to know and understand the data their models are trained on.

Making America the world’s AI leader…

The administration aims to accelerate AI development and adoption by removing regulatory obstacles, investing in domestic AI infrastructure and workforce upskilling, and promoting AI adoption across the public and private sectors. 

Start with regulatory reform. The administration wants to streamline permitting for new datacenters and semiconductor manufacturing facilities. It also aims to support strong, stable power supplies for those projects by modernizing the nation’s complex energy grid. 

As for infrastructure, the Plan envisions American dominance across the entire AI technology stack, including “hardware, models, software, applications, and standards.” By increasing science funding and onshoring AI development at every stage, the administration hopes to establish America as the world’s leading AI technology exporter.

The administration’s focus on studying AI’s labor market impact is also welcome. The Plan directs the Department of Labor to gather data on AI automation and its effects on employment, while also providing funding for retraining and upskilling workers to compete in an AI-powered workforce.

To promote AI adoption, the Plan directs government agencies, particularly the Department of Defense, to integrate AI into their operations wherever feasible. To ensure cybersecurity remains a priority for agency leaders, the Plan requires AI developers to incorporate secure-by-design principles.

… with one hand tied behind our back?

While there’s much to cheer for in the Plan, it also contains some potential pitfalls. For instance, the White House has proposed reviewing FTC investigations “to ensure that they do not advance theories of liability that unduly burden AI innovation.” This proposal will likely concern privacy and consumer protection advocates, as a hamstrung FTC will struggle to prevent AI systems from misusing Americans’ sensitive personal data.

In other respects, the Plan appears to contradict itself or other aspects of President Trump’s broader policy agenda. For example, while the Plan calls for more government investment in AI research, the administration has already cut federal science funding by 34 percent, including in areas directly impacting America’s AI competitiveness such as math and physics ($289 million), engineering ($127 million), computer science ($85 million), and technology ($18 million).

The Plan aims to spur innovation by encouraging the distribution of open-source and open-weight AI models. While this could certainly increase the pace of innovation and protect end user privacy, it also presents risks. Depending on their power, these models could be used to generate instructions for building chemical or biological weapons, automate the generation of zero-day exploits, or create public disinformation campaigns, all of which would undermine the secure-by-design approach the Plan otherwise promotes.

The Plan strives to maintain a strategic advantage over China by ensuring America’s domestic capacity to develop the full AI computing stack with American products and infrastructure. However, if access to the semiconductor chips necessary to build AI systems is a major strategic advantage, then the administration relinquished part of it when it permitted chip maker Nvidia to resume sales of advanced chips to Chinese buyers.

The Plan also seeks to counter China’s influence in multilateral treaty organizations currently defining AI standards and best practices. But recent budget cuts and layoffs at the State Department have dismantled the Bureau of Cyberspace and Digital Policy and the International Cyberspace Security division, two offices that had been ideally positioned to achieve these objectives.

Finally, the Plan requires AI systems to “be free from ideological bias and be designed to pursue objective truth rather than social engineering agendas.” It sounds fine in theory, but what does it mean in practice?

In 2015, Google faced a public relations crisis when its AI-powered photo-tagging app mislabeled Black Americans as “gorillas.” Similar technology in digital cameras mislabeled Asian faces as “blinking.” These and other instances highlighted the critical need for more representative training datasets.

Similarly, when Google developed the word2vec method of word embeddings, a key technique underpinning large language models, users were amazed at how the technology allowed a system to learn analogies such as “man is to woman as king is to _____,” with the system correctly returning the answer “queen.” But the same system also generated embarrassingly anachronistic analogies like “man is to woman as computer programmer is to homemaker.”
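The analogy trick rests on simple vector arithmetic over word embeddings: subtract one word’s vector from another’s, add a third, and find the nearest word to the result. Below is a minimal sketch with hand-made toy vectors (not real word2vec weights, which are learned from large corpora); the two dimensions are labeled “gender” and “royalty” purely for illustration.

```python
import numpy as np

# Toy 2-D embeddings: axis 0 loosely encodes gender, axis 1 royalty.
# Illustrative values only; real word2vec vectors are learned, not hand-set.
vocab = {
    "man":   np.array([ 1.0, 0.0]),
    "woman": np.array([-1.0, 0.0]),
    "king":  np.array([ 1.0, 1.0]),
    "queen": np.array([-1.0, 1.0]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ___' via vector arithmetic."""
    target = vocab[b] - vocab[a] + vocab[c]
    # Exclude the three input words, then return the nearest remaining word.
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "woman", "king"))  # → queen
```

Because the vectors are learned from human-written text, the same arithmetic that recovers “queen” will also faithfully reproduce whatever stereotyped associations exist in the training corpus, which is exactly the bias problem the article describes.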

For AI developers, the message was clear: successfully aligning AI with our values, and not simply regurgitating the biases of the past, would require the deliberate curation of training datasets. But how much and what kind of curation is appropriate, and when does it cross over into “ideological bias” or “social engineering”? 

It’s not just an academic question. The Plan envisions America as the world’s largest exporter of AI systems, yet as Raul Brens Jr. of the GeoTech Center points out, it may be challenging to market American-made AI to countries “across Europe and the Indo-Pacific that have invested heavily in building their own AI rules around transparency, climate action, and digital equity.”

Let a hundred models bloom

While the Plan certainly incentivizes innovation, industry leaders must translate this innovation into widespread adoption by aligning on appropriate standards for safe and trustworthy AI.

Cyera is committed to this goal. Its AI-native data security platform is already helping many Global 2000 companies securely enable AI by discovering and classifying sensitive data in AI training sets, identifying AI applications and agents, and preventing them from ingesting or sharing PII, intellectual property, or other sensitive data.

If you’d like to see how Cyera can help your organization take charge of its data to harness the power of AI, request a demo today at Cyera.com.
