Biden-Harris administration secures commitments from more Artificial Intelligence companies to manage risks

With Artificial Intelligence becoming common in the workplace, the White House has secured voluntary commitments from prominent AI businesses to promote safe, secure, and trustworthy AI.
Photo submitted

By Stacy M. Brown
NNPA Newswire Senior
National Correspondent
@StacyBrownMedia

President Joe Biden and Vice President Kamala Harris announced that they have taken decisive action to manage the risks of Artificial Intelligence and maximize its benefits.

The White House said it is working on regulations for AI and partnering with top AI companies to improve responsible AI development. 

For example, in July, the Biden-Harris administration secured voluntary commitments from seven prominent AI companies to help ensure safe, secure, and trustworthy AI. Building on that effort, U.S. Secretary of Commerce Gina Raimondo, White House Chief of Staff Jeff Zients, and senior administration officials met at the White House to announce that eight more companies have pledged to develop AI technology that is safe, secure, and trustworthy: Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability.

The Biden-Harris administration said the commitments are part of its holistic approach to AI’s promise and risks, one that pairs industry action with government action. The administration said it would develop an Executive Order and pursue bipartisan legislation to promote responsible AI development in America.

The firms’ immediate promises emphasize three fundamental AI principles—safety, security, and trust—and are a crucial step toward responsible AI. As innovation accelerates, the Biden-Harris administration has pledged to act decisively to defend Americans’ rights and safety.

The pledges made by the AI companies include:

  • Ensuring products are safe before they are publicly released.
  • Building security-first systems.
  • Earning the public’s trust.

The White House noted that independent experts would test the companies’ systems for biosecurity and cybersecurity risks, as well as for their broader societal impacts.

The AI companies would also invest in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights.

“The most important aspect of an AI system is its model weights, which organizations agree must be released only when necessary and security threats are assessed,” administration officials wrote in a release. “The businesses (must) promise to help third parties find and report AI system problems. After an AI system is released, a robust reporting process can find and fix any remaining faults.”

Further, the firms will create watermarking systems to alert viewers to AI-generated content.

“This boosts AI creativity and production while reducing fraud and dishonesty,” administration officials said.

They said the companies would disclose their AI systems’ capabilities, limitations, and appropriate and inappropriate uses. According to the White House, this disclosure would cover both security risks and societal risks, such as fairness and bias.

Additionally, the companies would prioritize research on the societal risks of AI systems, including avoiding harmful bias and discrimination and protecting privacy. The track record of AI shows how significant and widespread these risks can be, and the companies commit to deploying AI that mitigates them, officials stated.

The firms also promise to develop and deploy robust AI systems that help solve society’s biggest problems. Administration officials asserted that, if managed well, AI can advance prosperity, equality, and security by helping to prevent cancer, mitigate climate change, and more.

“Today’s announcement is part of the Biden-Harris administration’s commitment to securely and responsibly develop AI, defend Americans’ rights and safety, and prevent harm and prejudice,” officials said.