US lawmakers demand clarity on OpenAI safety practices in joint letter


A group of US Senators sent a detailed letter to OpenAI CEO and co-founder Sam Altman seeking clarity on the company’s safety measures and employment practices.

The Washington Post first reported on the joint letter on July 23.

Senators Brian Schatz, Ben Ray Luján, Peter Welch, Mark R. Warner, and Angus S. King, Jr. signed the joint letter, which sets an Aug. 13 deadline for the firm to provide a comprehensive response to the concerns it raises.

According to the July 22 letter, recent reports of potential issues at the company prompted the Senators’ inquiry. The letter emphasized the need for transparency in the deployment and governance of artificial intelligence (AI) systems, citing concerns over national security and public trust.

Lawmaker inquiry

The Senators have requested detailed information about several concerns, including confirmation on whether OpenAI will honor its previously pledged commitment to allocate 20% of its computing resources to AI safety research. The letter emphasized that fulfilling this commitment is crucial for the responsible development of AI technologies.

Additionally, the letter inquired about OpenAI’s enforcement of non-disparagement agreements and other contractual provisions that could potentially deter employees from raising safety concerns. The lawmakers stressed the importance of protecting whistleblowers and ensuring that employees can voice their concerns without fear of retaliation.

They also sought detailed information on the cybersecurity protocols OpenAI has in place to protect its AI models and intellectual property from malicious actors and foreign adversaries, emphasizing the need for robust protections against such threats. In addition, they asked OpenAI to describe its non-retaliation policies and whistleblower reporting channels.

In their inquiry, the Senators asked whether OpenAI allows independent experts to test and assess the safety and security of its AI systems before they are released. They emphasized the importance of independent evaluations in ensuring the integrity and reliability of AI technologies.

The Senators also asked if OpenAI plans to conduct and publish retrospective impact assessments of its already-deployed models to ensure public accountability. They highlighted the need for transparency in evaluating the real-world effects of AI systems.

Critical role of AI

The letter highlighted AI’s critical role in the nation’s economic and geopolitical standing, noting that safe and secure AI is essential for maintaining competitiveness and protecting critical infrastructure.

The Senators stressed the importance of OpenAI’s voluntary commitments made to the Biden-Harris administration and urged the company to provide documentation on how it plans to meet these commitments.

The letter stated:

“Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies.”

The letter marks a significant step in ensuring that AI development proceeds with the highest standards of safety, security, and public accountability. This action reflects the growing legislative scrutiny on AI technologies and their societal impacts.

The five lawmakers emphasized the urgency of addressing these issues, given the widespread use of AI technologies and their potential consequences for national security and public trust. They called on OpenAI to demonstrate its commitment to responsible AI development by providing thorough and transparent responses to their questions.

The Senators referenced several sources and previous reports that have detailed OpenAI’s challenges and commitments, providing a comprehensive backdrop for their concerns. These sources include OpenAI’s approach to frontier risk and the Biden-Harris administration’s voluntary safety and security commitments.

