G7 countries to launch AI code of conduct: Report

The Group of Seven (G7) countries will agree on a voluntary AI code of conduct that companies developing AI can reference to mitigate the technology’s risks while capturing its benefits.

The Group of Seven (G7) industrial countries are scheduled to agree upon an artificial intelligence (AI) code of conduct for developers on Oct. 30, according to a report by Reuters. 

According to the report, the code has 11 points that aim to promote “safe, secure, and trustworthy AI worldwide” and help “seize” the benefits of AI while still addressing the risks it poses.

The plan was drafted by G7 leaders in September. It says it offers voluntary guidance on actions for “organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.”

Additionally, it suggests that companies should publish reports on the capabilities, limitations, use and misuse of the systems they are building. Robust security controls for those systems are also recommended.

The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, with the European Union also participating.

Cointelegraph has reached out to the G7 for confirmation of the development and additional information.

Related: New data poisoning tool would punish AI for scraping art without permission

This year’s G7 summit took place in Hiroshima, Japan, with a meeting of the participating Digital and Tech Ministers held on April 29 and 30.

Topics covered in the meeting included emerging technologies, digital infrastructure and AI, with an agenda item specifically dedicated to responsible AI and global AI governance.

The G7’s AI code of conduct comes as governments worldwide try to navigate the emergence of AI, balancing its useful capabilities against its risks. The EU was among the first to establish guidelines with its landmark EU AI Act, whose first draft was passed in June.

On Oct. 26, the United Nations established a 39-member advisory committee to tackle issues related to the global regulation of AI.

The Chinese government also launched its own AI regulations, which took effect in August.

Within the industry, OpenAI, the developer of the popular AI chatbot ChatGPT, announced plans to create a “preparedness” team that will assess a range of AI-related risks.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews
