AI could threaten humanity in 2 years, warns UK AI task force adviser


The U.K. prime minister’s AI task force adviser said large AI models would need regulation and control in the next two years to curb major existential risks.

The artificial intelligence (AI) task force adviser to the prime minister of the United Kingdom said humans have roughly two years to control and regulate AI before it becomes too powerful.

In an interview with a local U.K. media outlet, Matt Clifford, who also serves as the chair of the government’s Advanced Research and Invention Agency (ARIA), stressed that current systems are getting “more and more capable at an ever-increasing rate.”

He went on to say that if officials don’t consider safety and regulation now, the systems will become “very powerful” within two years.

“We’ve got two years to get in place a framework that makes both controlling and regulating these very large models much more possible than it is today.”

Clifford warned that there are “a lot of different types of risks” when it comes to AI, both in the near term and long term, which he called “pretty scary.”

The interview came following a recent open letter published by the Center for AI Safety, signed by 350 AI experts, including OpenAI CEO Sam Altman, that said AI should be treated as an existential threat similar to that posed by nuclear weapons and pandemics.

“They’re talking about what happens once we effectively create a new species, sort of an intelligence that’s greater than humans.”

The AI task force adviser said that these threats posed by AI could be “very dangerous” and could “kill many humans, not all humans, simply from where we’d expect models to be in two years’ time.”

Related: AI-related crypto returns rose up to 41% after ChatGPT launched: Study

According to Clifford, regulators and developers’ primary focus should be understanding how to control the models and then implementing regulations on a global scale.

For now, he said his greatest fear is the lack of understanding of why AI models behave the way they do.

“The people who are building the most capable systems freely admit that they don’t understand exactly how [AI systems] exhibit the behaviors that they do.”

Clifford highlighted that many of the leaders of organizations building AI also agree that powerful AI models must undergo some kind of audit and evaluation process before deployment. 

Currently, regulators around the world are scrambling to understand the technology and its ramifications while trying to create rules that protect users without stifling innovation.

On June 5, officials in the European Union went so far as to suggest mandating that all AI-generated content be labeled as such in order to prevent disinformation.

In the U.K., a minister in the opposition party echoed the sentiments of the CAIS letter, saying the technology should be regulated in the same way as medicine and nuclear power.

Magazine: AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more

