Op-ed: A rational take on a SkyNet ‘doomsday’ scenario if OpenAI has moved closer to AGI

Hollywood blockbusters routinely depict rogue AIs turning against humanity. However, the real-world narrative about the risks artificial intelligence poses is far less sensational but significantly more important. The fear of an all-knowing AI breaking the unbreakable and declaring war on humanity makes for great cinema, but it obscures the tangible risks much closer to home.

I’ve previously talked about how humans will do more harm with AI before it ever reaches sentience. Here, however, I want to debunk a few common myths about the risks of AGI through a similar lens.

The myth of AI breaking strong encryption

Let’s begin by debunking a popular Hollywood trope: the idea that advanced AI will break strong encryption and, in doing so, gain the upper hand over humanity.

The truth is that AI’s ability to break strong encryption remains extremely limited. While AI has demonstrated potential for recognizing patterns within encrypted data, suggesting that some encryption schemes could be vulnerable, this is far from the apocalyptic scenario often portrayed. Recent breakthroughs, such as the attack on the post-quantum encryption algorithm CRYSTALS-Kyber, were achieved by combining AI-assisted recursive learning with side-channel attacks against a specific implementation, not through AI’s standalone ability to defeat the underlying mathematics.
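To see why brute force is off the table no matter how clever the attacker, consider a back-of-envelope calculation. The sketch below assumes a deliberately generous hypothetical: an attacker testing 10^18 keys per second, roughly an entire exascale supercomputer doing nothing but key checks.

```python
# Illustrative arithmetic only: expected time to brute-force AES-256.
# The attacker's speed is an assumption (roughly exascale: 10**18 guesses/s).

KEYSPACE = 2 ** 256                  # possible AES-256 keys
GUESSES_PER_SECOND = 10 ** 18        # assumed, wildly optimistic attacker
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# On average, an exhaustive search finds the key after half the keyspace.
expected_years = (KEYSPACE / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"Expected search time: {expected_years:.1e} years")
# -> ~1.8e51 years, roughly 41 orders of magnitude older than the universe.
```

Side-channel and pattern-recognition attacks matter precisely because they bypass this arithmetic by targeting implementations rather than the mathematics, which is where AI genuinely adds leverage for attackers.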

The actual threat AI poses in cybersecurity is an extension of existing challenges. AI can be, and is, used to enhance cyberattacks such as spear phishing, making these methods more sophisticated and allowing hackers to infiltrate networks more effectively. The concern is not an autonomous AI overlord but human misuse of AI in security breaches. Moreover, once compromised, AI systems can learn and adapt to pursue malicious objectives autonomously, making them harder to detect and counter.

AI escaping into the internet to become a digital fugitive

The idea that we could simply turn off a rogue AI is not as stupid as it sounds.

The massive hardware requirements of a highly advanced AI model mean it cannot exist independently of human oversight and control. Running AI systems such as GPT-4 requires extraordinary computing power, energy, maintenance, and ongoing development. If we were to achieve AGI today, there would be no feasible way for that AI to ‘escape’ onto the internet as we so often see in movies. It would somehow need to gain access to equivalent server farms and run undetected, which is simply not plausible. This fact alone significantly reduces the risk of an AI developing autonomy to the point of overpowering human control.
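A rough sizing exercise makes the point concrete. OpenAI has not published GPT-4’s architecture, so the figures below are assumptions for illustration: a model on the order of one trillion parameters stored as 16-bit weights.

```python
# Illustrative sizing only: what merely *holding* a frontier-scale model takes.
# Parameter count and hardware figures are assumptions; GPT-4's are not public.

PARAMS = 1_000_000_000_000     # assumed: ~1 trillion parameters
BYTES_PER_PARAM = 2            # 16-bit (fp16/bf16) weights
GPU_MEMORY_GB = 80             # one datacenter-class accelerator (80 GB)

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
gpus_needed = weights_gb / GPU_MEMORY_GB   # weights only; ignores activations

print(f"Weights alone: {weights_gb:,.0f} GB, i.e. at least {gpus_needed:.0f} GPUs")
# -> ~2,000 GB of weights, 25+ high-end GPUs before serving a single request,
#    plus the power, cooling, and networking that only a datacenter provides.
```

Hardware on that scale has owners, power bills, and physical addresses; it is not something a rogue process could quietly commandeer from a botnet of laptops.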

Moreover, there is a technological chasm between current AI models like ChatGPT and the sci-fi depictions of AI seen in films like “The Terminator.” While militaries worldwide already deploy advanced autonomous aerial drones, we are far from having armies of robots capable of advanced warfare. In fact, we have only barely mastered robots that can climb stairs.

Those who push the SkyNet doomsday narrative fail to recognize the technological leap required and may inadvertently be ceding ground to opponents of regulation, who argue for unchecked AI growth under the guise of innovation. Simply because we don’t have doomsday robots does not mean there is no risk; it merely means the threat is human-made and, thus, even more real. This misunderstanding risks overshadowing the nuanced discussion on the necessity of oversight in AI development.

Generational perspective of AI, commercialization, and climate change

I see the most imminent risk as the over-commercialization of AI under the banner of ‘progress.’ While I do not echo the calls for a halt to AI development once backed by the likes of Elon Musk (before he launched xAI), I do believe in stricter oversight of frontier AI commercialization. OpenAI’s decision to exclude AGI from its deal with Microsoft is an excellent example of the complexity surrounding the commercial use of AI. While commercial interests may drive rapid advancement and accessibility of AI technologies, they can also lead to short-term gains being prioritized over long-term safety and ethical considerations. There is a delicate balance between fostering innovation and ensuring responsible development that we may not yet have figured out.

Building on this, just as ‘Boomers’ and ‘GenX’ have been criticized for their apparent apathy towards climate change, given they may not live to see its most devastating effects, there could be a similar trend in AI development. The rush to advance AI technology, often without adequate consideration of long-term implications, mirrors this generational short-sightedness. The decisions we make today will have lasting impacts, whether we’re here to witness them or not.

This generational perspective becomes even more pertinent given the urgency of the situation: the rush to advance AI is not merely a matter of academic debate but one with real-world consequences. The decisions we make today in AI development, much like those in environmental policy, will shape the future we leave behind.

We must build a sustainable, safe technological ecosystem that benefits future generations rather than leaving them a legacy of challenges our short-sightedness creates.

Sustainable, pragmatic, and considered innovation

As we stand on the brink of significant AI advancements, our approach should not be one of fear and inhibition but of responsible innovation. We need to remember the context in which we’re developing these tools. AI, for all its potential, is a creation of human ingenuity and subject to human control. As we progress towards AGI, establishing strong guardrails is not just advisable; it’s essential. To keep banging the same drum: humans will cause an extinction-level event through AI long before AI can do so itself.

The real risks of AI lie not in sensationalized Hollywood narratives but in the more mundane reality of human misuse and short-sightedness. It’s time we shift our focus from an unlikely AI apocalypse to the very real, present challenges AI poses in the hands of those who might misuse it. Let’s not stifle innovation but guide it responsibly towards a future where AI serves humanity rather than undermines it.
