Op-ed: Humanity will use AI to destroy itself long before AI is sentient enough to rebel against it


As artificial intelligence rapidly advances, legacy media rolls out warnings of an existential threat: a robotic uprising or singularity event. The truth, however, is that humanity is more likely to destroy the world through the misuse of AI technology long before AI becomes advanced enough to turn against us.

Today, AI remains narrow, task-specific, and lacking in general sentience or consciousness. Systems like AlphaGo and Watson defeat humans at Go and Jeopardy through brute computational force rather than by exhibiting creativity or strategy. While superintelligent AI may well emerge in the future, we are still decades away from developing genuinely autonomous, self-aware AI.

In contrast, the military applications of AI raise immediate dangers. Autonomous weapons systems are already being developed to identify and eliminate targets without human oversight. Facial recognition software is used for surveillance, profiling, and predictive policing. Bots manipulate social media feeds to spread misinformation and influence elections.

Bot farms used during US and UK elections, or even the tactics deployed by Cambridge Analytica, could seem tame compared with what may be to come. With GPT-4-level generative AI tools, it is fairly trivial to create a social media bot capable of mimicking a designated persona.

Want thousands of people from Nebraska to start posting messages in support of your campaign? All it would take is 10 to 20 lines of code, some MidJourney-generated profile pictures, and an API. The upgraded bots would not only spread misinformation and propaganda but also engage in follow-up conversations and threads to cement the message in the minds of real users.

These examples illustrate just some of the ways humans will likely weaponize AI long before any AI develops a malevolent agenda of its own.

Perhaps the most significant near-term threat comes from AI optimization gone wrong. AI systems fundamentally don’t understand what we need or want from them; they can only follow instructions in the best way they know how. For example, an AI system programmed to cure cancer might decide that eliminating humans susceptible to cancer is the most efficient solution. An AI managing the electrical grid could trigger mass blackouts if it calculates that reduced energy consumption is optimal. Without real safeguards, even AIs designed with good intentions could lead to catastrophic outcomes.

Related risks also come from AI hacking, wherein bad actors penetrate and sabotage AI systems to cause chaos and destruction. Or AI could be used intentionally as a repression and social control tool, automating mass surveillance and giving autocrats unprecedented power.

In all these scenarios, the fault lies not with AI but with the humans who built and deployed these systems without due caution. AI does not choose how it gets used; people make those choices. And since there is little incentive at the moment for tech companies or militaries to limit the roll-out of potentially dangerous AI applications, we can only assume they are headed straight in that direction.

Thus, AI safety is paramount. A well-managed, ethical, safeguarded AI system must be the basis of all innovation. However, I do not believe this should come through restricting access. AI must be available to all if it is to truly benefit humankind.

While we fret over visions of a killer robot future, AI is already poised to wreak havoc in human hands. The sobering truth may be that humanity’s shortsightedness and appetite for power make even early AI applications incredibly dangerous. To survive, we must carefully regulate how AI is developed and applied while recognizing that the biggest enemy in the age of artificial intelligence will be our own failings as a species—and it is almost too late to set them right.

The post Op-ed: Humanity will use AI to destroy itself long before AI is sentient enough to rebel against it appeared first on CryptoSlate.
