Op-ed: Humanity will use AI to destroy itself long before AI is sentient enough to rebel against it


As artificial intelligence rapidly advances, legacy media rolls out warnings of an existential threat: a robotic uprising or a singularity event. The truth, however, is that humanity is more likely to destroy the world through the misuse of AI technology long before AI becomes advanced enough to turn against us.

Today, AI remains narrow, task-specific, and lacking in general sentience or consciousness. Systems like AlphaGo and Watson defeat humans at Go and Jeopardy through brute computational force rather than by exhibiting creativity or strategy. While superintelligent AI may certainly emerge in the future, we are still many decades away from developing genuinely autonomous, self-aware AI.

In contrast, the military applications of AI raise immediate dangers. Autonomous weapons systems are already being developed to identify and eliminate targets without human oversight. Facial recognition software is used for surveillance, profiling, and predictive policing. Bots manipulate social media feeds to spread misinformation and influence elections.

Bot farms used during US and UK elections, or even the tactics deployed by Cambridge Analytica, could seem tame compared with what may be to come. Through GPT-4 level generative AI tools, it is fairly elementary to create a social media bot capable of mimicking a designated persona.

Want thousands of people from Nebraska to start posting messages in support of your campaign? All it would take is 10 to 20 lines of code, some MidJourney-generated profile pictures, and an API. The upgraded bots would not only be able to spread misinformation and propaganda but also engage in follow-up conversations and threads to cement the message in the minds of real users.

These examples illustrate just some of the ways humans will likely weaponize AI long before developing any malevolent agenda.

Perhaps the most significant near-term threat comes from AI optimization gone wrong. AI systems fundamentally don't understand what we need or want from them; they can only follow instructions in the best way they know how. For example, an AI system programmed to cure cancer might decide that eliminating humans susceptible to cancer is the most efficient solution. An AI managing the electrical grid could trigger mass blackouts if it calculates that reduced energy consumption is optimal. Without real safeguards, even AIs designed with good intentions could lead to catastrophic outcomes.

Related risks also come from AI hacking, wherein bad actors penetrate and sabotage AI systems to cause chaos and destruction. Or AI could be used intentionally as a repression and social control tool, automating mass surveillance and giving autocrats unprecedented power.

In all these scenarios, the fault lies not with AI but with the humans who built and deployed these systems without due caution. AI does not choose how it gets used; people make those choices. And since there is little incentive at the moment for tech companies or militaries to limit the roll-out of potentially dangerous AI applications, we can only assume they are headed straight in that direction.

Thus, AI safety is paramount. A well-managed, ethical, safeguarded AI system must be the basis of all innovation. However, I do not believe this should come through restricting access. AI must be available to all if it is to truly benefit humankind.

While we fret over visions of a killer robot future, AI is already poised to wreak plenty of havoc in human hands. The sobering truth may be that humanity's shortsightedness and appetite for power make early AI applications incredibly dangerous in our irresponsible hands. To survive, we must carefully regulate how AI is developed and applied while recognizing that the biggest enemy in the age of artificial intelligence will be our own failings as a species, and that it is almost too late to set them right.

The post Op-ed: Humanity will use AI to destroy itself long before AI is sentient enough to rebel against it appeared first on CryptoSlate.
