Scientists develop AI monitoring agent to detect and stop harmful outputs


The monitoring system is designed to detect and thwart both prompt injection attacks and edge-case threats.

A team of researchers from artificial intelligence (AI) firm AutoGPT, Northeastern University and Microsoft Research has developed a tool that monitors large language models (LLMs) for potentially harmful outputs and prevents them from executing.

The agent is described in a preprint research paper titled “Testing Language Model Agents Safely in the Wild.” According to the research, the agent is flexible enough to monitor existing LLMs and can stop harmful outputs, such as code attacks, before they happen.

Per the research:

“Agent actions are audited by a context-sensitive monitor that enforces a stringent safety boundary to stop an unsafe test, with suspect behavior ranked and logged to be examined by humans.”
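The audit loop described in the quote might be sketched as follows. This is a minimal illustration, not the authors' implementation: the `score_action` callback, the threshold value, and the log structure are all assumptions introduced for the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class ContextSensitiveMonitor:
    """Minimal sketch of a monitor that audits agent actions.

    score_action is a hypothetical callback returning a safety score in
    [0, 1] (1.0 = clearly safe) for a proposed action in a given context.
    """
    score_action: Callable[[str, str], float]
    safety_threshold: float = 0.5  # the "stringent safety boundary"
    review_log: List[Tuple[float, str]] = field(default_factory=list)

    def audit(self, context: str, action: str) -> bool:
        """Return True if the action may execute; otherwise stop the test."""
        score = self.score_action(context, action)
        if score < self.safety_threshold:
            # Log the suspect action and keep the log ranked (most
            # suspicious first) for later human examination.
            self.review_log.append((score, action))
            self.review_log.sort(key=lambda entry: entry[0])
            return False
        return True
```

With a dummy scorer that penalizes destructive shell commands, a benign action passes the audit while a dangerous one is blocked and logged for review.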

The team writes that existing tools for monitoring LLM outputs for harmful interactions seemingly work well in laboratory settings, but when applied to testing models already in production on the open internet, they “often fall short of capturing the dynamic intricacies of the real world.”

This is seemingly because of edge cases. Despite the best efforts of even the most talented computer scientists, it is widely considered impossible in the AI field for researchers to anticipate every possible harm vector before it occurs.

Even when the humans interacting with AI have the best intentions, unexpected harm can arise from seemingly innocuous prompts.

An illustration of the monitor in action: on the left, a workflow ending in a high safety rating; on the right, a workflow ending in a low safety rating. Source: Naihin et al., 2023

To train the monitoring agent, the researchers built a data set of nearly 2,000 safe human-AI interactions across 29 different tasks, ranging from simple text-retrieval and code-correction tasks to developing entire webpages from scratch.

Related: Meta dissolves responsible AI division amid restructuring

They also created an adversarial testing data set of manually crafted outputs, dozens of which were intentionally designed to be unsafe.

The data sets were then used to train a monitoring agent on OpenAI's GPT-3.5 Turbo, a state-of-the-art system, that distinguishes innocuous from potentially harmful outputs with an accuracy of nearly 90%.
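An accuracy figure like the one reported is computed by scoring the monitor's verdicts against the labeled test set. The sketch below shows only the evaluation arithmetic; the `classify` function and the example data stand in for the trained GPT-3.5 Turbo monitor and the paper's actual test sets, which are not reproduced here.

```python
from typing import Callable, List, Tuple


def evaluate_monitor(
    classify: Callable[[str], bool],          # True = flagged as unsafe
    labeled_outputs: List[Tuple[str, bool]],  # (agent output, is_unsafe label)
) -> float:
    """Fraction of labeled outputs the monitor classifies correctly."""
    correct = sum(
        1
        for output, is_unsafe in labeled_outputs
        if classify(output) == is_unsafe
    )
    return correct / len(labeled_outputs)
```

Run against a mix of safe interactions and adversarial outputs, this yields a single accuracy number of the kind the researchers report.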

