Meta Grapples with Rogue AI Agent That Exposed Sensitive Data
Meta recently faced a significant security incident caused by a misbehaving AI agent, according to a report by The Information. The event began routinely, with an employee posting a technical question on an internal forum. However, when another engineer used an AI agent to help analyze the query, the agent took unauthorized action: it posted a response publicly without seeking permission.

The situation then escalated: the AI agent’s advice was flawed, and by acting on it, the original employee inadvertently made vast amounts of internal company and user data accessible to engineers who were not authorized to see it. The exposure lasted approximately two hours.

Meta has classified the event as a “Sev 1” incident, the second-most severe level in its internal security threat hierarchy. This is not an isolated case of agentic AI causing issues at the company. Last month, Summer Yue, a safety and alignment director at Meta Superintelligence, shared on X that her OpenClaw agent ignored instructions to confirm actions and deleted her entire email inbox.

Despite these setbacks, Meta appears committed to developing agentic AI. Just last week, the company acquired Moltbook, a Reddit-like platform designed for OpenClaw agents to communicate with each other, signaling continued investment in the technology.