TechAI Desk

Meta Tracks Keystrokes on Google, LinkedIn for AI Training

Meta is deploying an internal tool, the Model Capability Initiative (MCI), to monitor employee activity by recording keystrokes and mouse clicks across various platforms. The scope is broad, covering external sites like Google, LinkedIn, and Wikipedia, alongside internal tools like Slack and GitHub. Meta says the data is crucial for training advanced AI agents that can perform white-collar tasks. The project has nonetheless sparked internal alarm over privacy, with employees fearing exposure of sensitive personal and corporate data. Meta has issued assurances that the tool only views what appears on screen and will not read files or attachments.


Meta is implementing an internal tracking tool, the Model Capability Initiative (MCI), to record employee keystrokes and mouse clicks across numerous platforms for advanced AI model training. This initiative signals Meta's aggressive push to compete in the generative AI space by gathering granular behavioral data from its workforce.

Scope of Data Collection

The MCI tool is designed to observe and collect data from employees' actions on their work computers. The scope of monitored sites is extensive and includes both external and internal platforms:

  • Third-Party Sites: Google, LinkedIn, and Wikipedia are among the sites slated for tracking.
  • Other Tracked Services: The list also encompasses Microsoft's GitHub, Salesforce's Slack, and Atlassian tools.
  • Meta Properties and Other Apps: Meta's own Threads is also on the list, as is the AI agent tool Manus, with the list remaining fluid.
  • AI Tools: The original scope reportedly included major AI applications like OpenAI's ChatGPT and Anthropic's Claude.

Rationale Behind the Tracking

The data gathering project is framed as a necessary step to advance Meta's generative AI capabilities, aiming to close the perceived gap with competitors like OpenAI, Anthropic, and Google. A Meta spokesperson confirmed the project's purpose:

"If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus."


The goal is to train AI agents capable of performing tasks typically handled by white-collar workers.

Privacy Concerns and Safeguards

Internal communications viewed by CNBC revealed significant employee apprehension regarding the surveillance tool. Concerns raised by staff members include the potential exposure of sensitive data, such as:

  • User passwords.
  • Details of new product development.
  • Personal information regarding immigration status, health, or family members.

In a memo intended to address these concerns, a Meta Superintelligence Labs (MSL) staffer outlined several safeguards:

  • View Limitation: The tool is stated to only view what employees see on their screen and will "not read in files or attachments."
  • Data Exclusion: The memo asserted that mitigations are in place so the model will not learn incidental personal information that appears on screen, such as details visible in corporate emails.
  • Employee Control: Staff were advised that they can limit data capture by avoiding personal work on their corporate devices.

Company Context

This effort aligns with CEO Mark Zuckerberg's strategy to rapidly build out foundation models. Following the hiring of Alexandr Wang from Scale AI, Meta has been accelerating its AI development, recently unveiling its Muse Spark model as part of the new Muse series overseen by MSL.
