Contractors working for Meta in Kenya were reportedly exposed to highly sensitive user-generated content, including footage of individuals using the bathroom and engaging in sexual acts, according to a lawsuit and multiple reports. This data, captured by Meta's AI-powered smart glasses, was allegedly reviewed by human contractors as part of efforts to improve the artificial intelligence system.

The practice has drawn the attention of regulators, with the UK's Information Commissioner's Office (ICO) writing to Meta regarding a report on the matter. The lawsuit names Meta and its manufacturing partner, Luxottica, alleging violations of consumer protection laws.

Meta has stated that contractors review data shared with Meta AI in order to improve the user experience, a process it says is described in its privacy policy. However, where exactly this disclosure appears, and how aware users are that humans may review their data, remain points of contention. A mention of human review was reportedly found within Meta's UK AI terms of service.

The investigation, a collaboration between Swedish newspapers and a Kenya-based journalist, is based on interviews with more than 30 employees at Sama, a company involved in data annotation for Meta's AI systems. These workers reportedly handled video, image, and speech annotation. The report's authors noted that they did not have direct access to the materials or to the specific work areas.

Concerns regarding the privacy implications of these smart glasses echo those raised by earlier wearable technology, such as the defunct Google Glass. Meta's terms of service indicate the company may share user data from its AI and wearable devices with moderators for review. The smart glasses are equipped with an indicator light that illuminates when the camera is active.