The Oversight Labs, a Kenyan digital rights organization, has formally requested that the Office of the Data Protection Commissioner (ODPC) investigate whether footage captured by Ray‑Ban Meta Smart Glasses is being unlawfully used to train artificial intelligence systems.
The complaint adds fresh regulatory scrutiny to the global AI data pipeline that increasingly passes through Nairobi, where thousands of contract workers annotate images and videos for major technology companies. It follows recent reports that Kenyan data‑labeling workers have been tasked with reviewing user‑generated footage from the smart glasses to help train Meta’s AI models.
Concerns About Consent and Data Legality
According to the complaint, The Oversight Labs has asked the ODPC to examine whether individuals recorded by the glasses ever consented to having their images, voices, and personal conversations used to train AI tools. The group argues that such processing may violate Kenya’s Data Protection Act, which requires informed consent, lawful processing, and strict safeguards around sensitive data.
The organization also urged regulators to assess whether the devices enable covert recording in public or private settings, potentially capturing people without their knowledge and without any indication that the footage may be routed abroad for AI development.
Kenya’s Role in the Global AI Supply Chain Under Scrutiny
The petition comes after investigations by Swedish publications Göteborgs‑Posten and Svenska Dagbladet. The reports alleged that workers employed by outsourcing firm Sama reviewed images and videos recorded by Ray‑Ban Meta Smart Glasses to train Meta’s artificial intelligence systems.
Footage collected from users worldwide is reportedly sent to data annotation teams in Kenya, who classify objects, people, environments, and actions to improve AI recognition capabilities.
According to The Oversight Labs’ filing, the material reviewed by workers may include highly sensitive scenes, such as:
- bathroom and intimate moments
- financial information, including bank cards
- instances of individuals viewing explicit content
- private household interactions
Annotators reportedly label this content so that Meta’s systems can better understand and interpret real‑world scenarios captured by the glasses.
Privacy Risks Linked to Ray‑Ban Meta Smart Glasses
The glasses, which are built with cameras, microphones, and voice‑enabled AI, allow users to capture images and video from a first‑person perspective. Some of the device’s AI‑powered features rely on Meta’s cloud, meaning user data may be transmitted internationally for processing.
The Oversight Labs argues that neither the subjects recorded nor many of the device’s users fully understand how and where this data is being processed. The group says this raises serious questions about cross‑border data transfers, lawful basis for processing, and whether appropriate data protection impact assessments (DPIAs) were conducted.
Mercy Mutemi, Executive Director of The Oversight Labs, commented:
"We are deeply concerned by the development of harmful technology through the exploitation of vulnerable communities."
The organization also noted that it is in contact with Kenyan data‑labeling workers who reviewed the footage and are willing to testify anonymously.
The complaint points to earlier labor disputes involving Meta's outsourcing operations in Kenya, where content moderators sued Meta and its partners over allegations of unsafe working conditions and exposure to traumatic material.
The Oversight Labs argues that the new allegations reinforce concerns about how global tech giants use Kenyan labor for high‑risk data work without adequate safeguards or transparency.
Request for a 90‑Day Regulatory Investigation
The group has asked the ODPC to complete its investigation within 90 days and determine whether Meta, Sama, or any associated entities adhered to Kenyan data protection laws.
This case highlights Kenya's growing, but controversial, role in the global AI industry. The country has become a major hub for data annotation thanks to its English‑speaking workforce and developed outsourcing sector, but labor advocates continue to warn about poor conditions, low pay, and repeated exposure to harmful content.
