AI-Powered Meta Smart Glasses Stir Privacy Concerns Over Doxing Potential

A new use of Meta’s smart glasses has raised significant privacy concerns after two Harvard students developed a way to use the glasses to dox individuals in real time. AnhPhu Nguyen and Caine Ardayfio paired the glasses with a system they call I-XRAY, which combines facial recognition with an AI-powered large language model (LLM) to automatically identify people and gather personal information about them by searching the internet.

The system uses facial recognition technology to find pictures of individuals online and then cross-references those images with publicly available data, quickly compiling a profile that may include home addresses, phone numbers, and even Social Security information. Because this process happens without the individual’s consent, it has sparked fears of serious privacy violations.
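The pipeline described above can be illustrated conceptually. The sketch below is a hypothetical outline, not the students’ actual code: every function name (reverse_face_search, summarize_with_llm, lookup_public_records, build_profile) is an assumption standing in for a capability the article describes, and each step simply shows how face search, LLM-based identification, and public-records lookups would chain together.

```python
# Hypothetical sketch of the described pipeline -- not the I-XRAY code.
# Every function is a placeholder for a capability reported in the article.
from dataclasses import dataclass, field


@dataclass
class Profile:
    """Aggregated public data compiled for one detected face."""
    name: str | None = None
    addresses: list[str] = field(default_factory=list)
    phone_numbers: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)


def reverse_face_search(image_bytes: bytes) -> list[str]:
    """Placeholder: return URLs of pages containing a matching face."""
    raise NotImplementedError("stands in for a reverse face-search service")


def summarize_with_llm(pages: list[str]) -> str:
    """Placeholder: have an LLM infer a likely name from the matched pages."""
    raise NotImplementedError("stands in for LLM-based extraction")


def lookup_public_records(name: str) -> Profile:
    """Placeholder: query people-search databases for a given name."""
    raise NotImplementedError("stands in for public-records lookups")


def build_profile(image_bytes: bytes) -> Profile:
    """Chain the reported steps: face search -> inferred name -> public records."""
    matched_pages = reverse_face_search(image_bytes)   # step 1: find photos online
    likely_name = summarize_with_llm(matched_pages)    # step 2: infer an identity
    profile = lookup_public_records(likely_name)       # step 3: pull public records
    profile.name = likely_name
    profile.sources = matched_pages
    return profile
```

The point of the sketch is how little glue is needed: once a face match yields a name, ordinary people-search services do the rest, which is what makes the privacy risk hard to contain.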

Critics are alarmed at how easily this technology can expose sensitive data, creating risks such as identity theft, stalking, and harassment. The system’s ability to gather personal information without any awareness or input from the subject raises questions about how such tools will be regulated in the future.

Nguyen and Ardayfio suggest that people protect themselves by opting out of people-search databases that store personal data and by requesting removal from reverse face-search engines. However, privacy experts warn that these measures may not be enough at a time when AI technology is advancing so rapidly.

The growing use of AI in everyday gadgets like smart glasses is pushing the boundaries of what is possible, but it is also raising serious questions about privacy and data security. Many are calling for tougher regulations to prevent the misuse of such technologies before they become widespread.