
Researchers at Facebook have trained an AI to mislead facial recognition systems



Facebook remains embroiled in a multibillion-dollar lawsuit over its facial recognition practices, but that has not stopped its artificial intelligence division from developing technology to combat the very misuse the company is accused of. According to VentureBeat, Facebook AI Research (FAIR) has developed a state-of-the-art de-identification system that works on video, including live video. It works by altering key features of a subject's face in real time, using machine learning to trick a facial recognition system into misidentifying the person.
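Setting the paper's specifics aside, the general idea can be sketched in a few lines of code. The snippet below is purely illustrative and hypothetical (detect_face and deidentify_face are stand-in stubs, not FAIR's models): it loops over video frames, finds the face, and slightly alters the identity-bearing pixels before the frame is shown or streamed.

    # Illustrative sketch only -- detect_face and deidentify_face are
    # hypothetical stand-ins for the trained models described in the paper.
    import numpy as np

    def detect_face(frame):
        """Placeholder face detector: returns a bounding box (x, y, w, h)."""
        h, w = frame.shape[:2]
        return (w // 4, h // 4, w // 2, h // 2)

    def deidentify_face(face_pixels, strength=0.05):
        """Placeholder for a learned generator that nudges identity-bearing
        features while keeping the crop visually similar to the original."""
        perturbation = strength * np.random.randn(*face_pixels.shape)
        return np.clip(face_pixels + perturbation, 0.0, 1.0)

    def process_stream(frames):
        """Apply de-identification frame by frame, as live video would require."""
        for frame in frames:
            x, y, w, h = detect_face(frame)
            frame[y:y + h, x:x + w] = deidentify_face(frame[y:y + h, x:x + w])
            yield frame

    # Example usage on dummy data: ten 256x256 RGB frames.
    frames = [np.random.rand(256, 256, 3) for _ in range(10)]
    protected = list(process_stream(frames))

The point of the real system, per FAIR, is that this happens fast enough for live video and changes the face only as much as is needed to throw off a recognition model.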

De-identification technology has existed before, and entire companies, such as Israeli AI and privacy firm D-ID, are dedicated to providing it for still images. There is also a whole category of facial-recognition-fooling imagery you can wear yourself, known as adversarial examples, which work by exploiting weaknesses in how computer vision software is trained to identify certain features. Take, for instance, a pair of glasses printed with an adversarial pattern that can make a facial recognition system think you are the actress Milla Jovovich.
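To see why a printed pattern can fool a recognizer at all, consider the toy example below. It is not the glasses attack itself, just a minimal, hypothetical demonstration of the same weakness: a tiny perturbation aligned against a model's learned weights is enough to push an input across its decision boundary.

    # Toy adversarial example against a made-up linear "recognizer".
    # Not a real attack on any deployed system; purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=64)            # weights of the toy recognizer
    x = rng.normal(size=64)            # a toy input "image"

    def score(v):
        return float(w @ v)            # positive score -> "identity A"

    # A small signed-gradient step, aimed at flipping the current decision.
    epsilon = 0.5
    direction = np.sign(score(x))
    x_adv = x - epsilon * direction * np.sign(w)

    print("original score:   ", score(x))
    print("adversarial score:", score(x_adv))

The perturbation is small relative to the input, but because it is aligned with the model's weaknesses rather than with anything a human would notice, the classifier's output shifts dramatically. Printed adversarial patterns exploit the same principle against far larger vision models.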

But this type of facial recognition countermeasure usually means altering a photo or still image taken from a security camera or another source after the fact, or, in the case of adversarial examples, wearing something designed to fool the system preemptively. Facebook's research purports to do similar work in real time and on video footage, both pre-recorded and live. This is a first for the industry, FAIR argues, and good enough to stand up to sophisticated facial recognition systems. You can see an example in a YouTube video which, because it is unlisted, cannot be embedded elsewhere.


Image: Facebook AI Research

"Face recognition can lead to loss of privacy and face swapping technology can be misused to create misleading videos," a document explaining the approach of the company cited by VentureBeat . "Recent world events related to the advancement of, and abuse of, face recognition technology call for the need to understand methods that successfully tackle de-identification. Our contribution is the only one suitable for video, including live video, and presents a quality that goes far beyond the methods of literature. "

Facebook does not intend to use this technology in any of its commercial products, VentureBeat reports. But the research may influence future tools designed to protect people's privacy and, as the "misleading videos" reference emphasizes, prevent someone's likeness from being used in deepfake videos.


Image: Facebook AI Research

Currently, the AI industry is working on ways to combat the spread of deepfakes and the increasingly sophisticated tools used to create them. This is one method; lawmakers and tech companies are also trying to come up with other tools, such as deepfake detection software and regulatory frameworks for controlling the distribution of fake videos, images and audio.

The other concern the FAIR research touches on is facial recognition itself, which is also largely unregulated and worries lawmakers, academics and activists, who fear it may violate human rights if it continues to be deployed without oversight by law enforcement, governments and corporations.

