A team from Canada is researching whether facial recognition services violate an individual’s right to privacy.
Facial recognition services are a valuable tool when used properly and ethically. But some IT companies are using facial recognition software in ways that a four-member team of investigators from Canada says may violate international privacy standards. One company in particular, Clearview AI, scrapes images from a variety of locations and saves them in its database. Many of the images come from social media accounts. The problem is that the images remain in the database even after they have been deleted from the users’ accounts.
While facial recognition services aren’t illegal at the international level, privacy becomes a concern when the scraped images are stored for later use. San Francisco is one of the first communities to ban the use of facial recognition software: police and other law enforcement agencies there are no longer allowed to use it to investigate criminal cases. The only places the technology is still permitted are international airports and international flights.
Many variables affect the accuracy of facial recognition software, including the amount of light, the distance from the camera, and the angle at which the camera is placed. Systems used by government agencies are extremely sophisticated and can achieve accuracy ratings above 90%. The rating can be even higher for images taken from websites, especially when the pictures are clear and of good quality. The team investigating whether the use of facial recognition software is legal has raised several important questions. One of their biggest concerns is whether a company can legally scrape photos or images from an international social media account. Is that a violation of privacy? Is keeping the photos in a database long after they have been deleted a different type of violation?
Using biometric maps, facial recognition software matches facial features in photographs to identify an individual. Clearview AI offers its services to police departments and law enforcement agencies to help identify individuals believed to be involved in various types of criminal activity. The company says it uses only public images and has specific safeguards in place to prevent unauthorized personnel from accessing the sensitive information it offers. The team of investigators from Canada claims, however, that even though the publicly posted images are used by law enforcement, using them without the owner’s knowledge violates their privacy under both federal and international law.
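To make the matching step concrete, here is a minimal sketch of how a system can compare a "biometric map" (an embedding vector derived from a face) against a gallery of enrolled faces. Everything here is illustrative: the names, the tiny 4-number embeddings, and the 0.6 threshold are assumptions for the example, not a description of Clearview AI's actual system (real systems use much larger embeddings, often 128 or 512 dimensions).

```python
import math

# Hypothetical gallery of enrolled faces. Each "biometric map" is an
# embedding vector; real systems derive these from face images with a
# neural network, and the vectors are far longer than shown here.
known_faces = {
    "person_a": [0.12, 0.80, 0.35, 0.44],
    "person_b": [0.90, 0.10, 0.65, 0.20],
}

def euclidean_distance(a, b):
    """Distance between two embeddings; smaller means more similar faces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, gallery, threshold=0.6):
    """Return the closest enrolled identity, or None when no enrolled
    embedding falls within the (assumed) match threshold."""
    name, dist = min(
        ((n, euclidean_distance(probe, e)) for n, e in gallery.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= threshold else None

probe = [0.10, 0.78, 0.30, 0.45]  # embedding computed from a new photo
print(best_match(probe, known_faces))  # prints "person_a"
```

The threshold is the key design choice: set it too loose and the system produces false matches (the accuracy problem raised later in this article), set it too tight and genuine matches are missed.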
Many companies argue that once an image has been posted or made “public” and anyone can see it, using it is no longer a violation of privacy. And because privacy laws differ from country to country, who is responsible for stopping the violations or pressing charges against a company that breaks the law? Clearview AI claims that it only offers its facial recognition services to law enforcement agencies. It also claims that all of the images it uses are public, and that no private images are ever retrieved or run through the software.
The fact that facial recognition software loses accuracy when interpreting the features of women and individuals of certain ethnic backgrounds is considered one of the biggest threats to the general public. The four-member team from Canada is continuing to look into how facial recognition software is used and how individuals can better protect their privacy until the law catches up.
April 20, 2020
Compunet InfoTech offers Managed IT Support & Hosted IT Services For Vancouver & Surrounding Areas. Serving Vancouver, Burnaby, Richmond, Surrey, Coquitlam and New Westminster.