
Americans’ Fingerprints and Eye Scans May Be Subject to Data Manipulation



The threat to personal data privacy is amplified by the widespread use of biometric information in everyday life

Biometric information collected by businesses, such as facial recognition data, fingerprints, and voice and eye scans, can be hacked and manipulated, and the threat has been amplified by the proliferation of artificial intelligence, the U.S. Federal Trade Commission (FTC) said in a recent warning.

In a May 18 policy statement (pdf), the FTC said that consumers face increasing risks associated with the collection and use of biometric information, including risks created by AI-powered technologies. Biometric information includes data that denote the physical, biological, or behavioral traits or measurements of individuals. Facial recognition, fingerprint recognition, iris recognition, and voice recognition are among the technologies prevalent today.

Some technologies can use biometric information to determine characteristics of individuals, ranging from age, race, and gender to personality traits. Accessing biometric data can allow criminals to glean sensitive information about an individual, such as whether they attend particular political meetings, religious services, or other gatherings, the type of healthcare they have sought, and more.

“Large databases of biometric information may also be attractive targets for malicious actors because of the information’s potential to be used for other illicit purposes, including to achieve further unauthorized access to devices, facilities or data. These issues pose risks not only to individual consumers, but also to businesses and society,” the agency said.

The FTC highlighted the contribution of AI in making “significant advances” in biometric technologies. For instance, the National Institute of Standards and Technology (NIST) found that facial recognition became 20 times better at finding a matching photograph from a database in just four years between 2014 and 2018.

“Such improvements are due in significant part to advancements in machine learning, along with data collection, storage, and processing capabilities sufficient to support the use of these technologies.”

The FTC warning comes amid the increasing use of AI in biometric scams. In May, McAfee Group, a global leader in online protection, published a report about AI technology fueling a rise in online voice scams.

In fact, just three seconds of audio was found to be enough to clone an individual’s voice, the report stated.

Of the more than 7,000 people surveyed by the company across seven countries, a quarter of adults said they had experienced some kind of AI voice scam.

One incident involving the misuse of AI for criminal purposes took place in April, when Jennifer DeStefano, a mother from Arizona, received an unexpected phone call from what sounded like her 15-year-old daughter sobbing and asking for help.

A man’s voice on the phone claimed that he had kidnapped the child. DeStefano quickly confirmed that her daughter was safe; she was actually in the house. The criminal had cloned the girl’s voice in an attempt to scam her.

FTC’s Section 5 Rule

The FTC said in the policy statement that it intends to scrutinize companies that collect, use, or market biometric data and technologies to ensure compliance with Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”

As such, any false or unsubstantiated claims related to biometric technologies, or about the collection and use of such information, may violate the FTC Act, the agency said.

Section 5 of the FTC Act empowers the agency to impose a civil penalty of up to $50,120 per violation against any individual, partnership, or corporation found to have breached the regulations.

In recent years, the FTC has brought enforcement actions against Facebook and photo app maker Everalbum on charges that the tech firms misrepresented their uses of facial recognition technology.

FTC Scrutinizing Businesses

To determine whether a business’s use of biometric information violates the FTC Act, the agency will consider multiple factors, including whether the entity failed to assess foreseeable harms to consumers before collecting such data and whether it engaged in secret or unexpected collection or use of biometric data.

The FTC will also look into whether a business failed to promptly address foreseeable risks and implement tools to minimize or eliminate such risks.

Businesses are expected to evaluate practices and capabilities of third parties with whom they share access to customers’ biometric data.

The FTC also wants businesses to continuously monitor the biometric technologies they develop, use, or offer for sale to ensure that these function as expected and are unlikely to harm consumers.

“In recent years, biometric surveillance has grown more sophisticated and pervasive, posing new threats to privacy and civil rights,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, according to a May 18 press release from the agency.

“Today’s policy statement makes clear that companies must comply with the law regardless of the technology they are using.”

Biometric Data Proliferation

From unlocking phones and prompting Alexa to play songs to accessing health apps and verifying bank accounts, biometric data plays an increasing role in modern life. Fingerprints are used everywhere from workplaces to schools.

While proponents have touted ease of use as the primary driver of the technology’s growth, critics warn that sharing biometric information opens personal privacy to deep and far-reaching intrusion.

This concern is heightened by the fact that big tech companies are regularly subject to data breaches and AI is increasingly deployed within commonly used apps. With sufficient computing resources, AI can replicate data at scale, and with access to personally identifiable information, it can effectively clone individual identities.

According to multinational biometric solution provider Mitek Systems, physiological factors that can help build a profile of a person’s identity include fingerprints; hand geometry (finger spacing and length); palm prints (the lines, thickness, and width of the palm); DNA; blood type; facial measurements; iris and retina patterns; vein patterns in the eyes and hands; and heartbeats.

Behavioral patterns include typing rhythm and keystroke dynamics, walking gait, voice and speech inflections, gestures, web navigation habits such as scrolling and swiping, written text recognition such as a signature, geolocation and IP addresses, purchasing habits, device use, and browser history and cookies.

In the hands of totalitarian regimes, biometric information can be used as a tool of control. The Chinese Communist Party (CCP) deploys a wide range of advanced facial recognition systems to monitor and govern the population.

When people fail to fall in line with Party commands, the authorities can restrict an individual’s movement within certain locations, prevent them from traveling, curb access to their bank accounts, and even restrict access to mobile apps.

This was especially evident during the COVID-19 pandemic and among specific groups of people targeted by the CCP like Falun Gong practitioners, Uyghurs, and Tibetans.


