Over the past few years, waves of shocking privacy scandals, data breaches, and abuses have crashed over the world's biggest companies and billions of their users. At the same time, many countries have bolstered their data protection rules. Europe set the tone in 2016 with the General Data Protection Regulation, which introduces strong guarantees of transparency, security, and privacy. Just last month, Californians gained new privacy rights, such as the right to request the deletion of collected data, and other states are set to follow.
The response from India, the world's largest democracy, has been curious, and it introduces potential dangers. As an emerging engineering powerhouse, India affects us all, and its cybersecurity and data protection moves deserve careful attention. On the surface, India's proposed Personal Data Protection Bill of 2019 appears to emulate new global standards, such as the right to be forgotten. Other requirements, like the obligation to store sensitive data on systems located within the subcontinent, may constrain certain business practices and are seen by some as more controversial.
Dr. Lukasz Olejnik (@lukOlejnik) is an independent cybersecurity and privacy researcher and consultant.
One feature of the bill that has received less scrutiny, but is perhaps the most alarming of all, is how it would criminalize illegitimate re-identification of user data. While seemingly prudent, this may soon put our connected world at greater risk.
What is re-identification? When a company processes user data, special algorithms decouple sensitive information, like location traces and medical records, from identifying details, like email addresses and passport numbers. This is called de-identification. The process may be reversed, so organizations can recover the link between users' identities and their data when needed. Such controlled re-identification by legitimate parties happens routinely and is perfectly appropriate, so long as the technical design is safe and sound.
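To make the idea concrete, here is a minimal sketch of one common approach, keyed pseudonymization. The field names, the key handling, and the in-memory lookup table are illustrative assumptions for this example, not how any particular company or the bill defines the process.

```python
import hashlib
import hmac

# Hypothetical secret; in practice it would live in a key-management system.
PSEUDONYM_KEY = b"example-key-stored-separately"

# Guarded mapping that makes controlled re-identification possible.
# It would be kept under strict access control, never published with the data.
_reid_table = {}

def de_identify(record):
    """Replace a direct identifier with a keyed pseudonym."""
    pseudonym = hmac.new(PSEUDONYM_KEY, record["email"].encode(), hashlib.sha256).hexdigest()
    _reid_table[pseudonym] = record["email"]  # the retained identity link
    return {"id": pseudonym, "location_trace": record["location_trace"]}

def re_identify(pseudonym):
    """Controlled reversal: only works with access to the guarded table."""
    return _reid_table[pseudonym]

record = {"email": "alice@example.com", "location_trace": ["51.50,-0.12", "51.52,-0.10"]}
safe = de_identify(record)
print(safe["id"][:16])          # pseudonym, shareable for analysis
print(re_identify(safe["id"]))  # alice@example.com, recoverable by the data holder
```

The safety of such a scheme rests entirely on the key and the mapping staying out of reach; anyone who can rebuild the link defeats it, which is exactly the threat described next.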
On the other hand, if a malicious attacker were to get hold of a de-identified database and re-identify the data, the cybercriminals would gain extremely valuable loot. As continued data breaches, leaks, and cyber espionage show, our world is full of potential adversaries seeking to exploit weaknesses in information systems.
India, perhaps in direct response to such threats, intends to ban re-identification without consent (aka illegitimate re-identification) and subject it to financial penalties or jail time. While prohibiting potentially malicious actions might sound compelling, our technological reality is much more complicated.
Researchers have demonstrated the risks of re-identification due to careless design. Take a prominent recent case in Australia as a typical example. In 2018, Victoria's public transport authority shared usage patterns from its contactless commuter cards with participants in a data science competition; the data was effectively made publicly accessible. The following year, a group of scientists discovered that flawed data protection measures allowed anyone to link the data to individual commuters.
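The underlying technique is a linkage attack: a handful of facts known about a person from the outside, such as a few trips taken with them, can be matched against the released records to single out their card. Below is a minimal sketch of the idea on made-up data; it is not the scientists' actual method, and the field names, card IDs, and timestamps are invented.

```python
# De-identified trip records, as they might appear in a public release.
trips = [
    {"card": "c101", "stop": "Flinders St", "time": "2018-07-02T08:03"},
    {"card": "c101", "stop": "Parliament",  "time": "2018-07-02T17:40"},
    {"card": "c202", "stop": "Flinders St", "time": "2018-07-02T08:03"},
]

# Auxiliary knowledge: two trips the attacker knows the target made.
known = {("Flinders St", "2018-07-02T08:03"), ("Parliament", "2018-07-02T17:40")}

# Any card whose trip history contains all the known events is a candidate.
candidates = {
    card
    for card in {t["card"] for t in trips}
    if known <= {(t["stop"], t["time"]) for t in trips if t["card"] == card}
}
print(candidates)  # {'c101'}: a few known events single out one card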
Fortunately, there are ways to mitigate such risks with the appropriate use of technology. Furthermore, to ascertain how well a system protects data, companies can conduct rigorous tests of its cybersecurity and privacy guarantees. Such tests are typically done by experts in collaboration with the organization controlling the data. Researchers may sometimes resort to performing tests without the knowledge or consent of the organization, while nevertheless acting in good faith, with the public interest in mind.
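As one illustration of what such a test might look like, k-anonymity is a classic yardstick: it asks whether every combination of quasi-identifiers, such as zip code and birth year, is shared by at least k people. The toy records and the choice of quasi-identifiers below are assumptions made for the sake of the example.

```python
from collections import Counter

# Hypothetical de-identified records under audit.
records = [
    {"zip": "3000", "birth_year": 1985, "diagnosis": "flu"},
    {"zip": "3000", "birth_year": 1985, "diagnosis": "asthma"},
    {"zip": "3186", "birth_year": 1990, "diagnosis": "flu"},
]

def k_anonymity(rows, quasi_identifiers):
    """Smallest number of records sharing any quasi-identifier combination."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

print(k_anonymity(records, ["zip", "birth_year"]))  # 1: at least one person is unique
```

A result of 1 means some individual is uniquely identifiable from the quasi-identifiers alone, which is precisely the kind of weakness an auditor would want to report.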
When data protection or security weaknesses are found in such tests, the flaws may not always be promptly addressed. Worse, under the new bill, software vendors or system owners might even be tempted to initiate legal action against security and privacy researchers, hampering research altogether. When research becomes prohibited, the personal risk calculus changes: Faced with the risk of fines or even prison, who would dare partake in such a socially useful activity?
Today, companies and governments increasingly recognize the need for independent testing of security and privacy protections, and they offer ways for honest individuals to signal risks. I raised similar concerns when, in 2016, the UK's Department for Digital, Culture, Media & Sport intended to ban re-identification. Fortunately, by introducing special exceptions, the final law acknowledges the needs of researchers working with the public interest in mind.
Such a universal, outright ban on re-identification may even increase the risk of data breaches, because data owners may feel less incentivized to privacy-proof their systems. It is in the clear interest of policymakers, organizations, and the public for feedback from security researchers to arrive directly, instead of risking the information reaching other, potentially malicious parties first. The law should enable researchers to honestly report any weaknesses or vulnerabilities they detect. The common goal should be to fix security problems quickly and efficiently.
Criminalizing crucial parts of researchers' jobs could cause unintended harm. Furthermore, the standards set by a country as influential as India risk exerting a negative impact worldwide. The world as a whole cannot afford the risks that come with impeding cybersecurity and privacy research.