The flow of ideas between the tech sector and older industries runs on a very fast one-way street from Silicon Valley to the doorstep of any company with a time-tested business model. Healthcare is the most ancient of all businesses, and there are surely many tech-based disruptions in store for it. But at this moment, tech companies dealing in troves of personally identifiable information (PII) are at an inflection point. They must increasingly assess the legal and ethical implications of gathering, processing, and storing sensitive data about people. If this self-assessment does not arise out of an organic sense of duty or swelling consumer demand, it will increasingly be required as a matter of law. Prime among these laws is the European Union’s General Data Protection Regulation (GDPR) which, on May 25, 2018, firmly enshrined the digital privacy and autonomy of some 511 million EU residents, with reach extending to companies worldwide.
In the U.S., hospital systems, clinicians, health researchers, and insurers have traveled a very similar road with HIPAA (1996) and the HITECH Act’s (2009) mandates for handling protected health information (PHI). With this experience, in addition to already established legal and ethical principles, the U.S. healthcare industry may serve as a significant source of leadership in teaching tech not only how to handle personal data, but also how to empower users’ decision-making when intimately interacting with an organization.
Lesson 1: How to Handle PII
PII, just like healthcare, is an extremely valuable commodity. Anyone who doubts the intrinsic value of PII need only consider the explosive ten-year rise in the value of tech companies in the social media space. These companies’ business model is, essentially, to acquire and distill vast amounts of PII into extraordinary insight into how to market products and ideas. Increasingly, big data analytics reveal how little pieces of data add up. For example, connecting the dots between geolocation, call/text metadata, and purchasing habits can paint as detailed a picture of a person as any medical chart. Even more, biometric data such as facial recognition, fingerprints, and gait analysis blur the line between PII and PHI almost completely.
Of course, protecting PII from loss or misuse should be a serious concern for tech companies. Major data breaches have shown no sign of abating, and companies have generally done a poor job of self-policing the issue. Just this week, Facebook disclosed a new data breach affecting some 50 million users, the substance and extent of which are still unknown (1/27/2020 update: this was the Cambridge Analytica scandal). While health is paramount, it is hard to argue against the fact that, like a botched medical procedure, misuse of PII can have lasting and irreversible effects.
Some of the most important safeguards are logistical and human. On the logistics side, since the advent of HIPAA/HITECH, health systems have employed the concepts of “minimum necessary” and “as-needed” access to PHI. For example, healthcare providers segregate especially sensitive categories of PHI (e.g., mental health counseling, substance abuse, and HIV/AIDS diagnosis and treatment) from other, more routine PHI. Persons who are granted access may need additional credentials to “break the glass” and access that information. Indeed, these ideas are a key part of the GDPR, which sets out certain especially sensitive categories of information (e.g., data pertaining to racial/ethnic status, political affiliations, and genetic information). Other logistical safeguards, such as non-use of portable media drives, back-end database encryption, two-factor authentication, and data de-identification, where practicable, are also employed.
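The “minimum necessary” and “break the glass” ideas above can be sketched in code. The following is a minimal, hypothetical Python illustration, not any real system’s access model: the role name, category labels, and audit structure are all assumptions chosen for clarity.

```python
from datetime import datetime, timezone

# Illustrative sensitive-category labels; real systems map these to
# policy-defined data classifications (names here are assumptions).
SENSITIVE_CATEGORIES = {"mental_health", "substance_abuse", "hiv_aids"}

# Minimal in-memory audit trail; a production system would write to an
# append-only, tamper-evident log instead.
AUDIT_LOG = []

def can_access(user_roles, record_category, break_glass=False, reason=None):
    """Grant access under a 'minimum necessary' policy.

    Routine PHI is visible to the clinical role alone; sensitive
    categories additionally require an explicit break-glass step with a
    recorded justification, which is audited for later review.
    """
    if "clinician" not in user_roles:
        return False
    if record_category not in SENSITIVE_CATEGORIES:
        return True  # routine PHI: the role check alone suffices
    if break_glass and reason:
        AUDIT_LOG.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "category": record_category,
            "reason": reason,
        })
        return True
    return False  # sensitive PHI without break-glass is denied

# Routine record: plain role-based access.
assert can_access({"clinician"}, "allergies")
# Sensitive record: denied until the glass is broken with a reason.
assert not can_access({"clinician"}, "mental_health")
assert can_access({"clinician"}, "mental_health",
                  break_glass=True, reason="emergency consult")
```

The key design point is that the extra friction for sensitive data is paired with accountability: every break-glass access leaves an audit entry that a privacy office can later review.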
For the more human component, health organizations governed by HIPAA/HITECH mandate regular training and certification for employees dealing in PHI. Business associate agreements set out the obligations of contractors and third parties for handling information and implement deterrent measures for misuse. Health systems have independent privacy offices to identify potential breaches and breach risks and to manage compliance responses. Notably, as the risk of civil liability exposure increases, insurance companies will drive the adoption of data handling standards as a requirement of coverage. Most importantly, a cultural change in the ethos of handling PII is needed. Certainly, “move fast and break things” doesn’t work in the operating room, nor should it when influencing millions of users’ purchasing, relational, and political decisions based on psychometric profiling.
Lesson 2: Informed Consent
Informed consent has been a critical legal and ethical component of medical treatment in the U.S. since the 1950s. At its essence, informed consent has three elements: 1) the patient understands the proposed treatment options; 2) no particular treatment option is mandated or coerced; and 3) the patient voluntarily agrees.
In view of the increasing significance of PII, European regulators have made consent a key part of the GDPR. Under the GDPR, clear and unambiguous consent must now be obtained in a context not obscured by a volume of information or fine print. Users must be able to withdraw consent to the processing of data just as easily as it is given. Explicit “opt-in” consent is required for obtaining and processing sensitive personal information. Beyond obtaining consent in a perfunctory way, companies may be best served when their users have a simplified understanding of what will be done with their data. Will the data be sold or transferred to third parties? Who are those third parties, and what will they do with the data? Does any part of the sharing of that information benefit the user? Does the company have a vested financial interest in the transfer and sharing of user PII? Companies must increasingly assess the need to provide answers to these types of questions to users.
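The consent mechanics described above, explicit opt-in, scoped to a stated purpose, and withdrawable as easily as it is given, can be sketched as a small data structure. This is a hedged illustration in Python, not a compliance implementation; the class, field, and purpose names are all invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One explicit, per-purpose consent (names are illustrative).

    Consent is opt-in: nothing is presumed from inaction, and
    withdrawal is a single step that mirrors the opt-in step.
    """
    purpose: str                      # e.g. "share_with_ad_partners"
    granted_at: Optional[str] = None  # None until the user opts in
    withdrawn_at: Optional[str] = None

    def opt_in(self):
        self.granted_at = datetime.now(timezone.utc).isoformat()
        self.withdrawn_at = None

    def withdraw(self):
        # Withdrawal is one call, as easy as opting in: no extra hurdles.
        self.withdrawn_at = datetime.now(timezone.utc).isoformat()

    @property
    def active(self) -> bool:
        return self.granted_at is not None and self.withdrawn_at is None

consent = ConsentRecord(purpose="share_with_ad_partners")
assert not consent.active  # default is no consent, never presumed consent
consent.opt_in()
assert consent.active
consent.withdraw()
assert not consent.active
```

Keeping one record per purpose, rather than a single blanket flag, is what lets a company answer the questions above, who receives the data and why, on a per-use basis.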
Despite the dramatic changes brought about by the tech sector, there may still be a few time-tested lessons for the tech industry as a whole. The U.S. healthcare sector may function as a source of leadership as tech wrestles with questions of how to ethically and legally handle PII. Human and logistical factors are perhaps the most important considerations and, among them, a change in ethos is needed about how to handle PII. Additionally, as with a medical procedure, it is important for tech firms to recognize that not mere consent, but informed, knowing consent will do the most to achieve ethical and legal compliance for the products and services they offer. Culture changes take time, but in this case, the destination is certainly worth the journey.