Our data is being used in ways we don’t fully understand. Tech giants analyse it to infer our sexual orientation, race, health, and opinions. They share this information with third parties, who use it to make assumptions about our personalities and behaviours that could determine our eligibility for insurance or reveal our political sensitivities.
“It’s not just information that I volunteer to you, it’s also the information that I passively, unavoidably and unintentionally leave behind,” Dr Sandra Wachter, a lawyer and senior research fellow at the Oxford Internet Institute, explains at the Strata Data Conference at ExCeL London.
“It’s my clicking behaviour, my geolocation, my eye tracking. All of that is being collected. You look at my friends on Facebook and assess who I am. And from that, very seamlessly, privacy-invasive and counter-intuitive things are being inferred about me.”
Data collectors follow the footprints we leave online to find out who we are – or, rather, who they think we are. In the case of Dr Wachter, the tracks of her data reveal that she is a Viennese, vegetarian lawyer. This could lead to an inference that she loves rules, animals, and coffee. But these intuitive links aren’t always accurate, and the decisions that they inform can change our lives.
“I do have an intuitive link between being Viennese and drinking coffee, but how does my browsing behaviour impact my credit score? I have no idea about that. And even worse than that, this information is endlessly being replicated, is not deletable and is being shared with third parties.”
These opaque inferences mean the public doesn’t know how their data is used. Dr Wachter believes the public needs a new data protection right: the right to reasonable inferences.
Enforcing a right
Wachter envisions that the way in which the right is enforced will depend on the use case. For high-risk inferences that are privacy-invasive or damaging to reputation, the data collector would need to justify that the inference was reasonable before it was made. This would be based on the relevance of the data to the inference and the relevance of that inference to the outcome that it triggered, as well as the accuracy and reliability of the conclusion.
“What is reasonable will very much depend on the context,” Wachter tells Techworld after her talk. “Reasonable inference would mean something different in criminal justice than what it means in insurance or in advertisements, so we have to have a very differentiated approach to that.
“For certain applications where the harm is not as detrimental, I think policy guidelines and codes of conduct can absolutely work. If we think about more serious cases, like algorithms being used in criminal justice for sentencing or predictive policing, I think it’s more likely that we need hard enforceable laws.”
Current data protection laws provide little obvious protection against the risks of inferential analytics. The GDPR offers heightened protection for sensitive personal data, but inferences made by technology can quickly break down the borders between different types of information. Inferences can turn the anonymised into the personal and the personal into the sensitive. Rather than providing retrospective transparency into what happened, Wachter wants to focus on justifying an action before it’s taken.
The legal protections for the outputs of inferences are another issue. Laws of equality and discrimination don’t always map onto the categories of people used by algorithms, whether it’s the sad teenagers targeted by Facebook ads or the video gamers deprived of social credit in China.
These laws are, however, ambiguous enough to allow different interpretations, which is why Wachter is more focused on developing new rights. Her next project will develop the ethical guidelines for specific sectors that will underpin these rights.
Wachter’s underlying motivation for improving data protection is about providing humans with the right to change who they are.
“That freedom to change is important because it helps us to understand the mistakes that we make, learn from them and become better people, and every time you have a digital legacy, that will inhibit that ability,” she says.
“If I’m not able to make mistakes anymore, if everything that I’m doing is not redeemable anymore, if everything that I did when I was 16 years old were on record forever, what would be my chances of becoming somebody else in the future? Because I think that’s what’s currently happening … every time I browse something on the web, it has a potential impact on my life. My browsing behaviour is being used to decide if I should get a loan, if I can go to university, if I will get promoted.
“It’s not about my personality anymore. The algorithms are deciding who I really am. And I felt we need to have a counterbalance to that, to give back autonomy to people because they should decide who they really are.”