Will GDPR fail? Moving beyond the new regulation



The General Data Protection Regulation (GDPR) is a good thing, right? A recent discussion with Fieldfisher lawyer Hazel Grant confirmed that, despite its voluminous and bureaucratic outer appearance, it contains the essence of data protection law as present in the UK for over a decade, combined with the current state of best practice. Given that online privacy knows no borders, it will undoubtedly be better to have a single framework rather than one tied to any single nation or jurisdiction. Word is (for example, from this round-table discussion) that it will form the backbone of privacy law globally — if you’re a multinational, goes the thinking, it makes more sense to implement it once, rather than having different grades of regulation depending on the geography.

Neither is GDPR necessarily the end of the world for ill-prepared (and, potentially, previously in-denial) organisations. “The vital thing is to plan, rather than panic,” says Freeform Dynamics’ Bryan Betts, who attended said round-table. “GDPR compliance may not be that onerous, especially if you already handle customer data fairly and transparently.” Even if surveys suggest that 75% of marketing data will be “obsolete”, the 80/20 rule implies that organisations could do without that bit anyway — ‘keep everything’ is a strategy for the hopeless hoarder, not the business leader.

Yes, GDPR may be expensive to implement, particularly for organisations that have played fast and loose with our privacy in the past, and which now face potential fines with real impact. Yes, it may involve a learning curve as people get their heads around the 260-page document it involves (quick tip: get someone in who understands it). Yes, it might be a pain for consumers, who may be faced with incessant questions from online providers, each carefully worded to avoid any suggestion that the customer was duped or cajoled. And yes, organisations such as Acxiom, operating in the background of marketing data acquisition, may fear their business models are under threat and respond accordingly.

No doubt it will achieve many of its goals. All well and good. And yet, and yet… a looming question is: will it actually make us safer, or protect our privacy online? As in, will the potential bad things be countered, such that they are less likely to happen? Please forgive my over-simplistic language, but isn’t this what it all boils down to? For a number of reasons (most of which are speculative; this is the future we are talking about, after all), this may not be the case.

Challenges of scope, consent, loopholes, aggregation, unexpected consequences and speed of innovation

First, we have unaddressed challenges of scope. Worth a read is the work by PageFair (the self-styled “global authority on adblocking”) comparing GDPR’s impact on Facebook and Google: simply put, Facebook needs to ask people if it can use status posts as input to its advertising engines, whereas Google does not need to know who someone is — its AdWords algorithms generate information based on search requests, location and so on, without that information being personally identifiable. “Google’s AdWords product has the benefit that it can be modified to operate entirely outside the scope of the GDPR,” states the article.

In other words, data can be processed and people can be targeted with marketing materials whether or not they are “personally identifiable”. Such targeting is only an issue if you consider it one, but it does seem to be one of the areas that GDPR was set up to address. Keep in mind that other providers, including Facebook, can take a similar tack if they choose. “Nothing in the GDPR prohibits Facebook from serving non-targeted ads,” says Michael Kaufmann.

Bringing Facebook into the discussion leads to the question of consent, specifically GDPR’s need for “clear affirmative action” around “agreement to the processing of personal data relating to him or her” — law firm Taylor Wessing provides a good explanation, as does the UK Information Commissioner’s Office (ICO). On the face of it, Facebook has done a pretty good job of building Privacy by Design into its services, giving users granular control over what they allow on their timelines.

I don’t know about you, but my grasp on consent is tenuous at best — I couldn’t say what it is that I have consented to across the services of Apple, Google, Facebook, Microsoft and the rest over the past few years. Indeed, who remembers the “A privacy reminder from Google” thing, where essentially we agreed to whatever we had to so we could keep getting the service? For sure, we have a choice not to agree, and I am sure (having just checked) that it is all set out in nice, clear English.

But who has done anything other than follow the instruction to “Scroll down and click ‘I agree’ when you’re ready to continue to Search,” versus removing oneself from the Google-enabled online world? The consent debate becomes a Hobson’s choice: either agree, or cut yourself off from all things digital. This is crucial: by saying yes, we are acknowledging that one of the most powerful companies in the world, and its partners, can use our data. Having done so, any thought of meanwhile restricting access for the local Mom and Pop hardware shop becomes laughable.

And as an aside, consent isn’t always required, for example if another legal requirement needs the data to be processed (here’s an HR-based view). Given the smorgasbord of regulations out there, it’s not hard to imagine an organisation using one law’s requisites as a loophole against those of GDPR. For example (and I welcome a lawyer’s view on this), the ability to be forgotten necessitates remembering who was forgotten and why, which somewhat undermines the principle: even GDPR could be used as a defence against the consent provisions of GDPR. I’m not trying to be picky, more illustrating how any loopholes are open to confusion or exploitation.
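To make the forgetting paradox concrete: in practice, honouring erasure usually means keeping a minimal record of the erasure itself, for instance a hashed suppression list, so the same person isn’t quietly re-imported from a backup or a partner feed. The Python sketch below is purely illustrative; the function names and the choice of a hashed email are my assumptions, not anything GDPR prescribes.

```python
import hashlib

# Illustrative only: 'forgetting' a user while remembering *that* they
# were forgotten. All names here are hypothetical, not from any real API.
suppression_list = set()  # hashes of erased identifiers, retained indefinitely

def erase_user(database: dict, email: str) -> None:
    """Delete the user's record, but keep a trace of the erasure."""
    digest = hashlib.sha256(email.lower().encode()).hexdigest()
    database.pop(email, None)      # the forgetting
    suppression_list.add(digest)   # the remembering who was forgotten

def can_reimport(email: str) -> bool:
    """Block re-adding an erased user from a backup or third-party feed."""
    digest = hashlib.sha256(email.lower().encode()).hexdigest()
    return digest not in suppression_list
```

The hash is pseudonymous rather than anonymous, which is exactly the tension: the mechanism that enforces forgetting is itself a lasting record of the person.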

Building on the fact that we are dealing with not one, but multiple global corporations, plus a squillion smaller data users, the issue of aggregation looms equally large. The issue is not only how the individual data gorillas hunt for new ways to exploit information in the name of innovation, but also what happens when third parties plug into APIs and slurp potentially innocuous feeds in ways that unexpectedly affect privacy. Let’s say your startup creates a new learning algorithm and plugs it into the Twitter and Strava APIs, and it determines, then posts online, provable examples of dangerous cycling (a sketch of the idea follows). Who is at fault at that point? You? Twitter? Strava?
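By way of illustration, here is a deliberately naive sketch of that kind of aggregation. The fetch logic, field names and the 30-minute join window are all invented for the example; neither Twitter’s nor Strava’s real APIs are being described.

```python
from datetime import timedelta

def flag_dangerous_cycling(posts, activities, speed_limit_kmh=50):
    """Cross-reference geotagged posts with ride telemetry.

    Each feed is fairly innocuous on its own; joined on user and time,
    they become a provable record of who rode dangerously, where and when.
    """
    flagged = []
    for ride in activities:        # hypothetical Strava-style ride records
        for post in posts:         # hypothetical geotagged status posts
            same_user = post["user"] == ride["user"]
            close_in_time = abs(post["time"] - ride["start"]) < timedelta(minutes=30)
            if same_user and close_in_time and ride["max_speed_kmh"] > speed_limit_kmh:
                flagged.append((ride["user"], ride["location"], ride["max_speed_kmh"]))
    return flagged
```

Nothing in either feed needed to change for this to become possible; the privacy impact appears only at the join.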

Or indeed, does it really matter, given that you have no money and the cat is already out of the bag? What if the data is reputation-damaging, or directly usable by law enforcement? This leads to the law of unintended consequences — for example, in some cases, data may be subpoenaed for good reason (in this case, a murder investigation) but a raft of less valid data requests is likely: indeed, these are driving the current review of the UK’s Investigatory Powers Act (a.k.a. the Snooper’s Charter). And nothing’s stopping government agencies from acting as the startup in the previous paragraph, and/or creating laws to enable that to happen, in the name of fighting crime.

Perhaps the biggest issue of all is speed of innovation, which moves far faster than regulation. Like it or not, much of innovation’s value comes from ‘leveraging’ (horrible word, but more positive than ‘exploiting’) areas of potential difference — for example disintermediating an inefficient or costly model of working (e.g. FinTech vs traditional banking), or finding new ways to connect things (e.g. social media). Innovation and speed go hand in hand, as nobody wants tenth-mover advantage.

Innovation and bacterial mutation are not so different, in that most instances of both fail (ask venture capitalists, or indeed Bill Gates): it’s only with hindsight that the next generation of winners emerges. However, our governance models act as though the next big thing will happen like the last. The seemingly innocuous “let’s identify new ways of doing things” attitude is exactly what will lead providers to circumvent GDPR in ways the regulators haven’t considered. Regulators are not good at acting quickly: what we now call GDPR was first mooted in June 2011, and replaces laws adopted in October 1995.

To illustrate what the future might hold, consider this example of how voice recognition can detect ‘emotional state’ — the question of how this can impact privacy without the data ever being personally identifiable is not addressed by GDPR. Given that nobody can predict the future, we should at least have governance mechanisms that can react to it.

We need to protect people, not simply their data

If GDPR cannot address these issues, what becomes of it? For a start, it becomes an expensive burden which fails to deliver on some pretty fundamental goals. It will inevitably need to be replaced, but any changes will be flawed if they follow the same splat-the-rat, see-a-problem-and-try-to-regulate-it-away approach, built on a fundamental, naive optimism that law can be implemented even as the context, and its supporting artefacts, shift beyond recognition. As an interesting aside, the world of financial regulation is already responding to the fact that such an approach is neither possible nor desirable.

Tackling this challenge requires very different ways of thinking, starting from the very source. GDPR’s fundamental purpose is not to protect the privacy of citizens but to protect data: that’s the D and P of GDPR. This needs to change — it is people that need protection, whatever data is stored about them and however it is used. And indeed, whoever is in charge: right now, the force most likely to undermine the provisions of GDPR is ourselves.

Almost three years ago, I proffered the idea of a virtual bill of rights (a couple of weeks later, Web founder Tim Berners-Lee did something similar). My point then, and it remains now, is that we can’t legislate on data. Rather, we need to afford the digital world the same rights and responsibilities as the physical world. So, there is no such thing as cyber-theft; we simply have theft, and the same goes for fraud, extortion, bullying and so on. It should be that simple — this also means that all existing laws need to be considered in the light of what is now possible. If it is to be illegal to market to me without my consent, that should hold whether or not someone holds information on me. And so on, and so on.

Even if we arrive at a world in which everything is known, we still need to treat each other as humans. Perhaps it isn’t that different to village life — back in the day, when we all lived in each other’s pockets and it was very hard to keep anything secret, we first learned the principles of acceptable behaviour. All that needs to happen is to accept that such age-old ideas, of courtesy, respect and basic rights (not to be stolen from, defrauded or conned, or harangued for money, and so on), still stand.

For now, we are where we are. What can organisations do in the meantime? Well, get on and protect the data they hold about their customers; that much is still true. Perhaps we will see a GDPR 2, far simpler yet further-reaching than the existing framework, but I could never advise anyone to wait for it, as doing so would be illegal (even if GDPR is subject to a ‘grace period’).

Even as you look to respond to existing regulation, however, you should be looking beyond it, towards the moment when privacy regulators recognise that current approaches will never be effective. Don’t expect GDPR to make us any better protected against annoying consent-based advertising, higher-risk aggregation-based insights or the biggest challenge of all, downright manipulation. To recall what Sun Microsystems co-founder and CEO Scott McNealy once said, “You have zero privacy anyway. Get over it.” Indeed, we need to get over it, and deal with it in a way that will actually deliver.