With RegTech booming, Microsoft CEO Satya Nadella used his presentation at the World Economic Forum in Davos to speak out in favour of increased regulation of data privacy.

He praised Europe’s General Data Protection Regulation (GDPR), which came into force in May 2018. “My own point of view is that it’s a fantastic start in treating privacy as a human right,” he said. “I hope that in the United States we do something similar, and that the world converges on a common standard.”

Last year, Microsoft hailed privacy as a “fundamental” human right in the days after GDPR came into force.

The default position has to be that people own their own data, said Nadella in Davos, echoing calls that Sir Tim Berners-Lee, inventor of the World Wide Web, has made in recent years.

Nadella’s public stance in favour of more regulation came after a Pew Research Center survey found that roughly half of Americans don’t trust either the federal government or social media sites to protect their data.

And, of course, it came in the wake of multiple scandals over data protection involving companies as large as Facebook and Google. Indeed, the latter was fined €50 million under GDPR in Davos week.

Privacy is just one controversial area for tech companies. More and more technologies, including artificial intelligence and machine learning, now trawl consumer data to spot patterns of behaviour and make predictions from them, raising privacy questions of their own.

Nadella singled out the growing use of facial recognition – as he did last year, when he called on the US government to regulate the technology to prevent its use in racial profiling and surveillance, applications that often rely on biased data.

“It’s a piece of technology that’s going to be democratised, that’s going to be prevalent, I can come up with ten uses that are very virtuous and important and can improve human life, and ten uses that would cause problems,” he said.

Microsoft’s website lists the following as praiseworthy applications: “Police in New Delhi recently trialled facial recognition technology and identified almost 3,000 missing children in four days. Historians in the United States have used the technology to identify the portraits of unknown soldiers in Civil War photographs taken in the 1860s. Researchers successfully used facial recognition software to diagnose a rare, genetic disease in Africans, Asians and Latin Americans.”

But the dark side includes invasion of privacy and bias. While Microsoft has unveiled a set of principles for the ethical use of AI – as Google, SAP, and others have done in recent months to counter public criticisms – Nadella said that industry self-regulation won’t be enough.

“In the marketplace there’s no discrimination between the right use and the wrong use,” he said. “We welcome any regulation that helps the marketplace not be a race to the bottom.”

But is Microsoft’s clarion call for regulation as straightforward as it seems?

When California passed its own data privacy law, the California Consumer Privacy Act (CCPA), last June, the impression was that the technology industry had sorted itself into two camps.

On the one hand, vendors such as Microsoft, Apple, Salesforce, Box, and SugarCRM appeared to favour GDPR-style regulation in the US, judging from public pronouncements made by their CEOs. Apple unveiled a privacy portal, allowing users to manage their own data.

On the other, the likes of Google and Facebook saw the rules as a threat to their advertising-based businesses. These and a handful of other giants actively campaigned against CCPA.

However, later developments suggested that the truth may have been rather more complex. In the months since CCPA was rushed into being, the technology industry has been quietly lobbying the government for weaker national laws that it can draft itself, in order to prevent CCPA from becoming a de facto standard across the US.

The moves have seen companies from both camps, including Microsoft, Google, IBM, and Facebook, lobby the White House to begin outlining federal rules before California’s rules come into force in 2020.

CCPA was passed unanimously on 28 June, after the legislation was rushed through the state senate and assembly to prevent even tougher rules, backed by the signatures of more than 600,000 citizens, from being put directly to voters as a ballot initiative.

So while it’s good news that the technology industry recognises that it has stepped over the line in recent years, damaging public trust, the apparent support of some CEOs shouldn’t be taken at face value – especially when regulations cover inferences drawn from consumer data, as CCPA does.

In essence, the industry wants to protect its AI interests by preventing CCPA from becoming the US standard – despite it being supported by citizens in the home of Silicon Valley.