Google and one of the largest health-care networks in the United States are embroiled in a data-privacy controversy that researchers fear could jeopardize public trust in data-sharing practices and, potentially, academic studies.
At issue is a project dubbed Nightingale that gives Google access to the health-care information, including names and other identifiable data, of tens of millions of people without their knowledge. The people were treated at facilities run by the health network Ascension, which is based in St Louis, Missouri.
Google says that Nightingale, first reported in The Wall Street Journal on 11 November, is meant to develop technology that would enable Ascension to deliver improved health care.
Both companies say that they abided by US laws to protect health-care information. But the lack of patient knowledge — and the fact that identifiable data weren’t scrubbed from the records — has caused US lawmakers to cry foul. The US Department of Health and Human Services says it is now looking into “this mass collection of individuals’ medical records with respect to the implications for patient privacy”.
Researchers worry that the revelations will undermine trust in studies more broadly. “This has shockwaves far beyond Google, far beyond the health-care sector,” says Johan Ordish, a policy analyst at the PHG Foundation, a charity in Cambridge, UK, that studies health-care technology.
Legal reassurance
In a statement posted online on 11 November, Google said that its work with Ascension adheres to industry-wide regulations regarding patient data, and “comes with strict guidance on data privacy, security, and usage”. The same day, Ascension also posted a statement saying that the project complied with the law and the company’s “strict requirements for data handling”.
Neither company responded to Nature’s request for comment on the potential for Nightingale to erode trust in research.
This isn’t the first time Google has been involved in a controversial health-care project. In 2016, the magazine New Scientist revealed that the Google-owned artificial-intelligence company DeepMind had partnered with the Royal Free London NHS Foundation Trust, a group of London hospitals, to access health data without gaining patients’ permission. An investigation by UK regulators subsequently determined that the agreement breached data-protection laws.
Meanwhile, an agreement between Facebook and the UK company Cambridge Analytica that came to light in 2018 allegedly granted researchers access to data from millions of Facebook users without their consent. The controversy was a watershed moment for those whose research involves large collections of personal data, says Ordish: “Cambridge Analytica shook the world.” In July, Facebook agreed to pay US$100 million to settle the matter with the US Securities and Exchange Commission.
The Nightingale project is much larger than the Royal Free agreement. And it has the potential to resonate even more deeply than the Cambridge Analytica controversy, because people are particularly protective of their health-care information, says bioethicist Effy Vayena of the Swiss Federal Institute of Technology in Zurich.
Regulations lacking
All three cases highlight the lack of regulations surrounding corporate use of personal data, says social scientist Jay Shaw, who studies artificial intelligence and health at the University of Toronto in Canada. “It’s just the Wild West right now,” he says.
Against this backdrop, academics are flocking to combine large amounts of health data with artificial-intelligence techniques to evaluate health care and find ways to improve it. Research and business communities are grappling with how to balance this push with appropriate privacy controls.
Academics seeking health data for research rather than commercial purposes must typically get approval from an ethical-review committee before they can start a project. The researchers also often strip identifying information from the records they work with. Commercial uses of personal data don’t necessarily undergo the same review, says Edward Meinert, who studies health-care systems at the University of Oxford, UK. “The problem with these deals is that they’re not going through that type of rigorous process,” he says.
But that distinction might be lost on many people, especially if there are further controversies, cautions Vayena. “At some point, all of the research will get a bad name,” she says. “With these incidents, we undermine public trust to this whole enterprise. We have to be really careful.”