Privacy is about being able to determine who can see what information, when, and for how long. Security is a prerequisite for privacy: if your information is not secure, you cannot control who sees it, and you do not have privacy.

But no person is completely self-sufficient. For society to function, we must place our trust in others: engineers to build bridges that do not collapse, restaurants to sell food that doesn’t poison us, and doctors to consider our best interests when they prescribe medicine that will heal us rather than harm us. In every case, we expect those we trust to look after our interests at least as much as their own. We must risk their betrayal in order to let them help us.

For medical practitioners to help us stay healthy, we must give them information about ourselves. This information is deeply personal; indeed, information about our health is among the most personal information that exists.

Changing risk profile

The National Digital Health Strategy recognises the importance Australians place on the security of their health data. But the move to an opt-out model for the Federal Government’s My Health Record, along with notifiable data breach requirements, means it is increasingly important for healthcare workers to have not only good procedures but also a good understanding of digital privacy and security issues.

Doctors and other healthcare staff are very familiar with principles of confidentiality. However, new digital models of healthcare mean that the “paper chart in a locked cupboard” mindset is no longer sufficient to minimise the risks to patient privacy.

One of the challenges of digital privacy is that healthcare workers often do not have a high degree of technical knowledge; doctors and practice managers may take at face value the assurances offered by health software vendors – and impressive-sounding yet empty phrases like “bank-grade security” – without a deep understanding of the risks.

As part of the ethical imperative to “first, do no harm”, healthcare staff must remember that the loss of patient confidentiality is a very serious harm, regardless of whether the information concerns a “sensitive” issue such as mental or sexual health.

The benefits of digital health come through sharing of information; this sharing, by necessity, increases the risk to privacy.

According to research by the Office of the Australian Information Commissioner (OAIC), health service providers are generally considered the most trustworthy, outranking financial institutions, governments and charities. Once this trust is broken, however, it is very difficult and time-consuming to rebuild.

More importantly, for individuals whose confidentiality is lost, it can never be regained. Once personal information is in the wild, it can never be unseen.

Data capitalism

Big data exceptionalism seems to be in every government project plan and venture capital call for funding. Claims of future benefits are made unchallenged, while the risks to privacy are dismissed out of hand. Powerful moneyed interests are aligning behind ever-increasing pressure to allow others to surveil and quantify our every move, and the $139-billion-a-year healthcare industry is an attractive target for companies seeking ever greater financial reward.

Personal information has been monetised very successfully by the likes of Google and Facebook, and large tech companies are now moving into health data. Alphabet (Google) subsidiary DeepMind entered into a contract with the UK National Health Service, and Amazon has this year announced a healthcare partnership with JPMorgan Chase and Berkshire Hathaway.

While the privacy implications of your search engine history or Facebook likes are concerning enough, health information is considered far more personal. Health insurance company NIB has already indicated its desire to gain access to the vast trove of information soon to be stored in the My Health Record system. In the UK, the Google DeepMind/NHS partnership was found to have breached the UK’s data protection laws, which are far more stringent than those in Australia.

Yet this doesn’t appear to be what people want. The 2017 OAIC report into Australians’ attitudes towards privacy notes that 85 per cent of people are either ‘annoyed’ by unsolicited marketing activity or concerned about where the marketer obtained their information. A large majority (86 per cent) believe an organisation has misused their information if it collected the information for one purpose and used it for another.

Combine these figures with Australians’ sensitivity about health data and it becomes clear that the ‘surveillance capitalism’ business model – selling targeted advertising based on personal data – is a particularly poor fit for health data.

The HealthEngine betrayal

It is for this reason that the conduct of HealthEngine – a venture capital-backed Australian company offering healthcare appointment booking services – is a particularly egregious betrayal of both patients and its client practices.

It is undoubtedly useful for medical practices to automate appointment bookings and take pressure off their front-of-house staff, while patients benefit from being able to arrange appointments more conveniently.

But as part of their responsibility to patient confidentiality, healthcare staff must understand what these third-party tools are doing with their patients’ data.

The ABC revealed that HealthEngine shared some information about patient appointment bookings with third parties, including a personal injury law firm. HealthEngine continues to insist that this “sharing” of data is done only with patients’ “express consent” and is therefore acceptable.

Most others, including civil society groups and peak medical bodies, disagree, saying these practices are not in line with community expectations of privacy.

With over $37 million of venture funding riding on HealthEngine, it is understandable that the company would seek to downplay its use of dark patterns – deceptive interface designs, such as pre-ticked boxes and confusingly worded consent screens – to manipulate unsuspecting patients into ‘consenting’ to share their data with third parties for services unrelated to booking an appointment with their GP.

But does this model for healthcare IT ultimately serve patients’ interests?

Matters of trust

The incentives of VC-backed technology startups and other companies built on data capitalism are frequently not aligned with those of patients. Sharing people’s data with third parties may not matter much when it relates to fairly innocuous things such as cat photos, but applying the same business model to healthcare systems can have much more serious consequences.

If patients cannot trust the systems their doctors use, they will be less inclined to share the information their doctors need to provide healthcare. Patient health will suffer if patients can no longer trust that their healthcare providers put their needs first.

Healthcare staff can only do so much to protect these needs: the digital health sector must ensure that innovations respect privacy by design, and developers must not compromise patient care in the name of profit.

Dr Trent Yarwood is an infectious diseases physician and health advisor to independent public policy and advocacy organisation Future Wise.

Justin Warren is the founder and managing director of IT consultancy PivotNine and a board member of digital rights organisation Electronic Frontiers Australia.
