Google is working to liberate doctors from electronic health records, with the global tech giant aiming to test its AI voice recognition system for clinical notes with a healthcare provider by the end of the year.

CNBC in the US has reported that the company is expanding its Google Brain research team with the aim of building the "next gen clinical visit experience".

The healthcare and biosciences team within Google Brain, part of its Google AI division, is undertaking a research project called Medical Digital Assist, which has an "ambitious goal" of testing an AI clinical note-taking system with an external healthcare partner by the end of 2018.

Internal Google job postings said the project would see AI combined with audio and touch technologies to improve the capture and accuracy of clinical notes without the need for clinicians to type them in.

The company already uses voice technologies in its Google Home and Google Translate products.

According to Google, AI is poised to transform medicine and will deliver assistive technologies that will empower doctors to better serve their patients.

Google Brain has already developed a state-of-the-art computer vision system for detecting diabetic retinopathy in retinal fundus images, with the machine learning algorithm's performance determined to be on par with that of US ophthalmologists.

Its genomics team has released DeepVariant, a universal SNP and small indel variant caller created using deep neural networks. DeepVariant is a collaboration between Google and Verily Life Sciences, and is available as open source.
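For readers who want to try it, DeepVariant's documentation describes running the tool through its published Docker image. The sketch below, a Python wrapper around that Docker command, is illustrative only: the file paths, image tag and shard count are placeholder assumptions rather than details from this article.

```python
import subprocess

# Illustrative sketch only: invokes DeepVariant's published Docker image.
# All paths, the image tag and --num_shards are placeholder assumptions.
cmd = [
    "docker", "run",
    "-v", "/data/input:/input",     # host directory with reference + reads
    "-v", "/data/output:/output",   # host directory for the VCF output
    "google/deepvariant:1.6.1",     # example release tag
    "/opt/deepvariant/bin/run_deepvariant",
    "--model_type=WGS",                     # exome and PacBio models also exist
    "--ref=/input/reference.fasta",         # reference genome
    "--reads=/input/sample.bam",            # aligned, indexed reads
    "--output_vcf=/output/sample.vcf.gz",
    "--output_gvcf=/output/sample.g.vcf.gz",
    "--num_shards=4",                       # parallelism for example generation
]
subprocess.run(cmd, check=True)
```

The heavy lifting happens inside the container, which generates candidate examples, runs the deep neural network over them and writes the final variant calls.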

In November last year, the Google Brain team published Speech Recognition for Medical Conversations, a study into the use of automatic speech recognition (ASR) for transcribing medical conversations.

Using the technology, Google is working with physicians and researchers at Stanford University to investigate the types of clinically relevant information that can be extracted from medical conversations to assist physicians in reducing their interactions with the EHR.
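To give a flavour of what automated transcription of a consultation involves, the sketch below uses the general-purpose Google Cloud Speech-to-Text Python client. It is a minimal, hedged example and not the research ASR system described in the paper; the storage path, sample rate and other settings are assumptions.

```python
from google.cloud import speech

# Minimal illustrative sketch using the general Cloud Speech-to-Text API,
# not the research system from the paper. URI and settings are placeholders.
client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_automatic_punctuation=True,
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/consultation.wav")

# recognize() suits short clips; longer recordings would need
# long_running_recognize() instead.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```

A raw transcript is only the first step: separating speakers and pulling out the clinically relevant facts is the harder problem the Stanford collaboration is tackling.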

Doctors often spend about six hours of an 11-hour work day in their electronic health records.

Stanford’s Dr Steven Lin, who is working on the project with Google, told CNBC the challenge is developing an AI-powered speech recognition system that is highly accurate at documenting doctor-patient discussions and summarising notes quickly.

"This is even more of a complicated, hard problem than we originally thought. But if solved, it can potentially unshackle physicians from EHRs and bring providers back to the joys of medicine: actually interacting with patients," Lin said.

The first phase of the project will conclude in August, with plans for the partnership to continue its efforts to develop a system that could dramatically reduce workloads and prevent burnout.

"If something like this actually existed, I think you'd have practices and hospitals tripping over themselves to get it at whatever cost," Lin said.

Google isn’t the only company investing in AI-powered systems for doctors, with Microsoft collaborating with the University of Pittsburgh Medical Center on Intelligent Scribe, a machine learning system that captures and synthesises patient-physician conversations.

To share tips, news or announcements, contact the HITNA editor on lynne.minion@himssmedia.com

 
