Can Google save doctors and patients from the misery of electronic medical records?

Published January 26, 2018

Google is a company that likes to simplify tasks that used to be much bigger hassles, like reading maps, sharing documents, and finding old emails. Now, recognizing that health systems have not exactly jumped to help doctors with soul-crushing levels of daily data entry, Google wants to use speech recognition to help doctors get patient histories and plans into the electronic health record, or EHR.

Most doctors working today have been forced to become doctor-stenographers, either because they work for a large health system adopting an EHR system, such as Epic or Cerner, or because their private practices have been offered bonuses from the government for using approved software. Doctors have always taken notes, but EHRs are computer documentation systems — they don’t lend themselves to natural narratives like the paper charts of old. One does not have to mouse around on a piece of paper, looking for the “order Vicodin” button.

Electronic records certainly store information in an easier-to-access format than do paper charts, and an absurd number of startups now exist with the aim of mining the health data within them. That being said, these systems were designed by software engineers who apparently thought each physician has hours to write up each patient’s history, do an exam, make a plan, and write orders. Scribes, the specially trained assistants who type in the history and physical findings and start medication orders while the doctor and patient talk, help with the otherwise impossible goal of solving medical problems every 15 minutes, while pages, interruptions, and inbox tasks pile up relentlessly.

But apart from a few settings, like emergency rooms and the private offices of specialists, scribes are still a novelty at the doctor’s office. This is despite ample evidence that they improve productivity and doctor satisfaction. Doctors often cannot convince administrators to cough up the money to hire and train them. Google’s innovation, discussed on a company blog, involves tweaking its already-competent speech recognition software to serve as a scribe for doctors.

Katherine Chou, a product manager, and Chung-Cheng Chiu, an engineer on something delightfully called the Google Brain Team, found that the algorithms used to turn the rat-a-tat medical dictation by doctors into an accurate representation of the speaker’s words could be tweaked for a more normal human conversation — like an office visit.

Existing medical dictation sounds roughly like: “36-year-old Filipina presenting for the second time with complaints of hirsute elbows period. Patient has tried all recommendations including Nair and spironolactone period.” In other words, it is a one-sided doctor’s assessment of a conversation with the patient.

But Google’s Chou and Chiu have extended this capability to understand conversations. The computer faces a double challenge: it must recognize two different voices, which sometimes overlap or interrupt each other, and the speakers are not necessarily using medical terminology: “How long have you been feeling this way?” “I dunno … Maybe since March? April?” “I started coughing, you know, when we went up to the mountains, you know, to get away from the smoke in Napa.”

The Googlers analyzed thousands of anonymized conversations with the help of a professional medical scribe, who helped them parse out the parts of the conversations that were medically relevant and needed to be captured. The team realized they needed to do “data cleaning,” meaning that the discussion of how cute the patient’s kids are could be eliminated from the medical record. They even added artificial noise to the recordings to challenge their software to pick out the words. Overall, Google found it was able to achieve 80 to 90 percent accuracy in documenting these doctor-patient conversations, a level the researchers deemed “reasonable quality” as a record of what was said in the exam room.

But there are some concerns with this practice of “roboscribing,” at least at the level of sophistication Chou and Chiu describe. For one, they say “utterances under five words” are simply removed. Many of these utterances could well be words like “Hello” and “Uh-huh” and other phrases that make no difference in the chart. But a doctor could respond to a patient with a crucial short phrase such as “I agree” or “Take half a dose,” which the software would miss. This could lead to confusion in the emergency room or on the part of a new doctor, not to mention legal headaches when the malpractice attorney says, “But doctor, the electronic record makes no mention of you assessing this patient for risk of harming anyone.”
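A toy sketch in Python (purely illustrative, not Google’s actual pipeline) shows why a blanket five-word cutoff is risky: the same rule that strips filler also strips short, clinically critical replies.

```python
def filter_short_utterances(transcript, min_words=5):
    """Keep only utterances containing at least `min_words` words,
    mimicking the "utterances under five words are removed" rule."""
    return [u for u in transcript if len(u.split()) >= min_words]

# A hypothetical office-visit exchange.
visit = [
    "Hello",
    "Uh-huh",
    "Any thoughts of hurting yourself or anyone else?",
    "No",                 # crucial answer, but only one word
    "Take half a dose",   # crucial instruction, but only four words
    "How long have you been feeling this way",
]

kept = filter_short_utterances(visit)
# The greeting and filler vanish as intended, but so do "No" and
# "Take half a dose" -- exactly the kind of loss the article warns about.
```

The filter cannot distinguish pleasantries from a one-word answer to a suicide-risk screen; word count alone is a poor proxy for clinical relevance.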

Another concern is privacy. The Google team doesn’t discuss the implications of Google being privy to the most intimate details of patients’ lives. Twitter employees can allegedly check out our “direct messages.” Will rogue Google engineers be able to browse clinical notes?

Nor is Google the only player in the field. The company has competition from M*Modal, an EHR and transcription company that launched a similar virtual scribe service in March of 2017. Reviews for the service are wildly mixed: some doctors say M*Modal has the best speech recognition they have ever seen, while others dictated their reviews using M*Modal itself to demonstrate how many words it dropped.

While freeing doctors to talk at length with their patients without the interference of a computer screen is certainly an admirable goal, and even though errors occur in paper charts too, a head-to-head trial should happen before this technology debuts widely in medical offices and large health systems. The trial would compare the work of human medical scribes with that of Google’s software. Only then could researchers determine whether critical information is falling out of the medical record. If it is, hospitals will have no choice but to pay for scribes, or to buy software that is far more user-friendly for doctors than today’s woefully bad options.