Dictation comes easily to most physicians, especially those who started their careers before EHR systems existed or became the norm. Dictation is a tried-and-true practice. Why mess with something that works?
The problem is that traditional dictation requires transcription, which is costly, delays essential updates to medical records and, more critically, carries significant risk of errors. Those errors either feed a time-consuming cycle of proofreading and remediation or, worse, go unnoticed and become permanent, potentially significant misinformation in the record. Bottom line: dictation may feel fast and efficient to physicians, but the requisite transcription can prove detrimental, both financially and clinically.
Voice recognition technology is replacing conventional dictation across a variety of healthcare information systems, EHR included. Voice recognition technology is certainly able to eliminate transcription costs, but how about transcription errors? Is it able to listen and interpret better than a human?
The answer to both questions is yes, especially if the technology is “trained.”
The natural language processing (NLP) characteristics of voice recognition technology allow spoken words to be parsed into discrete data fields, not just blocks of free text. Voice recognition can be made highly intuitive and better than a human if an EHR system is programmed to incorporate dynamic, command-based responses.
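To make the "discrete data fields" idea concrete, here is a minimal, hypothetical sketch of how a transcript might be parsed into structured fields rather than stored as free text. It assumes the speech engine has already produced a transcript string; the function name and field names are illustrative, not any vendor's actual API.

```python
import re

def parse_vitals(transcript: str) -> dict[str, str]:
    """Pull discrete vitals fields out of a dictated phrase."""
    fields: dict[str, str] = {}
    # Match a dictated blood pressure reading, e.g. "blood pressure 120 over 80".
    bp = re.search(r"blood pressure (\d+) over (\d+)", transcript, re.IGNORECASE)
    if bp:
        fields["systolic"], fields["diastolic"] = bp.group(1), bp.group(2)
    # Match a dictated pulse, e.g. "pulse 72".
    pulse = re.search(r"pulse (\d+)", transcript, re.IGNORECASE)
    if pulse:
        fields["pulse"] = pulse.group(1)
    return fields

print(parse_vitals("Blood pressure 120 over 80, pulse 72"))
# → {'systolic': '120', 'diastolic': '80', 'pulse': '72'}
```

Real NLP engines are far more sophisticated than pattern matching, but the output is the same in spirit: named fields the EHR can store, query and bill against, instead of a block of prose.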
If an EHR system is meant to function in concert with voice recognition technology, physicians shouldn’t have to speak in complete sentences or provide comprehensive end-to-end narratives. An EHR system can, and should, be provisioned to exercise dynamic, command-based responses consistent with specific types of procedures, techniques, symptoms, care plans, etc.
For example, an orthopedist should be able to say, “insert medial meniscus non-surgical plan,” and receive a system response customized to his or her standard practice, such as:
1. Schedule MRI [date].
2. Periodic application of ice to affected area.
3. Mild compression wrap and knee immobilizer.
4. Physical therapy ordered. Focus on quadriceps muscle strengthening exercises.
5. Work restrictions include [description].
6. Discussion of conservative versus surgical treatment options.
7. Return for follow-up in [timeframe].
8. Precaution: If swelling or pain increases, notify physician’s office immediately.
9. Precaution: Do not sleep in knee immobilizer.
Additions and edits to the auto-populated verbiage — including date, description and timeframe, as shown above — are also input using voice recognition.
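The command-to-template lookup described above can be sketched in a few lines. This is a hypothetical illustration, assuming the EHR stores per-physician templates keyed by spoken command; the names `COMMAND_TEMPLATES` and `expand_command` are invented for the example, and bracketed slots left unfilled remain visible as prompts for follow-up dictation.

```python
import re

# Per-physician templates keyed by spoken command (illustrative data).
# Bracketed [placeholders] are slots to be filled by follow-up dictation.
COMMAND_TEMPLATES = {
    "insert medial meniscus non-surgical plan": [
        "Schedule MRI [date].",
        "Periodic application of ice to affected area.",
        "Mild compression wrap and knee immobilizer.",
        "Physical therapy ordered. Focus on quadriceps muscle strengthening exercises.",
        "Work restrictions include [description].",
        "Discussion of conservative versus surgical treatment options.",
        "Return for follow-up in [timeframe].",
        "Precaution: If swelling or pain increases, notify physician's office immediately.",
        "Precaution: Do not sleep in knee immobilizer.",
    ],
}

def expand_command(command: str, slots: dict[str, str]) -> list[str]:
    """Look up a spoken command and fill any [placeholder] slots provided."""
    lines = COMMAND_TEMPLATES[command.lower().strip()]
    def fill(line: str) -> str:
        # Replace [name] with its dictated value when supplied;
        # otherwise leave the bracketed slot visible as a prompt.
        return re.sub(r"\[(\w+)\]", lambda m: slots.get(m.group(1), m.group(0)), line)
    return [fill(line) for line in lines]

plan = expand_command(
    "insert medial meniscus non-surgical plan",
    {"date": "June 14", "timeframe": "4 weeks"},
)
```

In this sketch, `plan` would contain "Schedule MRI June 14." and "Return for follow-up in 4 weeks.", while "Work restrictions include [description]." keeps its slot until the physician dictates a value.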
Thousands of dynamic, command-based responses programmed within an EHR system can substantially reduce the time it would otherwise take to perform conventional dictation. Plus, the need for transcription is removed from the equation entirely, easily saving the average physician $30,000 to $50,000 a year.
Trained voice recognition also helps overcome many of the issues surrounding general dissatisfaction with EHR systems. In the absence of voice recognition, physicians usually encounter a lengthy series of screens, tabs, check boxes, radio buttons, form fields and pick lists, consuming 5 to 12 minutes, more than 100 mouse clicks and an abundance of manual data entry to produce a single exam note. With trained voice recognition and dynamic, command-based responses, a single exam note should take less than two minutes.
By adopting an EHR with trained voice recognition, a physician practice typically realizes a 60% decrease in overhead and a 25% increase in patient throughput and billable revenue.
Money talks, and trained voice recognition listens.