US20050102146A1 - Method and apparatus for voice dictation and document production - Google Patents

Method and apparatus for voice dictation and document production

Info

Publication number
US20050102146A1
US20050102146A1 (application US 11/014,807)
Authority
US
United States
Prior art keywords
data
user
file
dictation
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/014,807
Inventor
Mark Lucas
Stephen Miko
Steven Bennington
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/014,807
Publication of US20050102146A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/26: Speech to text systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/10: Text processing
    • G06F 40/166: Editing, e.g. inserting or deleting
    • G06F 40/174: Form filling; Merging
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60: ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue

Definitions

  • the invention relates to the field of document production by dictation, voice-to-text conversion, and automated processing.
  • a typical example may be the method by which physicians document medical procedures. Physicians typically hand-write or dictate their notes at the conclusion of a medical procedure. The notes are sent to a transcriptionist for typing. The transcriptionist creates a typed version of the notes by reading the handwritten notes or by listening to the dictated notes and then typing them by hand. Numerous typing errors may occur because of the transcriptionist's unfamiliarity with the physician's handwriting or because it may be difficult to understand the dictation. If the document is proofread for clarity and a question as to the selection of a transcribed word is raised, then, when the listen-and-type methodology is used, it may be difficult to locate the dictation on a dictation audiotape.
  • hardcopy documents are typically returned to and edited by the physician.
  • Final edited documents are then manually filed with a patient's medical records. If other physicians or an insurance company require copies of the records, the physician's secretarial staff could prepare the copies and transmit them, as required. Often a number of documents must be generated based on one patient visit. For example, an attending physician may send a thank you letter to a referring physician, an insurance form is often required to ensure proper billing, and follow-up notes may be required to verify the status of the patient or laboratory test results. Time and labor is required to generate each of these documents.
  • FIG. 1 is a block diagram of an embodiment of the present invention.
  • FIG. 2 is a flow diagram illustrating a method of operation in accordance with an embodiment of the invention.
  • FIG. 3 illustrates various components of a system in accordance with an embodiment of the invention.
  • FIG. 4 is a flow diagram illustrating another method of operation in accordance with an embodiment of the invention.
  • FIG. 5 is a flow diagram illustrating an alternate method of operation in accordance with an embodiment of the invention.
  • FIG. 6 depicts another embodiment in accordance with the invention.
  • FIG. 7 is a flow diagram illustrating yet another method of operation in accordance with an embodiment of the invention.
  • the present invention uses voice recognition technology and a voice recognition engine to transcribe dictated notes and may eliminate the need for traditional read-and-type or listen-and-type transcription.
  • a user may be provided with a personal computer, workstation, or a handheld computing device (collectively referred to as “computer 10”), that may include a monitor, microphone, and pointing device (not shown) such as a mouse, stylus, or keyboard.
  • the computer 10 is operatively coupled to a processing and storage system 11 .
  • a user may select the identity of an entity (e.g., a patient or client) that will be the subject of a dictation. The entity therefore becomes the “dictation subject.”
  • the entity may be listed on an entity list 12 that may be displayed on a computer 10 monitor.
  • the user may also select a type of document or form (hereinafter “forms”) that the user wishes to populate, or fill-in, with text. Selection of forms is not limited to one document; multiple documents can be selected for sequential or simultaneous data entry and processing during one dictation session.
  • forms may be listed on a forms list 14 that may be displayed on the computer 10 monitor.
  • the forms list 14 of FIG. 1 is depicted as having N forms 14 1 through 14 N, where N is any integer. For ease of illustration, an Nth form 14 N is partially illustrated. Forms are typically divided into sections or fields 16-28. Each field may have its own unique descriptor to identify the field; for example, a descriptor may be "Name," "Address," or any number or letter combination to uniquely identify the field. It will be understood that there is no limit to the number of fields that can be associated with any of the forms in the forms list 14 and that the representation of fields having reference numbers 16-28 is for illustration purposes only.
  • a user may select a first field, for example the “Subjective” field 26 , which the user wishes to fill-in. After the selected field is filled-in, the user may verbally or manually (e.g., with a pointing device) command the system to go to another field in the same document or in one of the other previously selected documents.
  • the user may dictate speech into an audio input 32 of the computer 10 .
  • the audio input may be a microphone.
  • the processing and storage system 11 may automatically generate an audio file 38 , along with an associated transcribed dictation file 40 , and an indexing file 42 . Such generation may be accomplished with the use of a voice recognition engine 36 .
  • the audio file 38 , transcribed dictation file 40 , and indexing file 42 are stored in a memory 44 , which may be, for example, a hard disk or some other type of random access memory.
  • the transcribed dictation file may be saved as an editable text file (hereinafter “editable transcribed text file 40”).
  • the audio file 38 and editable transcribed text file 40 may be indexed by the indexing file 42 , such that each transcribed word of dictation in the editable transcribed text file 40 is referenced to a location, and thus a sound, in the associated audio file 38 .
  • the audio file 38 and editable transcribed text file 40 may be indexed by the indexing file 42 , such that each transcribed letter of dictation in the editable transcribed text file 40 is referenced to a location, and thus a sound, in the associated audio file 38 .
  • Indexing, or tagging, each letter in the editable transcribed text file 40 improves playback of the audio file and improves editing capability by providing more granularity to the process.
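  • As a rough sketch of this indexing idea (the class names and millisecond offsets below are illustrative assumptions; the patent does not specify a file layout), each transcribed word or letter can be tagged with the span of its sound in the audio file so an editor can replay exactly what was dictated:

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    text: str       # one transcribed word (or letter, for finer granularity)
    start_ms: int   # where the matching sound begins in the audio file
    end_ms: int     # where it ends

class DictationIndex:
    """Hypothetical indexing file: maps positions in the transcribed
    text to locations (and thus sounds) in the associated audio file."""

    def __init__(self) -> None:
        self.entries: list[IndexEntry] = []

    def add(self, text: str, start_ms: int, end_ms: int) -> None:
        self.entries.append(IndexEntry(text, start_ms, end_ms))

    def audio_span(self, position: int) -> tuple[int, int]:
        """Return the span of audio to replay for the selected word."""
        entry = self.entries[position]
        return entry.start_ms, entry.end_ms
```

An editor who questions the fifth word would call audio_span(4) and replay just that slice of the associated audio file.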
  • the process of editing transcribed dictation is improved by enabling an editor to select a questioned word or words (or alternatively letter or letters), from the editable transcribed text file 40 and hear the user's recorded voice associated with that selection.
  • the editor can click on the text and hear the user's voice associated with that text.
  • the editor can correct any errors due to the voice recognition engine's 36 interpretation of the voice.
  • the voice model may be updated by editing a single word into one or more words, or multiple words into a single word. Alternatively, or conjunctively, the model may be updated by editorial manipulation of single letters of text.
  • Associated with the voice recognition engine 36 is a database of voice profiles 37 for each user.
  • the correction of errors in the voice recognition engine's 36 interpretation of the user's voice may be synchronized with the user's voice profile, thus updating the user's voice profile. Because the user's voice profile has been updated, the same error is less likely to occur again.
  • the process of editing improves the user's voice model.
  • the file containing the approved text is saved in a read-only format (the saved file will hereinafter be referred to as "read-only format file 40A"), thus effectively deleting the editable transcribed text file 40 from memory 44.
  • the read-only format file 40A may be signed and stored as an electronic signature. Saving the approved text in a read-only format avoids accidental or deliberate tampering with the approved text.
  • the audio file 38 generated in concert with the editable transcribed text file 40 as well as the associated indexing file 42 , may be deleted from the memory 44 after the editable transcribed text file 40 is approved.
  • Logical storage of the pre-approved editable transcribed text file 40 may be in a first section of memory 46 reserved for editable text, while logical storage of post approved read-only transcribed text file 40 A may be in a second section of memory 48 reserved for read-only text.
  • Storage in separate logical memory locations improves the speed for a user to replicate a database at a remote location. The scalability to multiple remote sites may be improved with separate logical storage because a user need only mirror read-only transcribed text files, and may thus avoid the unnecessary copying of large audio files and editable files that may not be required at the remote sites.
  • the editable transcribed text file 40 and the corresponding read-only transcribed text file 40 A need not share memory 44 contemporaneously with one another. Additionally, the editable transcribed text file 40 and the read-only transcribed text file 40 A may be stored in a common section of memory.
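  • As a sketch of why the two-section layout helps replication, the hypothetical routines below (directory names and the approval flow are assumptions, not the patent's implementation) approve a file into the read-only section and mirror only that section to a remote site:

```python
import shutil
from pathlib import Path

EDITABLE_DIR = Path("storage/editable")   # first section: editable text, audio, index
READONLY_DIR = Path("storage/readonly")   # second section: approved read-only text

def approve(text_file: Path, audio_file: Path, index_file: Path) -> Path:
    """Move approved text into the read-only section and delete its
    working files, as the surrounding text describes."""
    READONLY_DIR.mkdir(parents=True, exist_ok=True)
    target = READONLY_DIR / text_file.name
    shutil.move(str(text_file), str(target))
    target.chmod(0o444)                    # mark the file read-only
    audio_file.unlink(missing_ok=True)     # large audio file no longer needed
    index_file.unlink(missing_ok=True)     # nor its synchronization index
    return target

def mirror_to_remote(remote_dir: Path) -> None:
    """Replicate only the read-only section; audio and editable files
    never cross the wire, which keeps remote mirroring fast."""
    remote_dir.mkdir(parents=True, exist_ok=True)
    for f in READONLY_DIR.iterdir():
        shutil.copy2(f, remote_dir / f.name)
```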
  • the process of generating documents may be improved by giving the user access to legacy information, such as data pertinent to each entity in the entity list 12 .
  • This data may already be stored in an existing database 50 of the user.
  • a user in the medical profession, such as a physician, may have a practice management system holding a wealth of demographic information about each patient seen in the physician's practice.
  • the demographic information may be stored in a database format.
  • Each item of data may therefore be available for use by the processing and storage system 11 .
  • a physician may have a schedule or roster of patients that will be seen on a given day.
  • Each patient may be listed in the physician's practice management system database 50 (i.e., the existing database 50 ).
  • patient demographic data for patients to be seen on the given day, may be downloaded from the practice management system database 50 to a patient demographic database 51 before the physician sees the patient.
  • the physician may identify the patient to the processing and storage system 11 by use of the entity list 12 .
  • Entity list 12 is illustrated with M entities, where M is an integer. An embodiment of the invention may accommodate any number of entities; however, it is noted that the number of entities represented in the database 50 need not be equal to the number of entities listed on the entity list 12 .
  • the entities may be removed from the list 12 to show that the entity has been addressed. This gives a visual reference to the user as to what work has and has not been completed. For example, if a patient's name is selected from the entity list 12 , the name is removed from the entity list 12 after the physician dictates his notes. This indicates to the physician that he has dictated a note for that particular visit of that patient. If at the end of the day the physician has an empty entity list 12 , then he may understand that he has completed all required dictation. If there are names left on the entity list 12 , then the physician may understand that he may be required to complete further dictation.
  • the physician may select the form(s) that will be completed from the forms list 14 .
  • the processing and storage system 11 may automatically fill-in fields, for example fields 16 - 28 , for which data is available, from the downloaded practice management system database 50 or data added directly into the patient demographic database 51 .
  • Downloaded information may include patient name, address, telephone number, insurance company, known allergies, etc., but, of course, is not limited to these items.
  • An embodiment of the invention may dynamically generate and distribute forms, reports, chart notes, or the like based on the entered dictation. Such documents may be placed in electronic archival storage within the user's own control and additionally the processing and storage system 11 may automatically send copies of these documents to third parties.
  • the term “documents” includes both electronic and hard copies.
  • a physician may dictate chart notes (i.e., a summary of the results of a patient visit) into the processing and storage system 11 via the computer 10 . Because dictated information is entered into predefined fields, the processing and storage system 11 may integrate the dictated information into an electronic medical chart that can be archived, retrieved, and searched. Forms, reports, charts, or the like can be sent to third parties via any communication channels, such as fax 52 , print 54 , and e-mail 56 .
  • FIG. 2 is a flow diagram illustrating a method of operation in accordance with an embodiment of the invention.
  • the method of operation is illustrated in the context of a medical practice; however, it will be understood by those of skill in the art that the method is equally applicable to other services and industries as well.
  • selected data from database 50, which holds information relating to patient practice management, is downloaded to a patient demographic database 51.
  • a first user, i.e., the physician, may select a patient name from the list of patient names previously downloaded.
  • the first user may select the form or forms that will be filled-in during a dictation session.
  • the forms used can be unique to the user's own practice or industry.
  • the system dynamically generates forms by compiling separate fields of data.
  • Data-mining relates to the process of searching a database, or other storage place of data, for particular data or sets of data.
  • the use of separate fields is a benefit because known existing databases for use in dictation generally have data entered in a free-form style.
  • by "free-form" it is meant that text is dictated in a free-flow format into essentially one data field, as contrasted with text dictation into structured and distinct fields. Free-form dictation results in data storage that is not amenable to document generation or data-mining. Forms customization allows discrete data to be captured and saved.
  • the system may fill-in various fields in any of the forms selected by the first user.
  • Data used by the system to fill-in the forms may come from the patient demographic database 51 , which was populated with data downloaded from the first user's own database 50 or from other sources. Because forms are divided into fields, the text in like fields may be shared between different forms and generation of multiple forms may occur contemporaneously. This is an improvement over existing systems, which require a user to fill-in one form at a time. The completion of one form at a time may be driven from a system requirement to engage a voice recognition engine to complete one form and then disengage the voice recognition engine before moving onto the next form.
  • Completion of the dictation session is slowed in that instance, because the user may be duplicating his efforts by filling-in like fields in different forms.
  • several forms may be generated in one session, without the need to dictate entries for one form, close that form, then dictate entries for another form. Once all desired forms are identified, the user can populate the fields of each of the forms in one session.
  • the first user may select a first field and begin dictation.
  • the first user can use voice navigation to select a field, where voice navigation includes the speaking of the desired field name to effect data entry into that field.
  • Data entry includes all forms of spoken word, including numbers. Any type of data entry may be accommodated, for example, both text boxes and check boxes may be used. Text may be entered into a text box and checkboxes may be checked or unchecked by voice entry. Pointing devices need not be used. Thus, if there are four fields the first user can say “field one” and the text will be entered into field one.
  • the first user can then say “next section” or call the next section by name, such as “field two.” Of course fields can be named with common names such as “subjective” or “allergies,” and need not be numbered.
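  • A minimal sketch of this voice-navigation dispatch might look like the following; the field names, ordinal words, and routing logic are illustrative assumptions rather than the patent's actual implementation:

```python
FIELDS = ["subjective", "objective", "assessment", "plan"]  # invented form fields
ORDINALS = {"one": 0, "two": 1, "three": 2, "four": 3}

class FormNavigator:
    """Routes each utterance either to navigation or to the current field."""

    def __init__(self, fields):
        self.fields = fields
        self.current = 0
        self.form = {name: "" for name in fields}

    def handle_utterance(self, utterance: str) -> None:
        spoken = utterance.strip().lower()
        if spoken == "next section":                 # move to the next field
            self.current = min(self.current + 1, len(self.fields) - 1)
        elif spoken in self.fields:                  # e.g., "subjective"
            self.current = self.fields.index(spoken)
        elif spoken.startswith("field ") and spoken.split()[1] in ORDINALS:
            self.current = ORDINALS[spoken.split()[1]]   # e.g., "field two"
        else:                                        # ordinary dictation text
            name = self.fields[self.current]
            self.form[name] = (self.form[name] + " " + utterance).strip()

nav = FormNavigator(FIELDS)
nav.handle_utterance("subjective")
nav.handle_utterance("Patient reports mild headache.")
nav.handle_utterance("next section")
```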
  • the system provides a visual and/or audible cue to the user to allow the user to understand that the system is ready to accept dictation.
  • the background of a dictation screen on the computer monitor turns yellow so that the user can easily tell if the voice recognition engine is engaged.
  • when the command “stop dictation” is issued, the background of the dictation screen returns to its original, pre-dictation color.
  • one embodiment emits an audible tone so that the user does not have to look at the computer screen during dictation.
  • the combination of yellow screen and audible tone makes it clear to the user when the voice recognition engine is starting and stopping, thus avoiding any unnecessary repetition of dictation.
  • the first user's dictation is applied to a voice recognition engine.
  • the output of the voice recognition engine populates fields with like-names in different documents. There is no need to disengage the voice recognition engine in order to dictate a second form.
  • a patient may come to a physician's office for an examination.
  • the physician may use an embodiment of the present invention to document the encounter.
  • the physician may choose a familiar form in which to enter data and can dictate data directly into that form.
  • the physician may also need to generate a request for laboratory work to be performed at a testing laboratory, a follow-up note to the patient, and a thank you letter to the referring physician.
  • Each of these multiple documents may have some fields that are identical to the fields used to record the encounter with the patient, for example “name” and “address.”
  • the system can populate the multiple documents at substantially the same time that the system populates the first document chosen by the physician.
  • the system may compile all fields into the selected form(s), and thus generate the selected document(s). Because transcribed information is stored in fields, rather than actual assembled documents, a user may create numerous documents by assembling or merging the appropriate fields into a form represented by a document listed on the forms list 14 ( FIG. 1 ). The assembled fields may then be presented as a completed document.
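  • To make the field-sharing concrete, here is a small sketch under assumed form definitions: text dictated once per field descriptor surfaces in every selected form that carries that descriptor:

```python
# Transcribed text is stored per field descriptor, not per document.
field_store: dict[str, str] = {}

# Hypothetical forms from the forms list 14, each a list of field descriptors.
FORMS = {
    "chart_note":      ["name", "address", "subjective", "assessment"],
    "referral_thanks": ["name", "referring_physician"],
    "lab_request":     ["name", "address", "tests_ordered"],
}

def enter_dictation(descriptor: str, text: str) -> None:
    """Dictating into a field once makes it available to every form."""
    field_store[descriptor] = text

def assemble(form_name: str) -> dict[str, str]:
    """Compile a document by merging stored fields into the form's fields."""
    return {f: field_store.get(f, "") for f in FORMS[form_name]}

enter_dictation("name", "John Brown")            # spoken once...
print(assemble("chart_note")["name"])            # ...fills the chart note,
print(assemble("referral_thanks")["name"])       # the thank-you letter,
print(assemble("lab_request")["name"])           # and the lab request.
```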
  • the system may allow the user to recite a sound known by the system to represent a certain string of text.
  • Such an abbreviated dictation tool is known as a “macro.”
  • frequently used phrases are called "norms" or "normals," and can be completed by the use of a macro.
  • the method of inserting a macro into a string of words in a text file may include: correlating the string of words against entries in a database of command strings; copying, upon identity of correlation, the macro at a pointer address of the command string; and replacing the correlated string of words with the copied macro.
  • the user may indicate to the system that the user's next word will be a macro. In an embodiment of the invention, the user may indicate that the next word is a macro by saying the word “sub” followed by the name of the macro.
  • a physician may say “sub thanks” and the system may generate the following: “Thank you for referring the above-identified patient to our offices.”
  • the use of norms in the medical services field is well known; however, an embodiment of the invention allows for the use of what are referred to by the inventors as "variable macros" and "prompted macros."
  • a variable macro combines a macro with a data variable retrieved from a database.
  • a user may say “sub thanks” and the system may generate the following: “Thank you for referring [PATIENT NAME] to our offices.”
  • [PATIENT NAME] is a data field and the instance of [PATIENT NAME] to be substituted in the example above would be defined by the selection of an entity from the entity list 12 at the beginning of the dictation session.
  • if the entity were named "John Brown," the actual text generated by the system would be: "Thank you for referring John Brown to our offices."
  • a prompted macro allows a user to generate text that requires the insertion of variables that may not be present in the patient demographic database 51 .
  • the prompted macro is used as follows. The physician says “sub macro_name,” waits for a prompt from the system such as a beep, and then says or enters the variable data.
  • the physician may say “sub high lead,” wait for a beep, and then say “five.”
  • the system in turn may generate the following text: “The results of your lead blood screening indicate a level of 5 deciliters/liter. This level is higher than would normally be expected.”
  • the variable “5” was inserted into an appropriate spot in the text of the macro.
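  • A sketch of the three macro flavors, using the "sub thanks" and "sub high lead" examples above (the template syntax and function shape are assumptions for illustration):

```python
MACROS = {
    "thanks": "Thank you for referring {PATIENT_NAME} to our offices.",
    "high lead": ("The results of your lead blood screening indicate a level "
                  "of {VALUE} deciliters/liter. This level is higher than "
                  "would normally be expected."),
}

def expand_macro(name, demographics, prompted_value=None):
    """Expand "sub <name>": plain macros return fixed text, variable macros
    pull fields such as [PATIENT NAME] from the demographic database, and
    prompted macros splice in a value spoken after the system's beep."""
    text = MACROS[name]
    text = text.replace("{PATIENT_NAME}", demographics.get("name", ""))
    if "{VALUE}" in text:
        if prompted_value is None:
            raise ValueError("prompted macro: system beeps and waits for a value")
        text = text.replace("{VALUE}", prompted_value)
    return text

print(expand_macro("thanks", {"name": "John Brown"}))
# -> Thank you for referring John Brown to our offices.
print(expand_macro("high lead", {}, prompted_value="5"))
# -> ...indicate a level of 5 deciliters/liter...
```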
  • the compiled fields and audio may be stored in memory.
  • the user may dictate, edit, or view the compiled documents. If further dictation, editing, or viewing is required, the first user may return to step 208 .
  • the first user or a second user may edit the document. Authorization of the second user may be required.
  • the second user may edit from any workstation or other input/output device associated with system. Additionally, any editing that the second user performs may update the first user's voice model. This may be important in improving accuracy. Any person with authorization may view the documents on a workstation in communication with the processing and storage system. If further dictation, editing, or viewing of compiled documents is not desired, then at step 218 further processing of the documents may occur.
  • Further processing may include, but is not limited to: secure storage of transcribed text files in a read-only format; creation of electronic medical records (“EMRs”) or charts that logically combine information for a patient; creation of voice enabled EMRs; display of documents on a monitor; faxing; printing; or e-mailing documents using pre-defined settings.
  • Automated transmission of any document to a pre-defined recipient is accommodated in one embodiment in accordance with the invention.
  • Each created document may be appended to its corresponding patient's electronic patient chart, eliminating any need for cutting and pasting found in some other applications.
  • a search function allows users to retrieve documents using a variety of search options such as keyword, date, patient name, or document type.
  • FIG. 3 illustrates various components of a system in accordance with an embodiment of the invention.
  • a first server 300 includes a practice management system 302 .
  • the practice management system 302 may be a third party system used for entry, storage, and processing of patient demographic data.
  • Patient demographic data may be stored in a patient database 304 .
  • Patient demographic data includes information having, but not limited to, such field headings as date of birth, primary insurer, employer, social security number, address, visit date, and referring physician.
  • Patient demographic databases may include hundreds of fields of information for each identified patient.
  • the practice management system 302 may process this information to generate, for example, patient bills and patient reports. Examples of practice management systems 302 that may be used in accordance with an embodiment of the invention are The Medical Manager™ by WebMD and Lytec™ by NDC Medical. These are examples in the medical industry, but the invention would operate similarly with informational databases in other industries as well.
  • Data stored in the patient database 304 may be in a number of formats including, but not limited to, Access, SQL, MSDE, UNIX, and DOS.
  • the practice management system 302 may also include a patient schedule or roster database 306 .
  • the patient roster database 306 includes information indicative of which patient will be seen on a given day.
  • a timed script 308 or real-time interface may be used to query the patient roster database 306 to determine which patients will be seen on a given day.
  • the timed script 308 may then effectuate a download of patient demographic data of the patients to be seen on the given day. It may also download patient demographic data that may have been updated since a previous download.
  • the download of patient demographic data from the patient database 304 may be to a temporary directory 310 on the first server 300 .
  • the timed script 308, which may be a length of computer code that performs an action on a timed interval, logs into the practice management system 302 as a user.
  • the timed script 308 may generate a specific report, such as a demographic download for the patients designated to be seen on the next day by the patient roster database 306 . It will be understood, however, that the timed script 308 can retrieve data for any period of time, and for any possible patient selection criteria.
  • the download may be written to the temporary directory 310 on the first server 300 .
  • One of skill in the art will recognize that different methodologies may be used to download patient demographic data without departing from the scope of the invention. Downloading methodologies may depend, in part, on the particular practice management system 302 in use and may also be completed in “real-time.”
  • the timed script 308 may also facilitate the parsing of downloaded data.
  • the patient database 304 may have twenty tables and five hundred fields.
  • the timed script 308 may be used to extract just the data required to be operated upon by an embodiment of the invention from a subset of these fields, for example seventy-five out of the five hundred fields.
  • the timed script 308 may also generate a transferable file 312 .
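  • A hypothetical rendering of such a timed script is sketched below; roster_db and patient_db stand in for the roster database 306 and patient database 304, their lookup methods are invented, and the CSV layout of the transferable file is an assumption:

```python
import csv

# The subset of fields the application needs (e.g., 75 of 500 available).
NEEDED_FIELDS = ["patient_id", "name", "date_of_birth", "address", "primary_insurer"]

def run_download(roster_db, patient_db, out_path: str = "transferable_file.csv") -> None:
    """Log in on a timed interval, find the next day's patients, extract
    only the needed demographic fields, and write the transferable file."""
    patient_ids = roster_db.patients_for_next_day()     # hypothetical API
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=NEEDED_FIELDS)
        writer.writeheader()
        for pid in patient_ids:
            record = patient_db.lookup(pid)             # hypothetical API
            writer.writerow({k: record.get(k, "") for k in NEEDED_FIELDS})

# A scheduler (cron, or Python's sched module) would invoke run_download
# nightly; the trigger mechanism is left out of this sketch.
```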
  • the transferable file 312 may then be transferred to a second server 314 .
  • the second server 314 may be coupled to the first server 300 by a communications network 316 .
  • the communications network 316 may be a public switched telephone network, an intranet, the Internet, or any other communications network, or any combination of data-bearing networks suitable to couple the respective servers 300 , 314 to allow communication and data transfer to be performed therebetween. While two servers are shown, the methods and apparatus described herein can be performed equally well by sharing one server, or by sharing two or more servers.
  • the second server 314 includes an application 316 to operate on data included in the transferable file 312 .
  • the data included within the transferable file 312 includes data stored in the temporary directory 310 of the first server 300 .
  • the application 316 includes a practice management systems interface 318 and a patient demographic database 320 .
  • the practice management systems interface 318 parses the data included in the transferable file 312 , which is transmitted from the first server 300 to the second server 314 via the communications network 316 .
  • the practice management systems interface 318 parses and maps the data such that the data can be indexed and entered into the appropriate locations in the second server's 314 patient demographic database 320 .
  • the practice management system interface 318 may therefore be used to map data from a field having a first heading into a field having a second heading.
  • the practice management system interface 318 provides versatility to the embodiment of the invention by allowing the invention to interface with a plurality of practice management systems.
  • the fields may be used for storage of indexed data from any practice management system provided the appropriate mapping is performed by the practice management system interface 318 . In one embodiment, more than seventy-five fields are used.
  • Exemplary fields may include: user ID 322, which defines a voice model to use for voice-to-text conversion for a given voice recognition engine; patient ID 324, which may include the patient's social security number or other unique identification number; visit date 326, which may include the date the patient saw the physician; referring physician 328, which may include the name, address, phone number, and/or other indicia of a physician that referred the patient to the attending physician; date of birth 330 and primary insurer 332, which are self-explanatory; and other fields. Data may also be added manually into fields.
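  • The interface's parse-and-map step might be sketched as follows; the source headings are invented stand-ins for whatever headings a particular practice management system exports:

```python
# Invented source headings; real headings vary by practice management system.
FIELD_MAP = {
    "pt_ssn":      "patient_id",            # patient ID 324
    "birth_dt":    "date_of_birth",         # date of birth 330
    "ref_md":      "referring_physician",   # referring physician 328
    "ins_primary": "primary_insurer",       # primary insurer 332
}

def map_record(source_record: dict) -> dict:
    """Re-key one record from the transferable file so it can be indexed
    into the patient demographic database regardless of the source system."""
    return {FIELD_MAP[k]: v for k, v in source_record.items() if k in FIELD_MAP}

row = {"pt_ssn": "123-45-6789", "birth_dt": "1960-02-14", "ins_primary": "Acme Health"}
print(map_record(row))
# {'patient_id': '123-45-6789', 'date_of_birth': '1960-02-14', 'primary_insurer': 'Acme Health'}
```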
  • the second server 314 may also include a voice recognition engine 334, such as the runtime portion of ViaVoice™ manufactured by IBM.
  • a voice recognition engine is separable into at least two parts: 1) runtime software to perform voice-to-text translation and manage dictated speech in a .WAV file, and 2) administrative software to generate screens and help files and provide the ability to correct translated text.
  • Other voice recognition engines and runtime software may be used without departing from the scope of the invention.
  • An embodiment of the invention may use open-architecture, with respect to the voice recognition engine, so as new voice recognition technologies are developed they can be integrated into the invention.
  • the voice recognition engine 334 may receive voice input (i.e., dictation) via a coupling to a noise-canceling microphone 336 .
  • Noise-canceling microphones are available in many different styles, such as handheld, tabletop, and headset integrated.
  • the function of a noise-canceling microphone 336 is to help eliminate background noise that may interfere with the accuracy of the voice recognition engine's 334 speech-to-text function (i.e., transcription of spoken words into textual words).
  • the particular noise-canceling microphone 336 used may depend upon the recommendation of the manufacturer of the voice recognition engine 334 .
  • a model ANC 700 noise-canceling microphone manufactured by Andrea Corporation is used.
  • Some noise-canceling microphones 336 are coupled to the voice recognition engine 334 via a sound card 338 . Still other voice recognition engines may receive voice input from a noise-canceling microphone coupled to a Universal Serial Bus (“USB”) port 340 on the server.
  • the voice recognition engine 334 may use user voice models 342 , user specific vocabularies 343 , and specialty specific vocabularies 344 to effect the transcription of voice-to-text.
  • the user voice models 342 , user specific vocabularies 343 , and specialty specific vocabularies 344 may be stored in the memory of the second server 314 .
  • Models and vocabularies may be selected based on User ID 322 .
  • User ID 322 may come from the physician dictating speech into the noise-canceling microphone 336 .
  • the physician may enter identification information into the system by means of a computer interface 337 , such as a keyboard or other data entry device before dictating his or her spoken words. Verification of entered data may be accomplished in real time by observation of a computer video monitor/display 335 .
  • the specialty specific vocabulary 344 may be a database of sounds/words that are specific to a given specialty, such as law or medicine. Using medicine as an example, the single word “endometriosis” may be transcribed as the group of words “end ‘o me tree ‘o sis” by a voice recognition engine not augmented by a specialty specific vocabulary 344 . Additionally, correction may allow words to be automatically added to a user's vocabulary.
  • the user specific vocabulary 343 may allow users to add words that may not be in a specialty specific vocabulary 344 .
  • the voice recognition engine 334, user voice model 342, user specific vocabularies 343, specialty specific vocabulary 344, sound card 338, and USB port 340 may all be included in a computer workstation or personal computer 331 1 that is physically separated from, though still in communication with, the second server 314.
  • Multiple workstations, in a networked computer system, represented by reference numbers 331 2 - 331 X , where X is any integer, may access the second server 314 .
  • the multiple workstations 331 1 - 331 X need not be identical.
  • the voice recognition engine 334 may all be included in the second server 314 , as illustrated by the dashed line labeled 333 in FIG. 3 .
  • All voice models may reside on the second server 314 and be moved to each user when the user logs on. The models can be copied back to the second server if, for example, a voice model changes during that session.
  • the output of the voice recognition engine 334 may be applied to a database 346 that stores text files and associated other files related to a dictation session.
  • the format of the database 346 may be, for example, Access™ by Microsoft, SQL Server™ by Sybase, or Microsoft Data Engine (MSDE™) by Microsoft. Other formats may be used without departing from the scope of the invention.
  • Files may be logically stored in the database 346 based on, for example, whether they are stored awaiting editing or stored for archival purposes.
  • of the files stored for editing, at least three types may be stored: 1) an audio file 348 that is generated by the voice recognition engine 334 as a user dictates; 2) a corresponding editable text file 350 that was either generated concurrently with the audio file, was generated by running the audio file through a voice recognition engine 334, or may have been typed in; and 3) a synchronization and indexing file 352 that synchronizes and indexes the sounds in the audio file 348 to the text in the editable text file 350.
  • the audio file 348 may be in .WAV format; other formats may also be used.
  • the editable text file 350 may remain in an editable format throughout any processing of the files that may be required. Processing may include any number of cycles of editing, review, and approval.
  • the data in the editable text file 350 may be stored in a read-only format (referred to hereinafter as "read-only text file 354").
  • Read-only text files 354 are stored without an association to an audio file or a corresponding synchronization and indexing file.
  • the audio file 348 , editable text file 350 , and synchronization and indexing file 352 used to prepare the read-only text file 354 are deleted from memory 346 once the read-only text file 354 is approved.
  • a read-only text file 354 may be signed and stored as an electronic signature.
  • FIG. 4 is a flow diagram illustrating another method of operation in accordance with an embodiment of the invention.
  • the method of operation is exemplified by a method with which a physician may use the embodiment of the invention; however, it will be understood that the method may be used in any field of endeavor.
  • the reference numbers of FIG. 3 will be used as an aid in the description of the method of operation in accordance with FIG. 4 , however use of these reference numbers will not constrain the method of operation to the embodiment of FIG. 3 .
  • One method for transferring data is a timed script. Other methods include a real-time pull of data from a database and pull of data on demand from a database.
  • the system may retrieve information related just to one dictation subject (i.e., one entity) or many dictation subjects.
  • a timed script 308 on a first server 300 logs into the first server 300 .
  • the timed script 308 acquires the identities of patients to be seen by a physician on a given day.
  • the timed script 308 downloads data from a patient database 304 to a temporary directory 310 .
  • the timed script 308 generates a transferable file 312 using data downloaded to the temporary directory 310 .
  • the timed script 308 downloads the transferable file 312 to a communication network 316 for transfer to a second server 314 .
  • the transferable file 312 is downloaded from the communications network 316 to the practice management system interface 318 included in an application 316 on the second server 314.
  • a practice management system interface 318 populates a patient demographic database 320 with data mapped from the transferable file 312 .
  • a user, e.g., a physician, accesses the application 316 via a computer interface 337.
  • the physician may indicate his or her identity (e.g., User ID 322 ) to the application, so that the application is able to associate the physician's User ID with the physician's voice model (e.g., a voice model stored in user voice model file 342 ).
  • the physician may also indicate a patient ID (e.g., Patient ID 324 ), so that the application will be able to associate the patient's ID with patient data downloaded from the first server 300 in the transferable file 312 .
  • the physician may also indicate which form types the physician will be using to structure his dictation. Form types may be stored in the memory of the second server 314 .
  • Access to the application and indication of User ID and Patient ID may be made by voice using a noise-cancellation microphone 336 or by manual manipulation of a computer interface 337. Verification of entries is accomplished in real time by observation of a computer video monitor/display 335.
  • the physician may dictate a report to the application 316 by speaking into the noise-cancellation microphone 336 .
  • the report may be structured in accordance with the form the physician has selected for initial data input. By "structured," it is meant that the report may be broken down into a plurality of fields, each field having associated therewith a field name. The first user indicates to the application into which field the user will enter dictation.
  • Dictation (i.e., speech) is applied to the voice recognition engine 334, which substantially simultaneously generates an audio file 348 corresponding to the dictated speech, transcribes the dictated speech into an editable text file 350, and generates a synchronization and indexing file 352.
  • the audio file 348 may be in .WAV format; other formats may also be used.
  • the synchronization and indexing file 352 associates each transcribed word of text in the editable text file 350 with a sound in the audio file 348 .
  • the physician indicates to the application that entry of dictation in the first field is complete or that the dictation session is at an end. If dictation is not complete or the dictation session is not at an end, the user may return to step 416 .
  • the indications may be explicit, as when the user indicates that dictation in the field is complete, or it may be implicit, as when the user indicates that dictation should commence in the next field. Such an implicit indication may be in the form of the utterance of the words “next section.”
  • the application 316 may recognize that data input to a field is complete, as in the case of a checkbox field, where once a box is checked or not checked no further data entry is feasible.
  • the application stores the audio file 348 , the editable text file 350 , and the synchronization and indexing file 352 in the second server 314 .
  • the editable text file 350 is edited/processed, with or without the use of the audio file 348 and synchronization and indexing file 352. Editing/processing may occur immediately or may be deferred. Even if a note is deferred, a user may return to the note and dictate or otherwise add to the deferred file (note).
  • the content of the editable text file 350 is approved.
  • the editable text file 350 is saved in a read-only format.
  • the editable text file 350, which has been saved in a read-only format, will hereinafter be referred to as read-only text file 354.
  • the editable text file 350 , audio file 348 , and synchronization and indexing file 352 that resulted in the generation of the read-only text file 354 are deleted from the second server 314 .
  • Storing the approved dictated form as a read-only text file 354 prevents persons or automated processes from tampering with the file.
  • deleting the editable text file 350 and its associated audio file 348 , and synchronization and indexing file 352 from the system provides additional storage space for new files.
  • the application 316 may generate output, such as reports, faxes, and/or emails, by compiling fields previously saved as read-only text files 354 .
  • FIG. 5 is a flow diagram illustrating an alternate method of operation in accordance with an embodiment of the invention.
  • the system acquires data on dictation subjects from a source, such as, for example, a first server.
  • Data acquisition may be by any manner known to those of ordinary skill.
  • One method of transferring data may be by timed script. Other methods include a real-time transfer of data and a transfer of data on demand.
  • acquired data may be downloaded from the source to a transferable file.
  • the transferable file may be downloaded to a server upon which is located an interface for data processing.
  • the interface populates a Dictation Subject Demographic Database with data mapped from the transferable file.
  • a user accesses the application via a computer interface.
  • the computer interface may be one node in a plurality of nodes of a networked computer system.
  • the physician may indicate his or her identity to the application, so that the application would be able to associate the user's identity with the user's voice model.
  • the user may also indicate a Dictation Subject identity, so that the application would be able to associate the Dictation Subject's identity with the Dictation Subject data downloaded from the source and included in the Dictation Subject Demographic Database.
  • the physician may also indicate which form types the user will be using to structure his dictation.
  • Access to the application and indication of User ID and Patient ID may be made by voice using a noise-cancellation microphone or by manual manipulation of a computer interface. Verification of entries is accomplished in real time by observation of a computer video monitor or by audible cues provided by the application.
  • the user may dictate notes into selected fields in the form or forms chosen by the user. Each form may be broken down into a plurality of fields, each field having associated therewith a field name.
  • a voice recognition engine receives the dictation and substantially simultaneously generates an audio file corresponding to the dictated speech, transcribes the dictated speech into an editable text file, and generates a synchronization and indexing file.
  • the audio file may be in .WAV format; other formats may also be used.
  • the synchronization and indexing file associates each transcribed word of text in the editable text file with a sound in the audio file.
  • the user may add text into selected fields by returning to the step of dictating notes into selected fields, step 510. If, at step 514, the dictation is complete, then, at step 516, the application stores the audio file, the editable text file, and the synchronization and indexing file in a memory.
  • the memory may be on a server in the networked computer system.
  • the editable text file may be edited/processed, with or without the use of the audio file and synchronization and indexing file. Editing/processing may occur immediately or may be deferred. Even if a note is deferred, a user may return to the note and dictate or otherwise add to the deferred file (note).
  • the user, or any other authorized user from any computer or workstation in the networked computer system can recall the saved editable text file document or form and add dictation or edit the document or form. In an embodiment, free-form dictation may be limited to the user, while any other authorized user may be limited to dictating corrections.
  • the editable text file is saved in a read-only format.
  • the editable text file, which has been saved in a read-only format, will hereinafter be referred to as a read-only text file.
  • the editable text file, audio file, and synchronization and indexing file are deleted from memory. Storing the approved dictated form as a read-only text file prevents persons or automated processes from tampering with the file. Furthermore, deleting the editable text file and its associated audio file, and synchronization and indexing file from the system provides additional storage space for new files.
  • the application may generate reports, faxes, and/or emails by compiling fields previously saved as read-only text files.
  • FIG. 6 depicts another embodiment in accordance with the invention.
  • a handheld computing device 600, such as a Cassiopeia® by Casio, a Jornada™ by Hewlett-Packard, or an iPAQ™ by Compaq, using a Windows CE™ operating system from Microsoft Corp., may be used to record initial dictation.
  • Other operating systems, such as the Palm™ Operating System by 3COM, Inc., may alternatively be used so long as they support an audio recording capability.
  • the handheld computing device 600 runs an audio recording application 602 at a sampling rate of 11 kHz that generates a .WAV formatted audio file 604 .
  • the audio file 604 is generated as dictation is entered into the handheld computing device 600 .
  • Dictation may be entered into the handheld computing device 600 via a microphone 606 in communication with the handheld computing device 600 .
  • the handheld computing device 600 may acquire data from a server 614 via a data transfer mechanism 617 .
  • the data transfer mechanism 617 may include, for example, a modem, a LAN (local area network) interface, an Internet connection, wireless interconnection including radio waves or light waves such as infrared waves, removable data storage device, or hard wired serial or parallel link.
  • the data transfer mechanism 617 may be a removable data storage device, such as a CompactFlash™ memory card by Pretec Electronics Corp.
  • the removable data storage device has a storage capacity of 64 megabytes.
  • the size of the removable data storage device is related to the amount of dictation and data a user desires to store on the card; other sizes may be used without departing from the scope of the invention.
  • the removable data storage device may be removed from the handheld computing device 600 and placed into a data storage device reader (not shown), such as the USB CompactFlash™ card reader by Pretec Electronics Corp.
  • the data storage device reader can transfer data from the removable data storage device to the server 614 or can transfer data from the server 614 to the removable data storage device.
  • data acquired by the handheld computing device 600 from the server 614 via the data transfer mechanism 617 may include data from a practice management system, which has demographic data on each patient seen in the practice.
  • Patient demographic data and scheduled patient information may be collected in the same manner as described in the text related to FIGS. 3, 4 , and 5 .
  • Patient demographic data perhaps in the form of a transferable file such as transferable file 312 ( FIG. 3 ) may be input to the handheld computing device 600 and stored in a practice management system interface database 610 .
  • the amount of patient demographic data downloadable to the handheld computing device 600 and the amount of functionality that may be incorporated into the handheld computing device may be limited by the memory and storage capacity of the handheld computing device 600. As the memory and storage capacity of handheld computing devices increase, the amount of data and functionality incorporated within the handheld device should commensurately increase. Nothing herein should be construed so as to limit the types or amounts of data, or to restrict any of the various functionalities of the invention disclosed herein, from being incorporated to the greatest extent possible into a handheld computing device.
  • Patient demographic data may be used to organize information on the handheld computing device 600 .
  • the information downloaded may include demographic data as well as past dictated notes.
  • the handheld computing device 600 may import application data 609 , such as, but not limited to, forms, charts, and note information.
  • the handheld computing device's 600 practice management system interface database 610 may be in the format of Access™ by Microsoft or SQL Server™ by Sybase. Other database formats are also acceptable, and using a different database will not depart from the scope of the invention.
  • the application data 609 and data stored in the practice management system interface database 610 are synchronized on the handheld computing device 600 by a synchronization and indexing routine 612 .
  • the synchronization and indexing routine 612 on the handheld device 600 cooperates with a counterpart synchronization and indexing routine 628 on the server 614 .
  • Synchronization in this context refers to downloading of demographic information and application data such as forms, charts, and note information from the server 614 to the handheld computing device 600 , and the transfer of audio files and data to the server 614 from the handheld device 600 .
  • Once data is downloaded and synchronized on the handheld device 600 the synchronized data is available for document creation and dictation.
  • a dictated audio file 604 will be associated with the form selections made by the user.
  • the synchronized audio file 604 , application data 609 , and data from the practice management system interface database 610 may be prepared for transfer via the data transfer mechanism 617 .
  • Synchronized and indexed data transferred from the handheld device 600 to the server 614 via the data transfer mechanism 617 may require processing before it can be applied to a voice recognition engine 620 included in the server 614 .
  • Processing may include filtering to reduce or eliminate background noises present in the audio file 604. Such background noises may have been present during dictation. Processing may also include, but is not limited to, the reduction or elimination of reverb or vibration noises present in the audio file 604. Processing as just described may take place in an audio file filter 622, which may be implemented in software. Processing may also include converting the sampling rate of the audio file 604 from one rate to another. For example, in the embodiment described in FIG. 6, the audio file 604 was recorded using a sampling rate of 11 kHz, but the voice recognition engine 620 requires an audio input having a sampling rate of 22 kHz; a conversion of sampling rate from 11 kHz to 22 kHz is therefore required. Of course, conversions from one sampling rate to another may not be necessary, and conversions from any given sampling rate to sampling rates other than those disclosed are also acceptable without departing from the scope of the invention. In the embodiment of FIG. 6, sampling rate conversion may occur in an audio file interpreter 623 and may be handled in software.
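  • The rate conversion in the audio file interpreter 623 could be sketched as below; this naive duplicate-each-sample approach (assuming mono 16-bit PCM .WAV input) stands in for whatever band-limited resampler a real implementation would use:

```python
import array
import wave

def resample_wav(in_path: str, out_path: str, target_rate: int = 22050) -> None:
    """Convert a .WAV file from its recorded rate (e.g., 11 kHz) to the
    rate the voice recognition engine expects (e.g., 22 kHz)."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        samples = array.array("h", src.readframes(src.getnframes()))
    ratio = target_rate / params.framerate
    # Nearest-sample upsampling: each output sample copies the closest input.
    resampled = array.array("h", (
        samples[min(int(i / ratio), len(samples) - 1)]
        for i in range(int(len(samples) * ratio))
    ))
    with wave.open(out_path, "wb") as dst:
        dst.setparams(params._replace(framerate=target_rate))
        dst.writeframes(resampled.tobytes())

# resample_wav("dictation_11khz.wav", "dictation_22khz.wav")
```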
  • the audio file is processed as described above, and then input to a voice recognition engine 620 (similar to 334 FIG. 3 ) for generation of an editable text file 624 , an audio file 626 , and synchronization file 630 (similar to 352 FIG. 3 ).
  • processing results in the generation of a read-only text file 632 and the deletion of the audio file 626 , editable text file 624 , and synchronization file 630 .
  • User voice model 634 and specialty specific vocabulary 636 may be used by the voice recognition engine 620 during the process of transcribing the audio file 604 into the editable text file 624.
  • FIG. 7 is a flow diagram illustrating yet another method of operation in accordance with an embodiment of the invention.
  • patient demographics, schedule information, application data, forms, charts and note information may be downloaded to the handheld device from a server via a data transfer mechanism.
  • the physician may carry the handheld device as he performs his duties.
  • the physician has the ability to review previous notes related to any data stored on the handheld device.

Abstract

Multiple documents including multiple fields may be produced using a voice recognition engine to transcribe dictated notes. An embodiment of an apparatus to generate documents from a user's dictation may include a computer interface and a computer in communication with the computer interface including a voice recognition engine, a database, and a memory. A method of entry of dictation into a plurality of documents may include receiving an indication of a selection of a plurality of documents from a list of documents, receiving an indication of a field descriptor of a first field in a document, receiving dictated speech to be entered into the first field, writing transcribed text representative of the dictated speech to the first field, and writing the transcribed text to other fields having the same descriptor in each of the other selected plurality of documents.

Description

  • This application is a continuation of U.S. patent application Ser. No. 09/901,906, filed Jul. 11, 2001, and claims benefit of the filing date of U.S. Provisional Patent Application Ser. No. 60/279,458, filed Mar. 29, 2001, each entitled “Method and Apparatus for Voice Dictation and Document Production,” and each incorporated herein by reference in its entirety.
  • FIELD OF THE INVENTION
  • The invention relates to the field of document production by dictation, voice-to-text conversion, and automated processing.
  • BACKGROUND
  • In general, methods of authoring a document have changed little in the past forty years. A typical example may be the method by which physicians document medical procedures. Physicians typically hand-write or dictate their notes at the conclusion of a medical procedure. The notes are sent to a transcriptionist for typing. The transcriptionist creates a typed version of the notes by reading the handwritten notes or by listening to the dictated notes and then typing them by hand. Numerous typing errors may occur because of the transcriptionist's unfamiliarity with the physician's handwriting or because it may be difficult to understand the dictation. If the document is proofread for clarity and a question as to the selection of a transcribed word is raised, then, when the listen-and-type methodology is used, it may be difficult to locate the dictation on a dictation audiotape.
  • After initial transcription, hardcopy documents are typically returned to and edited by the physician. Final edited documents are then manually filed with a patient's medical records. If other physicians or an insurance company require copies of the records, the physician's secretarial staff could prepare the copies and transmit them, as required. Often a number of documents must be generated based on one patient visit. For example, an attending physician may send a thank you letter to a referring physician, an insurance form is often required to ensure proper billing, and follow-up notes may be required to verify the status of the patient or laboratory test results. Time and labor are required to generate each of these documents.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The various features of the invention will best be appreciated by simultaneous reference to the description which follows and the accompanying drawings, wherein like numerals indicate like elements, and in which:
  • FIG. 1 is a block diagram of an embodiment of the present invention;
  • FIG. 2 is a flow diagram illustrating a method of operation in accordance with an embodiment of the invention;
  • FIG. 3 illustrates various components of a system in accordance with an embodiment of the invention;
  • FIG. 4 is a flow diagram illustrating another method of operation in accordance with an embodiment of the invention;
  • FIG. 5 is a flow diagram illustrating an alternate method of operation in accordance with an embodiment of the invention;
  • FIG. 6 depicts another embodiment in accordance with the invention; and
  • FIG. 7 is a flow diagram illustrating yet another method of operation in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION
  • FIG. 1 is a block diagram of an embodiment of the present invention. The present invention uses voice recognition technology and a voice recognition engine to transcribe dictated notes and may eliminate the need for traditional read-and-type or listen-and-type transcription. In accordance with an embodiment of the invention, a user may be provided with a personal computer, workstation, or a handheld computing device (collectively referred to as “computer 10”), that may include a monitor, microphone, and pointing device (not shown) such as a mouse, stylus, or keyboard. The computer 10 is operatively coupled to a processing and storage system 11. Using voice commands and/or a pointing device, a user may select the identity of an entity (e.g., a patient or client) that will be the subject of a dictation. The entity therefore becomes the “dictation subject.” The entity may be listed on an entity list 12 that may be displayed on a computer 10 monitor. The user may also select a type of document or form (hereinafter “forms”) that the user wishes to populate, or fill-in, with text. Selection of forms is not limited to one document; multiple documents can be selected for sequential or simultaneous data entry and processing during one dictation session. The forms may be listed on a forms list 14 that may be displayed on the computer 10 monitor.
  • The forms list 14 of FIG. 1 is depicted as having N forms 14 1-14 N, where N is any integer. For ease of illustration, an Nth form 14 N is partially illustrated. Forms are typically divided into sections or fields 16-28. Each field may have its own unique descriptor to identify the field, for example, a descriptor may be “Name,” “Address,” or any number or letter combination to uniquely identify the field. It will be understood that there is no limit to the number of fields that can be associated with any of the forms in the forms list 14 and that the representation of fields having reference numbers 16-28 is for illustration purposes only. In accordance with one embodiment of the invention, a user may select a first field, for example the “Subjective” field 26, which the user wishes to fill-in. After the selected field is filled-in, the user may verbally or manually (e.g., with a pointing device) command the system to go to another field in the same document or in one of the other previously selected documents.
  • To fill-in a field, the user may dictate speech into an audio input 32 of the computer 10. The audio input may be a microphone. In an embodiment, the processing and storage system 11 may automatically generate an audio file 38, along with an associated transcribed dictation file 40, and an indexing file 42. Such generation may be accomplished with the use of a voice recognition engine 36. The audio file 38, transcribed dictation file 40, and indexing file 42 are stored in a memory 44, which may be, for example, a hard disk or some other type of random access memory. The transcribed dictation file may be saved as an editable text file (hereinafter “editable transcribed text file 40”). The audio file 38 and editable transcribed text file 40 may be indexed by the indexing file 42, such that each transcribed word of dictation in the editable transcribed text file 40 is referenced to a location, and thus a sound, in the associated audio file 38. Alternatively, the audio file 38 and editable transcribed text file 40 may be indexed by the indexing file 42, such that each transcribed letter of dictation in the editable transcribed text file 40 is referenced to a location, and thus a sound, in the associated audio file 38. Indexing, or tagging, each letter in the editable transcribed text file 40, as opposed to each word, improves playback of the audio file and improves editing capability by providing more granularity to the process.
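The indexing scheme described above lends itself to a simple tabular representation. The following is a minimal sketch, assuming word-level granularity and millisecond audio offsets; the field names and in-memory layout are illustrative assumptions, not the patent's actual file format.

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    word: str            # a transcribed word in the editable text file 40
    text_offset: int     # character position of the word in the text file
    audio_start_ms: int  # start of the associated sound in the audio file 38
    audio_end_ms: int    # end of the associated sound

def audio_span_at(index: list[IndexEntry], text_offset: int) -> tuple[int, int]:
    """Find the audio span for the word at a given text position, as when
    an editor selects a questioned word to hear the recorded dictation."""
    for entry in index:
        if entry.text_offset <= text_offset < entry.text_offset + len(entry.word):
            return entry.audio_start_ms, entry.audio_end_ms
    raise KeyError("no word indexed at this text position")
```

Letter-level indexing would simply shrink each entry's span to a single character, providing the finer playback and editing granularity described above.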
  • The process of editing transcribed dictation is improved by enabling an editor to select a questioned word or words (or, alternatively, letter or letters) from the editable transcribed text file 40 and hear the user's recorded voice associated with that selection. When the text is presented to an editor on a computer screen, the editor can click on the text and hear the user's voice associated with that text. The editor can correct any errors due to the voice recognition engine's 36 interpretation of the voice. The voice model may be updated by editing a single word into one or more words, or multiple words into a single word. Alternatively, or conjunctively, the model may be updated by editorial manipulation of single letters of text.
  • Associated with the voice recognition engine 36 is a database of voice profiles 37 for each user. The correction of errors in the voice recognition engine's 36 interpretation of the user's voice may be synchronized with the user's voice profile, thus updating the user's voice profile. Because the user's voice profile has been updated, the same error is less likely to occur again. The process of editing thus improves the user's voice model.
  • After the processes of dictation and editing are completed, and the text contained in the editable transcribed text file 40 is approved, the file containing the approved text is saved in a read-only format, (the saved file will hereinafter be referred to as “read-only format file 40A”) thus, effectively deleting the editable transcribed text file 40 from memory 44. The read-only format file 40A may be signed and stored as an electronic signature. Saving the approved text in a read-only format avoids accidental or deliberate tampering with the approved text. Furthermore, to save storage space in the memory 44, the audio file 38 generated in concert with the editable transcribed text file 40, as well as the associated indexing file 42, may be deleted from the memory 44 after the editable transcribed text file 40 is approved.
  • Logical storage of the pre-approved editable transcribed text file 40 may be in a first section of memory 46 reserved for editable text, while logical storage of post approved read-only transcribed text file 40A may be in a second section of memory 48 reserved for read-only text. Storage in separate logical memory locations improves the speed for a user to replicate a database at a remote location. The scalability to multiple remote sites may be improved with separate logical storage because a user need only mirror read-only transcribed text files, and may thus avoid the unnecessary copying of large audio files and editable files that may not be required at the remote sites. It will be understood that the editable transcribed text file 40 and the corresponding read-only transcribed text file 40A need not share memory 44 contemporaneously with one another. Additionally, the editable transcribed text file 40 and the read-only transcribed text file 40A may be stored in a common section of memory.
  • The process of generating documents may be improved by giving the user access to legacy information, such as data pertinent to each entity in the entity list 12. This data may already be stored in an existing database 50 of the user. For example, a user in the medical profession, such as a physician, may have a practice management system in place to handle the financial, administrative, and clinical needs of his practice. The practice management system may have a wealth of demographic information about each patient seen in the physician's practice. The demographic information may be stored in a database format. Each item of data may therefore be available for use by the processing and storage system 11. For example, a physician may have a schedule or roster of patients that will be seen on a given day. Each patient may be listed in the physician's practice management system database 50 (i.e., the existing database 50). In accordance with an embodiment of the invention, patient demographic data, for patients to be seen on the given day, may be downloaded from the practice management system database 50 to a patient demographic database 51 before the physician sees the patient. When the physician is ready to prepare notes or complete forms based on the patient's visit on a particular day, the physician may identify the patient to the processing and storage system 11 by use of the entity list 12. Entity list 12 is illustrated with M entities, where M is an integer. An embodiment of the invention may accommodate any number of entities; however, it is noted that the number of entities represented in the database 50 need not be equal to the number of entities listed on the entity list 12. As the entities are selected from the entity list 12, they may be removed from the list 12 to show that the entity has been addressed. This gives a visual reference to the user as to what work has and has not been completed. For example, if a patient's name is selected from the entity list 12, the name is removed from the entity list 12 after the physician dictates his notes. This indicates to the physician that he has dictated a note for that particular visit of that patient. If at the end of the day the physician has an empty entity list 12, then he may understand that he has completed all required dictation. If there are names left on the entity list 12, then the physician may understand that he may be required to complete further dictation.
  • The physician may select the form(s) that will be completed from the forms list 14. The processing and storage system 11 may automatically fill-in fields, for example fields 16-28, for which data is available, from the downloaded practice management system database 50 or data added directly into the patient demographic database 51. Downloaded information may include patient name, address, telephone number, insurance company, known allergies, etc., but, of course, is not limited to these items.
  • An embodiment of the invention may dynamically generate and distribute forms, reports, chart notes, or the like based on the entered dictation. Such documents may be placed in electronic archival storage within the user's own control and additionally the processing and storage system 11 may automatically send copies of these documents to third parties. As used herein, the term “documents” includes both electronic and hard copies. Using the medical practice as an example, a physician may dictate chart notes (i.e., a summary of the results of a patient visit) into the processing and storage system 11 via the computer 10. Because dictated information is entered into predefined fields, the processing and storage system 11 may integrate the dictated information into an electronic medical chart that can be archived, retrieved, and searched. Forms, reports, charts, or the like can be sent to third parties via any communication channels, such as fax 52, print 54, and e-mail 56.
  • FIG. 2 is a flow diagram illustrating a method of operation in accordance with an embodiment of the invention. The method of operation is illustrated in the context of a medical practice; however, it will be understood by those of skill in the art that the method is equally applicable to other services and industries as well. At step 200, selected data is downloaded from the practice management database 50 to a patient demographic database 51. At step 202, a first user, i.e., the physician, may select a patient name from the list of patient names previously downloaded. At step 204, the first user may select the form or forms that will be filled-in during a dictation session. The forms used can be unique to the user's own practice or industry. The system dynamically generates forms by compiling separate fields of data. The user populates each of these fields with text as the user dictates into the system. Fields can also be populated by omission with preformatted form defaults. Thus forms, whether new or old, can be compiled by inserting the proper field contents within a text box, check box, or other data entry location in the form.
  • The use of fields also provides a benefit for data-mining. Data-mining, as used herein, refers to the process of searching a database, or other data store, for particular data or sets of data. The use of separate fields is a benefit because existing databases for use in dictation generally have data entered in a free-form style. By free-form, it is meant that text is dictated in a free-flow format into essentially one data field, as contrasted with text dictation into structured and distinct fields. Free-form dictation results in data storage that is not amenable to document generation or data-mining. Forms customization allows discrete data to be captured and saved.
  • At step 206, the system may fill-in various fields in any of the forms selected by the first user. Data used by the system to fill-in the forms may come from the patient demographic database 51, which was populated with data downloaded from the first user's own database 50 or from other sources. Because forms are divided into fields, the text in like fields may be shared between different forms and generation of multiple forms may occur contemporaneously. This is an improvement over existing systems, which require a user to fill-in one form at a time. The completion of one form at a time may be driven from a system requirement to engage a voice recognition engine to complete one form and then disengage the voice recognition engine before moving onto the next form. Completion of the dictation session is slowed in that instance, because the user may be duplicating his efforts by filling-in like fields in different forms. In an embodiment of the invention, several forms may be generated in one session, without the need to dictate entries for one form, close that form, then dictate entries for another form. Once all desired forms are identified, the user can populate the fields of each of the forms in one session.
  • At step 208, the first user may select a first field and begin dictation. In an embodiment of the invention, the first user can use voice navigation to select a field, where voice navigation includes the speaking of the desired field name to effect data entry into that field. Data entry includes all forms of spoken word, including numbers. Any type of data entry may be accommodated, for example, both text boxes and check boxes may be used. Text may be entered into a text box and checkboxes may be checked or unchecked by voice entry. Pointing devices need not be used. Thus, if there are four fields the first user can say “field one” and the text will be entered into field one. The first user can then say “next section” or call the next section by name, such as “field two.” Of course fields can be named with common names such as “subjective” or “allergies,” and need not be numbered. Additionally, after the user indicates to the system that dictation is about to begin, the system provides a visual and/or audible cue to the user to allow the user to understand that the system is ready to accept dictation. In one embodiment, the background of a dictation screen on the computer monitor turns yellow so that the user can easily tell if the voice recognition engine is engaged. When the command “stop dictation” is issued, the background of the dictation screen returns to its original, pre-dictation, color. This also enables the user to see what state the system is in, even if the user is standing or pacing while dictating several feet away from the workstation. In addition to the screen changing color when dictation is initialized and terminated, one embodiment emits an audible tone so that the user does not have to look at the computer screen during dictation. The combination of yellow screen and audible tone makes it clear to the user when the voice recognition engine is starting and stopping, thus avoiding any unnecessary repetition of dictation. Each of these features can be disengaged if not desired by the user.
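A hedged sketch of the navigation-and-cue behavior just described follows. The command phrases "next section" and "stop dictation" come from the text; the session object, the "begin dictation" trigger, and the print-based cues are assumptions standing in for the yellow screen and audible tone.

```python
class DictationSession:
    def __init__(self, fields: list[str]):
        self.fields = fields          # e.g. ["subjective", "allergies", ...]
        self.current = 0              # index of the field receiving dictation
        self.engine_engaged = False

    def cue(self, engaged: bool) -> None:
        # Stand-ins for the yellow dictation screen and the audible tone.
        print("screen: yellow" if engaged else "screen: normal")
        print("\a", end="")           # audible tone

    def handle_command(self, spoken: str) -> None:
        if spoken == "begin dictation":     # hypothetical start phrase
            self.engine_engaged = True
            self.cue(True)
        elif spoken == "stop dictation":
            self.engine_engaged = False
            self.cue(False)
        elif spoken == "next section":
            self.current = (self.current + 1) % len(self.fields)
        elif spoken in self.fields:         # call a field by name
            self.current = self.fields.index(spoken)
```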
  • At step 210, the first user's dictation is applied to a voice recognition engine. The output of the voice recognition engine populates fields with like-names in different documents. There is no need to disengage the voice recognition engine in order to dictate a second form. For example, a patient may come to a physician's office for an examination. The physician may use an embodiment of the present invention to document the encounter. The physician may choose a familiar form in which to enter data and can dictate data directly into that form. The physician may also need to generate a request for laboratory work to be performed at a testing laboratory, a follow-up note to the patient, and a thank you letter to the referring physician. Each of these multiple documents may have some fields that are identical to the fields used to record the encounter with the patient, for example “name” and “address.” In accordance with an embodiment of the invention, the system can populate the multiple documents at substantially the same time that the system populates the first document chosen by the physician.
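As a concrete illustration of populating like-named fields across several selected documents at once, here is a minimal sketch with forms modeled as simple field-name-to-text mappings; the form and field names are examples only.

```python
def write_to_like_fields(forms: list[dict], field: str, text: str) -> None:
    """Write transcribed text into every selected form containing the field."""
    for form in forms:
        if field in form:
            form[field] = text

chart_note  = {"Name": "", "Address": "", "Subjective": ""}
lab_request = {"Name": "", "Address": "", "Tests": ""}
thank_you   = {"Name": "", "Referring Physician": ""}

# One dictation of the patient's name fills all three documents at once.
write_to_like_fields([chart_note, lab_request, thank_you], "Name", "John Brown")
```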
  • At step 212, once the dictation session is complete, the system may compile all fields into the selected form(s), and thus generate the selected document(s). Because transcribed information is stored in fields, rather than actual assembled documents, a user may create numerous documents by assembling or merging the appropriate fields into a form represented by a document listed on the forms list 14 (FIG. 1). The assembled fields may then be presented as a completed document.
  • As the user dictates speech, the user may wish to perform a verbal abbreviation for certain words, phrases, sentences or entire paragraphs of text that are often repetitive in the course of the user's generation of documents. To allow such abbreviation, the system may allow the user to recite a sound known by the system to represent a certain string of text. Such an abbreviated dictation tool is known as a “macro.” In the medical industry, for example, frequently used phrases are called “norms” or normals, and can be completed by the use of a macro. When the system encounters a macro, it substitutes the string of text corresponding to the macro into the text file that is generated by the voice recognition engine. The method of inserting a macro into a string of words in a text file may include: correlating the string of words against entries in a database of command strings; copying, upon identity of correlation, the macro at a pointer address of the command string; and replacing the correlated string of words with the copied macro. The user may indicate to the system that the user's next word will be a macro. In an embodiment of the invention, the user may indicate that the next word is a macro by saying the word “sub” followed by the name of the macro. Thus, a physician may say “sub thanks” and the system may generate the following: “Thank you for referring the above-identified patient to our offices.” The use of norms in the medical services field is well known; however, an embodiment of the invention allows for the use of what are referred to by the inventors as “variable macros” and “prompted macros.”
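The macro-insertion method just described might be sketched as follows. The trigger word "sub" and the "thanks" example come from the text; the dictionary-based command-string table is an illustrative assumption.

```python
MACROS = {  # database of command strings mapped to their substituted text
    "thanks": "Thank you for referring the above-identified patient to our offices.",
}

def expand_macros(words: list[str]) -> str:
    out, i = [], 0
    while i < len(words):
        # Correlate "sub <name>" against the command-string table and replace
        # the correlated words with the copied macro text.
        if words[i] == "sub" and i + 1 < len(words) and words[i + 1] in MACROS:
            out.append(MACROS[words[i + 1]])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)

print(expand_macros("sub thanks".split()))
# -> "Thank you for referring the above-identified patient to our offices."
```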
  • A variable macro combines a macro with a data variable retrieved from a database. Thus, a user may say “sub thanks” and the system may generate the following: “Thank you for referring [PATIENT NAME] to our offices.” Where [PATIENT NAME] is a data field and the instance of [PATIENT NAME] to be substituted in the example above would be defined by the selection of an entity from the entity list 12 at the beginning of the dictation session. Thus, if the entity were named “John Brown” the actual text generated by the system would be: “Thank you for referring John Brown to our offices.”
  • A prompted macro allows a user to generate text that requires the insertion of variables that may not be present in the patient demographic database 51. In an embodiment, the prompted macro is used as follows. The physician says “sub macro_name,” waits for a prompt from the system such as a beep, and then says or enters the variable data. Thus, as an example, if a patient had taken a lead blood level test and the result of 5 deciliters/liter was returned to the physician, the physician may say “sub high lead,” wait for a beep, and then say “five.” The system in turn may generate the following text: “The results of your lead blood screening indicate a level of 5 deciliters/liter. This level is higher than would normally be expected.” Thus, the variable “5” was inserted into an appropriate spot in the text of the macro.
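Both macro variants can be sketched with simple string templates. The template syntax and data structures below are assumptions; the example values ("John Brown", the lead screening text) come from the passages above.

```python
import string

demographics = {"PATIENT_NAME": "John Brown"}  # e.g. from the demographic database 51

VARIABLE_MACROS = {
    "thanks": "Thank you for referring ${PATIENT_NAME} to our offices.",
}
PROMPTED_MACROS = {
    "high lead": ("The results of your lead blood screening indicate a level of "
                  "${value} deciliters/liter. This level is higher than would "
                  "normally be expected."),
}

def expand_variable_macro(name: str) -> str:
    # The data variable is resolved from the dictation subject's demographics.
    return string.Template(VARIABLE_MACROS[name]).substitute(demographics)

def expand_prompted_macro(name: str, spoken_value: str) -> str:
    # The value is supplied by the user after the system's prompt (e.g., a beep).
    return string.Template(PROMPTED_MACROS[name]).substitute(value=spoken_value)

print(expand_variable_macro("thanks"))
print(expand_prompted_macro("high lead", "5"))
```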
  • At step 214, the compiled fields and audio may be stored in memory. At step 216, the user may dictate, edit, or view the compiled documents. If further dictation, editing, or viewing is required, the first user may return to step 208. The first user or a second user may edit the document. Authorization of the second user may be required. The second user may edit from any workstation or other input/output device associated with the system. Additionally, any editing that the second user performs may update the first user's voice model. This may be important in improving accuracy. Any person with authorization may view the documents on a workstation in communication with the processing and storage system. If further dictation, editing, or viewing of compiled documents is not desired, then at step 218 further processing of the documents may occur. Further processing may include, but is not limited to: secure storage of transcribed text files in a read-only format; creation of electronic medical records (“EMRs”) or charts that logically combine information for a patient; creation of voice-enabled EMRs; display of documents on a monitor; faxing; printing; or e-mailing documents using pre-defined settings. Automated transmission of any document to a pre-defined recipient is accommodated in one embodiment in accordance with the invention. Each created document may be appended to its corresponding patient's electronic patient chart, eliminating any need for cutting and pasting found in some other applications. A search function allows users to retrieve documents using a variety of search options such as keyword, date, patient name, or document type.
  • FIG. 3 illustrates various components of a system in accordance with an embodiment of the invention. A first server 300 includes a practice management system 302. The practice management system 302 may be a third party system used for entry, storage, and processing of patient demographic data. Patient demographic data may be stored in a patient database 304. Patient demographic data includes information under such field headings as, but not limited to: date of birth, primary insurer, employer, social security number, address, visit date, and referring physician. Patient demographic databases may include hundreds of fields of information for each identified patient. The practice management system 302 may process this information to generate, for example, patient bills and patient reports. Examples of practice management systems 302 that may be used in accordance with an embodiment of the invention are The Medical Manager™ by WebMD and Lytec™ by NDC Medical. These are examples from the medical industry, but the invention would operate similarly with informational databases in other industries as well.
  • Data stored in the patient database 304 may be in a number of formats including, but not limited to, Access, SQL, MSDE, UNIX, and DOS. The practice management system 302 may also include a patient schedule or roster database 306. The patient roster database 306 includes information indicative of which patient will be seen on a given day.
  • In an embodiment of the invention, a timed script 308 or real-time interface (not shown) may be used to query the patient roster database 306 to determine which patients will be seen on a given day. The timed script 308 may then effectuate a download of patient demographic data for the patients to be seen on the given day. It may also download patient demographic data that may have been updated since a previous download. The download of patient demographic data from the patient database 304 may be to a temporary directory 310 on the first server 300. In an embodiment of the invention, the timed script 308, which may be a length of computer code that performs an action on a timed interval, logs into the practice management system 302 as a user. The timed script 308 may generate a specific report, such as a demographic download for the patients designated to be seen on the next day by the patient roster database 306. It will be understood, however, that the timed script 308 can retrieve data for any period of time, and for any possible patient selection criteria. The download may be written to the temporary directory 310 on the first server 300. One of skill in the art will recognize that different methodologies may be used to download patient demographic data without departing from the scope of the invention. Downloading methodologies may depend, in part, on the particular practice management system 302 in use and may also be completed in “real-time.”
  • The timed script 308 may also facilitate the parsing of downloaded data. For example, the patient database 304 may have twenty tables and five hundred fields. The timed script 308 may be used to extract just the data required to be operated upon by an embodiment of the invention from a subset of these fields, for example seventy-five out of the five hundred fields. The timed script 308 may also generate a transferable file 312. The transferable file 312 may then be transferred to a second server 314.
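A minimal sketch of this parsing and file-generation step, assuming the download is tabular and the transferable file is CSV; the field names, file format, and paths are illustrative assumptions.

```python
import csv

# Hypothetical subset of fields the application needs (e.g., 75 of 500).
WANTED_FIELDS = ["patient_id", "name", "date_of_birth", "primary_insurer"]

def build_transferable_file(downloaded_rows: list[dict], path: str) -> None:
    """Extract only the required fields and write the transferable file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=WANTED_FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(downloaded_rows)  # extra fields are silently dropped
```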
  • The second server 314 may be coupled to the first server 300 by a communications network 316. The communications network 316 may be a public switched telephone network, an intranet, the Internet, or any other communications network, or any combination of data-bearing networks suitable to couple the respective servers 300, 314 to allow communication and data transfer to be performed therebetween. While two servers are shown, the methods and apparatus described herein can be performed equally well by sharing one server, or by sharing two or more servers.
  • In accordance with an embodiment of the invention, the second server 314 includes an application 316 to operate on data included in the transferable file 312. The data included within the transferable file 312 includes data stored in the temporary directory 310 of the first server 300. The application 316 includes a practice management systems interface 318 and a patient demographic database 320. The practice management systems interface 318 parses the data included in the transferable file 312, which is transmitted from the first server 300 to the second server 314 via the communications network 316. The practice management systems interface 318 parses and maps the data such that the data can be indexed and entered into the appropriate locations in the second server's 314 patient demographic database 320. Mapping may be necessary because the field names in the patient database 304 of the first server 300 may not necessarily match the field names used in the patient demographic database 320 of the second server 314. The practice management system interface 318 may therefore be used to map data from a field having a first heading into a field having a second heading. The practice management system interface 318 provides versatility to the embodiment of the invention by allowing the invention to interface with a plurality of practice management systems.
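The parse-and-map step might look like the following sketch, in which a mapping table translates the first server's field headings into those used by the patient demographic database 320; all field names here are illustrative assumptions.

```python
FIELD_MAP = {
    "PatName": "patient_name",     # first server's heading -> second server's
    "DOB":     "date_of_birth",
    "InsCo":   "primary_insurer",
}

def map_record(source: dict) -> dict:
    """Re-key a record so it can be indexed into the demographic database."""
    return {FIELD_MAP[k]: v for k, v in source.items() if k in FIELD_MAP}

print(map_record({"PatName": "John Brown", "DOB": "1960-01-01", "InsCo": "Acme"}))
# -> {'patient_name': 'John Brown', 'date_of_birth': '1960-01-01',
#     'primary_insurer': 'Acme'}
```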
  • There may be an unlimited number of fields within the patient demographic database 320. The fields may be used for storage of indexed data from any practice management system provided the appropriate mapping is performed by the practice management system interface 318. In one embodiment, more than seventy-five fields are used. Exemplary fields may include user ID 322, which defines a voice model to use for voice-to-text conversion for a given voice recognition engine; patient ID 324, which may include the patient's social security number or other unique identification number; visit date 326, which may include the date the patient saw the physician; referring physician 328, which may include the name, address, phone number and/or other indicia of a physician that referred the patient to the attending physician; date of birth 330; and primary insurer 332, which are self-explanatory, and other fields. Data may also be added manually into fields.
  • The second server 314 may also include a voice recognition engine 334, such as the runtime portion of ViaVoice™ manufactured by IBM. Typically, a voice recognition engine is separable into at least two parts: 1) runtime software to perform voice-to-text translation and manage dictated speech in a .WAV file, and 2) administrative software to generate screens and help files and provide the ability to correct translated text. Other voice recognition engines and runtime software may be used without departing from the scope of the invention. An embodiment of the invention may use an open architecture with respect to the voice recognition engine, so that, as new voice recognition technologies are developed, they can be integrated into the invention.
  • The voice recognition engine 334 may receive voice input (i.e., dictation) via a coupling to a noise-canceling microphone 336. Noise-canceling microphones are available in many different styles, such as handheld, tabletop, and headset integrated. The function of a noise-canceling microphone 336 is to help eliminate background noise that may interfere with the accuracy of the voice recognition engine's 334 speech-to-text function (i.e., transcription of spoken words into textual words). The particular noise-canceling microphone 336 used may depend upon the recommendation of the manufacturer of the voice recognition engine 334. In one embodiment, a model ANC 700 noise-canceling microphone manufactured by Andrea Corporation is used. Some noise-canceling microphones 336 are coupled to the voice recognition engine 334 via a sound card 338. Still other voice recognition engines may receive voice input from a noise-canceling microphone coupled to a Universal Serial Bus (“USB”) port 340 on the server.
  • The voice recognition engine 334 may use user voice models 342, user specific vocabularies 343, and specialty specific vocabularies 344 to effect the transcription of voice-to-text. The user voice models 342, user specific vocabularies 343, and specialty specific vocabularies 344 may be stored in the memory of the second server 314. Models and vocabularies may be selected based on User ID 322. User ID 322 may come from the physician dictating speech into the noise-canceling microphone 336. The physician may enter identification information into the system by means of a computer interface 337, such as a keyboard or other data entry device before dictating his or her spoken words. Verification of entered data may be accomplished in real time by observation of a computer video monitor/display 335.
  • The specialty specific vocabulary 344 may be a database of sounds/words that are specific to a given specialty, such as law or medicine. Using medicine as an example, the single word “endometriosis” may be transcribed as the group of words “end ‘o me tree ‘o sis” by a voice recognition engine not augmented by a specialty specific vocabulary 344. Additionally, correction may allow words to be automatically added to a user's vocabulary. The user specific vocabulary 343 may allow users to add words that may not be in a specialty specific vocabulary 344.
  • The voice recognition engine 334, user voice model 342, user specific vocabularies 343, specialty specific vocabulary 344, sound card 338, and USB port 340 may all be included in a computer workstation or personal computer 331 1 that is physically separated from, though still in communication with, the second server 314. Multiple workstations, in a networked computer system, represented by reference numbers 331 2-331 X, where X is any integer, may access the second server 314. The multiple workstations 331 1-331 X need not be identical. Alternatively, the voice recognition engine 334, user voice model 342, user specific vocabularies 343, specialty specific vocabulary 344, macros and templates 345, sound card 338, and USB port 340, represented as being included in workstation 331 1, may all be included in the second server 314, as illustrated by the dashed line labeled 333 in FIG. 3. All voice models may reside on the second server 314 and be moved to each user when the user logs on. The models can be copied back to the second server if, for example, a voice model changes during that session.
  • The output of the voice recognition engine 334 may be applied to a database 346 that stores text files and associated other files related to a dictation session. The format of the database 346 may be, for example, Access™ by Microsoft, SQL Server™ by Sybase or Microsoft Data Engine (MSDE™) by Microsoft. Other formats may be used without departing from the scope of the invention.
  • Files may be logically stored in the database 346 based on, for example, whether they are stored awaiting editing or stored for archival purposes. Regarding files stored for editing, at least three types of files may be stored: 1) an audio file 348 that is generated by the voice recognition engine 334 as a user dictates; 2) a corresponding editable text file 350 that was either generated concurrently with the audio file, was generated by running the audio file through a voice recognition engine 334, or may have been typed in; and 3) a synchronization and indexing file 352 that synchronizes and indexes the sounds in the audio file 348 to the text in the editable text file 350. The audio file 348 may be in .WAV format; other formats may also be used. The editable text file 350 may remain in an editable format throughout any processing of the files that may be required. Processing may include any number of cycles of editing, review, and approval. At the conclusion of processing, the data in the editable text file 350 may be stored in a read-only format (referred to hereinafter as “read-only text file 354”). Read-only text files 354 are stored without an association to an audio file or a corresponding synchronization and indexing file. In an embodiment of the invention, the audio file 348, editable text file 350, and synchronization and indexing file 352 used to prepare the read-only text file 354 are deleted from the database 346 once the read-only text file 354 is approved. One purpose of deleting the files used to prepare the read-only text file 354 is to reduce storage space required on the second server 314 or other data storage device (not shown) used to store such data. Another purpose of deleting the files used to prepare the read-only text file 354 is to prevent tampering or accidental alteration of the stored documents. A read-only text file 354 may be signed and stored as an electronic signature.
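The approval step can be sketched as below, assuming file-level storage; a database-backed implementation would instead flag the stored row read-only, and the paths and permission scheme here are illustrative assumptions.

```python
import os
import shutil
import stat

def approve(text_path: str, audio_path: str, sync_path: str, archive_path: str) -> None:
    shutil.copyfile(text_path, archive_path)   # save the approved text...
    os.chmod(archive_path, stat.S_IREAD)       # ...in a read-only form
    for working_file in (text_path, audio_path, sync_path):
        os.remove(working_file)                # reclaim space, prevent tampering
```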
  • FIG. 4 is a flow diagram illustrating another method of operation in accordance with an embodiment of the invention. The method of operation is exemplified by a method with which a physician may use the embodiment of the invention; however, it will be understood that the method may be used in any field of endeavor. The reference numbers of FIG. 3 will be used as an aid in the description of the method of operation in accordance with FIG. 4; however, use of these reference numbers will not constrain the method of operation to the embodiment of FIG. 3. One method for transferring data is a timed script. Other methods include a real-time pull of data from a database and a pull of data on demand from a database. In a pull on demand, the system may retrieve information related to just one dictation subject (i.e., one entity) or to many dictation subjects. At step 400, a timed script 308 on a first server 300 logs into the first server 300. At step 402, the timed script 308 acquires the identities of patients to be seen by a physician on a given day. At step 404, the timed script 308 downloads data from a patient database 304 to a temporary directory 310. At step 406, the timed script 308 generates a transferable file 312 using data downloaded to the temporary directory 310. At step 408, the timed script 308 downloads the transferable file 312 to a communication network 316 for transfer to a second server 314. At step 410, the transferable file 312 is downloaded from the communications network 316 to the practice management system interface 318 included in an application 316 on the second server 314. At step 412, the practice management system interface 318 populates a patient demographic database 320 with data mapped from the transferable file 312. At step 414, a user (e.g., a physician) accesses the application 316 via a computer interface 337. As part of the access procedure, the physician may indicate his or her identity (e.g., User ID 322) to the application, so that the application is able to associate the physician's User ID with the physician's voice model (e.g., a voice model stored in user voice model file 342). As part of the access procedure, the physician may also indicate a patient ID (e.g., Patient ID 324), so that the application will be able to associate the patient's ID with patient data downloaded from the first server 300 in the transferable file 312. As part of the access procedure, the physician may also indicate which form types the physician will be using to structure his dictation. Form types may be stored in the memory of the second server 314. Access to the application and indication of User ID and Patient ID may be made by voice using a noise-cancellation microphone 336 or by manual manipulation of a computer interface 337. Verification of entries is accomplished in real time by observation of a computer video monitor/display 335. At step 416, the physician may dictate a report to the application 316 by speaking into the noise-cancellation microphone 336. The report may be structured in accordance with the form the physician has selected for initial data input. By structure, it is meant that the report may be broken down into a plurality of fields, each field having associated therewith a field name. The first user indicates to the application into which field the user will enter dictation. Dictation (i.e., speech) received by the noise-cancellation microphone 336 is converted to electrical signals and applied to the voice recognition engine 334.
At step 418, the voice recognition engine 334 substantially simultaneously generates an audio file 348 corresponding to the dictated speech, transcribes the dictated speech into an editable text file 350, and generates a synchronization and indexing file 352. The audio file 348 may be in .WAV format; other formats may also be used. The synchronization and indexing file 352 associates each transcribed word of text in the editable text file 350 with a sound in the audio file 348. At step 420, the physician indicates to the application that entry of dictation in the first field is complete or that the dictation session is at an end. If dictation is not complete or the dictation session is not at an end, the user may return to step 416. The indications may be explicit, as when the user indicates that dictation in the field is complete, or implicit, as when the user indicates that dictation should commence in the next field. Such an implicit indication may be in the form of the utterance of the words “next section.” Additionally, the application 316 may recognize that data input to a field is complete, as in the case of a checkbox field, where once a box is checked or unchecked no further data entry is feasible. In any case, once the dictation session is complete, at step 422, the application stores the audio file 348, the editable text file 350, and the synchronization and indexing file 352 in the second server 314. At step 424, the editable text file 350 is edited/processed, with or without the use of the audio file 348 and synchronization and indexing file 352. Editing/processing may occur immediately or may be deferred. Even if a note is deferred, a user may return to the note and dictate or otherwise add to the deferred file (note). At step 426, the content of the editable text file 350 is approved. At step 428, the editable text file 350 is saved in a read-only format. For ease of description, the editable text file 350, which has been saved in a read-only format, will hereinafter be referred to as read-only text file 354. At step 430, the editable text file 350, audio file 348, and synchronization and indexing file 352 that resulted in the generation of the read-only text file 354 are deleted from the second server 314. Storing the approved dictated form as a read-only text file 354 prevents persons or automated processes from tampering with the file. Furthermore, deleting the editable text file 350 and its associated audio file 348 and synchronization and indexing file 352 from the system provides additional storage space for new files. At step 432, the application 316 may generate output, such as reports, faxes, and/or emails, by compiling fields previously saved as read-only text files 354.
  • FIG. 5 is a flow diagram illustrating an alternate method of operation in accordance with an embodiment of the invention. At step 500, the system acquires data on dictation subjects from a source, such as, for example, a first server. Data acquisition may be by any manner known to those of ordinary skill. One method of transferring data may be by timed script. Other methods include a real-time transfer of data and a transfer of data on demand. At step 502, acquired data may be downloaded from the source to a transferable file. At step 504, the transferable file may be downloaded to a server upon which is located an interface for data processing. At step 506, the interface populates a Dictation Subject Demographic Database with data mapped from the transferable file. At step 508, a user (e.g., a physician or any network user) accesses the application via a computer interface. The computer interface may be one node in a plurality of nodes of a networked computer system. As part of the access procedure, the user may indicate his or her identity to the application, so that the application would be able to associate the user's identity with the user's voice model. As part of the access procedure, the user may also indicate a Dictation Subject identity, so that the application would be able to associate the Dictation Subject's identity with the Dictation Subject data downloaded from the source and included in the Dictation Subject Demographic Database. As part of the access procedure, the user may also indicate which form types the user will be using to structure his dictation. Access to the application and indication of User ID and Patient ID may be made by voice using a noise-cancellation microphone or by manual manipulation of a computer interface. Verification of entries is accomplished in real time by observation of a computer video monitor or by audible cues provided by the application. At step 510, the user may dictate notes into selected fields in the form or forms chosen by the user. Each form may be broken down into a plurality of fields, each field having associated therewith a field name. At step 512, a voice recognition engine receives the dictation and substantially simultaneously generates an audio file corresponding to the dictated speech, transcribes the dictated speech into an editable text file, and generates a synchronization and indexing file. The audio file may be in .WAV format; other formats may also be used. The synchronization and indexing file associates each transcribed word of text in the editable text file with a sound in the audio file. At step 514, if dictation is not complete, the user may add text into selected fields by returning to the step of dictating notes into selected fields, step 510. If, at step 514, the dictation is complete then, at step 516, the application stores the audio file, the editable text file, and the synchronization and indexing file in a memory. The memory may be on a server in the networked computer system. At step 518, the editable text file may be edited/processed, with or without the use of the audio file and synchronization and indexing file. Editing/processing may occur immediately or may be deferred. Even if a note is deferred, a user may return to the note and dictate or otherwise add to the deferred file (note).
At step 518, the user, or any other authorized user from any computer or workstation in the networked computer system, can recall the saved editable text file document or form and add dictation or edit the document or form. In an embodiment, free-form dictation may be limited to the user, while any other authorized user may be limited to dictating corrections. At step 520, if the editable text file is approved, then at step 522 the user's voice model is updated. At step 524, the editable text file is saved in a read-only format. For ease of description, the editable text file, which has been saved in a read-only format, will hereinafter be referred to as a read-only text file. At step 526, the editable text file, audio file, and synchronization and indexing file are deleted from memory. Storing the approved dictated form as a read-only text file prevents persons or automated processes from tampering with the file. Furthermore, deleting the editable text file and its associated audio file and synchronization and indexing file from the system provides additional storage space for new files. At step 528, the application may generate reports, faxes, and/or emails by compiling fields previously saved as read-only text files.
  • FIG. 6 depicts another embodiment in accordance with the invention. In the embodiment of FIG. 6, a handheld computing device 600, such as a Cassiopeia® by Casio, a Jornada™ by Hewlett-Packard, or an IPAQ™ by Compaq, using a Windows CE™ operating system from Microsoft Corp., may be used to record initial dictation. Other operating systems, such as the Palm™ Operating System by 3COM, Inc. may alternatively be used so long as they support an audio recording capability. In one embodiment, the handheld computing device 600 runs an audio recording application 602 at a sampling rate of 11 kHz that generates a .WAV formatted audio file 604. Of course, other sampling rates and formats of audio files are acceptable without departing from the scope of the invention. The audio file 604 is generated as dictation is entered into the handheld computing device 600. Dictation may be entered into the handheld computing device 600 via a microphone 606 in communication with the handheld computing device 600.
  • The handheld computing device 600 may acquire data from a server 614 via a data transfer mechanism 617. The data transfer mechanism 617 may include, for example, a modem, a LAN (local area network) interface, an Internet connection, wireless interconnection including radio waves or light waves such as infrared waves, removable data storage device, or hard wired serial or parallel link.
  • In the exemplary embodiment of FIG. 6, the data transfer mechanism 617 may be a removable data storage device, such as a CompactFlash™ memory card by Pretec Electronics Corp. In one embodiment, the removable data storage device has a storage capacity of 64 megabytes. The size of the removable data storage device is related to the amount of dictation and data a user desires to store on the card; other sizes may be used without departing from the scope of the invention. The removable data storage device may be removed from the handheld computing device 600 and placed into a data storage device reader (not shown) such as the USB CompactFlash™ card reader by Pretec Electronics Corp. The data storage device reader can transfer data from the removable data storage device to the server 614, or can transfer data from the server 614 to the removable data storage device.
  • In the example of a medical practice, data acquired by the handheld computing device 600 from the server 614 via the data transfer mechanism 617 may include data from a practice management system, which has demographic data on each patient seen in the practice. Patient demographic data and scheduled patient information, for example, may be collected in the same manner as described in the text related to FIGS. 3, 4, and 5. Patient demographic data, perhaps in the form of a transferable file such as transferable file 312 (FIG. 3) may be input to the handheld computing device 600 and stored in a practice management system interface database 610.
  • The amount of patient demographic data downloadable to the handheld computing device 600 and the amount of functionality that may be incorporated into the handheld computing device may be limited by the memory and storage capacity of the handheld computing device 600. As the memory and storage capacity of handheld computing devices increase, the amount of data and functionality incorporated within the handheld device should commensurately increase. Nothing herein should be construed as to limit the types or amounts of data, or to restrict any of the various functionalities of the invention disclosed herein, from being incorporated to the greatest extent possible into a handheld computing device.
  • Patient demographic data may be used to organize information on the handheld computing device 600. The information downloaded may include demographic data as well as past dictated notes. In addition, the handheld computing device 600 may import application data 609, such as, but not limited to, forms, charts, and note information. The handheld computing device's 600 practice management system interface database 610 may be in the format of Access™ by Microsoft or SQL Server™ by Sybase. Other database formats are also acceptable, and using a different database will not depart from the scope of the invention.
  • The application data 609 and data stored in the practice management system interface database 610 are synchronized on the handheld computing device 600 by a synchronization and indexing routine 612. The synchronization and indexing routine 612 on the handheld device 600 cooperates with a counterpart synchronization and indexing routine 628 on the server 614. Synchronization in this context refers to the downloading of demographic information and application data, such as forms, charts, and note information, from the server 614 to the handheld computing device 600, and the transfer of audio files and data to the server 614 from the handheld device 600. Once data is downloaded and synchronized on the handheld device 600, the synchronized data is available for document creation and dictation. A dictated audio file 604 will be associated with the form selections made by the user. Other pieces of information, as entered by a stylus, check box, or other method, are also associated with the form selection. The synchronized audio file 604, application data 609, and data from the practice management system interface database 610 may be prepared for transfer via the data transfer mechanism 617.
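A minimal sketch of such a synchronization pass is shown below, assuming the data transfer mechanism is exposed as mounted directories (for example, a CompactFlash™ card) and that the form selections tied to each dictation travel alongside the audio file; the directory layout and file naming are hypothetical.

```python
import shutil
from pathlib import Path

def synchronize(device_dir: Path, server_dir: Path) -> None:
    """Pull application data (forms, charts, note templates) down to the
    device; push dictated audio files and associated form data to the server."""
    (device_dir / "application_data").mkdir(parents=True, exist_ok=True)
    (server_dir / "inbox").mkdir(parents=True, exist_ok=True)
    for item in (server_dir / "application_data").glob("*"):
        shutil.copy2(item, device_dir / "application_data" / item.name)
    for audio in (device_dir / "outbox").glob("*.wav"):
        shutil.copy2(audio, server_dir / "inbox" / audio.name)
        form = audio.with_suffix(".form.json")  # form selections for this audio
        if form.exists():
            shutil.copy2(form, server_dir / "inbox" / form.name)
```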
  • Synchronized and indexed data transferred from the handheld device 600 to the server 614 via the data transfer mechanism 617 may require processing before it can be applied to a voice recognition engine 620 included in the server 614. Processing may include filtering to reduce or eliminate background noises present in the audio file 604; such background noises may have been present during dictation. Processing may also include, but is not limited to, the reduction or elimination of reverb or vibration noises present in the audio file 604. Processing as just described may take place in an audio file filter 622, which may be implemented in software. Processing may also include converting the sampling rate of the audio file 604 from one rate to another. For example, in the embodiment described in FIG. 6, the audio file 604 was recorded at a sampling rate of 11 kHz; however, the voice recognition engine 620 requires an audio input having a sampling rate of 22 kHz. Therefore, in the embodiment of FIG. 6, a conversion of sampling rate from 11 kHz to 22 kHz is required. Of course, conversions from one sampling rate to another may not be necessary, and conversions from any given sampling rate to sampling rates other than those disclosed are also acceptable without departing from the scope of the invention. In the embodiment of FIG. 6, sampling rate conversion may occur in an audio file interpreter 623 and may be handled in software.
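One way the rate conversion in the audio file interpreter 623 could be realized in software is sketched below: a linear-interpolation doubling of a 16-bit mono WAV from 11025 Hz to 22050 Hz. This is only a sketch; a production converter would typically also apply an anti-imaging filter.

```python
import wave

def upsample_11k_to_22k(src_path: str, dst_path: str) -> None:
    """Convert a 16-bit mono WAV from 11025 Hz to 22050 Hz by inserting
    a linearly interpolated sample between each pair of neighbors."""
    with wave.open(src_path, "rb") as src:
        assert src.getframerate() == 11025
        assert src.getsampwidth() == 2 and src.getnchannels() == 1
        raw = src.readframes(src.getnframes())
    samples = [int.from_bytes(raw[i:i + 2], "little", signed=True)
               for i in range(0, len(raw), 2)]
    doubled = []
    for cur, nxt in zip(samples, samples[1:] + samples[-1:]):
        doubled.append(cur)
        doubled.append((cur + nxt) // 2)  # midpoint between neighboring samples
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(22050)
        dst.writeframes(b"".join(s.to_bytes(2, "little", signed=True)
                                 for s in doubled))
```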
  • In an embodiment, the audio file is processed as described above and then input to a voice recognition engine 620 (similar to the voice recognition engine 334 of FIG. 3) for generation of an editable text file 624, an audio file 626, and a synchronization file 630 (similar to the synchronization file 352 of FIG. 3). As previously described, once the transcribed text is approved, processing results in the generation of a read-only text file 632 and the deletion of the audio file 626, the editable text file 624, and the synchronization file 630. A user voice model 634 and a specialty-specific vocabulary 636 may be used by the voice recognition engine 620 during the process of transcribing the audio file 604 into the editable text file 624.
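The relationship among the editable text file, the synchronization file, and the underlying audio can be sketched as follows. The (word, start, end) tuples and the JSON index layout are assumptions standing in for whatever output a particular voice recognition engine actually provides.

```python
import json

def write_engine_output(recognized, text_path: str, sync_path: str) -> None:
    """Persist the recognition result as an editable text file plus a
    synchronization file mapping each word to its span in the audio.
    `recognized` is assumed to be (word, start_ms, end_ms) tuples."""
    words, index, offset = [], [], 0
    for word, start_ms, end_ms in recognized:
        index.append({"char_offset": offset, "word": word,
                      "audio_start_ms": start_ms, "audio_end_ms": end_ms})
        words.append(word)
        offset += len(word) + 1  # account for the joining space
    with open(text_path, "w") as fh:
        fh.write(" ".join(words))
    with open(sync_path, "w") as fh:
        json.dump(index, fh, indent=2)

# Example engine output: three words with their audio offsets in milliseconds.
write_engine_output([("patient", 0, 420), ("resting", 430, 900),
                     ("comfortably", 910, 1580)],
                    "note.txt", "note.sync.json")
```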
  • FIG. 7 is a flow diagram illustrating yet another method of operation in accordance with an embodiment of the invention. At step 701, patient demographics, schedule information, application data, forms, charts, and note information may be downloaded to the handheld device from a server via a data transfer mechanism. At step 702, the physician may carry the handheld device as he performs his duties. At step 704, the physician has the ability to review previous notes related to any data stored on the handheld device.
  • The physician may wish to dictate a new note. At step 706A, a patient's name may be selected by tapping on the displayed name in a list of names with a stylus on the handheld computing device screen, by navigating the list by rotating a wheel on the side of the unit, or by other suitable means of selection. At step 706B, a form type to be dictated is selected. At step 706C, the physician may dictate notes into the handheld device, using the selected form to structure note entry into specific fields on the selected form. Dictation may begin by depressing and releasing, or depressing and holding, the record button on the handheld computing device and thereafter beginning dictation. Macros and other voice commands can be used during dictation. Also, the user can navigate through sections, or fields, of a form by tapping on a desired section with the stylus on the handheld computing device screen, by rotating a wheel on the side of the unit, or by other suitable means of selection. At step 706D, the dictated notes or forms (for example, in the form of application data and audio files) may be stored in a memory of the handheld device, as sketched below. At step 706E, the physician may repeat steps 706A through 706D for the same or other patients (i.e., dictation subjects).
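A note as stored at step 706D might be modeled as below; the field names, form type, and file naming are hypothetical illustrations of associating per-field audio files and stylus entries with the selected patient and form.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class DictatedNote:
    """A note as stored on the handheld at step 706D: the selected patient,
    the selected form type, per-field audio files, and stylus entries."""
    patient_id: str
    form_type: str
    field_audio: dict = field(default_factory=dict)     # form field -> .wav path
    stylus_entries: dict = field(default_factory=dict)  # check boxes, etc.

note = DictatedNote(patient_id="P-0042", form_type="progress_note")
note.field_audio["history"] = "p0042_history.wav"
note.stylus_entries["follow_up_needed"] = True
with open("p0042.form.json", "w") as fh:
    json.dump(asdict(note), fh, indent=2)
```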
  • At step 708, which may be at the day's conclusion or at any point during the day, the physician may transfer audio files and application data to a server via a data transfer mechanism. At step 710, the transferred application data is synchronized with the server's application data. At step 712, audio files are filtered, processed, and synchronized for storage and further processing or editing on the server. Further processing includes applying the audio file to a voice recognition engine to generate transcribed text, which is stored in an editable transcribed text file. An index file is also generated. The index file associates each word of text in the editable transcribed text file with the location of the corresponding sound in the audio file.
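Given an index file like the one sketched earlier, locating the audio behind an editor's word selection (as used during editing at step 714) reduces to a lookup. Treating selections as word indexes is an assumption of the sketch.

```python
import json

def audio_span_for_selection(sync_path: str, first_word: int, last_word: int):
    """Return (start_ms, end_ms) for the dictated audio backing the editor's
    selected word range, using the index (synchronization) file."""
    with open(sync_path) as fh:
        index = json.load(fh)
    selected = index[first_word:last_word + 1]
    return selected[0]["audio_start_ms"], selected[-1]["audio_end_ms"]

# Example: the audio span covering the second and third words of the note.
print(audio_span_for_selection("note.sync.json", 1, 2))
```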
  • At step 714, a first user (e.g., any network user) at any networked workstation in communication with the server can add dictation to any given field in any given note or form. In an embodiment, free-form dictation for the given field may be limited to the first user, while any other authorized user (i.e., a second user) may be limited to dictating corrections to the text for that given field. The second user may, of course, enter free-form dictation into any other empty field. In addition, at step 714, any user at any networked workstation in communication with the server can edit any given field in any given note or form. Editing may involve the use of the synchronized audio file, which, as described in other embodiments herein, can be used to allow the editor to hear the recorded voice of the person who dictated the text in question. An editor may select a word or group of words for recorded audio playback. The editor may make corrections and/or alterations to the editable transcribed text file. At step 716, the transcribed text in the editable transcribed text file may be approved. If the transcribed text is not approved, then the user may return to step 714 for further dictation and/or editing of the transcribed text. If the transcribed text is approved, then at step 718 the voice models of the users who provided dictation to create the note or form are updated. At step 720, the approved transcribed text is stored in a file in a read-only format. The read-only file may be signed and stored with an electronic signature. At step 722, the editable transcribed text file, audio file, and index file are deleted from the memory of the server. At step 724, reports may be generated.
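Steps 716 through 722 might be realized along the following lines, assuming POSIX-style file permissions; marking the file read-only here stands in for whatever storage-level protection and electronic signing a given deployment uses.

```python
import os
import stat

def approve_note(editable_txt: str, audio_wav: str, index_json: str,
                 readonly_path: str) -> None:
    """On approval: persist the transcribed text in read-only form (step 720),
    then delete the editable text, audio, and index files (step 722)."""
    with open(editable_txt) as src, open(readonly_path, "w") as dst:
        dst.write(src.read())
    os.chmod(readonly_path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
    for working_file in (editable_txt, audio_wav, index_json):
        os.remove(working_file)
```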
  • The disclosed embodiments are illustrative of the various ways in which the present invention may be practiced. Other embodiments can be implemented by those skilled in the art without departing from the spirit and scope of the present invention.

Claims (12)

1. A method, comprising:
retrieving a plurality of data fields from a memory;
populating at least a first data field in the plurality of data fields with default data;
populating at least a second data field in the plurality of data fields with non-default data downloaded from a database; and
populating at least a third data field in the plurality of data fields with data generated by a voice recognition engine.
2. The method of claim 1, wherein the default data is a frequently used phrase.
3. The method of claim 1, wherein the default data is a normal.
4. The method of claim 1, wherein the non-default data downloaded from the database is medical patient data.
5. The method of claim 1, wherein the non-default data downloaded from the database is downloaded in real-time.
6. The method of claim 1, wherein the database is a practice management system.
7. The method of claim 1, wherein the data generated by a voice recognition engine is data associated with a voice command.
8. The method of claim 1, wherein the data generated by a voice recognition engine is data associated with a macro.
9. The method of claim 1, wherein the data generated by a voice recognition engine is data associated with a macro.
10. The method of claim 1, wherein the data generated by a voice recognition engine is decoded as a macro name having data associated therewith, and populating the third field with the data associated with the macro name.
11. The method of claim 1, wherein the data generated by a voice recognition engine is decoded as a macro name having a variable name associated therewith, and
prompting a user for a value associated with the variable name.
12. The method of claim 11, wherein the prompting comprises one of an audible cue, a visual cue, and a combined audible and visual cue.
US11/014,807 2001-03-29 2004-12-20 Method and apparatus for voice dictation and document production Abandoned US20050102146A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/014,807 US20050102146A1 (en) 2001-03-29 2004-12-20 Method and apparatus for voice dictation and document production

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US27945801P 2001-03-29 2001-03-29
US09/901,906 US6834264B2 (en) 2001-03-29 2001-07-11 Method and apparatus for voice dictation and document production
US11/014,807 US20050102146A1 (en) 2001-03-29 2004-12-20 Method and apparatus for voice dictation and document production

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/901,906 Continuation US6834264B2 (en) 2001-03-29 2001-07-11 Method and apparatus for voice dictation and document production

Publications (1)

Publication Number Publication Date
US20050102146A1 true US20050102146A1 (en) 2005-05-12

Family

ID=26959676

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/901,906 Expired - Fee Related US6834264B2 (en) 2001-03-29 2001-07-11 Method and apparatus for voice dictation and document production
US11/014,807 Abandoned US20050102146A1 (en) 2001-03-29 2004-12-20 Method and apparatus for voice dictation and document production

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/901,906 Expired - Fee Related US6834264B2 (en) 2001-03-29 2001-07-11 Method and apparatus for voice dictation and document production

Country Status (3)

Country Link
US (2) US6834264B2 (en)
AU (1) AU2002254369A1 (en)
WO (1) WO2002080139A2 (en)

Families Citing this family (225)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US20020188362A1 (en) * 2001-06-08 2002-12-12 Tung-Liang Li Digital recording microphone
US8200988B2 (en) * 2001-08-03 2012-06-12 Intel Corporation Firmware security key upgrade algorithm
US7246060B2 (en) * 2001-11-06 2007-07-17 Microsoft Corporation Natural input recognition system and method using a contextual mapping engine and adaptive user bias
JP3542578B2 (en) * 2001-11-22 2004-07-14 キヤノン株式会社 Speech recognition apparatus and method, and program
US6990445B2 (en) * 2001-12-17 2006-01-24 Xl8 Systems, Inc. System and method for speech recognition and transcription
US20030115169A1 (en) * 2001-12-17 2003-06-19 Hongzhuan Ye System and method for management of transcribed documents
US20040054538A1 (en) * 2002-01-03 2004-03-18 Peter Kotsinadelis My voice voice agent for use with voice portals and related products
US20030154085A1 (en) * 2002-02-08 2003-08-14 Onevoice Medical Corporation Interactive knowledge base system
US7562053B2 (en) 2002-04-02 2009-07-14 Soluble Technologies, Llc System and method for facilitating transactions between two or more parties
CN1204489C (en) * 2002-04-03 2005-06-01 英华达(南京)科技有限公司 Electronic installation and method for synchronous play of associated voices and words
US20030204498A1 (en) * 2002-04-30 2003-10-30 Lehnert Bernd R. Customer interaction reporting
US7590534B2 (en) * 2002-05-09 2009-09-15 Healthsense, Inc. Method and apparatus for processing voice data
US7259906B1 (en) * 2002-09-03 2007-08-21 Cheetah Omni, Llc System and method for voice control of medical devices
US7539086B2 (en) * 2002-10-23 2009-05-26 J2 Global Communications, Inc. System and method for the secure, real-time, high accuracy conversion of general-quality speech into text
US7774694B2 (en) 2002-12-06 2010-08-10 3M Innovation Properties Company Method and system for server-based sequential insertion processing of speech recognition results
US20050096910A1 (en) * 2002-12-06 2005-05-05 Watson Kirk L. Formed document templates and related methods and systems for automated sequential insertion of speech recognition results
US7444285B2 (en) * 2002-12-06 2008-10-28 3M Innovative Properties Company Method and system for sequential insertion of speech recognition results to facilitate deferred transcription services
WO2004072846A2 (en) * 2003-02-13 2004-08-26 Koninklijke Philips Electronics N.V. Automatic processing of templates with speech recognition
US20040186705A1 (en) * 2003-03-18 2004-09-23 Morgan Alexander P. Concept word management
US7263483B2 (en) * 2003-04-28 2007-08-28 Dictaphone Corporation USB dictation device
US7966188B2 (en) * 2003-05-20 2011-06-21 Nuance Communications, Inc. Method of enhancing voice interactions using visual messages
US20040243415A1 (en) * 2003-06-02 2004-12-02 International Business Machines Corporation Architecture for a speech input method editor for handheld portable devices
US7757173B2 (en) * 2003-07-18 2010-07-13 Apple Inc. Voice menu system
EP1665149A1 (en) * 2003-09-05 2006-06-07 Wifi Med LLC Cross reference to related applications
US7389236B2 (en) * 2003-09-29 2008-06-17 Sap Aktiengesellschaft Navigation and data entry for open interaction elements
US7715034B2 (en) * 2003-10-17 2010-05-11 Canon Kabushiki Kaisha Data processing device and data storage device for performing time certification of digital data
US20050091064A1 (en) * 2003-10-22 2005-04-28 Weeks Curtis A. Speech recognition module providing real time graphic display capability for a speech recognition engine
US7315612B2 (en) * 2003-11-04 2008-01-01 Verizon Business Global Llc Systems and methods for facilitating communications involving hearing-impaired parties
US7236574B2 (en) * 2003-11-04 2007-06-26 Verizon Business Global Llc Method and system for providing communication services for hearing-impaired parties
US7200208B2 (en) * 2003-11-04 2007-04-03 Mci, Llc Method and system for providing communication services for hearing-impaired parties
WO2005052785A2 (en) * 2003-11-28 2005-06-09 Koninklijke Philips Electronics N.V. Method and device for transcribing an audio signal
US7764771B2 (en) * 2003-12-24 2010-07-27 Kimberly-Clark Worldwide, Inc. Method of recording invention disclosures
US8504369B1 (en) * 2004-06-02 2013-08-06 Nuance Communications, Inc. Multi-cursor transcription editing
US20130304453A9 (en) * 2004-08-20 2013-11-14 Juergen Fritsch Automated Extraction of Semantic Content and Generation of a Structured Document from Speech
US7584103B2 (en) * 2004-08-20 2009-09-01 Multimodal Technologies, Inc. Automated extraction of semantic content and generation of a structured document from speech
US20060129532A1 (en) * 2004-12-13 2006-06-15 Taiwan Semiconductor Manufacturing Co., Ltd. Form generation system and method
US7627638B1 (en) * 2004-12-20 2009-12-01 Google Inc. Verbal labels for electronic messages
WO2006097975A1 (en) * 2005-03-11 2006-09-21 Gifu Service Co., Ltd. Voice recognition program
US20070282631A1 (en) * 2005-09-08 2007-12-06 D Ambrosia Robert Matthew System and method for aggregating and providing subscriber medical information to medical units
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
WO2007049183A1 (en) * 2005-10-27 2007-05-03 Koninklijke Philips Electronics N.V. Method and system for processing dictated information
US20070198250A1 (en) * 2006-02-21 2007-08-23 Michael Mardini Information retrieval and reporting method system
US20070201631A1 (en) * 2006-02-24 2007-08-30 Intervoice Limited Partnership System and method for defining, synthesizing and retrieving variable field utterances from a file server
FR2902542B1 (en) * 2006-06-16 2012-12-21 Gilles Vessiere Consultants SEMANTIC, SYNTAXIC AND / OR LEXICAL CORRECTION DEVICE, CORRECTION METHOD, RECORDING MEDIUM, AND COMPUTER PROGRAM FOR IMPLEMENTING SAID METHOD
WO2007150005A2 (en) * 2006-06-22 2007-12-27 Multimodal Technologies, Inc. Automatic decision support
US8286071B1 (en) * 2006-06-29 2012-10-09 Escription, Inc. Insertion of standard text in transcriptions
US8521510B2 (en) 2006-08-31 2013-08-27 At&T Intellectual Property Ii, L.P. Method and system for providing an automated web transcription service
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
WO2008041083A2 (en) * 2006-10-02 2008-04-10 Bighand Ltd. Digital dictation workflow system and method
US8132104B2 (en) * 2007-01-24 2012-03-06 Cerner Innovation, Inc. Multi-modal entry for electronic clinical documentation
US8515757B2 (en) * 2007-03-20 2013-08-20 Nuance Communications, Inc. Indexing digitized speech with words represented in the digitized speech
US7813929B2 (en) * 2007-03-30 2010-10-12 Nuance Communications, Inc. Automatic editing using probabilistic word substitution models
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US20080262873A1 (en) * 2007-04-18 2008-10-23 Janus Health, Inc. Patient management system and method
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8498870B2 (en) * 2008-01-24 2013-07-30 Siemens Medical Solutions Usa, Inc. Medical ontology based data and voice command processing system
US20090234675A1 (en) * 2008-03-11 2009-09-17 Surya Prakash Irakam Method and system for medical communication between health professionals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US9135809B2 (en) * 2008-06-20 2015-09-15 At&T Intellectual Property I, Lp Voice enabled remote control for a set-top box
US9230222B2 (en) * 2008-07-23 2016-01-05 The Quantum Group, Inc. System and method enabling bi-translation for improved prescription accuracy
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8126862B2 (en) * 2008-08-11 2012-02-28 Mcdermott Matt System for enhanced customer service
US8898568B2 (en) 2008-09-09 2014-11-25 Apple Inc. Audio user interface
GB2477653B (en) * 2008-10-10 2012-11-14 Nuance Communications Inc Generating and processing forms for receiving speech data
US20100131280A1 (en) * 2008-11-25 2010-05-27 General Electric Company Voice recognition system for medical devices
US20100169092A1 (en) * 2008-11-26 2010-07-01 Backes Steven J Voice interface ocx
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US20100268534A1 (en) * 2009-04-17 2010-10-21 Microsoft Corporation Transcription, archiving and threading of voice communications
US10255566B2 (en) 2011-06-03 2019-04-09 Apple Inc. Generating and processing task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US9111538B2 (en) * 2009-09-30 2015-08-18 T-Mobile Usa, Inc. Genius button secondary commands
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US8374864B2 (en) * 2010-03-17 2013-02-12 Cisco Technology, Inc. Correlation of transcribed text with corresponding audio
US9377373B2 (en) * 2010-10-05 2016-06-28 Infraware, Inc. System and method for analyzing verbal records of dictation using extracted verbal features
US8959102B2 (en) 2010-10-08 2015-02-17 Mmodal Ip Llc Structured searching of dynamic structured document corpuses
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
KR20120121070A (en) * 2011-04-26 2012-11-05 삼성전자주식회사 Remote health care system and health care method using the same
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US20120323574A1 (en) * 2011-06-17 2012-12-20 Microsoft Corporation Speech to text medical forms
US8781829B2 (en) 2011-06-19 2014-07-15 Mmodal Ip Llc Document extension in dictation-based document generation workflow
US8589160B2 (en) * 2011-08-19 2013-11-19 Dolbey & Company, Inc. Systems and methods for providing an electronic dictation interface
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US9432611B1 (en) 2011-09-29 2016-08-30 Rockwell Collins, Inc. Voice radio tuning
US9922651B1 (en) * 2014-08-13 2018-03-20 Rockwell Collins, Inc. Avionics text entry, cursor control, and display format selection via voice recognition
US20130325465A1 (en) * 2011-11-23 2013-12-05 Advanced Medical Imaging and Teleradiology, LLC Medical image reading system
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9569593B2 (en) * 2012-03-08 2017-02-14 Nuance Communications, Inc. Methods and apparatus for generating clinical reports
US9569594B2 (en) 2012-03-08 2017-02-14 Nuance Communications, Inc. Methods and apparatus for generating clinical reports
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US8909526B2 (en) * 2012-07-09 2014-12-09 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
US8924211B2 (en) 2012-07-09 2014-12-30 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
US9064492B2 (en) * 2012-07-09 2015-06-23 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
CN103577072A (en) * 2012-07-26 2014-02-12 中兴通讯股份有限公司 Terminal voice assistant editing method and device
KR20150046100A (en) 2012-08-10 2015-04-29 뉘앙스 커뮤니케이션즈, 인코포레이티드 Virtual agent communication for electronic devices
US9946699B1 (en) * 2012-08-29 2018-04-17 Intuit Inc. Location-based speech recognition for preparation of electronic tax return
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
KR20230137475A (en) 2013-02-07 2023-10-04 애플 인크. Voice trigger for a digital assistant
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
WO2014144579A1 (en) 2013-03-15 2014-09-18 Apple Inc. System and method for updating an adaptive speech recognition model
US9916295B1 (en) * 2013-03-15 2018-03-13 Richard Henry Dana Crawford Synchronous context alignments
AU2014233517B2 (en) 2013-03-15 2017-05-25 Apple Inc. Training an at least partial voice command system
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
EP3937002A1 (en) 2013-06-09 2022-01-12 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
AU2014278595B2 (en) 2013-06-13 2017-04-06 Apple Inc. System and method for emergency calls initiated by voice command
DE112014003653B4 (en) 2013-08-06 2024-04-18 Apple Inc. Automatically activate intelligent responses based on activities from remote devices
US9479931B2 (en) 2013-12-16 2016-10-25 Nuance Communications, Inc. Systems and methods for providing a virtual assistant
US10534623B2 (en) 2013-12-16 2020-01-14 Nuance Communications, Inc. Systems and methods for providing a virtual assistant
US9571645B2 (en) 2013-12-16 2017-02-14 Nuance Communications, Inc. Systems and methods for providing a virtual assistant
US9804820B2 (en) * 2013-12-16 2017-10-31 Nuance Communications, Inc. Systems and methods for providing a virtual assistant
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
AU2015266863B2 (en) 2014-05-30 2018-03-15 Apple Inc. Multi-command single utterance input method
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10210204B2 (en) * 2014-06-16 2019-02-19 Jeffrey E. Koziol Voice actuated data retrieval and automated retrieved data display
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10950329B2 (en) 2015-03-13 2021-03-16 Mmodal Ip Llc Hybrid human and computer-assisted coding workflow
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US9996517B2 (en) 2015-11-05 2018-06-12 Lenovo (Singapore) Pte. Ltd. Audio input of field entries
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
WO2017201041A1 (en) 2016-05-17 2017-11-23 Hassel Bruce Interactive audio validation/assistance system and methodologies
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179588B1 (en) 2016-06-09 2019-02-22 Apple Inc. Intelligent automated assistant in a home environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
JP6762819B2 (en) * 2016-09-14 2020-09-30 株式会社東芝 Input support device and program
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
WO2018136417A1 (en) 2017-01-17 2018-07-26 Mmodal Ip Llc Methods and systems for manifestation and transmission of follow-up notifications
JP2018191145A (en) * 2017-05-08 2018-11-29 オリンパス株式会社 Voice collection device, voice collection method, voice collection program, and dictation method
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US11316865B2 (en) 2017-08-10 2022-04-26 Nuance Communications, Inc. Ambient cooperative intelligence system and method
US20190066823A1 (en) 2017-08-10 2019-02-28 Nuance Communications, Inc. Automated Clinical Documentation System and Method
WO2019103930A1 (en) 2017-11-22 2019-05-31 Mmodal Ip Llc Automated code feedback system
US11222716B2 (en) 2018-03-05 2022-01-11 Nuance Communications System and method for review of automated clinical documentation from recorded audio
US11250382B2 (en) 2018-03-05 2022-02-15 Nuance Communications, Inc. Automated clinical documentation system and method
EP3762921A4 (en) * 2018-03-05 2022-05-04 Nuance Communications, Inc. Automated clinical documentation system and method
CN110413966A (en) * 2018-04-27 2019-11-05 富士施乐株式会社 Document management apparatus and non-transitory computer-readable medium
US11069368B2 (en) * 2018-12-18 2021-07-20 Colquitt Partners, Ltd. Glasses with closed captioning, voice recognition, volume of speech detection, and translation capabilities
CN110211581B (en) * 2019-05-16 2021-04-20 济南市疾病预防控制中心 Laboratory automatic voice recognition recording identification system and method
US11227679B2 (en) 2019-06-14 2022-01-18 Nuance Communications, Inc. Ambient clinical intelligence system and method
US11043207B2 (en) 2019-06-14 2021-06-22 Nuance Communications, Inc. System and method for array data simulation and customized acoustic modeling for ambient ASR
US11216480B2 (en) 2019-06-14 2022-01-04 Nuance Communications, Inc. System and method for querying data points from graph data structures
US11531807B2 (en) 2019-06-28 2022-12-20 Nuance Communications, Inc. System and method for customized text macros
US11670408B2 (en) 2019-09-30 2023-06-06 Nuance Communications, Inc. System and method for review of automated clinical documentation
EP3970057A1 (en) * 2019-10-15 2022-03-23 Google LLC Voice-controlled entry of content into graphical user interfaces
US11507345B1 (en) * 2020-09-23 2022-11-22 Suki AI, Inc. Systems and methods to accept speech input and edit a note upon receipt of an indication to edit
US11222103B1 (en) 2020-10-29 2022-01-11 Nuance Communications, Inc. Ambient cooperative intelligence system and method
US20220139571A1 (en) * 2020-11-03 2022-05-05 Nuance Communications, Inc. Communication System and Method
CA3230884A1 (en) * 2021-09-07 2023-03-16 Charles Martin Iv. Medical procedure documentation system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5818800A (en) 1992-04-06 1998-10-06 Barker; Bruce J. Voice recording device having portable and local modes of operation
US5519808A (en) 1993-03-10 1996-05-21 Lanier Worldwide, Inc. Transcription interface for a word processing station
US5857099A (en) 1996-09-27 1999-01-05 Allvoice Computing Plc Speech-to-text dictation system with audio message capability
US5909667A (en) 1997-03-05 1999-06-01 International Business Machines Corporation Method and apparatus for fast voice selection of error words in dictated text

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5146439A (en) * 1989-01-04 1992-09-08 Pitney Bowes Inc. Records management system having dictation/transcription capability
US5148366A (en) * 1989-10-16 1992-09-15 Medical Documenting Systems, Inc. Computer-assisted documentation system for enhancing or replacing the process of dictating and transcribing
US5267155A (en) * 1989-10-16 1993-11-30 Medical Documenting Systems, Inc. Apparatus and method for computer-assisted document generation
US5526407A (en) * 1991-09-30 1996-06-11 Riverrun Technology Method and apparatus for managing information
US5530950A (en) * 1993-07-10 1996-06-25 International Business Machines Corporation Audio data processing
US5960447A (en) * 1995-11-13 1999-09-28 Holt; Douglas Word tagging and editing system for speech recognition
US6128002A (en) * 1996-07-08 2000-10-03 Leiper; Thomas System for manipulation and display of medical images
US5772585A (en) * 1996-08-30 1998-06-30 Emc, Inc System and method for managing patient medical records
US5960399A (en) * 1996-12-24 1999-09-28 Gte Internetworking Incorporated Client/server speech processor/recognizer
US5995936A (en) * 1997-02-04 1999-11-30 Brais; Louis Report generation system and method for capturing prose, audio, and video by voice command and automatically linking sound and image to formatted text locations
US6282154B1 (en) * 1998-11-02 2001-08-28 Howarlene S. Webb Portable hands-free digital voice recording and transcription device
US6122614A (en) * 1998-11-20 2000-09-19 Custom Speech Usa, Inc. System and method for automating transcription services
US6816837B1 (en) * 1999-05-06 2004-11-09 Hewlett-Packard Development Company, L.P. Voice macros for scanner control
US6259657B1 (en) * 1999-06-28 2001-07-10 Robert S. Swinney Dictation system capable of processing audio information at a remote location
US6813603B1 (en) * 2000-01-26 2004-11-02 Korteam International, Inc. System and method for user controlled insertion of standardized text in user selected fields while dictating text entries for completing a form
US6684276B2 (en) * 2001-03-28 2004-01-27 Thomas M. Walker Patient encounter electronic medical record system, method, and computer product

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7251610B2 (en) * 2000-09-20 2007-07-31 Epic Systems Corporation Clinical documentation system for use by multiple caregivers
US20020062229A1 (en) * 2000-09-20 2002-05-23 Christopher Alban Clinical documentation system for use by multiple caregivers
US20020120472A1 (en) * 2000-12-22 2002-08-29 Dvorak Carl D. System and method for integration of health care records
US20030033294A1 (en) * 2001-04-13 2003-02-13 Walker Jay S. Method and apparatus for marketing supplemental information
US8583430B2 (en) * 2001-09-06 2013-11-12 J. Albert Avila Semi-automated intermodal voice to data transcription method and apparatus
US20030125950A1 (en) * 2001-09-06 2003-07-03 Avila J. Albert Semi-automated intermodal voice to data transcription method and apparatus
US20030154110A1 (en) * 2001-11-20 2003-08-14 Ervin Walter Method and apparatus for wireless access to a health care information system
US20030130872A1 (en) * 2001-11-27 2003-07-10 Carl Dvorak Methods and apparatus for managing and using inpatient healthcare information
US20030216945A1 (en) * 2002-03-25 2003-11-20 Dvorak Carl D. Method for analyzing orders and automatically reacting to them with appropriate responses
US20030220815A1 (en) * 2002-03-25 2003-11-27 Cathy Chang System and method of automatically determining and displaying tasks to healthcare providers in a care-giving setting
US20030220816A1 (en) * 2002-04-30 2003-11-27 Andy Giesler System and method for managing interactions between machine-generated and user-defined patient lists
US20030220821A1 (en) * 2002-04-30 2003-11-27 Ervin Walter System and method for managing and reconciling asynchronous patient data
US20030220817A1 (en) * 2002-05-15 2003-11-27 Steve Larsen System and method of formulating appropriate subsets of information from a patient's computer-based medical record for release to various requesting entities
US20040010465A1 (en) * 2002-05-20 2004-01-15 Cliff Michalski Method and apparatus for exception based payment posting
US20040010422A1 (en) * 2002-05-20 2004-01-15 Cliff Michalski Method and apparatus for batch-processed invoicing
US7979294B2 (en) 2002-07-31 2011-07-12 Epic Systems Corporation System and method for providing decision support to appointment schedulers in a healthcare setting
US20040059714A1 (en) * 2002-07-31 2004-03-25 Larsen Steven J. System and method for providing decision support to appointment schedulers in a healthcare setting
US20040172520A1 (en) * 2002-09-19 2004-09-02 Michael Smit Methods and apparatus for visually creating complex expressions that inform a rules-based system of clinical decision support
US8155957B1 (en) * 2003-11-21 2012-04-10 Takens Luann C Medical transcription system including automated formatting means and associated method
US20060256739A1 (en) * 2005-02-19 2006-11-16 Kenneth Seier Flexible multi-media data management
US8165876B2 (en) * 2005-10-05 2012-04-24 Nuance Communications, Inc. Method and apparatus for data capture using a voice activated workstation
US20080235032A1 (en) * 2005-10-05 2008-09-25 Ossama Emam Method and Apparatus for Data Capture Using a Voice Activated Workstation
US20110093445A1 (en) * 2006-04-07 2011-04-21 Pp Associates, Lp Report Generation with Integrated Quality Management
US8326887B2 (en) * 2006-04-07 2012-12-04 Pp Associates, Lp Report generation with integrated quality management
US20100070263A1 (en) * 2006-11-30 2010-03-18 National Institute Of Advanced Industrial Science And Technology Speech data retrieving web site system
US20080147394A1 (en) * 2006-12-18 2008-06-19 International Business Machines Corporation System and method for improving an interactive experience with a speech-enabled system through the use of artificially generated white noise
US8306816B2 (en) 2007-05-25 2012-11-06 Tigerfish Rapid transcription by dispersing segments of source material to a plurality of transcribing stations
WO2008148102A1 (en) * 2007-05-25 2008-12-04 Tigerfish Method and system for rapid transcription
US9870796B2 (en) 2007-05-25 2018-01-16 Tigerfish Editing video using a corresponding synchronized written transcript by selection from a text viewer
US9141938B2 (en) 2007-05-25 2015-09-22 Tigerfish Navigating a synchronized transcript of spoken source material from a viewer window
WO2009039236A1 (en) * 2007-09-18 2009-03-26 Dolbey And Company Methods and systems for verifying the identity of a subject of a dictation using text to speech conversion and playback
US20090077038A1 (en) * 2007-09-18 2009-03-19 Dolbey And Company Methods and Systems for Verifying the Identity of a Subject of a Dictation Using Text to Speech Conversion and Playback
US8972269B2 (en) * 2008-12-01 2015-03-03 Adobe Systems Incorporated Methods and systems for interfaces allowing limited edits to transcripts
US20140249813A1 (en) * 2008-12-01 2014-09-04 Adobe Systems Incorporated Methods and Systems for Interfaces Allowing Limited Edits to Transcripts
US9201965B1 (en) * 2009-09-30 2015-12-01 Cisco Technology, Inc. System and method for providing speech recognition using personal vocabulary in a network environment
US9082310B2 (en) 2010-02-10 2015-07-14 Mmodal Ip Llc Providing computable guidance to relevant evidence in question-answering systems
US20110218802A1 (en) * 2010-03-08 2011-09-08 Shlomi Hai Bouganim Continuous Speech Recognition
US20110246194A1 (en) * 2010-03-30 2011-10-06 Nvoq Incorporated Indicia to indicate a dictation application is capable of receiving audio
US9465795B2 (en) 2010-12-17 2016-10-11 Cisco Technology, Inc. System and method for providing feeds based on activity in a network environment
US9870405B2 (en) 2011-05-31 2018-01-16 Cisco Technology, Inc. System and method for evaluating results of a search query in a network environment
CN102956125A (en) * 2011-08-25 2013-03-06 骅钜数位科技有限公司 Cloud digital phonetic teaching recording system
US8255218B1 (en) 2011-09-26 2012-08-28 Google Inc. Directing dictation into input fields
US10156956B2 (en) 2012-08-13 2018-12-18 Mmodal Ip Llc Maintaining a discrete data representation that corresponds to information contained in free-form text
US20140180687A1 (en) * 2012-10-01 2014-06-26 Carl O. Kamp, III Method And Apparatus For Automatic Conversion Of Audio Data To Electronic Fields of Text Data
US8543397B1 (en) 2012-10-11 2013-09-24 Google Inc. Mobile device voice activation
US11102311B2 (en) * 2016-03-25 2021-08-24 Experian Health, Inc. Registration during downtime
US10996922B2 (en) 2017-04-30 2021-05-04 Samsung Electronics Co., Ltd. Electronic apparatus for processing user utterance
US11392217B2 (en) 2020-07-16 2022-07-19 Mobius Connective Technologies, Ltd. Method and apparatus for remotely processing speech-to-text for entry onto a destination computing system
US20220375471A1 (en) * 2020-07-24 2022-11-24 Bola Technologies, Inc. Systems and methods for voice assistant for electronic health records

Also Published As

Publication number Publication date
AU2002254369A1 (en) 2002-10-15
US6834264B2 (en) 2004-12-21
WO2002080139A2 (en) 2002-10-10
WO2002080139A3 (en) 2003-05-22
US20020143533A1 (en) 2002-10-03

Similar Documents

Publication Publication Date Title
US6834264B2 (en) Method and apparatus for voice dictation and document production
US11586808B2 (en) Insertion of standard text in transcription
US11704434B2 (en) Transcription data security
US20100169092A1 (en) Voice interface ocx
US6122614A (en) System and method for automating transcription services
US7236932B1 (en) Method of and apparatus for improving productivity of human reviewers of automatically transcribed documents generated by media conversion systems
US7979281B2 (en) Methods and systems for creating a second generation session file
US6961699B1 (en) Automated transcription system and method using two speech converting instances and computer-assisted correction
EP1183680B1 (en) Automated transcription system and method using two speech converting instances and computer-assisted correction
EP3138274B1 (en) Methods and apparatus for associating dictation with an electronic record
US8504369B1 (en) Multi-cursor transcription editing
US7516070B2 (en) Method for simultaneously creating audio-aligned final and verbatim text with the assistance of a speech recognition program as may be useful in form completion using a verbal entry method
US8719027B2 (en) Name synthesis
US20070245308A1 (en) Flexible XML tagging
US20020069056A1 (en) Methods and systems for generating documents from voice interactions
US20120173281A1 (en) Automated data entry and transcription system, especially for generation of medical reports by an attending physician
US20070124142A1 (en) Voice enabled knowledge system
US20070081428A1 (en) Transcribing dictation containing private information
US8275613B2 (en) All voice transaction data capture—dictation system
US20020069057A1 (en) Methods for peer to peer sharing of voice enabled document templates
US20140344679A1 (en) Systems and methods for creating a document
WO2000046787A2 (en) System and method for automating transcription services
US20070033575A1 (en) Software for linking objects using an object-based interface

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION