ChatGPT-maker OpenAI introduced Whisper two years ago as an AI tool that transcribes speech to text. Now, AI healthcare company Nabla uses the tool to help its 45,000 clinicians transcribe medical conversations across more than 85 organizations, including the University of Iowa Health Care.
However, new research shows that Whisper has been “hallucinating,” or inserting statements that no one actually said into transcripts of conversations, raising the question of how quickly medical facilities should adopt AI tools that still produce errors.
According to the Associated Press, a University of Michigan researcher found hallucinations in 80% of the Whisper transcriptions he inspected. An unnamed developer found hallucinations in about half of the more than 100 hours of Whisper transcriptions he reviewed.
→ Continue reading at Entrepreneur