Sun May 04, 2025 10:32 pm
The update to voice transcription is great, with one small caveat: it doesn't consistently identify speakers. Partway through a recording it fails to recognize a previously identified speaker, and all of the text is lost. Since you already have an AI to handle different noise environments, could an AI be trained before transcribing by pointing it at a sample of the speaker to use as a filter? Just a thought.
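
To sketch what I mean (purely illustrative, not how your tool works internally; I'm assuming an open-source speaker-embedding library like Resemblyzer, and the file names and threshold are placeholders), the idea is to embed a short enrollment sample of the speaker and compare each segment of the recording against it:

# Rough sketch: decide whether a segment matches an enrolled speaker sample,
# using speaker embeddings (Resemblyzer assumed here purely for illustration).
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Embed the enrollment sample the user points the AI at before transcribing.
reference = encoder.embed_utterance(preprocess_wav("speaker_sample.wav"))

# Embed one audio segment from the recording being transcribed.
segment = encoder.embed_utterance(preprocess_wav("recording_segment.wav"))

# The embeddings are L2-normalized, so the dot product is cosine similarity.
similarity = float(np.dot(reference, segment))

# The threshold is a guess and would need tuning on real material.
if similarity > 0.75:
    print("Segment attributed to the enrolled speaker")
else:
    print("Segment treated as a different speaker")

Something along those lines could keep a speaker's text attached to them even when the mid-recording identification slips.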
Keep up the great work on DR 20; the ability to point at a timeline for transcription is a big help (although I had to scrap all the work put into the creation).
Al Edlund