Automated speech recognition just reached human parity
The goal was 5.1 per cent: the word error rate measured for human transcribers. A team from the Microsoft Speech and Dialog Research Group has now matched that figure with its automated speech recognition system. To improve acoustic modelling, the team combined convolutional neural networks with bidirectional long short-term memory (LSTM) networks. This benchmark was 25 years in the making, and the next step will be to train these neural networks to understand speech after recognising it.
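For the curious, here is a minimal sketch of what a CNN front-end feeding a bidirectional LSTM can look like, written in PyTorch. This is an illustration of the general architecture only, not the researchers' actual model; all layer sizes, names, and the class count are made-up assumptions.

```python
import torch
import torch.nn as nn

class CnnBiLstmAcousticModel(nn.Module):
    """Illustrative sketch (not Microsoft's model): a small CNN over
    log-mel spectrogram frames, followed by a bidirectional LSTM that
    produces per-frame acoustic class scores."""

    def __init__(self, n_mels=40, hidden=128, n_classes=100):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local time-frequency patterns
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),  # downsample along frequency only, keep time resolution
        )
        # Bidirectional LSTM reads the frame sequence forwards and backwards
        self.lstm = nn.LSTM(16 * (n_mels // 2), hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)  # 2x hidden: both directions

    def forward(self, x):
        # x: (batch, time, n_mels) log-mel features
        b, t, _ = x.shape
        h = self.conv(x.unsqueeze(1))            # (b, 16, t, n_mels // 2)
        h = h.permute(0, 2, 1, 3).reshape(b, t, -1)  # flatten channels x freq per frame
        h, _ = self.lstm(h)                      # (b, t, 2 * hidden)
        return self.out(h)                       # per-frame class scores

model = CnnBiLstmAcousticModel()
scores = model(torch.randn(2, 50, 40))  # 2 utterances, 50 frames, 40 mel bins
print(scores.shape)  # torch.Size([2, 50, 100])
```

The CNN captures local patterns in the spectrogram, while the bidirectional LSTM lets each frame's prediction draw on both past and future context, which is what distinguishes it from a plain unidirectional model.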
Congratulations to the researchers from the Microsoft Speech and Dialog Research Group for their amazing work.
For the full article, please look here.