Google voice searches just got a lot faster and more accurate.
The search giant this week said it has built "better neural network acoustic models," which means your phone should better understand what you're trying to dictate.
"In addition to requiring much lower computational resources, the new models are more accurate, robust to noise, and faster to respond to voice search queries," the Google Speech Team wrote in a blog post.
Google gave voice search a big upgrade in 2012 by adopting something known as Deep Neural Networks (DNNs). But three years is an eternity in tech terms, so this week's upgrade means better results that are "blazingly fast," Google said.
When you speak, speech-recognition software breaks up what you say into separate slices, each 10 milliseconds long. Run those slices through the Google-bots and your phone "reconciles all this information to determine the sentence the user is speaking."
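To give a rough idea of what that framing step looks like, here is a minimal sketch in Python. It is not Google's actual pipeline; the 16 kHz sample rate and the simple reshape are assumptions chosen just to illustrate slicing audio into 10-millisecond chunks.

```python
import numpy as np

def frame_audio(samples, sample_rate=16000, frame_ms=10):
    """Split a 1-D audio signal into consecutive 10 ms frames.

    Illustrative sketch only, not Google's feature pipeline; 16 kHz is an
    assumed, typical sample rate for speech audio.
    """
    frame_len = int(sample_rate * frame_ms / 1000)   # samples per 10 ms frame
    n_frames = len(samples) // frame_len             # drop any trailing remainder
    return np.reshape(samples[:n_frames * frame_len], (n_frames, frame_len))

# One second of (silent) audio at 16 kHz yields 100 frames of 160 samples each.
frames = frame_audio(np.zeros(16000))
print(frames.shape)  # (100, 160)
```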
Try, for example, saying "OK Google, where is the nearest museum?" The software's acoustic models use feedback loops to differentiate between similar sounds. The word "museum" is broken into /m j u z i @ m/ in phonetic notation.
And while it would normally be difficult to discern the separate /j/ and /u/ sounds, "in truth the recognizer doesn't care where exactly that transition happens," the blog said. "All it cares about is that these sounds were spoken."
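A simplified way to picture that idea: the model labels each 10 ms frame with a phoneme, and the recognizer only keeps the sequence of distinct sounds, not the exact frame where one sound hands off to the next. The sketch below is an illustration of that collapsing step, not Google's algorithm; the blank symbol "-" and the frame labels are assumptions made for the example.

```python
def collapse_frames(frame_labels, blank="-"):
    """Collapse per-frame phoneme labels into the spoken phoneme sequence.

    Illustrative only: keeps which sounds were spoken while ignoring
    exactly where one phoneme transitions into the next.
    """
    phonemes = []
    prev = None
    for label in frame_labels:
        if label != blank and label != prev:
            phonemes.append(label)
        prev = label
    return phonemes

# Several frames can map to the same phoneme; the precise boundary between
# /j/ and /u/ no longer matters once repeats and blanks are collapsed.
print(collapse_frames(["m", "m", "j", "j", "u", "u", "u", "-", "z", "i", "@", "m"]))
# ['m', 'j', 'u', 'z', 'i', '@', 'm']
```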
Test the improvements for yourself on voice searches and commands in the iOS and Android Google app and for dictation on Android devices.
Source: pcmag.com