Siri Could (Finally) Get Better at Speech Recognition
Apple’s (NASDAQ:AAPL) Siri voice assistant may be getting an upgrade that would make it easier to dictate where you want to go or what you want to search.
Wired reports that neural network algorithms are becoming mainstream now, five years after Microsoft (NASDAQ:MSFT) engineers began working with the University of Toronto's Geoff Hinton, a deep learning expert, to apply neural network machine learning models to speech recognition. Those experiments yielded a 25 percent improvement in accuracy. Neural network algorithms now figure into Google's (NASDAQ:GOOG) Android voice recognition technology and are featured in Microsoft's Skype Translate. Apple has yet to make the switch to neural network algorithms, but Siri might be about to get an upgrade.
Though Apple remains secretive, it's commonly believed that the company licensed voice recognition technology from Nuance (NASDAQ:NUAN) to power the back end of its digital assistant. According to Wired, Apple has formed a speech recognition team to replace Nuance and bring neural networks to Siri. The company has hired speech recognition researchers from Nuance and from academic institutions, and Microsoft's Peter Lee tells Wired that he expects Apple to start using neural networks within six months. The shift would make Siri more accurate, an improvement users have long hoped for.
9to5Mac traces the history of the various rumors that Apple would build its own speech recognition solution. In 2011, the United States Patent and Trademark Office granted the company a patent related to text-to-speech features, which figure into machine-generated speech. Also in 2011, Norman Winarsky, co-founder of Siri, the personal assistant software company that Apple acquired, told 9to5Mac: “Theoretically, if a better speech recognition comes along (or Apple buys one), they could likely replace Nuance without too much trouble.”