
Voice Recognition


Timeline

1950s and 1960s: Baby Talk 

1952: Bell Laboratories designed the “Audrey” system, which recognized digits spoken by a single voice.

1962: IBM demonstrated its "Shoebox" machine, which could understand 16 words spoken in English.

1970s: Speech Recognition Takes Off

1971: DARPA established the Speech Understanding Research (SUR) program to develop a computer system that could understand continuous speech.

1978: Texas Instruments introduced the popular toy "Speak and Spell." It used a speech synthesis chip, which led to huge strides in the development of more human-like digital speech synthesis.

1980s: Speech Recognition Turns toward Prediction 

1985: Kurzweil's text-to-speech program could recognize 1,000 words.

1987: Speech recognition started to work its way into commercial applications for business and specialized industry.

1990s: Automatic Speech Recognition Comes to the Masses 

1990: Dragon launched the first consumer speech recognition product, Dragon Dictate.

1996: VAL, the first voice portal, was a dial-in interactive voice recognition system meant to provide information based on what the caller said over the phone.

1997: The much-improved Dragon Naturally Speaking arrived. The application recognized continuous speech at about 100 words per minute.

2000s: Speech Recognition Plateaus Until Google Comes Along

2010: Google added “personalized recognition” to Voice Search on Android phones, so that the software could record users’ voice searches and produce a more accurate speech model.

2011: Apple introduced Siri. Siri relies on cloud-based processing: it draws on what it knows about the user to generate a contextual reply, and it responds to voice input with personality. Speech recognition had gone from utility to entertainment.

http://www.itbusiness.ca/news/history-of-voice-recognition-from-audrey-to-siri/15008 

http://www.techhive.com/article/243060/speech_recognition_through_the_decades_how_we_ended_up_with_siri.html

 

Voice Recognition in other movies:

Star Trek (1966)

The ship's computer always seemed able to identify who was speaking and to distinguish between voice commands and conversation among crew members.

Star Wars Episode IV: A New Hope (1977)

The R2D2 robot could not speak English but could understand spoken instructions. The C3PO robot was designed as a translator and communicator.

Westworld (1973)

The Gunslinger, an android designed for the entertainment of guests, ends up killing them instead. His speech is characteristic of a Western gunfighter.

http://www.voice-commands.com/060.htm

 

Is it real?

In the movie, Theodore is a professional writer whose job is to compose intimate letters for people who are unwilling or unable to write them themselves. He uses a voice recognition system that understands his speech and his commands. This closely depicts the near future of voice recognition technology: machines will recognize human speech, hopefully in multiple languages and in noisy environments, and people will be able to talk to their devices, for example asking a printer to print a document or the lights to turn themselves on and off.

http://www.theguardian.com/commentisfree/2014/jan/27/what-are-the-ethics-of-human-robot-relationships
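To make the device-control idea above concrete, here is a minimal Python sketch that routes a recognized utterance to a device action. The recognize_speech() function and the device responses are hypothetical placeholders rather than a real API; an actual system would plug in a speech recognition engine and real device controls.

# Minimal sketch: routing recognized speech to household devices.
# recognize_speech() is a hypothetical stand-in for a real speech
# recognition engine; the device actions are illustrative only.

def recognize_speech() -> str:
    """Placeholder: return text transcribed from the microphone."""
    return "turn the lights on"

def handle_command(text: str) -> str:
    """Map a recognized utterance to a simple device action."""
    text = text.lower()
    if "print" in text:
        return "printer: starting print job"
    if "lights" in text and "off" in text:
        return "lights: switched off"
    if "lights" in text and "on" in text:
        return "lights: switched on"
    return "sorry, I did not understand that"

if __name__ == "__main__":
    print(handle_command(recognize_speech()))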
