VoiceRecognition

Summary

The Voice Recognition demo would use accessors to connect to the Amazon Echo: every time the word "TerraSwarm" was spoken, a Hue lightbulb would flash. A key part of this demo is that it would be created on the fly using the Cape Code Host and then deployed to a SwarmBox that would keep it running.
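As a rough sketch of the piece that would sit between the speech source and the Hue bulb, a keyword-spotting accessor could follow the standard accessor setup/initialize pattern. The port names ('text', 'flash'), the 'keyword' parameter, and the trivial substring matching below are illustrative assumptions, not part of any existing accessor in the library.

// Hypothetical keyword-spotting accessor (sketch only).
// Input 'text' is assumed to carry recognized speech as a string;
// output 'flash' is assumed to be wired to whatever flashes the Hue bulb.

exports.setup = function () {
    this.input('text', { 'type': 'string' });
    this.output('flash', { 'type': 'boolean' });
    this.parameter('keyword', { 'type': 'string', 'value': 'TerraSwarm' });
};

exports.initialize = function () {
    var self = this;
    this.addInputHandler('text', function () {
        var text = self.get('text') || '';
        var keyword = self.getParameter('keyword');
        // Case-insensitive substring match on the recognized speech.
        if (text.toLowerCase().indexOf(keyword.toLowerCase()) !== -1) {
            self.send('flash', true);
        }
    });
};

In the demo, this accessor would be composed with a speech-recognition source and a Hue accessor in Cape Code before being deployed to the SwarmBox.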
Components

Amazon Echo

Edward has one that Christopher is borrowing and will return. We need to create a Node http accessor. The Amazon Echo's wake word can only be "Alexa" or "Amazon". However, "The device also comes with a manually and voice-activated remote control which can be used in lieu of the 'wake word'." (Wikipedia) One limitation of using the Echo instead of the UI speech recognition feed is the wake word. Using the remote mitigates this somewhat, but it still requires actively waking the device. The Amazon Echo and Echo Dot FAQ says: "2. How do I know when Amazon Echo or Echo Dot are streaming my voice to the Cloud?
When Amazon Echo or Echo Dot detect the wake word, when you press the action button on top of the devices, or when you press and hold your remote's microphone button, the light ring around the top of your Amazon Echo turns blue, to indicate that Amazon Echo is streaming audio to the Cloud. When you use the wake word, the audio stream includes a fraction of a second of audio before the wake word, and closes once your question or request has been processed. Within Sounds settings in the Alexa App (Settings > [Your Device Name] > Sounds), you can enable a 'wake up sound,' a short audible tone that plays after the wake word is recognized to indicate that the device is streaming audio. You can also enable an 'end of request sound' that will play a short audible tone at the end of your request, to indicate that the connection has closed and the device is no longer streaming audio."
I was able to set up an example Alexa app that does not require the hardware. See https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/docs/java-client-sample. There is a fair amount of configuration. I can press the button, ask questions, and get back answers. There is a Java client that uses VLC (I think) to get the audio. I think the Java client connects to a node server that then connects to Amazon, but I'm not sure. I believe that each time the node server restarts, one has to reauthenticate with the Amazon server. It seems like it would be possible to create an accessor that would send data to Amazon and have the Alexa server reply. Asking Alexa questions by pressing the start button in the Java app does not require using the "Alexa" or "Amazon" keyword. It is not clear how one would get general speech recognition out of the system. I think we would have to define a service that would get the analysis and then look for keywords. There is probably a limit on how much data one can send, and we would need to partition the data into chunks by detecting silence. It seems to me that we would be better off pursuing whatever Long and Duc have been working on.

In May 2017, Beth wrote: Alexa could be another option for intent recognition if we want to require a wake-up word. There's a smartphone app for Alexa, a browser interface, and a RESTful API, so we don't necessarily have to use an Echo (Amazon account required).

Browser: https://echosim.io/welcome?next=%2F
Alexa Voice Service: https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/content/avs-api-overview

It is possible to write custom intents for Alexa (https://developer.amazon.com/alexa-skills-kit) and then register your devices to recognize these intents (https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/testing-an-alexa-skill#h2_register).

Hue

We need to be able to control the Hue from the Node Host, so we need the corresponding module. (A plain Node sketch that drives the bulb directly through the bridge's REST interface appears below, after the Deployment section.)

Deployment

From Cape Code, we can create a Node Host composite accessor and use the AccessorSSHCodeGenerator to copy the composite accessor over to the remote host. See Deployment.
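Until the proper Hue module is available on the Node Host, the flash can be prototyped with plain Node code against the Hue bridge's REST interface. In this sketch the bridge address (192.168.1.2), the API username (newdeveloper), and light number 1 are placeholders that depend on the local bridge setup; the rest follows the documented bridge API, where setting the "alert" state to "select" makes the bulb perform one breathe cycle.

// Prototype: flash a Hue bulb once via the bridge's REST API.
// The bridge IP, API username, and light number below are placeholders
// for whatever the local bridge is actually configured with.

var http = require('http');

var bridge = '192.168.1.2';      // address of the Hue bridge (assumption)
var username = 'newdeveloper';   // API username registered with the bridge (assumption)
var light = 1;                   // number of the light to flash (assumption)

function flashHue() {
    // "alert": "select" makes the bulb perform a single breathe cycle.
    var body = JSON.stringify({ alert: 'select' });
    var request = http.request({
        host: bridge,
        path: '/api/' + username + '/lights/' + light + '/state',
        method: 'PUT',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body)
        }
    }, function (response) {
        var reply = '';
        response.on('data', function (chunk) { reply += chunk; });
        response.on('end', function () { console.log('Hue bridge replied: ' + reply); });
    });
    request.on('error', function (err) { console.error('Hue request failed: ' + err.message); });
    request.write(body);
    request.end();
}

flashHue();

The same request could later be issued from an accessor using the host's http-client module, so the prototype maps directly onto the composite accessor that gets deployed to the SwarmBox.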
See Also