MachineLearning

Summary of how machine learning worked in the Wait, What demo:

I believe we used a microphone accessor in Ptolemy to capture the audio and compute a stream of feature vectors. The feature vector stream was passed to a websocket accessor that sent the input data to gmtkOnline and produced a stream of classifications (applause/no applause) as output. I think the shell accessor or a TCP/IP accessor (say, running gmtkOnline via inetd) could be used instead of websockets. A rough sketch of the websocket leg of this pipeline appears below.
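
To illustrate the shape of that pipeline (this is a sketch, not the demo's actual code), the websocket leg might look something like the following TypeScript, using the Node ws package. The URL, port, and message formats here are assumptions; the real accessor would need to match whatever framing gmtkOnline actually expects.

    import WebSocket from "ws";

    // Assumed address of a gmtkOnline websocket bridge -- not the demo's real endpoint.
    const ws = new WebSocket("ws://localhost:8080");

    ws.on("open", () => {
      // In the demo the feature vectors came from the microphone accessor;
      // here we send a dummy vector purely for illustration.
      const featureVector = [0.12, -0.45, 0.88, 0.03];
      ws.send(JSON.stringify(featureVector));
    });

    ws.on("message", (data) => {
      // The classification stream is assumed to be one label per frame,
      // e.g. "applause" or "no applause".
      console.log("classification:", data.toString());
    });
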
I also worked on a prototype of a simple GMTK activity recognition model using Roozbeh's sensors at the accessor workshop, which used a similar architecture, though I don't think we were able to get all of the accessors working properly during the workshop. Roozbeh has since engineered some new, more robust features that would be good to incorporate into any future activity recognition models.
In both cases, model training was done off-line with labelled training data. The labelling could make it a bit more complicated to use the same data acquisition accessor(s) for training and for applying the model, though you'll certainly want to ensure your feature vectors are produced the same way for training and decoding; one way to enforce that is sketched below.
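
One simple way to keep train-time and decode-time features in sync is to route both paths through a single extraction function. The sketch below is hypothetical (the windowing and log-energy feature are placeholders, not the demo's actual feature computation), but it shows the pattern: the offline path attaches labels, the online path doesn't, and neither can drift from the other.

    // Single source of truth for feature extraction, shared by both paths.
    function extractFeatures(frame: Float32Array): number[] {
      // Placeholder feature: log energy of the frame.
      let energy = 0;
      for (const s of frame) energy += s * s;
      return [Math.log(energy / frame.length + 1e-10)];
    }

    // Offline path: pair each frame's features with its human-supplied
    // label and emit a row for GMTK training.
    function makeTrainingRow(frame: Float32Array, label: string): string {
      return [...extractFeatures(frame), label].join(" ");
    }

    // Online path: the same extractor feeds the decoder, so training and
    // decoding features are guaranteed to match.
    function makeDecodeRow(frame: Float32Array): string {
      return extractFeatures(frame).join(" ");
    }
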
