Will a future API help us recognize body movements / gestures, to help build interactive games, storytelling, book reading, etc.?
For controlling or navigating within skills, I’m interested in an API that recognizes body movements, and eventually gestures and emotion. Right now, just some particular body movements. During the design webinar, I asked about skill design and best practices regarding ergonomic issues, and the answer was very exciting.
I can imagine building a very crude API based on the LPS example to determine basic navigation in a skill by tracking the person (left, right, forward, back). I’m more interested in subtle movements, more like “leaning” in a particular direction: forward or backward, left or right. (Segway users might really enjoy this.) The person might be sitting or standing, close to Jibo or farther away: a child sits on the floor close to Jibo, while an adult using an exercise skill stands farther back.
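To make the idea concrete, here is a minimal sketch of what a “lean” classifier on top of such tracking could look like. Everything here is hypothetical: I’m assuming the tracker can report a face position as lateral/depth offsets (in meters) from a calibrated neutral pose; the function name, threshold, and axis conventions are made up for illustration, not part of any actual Jibo API.

```python
def classify_lean(x, z, threshold=0.10):
    """Classify a lean from a tracked face offset relative to neutral.

    x: lateral offset in meters (positive = person's right)
    z: depth offset in meters (positive = toward Jibo)
    Returns 'left', 'right', 'forward', 'backward', or 'neutral'.
    """
    # Small offsets in both axes count as no lean at all.
    if abs(x) < threshold and abs(z) < threshold:
        return "neutral"
    # Otherwise report whichever axis moved more.
    if abs(x) >= abs(z):
        return "right" if x > 0 else "left"
    return "forward" if z > 0 else "backward"
```

A skill could then map “forward” to walking down the hallway and “left” to entering a room, and tune the threshold per person (a child on the floor would move less than a standing adult).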
I’m not too confident about spending a lot of time without having hardware.
Also, I don’t understand the basis of your tracking. Tracking the entity means you track a face, right?
If you’re tracking a face, or even a connected, sensorized device, then perhaps more subtle body movements could be recognized based on the LPS example?
…a lean to the left, or a lean forward or backward, versus a big jump to the left as in my Rocky Horror skill.
- A child sitting on the floor leans forward to walk down the hallway in her story, and leans left to “enter” a room that Jibo describes or displays…
Of course, she might use hand gestures too, but I’m looking for the basics for now.
- An adult engages in @michael 's exercise skill and doesn’t want to use voice commands, but rather body language (body movement, gestures) to move through the skill.
A skill might then use the basic API to help Jibo “count” how often a person performed an exercise, e.g. head/face shifted down, then back up to the starting position.
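The counting idea above could be sketched as a tiny state machine: count one repetition each time the tracked head dips below a “down” threshold and comes back above an “up” threshold. The two-threshold hysteresis avoids double-counting small jitter. Again, this is purely illustrative; the class name, units, and threshold values are assumptions, and a real skill would calibrate them per person.

```python
class RepCounter:
    """Count exercise reps from a stream of head-height samples."""

    def __init__(self, down_thresh=-0.15, up_thresh=-0.05):
        self.down_thresh = down_thresh  # offset (m) that counts as "down"
        self.up_thresh = up_thresh      # offset (m) that counts as "back up"
        self.state = "up"
        self.count = 0

    def update(self, head_y_offset):
        """Feed one head-height sample (m, relative to the starting pose).

        Returns the running rep count.
        """
        if self.state == "up" and head_y_offset <= self.down_thresh:
            self.state = "down"
        elif self.state == "down" and head_y_offset >= self.up_thresh:
            # Head returned to (near) the starting position: one full rep.
            self.state = "up"
            self.count += 1
        return self.count
```

Feeding it samples as the person moves down and back up twice would yield a count of two, which Jibo could announce or use to pace the skill.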
Of course, there are lots of other gestures, as you know… I’m curious what might be available in the short term.
I’m just focusing on the movements I mentioned, maybe derived using the API from the LPS example. Without hardware, though, I wouldn’t attempt to build anything serious.