Body movement, gesture, and emotion recognition for control/navigation in skills


#1

Will a future API help us recognize body movements and gestures to help build interactive games, storytelling, book reading, etc.?

For control or navigation within skills, I’m interested in an API to recognize body movements, eventually gestures, and emotion. Right now, just some particular body movements… During the design webinar, I asked about skill design and best practices regarding ergonomic issues - the answer was very exciting.

I can imagine building a very crude API based on the LPS example to determine basic navigation in a skill by tracking the person (left, right, forward, back). I’m more interested in subtle movements, more like “leaning” in a particular direction: lean forward or backward, left or right. Well - Segway users might really enjoy this. The person might be sitting or standing, close to Jibo or further away: a child sits on the floor close to Jibo, an adult uses an exercise skill and stands further away…

I’m not too confident about spending a lot of time without having hardware.

Also, I don’t understand the basis of your tracking - tracking the entity - meaning you track a face, right?
If tracking a face or even a connected sensorized device, then perhaps more subtle body movements can be recognized based on the LPS example (?)
…a lean to the left, or forward or backward, versus
a big jump to the left as in my Rocky Horror skill :wink:
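
To make the idea concrete, here is a minimal sketch of what I have in mind, assuming some tracker (like the one in the LPS example) can report the tracked person’s position as {x, y, z} in meters. The interface, names, and thresholds below are my own assumptions, not anything from the SDK:

```typescript
// Hypothetical lean detection from tracked positions.
// Assumes x = left/right and z = toward/away from Jibo, in meters.

interface Position { x: number; y: number; z: number; }

type Lean = "left" | "right" | "forward" | "backward" | "none";

class LeanDetector {
  private baseline: Position | null = null;

  // Capture a neutral pose to compare later samples against.
  calibrate(pos: Position): void {
    this.baseline = { ...pos };
  }

  // Classify the current sample as a lean relative to the baseline.
  // threshold is how far (in meters) the person must shift to count.
  classify(pos: Position, threshold = 0.1): Lean {
    if (!this.baseline) return "none";
    const dx = pos.x - this.baseline.x;   // lateral shift
    const dz = pos.z - this.baseline.z;   // shift toward/away from Jibo
    if (Math.abs(dx) < threshold && Math.abs(dz) < threshold) return "none";
    if (Math.abs(dx) >= Math.abs(dz)) return dx < 0 ? "left" : "right";
    return dz < 0 ? "forward" : "backward";
  }
}
```

Whether the real tracking data is fine-grained and stable enough for something like this is exactly what I’m asking about.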

Simple examples:

  1. A child sitting on the floor, she leans forward to walk down the hallway in her story, and leans left to “enter” a room that Jibo describes or displays…
    Of course, she might use hand gestures too - but I’m looking for the basics for now.
  2. An adult engages in @michael 's exercise skill and doesn’t want to use voice commands, but rather body language (body movement, gestures) to move through the skill.
    A skill might then use the basic API to even help Jibo “count” how often a person performed an exercise, e.g. head/face shifted down, then back up to the starting position (sketched below).
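
For that exercise example, a rep counter could be a small state machine over the tracked head height. This is just a sketch under the assumption that a skill can sample that height; every name and threshold here is made up, not part of any announced API:

```typescript
// Hypothetical rep counting from head height (y, in meters).
// Counts one rep each time the head dips and returns near the start.

class RepCounter {
  private count = 0;
  private phase: "up" | "down" = "up";

  constructor(
    private readonly restHeight: number,   // y at the starting position
    private readonly dipDepth = 0.15       // how far y must drop to count
  ) {}

  // Feed each new head-height sample; returns the running rep count.
  update(y: number): number {
    if (this.phase === "up" && y < this.restHeight - this.dipDepth) {
      this.phase = "down";                 // person has gone down
    } else if (this.phase === "down" && y > this.restHeight - this.dipDepth / 2) {
      this.phase = "up";                   // back near the start position
      this.count += 1;                     // one full rep completed
    }
    return this.count;
  }
}
```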

Of course, there are lots of other gestures, you know… I’m curious what might be available in the short term.
I’m just focusing on the movements I mentioned, maybe derived using the API from the LPS example. Without hardware, though, I would not attempt to build anything serious.

Thanks,
Best, Bob


#2

Wonderful insights, Bob. I remember early on seeing a video of body movement tracking within Jibo on one of their first prototypes. I hope that made it into the final LPS as it would be very valuable information for skills to use.

You mentioned a prototype exercise skill I worked on as an example… I could see it going even further than just navigation. If the data were available, one could even check whether the user was performing the exercise correctly and then give feedback on the proper way to do so. More generally, something like this could be used to detect whether an older user has fallen and needs assistance, or whether a younger user is paying attention or off doing something else, and then act accordingly.
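
As a rough illustration of the fall idea, a skill might just watch for a sudden drop in the tracked head height. This is purely a hypothetical sketch; the sample format and thresholds are my assumptions, not anything announced for the platform:

```typescript
// Hypothetical fall check: flag a sharp drop in head height (y, meters)
// within roughly the last second of tracking samples.

interface HeightSample { t: number; y: number; }   // t in milliseconds

class FallDetector {
  private window: HeightSample[] = [];

  // Returns true if the head height dropped sharply within windowMs.
  update(sample: HeightSample, drop = 0.6, windowMs = 1000): boolean {
    this.window.push(sample);
    // Keep only recent samples.
    this.window = this.window.filter(s => sample.t - s.t <= windowMs);
    const maxRecent = Math.max(...this.window.map(s => s.y));
    return maxRecent - sample.y >= drop;
  }
}
```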

I would love to hear what will be available to us with regard to body movement tracking as well.


#3