SDK Behaviors: higher-order building blocks and a generator


I know the SDK is a work in progress. I also know there are better and grander versions of the SDK on the drawing board.

I am new to Jibo development but found the overall experience pleasant and understandable. Having used the SDK and the Basic Skills Template (and main.rule) to craft my first skill, I found the process pretty easy. Thanks SDK team - well done!

My resulting skill is a rat’s nest of sequences, and that is for only a handful of actions. For each sequence I added an animation, a TTS response, a sound, and an ExecuteScript at the bottom - I did this over and over again to create variety and interest.

However, it could be much easier. The power of an expressive interface such as the Jibo robot, I feel, comes from a variety and abundance of interesting responses - even amusing and surprising responses - that make one smile and take note. What pains me is seeing repetitive, robotic responses that are always the same. A developer has to work pretty hard to add variety. Random TTS responses, random movements, animations, etc. are tedious to code up. Just the simple changes I made to the Basic Skills Template, adding a couple more behaviors, took me some time.

What I would like to see is a way of encapsulating behaviors into higher-level behaviors with randomness and variability built in. For example, I came up with about 50 different ways for Jibo to say “correct” in response to my answer. To code this up nicely I would have to create a collection of body animations to correspond to each response and display an appropriate graphic for each one.

It would be nice if I could create a behavior function that takes as arguments a collection of animations to randomize over and a collection of graphics to randomize over, and generates the variety needed for the 50 behaviors. The behavior function could be movement only, movement and speech, movement and graphic, or all three together. So 10 movements, 10 graphics, and 10 text-to-speech responses randomly assembled and shuffled would give 1000 different response types. Of course we would have to specify which combinations were valid and which were not. Or in fact they could be paired: exploding graphic -> bang.mp3 and champagne bottle opened -> pop.mp3.
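To make the idea concrete, here is a minimal sketch of such a generator in TypeScript. All of the names (`ResponseParts`, `generateResponse`, the asset file names) are hypothetical and for illustration only - none of this is the real Jibo SDK API. It just shows how pools of assets plus an optional graphic-to-sound pairing could yield the combinatorial variety described above:

```typescript
// Hypothetical behavior generator - illustrative only, not the Jibo SDK API.

interface ResponseParts {
  animations: string[];           // pool of body animation assets
  graphics: string[];             // pool of screen graphics
  phrases: string[];              // pool of TTS strings
  sounds?: Map<string, string>;   // optional pairing: graphic -> sound effect
}

// Pick one item uniformly at random from a non-empty pool.
function pick<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

// Assemble one randomized response. With 10 items in each pool this
// yields 10 * 10 * 10 = 1000 possible combinations; the sounds map
// enforces valid pairings (e.g. the champagne graphic always gets pop.mp3).
function generateResponse(parts: ResponseParts) {
  const animation = pick(parts.animations);
  const graphic = pick(parts.graphics);
  const phrase = pick(parts.phrases);
  const sound = parts.sounds?.get(graphic);
  return { animation, graphic, phrase, sound };
}

// Example pool for the "correct answer" behavior.
const correct: ResponseParts = {
  animations: ["nod.anim", "spin.anim", "bounce.anim"],
  graphics: ["champagne.png", "explosion.png", "star.png"],
  phrases: ["Correct!", "You got it!", "Nailed it!"],
  sounds: new Map([
    ["champagne.png", "pop.mp3"],
    ["explosion.png", "bang.mp3"],
  ]),
};

const response = generateResponse(correct);
```

A real version would hand the chosen assets to the playback layer rather than returning them, but the shape is the same: the skill author supplies pools and pairing constraints once, and the generator produces the variety.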

Maybe this is in the works or maybe this has already been considered. There has to be a good way for skill developers to make Jibo’s behavior less robotic and more varied. What I am suggesting is some type of behavior automation/generator that allows one to create skills with variety and ease. Then, instead of crafting these behaviors by hand, we would simply generate them from a given set of parameters.

I apologize if this subject has already been discussed. I have not gotten around to reading every forum post - yet.


@jsimone Thank you for taking the time to explore the SDK and collect your thoughts on how best to create Jibo skills. We agree that variety in responses really does help create a unique and expressive experience on Jibo. The next update to the SDK will include a number of tools to help you easily create collections of responses for use in your skills.

  • MIMs will allow you to build a collection of prompts, responses, and error handling within one interaction point in your skill. You can specify conditions for when prompts can be played and a random prompt will be selected from the available prompts.

  • Character AI / Embodied Speech can automatically turn MIM text prompts into compelling performances by dynamically creating body and screen animations that match the timing and meaning of the text. You will also be able to create your own custom performances by using Embodied Speech Markup Language tags within your MIM text prompts.

  • A Flow Editor will let you handle branching logic and conditions visually within your skills.

Be sure to read through our blog posts on these upcoming features to learn more.