A few questions about basic Jibo functionality

In digging through the SDK, a few questions have popped up for me.

  1. Has it been discussed how Jibo will know how to address each skill that is added? What I do not want to happen is that we need to initiate a skill before we can use it. Example: Hey Jibo, open IFTTT and turn on the kitchen light. That would totally break the feeling of communicating with Jibo like he is a part of the family. That said, there needs to be some place in a skill where we put something that Jibo can associate with that skill when he is asked to turn on a light.

  2. The idle behaviors that I’ve seen in sample skills are really just there for our immediate viewing pleasure, right? I would think that this behavior would be more of a core function in the final product.

  3. When Jibo is delivered, will the SDK have access to Jibo’s full feature set? For example, will the simulator let you trigger rules that require Jibo’s online features? It would also be great if we could use the computer’s built-in mic.
    I get the feeling that the only reason this isn’t currently allowed is that it would basically let a person create something that, as Jibo is right now, would be just as good or better than a certain online shopping company’s offering. :wink:


This was discussed in this Topic

There is a missing layer of abstraction over the entire Jibo interface between Jibo and Jibo Skills.

Words like “Turn On” and every other top-level command will need to be routed to the sub-skills that are registered for those assigned words. However, this has not yet been released, nor has it been described how it will work. I too am very concerned about a command structure like, “Jibo, go to this system and start these commands,” rather than “Jibo, do this,” where Jibo understands what ‘this’ refers to and so knows which skill to launch. Yet another consideration is skill interruption from Jibo’s main OS and returning afterward, if at all. None of this has been decided yet. :frowning:

I still believe the best way that the Jibo team can handle this is for skills to register a .rule file as a starting point for the skill. That way a developer can say that these particular 5 phrases, for example, invoke the skill…the registered rule file could also pass in variables (ex: $word{action="doThis"}) as usual to allow the skill to do different things based on how it was invoked.

In the case of conflicting/overlapping rules, Jibo can intervene and say “Do you want to use my XYZ or ABC skill to do this?”. I’m sure there are more complicated scenarios to think about, but I do believe this is likely the best way to handle skills on the highest level.
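The registration-plus-disambiguation idea above could be sketched roughly like this. To be clear, this is a hypothetical illustration, not a real Jibo SDK API: `SkillRegistry`, `register`, and `resolve` are invented names, and the skill names are made up. The only piece borrowed from the SDK is the spirit of the `$word{action="doThis"}` variable passing, here shown as a plain `args` object.

```javascript
// Hypothetical sketch: a central registry where each skill declares the
// phrases that should invoke it, mirroring the proposed .rule registration.
// None of these class or method names are real Jibo SDK APIs.
class SkillRegistry {
  constructor() {
    this.entries = []; // each entry: { phrase, skill, args }
  }

  // A skill registers the phrases that invoke it, optionally with
  // variables to pass in (standing in for $word{action="doThis"}).
  register(skill, phrases, args = {}) {
    for (const phrase of phrases) {
      this.entries.push({ phrase: phrase.toLowerCase(), skill, args });
    }
  }

  // Return every skill whose registered phrase matches the utterance.
  // More than one match is a conflict: Jibo should ask the user which
  // skill to use rather than guessing.
  resolve(utterance) {
    const text = utterance.toLowerCase();
    return this.entries.filter(e => text.includes(e.phrase));
  }
}

const registry = new SkillRegistry();
registry.register('ifttt-lights', ['turn on the kitchen light'], { action: 'doThis' });
registry.register('smart-home', ['turn on the kitchen light']);

const matches = registry.resolve('Hey Jibo, turn on the kitchen light');
if (matches.length > 1) {
  console.log(`Do you want to use my ${matches[0].skill} or ${matches[1].skill} skill to do this?`);
}
```

With two skills claiming the same phrase, `resolve` returns both, and the conflict prompt fires exactly as described in the post above.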

What do you think? Any reason this couldn’t work?


Using alfarmer’s example from the topic he referenced:

Example skill one:
Jibo, play my music presentation.
Here, play is used with the presentation title, “my music presentation”.

Example skill two:
Jibo, play my music

I think that the only way you’re going to really maintain the illusion of actual conversation would be to create nested qualifiers. (Made that term up, so I will explain.)

PLAY triggers a list of items that play is associated with.


  • Play my presentation,
  • Play music from my catalog, or
  • Play tic-tac-toe

This secondary list would have something like:

Presentation = go to presentation skill
Tic Tac Toe = go to tic-tac-toe skill

Music = sub qualifier ->
Pandora = go to pandora skill
My Catalog = open local music skill

Hopefully that makes some sense. I’m a little worried though about how that would scale. Contextual AI is going to be really important for this notion of social robots to really work.

My idea is kind of a hybrid of both of your comments.
I also noted in my messing about (can’t find it now) that there seemed to be some place where Jibo ranked how confident it was about what was said. This ranking (a percentage of certainty) could help Jibo know when it’s time to ask for clarification.
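That confidence-ranking idea might be used roughly like this. Everything here is an assumption for illustration: the 0.7 threshold, the shape of the result object, and the `decideAction` helper are all invented, and the SDK’s actual recognition output may look nothing like this.

```javascript
// Sketch: use a speech-recognition confidence score to decide between
// launching a skill and asking the user for clarification.
// The threshold and the result shape are invented for this example.
const CONFIDENCE_THRESHOLD = 0.7;

function decideAction(result) {
  // result: { skill: string, confidence: number in [0, 1] }
  if (result.confidence >= CONFIDENCE_THRESHOLD) {
    return { type: 'launch', skill: result.skill };
  }
  // Low certainty: confirm with the user instead of guessing.
  return { type: 'clarify', prompt: `Did you want me to use my ${result.skill} skill?` };
}

console.log(decideAction({ skill: 'pandora', confidence: 0.92 }).type); // → 'launch'
console.log(decideAction({ skill: 'pandora', confidence: 0.41 }).type); // → 'clarify'
```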
