In digging through the SDK process, a few questions have popped up for me.
Has it been discussed how Jibo will know how to address each skill that is added? What I do not want is for us to have to invoke a skill by name before we can use it. Example: "Hey Jibo, open IFTTT and turn on the kitchen light." That would totally break the feeling of communicating with Jibo as if he were a part of the family. That said, there needs to be some place in a skill where we can declare something that Jibo can associate with that skill when he is asked to turn on a light.
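To make the idea concrete, here is a minimal sketch of what I'm imagining. This is NOT the real Jibo SDK API; the manifest shape, `intents` field, and `routeUtterance` function are all hypothetical names I made up for illustration. The point is that a skill could declare utterance patterns up front, and the platform could route speech to the right skill without the user naming it:

```javascript
// Hypothetical sketch only -- not the actual Jibo SDK.
// A skill declares intent patterns in its manifest so the platform
// can route "turn on the kitchen light" directly to it.
const skillManifest = {
  name: "ifttt-lights",
  intents: [
    // Pattern captures on/off and the light's name.
    { pattern: /turn (on|off) (the )?(.+) light/i, handler: "setLight" }
  ]
};

// A minimal router the platform might run across all installed skills:
// return the first skill whose declared intent matches the utterance.
function routeUtterance(utterance, skills) {
  for (const skill of skills) {
    for (const intent of skill.intents) {
      const match = utterance.match(intent.pattern);
      if (match) {
        return { skill: skill.name, handler: intent.handler, match };
      }
    }
  }
  return null; // no skill claimed this utterance
}

const result = routeUtterance("Hey Jibo, turn on the kitchen light", [skillManifest]);
console.log(result.skill, result.handler); // "ifttt-lights" "setLight"
```

With something like this, saying "turn on the kitchen light" would just work, and the skill name never has to be spoken.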
The idle behaviors that I’ve seen in the sample skills are really just there for our immediate viewing pleasure, right? I would think that behavior like that would be more of a core function in the final product.
When Jibo is delivered, will the SDK have access to Jibo’s full feature set? For example, would the simulator let us trigger rules that require Jibo’s online features? It would also be great if we could use the computer’s built-in mic.
I get the feeling that the only reason this isn’t currently allowed is that it would basically let a person build something, with Jibo as he is right now, that is just as good as or better than a certain online shopping company’s offering.