How much knowledge of the 3D space around Jibo will he be aware of, and what connections are planned in the SDK for our skills to access that information? For example: Jibo detects he is being moved and asks, “Where are we now?” when set upright. If the user answers, “You’re in the kitchen, Jibo,” it would be most helpful if our skills then had access to the fact that Jibo knows he is in the kitchen.
We aren’t planning to save or expose in-home location in the knowledge base at this time, but we do plan to give each skill a sandboxed knowledge base to persist information across sessions. So in your example, you can have your skill ask what room Jibo is in, and you can store the user’s answer for future use by your skill alone. We will have APIs for accessing your skill’s sandbox knowledge base included in a future version of the SDK.
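To make the idea concrete, here is a minimal sketch of what a per-skill sandbox knowledge base could look like. This is purely illustrative: the class name, methods, and serialization approach are all assumptions, since the actual SDK API has not been published yet.

```typescript
// Hypothetical stand-in for a per-skill sandbox knowledge base.
// All names (SkillKnowledgeBase, set/get/serialize) are assumptions;
// the real Jibo SDK API for this has not been released.
class SkillKnowledgeBase {
  private store: Record<string, string> = {};

  // Store a fact under a key, e.g. the room the user told Jibo about.
  set(key: string, value: string): void {
    this.store[key] = value;
  }

  // Retrieve a previously stored fact, or undefined if unknown.
  get(key: string): string | undefined {
    return this.store[key];
  }

  // Persisting across sessions could amount to serializing the store.
  serialize(): string {
    return JSON.stringify(this.store);
  }

  static deserialize(json: string): SkillKnowledgeBase {
    const kb = new SkillKnowledgeBase();
    kb.store = JSON.parse(json);
    return kb;
  }
}

// Example: remember which room Jibo is in, within this skill's sandbox.
const kb = new SkillKnowledgeBase();
kb.set("currentRoom", "kitchen");
const saved = kb.serialize();

// A later activation of the same skill restores the answer.
const restored = SkillKnowledgeBase.deserialize(saved);
console.log(restored.get("currentRoom")); // "kitchen"
```

The point of the sandbox model is that `saved` would live in storage visible only to your skill, so another skill could not read or overwrite the room the user told you about.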
Hey @joe.t !
I think you gave me the answer I was looking for. It doesn’t matter whether Jibo makes any of this information available to us, since you’re going to allow us to gather and save it ourselves across separate activations of our skill, correct? How much memory can a skill take up?
Another question @joe.t,
Along the lines of my question above, my use of Jibo will be a single-use scenario in which my skill acts as the master skill: Jibo would return to it after running other skills so it could record context information about individual users and their experiences. Will you allow this, and is there a way to do it for a special-function Jibo?
We are still defining how much memory each skill will be allotted. We will not allow another skill to become a master skill and replace Jibo’s operating system. It may be possible to have a skill with a main.bt loop menu and multiple subtrees that return to your skill’s main.bt after completing each subtree’s respective function, in order to record the context information. We are still defining our QA guidelines for Skill Store submissions, though, and there may be limits placed on how long skills will be able to remain in a looped idle state before exiting back out to Jibo’s operating system. We expect to have more detailed information available when we formally release our Skill Store QA guidelines.
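The loop-menu pattern described above can be sketched in plain code. This is only an illustration of the control flow: real Jibo skills are authored as behavior trees in main.bt, not as a TypeScript loop, and the names below (Subtree, runMainLoop, the example subtrees) are invented for this sketch.

```typescript
// Simplified stand-in for a main.bt loop menu: each "subtree" runs,
// and control returns to the main loop, which records context.
// Illustrative only; real skills use behavior trees, not this code.
type Context = Record<string, string>;
type Subtree = (ctx: Context) => void;

// Two example subtrees the menu can dispatch to (names are made up).
const subtrees: Record<string, Subtree> = {
  greet: (ctx) => { ctx.lastAction = "greeted user"; },
  reminder: (ctx) => { ctx.lastAction = "gave reminder"; },
};

// The "main.bt loop": run each selected subtree, then record context
// about it when control comes back, before offering the menu again.
function runMainLoop(selections: string[]): Context {
  const context: Context = {};
  for (const choice of selections) {
    const subtree = subtrees[choice];
    if (subtree) {
      subtree(context);             // descend into the subtree
      context.lastSubtree = choice; // record context on return
    }
  }
  return context;
}

const ctx = runMainLoop(["greet", "reminder"]);
console.log(ctx.lastSubtree); // "reminder"
```

The QA concern mentioned above would map to this sketch as a cap on how long `runMainLoop` may keep cycling while idle before the skill must exit back to the operating system.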
My suggestion is that you let skill developers apply for approval to use the hardware in single-use deployments, such as in elder care, where very specific events are required and one skill manages other skills, as an owner’s option to use Jibo this way. It would make integration and training for the elderly easier, with control over the entire function of the device, like my master skill monitoring Jibo usage and user reactions and responses to the events. It’s my opinion that you’ll sell more Jibos as elder companion friends than you will to families with children, since the home-bound elderly outnumber families with children.
The equivalent scenario is how iPhones have business apps that aren’t distributed through the store.
I want that option.
@alfarmer I understand you would like to see more ways to deploy and launch skills on Jibo hardware without certain restrictions in order to create the experiences you are looking to provide for certain markets, such as elder care. I will absolutely let the team know this is something you would find useful.
Exactly @joe.t: a working Jibo with dedicated functions, used exclusively for a set of tasks, such as in an eldercare facility, where the facility could use a corporate-level skill deployed only on the Jibos purchased for the project and not designed for the store or general use. Skills in this category could be funded by outside investment or corporately. These skills would be built from concepts that can move business cost drivers, where I believe funding for this type of skill development is more readily available than in the general market, at least until the number of units sold reaches a threshold where cost drivers become commensurate with market size.
Hi @alfarmer, thank you for further explaining how this will be useful for you; I have included it with your original feature request.
Dedicated function sets could correspond nicely with contexts… so how can users start defining contexts?