Documentation guides - SDK upcoming features/topics and background

#### 1. Describe Your Issue

It would be helpful to understand what will be possible with the SDK; that would alleviate some of the communication frustrations. I’m looking for a reality check, especially when designing skills.
We see hints of where you are going in parts of the API that are not yet officially released. Sometimes I gather information from a video… a bit of detective work is needed, but I lack a coherent picture, and that affects my skill conception and design.

Describing background and upcoming features is nicely done in the speech and design style guides. The discussion about speech technologies is also a gem hiding away in the discussion group; it helped me understand the background.

I’m uncertain about skill design without knowing what’s possible. I also ask for features that you will probably address anyway - but I don’t really know.
For example, I’m interested in matching my expectations with reality regarding entity recognition (the LPS API…) and locating persons to support human engagement in groups.
I also wonder whether skills can utilize core skills such as reminders and the Jot messaging skill I just heard about. I’m not seeing a coherent picture of how SDK skills can integrate with your core skills - or whether they can at all. Please let us know.

Lastly, the “new!” annotation in the dev guides is very helpful in the latest release,
especially since it is searchable from the site query box.

Thanks in advance for considering all this.
Best, Bob


Hi Bob,

Many of your questions about how skills interact with other skills (like our core skills) will be answered when we announce details of how transitions between third-party skills and core skills will take place. As soon as those details are announced, we will update this thread.

Regarding handling groups using the LPS APIs: we may add further APIs in the future (those would be announced in the release notes upon SDK release), but for the time being I would use the current LPS APIs to manage group interactions. APIs like getClosestAudibleEntity() will allow you to engage with the entity/speaker that Jibo is hearing.
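As a rough sketch of how a skill might pick out the speaker Jibo is currently hearing: the `lps` mock object, its `entities` list, and the `audible`/`distance` fields below are all illustrative assumptions, not the actual SDK API — only the `getClosestAudibleEntity()` name comes from the reply above. The idea is simply "filter to audible entities, then take the nearest."

```javascript
// Hypothetical stand-in for an LPS (Local Perceptual Space) service.
// The real SDK shape may differ; this just illustrates the selection logic.
const lps = {
  entities: [
    { id: 'person-1', audible: true,  distance: 2.4 },
    { id: 'person-2', audible: false, distance: 0.9 },
    { id: 'person-3', audible: true,  distance: 1.1 },
  ],

  // Return the nearest entity that Jibo can currently hear, or null.
  getClosestAudibleEntity() {
    const audible = this.entities.filter(e => e.audible);
    if (audible.length === 0) return null;
    return audible.reduce((a, b) => (a.distance <= b.distance ? a : b));
  },
};

const speaker = lps.getClosestAudibleEntity();
console.log(speaker.id); // 'person-3' (closest of the audible entities)
```

Note that person-2 is nearer overall but not audible, so it is skipped — engaging with the closest *audible* entity is what keeps Jibo oriented toward whoever is actually speaking in a group.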

I would also recommend taking a look at the new audio tracking feature added to the simulator in the latest release, if you have not already. It will let you simulate how Jibo reacts to a source of noise.

I hope this is helpful and, as always, we will continue to update the developer community as further details are available.



Thanks John for the update - I’ll wait for more info about “transitions”.

I wasn’t sure how the speaker ID itself could be related to the audible or visual entity; however, I expect future APIs or docs will provide more info. That would be a good topic for the guides when you are ready.

I’m happy enough that I can add additional BTs later to extend these particular human interactions.

I’ve tested the audio tracking in the simulator - thank you.