I’m trying to figure out how to visually “sense” entities to support my skills’ group social features.
Looking at the lookAt and LPS related APIs and behaviours (I expect we’ll see lots more), I’m wondering whether it’s currently possible to collect visual entity identifiers and positions in a particular frame. I couldn’t tell from the docs whether LookAt can return visual entities…
I’ve experimented with behaviour scripts combining the LPS getClosestEntity and LookAt (continuous rotation). Eventually, though, I want to recognize entities in a frame as Jibo rotates: Jibo might do that before telling the story, to determine the group boundaries, and also while he tells a story, to handle skill-human interactions.
For a lookAt/LPS behaviour, I can figure out a starting position (x, y, z) for an entity. Of course, I want Jibo to start by looking at the speaker (does a speakerId map to a visual entityId?) and then rotate to discover nearby entities in each frame.
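To make the per-frame idea concrete, here’s roughly the accumulation logic I have in mind. The list of “frames” stands in for whatever API call would return the entities visible in the current frame as Jibo sweeps (I haven’t found such a call in the docs, which is essentially my question), so the data below is mocked:

```javascript
// Accumulate unique entities across frames captured during a sweep.
// Assumes each entity record has an id and an {x, y, z} position --
// the real LPS record shape may differ.
function accumulateEntities(frames) {
  const seen = new Map();
  for (const frame of frames) {
    for (const e of frame) {
      seen.set(e.id, e); // keep the latest observed position per id
    }
  }
  return Array.from(seen.values());
}

// Mock data: two frames from a sweep; entity 2 appears in both,
// so its later position wins.
const discovered = accumulateEntities([
  [{ id: 1, x: 1.0, y: -0.5, z: 0 }, { id: 2, x: 1.2, y: 0.0, z: 0 }],
  [{ id: 2, x: 1.2, y: 0.1, z: 0 }, { id: 3, x: 1.0, y: 0.8, z: 0 }]
]);
// discovered holds entities 1, 2, 3
```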
The other factor will be depth: for example, Jibo only looks for entities within a few metres (no storytelling shouting on his part, and no need to identify everyone in a big room).
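The depth filter itself is simple geometry once I can get positions. A minimal sketch, assuming each entity exposes a Jibo-centred {x, y, z} position in metres (an assumption about the LPS record, not something I found in the docs):

```javascript
// Keep only entities within maxMeters of Jibo (straight-line distance).
function entitiesWithinRange(entities, maxMeters) {
  return entities.filter(function (e) {
    const d = Math.sqrt(e.x * e.x + e.y * e.y + e.z * e.z);
    return d <= maxMeters;
  });
}

// e.g. keep only listeners within storytelling range (~2.5 m)
const nearby = entitiesWithinRange([
  { id: 1, x: 1.0, y: 0.5, z: 0.0 },  // kid on the rug
  { id: 2, x: 4.0, y: 1.0, z: 0.0 }   // someone across the room
], 2.5);
// nearby contains only entity 1
```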
Eventually I want to dynamically update the group boundaries to restrict Jibo’s rotations and to track people and their positions.
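For the boundaries, what I picture is deriving an angular range from the collected entity positions and clamping Jibo’s rotation to it. The coordinate convention here (bearing via atan2 in the x/y plane) is my assumption, not from the SDK docs:

```javascript
// Compute the min/max bearing of a group of entities, so a scan
// can be restricted to that angular range.
function groupAngularBounds(entities) {
  const bearings = entities.map(e => Math.atan2(e.y, e.x));
  return {
    min: Math.min.apply(null, bearings),
    max: Math.max.apply(null, bearings)
  };
}

// Two kids at either end of the couch:
const bounds = groupAngularBounds([
  { x: 1.0, y: -0.5 },
  { x: 1.0, y: 0.8 }
]);
// Jibo's rotation could then be limited to [bounds.min, bounds.max]
```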
Given all that, what methods can I use to collect entities within a frame? Maybe I missed something. And also, entities within a particular distance or depth from Jibo: perhaps I can look for entities further away by altering the z-axis. Perhaps there are other supporting behaviours and APIs in the works.
For example, there are a bunch of kids sitting on a long couch and also on the rug in front of Jibo, who is sitting on the coffee table. getClosestEntity doesn’t seem to address this.
Also, I mentioned visual entityId: I’m not sure how that id relates to speakerId; perhaps they’re the same? I assume we can relate the two if they refer to the same person, i.e. use speakerId to locate the person visually.
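If the two ids are not the same, what I imagine is matching the speaker to the visual entity whose bearing is closest to the sound-source direction. Everything here is hypothetical (the speaker angle input, the entity record shape), and I’m ignoring angle wrap-around for brevity:

```javascript
// Pick the visual entity whose bearing best matches the speaker's
// sound-source angle (radians). Returns null if no entities.
function matchSpeakerToEntity(speakerAngle, entities) {
  let best = null;
  let bestDiff = Infinity;
  for (const e of entities) {
    const diff = Math.abs(Math.atan2(e.y, e.x) - speakerAngle);
    if (diff < bestDiff) { bestDiff = diff; best = e; }
  }
  return best;
}

// Mock: speaker heard at ~0.5 rad; entity 'a' sits near that bearing.
const speaker = matchSpeakerToEntity(0.5, [
  { id: 'a', x: 1.0, y: 0.8 },
  { id: 'b', x: 1.0, y: -0.5 }
]);
// speaker is entity 'a'
```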
So, perhaps some code samples would help in a future release.