Getting visual entities

I’m trying to figure out how to visually “sense” entities to support my skills’ group social features.

Looking at the lookAt and lps related APIs and behaviours, I know we’ll see lots more, but I’m wondering if it is possible now to collect visual entity identifiers and positions in a particular frame. I couldn’t figure out from the docs whether LookAt can return visual entities…

I’ve experimented with behaviour scripts combining the lps getClosestEntity and lookAt (continuous rotation) behaviours, but eventually I want to recognize entities in a frame as Jibo rotates. Jibo might do that before telling the story to determine the group boundaries, and also while he tells a story to handle skill-human interactions.

For a lookAt/lps behaviour, I can figure out a starting position (x, y, z) of an entity. Of course, I want Jibo to start by looking at the speaker (speakerId to visual entityId?) and then rotate to discover nearby entities in each frame.
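To make the idea concrete, here’s roughly what I’m picturing as a TypeScript sketch. None of these names (VisualEntity, lookAt, getEntitiesInView) are real SDK calls; they’re stand-ins for whatever the real API turns out to be:

```ts
// Sketch only: VisualEntity, lookAt, and getEntitiesInView are stand-ins I
// made up, not real SDK calls.
interface VisualEntity {
  entityId: number;
  position: { x: number; y: number; z: number }; // meters, Jibo's base frame
}

async function scanForGroup(
  lookAt: (angleRad: number) => Promise<void>, // turn to a bearing
  getEntitiesInView: () => VisualEntity[]      // entities in the current frame
): Promise<Map<number, VisualEntity>> {
  const seen = new Map<number, VisualEntity>();
  // Sweep, say, -90°..+90° in 15° steps, collecting whatever is in frame.
  for (let deg = -90; deg <= 90; deg += 15) {
    await lookAt((deg * Math.PI) / 180);
    for (const e of getEntitiesInView()) {
      seen.set(e.entityId, e); // latest observed position wins
    }
  }
  return seen;
}
```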

The other factor will be depth. For example, Jibo should just look for entities within a few meters of himself (no storytelling shouting :wink:). There’s no need to identify everyone in a big room.
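I imagine the depth cut-off as a simple filter, again just a sketch, assuming positions come back in meters relative to Jibo:

```ts
// Sketch: keep only entities within a storytelling radius, assuming
// positions are in meters relative to Jibo.
const MAX_RANGE_M = 3; // "a few meters"

function withinRange(entities: VisualEntity[]): VisualEntity[] {
  return entities.filter(
    (e) => Math.hypot(e.position.x, e.position.y, e.position.z) <= MAX_RANGE_M
  );
}
```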

Eventually I want to dynamically update the group boundaries to restrict Jibo’s rotations and to track people and their positions.
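For the boundaries themselves, I’d guess something like computing the min/max bearing of the tracked entities would do, e.g. (sketch, same assumed VisualEntity shape as above):

```ts
// Sketch: clamp Jibo's rotation to the angular span of the tracked group,
// assuming +x is forward and +y is to Jibo's left.
function groupBounds(
  entities: VisualEntity[]
): { minRad: number; maxRad: number } | null {
  if (entities.length === 0) return null;
  const bearings = entities.map((e) => Math.atan2(e.position.y, e.position.x));
  const pad = (5 * Math.PI) / 180; // small margin either side
  return {
    minRad: Math.min(...bearings) - pad,
    maxRad: Math.max(...bearings) + pad,
  };
}
```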

Given all that, what methods can I use to collect entities within a frame? Maybe I missed something. Also, how can I collect entities within a particular distance or depth from Jibo? Perhaps I could look for entities further away by altering the z-axis. Or perhaps there are other supporting behaviours and APIs in the works :wink:

For example, there are a bunch of kids sitting on a long couch and also on the rug in front of Jibo, who is sitting on the coffee table. Jibo’s getClosestEntity doesn’t seem to address this.

Also, I mentioned the visual entityId. I’m not sure how that Id relates to the speakerId; perhaps they’re the same? I assume we can relate the two if they refer to the same person. In other words, use the speakerId to locate the person visually.
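If the speakerId only arrives with a sound-source direction (just my assumption; I don’t know what the API actually exposes), I imagine matching it to the visual entity with the nearest bearing, something like:

```ts
// Sketch: pick the visual entity whose bearing best matches the sound-source
// direction for a speakerId (assuming we get such a direction at all).
function matchSpeakerToEntity(
  speakerBearingRad: number,
  entities: VisualEntity[]
): VisualEntity | null {
  let best: VisualEntity | null = null;
  let bestDiff = Infinity;
  for (const e of entities) {
    const raw = Math.atan2(e.position.y, e.position.x) - speakerBearingRad;
    const diff = Math.abs(Math.atan2(Math.sin(raw), Math.cos(raw))); // wrap to [-pi, pi]
    if (diff < bestDiff) {
      bestDiff = diff;
      best = e;
    }
  }
  return best;
}
```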

So, perhaps some code samples would help in a future release.
best, Bob


Also, I mentioned in the feature category that we could inject a map of test entities and their locations into the behaviour tree, rather than adding targets in the simulator each time. Then in the simulator, we could enter a test code to activate these test entities.
That way, we can develop and test group social interactions supporting single-user or multi-user skills.
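For example (entirely hypothetical, reusing the VisualEntity shape from my sketch above):

```ts
// Entirely hypothetical: a plain data structure the behaviour tree could
// read in the simulator instead of hand-placing targets each run.
const testEntities: VisualEntity[] = [
  { entityId: 1, position: { x: 1.5, y: 0.4, z: 0.0 } },  // kid on the couch
  { entityId: 2, position: { x: 1.5, y: -0.2, z: 0.0 } }, // kid on the couch
  { entityId: 3, position: { x: 0.9, y: 0.0, z: -0.3 } }, // kid on the rug
];
```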
Any code samples would be appreciated!
best, Bob