Test scripts to add target entities (entityId and location)

In the LPS section of the simulator, developers can add and position targets to simulate entity/group interactions.

I’m interested in running test-environment scripts from within a behaviour to position entities (persons) and simulate single- or multi-user human engagements in the simulator.

Depending on the strategy, the skill might just need to determine the group’s boundaries so that Jibo can rotate within them and engage his audience in a human-like way. Perhaps Jibo is surrounded by people, or he sits on the coffee table and should rotate only enough to stay within the limits of the group, looking at each person briefly (social engagement); perhaps he even asks a child a question or includes the child’s name in the story.
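To make the boundary idea concrete, here is a rough sketch of the computation I have in mind. The `{id, x, y}` entity shape and the helper names are my own placeholders (Jibo at the origin, bearings in radians), not the actual SDK types:

```javascript
// Sketch only: entity positions are assumed to be {id, x, y} points in
// Jibo's base frame (Jibo at the origin); the real LPS types may differ.

// Bearing of an entity relative to Jibo's forward (+x) axis, in radians.
function bearing(entity) {
    return Math.atan2(entity.y, entity.x);
}

// Angular boundaries of the group, so a skill can clamp its look-at
// rotation to stay within [min, max] while scanning the audience.
function groupBounds(entities) {
    const angles = entities.map(bearing);
    return { min: Math.min(...angles), max: Math.max(...angles) };
}

// Clamp a desired orientation into the group's angular span.
function clampToGroup(angle, bounds) {
    return Math.min(bounds.max, Math.max(bounds.min, angle));
}
```

With something like this, the skill could sweep the audience without ever rotating past the outermost listener.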

While the storytelling skill is running, people change positions, so the test mode should be able to update entity positions or introduce “newcomer” entities who want to listen to the story.
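The kind of test script I imagine is a declarative timeline of LPS events played against the skill. This is a sketch against a stand-in `lps` object; the `addEntity`/`moveEntity` names are my assumptions for illustration, not the SDK’s real hooks:

```javascript
// In-memory stand-in for the simulator's LPS, just for the sketch.
function makeFakeLps() {
    const entities = new Map();
    return {
        entities,
        addEntity(id, pos) { entities.set(id, pos); },
        moveEntity(id, pos) { if (entities.has(id)) entities.set(id, pos); }
    };
}

// Each event: {at: seconds, op: 'add'|'move', id, pos}.
const timeline = [
    { at: 0, op: 'add',  id: 'person-1', pos: { x: 1.0, y: 0.5 } },
    { at: 0, op: 'add',  id: 'person-2', pos: { x: 1.0, y: -0.5 } },
    { at: 5, op: 'move', id: 'person-1', pos: { x: 0.5, y: 1.0 } },  // listener shifts seats
    { at: 8, op: 'add',  id: 'newcomer', pos: { x: -1.0, y: 0.0 } }  // joins behind Jibo
];

// Apply every event scheduled at or before time t, in order.
function playUntil(lps, events, t) {
    events.filter(e => e.at <= t)
          .forEach(e => e.op === 'add' ? lps.addEntity(e.id, e.pos)
                                       : lps.moveEntity(e.id, e.pos));
}
```

Running the same timeline twice would give the reproducible single/multi-user test cases I mention below.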

Eventually, when we can look up entity contact info, we can hopefully inject test persons to match these entities. Not all entities need a name, since we also want to test situations where Jibo does not yet know someone’s name (meaning Jibo has identified a face in the group and assigned an entityId, but no name info is known yet). In that case we might call another “meet and greet” behaviour from the core toolkit.
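In script form, that would mean injected entities where the name field is optional, so a skill can exercise the “face seen, name unknown” path. Field names here are illustrative, not the SDK’s actual entity schema:

```javascript
// Hypothetical injected test entities: some have names, some don't.
const testEntities = [
    { entityId: 1, pos: { x: 1.0, y: 0.3 },  name: 'Alice' },
    { entityId: 2, pos: { x: 1.0, y: -0.3 }, name: null }  // identified face, no name yet
];

// Decide which social behaviour to run for an entity: engage those we
// know, and run a meet-and-greet for identified-but-unnamed faces.
function behaviourFor(entity) {
    return entity.name ? 'engage' : 'meet-and-greet';
}
```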

All in all, we want to test and develop group-related behaviours and position entities using scripts instead of adding targets through the simulator’s LPS.

Can someone provide some sample scripts or add samples to the sample-code?
Thanks in advance for any input!
Best, Bob

Hi Bob,

Thank you for this input!

So that I can make sure we communicate this to the team accurately: it sounds like you are looking for a way, either via code or otherwise, to simulate multiple moving targets in the simulator.

Right now, as you likely know, you can add many targets into the simulator and move them around individually as in the below .gif:

It sounds like what you would like is for the simulator to move those targets around Jibo more dynamically. For example, if you were building a skill a bit like musical chairs, where multiple targets move around Jibo at the same time, you would want to automate or script that movement rather than use the manual Add Target step followed by shift+alt+drag to move each target.
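To illustrate what scripting that movement might look like, here is a rough sketch: N targets orbit Jibo at a fixed radius, offset evenly around the circle. These are plain `{x, y}` points with Jibo at the origin, nothing here is actual simulator API:

```javascript
// Positions of `count` targets orbiting Jibo at time t (seconds),
// radius in meters, angularSpeed in radians per second.
function orbitPositions(count, radius, t, angularSpeed) {
    return Array.from({ length: count }, (_, i) => {
        const angle = angularSpeed * t + (2 * Math.PI * i) / count;
        return { x: radius * Math.cos(angle), y: radius * Math.sin(angle) };
    });
}
```

Feeding positions like these into the simulator on a timer would replace the manual drag step for a "musical chairs" style test.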

Is that an accurate summary of what you are looking for, a way to simulate the active movement of multiple targets and groups in the simulator without using those manual methods?

Let me know if that doesn’t sound right! As always, we track all requests in this category to share with the team, but I wanted a little clarification so that I can best communicate what you are looking for.

Thank you!



Thanks, John, for your analysis. That’s a good summary. I have worked with the simulator to set targets manually, but I’d rather set up single/multiuser test cases and run them against a skill that supports particular group-oriented social behaviours. That would help build reproducible single/multiuser social behaviour tests.

Your musical chairs example is a good one: Jibo inspires games that get kids moving. Also, even in less active skills, people still move about. My social behaviours should adapt to the location of the people and the group while telling a story or playing a game. If someone moves, Jibo doesn’t talk to an empty seat :wink:

Of course, there could be stationary and dynamic visual entities as well as dynamic audio entities. Why the auditory sense? Because Jibo might use various cues to determine whether someone is behind him or beyond his visual frame; maybe someone has joined the group to listen to a story, etc.
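As a sketch of the audio-cue case: a test could check whether an entity’s bearing falls outside an assumed visual field of view, and drive the “audio-only cue” path for it. The FOV half-angle here is a made-up parameter, not an SDK constant:

```javascript
// True when the entity (a plain {x, y} point, Jibo at origin) lies
// outside the assumed visual field of view, i.e. only audio cues
// could reveal it. fovHalfAngle is in radians from straight ahead.
function outsideVisualFrame(entity, fovHalfAngle) {
    const bearing = Math.atan2(entity.y, entity.x);  // 0 = straight ahead
    return Math.abs(bearing) > fovHalfAngle;
}
```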

Thank you John!