I assume that users will use a core behavior to browse and search photos taken by Jibo.
(I'm uncertain of your roadmap.)
Our skills should be able to embed that behavior and also use a query API to retrieve photos.
This core behavior could likely be voice controlled as well as touch oriented.
I’d be interested in a query API based on metadata that is either automatically collected or added by users or skills. Query metadata might include:
- person entities identified in the photo
- optional tags added by the user, e.g. names of pets, favorite toys, or objects; tags could also be a way to identify rooms or objects associated with a person or group
- any barcode or QR code automatically extracted
- a person’s photo (selfie or preferred photo)
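To make the request concrete, here is one hypothetical shape such a query API could take. Everything below (`PhotoQuery`, the field names, `queryPhotos`) is invented for illustration; the real SDK may expose something entirely different.

```typescript
// Hypothetical photo query API sketch -- names and shapes are invented,
// not part of any real Jibo SDK.
interface PhotoMetadata {
  persons: string[];    // person entities identified in the photo
  tags: string[];       // optional user/skill-added tags (pets, toys, rooms...)
  barcodes: string[];   // any barcode or QR code payloads extracted
  isSelfie: boolean;    // whether this is a selfie / preferred portrait
  takenAt: Date;
}

interface Photo {
  id: string;
  uri: string;
  metadata: PhotoMetadata;
}

// A query is just a partial filter over the metadata fields.
interface PhotoQuery {
  person?: string;
  tag?: string;
  barcode?: string;
  selfiesOnly?: boolean;
}

// Minimal in-memory matcher showing the intended semantics:
// every supplied filter field must match.
function matches(photo: Photo, q: PhotoQuery): boolean {
  if (q.person && !photo.metadata.persons.includes(q.person)) return false;
  if (q.tag && !photo.metadata.tags.includes(q.tag)) return false;
  if (q.barcode && !photo.metadata.barcodes.includes(q.barcode)) return false;
  if (q.selfiesOnly && !photo.metadata.isSelfie) return false;
  return true;
}

function queryPhotos(library: Photo[], q: PhotoQuery): Photo[] {
  return library.filter(p => matches(p, q));
}
```

With something like this, a skill could ask for `{ person: "Alice", selfiesOnly: true }` to get Alice's preferred portrait for display during a group game.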
I do want to get a photo of a person for use in my skills, first choice a selfie or some preferred photo. This is useful again for games, stories, etc. Especially in a group context, I want to support “inclusiveness” and personal communication by sometimes displaying a photo of the person when Jibo asks someone a question.
Lastly, relating to the photo API, I’d like to be able to add metadata to new photos taken from within a skill, so those photos can be found later through the query API.
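The write side could be as simple as a capture call that accepts metadata up front. Again, `capturePhoto` and its option names are hypothetical, just to show the idea of tagging at capture time so later queries can find the photo.

```typescript
// Hypothetical capture-with-metadata call -- names invented for illustration,
// not part of any real Jibo SDK.
interface CaptureMetadata {
  tags: string[];     // skill-supplied tags, e.g. the name of a game round
  persons: string[];  // people the skill knows are in frame
}

interface CapturedPhoto {
  id: string;
  metadata: CaptureMetadata;
}

// Stand-in for a skill-side camera API; a real SDK call would also
// return the captured image itself.
let nextId = 0;
function capturePhoto(metadata: CaptureMetadata): CapturedPhoto {
  nextId += 1;
  return { id: `photo-${nextId}`, metadata };
}

// A scavenger-hunt skill could tag each shot it takes:
const shot = capturePhoto({ tags: ["scavenger-hunt"], persons: ["Alice"] });
```

The point is that metadata attached here would use the same vocabulary (persons, tags, barcodes) as the query side, so skill-taken photos become first-class citizens of the photo library.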
Developers - please add your ideas and be sure to like this post.
Thanks in advance for considering this!