A few requests:
When the dust settles (future), please provide additional documentation for the barcode and QR features covering the capabilities and limitations for techies, and most importantly, best practices for using QR/barcodes (distance, orientation, lighting, etc.) that we can share with the end user when needed. Jibo's cameras, of course, will have far more capabilities than our typical smartphones. It would be helpful in some skills to explain these issues to the end user.
Regarding testing again, it would be useful to somehow create test objects containing one or more QR codes or barcodes. The goal is to simulate Jibo identifying a barcode or QR code in the room. In another post, I discussed a test feature that uses predefined sets of visual targets. It would also be helpful to predefine test QR codes/barcodes at various directions/distances for use in the simulator.
Behaviors supporting barcode/QR reading - There are common user interaction patterns for reading barcodes and QR codes, especially with regard to particular use cases: looking up product info, accessing online resources, etc. We use barcode readers at the supermarket - we get help info and infographics, and we get feedback if we do something wrong. Our skills might call a core behavior (or behaviors) to handle the typical interaction pattern: interact with the user, position a product code, take the picture, show a result, perhaps view the link, handle misreads, etc. However, I don't expect to always use core behaviors involving full user interaction for QR codes; instead, a skill could scan QR codes in the environment and interact with the user only as needed. For reading product info, e.g. a medicine box, I'd use the core behavior to handle the user interactions.
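To make the idea concrete, the prompt/capture/retry pattern above could be sketched as a generic helper. This is purely illustrative - every name here (`scan_interaction`, `ScanResult`, the callback parameters) is hypothetical and not part of any actual SDK; a real core behavior would plug Jibo's own speech, display, and camera calls into the three callbacks:

```python
from enum import Enum, auto

class ScanResult(Enum):
    """Outcome of a single read attempt (hypothetical type)."""
    OK = auto()
    MISREAD = auto()

def scan_interaction(read_code, prompt, show, max_retries=3):
    """Generic barcode/QR interaction loop: prompt the user to
    position the code, attempt a read, retry on misreads, and
    show the result (e.g. product info or a link) on success."""
    for attempt in range(max_retries):
        prompt("Please hold the code in front of my camera.")
        status, payload = read_code()
        if status is ScanResult.OK:
            show(payload)
            return payload
        prompt("I couldn't read that. Let's try again.")
    show("Sorry, I wasn't able to read the code.")
    return None
```

A skill that only wants ambient scanning (no full interaction) would skip this loop and just call `read_code` directly, which matches the distinction drawn above between the two usage styles.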
Thanks for considering all this!