Currently the API focuses on triggering skills via speech rules. However, previous feature requests have asked for other triggers, such as scheduled calendar events or IFTTT.
I assume that facial, audio, gesture, or other device triggers are or will one day be possible.
The triggered skill is given info about how it was triggered so that each situation can be handled appropriately.
The additional info depends on the trigger type: e.g. a calendar or IFTTT trigger might carry associated event info, target person(s), action, and relevant IFTTT data; a device trigger might carry device model, standard action id, etc.; a facial/audio/gesture trigger might carry entity ids, speaker, gesture type, etc.
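A trigger payload like the one described above could be modeled as a discriminated union. This is only a sketch; all type and field names here are assumptions, not part of any released Jibo API:

```typescript
// Hypothetical trigger payload shapes -- names are illustrative assumptions.
type Trigger =
  | { type: "speech"; rule: string; utterance: string }
  | { type: "calendar"; eventTitle: string; targets: string[] }
  | { type: "ifttt"; appletId: string; payload: Record<string, unknown> }
  | { type: "device"; deviceModel: string; actionId: string }
  | { type: "perception"; kind: "face" | "audio" | "gesture"; entityId: string };

// A skill can branch on the trigger type to handle each situation.
function describeTrigger(t: Trigger): string {
  switch (t.type) {
    case "speech": return `speech rule ${t.rule}`;
    case "calendar": return `calendar event "${t.eventTitle}"`;
    case "ifttt": return `IFTTT applet ${t.appletId}`;
    case "device": return `device ${t.deviceModel} action ${t.actionId}`;
    case "perception": return `${t.kind} entity ${t.entityId}`;
  }
}
```

The discriminant (`type`) lets the compiler check that every trigger kind is handled, so adding a new trigger type later forces each skill's handler to be revisited.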
A skill can define what trigger types it can handle. A skill can't be activated by a calendar reminder or IFTTT if it only waits for speech-rule activation. Depending on the trigger, additional interactions might be needed.
Candidate trigger types:
- device (a better name needed?)
- event / calendar reminder
- idle mode or autoplay? (a skill like wallpaper can run with or without human interaction)
- jibo or system triggered (Jibo recommends a skill to someone)
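One way to express "a skill defines which of these trigger types it handles" is a manifest the runtime checks before activation. Again, a minimal sketch; the names (`SkillManifest`, `canActivate`, the trigger type strings) are assumptions for illustration:

```typescript
// Hypothetical trigger-type names drawn from the candidate list above.
type TriggerType =
  | "speech"
  | "calendar"
  | "ifttt"
  | "device"
  | "idle"      // autoplay, e.g. a wallpaper-style skill
  | "system";   // Jibo/system recommends the skill to someone

// A skill declares up front which trigger types it can handle.
interface SkillManifest {
  name: string;
  handles: TriggerType[];
}

// The runtime activates a skill only for triggers it declared.
function canActivate(skill: SkillManifest, trigger: TriggerType): boolean {
  return skill.handles.includes(trigger);
}

// Example: a wallpaper skill that runs on speech or in idle/autoplay mode,
// but cannot be woken by a calendar reminder.
const wallpaper: SkillManifest = { name: "wallpaper", handles: ["speech", "idle"] };
```

With this shape, the case from the notes above falls out directly: a skill that lists only `"speech"` simply never matches a calendar or IFTTT trigger.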