a couple of weeks ago, i attended a talk by barbara grosz titled “can’t you see i’m busy? designing computers that interrupt only when they should.” grosz discussed interaction, collaboration, and a simulation system her research team has developed to test interaction. the testbed runs optimization problems in which two agents need to share information. communication carries an associated cost, and each agent tries to optimize how often it interrupts the other.
in considering whether a communication should be initiated, the a.i. model evaluates three decision points:
- is there a benefit to collaboration?
- is the partner likely to be willing to communicate?
- will this information lead to a change in the plan or approach?
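the three decision points above amount to a small expected-utility check. here is a minimal sketch of that idea in python; the function name, parameters, and thresholds are my own invention, not the testbed’s actual model:

```python
# hypothetical sketch of the three-point interruption decision.
# all names and the multiplicative combination rule are assumptions.

def should_interrupt(collab_benefit: float,
                     partner_willingness: float,
                     plan_change_prob: float,
                     comm_cost: float) -> bool:
    """decide whether to initiate a communication.

    collab_benefit:      expected gain from collaborating on this item
    partner_willingness: estimated probability the partner will accept
    plan_change_prob:    probability the information changes the plan
    comm_cost:           fixed cost charged for any interruption
    """
    # 1. is there a benefit to collaboration at all?
    if collab_benefit <= 0:
        return False
    # 2 & 3. discount the benefit by how likely the partner is to
    # engage and how likely the information is to change the plan,
    # then compare against the cost of interrupting.
    expected_gain = partner_willingness * plan_change_prob * collab_benefit
    return expected_gain > comm_cost

# a high-value item with a receptive partner clears the cost bar;
# a reluctant partner with low-impact information does not.
print(should_interrupt(10.0, 0.8, 0.5, 2.0))  # -> True
print(should_interrupt(10.0, 0.2, 0.1, 2.0))  # -> False
```

the multiplicative form is just one plausible way to combine the three signals; the point is that each question gates or discounts the value of interrupting.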
test results indicate, somewhat predictably, that under uncertain conditions people are more likely to accept an interruption if they think it’s coming from another person. when the expected value of the interruption is less extreme, though, acceptance rates are about the same regardless of the perceived source.
during the talk, grosz highlighted several implications of this interaction model:
- while interaction requires only communication, collaboration also requires a shared goal.
- relevance is situational: the expected impact of a piece of information cannot be predicted without knowing the current situation.
- a computer agent can gain some trust by representing its level of certainty in the importance of a communication.
- the testbed is a basic model and does not account for any of the emotional overlays or context with which we approach real life problems.
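the trust point in that list suggests a mechanism a partner could actually run: if the agent states its certainty with each interruption, the partner can track how well those stated certainties match outcomes. this is my own hedged illustration, not anything from the talk; the names and the calibration measure are assumptions:

```python
# hypothetical sketch: an agent attaches a confidence to each
# interruption, and the partner tracks calibration over time to
# decide how much trust to extend. all names are invented.

from dataclasses import dataclass

@dataclass
class Interruption:
    message: str
    confidence: float  # agent's stated certainty this matters, 0..1

def calibration_error(history: list) -> float:
    """mean gap between stated confidence and whether the message
    actually mattered (a list of (Interruption, bool) pairs).
    lower is better-calibrated."""
    if not history:
        return 0.0
    return sum(abs(i.confidence - float(mattered))
               for i, mattered in history) / len(history)

history = [
    (Interruption("new obstacle on route", 0.9), True),   # mattered
    (Interruption("minor price update", 0.2), False),     # did not
]
print(calibration_error(history))  # -> 0.15 for this history
```

an agent whose 0.9s usually matter and whose 0.2s usually don’t earns a low calibration error, which is one concrete way “representing its level of certainty” could translate into trust.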
while those who study information understand that significant aspects of interaction are situational, that doesn’t mean we can’t create fairly good predictive models without accounting for intended use. chen and xu (HICSS 2005) point to an important distinction in the study of subjective relevance, contrasting situational relevance with subjective topicality. the former delivers powerful results in limited task-based settings; the latter extends subject relevance (‘this book is about X’ relevance) by taking individual preferences and patterns into account. chen and xu also remind us that topicality is just one of the criteria that must be met to achieve situational relevance.
unfortunately, it doesn’t sound like the agents being built today are very good at learning the preferences of their users. so even if they follow optimal game-theoretic approaches and gather situational data, they’re still missing key building blocks we’re coming to expect from our information providers. without a personalization component, these agents will be helpful only to the average user; that is, they won’t be much good to most of us.