Erickson (2002) argues that “context awareness” is motivated by a desire for systems to take action autonomously, leaving us out of the loop. Doing so accurately requires considerable intelligence to draw inferences from the available sensors, and Erickson reckons the project is doomed to failure. However, he thinks we might make some progress if humans are brought back into the loop and given the contextual data in rawer form, so they can interpret it and take appropriate action themselves. I’m not sure. The example he gives can easily be modified to reveal potentially damaging information about a user’s whereabouts and actions:
“Lee has been motionless in a dim place with high ambient sound for the last 45 minutes. Continue with call or leave a message.”
This reminds me of the impressive-looking thesis by Nora Balfe (2010) on a safety-critical railway signalling system. For instance, from the conclusions:
“Feedback from [the system] was … found to be very poor, resulting in low understanding and low predictability of the automation. As signallers cannot predict what the automation will do in all situations they do not feel they can trust it to set routes and frequently step in to ensure trains are routed in the correct order. In the observation study, the differences found between high and low interveners in terms of feedback, understanding and predictability confirm the importance of good mental models in the development and calibration of trust…”
Balfe, N. (2010). Appropriate automation of rail signalling systems: A human factors study. PhD thesis, University of Nottingham.
Erickson, T. (2002). Some problems with the notion of context-aware computing: Ask not for whom the cell phone tolls. Communications of the ACM, 45(2), 102-104.