CUTS

Physical risks are inherently defined by the physical environment. Cyber security risks are similarly defined by the combined physical and electronic environment. However, unlike the increased risk from speed in the rain on the highway at night, the dimensions of that combined environment and the nature of the underlying risks are not so obvious. Physical risks are often transparent and inherently aligned with human information processing capacity: contextual, often visual, and at a pace that fits well within a human narrative. In contrast, cyber risks are ill-suited for human risk perception: they are either literally invisible or presented in a decontextualized manner. There is a critical need in computer security to communicate risks and thereby enable informed decisions by average, non-expert computer users.

Thus the design of the current research prototype includes four lines of development:

Risk Context Analysis- Identify a user's risk context from intrinsic factors (user activity, history, and known network entities) and extrinsic factors (system configuration, location, network details).

Automatic Context Response- Adapt system actions and configuration automatically as the context changes, reducing cognitive load on the user by taking non-controversial actions without involving the human.

Metaphorical Risk Communication- Convey the risk factors of a particular context to the user in narrative form consistent with the user's mental model, so that they are quickly and effectively understood.

Intelligent Communication- Engage the user effectively, infrequently, appropriately, and only for the time necessary to communicate.

Five Guidelines

First, implement high security defaults and then automatically decrease them as feasible.

Second, make it possible to override these defaults in a simple, automated manner, with a single click. However, require the individual to first experience risk communication that is either highly personalized or demographically targeted (based on information available to the system). Individuals can thus take risks, but they do so knowingly.
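The first two guidelines together can be sketched as a control that defaults to the protective setting and can be disabled in one action, but only after the risk message has been acknowledged. The class and names here are assumptions for illustration:

```python
class SecurityControl:
    """A protective setting that is on by default (guideline 1) and can be
    overridden with a single action, but only after the individual has
    acknowledged a targeted risk message (guideline 2). Illustrative only."""

    def __init__(self, name, risk_message):
        self.name = name
        self.risk_message = risk_message
        self.enabled = True  # high-security default

    def override(self, acknowledged):
        # The override itself is a single call, but the risk communication
        # must have been shown and acknowledged first.
        if not acknowledged:
            raise PermissionError(f"Show risk message first: {self.risk_message}")
        self.enabled = False
        return self.enabled
```

The point is the ordering constraint, not the data model: the one-click path exists, but it cannot bypass the risk communication.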

Third, personalize security for the context. Some situations require temporary disarmament by the individual. One example is connecting to a commercial wireless service in an airport: scripts must be enabled, active advertisements accepted, and third-party cookies allowed. In this case the individual is required to accept high levels of risk in the interests of the party controlling the connectivity. Recall these contexts, isolate as much of the machine as possible, and clean the machine on context changes. In contrast, some situations require the highest levels of security, and some of these can be recognized by the client machine. Examples include entering authentication credentials previously used in a financial context into a non-banking site, installing downloaded software, or entering critical information, e.g. Social Security or bank account numbers, into a recently created site.
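The "recall these contexts, isolate, and clean on context changes" step can be sketched as named context profiles plus a manager that discards transient state when the context changes. The profile contents and field names are hypothetical:

```python
# Hypothetical context profiles: each records the settings a context demands.
PROFILES = {
    "airport_wifi": {"scripts": True,  "third_party_cookies": True,  "isolation": "sandbox"},
    "banking":      {"scripts": False, "third_party_cookies": False, "isolation": "strict"},
}

class ContextManager:
    """Recall known contexts and clean transient state (e.g. cookies
    accepted under duress) whenever the context changes. Illustrative."""

    def __init__(self):
        self.current = None
        self.transient_state = []

    def enter(self, context):
        if context not in PROFILES:
            raise KeyError(f"unknown context: {context}")
        if self.current is not None and context != self.current:
            self.transient_state.clear()  # clean the machine on context change
        self.current = context
        return PROFILES[context]
```

So state acquired under the permissive airport profile never follows the user into the banking context.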

Fourth, personalize security for the individual. A translucent design uses history and automated intelligence to provide contextual security settings and minimize individual risk. Intelligent secure interactions observe individual security behaviors, allowing individuals to disable protections even when security is recommended, while also implementing the most stringent security settings the individual will tolerate in a given context.
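One reading of "the most stringent settings the individual will tolerate" is a per-context record that ratchets proposals down when the user overrides and holds them up when the user accepts. The numeric levels and update rule below are assumptions, not the prototype's actual mechanism:

```python
class TranslucentPolicy:
    """Track, per context, the most stringent security level the individual
    has tolerated (higher number = more stringent). Illustrative sketch."""

    def __init__(self, default_level=3):
        self.default_level = default_level
        self.tolerated = {}  # context -> highest level not overridden

    def propose(self, context):
        # Start from the high-security default; adapt from observed behavior.
        return self.tolerated.get(context, self.default_level)

    def record(self, context, level, overridden):
        if overridden:
            # The user disabled this level: propose a less stringent one next time.
            self.tolerated[context] = min(self.propose(context), max(level - 1, 0))
        else:
            # The user tolerated this level: do not propose anything weaker.
            self.tolerated[context] = max(self.propose(context), level)
```

The design choice this illustrates is that adaptation is per-context: tolerating weak settings at a cafe should not weaken proposals for banking.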

Fifth, use social context not only in the interest of advertisers but also in the interest of the individual. This implies observing histories of interactions locally, as well as histories of simple clicks. Note that this does not imply or require reporting any of this to a centralized surveillance entity.