Leveraging Behavioral Science for Cyber Security Risk Mitigation

Pfleeger, Caputo

Tags

cyber security, behavioral science, human behavior, security risk mitigation

Summary

This document discusses the use of behavioral science to manage cyber security risks. It examines how user perceptions, trust, and decision-making processes influence security implementation, and how technological advances can have unintended consequences that reduce trust or increase risk.

Full Transcript

Pfleeger, Caputo: Leveraging behavioral science to mitigate cyber security risk

Why technology alone is not enough
- When security technology is perceived as an obstacle, users may be overwhelmed by difficulties in the security implementation and may mistrust, misinterpret, or override the security.
*"In a voluntary implementation, that competence may be a vector of pride and accomplishment. In a mandatory context, the individual may feel her competence challenged, triggering a negative attitude toward the process." (Bellanger, 2011)
- Using trust to mitigate risks: recent studies show that a blend of comprehensive education and training of system developers and users could be effective → appropriate teaching first, then trust to behave.
- There is evidence that technological advances can have unintended consequences that reduce trust or increase risk → it is important to include the human element when designing, building, and using critical systems.

Identifying behavioral aspects of security
*The buyer and seller are humans enacting a transaction enabled by a system designed, developed, and maintained by humans → there may be neither actual human contact nor direct knowledge of the other human actors involved, but the transaction process reflects its human counterpart.
- Sasse and Flechais (2005): secure systems are socio-technical systems in which we should use an understanding of behavioral science to "prevent users from being the 'weakest link.'" Three perspectives users need to trust:
1. Product → the security controls (policies and mechanisms) imposed on stakeholders
2. Process → how security decisions are made
3. Panorama → the context in which the security operates
- Baier (1986): from expectations to fulfillment → trust rests on the nature of a user's expectations and on the perceived trustworthiness of technology-mediated interactions.

*Scenario findings:
➔ Security is intertwined with the way humans behave when trying to meet a goal or perform a task: in most instances, security is secondary to the user's primary task. When security interferes, the person may ignore or even subvert the security, since the person is rewarded for the primary task; in some sense, the person trusts the system to take care of security concerns. This creates two problems: trusting the system too much, or bypassing it out of no trust.
➔ Limitations on memory or analysis capability interfere with an analyst's ability to perform: the abundance of information generated by automated systems increases the likelihood that important events go unnoticed (cognitive overload).
➔ Inattentional blindness: a person's inability to notice unexpected events when concentrating on a primary task.
➔ Bias in the way each person thinks about security, shaped by experience, goals, and expertise.
➔ Risk perception: decision-makers have a difficult time both understanding the nature of a risk and balancing multiple perceptions of the risk to make the best decision in the time available.

*The combination of narrow focus with a large (and often growing) quantity of information continues to cause failures to "connect the dots."
*Cognitive load and bias serve as organizing principles.

Relevant behavioral science areas
- Recognition is easier than recollection: asking participants to recall a shape without being shown examples was far less successful than displaying a collection of shapes and asking them to identify which one had been shown to them initially. Dhamija and Perrig (2000): people can more reliably recognize their chosen image than remember a selected password; a minimal sketch of this idea follows.
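As an illustration of recognition-over-recall, here is a toy sketch of a Deja Vu-style image-recognition login in Python. Everything in it is an assumption for the sketch: the image pool, the 5-image portfolio, and the enroll/make_challenge/authenticate helpers are illustrative names, not an interface from the paper.

```python
# Sketch of recognition-based authentication in the spirit of Dhamija and
# Perrig's Deja Vu: the user enrolls by choosing a small portfolio of images,
# then authenticates by *recognizing* them among decoys rather than
# recalling a secret. All names and sizes here are hypothetical.
import random

IMAGE_POOL = [f"img_{i:03d}.png" for i in range(200)]  # hypothetical image library

def enroll(portfolio_size: int = 5) -> set[str]:
    """Simulate enrollment: the user picks a small portfolio of images."""
    return set(random.sample(IMAGE_POOL, portfolio_size))

def make_challenge(portfolio: set[str], decoys: int = 20) -> list[str]:
    """Build a challenge grid: the portfolio images hidden among decoys."""
    pool = [img for img in IMAGE_POOL if img not in portfolio]
    grid = list(portfolio) + random.sample(pool, decoys)
    random.shuffle(grid)
    return grid

def authenticate(portfolio: set[str], selections: set[str]) -> bool:
    """Succeed only if the user recognizes exactly their own images."""
    return selections == portfolio

portfolio = enroll()
grid = make_challenge(portfolio)
# A legitimate user recognizes all five of their images in the 25-image grid.
assert authenticate(portfolio, {img for img in grid if img in portfolio})
```

The point of the sketch is only that the user's task is recognition, not recall; the strength of any real scheme would depend on the grid and portfolio sizes.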
- Interference: frequent changes to a memorized item interfere with remembering the new version of the item; the frequency of change is important. Wixted (2004): "recently formed memories that have not yet had a chance to consolidate are vulnerable to the interfering force of mental activity and memory formation." Applying these findings to password memorability, Sasse et al. (2002) showed that login failures increased sharply as required password changes became more frequent; Everitt et al. (2009) and Chiasson et al. (2009) examined similar interference effects with multiple graphical passwords. A policy sketch follows.
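To make the interference finding operational, here is a minimal, hypothetical policy-review sketch that flags rotation intervals short enough to invite the login failures Sasse et al. observed. The MIN_ROTATION_DAYS threshold, the PasswordPolicy fields, and review_policy are illustrative assumptions, not values or interfaces from the paper.

```python
# Hypothetical policy linter: warn when forced password rotation is so
# frequent that, per Sasse et al. (2002), login failures are likely to rise.
from dataclasses import dataclass

MIN_ROTATION_DAYS = 90  # assumed floor, not a value from the paper

@dataclass
class PasswordPolicy:
    rotation_days: int   # how often users are forced to change passwords
    history_depth: int   # how many previous passwords are blocked from reuse

def review_policy(policy: PasswordPolicy) -> list[str]:
    """Return warnings grounded in the memorability findings above."""
    warnings = []
    if policy.rotation_days < MIN_ROTATION_DAYS:
        warnings.append(
            f"Rotation every {policy.rotation_days} days: frequent changes "
            "interfere with consolidating the new password in memory."
        )
    if policy.history_depth > 10:
        warnings.append(
            "Deep password history forces ever more novel passwords, "
            "compounding interference between old and new items."
        )
    return warnings

print(review_policy(PasswordPolicy(rotation_days=30, history_depth=24)))
```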
- Sociology: how to build trust in online networks, treating computer-mediated interaction as an architectural problem and using the nature of the mediation and characterization to shape desired behavior.
- Economics: the role of reputation in establishing trust → the usefulness of reputation is substantially reduced when online traders can freely change their identities and shed their reputations.
- Psychology and economics: actual costs versus perceived costs → transference is complete only if agency costs from intermediation lie within consumer thresholds.
- Cognition: the way people process and learn information.

1. Identifiable victim effect: the tendency of individuals to offer greater aid when a specific, identifiable person (the victim) is observed under hardship, compared to a group.
Implications: users may choose stronger security when possible negative outcomes are tangible and personal, rather than abstract.

2. Elaboration likelihood model: two main routes to attitude change:
- Central route → logical, conscious, and requiring a great deal of thought; the result is often a permanent change.
- Peripheral route → people do not pay attention to persuasive arguments but are swayed by surface characteristics; attitude change is likely to be temporary.
Implications: one of the best ways to motivate users to take the central route when receiving a cyber security message is to make the message personally relevant. Fear can also be effective in making users pay attention, but only if levels of fear are moderate and a solution to the fear-inducing situation is also offered; strong fear leads to fight-or-flight (physical) reactions.

3. Cognitive dissonance: the feeling of discomfort that comes from holding two conflicting thoughts in mind at the same time. It is a very powerful motivator that can lead people to change in one of three ways: change the behavior, justify the behavior by changing the conflicting attitude, or justify the behavior by adding new attitudes. It is most powerful when it concerns self-image.
Implications: to get users to change their cyber behavior, we can first change their attitudes about cyber security. For example, a system could emphasize a user's sense of foolishness concerning the cyber risks he is taking, then offer solutions to relieve that tension.

4. Social cognitive theory: some of an individual's knowledge acquisition can be directly related to observing others within the context of social interactions, experiences, and outside media influences.
Implications: enable users to identify with a recognizable peer and gain a greater sense of self-efficacy; users would then be likely to imitate the peer's actions in order to learn appropriate, secure behavior.

5. Bystander effect: the psychological phenomenon in which someone is less likely to intervene in an emergency when other people are present and able to help than when he or she is alone.
Implications: systems can be designed with mechanisms to counter this effect, encouraging users to take action when necessary.

6. Bias
- Status quo bias → the tendency of people not to change an established behavior without a compelling incentive to do so. Implications: users will need compelling incentives.
- Framing effects → framing refers to the context in which someone interprets information, reacts to events, and makes decisions. Implications: user choices about cyber security may be influenced by framing them as gains rather than losses, or by appealing to particular user characteristics.
- Optimism bias → overestimating the likelihood of positive events and underestimating the likelihood of negative events. Implications: users may think they are immune to cyber attacks, even when others have been shown to be susceptible; systems can be designed to convey risk impact and likelihood in ways that relate to people's real experiences.
- Control bias → the tendency of people to believe they can control or influence outcomes that they clearly cannot. Implications: users may be less likely to use protective measures when they feel they have control over the security risks.
- Confirmation bias → people are not as open to new ideas as they think they are; they often reinforce their existing attitudes by selectively collecting new evidence, interpreting evidence in a biased way, or selectively recalling information from memory. Implications: the system must provide users with an arsenal of evidence to encourage them to change their current beliefs or to mitigate their over-confidence.
- Endowment effect → people usually place a higher value on objects they own than on objects they do not own; a related effect is that people react more strongly to loss than to gain. Implications: users may pay more (both figuratively and literally) for security when it lets them keep something they already have, rather than gain something new.

7. Heuristics: simple rules, inherent in human nature or learned, that reduce cognitive load. When heuristics fail, they can lead to systematic errors or cognitive biases.
- Affect heuristic: enables someone to make a decision based on an affect (i.e., a feeling) rather than on rational deliberation. Implications: if users perceive little risk, the system may need a design that encourages them to take protective measures. The system should also reward the administrator who looks closely at a system audit log because something just doesn't "feel" right.
- Availability heuristic: the relationship between ease of recall and probability → someone will predict an event's probability or frequency based on the ease with which instances of the event come to mind. Implications: if the system is designed to use vivid, personal events as examples rather than statistics and facts, frequent security exercises may encourage more desirable security behavior.

8. Health-related behavioral models
- Health belief model: the perceived benefits must outweigh the barriers or costs. Implications: a user will take protective security actions if he feels that a negative condition can be avoided, has a positive expectation that by taking a recommended action he will avoid that condition, and believes that he can successfully perform the recommended action.
- Extended parallel process model: improving message efficacy by using threats → as long as efficacy perceptions are stronger than threat perceptions, the user will go into danger-control mode; a sketch of this rule follows. Implications: used appropriately, threats and fear can be useful in encouraging users to comply with security.
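The extended parallel process model's core comparison can be sketched as a simple rule. The numeric threat and efficacy scores, the threat_floor, and the Response labels below are hypothetical constructs for illustration; the model itself only says that danger control occurs when perceived efficacy outweighs perceived threat.

```python
# Illustrative sketch of the extended parallel process model's core rule:
# compare perceived efficacy against perceived threat to predict whether a
# fear appeal triggers danger control (fix the problem) or fear control
# (avoid the message). Scores and thresholds are hypothetical.
from enum import Enum

class Response(Enum):
    NO_RESPONSE = "threat too low to motivate any action"
    DANGER_CONTROL = "user acts to reduce the danger (desired outcome)"
    FEAR_CONTROL = "user acts to reduce the fear, e.g. ignores the warning"

def predicted_response(threat: float, efficacy: float,
                       threat_floor: float = 0.3) -> Response:
    """Predict the reaction to a fear appeal, with all inputs in [0, 1]."""
    if threat < threat_floor:          # weak threats are simply ignored
        return Response.NO_RESPONSE
    if efficacy >= threat:             # belief in the fix outweighs the fear
        return Response.DANGER_CONTROL
    return Response.FEAR_CONTROL       # fear dominates; the message backfires

# Usage: a strong threat paired with a clear, doable fix stays productive,
# while a strong threat with no credible fix drives avoidance.
print(predicted_response(threat=0.7, efficacy=0.8))  # Response.DANGER_CONTROL
print(predicted_response(threat=0.9, efficacy=0.4))  # Response.FEAR_CONTROL
```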
- Illness representations: five components: identity, timeline, consequences, control/cure, and coherence. Implications: users concerned about whether to trust a site, person, or document can obtain new information about its security posture and evaluate their attempts to deal with (e.g., moderate, cure, or cope with) its effects; the users then form new representations based on their experiences.
- Theory of reasoned action/planned behavior: (1) people are reasonable and make good use of information when deciding; (2) people consider the implications of their behavior. Implications: the system must create messages that affect users' intentions (control); in turn, intentions are changed by influencing users' attitudes through identification of social norms and behavioral control.
- Stages of change model: provide strategies or processes of change to guide the user through the stages of change toward action and maintenance. Implications: assess the user's stage before developing processes to elicit behavior change.
- Precaution-adoption process theory: seven consecutive stages: unaware; unengaged; deciding about acting; decided not to act; decided to act; acting; and maintenance → people should respond better to interventions that are matched to the stage they are in. Implications: assess the user's stage relative to the seven stages (see the stage-matching sketch at the end of these notes).

Applying these behavioral science findings
- Workshops bridging communities: catalysts for the initiation of new research and encouragement for continued interaction and cooperation across disciplines
- Empirical evaluation across disciplines
- A repository of findings
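To show how the stage-matching idea referenced above could be operationalized, here is a minimal lookup from an assessed stage to a stage-appropriate security intervention. The seven stage names follow precaution-adoption process theory; the intervention texts and all identifiers are illustrative assumptions, not recommendations from the paper.

```python
# Illustrative sketch of stage-matched interventions for the
# precaution-adoption process theory: the seven stages come from the theory;
# the suggested interventions are hypothetical examples.
from enum import Enum, auto

class Stage(Enum):
    UNAWARE = auto()
    UNENGAGED = auto()
    DECIDING = auto()
    DECIDED_NOT_TO_ACT = auto()
    DECIDED_TO_ACT = auto()
    ACTING = auto()
    MAINTENANCE = auto()

# Each stage gets a message matched to where the user actually is.
INTERVENTIONS: dict[Stage, str] = {
    Stage.UNAWARE: "Raise awareness with a vivid, personally relevant incident.",
    Stage.UNENGAGED: "Connect the risk to the user's own assets and tasks.",
    Stage.DECIDING: "Present balanced evidence plus a clear, easy default.",
    Stage.DECIDED_NOT_TO_ACT: "Address specific objections; avoid strong fear.",
    Stage.DECIDED_TO_ACT: "Lower the barrier: one-click setup, guided walkthrough.",
    Stage.ACTING: "Reinforce with feedback that shows the protection working.",
    Stage.MAINTENANCE: "Send periodic, low-friction reminders and refreshers.",
}

def intervention_for(stage: Stage) -> str:
    """Return the stage-matched nudge rather than a one-size-fits-all warning."""
    return INTERVENTIONS[stage]

print(intervention_for(Stage.DECIDING))
```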
