In part 1 of this series, I introduced the reasoning for developing a bridge from existing IT and risk frameworks to the next generation of risk management based on cognitive science. These concepts are no longer theoretical and, in fact, are evolving faster than most IT security and risk professionals appreciate. In part 2, I introduce the pillars of a cognitive risk framework for cybersecurity that make the program operational. The pillars represent existing technology and concepts that are increasingly being adopted by technology firms, government agencies, computer scientists and industries as diverse as health care, biotechnology and financial services, among many others.
The following is an abbreviated version of the cognitive risk framework for cybersecurity (CRFC) that will be published later this year.
A cognitive risk framework is fundamental to the integration of existing internal controls, risk management practice, cognitive security technology and the people responsible for executing the program components that make up enterprise risk management. Cognitive risk fills a gap in today’s cybersecurity programs, which fail to fully address the “softest target”: the human mind.
A functioning cognitive risk framework for cybersecurity provides guidance for developing a CogSec response that is three-dimensional rather than a one-dimensional defensive posture. Further, cognitive risk requires an expanded taxonomy to level-set expectations about risk management through scientific methods and to improve communications about risk. A CRFC is an evolutionary step from intuition and hunches to quantitative analysis and measurement of risks. The first step in the transition to a CRFC is to develop an organizational Cognitive Map. Paul Slovic’s Perception of Risk research is a guide for starting the process: understanding how decision-makers across an organization perceive key risks in order to prioritize actionable steps for a range of events, large and small. A Cognitive Map is one of many tools risk professionals must use to expand discussions on risk and form agreements on enhanced techniques in cybersecurity.
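To make the Cognitive Map idea concrete, the sketch below (illustrative only; the risks, respondents and 1-10 scoring scale are hypothetical) aggregates stakeholder survey ratings along the two factors Slovic used in his cognitive maps, dread and unfamiliarity, so that differences in perception become visible and discussable.

```python
# Illustrative only: a toy aggregation of stakeholder risk-perception survey
# scores along the two factors Slovic used in his cognitive maps
# ("dread" and "unknown/unfamiliarity"). Risk names and scores are hypothetical.
from statistics import mean

# Each respondent rates each risk 1-10 on (dread, unfamiliarity).
survey = {
    "ransomware":        [(9, 4), (8, 5), (7, 6)],
    "insider misuse":    [(6, 7), (5, 8), (7, 7)],
    "vendor compromise": [(7, 8), (8, 9), (6, 8)],
}

def cognitive_map(responses):
    """Average each risk's ratings into a (dread, unfamiliarity) coordinate."""
    return {
        risk: (mean(d for d, _ in scores), mean(u for _, u in scores))
        for risk, scores in responses.items()
    }

# Rank risks by combined perceived dread and unfamiliarity.
for risk, (dread, unknown) in sorted(
        cognitive_map(survey).items(), key=lambda kv: -sum(kv[1])):
    print(f"{risk:18s} dread={dread:.1f} unknown={unknown:.1f}")
```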
Risk communications sound very simple on the surface, but even risk experts will use the term “risk” with different meanings without recognizing the contradictions. One senior executive at a major bank told me she thought the firm understood its risks, but the 2008 Great Recession revealed major disagreements in how the firm talked about risk and in the decisions made to manage it. Without a structured way to put risks in context and account for a diversity of risk perceptions, poor communication about risk is more common than not. “The fact that the word ‘risk’ has so many different meanings often causes problems in communication,” according to Slovic.
Organizations rarely openly discuss these differences or even understand they exist until a major risk event forces the issue onto the table. Even then the focus of the discussion quickly pivots to solving the problem with short-term solutions, leaving the underlying conflicts unresolved. Slovic, Peters, Finucane and MacGregor (2005) posited that “risk is perceived and acted on in two ways: Risk as Feelings refers to individuals’ fast, instinctive and intuitive reactions to danger. Risk as Analysis brings logic, reason and scientific deliberation to bear on risk management.”
Some refer to this exercise as forming a “risk appetite,” but again the term is vague and doesn’t capture the full range of ways individuals experience risk. Researchers now recognize the views of nonscientists, who judge risk subjectively, as relevant alongside those of scientists, who evaluate adverse events in terms of the probability and consequences of risks. A deeper view into risk perceptions explains why there is little consensus on the role of risk management and why there is dissatisfaction when expectations are not met.
Techniques for reconciling these differences create a forum that leads to better discussions about risk. Discussions about risk management are extremely important to organizational success, yet paradoxically produce discomfort whether in personal or business life when planning for the future. Personal experience in conjunction with a body of research demonstrates that the topic of risk tends to elicit a strong emotional response. Kahneman and Tversky called this response “loss aversion.” “Numerous studies have shown that people feel losses more deeply than gains of the same value (Kahneman and Tversky 1979, Tversky and Kahneman 1991).” Losses have a powerful psychological impact that lingers long after the fact, coloring one’s perception about risk-taking.
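Kahneman and Tversky’s point can be made tangible with their own value function. The sketch below is illustrative only and uses the median parameter estimates reported by Tversky and Kahneman in 1992: under those estimates, a $100 loss “weighs” roughly 2.25 times as much as a $100 gain.

```python
# Illustrative only: the prospect theory value function with the median
# parameter estimates reported in Tversky and Kahneman (1992), showing
# why a loss is felt more strongly than a gain of the same size.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25  # median estimates from the 1992 study

def value(x):
    """Subjective value of a gain or loss x relative to a reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

gain, loss = value(100), value(-100)
print(f"v(+100) = {gain:7.1f}")                      # about  +57.5
print(f"v(-100) = {loss:7.1f}")                      # about -129.5
print(f"loss/gain ratio = {abs(loss) / gain:.2f}")   # about 2.25
```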
Over time, these perceptions about risk and loss become embedded in the unconscious, and – by virtue of the vagaries of memory – the facts and circumstances fade. The natural bias to avoid loss leads us to a fallacy that assumes losses are avoidable if people simply make the right choices. This common view of risk awareness fails to account for uncertainty, the leading cause of surprise, when expectations are not met. This fallacy of perceived risks produces an underestimation or overestimation of the probability of success or failure.
A cognitive risk framework for cybersecurity, or any other risk, requires a clear understanding of, and agreement on, the role(s) of data management; risk and decision-support analytics; parameters for dealing with uncertainty (imperfect information); and how technology is integrated to facilitate the expansion of what Herbert A. Simon called “bounded rationality.” Building a CRFC does not eliminate risks; it develops a new kind of intelligence about risk.
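As one illustration of what “risk as analysis” can look like in practice, the sketch below runs a toy frequency-severity Monte Carlo for annualized cyber loss. The distributions and parameters are hypothetical assumptions, not calibrated figures; a real program would fit them to internal and industry loss data.

```python
# Illustrative only: a toy frequency-severity Monte Carlo for annualized
# cyber loss, one way to express uncertainty quantitatively rather than
# intuitively. All distributions and parameters are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(seed=7)
trials = 100_000

# Hypothetical assumptions: incidents per year ~ Poisson(2);
# loss per incident ~ lognormal with a median of $250,000.
incidents = rng.poisson(lam=2.0, size=trials)
annual_loss = np.array([
    rng.lognormal(mean=np.log(250_000), sigma=1.2, size=n).sum()
    for n in incidents
])

print(f"median annual loss : ${np.median(annual_loss):,.0f}")
print(f"95th percentile    : ${np.percentile(annual_loss, 95):,.0f}")
```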
The goal of a cognitive risk framework is to advance risk management in the same way economists deconstructed the “rational man” theory. The myth of “homo economicus” still lingers in risk management, damaging the credibility of the profession. “Homo economicus, economic man, is a concept in many economic theories portraying humans as consistently rational and narrowly self-interested agents who usually pursue their subjectively defined ends optimally.”[1] These concepts have since been contrasted with Simon’s bounded rationality, not to mention any number of financial market failures and episodes of unethical and fraudulent behavior that stand as evidence of the weakness of the argument. A cognitive risk framework will serve to broaden awareness of the science behind cognitive hacks, as well as the factors beyond the selection of a defensive strategy that limit our ability to deal effectively with the cyber paradox. Let’s take a closer look at what a cognitive risk framework for cybersecurity looks like and consider how to operationalize the program.
The foundational base (“guiding principles”) for developing a cognitive risk framework for cybersecurity starts with Slovic’s “Cognitive Map – Perceptions of Risk” and an orientation in Simon’s “Bounded Rationality” and Kahneman and Tversky’s “Prospect Theory: An Analysis of Decision Under Risk.” In other words, a cognitive risk framework formally develops a structure for actualizing the two ways people fundamentally perceive adverse events: “risk as feelings” and “risk as analysis.” Each of the following guiding principles is a foundational building block for a more rigorous, science-based approach to risk management.
The CRFC guiding principles expand the language of risk with concepts from behavioral science to build a bridge connecting decision science, technology and risk management. The guiding principles also establish a link to, and recognize, the important work undertaken in the COSO internal control and enterprise risk management frameworks, the ISO 31000 risk management framework, and the NIST and ISO/IEC 27001 information security standards, each of which makes reference to the need for processes to deal with the human element. The opportunity exists to extend the cognitive risk framework to other risk programs; however, the focus here is on cybersecurity and the program components needed to operationalize its execution. The CRFC program components include five pillars:
- Intentional Controls Design
- Cognitive Informatics Security (Security Informatics)
- Cognitive Risk Governance
- Cybersecurity Intelligence & Active Defense Strategies
- Legal “Best Efforts” Considerations in Cyberspace
A Brief Overview of the Five Pillars of a CRFC
Intentional Controls Design
Intentional controls design recognizes the importance of trust in networked information systems by advocating for the automation of internal controls design and integration for IT, operational, audit and compliance controls. Intentional controls design is the process of embedding information security controls, active monitoring, audit reporting, risk management assessment and operational policy and procedure controls into networked information systems through user-guided GUI application design and data repositories that enable machine learning, artificial intelligence and other currently available smart-system methods.
Intentional controls design is an explicit choice by information security analysts to reduce or remove reliance on people through the use of automated controls. Automated controls must be animated through machine learning, artificial intelligence algorithms and other automation based on regulatory guidance and internal policy; a minimal sketch of one such control follows the list below. Intentional controls design is implemented on two levels of hierarchy:
- Enterprise-level intentional controls design anticipates that these controls are mandatory across the organization and can be changed or modified only with the approval of the senior executives responsible for enterprise governance;
- Operational-level intentional controls design anticipates that each division or business unit may require a unique control design to account for line-of-business differences in regulatory regimes, risk profile, vendor relationships and other factors unique to its operations.
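Below is a minimal sketch of an intentional control expressed as code rather than as a manual procedure. The policy thresholds, account fields and function names are hypothetical; the point is that the control runs continuously without relying on a person to remember to perform it.

```python
# Illustrative only: a minimal "control as code" check that flags accounts
# violating a hypothetical access policy (privileged accounts without MFA,
# or accounts dormant past a policy threshold), so the control executes
# automatically instead of depending on a periodic manual review.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Account:
    user: str
    privileged: bool
    mfa_enabled: bool
    last_login: date

MAX_DORMANT_DAYS = 90  # hypothetical policy threshold

def control_violations(accounts, today):
    """Return (user, finding) pairs for every policy violation detected."""
    findings = []
    for a in accounts:
        if a.privileged and not a.mfa_enabled:
            findings.append((a.user, "privileged account without MFA"))
        if today - a.last_login > timedelta(days=MAX_DORMANT_DAYS):
            findings.append((a.user, "dormant account past policy threshold"))
    return findings

accounts = [
    Account("jlee", privileged=True, mfa_enabled=False, last_login=date(2017, 1, 3)),
    Account("rkim", privileged=False, mfa_enabled=True, last_login=date(2016, 6, 1)),
]
for user, issue in control_violations(accounts, today=date(2017, 2, 1)):
    print(f"{user}: {issue}")
```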
Cognitive Informatics Security (Security Informatics)
Cognitive informatics security is a rapidly evolving discipline within cybersecurity and health care, with so many branches that it is difficult to settle on a single definition. Think of cognitive security as an overarching strategy for cybersecurity executed through a variety of advanced computing methodologies.
“Cognitive computing has the ability to tap into and make sense of security data that has previously been dark to an organization’s defenses, enabling security analysts to gain new insights and respond to threats with greater confidence at scale and speed. Cognitive systems are taught, not programmed, using the same types of unstructured information that security analysts rely on.” [2]
The International Journal of Cognitive Informatics and Natural Intelligence defines cognitive informatics as, “a transdisciplinary enquiry of computer science, information sciences, cognitive science and intelligence science that investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. Cognitive computing is an emerging paradigm of intelligent computing methodologies and systems based on cognitive informatics that implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain.”[3]
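As a toy illustration of a system that is “taught, not programmed,” the sketch below trains a small text classifier on labeled incident notes using scikit-learn rather than hand-written rules. The samples, labels and model choice are hypothetical and far simpler than a production cognitive security platform.

```python
# Illustrative only: a toy example of "taught, not programmed." The model
# learns from labeled incident notes instead of explicit rules. The notes,
# labels and model are hypothetical stand-ins for a real cognitive system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "user reported email asking to verify payroll credentials",
    "invoice attachment from unknown vendor with macro enabled",
    "scheduled patch window completed on database cluster",
    "routine password rotation for service accounts",
]
labels = ["phishing", "phishing", "benign", "benign"]

# Train on the labeled examples, then score a new, unseen note.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)
print(model.predict(["urgent: confirm your login to avoid account suspension"]))
```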
Cognitive Risk Governance
The cognitive risk governance pillar is concerned with the role of the board of directors and senior management in strategic planning and executive sponsorship of cybersecurity. Boards of directors historically delegate risk and compliance reporting to the audit committee, although a few forward-thinking firms have appointed a senior risk executive who reports directly to the board. In order to implement a cognitive risk framework for cybersecurity, the entire board must participate in an orientation on the guiding principles to set the stage and tone for the transformation required to incorporate cognition into a security program.
The framework represents a transformational change in risk management, cybersecurity defense and the understanding of decision-making under uncertainty. To date, traditional risk management has lacked the scientific rigor of quantitative analysis and predictive science. The framework dispels myths about risk management while aligning the practice of security and risk management with the best science and technology available today and in the future.
Transformational change from an old to a new framework requires leadership from the board and senior management that goes beyond the sponsorship of a few new initiatives. The framework represents a fundamentally new vision for what is possible in risk and security, whether addressing cybersecurity or enterprise risk management. Change is challenging for most organizations; the transformation required to move to a new level of cognition may be the hardest, but most effective, change any firm will ever undertake. This is exactly why the board and senior management must understand the framing of decision-making and the psychology of choice. Why, you may ask, must senior management understand what one does naturally and intuitively? The answer is that change is a choice, and choosing among a set of options is not as intuitive or simple as one thinks.
Cybersecurity Intelligence and Active Defense Strategies
“Information on its own may be of utility to the commander, but when related to other information about the operational environment and considered in the light of past experience, it gives rise to a new understanding of the information, which may be termed ‘intelligence.'”[4]
The cybersecurity intelligence and active defense strategies (CIDS) pillar is based on the principles of the “Joint Intelligence” report of the 17-member defense and intelligence community. Cybersecurity intelligence is conducted to develop information on four levels: strategic, operational, tactical and asymmetrical.
- Strategic intelligence should be developed for the board of directors, senior management and the cyber risk governance committee.
- Operational intelligence should be designed to provide security professionals with an understanding of threats and operational environment vulnerabilities.
- Tactical intelligence must provide directional guidance for offensive and defensive security strategies.
- Asymmetrical intelligence strategies include monitoring the cyber black market and gathering other market intelligence from law enforcement and other means where possible.
CIDS also acts as the laboratory for cybersecurity intelligence, responsible for leading the human and technological security practice in a data-driven format that provides rapid-response capabilities. Information-gathering is the process of providing organizational leadership with context for improved decision-making on current and forward-looking objectives that are key to operational success, or to avoiding operational failure. Converting information into intelligence requires an organization to develop formal processes, capabilities, analysis, monitoring and communication channels that enhance its ability to respond appropriately and in a timely manner. Intelligence-gathering assumes that the organization has well-defined cybersecurity objectives, plans for executing them, and the capability to respond accordingly to countermeasures (surprise) as well as expected outcomes.
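A small sketch of how intelligence items might be tagged by level and routed to the audiences described above follows; the routing table, data structure and example item are hypothetical.

```python
# Illustrative only: a minimal structure for tagging intelligence items by
# level and routing them to an audience. The levels follow the four named in
# this pillar; the routing table and the example item are hypothetical.
from dataclasses import dataclass

AUDIENCES = {
    "strategic":    ["board of directors", "senior management", "risk governance committee"],
    "operational":  ["security operations"],
    "tactical":     ["incident response", "threat hunting"],
    "asymmetrical": ["threat intelligence", "law enforcement liaison"],
}

@dataclass
class IntelItem:
    level: str
    summary: str
    source: str

def route(item: IntelItem):
    """Return the audiences that should receive this intelligence item."""
    return AUDIENCES.get(item.level, ["security operations"])

item = IntelItem("asymmetrical",
                 "credentials offered for sale on a dark-web forum",
                 "external monitoring")
print(route(item))
```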
Legal “Best Efforts” Considerations in Cyberspace
To say that the legal community is struggling with how to address cyber risks is an understatement: on the one hand, firms must address the protection of their own clients’ data; on the other, they must determine negligence in a global environment where no organization can guard against a data breach with 100 percent certainty. “The ABA Cybersecurity Legal Task Force, chaired by Judy Miller and Harvey Rishikof, is hard at work on the Cyber and Data Security Handbook,”[5] along with the Cyber Incident Response Handbook, which originated with the task force. Law firms face the same challenges as all other organizations but are also held to a higher standard by ethical rules that require confidentiality of attorney-client and work-product data. I looked to the guidance provided by the ABA to frame the fifth pillar of the CRFC.
The concept of “best efforts” is a contractual term used to obligate the parties to make their best attempt to accomplish a goal, typically used when there is uncertainty about the ability to meet a goal. “Courts have not required that a party under a duty to use best efforts to accomplish a given goal make every available effort to do so, regardless of the harm to it. Some courts have held that the appropriate standard is one of good faith. Black’s Law Dictionary 701 (7th ed. 1999) has defined good faith as ‘A state of mind consisting in (1) honesty in belief or purpose, (2) faithfulness to one’s duty or obligation, (3) observance of reasonable commercial standards of fair dealing in a given trade or business, or (4) absence of intent to defraud or to seek unconscionable advantage.'”[6]
Boards of directors and senior executives are held to these standards by contractual agreement in the event a breach occurs, whether they are aware of the standards or not. The ABA has adopted a security program guide developed by Carnegie Mellon University’s Software Engineering Institute. The Carnegie Mellon Enterprise Security Program (ESP) has been tailored for law firms as a prescriptive set of security-related activities, as well as incident response and ethical considerations. The Carnegie Mellon ESP spells out that “some basic activities must be undertaken to establish a security program, no matter which best practice a firm decides to follow. (Note that they are all harmonized and can be adjusted for small firms.) Technical staff will manage most of these activities, but firm partners and staff need to provide critical input. Firm management must define security roles and responsibilities, develop top-level policies and exercise oversight. This means reviewing findings from critical activities; receiving regular reports on intrusions, system usage and compliance with policies and procedures; and reviewing the security plans and budget.”
This information is not legal guidance for complying with an organization’s best-efforts requirements. It is provided to bring awareness to the importance of board and senior management participation in ensuring that all the bases are covered in cyberrisk. The CRFC’s fifth pillar completes the framework as a link to existing information security standards, with an enhanced approach that includes cognitive science.
A cognitive risk framework for cybersecurity represents an opportunity to accelerate advances in cybersecurity and enterprise risk management simultaneously. The convergence of technology, data science, behavioral research and computing power is no longer wishful thinking about the future. The future is here, but in order to fully harness the power of these technologies and the benefits they make possible, IT security professionals and risk managers in general need a guidepost for comprehensive change. The cognitive risk framework for cybersecurity is the first of many advances that will change how organizations manage risk, now and in the future, in fundamental and profound ways few have dared to imagine.