Dr. Martin Trevino, Chief Scientist, RiskOpsAI™
A New Understanding
In a world dominated by cyber-attacks, political events, war, and an unrelenting pandemic, the ability to proactively manage enterprise risk has become a pillar of successful organizations. To stay Left of Boom[1], Enterprise Risk Management (ERM) teams have matured from “gut feel” exercises to efforts utilizing advanced analytics and machine learning.
We have also borne witness to the dramatic evolution of the career paths and importance of roles associated with ERM. The Chief Information Security Officer (CISO) has risen from a niche position to one reporting directly to the Chief Executive Officer (CEO) and Board of Directors. The General Data Protection Regulation (GDPR) has given rise to the position of Chief Data Privacy Officer (CDPO) as part of its commitment to information assurance and privacy. How this role will define its reporting structure and divide responsibility for data security within the organizational structure of the firm remains to be seen, but defying GDPR is not an option for global firms.
What is unmistakable is the rise in importance of ERM within global enterprises.
A Failed Problem Set
Despite the ardent pursuit of strategic objectives through ERM, the effective mitigation of organizational risk often remains a failed problem set. The inconvenient evidence for this assertion takes the form of short CISO tenures, the never-ending list of cyber breaches, ransomware attacks, and “leaked” personal data, and the inability to predict the multi-dimensional effects of the pandemic or geopolitical events on the firm and its supply chain.
The reasons for ineffective ERM efforts are numerous and often specific to the firm, but common among them are a lack of critical data – in spite of collecting massive amounts of data and being able to visualize it effectively – and a lack of understanding of how the brain makes data-driven, risky decisions.
The critical “so what” realization from those of us who are both participants in and scientists of ERM is that an effective ERM program has less to do with collecting mountains of data, or with its accuracy or precision, than with presenting the “right” data in ways that help the human brain make high-risk, data-informed decisions.
If we are to turn risk management into a successfully addressed problem set, we will have to reimagine ERM and the decision-making that accompanies it, be it AI-driven or human-centric.
The New Battleground in ERM
The new battleground for Enterprise Risk Management is the human brain. At the cutting edge of efforts to improve risky decisions that typify ERM is the focus on the decision-making process of the brain and the decision-errors we all suffer from.
At the center of this shift in thinking are the bodies of knowledge of neuroscience and neuro- and cognitive psychology. This represents a pivotal departure from the thinking that data science and business intelligence (BI) have pursued for decades.
A vast body of knowledge now supports what were once unscientific notions, such as the idea that evidence-based decision-making will lead to better strategic decisions.[2] To the dismay of advocates of this approach, however, it has had little effect on actual decision-making.
In the last 15 years, new research in neuroscience and psychology, coupled with the rediscovery of past scientific experiments, has helped explain why data science and self-serve business intelligence rarely influence decision-making at its most critical points – creative and strategic decisions.
For decades now, attempts to advance data-driven decision-making through thoughtful UX/UI design have been touted as gospel by data scientists and proponents of self-serve BI packages. Foundational tenets were created and gleefully adopted by consulting firms and individuals personally invested in data visualization.
Most notable are the ‘rules’ of data visualization from seminal thinkers such as Edward Tufte. Tufte advanced tenets such as the data-density ratio, simplicity, data accuracy, and the principle that everything should speak to the data. While Tufte’s rules are relevant and founded in good faith, they only minimally influence (if at all) how the human brain examines and accepts data in high-order decisions.[3]
Neither do the Gestalt principles of design, as they cannot alter how the human brain interrogates data, its predetermined nature, the formulation of the brain’s internal model, or how that model is updated.
This is not to dismiss the benefits of good UX/UI and analytic design – they are no doubt important and can influence the examination of data. More research is needed in this realm.
Yet the simple fact is that they cannot overcome how the human brain is structured to interrogate data, our propensity to commit decision errors, or the intractable equations that govern decision-making in the human brain.
The modern understanding of how we as human beings interrogate data and make risky decisions began with the experiments of Alfred Lukyanovich Yarbus (1914–1986), who discovered that we inspect data not from a neutral perspective but from one of preconceived, desired outcomes.
Since this is not a scientific paper, we can jump ahead a few decades to Malcolm Gladwell’s excellent book Blink, which popularized the concept of rapid cognitive assessments, or “thin slicing.” Gladwell highlights a vast body of knowledge suggesting that we go about our lives largely ‘unconscious’ and make an overwhelming number of decisions with a complicated system that is rapid and accurate, but prone to error. We often describe this simply as “gut feel.”
Subsequent research shows we are supremely confident in these decisions and readily dismiss even the most valid, concrete evidence which stands in opposition to our rapidly made decisions based on experience, knowledge, and understanding.
The world-altering work on this topic comes from Daniel Kahneman’s seminal research into precisely how the brain makes largely unconscious and rapid assessments through what he described as “System 1.”
Kahneman’s research into decision errors and heuristics destroyed the notion that data or evidence would override our natural predisposition to go with our “gut.”[4] The harsh reality we now face in improving high-risk decisions in ERM is that better dashboard design, richer color palettes, and increased data density have precious little effect on the most important decisions being made in the firm. If we are to improve data-informed, high-risk decision-making in ERM, we need to understand how the human brain makes such decisions and reimagine how we access, present, and interact with data to make data-driven decisions.
On Autopilot
In awareness studies, executives are briefly shown dashboards or single-pane-of-glass (SPOG) displays related to their field of expertise and then asked whether they understood what was displayed and whether they had any questions about the presentation of the data and analytics. What is interesting is not that some executives respond with “no” but that nearly everyone does; in short, everyone believes they have a solid understanding of the data they have briefly seen.
However, when asked detailed questions about what they saw, significant gaps are exposed. Often, decision-makers are unable to recall more than four pieces of information, which is in line with how much the brain can rapidly recall. This points to what neuroscientists have shown – that we are aware of only a small percentage of the world around us, instead relying on our internal model to “fill in the blanks.”
Despite taking in vast amounts of information, we are largely on autopilot as we go through our lives. It is perhaps the single greatest illusion the brain plays on us.
Today’s experiments validate what Yarbus discovered in a series of studies in the 1960s with a painting titled “The Unexpected Visitor.” Yarbus was the first to scientifically demonstrate that we each inspect things (data) differently, and that we do so not from an unbiased view but from predetermined, desired outcomes.
When Yarbus asked questions of participants, most could not recall the details – yet each was supremely confident that they had a good understanding of what they had seen. Diving deeper into precisely why the brain is lazy and resistant to using data in its decision processes, we enter the realm of neuroscience and the attempt to decipher the structure of the brain and how its systems cooperate and, at times, compete.
The neuroscience of risky decision-making reveals that we have difficulty making data-based decisions because of the very structure of our brain.[5] We can dramatically oversimplify and highlight two systems – the limbic system and the neocortex. The limbic system is the part of the brain that controls decision-making. It also governs trust and feelings, but not language. This system makes decisions, it trusts, and it feels, but it cannot express how that trust or feeling influenced the decision.
The neocortex is the portion of the brain responsible for analytic thought and language, but it has no capability for decision-making. Thus, the part of the brain that makes decisions governs emotions but cannot perform analytic functions and cannot articulate how it came to a decision.
The non-scientific but tangible evidence of this has been witnessed by countless data scientists and advocates of data-driven decision-making. After being presented with ample data, metrics, measures, and indicators that run counter to their mental models, decision-makers simply disregard the evidence, with senior officers uttering some permutation of “I hear you, but it just doesn’t feel right – we’re going to go with my gut.”
That is this complex interaction of decision systems in the brain playing itself out. In the final analysis, these systems both compete and cooperate. It is not that we have no capacity to integrate data into our decision processes; it is just complicated, and the brain has a strong propensity toward laziness and toward resisting data that runs contrary to its mental models and beliefs – especially where high degrees of complex reasoning are required.
Bias, Preference, and Noise
It is said we do not have an abundance-of-data problem; we have a sorting, prioritization, and selection problem. It is well acknowledged that executives suffer from information overload and that a person could spend 24 hours a day reading and watching reports on their respective areas of expertise.
It’s here that we must acknowledge and differentiate between bias, preference and noise.
First, it should be noted that scientists cannot separate bias from preference. Second, bias and preference are not bad things – they are the tools we use to navigate an impossibly complex world. That said, bias and preference can lead us to decision errors, such as overconfidence in ourselves, others, and even technology. They can cause us to ignore advancing technology because we “like” one company’s technology over another – even when the data clearly shows that the competitor’s technology outperforms the preferred one.
Perhaps the single greatest challenge to better decision-making in ERM is the noise (in many forms) that must be sifted through. Noise has been acknowledged as a principal impediment to better decision-making in both the scientific and popular literature.[6]
The incorporation of decision science, with its neuroscience and psychological basis, presents a challenge for organizations attempting to become data-driven in ERM. How can we use data science, business intelligence, advanced analytics, and even artificial intelligence (AI) to reimagine user interfaces and dashboards to “wake up” the brain and mitigate our natural tendencies to resist data in decisions which require a high degree of complex reasoning?
It is important to note that our System 1 decision process is powerful and can be highly accurate. The issue, however, is that this system is also prone to error, bias, and our individual heuristic tendencies, and to bypassing relevant information that contradicts its immediate assessment.
An interesting avenue in reimagining next-generation analytics is to focus on decision-makers themselves.
This entails a new form of AI that attempts to understand decision-makers’ tendencies, the data they explore, and how they explore it compared with other individuals in their positions, and that provides recommendations based on that analysis. This can take the form of a simple note informing the decision-maker that they are failing to explore, or spending less time on, a set of metrics or data compared with their peers.
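To make this concrete, here is a minimal sketch of such a peer-comparison nudge, assuming per-metric dwell time can be logged and a peer baseline exists; the metric names, threshold, and function are illustrative assumptions, not part of any existing product.

```python
# Hypothetical sketch: flag metrics a decision-maker explored far less than peers.
# The peer baseline, metric names, and threshold are illustrative assumptions.

PEER_BASELINE_SECONDS = {          # average dwell time per metric for peers in the same role
    "vendor_risk_score": 45.0,
    "patch_latency_days": 30.0,
    "open_audit_findings": 25.0,
}

def attention_nudges(dwell_seconds: dict, threshold: float = 0.5) -> list:
    """Return a note for each metric the user examined well below the peer average."""
    nudges = []
    for metric, peer_avg in PEER_BASELINE_SECONDS.items():
        user_time = dwell_seconds.get(metric, 0.0)
        if peer_avg > 0 and user_time / peer_avg < threshold:
            nudges.append(
                f"You spent {user_time:.0f}s on '{metric}' vs. a peer average of "
                f"{peer_avg:.0f}s - consider revisiting it."
            )
    return nudges

# Example: this user barely looked at patch latency and skipped audit findings entirely.
print(attention_nudges({"vendor_risk_score": 50.0, "patch_latency_days": 4.0}))
```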
Another method of intervention could be temporally adjusting the flow of data and incorporating historical and contextual data. This would be tuned to the cognitive abilities of the decision-maker, the nature of human memory, and their ‘day in the life’ preferences for the speed, timing, and form in which data reaches them. The hypothesized outcome is positive acceptance of the data, as opposed to the decision-maker feeling flooded with data they do not want or are unable to examine.
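A minimal sketch of what such temporal pacing might look like, assuming a per-person delivery profile and a queue of pending metric updates; the field names and cadence values are assumptions for illustration only.

```python
# Hypothetical sketch: release updates in small, paced batches tuned to the
# decision-maker's preferred cadence instead of pushing everything at once.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DeliveryProfile:
    preferred_interval: timedelta = timedelta(hours=4)  # 'day in the life' cadence
    max_items_per_push: int = 4                         # small batches are easier to absorb

@dataclass
class MetricUpdate:
    name: str
    value: float
    observed_at: datetime

def next_push(queue, profile, last_push, now):
    """Return the batch to deliver now, or an empty list if it is too soon."""
    if now - last_push < profile.preferred_interval:
        return []                                       # hold back to avoid flooding
    queue.sort(key=lambda m: m.observed_at)             # oldest first; prioritization could go here
    return queue[: profile.max_items_per_push]
```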
The amount of contextual data is important because it relates directly to the nature and function of memory. Memory is not a precise and incorruptible recording of events stored for future retrieval. Rather, it is a frail structure that must be reconstructed, and that reconstruction is highly flawed and subject to our over-trusting ourselves as “accurate.” Presenting context alongside current data, metrics, and measures, together with the dynamic ability to explore that historical data, can be invaluable in accurately updating the mental model of the decision-maker.
Becoming a data-driven ERM organization has as much to do with understanding memory and enabling interaction with data that complements and compensates for the weaknesses in our memory – i.e., the ability to explore historical events in both structured and hierarchical ways, with a temporal and contextual accuracy and richness that our memory is not designed to provide. This capability would be dynamic and could be adjusted daily to calendar appointments and even to the psychological state of the decision-maker. And all of this can today be quantified algorithmically at speed and scale.
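As one hypothetical illustration of that kind of structured, time-bounded exploration, the sketch below stores risk events with timestamps and a category hierarchy so they can be drilled into by prefix and date range; the event records and category paths are invented for the example.

```python
# Hypothetical sketch: hierarchical, time-bounded exploration of past risk events,
# compensating for memory by keeping an explicit, queryable record.

from datetime import datetime

EVENTS = [
    {"when": datetime(2021, 5, 7),   "path": "cyber/ransomware",          "summary": "Incident A"},
    {"when": datetime(2022, 3, 1),   "path": "supply_chain/geopolitical", "summary": "Disruption B"},
    {"when": datetime(2022, 11, 15), "path": "cyber/ransomware",          "summary": "Incident C"},
]

def explore(path_prefix: str, since: datetime) -> list:
    """Return events under a category prefix since a given date, newest first."""
    hits = [e for e in EVENTS if e["path"].startswith(path_prefix) and e["when"] >= since]
    return sorted(hits, key=lambda e: e["when"], reverse=True)

# Drill down from 'cyber' into 'cyber/ransomware' over a chosen window.
for event in explore("cyber/ransomware", datetime(2021, 1, 1)):
    print(event["when"].date(), event["summary"])
```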
Filtering, Bias, and Preference
It is said we do not have a data-saturation problem; rather, we have a filtering problem. Our current filtering techniques allow too much noise to intermingle with the data critical to the decision-making process. To solve this problem, understanding the individual decision-maker will be key to determining what constitutes “noise.” We are in an excellent position to make headway here, as data-science techniques for dimensionality reduction can be applied: we have techniques that reduce 1,000 dimensions down to 5–7 while retaining the core goodness of the data. We can also observe and understand a ‘day in the life’ of an executive in any position to gain further understanding of what is ‘important’ versus what is ‘noise.’ Preference can likewise be learned by observing behavior and by enabling the decision-maker to determine the types of analytics, information, and data feeds they prefer to examine. All of this lends itself to making headway on the noise challenge, and the technology and expertise are available.
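As a minimal sketch of the dimensionality-reduction point, the example below uses principal component analysis (one common technique; the text does not specify a method) to project 1,000 synthetic risk features down to seven components while reporting how much variance is retained.

```python
# Minimal sketch: reduce ~1,000 risk features to a handful of components with PCA.
# The synthetic low-rank data is purely illustrative.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 7))                   # 7 underlying risk drivers
weights = rng.normal(size=(7, 1000))                 # mapped to 1,000 observed features
X = latent @ weights + 0.1 * rng.normal(size=(500, 1000))

pca = PCA(n_components=7)                            # keep 5-7 dimensions, as in the text
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                               # (500, 7)
print(f"variance retained: {pca.explained_variance_ratio_.sum():.2%}")
```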
There are inherent challenges in filtering information to eliminate noise and in determining bias and preference, but if we are successful the payoff could be significant. If we can gain insight into a decision-maker’s bias and preference, automated interventions can be designed to minimize both and to adjust the paths by which they explore data.
Conclusion
In conclusion, new bodies of knowledge have shed considerable light on how the brain makes risky decisions and have shown that much of the dogma advocated by designers, data scientists, and proponents of evidence-based decision-making simply does not affect the decision process. We stand at a unique point in time where staying “left of boom” is central to operating successfully on a global scale. Next-generation ERM will focus as much on the decision-making process as on the accuracy and precision of the data. As we develop the next generation of risk technologies and methods, we can and should boldly reimagine visual analytics and decision-making in this realm as well.
About RiskOpsAI™
San Diego-based RiskOpsAI™ is a pioneer of AI-driven, integrated risk modeling (IRM). Built by cyber, risk, and compliance veterans, our software-as-a-service platform helps Fortune 2000 organizations discover, measure, prioritize, predict, and optimize cybersecurity, data privacy, and enterprise risks. For more information, contact us at [email protected], visit https://www.optimeyes.ai, and follow us on LinkedIn and Twitter.
[1] “Left of Boom” implies being proactive or ‘in front’ of an impactful event or “boom.” To be right of “boom” is to be reactive.
[2] Strategic decision is the term referring to decisions requiring high degrees of complex reasoning.
[3] Decisions where high degrees of complex reasoning are required.
[4] System 1 refers to Daniel Kahneman’s groundbreaking work and construct. See Thinking, Fast and Slow by Daniel Kahneman.
[5] The following is a vast oversimplification of the interactions of these systems. It is also important to state that it is likely there is far more unknown than known of the interaction of these two systems.
[6] See Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein.