"Stop Trying to Manage Risk" - A GRC Practitioner's Response
Risk management isn’t the problem. Performing risk management is.

A talk came across my feed recently that I couldn't ignore: "Stop Trying to Manage Risk" from Adam Shostack at OWASP Global AppSec 2025. I'm a fan of Shostack's work and even had the privilege of a direct IANS consultation a few years back. But I'm also a GRC practitioner, exactly the audience he's challenging.
So with an open mind (and awareness of my biases), I'll unpack his arguments and offer my take. The goal is to learn in public, in hopes it's useful to others navigating the same questions.
Start with Why
Shostack explains why he was compelled to prepare this talk: students in his threat modeling classes keep gravitating toward risk language even when he deliberately avoids it.
"When I teach people how to threat model, one of the things that happens is they start talking about how do we prioritize the risks, right? That just comes out naturally in every single class I do. Someone will start to talk about risk prioritization even though I carefully don't use those words as I'm teaching."
His core observation is that we've elevated risk to an unquestionable axiom, and it's not delivering what we hope:
"People hope that risk management is going to solve all of their problems. This includes executives, it includes engineers, it includes cyber security people. We gravitate to the idea of risk... and I don't think it works."
His proposed solution draws from how other industries handle safety: prescriptive standards that remove the need for organizations to solve the same problems independently. He points to the FDA's approach to food safety, NHTSA's valuation of human life for highway engineering, and NIOSH's hierarchy of controls for workplace hazards:
"The Food and Drug Administration has a number for the acceptable rat parts in the food that you buy in the supermarket... We don't allow [food companies] to pick their own numbers."
"Why the heck are we telling everyone to solve this problem themselves? It's complicated. Let's make it part of the standards. Let's just pick a number and say here's the number that you should use."
The appeal is clear: if regulators defined acceptable thresholds, practitioners could cite standards rather than arguing with executives. As Shostack puts it: "You can talk to management and they say why that number and you say it's because that's the number that NIST tells us we have to use and they'll grumble and then say okay, moving along."
CISA's Secure by Design initiative follows similar logic, as presented in this excellent Black Hat talk, which uses seatbelt regulation as a model: government intervention cut through corporate inertia and saved lives.
But I'm not convinced this approach scales across the board for information security. And that's really where I want to continue this conversation. Not as a defender of broken practices, but as someone in the "do risk management better" camp rather than the "stop doing it" camp. Let's enumerate the key points.
Product Security vs. Information Security
My first reaction to the talk is that it appears to be more product (or application) security focused, coming from OWASP and drawing on Shostack's Microsoft Security Development Lifecycle (SDL) experience. I think of the following differences in frameworks and standards between product security and information security:
Product Security / Software Assurance
NIST SSDF (SP 800-218): Secure Software Development Framework
OWASP SAMM: Software Assurance Maturity Model
Common Criteria (ISO/IEC 15408) for product security compliance
CISA Secure by Design principles
Focus: building secure software as a software company (or in-house development)
Information Security
NIST Cybersecurity Framework (CSF)
ISO 27001/27002
NIST SP 800-53: Security and Privacy Controls
Focus: protecting organizational operations holistically, including governance, identity, physical security, network architecture, and business continuity
Shostack's software vendor example of "why didn't we release the patch sooner for this known vulnerability?" is product security focused. The Microsoft AutoRun story is a classic product security decision: a product feature that was widely abused, sparking internal debate about breaking expected product behavior versus cutting off a major malware vector.
But when he uses the example of phishing click rates in the information security domain, I think of how Awareness and Training (PR.AT) under the PROTECT function accounts for just a handful of the Cybersecurity Framework's 106 subcategories.
Prescriptive “one size fits all” regulation approaches typically don’t work well across that breadth of domains, which is part of the reason the CSF was developed:
"There is no 'one-size-fits-all' solution to every organization's cybersecurity problems. What is effective and appropriate for one company, might not work at all for a company in a different industry."
Arguments to Stop Trying to Manage Risk
1. Risk is Treated as an Unquestionable Axiom
Risk management has become something we do because it's what security professionals do, not because we've validated that it works. When he asked the OWASP audience whether their risk quantification led directly to a decision, not many hands stayed up. When something becomes an unquestionable axiom, you stop testing whether it's effective.
🌶️ My Take: Agree, but the Answer Is to Manage Risk Properly
This is fair and important. The GRC Engineering Manifesto makes the same observation: "automating and streamlining Legacy GRC practices simply results in producing low-value outcomes faster." Too many organizations perform risk management as a ritual: generating heat maps, maintaining risk registers, checking compliance boxes, without ever changing decisions based on what they find.
Tony Martin-Vegue's "boardroom moment" described on the GRC Engineer podcast illustrates this perfectly.
He presented "the DDoS risk is high, it's red" to a room of C-suite executives who nodded politely and moved on. Meanwhile, the insurance risk person said "$50 million exposure," and the financial risk person said "$300-500 million." Those presentations generated real conversations about trade-offs.
Bob Courtney’s approach to risk analysis at IBM in 1982 was fundamentally practical. Figure out what you stand to lose, estimate how often bad things might happen, use that to decide how much to spend on protection. It was cost-benefit analysis grounded in business outcomes—dollars in, dollars out.
Where that approach goes wrong is when it’s bureaucratized into compliance theater. An elaborate apparatus of artifacts, vocabulary and frameworks is built that lets organizations perform risk management without actually practicing it. The heat map gets generated. The risk register gets updated. The quarterly review happens. Nothing changes.
Shostack's critique lands squarely on this performance. But I don’t think the answer is to abandon Courtney's original insight that security decisions should be economic decisions. The answer is to strip away the theater and get back to the practical question: what do we stand to lose, and what should we do about it?
Richard Seiersen's work shows this is achievable—explicit uncertainty ranges, connection to what the business actually stands to lose, Monte Carlo simulations that model real scenarios. The existence of bad practice doesn't invalidate good practice. It just means we have work to do.
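To make this concrete, here's a minimal sketch (in Python, with entirely hypothetical numbers) of what that style of quantification looks like: event frequency and per-incident loss are expressed as ranges rather than point estimates, and a Monte Carlo simulation turns them into an annual loss distribution. The rate and the 90% confidence interval below are placeholders, not figures from Seiersen or the talk.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated years

# Hypothetical inputs, expressed as ranges rather than point estimates:
annual_rate = 2.0              # expected incidents per year (Poisson rate)
low, high = 50_000, 2_000_000  # 90% confidence interval for loss per incident ($)

# Convert the 90% CI into lognormal parameters (the Hubbard/Seiersen approach)
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / (2 * 1.645)

annual_losses = np.zeros(N)
for year in range(N):
    events = rng.poisson(annual_rate)
    if events:
        annual_losses[year] = rng.lognormal(mu, sigma, events).sum()

print(f"Median annual loss:   ${np.median(annual_losses):>12,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_losses, 95):>12,.0f}")
print(f"P(annual loss > $5M): {np.mean(annual_losses > 5_000_000):.1%}")
```

The point isn't the specific numbers; it's that the output is in dollars and probabilities, which can be compared directly against the insurance limits and capital reserves the business already uses.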
You might counter that strategic decisions are precisely where uncertainty is highest and numbers most likely to be false precision. This is a fair challenge. But Seiersen draws a crucial distinction between accuracy and precision: you can be perfectly wrong with lots of data, or generally correct with thoughtful estimation.

The goal isn't to predict exactly what will happen—that's fortune telling. It's to understand plausible parameters so you can make better decisions than you would with no measurement at all. And critically, risk tolerance already exists in every organization through insurance limits and capital reserves. You're not inventing new uncertainty; you're connecting security decisions to financial frameworks the business already trusts. Early probability estimates will be imperfect, but they improve with practice as predictions get compared to outcomes.
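One lightweight way to do that comparison of predictions to outcomes is a Brier score: the mean squared error between forecast probabilities and what actually happened, where lower is better. A minimal sketch with made-up quarterly forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical quarterly forecasts: probability of at least one material
# phishing-driven incident, and whether one actually occurred (1 = yes).
forecasts = [0.70, 0.20, 0.50, 0.10]
outcomes  = [1,    0,    0,    0]

# Lower is better; always guessing 50% scores 0.25 no matter what happens.
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```

Tracking this over time is one way to show estimators are getting better calibrated rather than just more confident.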
2. "Years of Heat Maps, No Arguments Ever Settled"
Heat maps don't resolve disputes. They just shift the argument to whether something is "medium" or "high."
🌶️ My Take: Strongly Agree, With a Path Forward
The NACD Director's Handbook on Cyber-Risk Oversight makes this exact observation: legacy practices like heat maps "do not allow management and the board to understand the materiality of cyber events."

But heat maps get the conversation started and are useful for rank-ordering operational attention. The problem isn't the matrix itself; it's treating ordinal scores as something they never signed up to be. As Seiersen puts it:
"They're super useful and we must use them in security. We just need to use them for what they signed up for. It's when you use them for what they didn't sign up for that you start getting into trouble."

The "pay me high/medium-high/red hedging on orange" absurdity Seiersen describes exposes the core issue: ordinal values are excellent for prioritizing where to focus attention, but they never committed to being measures of financial impact or probability. When Tony Martin-Vegue said "DDoS risk is high, it's red," nobody could act on it. When the insurance person said "$50 million exposure," decisions happened.
There's a crawl-walk-run progression here: start by using heat maps to rank-order operational attention, and work toward presenting loss exceedance curves.
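For the "run" end of that progression, a loss exceedance curve is just simulated annual losses re-expressed as "the probability that annual loss exceeds x." A minimal sketch, using a stand-in array of simulated losses (in practice you'd feed in Monte Carlo output like the earlier example):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for simulated annual loss totals; in practice, use the output
# of a Monte Carlo model like the sketch earlier in this post.
annual_losses = np.random.default_rng(0).lognormal(mean=13.0, sigma=1.2, size=100_000)

losses = np.sort(annual_losses)
# Fraction of simulated years whose total loss exceeds each value
exceedance = 1.0 - np.arange(1, losses.size + 1) / losses.size

plt.plot(losses, exceedance)
plt.xscale("log")
plt.xlabel("Annual loss ($)")
plt.ylabel("P(annual loss > x)")
plt.title("Loss exceedance curve (simulated)")
plt.show()
```

Overlay the cyber insurance limit on the x-axis and the curve answers a question executives actually ask: how likely are we to exceed our coverage in a given year?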
3. Prescriptive Standards Would Solve Prioritization Arguments
We should standardize answers the way the FDA (Food and Drug Administration) sets acceptable contamination thresholds or the DOT (Department of Transportation) sets the value of a human life. If NIST or a regulator picked a number for "acceptable phishing click rate," practitioners could cite the standard rather than arguing with executives.
🌶️ My Take: A Prescribed Phishing Click Rate Would Make Organizations Worse Off
Physical safety hazards are consistent across contexts. Carbon tetrachloride is dangerous the same way regardless of what company you work for. Seatbelts work the same way in every car. Kip Boyle uses a fire analogy to illustrate the difference between static and dynamic threats in Fire Doesn't Innovate. The FDA can set rat parts per million in peanut butter because the hazard is static and the context is consistent.
But phishing click rates? Consider the problems:
The Wrong Metric Problem
Click rate probably isn't the most important metric unless your organization's click rate is really high. What about report rate: how quickly employees flag suspicious emails so the SOC can contain the blast radius? What about the rate of employees who actually enter credentials on the phishing page?
The Verizon DBIR consistently shows that stolen credentials are involved in roughly 30% of breaches, making "use of stolen credentials" one of the top action varieties year after year. This is why identity and access management features prominently in both the NIST CSF and the CIS Critical Security Controls. A prescribed click rate focuses on the wrong thing.
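To illustrate the metric shift, here's a tiny sketch with hypothetical numbers from a single phishing simulation. All of the rates are cheap to compute; the question is which one you manage to.

```python
# Hypothetical results from one phishing simulation campaign
campaign = {
    "emails_delivered": 1_000,
    "clicked_link": 80,
    "entered_credentials": 25,
    "reported_to_soc": 310,
}

delivered = campaign["emails_delivered"]
click_rate = campaign["clicked_link"] / delivered              # what a prescribed threshold would target
credential_rate = campaign["entered_credentials"] / delivered  # closer to actual breach risk
report_rate = campaign["reported_to_soc"] / delivered          # drives how fast the SOC can respond

print(f"Click rate:      {click_rate:.1%}")
print(f"Credential rate: {credential_rate:.1%}")
print(f"Report rate:     {report_rate:.1%}")
```

A prescribed click-rate ceiling optimizes the first number; the second and third are the ones that bear more directly on breach likelihood and response time.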
The Perverse Incentives Problem
The wrong metric can incentivize perverse behaviors. If we establish a prescriptive phishing threshold, it could create a security culture where employees are afraid to report incidents because they're trying to hit the number, which is the opposite of what we want.
4. Companies Can't Assess Risk to Users; Impact Falls on Others
Platform makers like Microsoft or Amazon can't assess risk to their users because they don't know how products are used. Windows is used by individuals and government departments; Amazon doesn't look inside S3 buckets. The risk falls on someone else, creating an externality problem.
🌶️ My Take: Agree for Platform/Product Security
Sounds like a solid critique of application security risk management from my outsider perspective. When Microsoft shipped Windows, they couldn't predict that this particular AutoRun abuse would cost this particular healthcare organization $10 million. They could only reason about mechanisms ("is this remotely exploitable without authentication?"), not business impact to unknown third parties.
I think the critique applies less to information security. As Richard Seiersen points out, risk tolerance already exists in every organization. It's expressed in insurance limits and capital reserves. The CFO has already defined how much loss the business can absorb. When I'm assessing risk to my organization, I can connect technical issues to business outcomes because I know what we stand to lose. The externality problem applies to vendors building platforms, not to enterprises securing their own operations.
This reinforces the distinction: prescriptive regulation works better for product security (where the vendor can't assess downstream impact) than for information security (where the organization can and should assess its own context).
5. Use Engineering Mechanisms Instead of Risk
Bug bars, CVSS-style criteria, and questions about exploitability and privilege escalation settle arguments faster than debating risk. Microsoft's security bug bars ask about mechanisms (remote, anonymous, escalation of privilege) rather than likelihood or business impact, and CVSS's own documentation says it measures severity, not risk.
🌶️ My Take: Agree for Tactical Decisions, Incomplete for Strategic Ones
For development teams deciding whether to fix a vulnerability in this sprint, engineering mechanisms work great. The decision scope is narrow, the mechanisms are technical, the team can evaluate criteria without business context, and speed matters more than precision.
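As a sketch of what mechanism-based triage looks like, here's a toy bug bar loosely modeled on the idea Shostack describes; the severity labels and criteria are illustrative, not Microsoft's actual bar.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    remotely_exploitable: bool
    requires_authentication: bool
    escalation_of_privilege: bool

def bug_bar_severity(f: Finding) -> str:
    """Assign severity from mechanisms alone; no likelihood, no business impact."""
    if f.remotely_exploitable and not f.requires_authentication:
        return "Critical"   # e.g., unauthenticated remote exploitation: fix now
    if f.escalation_of_privilege:
        return "Important"
    return "Moderate"

print(bug_bar_severity(Finding(True, False, False)))  # Critical
print(bug_bar_severity(Finding(False, True, True)))   # Important
```

Note that there isn't a dollar figure or probability anywhere in it. That's exactly why it's fast for sprint-level triage, and exactly why it can't carry the strategic conversations that follow.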
But engineering mechanisms don't help when the CISO presents to the board to justify a $10M security investment, or when the CFO asks how much cyber insurance to purchase. For strategic decisions, you need to speak the language of business: dollars, probabilities, ROI. The other risk managers in Tony Martin-Vegue's boardroom were talking about $50M insurance exposure and $300-500M financial risk exposure. They weren't using bug bars.
I'm also skeptical this approach alone would hold up in a court of law or public opinion. When regulators investigate a breach or shareholders ask what went wrong, management needs to demonstrate due diligence and due care. Bug bars show you prioritized vulnerabilities; risk management shows you understood the potential business impact and made defensible decisions with that understanding. One is a technical process; the other is governance.
I see these approaches as complementary rather than competing. Use bug bars and CVSS for tactical triage. Use proper risk quantification for strategic investment. The mistake is using either tool for the wrong job.
Bottom Line
After working through these points, I've landed here:
The critique of ritual risk management is valid. Too many organizations generate heat maps that never change decisions, maintain risk registers that document rather than drive action, and perform the vocabulary of risk management without the practice. Calling this out is valuable.
The solution isn't abandonment; it's improvement. The tools exist: explicit uncertainty ranges, arrival rates and burndown rates, connection to business losses and insurance limits. What's missing is organizational will to stop the ritual and start the practice.
The Product Security vs Information Security distinction matters. Prescriptive regulation works better for product security (where vendors can't assess downstream impact) than for enterprise security (where organizations can and should assess their own context).
Engineering mechanisms and risk quantification are complementary, not competing. Use bug bars and CVSS for tactical triage. Use proper risk quantification for strategic investment. Match the tool to the decision.
Connect to existing business metrics. The CFO has already defined risk tolerance through insurance limits and capital reserves. Use those numbers rather than inventing parallel systems.
Risk management isn’t the problem. Performing risk management is.
References
Adam Shostack, "Stop Trying to Manage Risk," OWASP Global AppSec 2025
Tony Martin-Vegue and Ayub Fandi, "Deep-dive on Cyber Risk Quantification and GRC," GRC Engineering Podcast
Douglas Hubbard and Richard Seiersen, "How to Measure Anything in Cybersecurity Risk"
Richard Seiersen, "Risk Yoga: Stretching Strategy Into Measurable Action," ROCon 2025
Kip Boyle, "Fire Doesn't Innovate"
Executive Order 13636, "Improving Critical Infrastructure Cybersecurity," The White House, 2013



