Step 2 — Categorize & prioritize risks

With explicit, quantified risks from Step 1, we now categorize them and prioritize them by risk exposure. The SQuaRE quality models provide a useful taxonomy for this: ISO/IEC 25010 defines the product quality model (quality characteristics of the system itself), and related SQuaRE standards define quality-in-use models (user outcomes in context). We use these characteristics as a consistent vocabulary to describe what “good” looks like, and what could go wrong.

For every product, and at every stage of its lifecycle, the importance of these characteristics will vary. In an early prototype, functional suitability and usability may dominate, while maintainability and portability carry less risk. In production, all characteristics matter, especially reliability, security, and performance efficiency.

The purpose of Step 2 is to decide what matters most. We do that by mapping risks to quality characteristics and prioritizing them by risk exposure (likelihood × impact), which clarifies which risks we must reduce most and which can receive less attention.

Risk exposure uses a context-specific, multi-criteria definition of impact, such as financial, safety, legal, operational, or reputational. In commercial SaaS, financial impact often dominates; in safety-critical or regulated contexts, safety and legal impact may be primary.

Map risks to ISO/IEC 25010 quality characteristics

ISO/IEC 25010 defines eight product quality characteristics:

  1. Functional suitability — Does the system provide the functions users need, correctly and completely?
  2. Performance efficiency — Does the system meet performance expectations under stated conditions (time, throughput, resource use)?
  3. Compatibility (including interoperability) — Can the system operate and exchange information with other systems in its environment?
  4. Usability — Can intended users achieve their goals effectively, efficiently, and satisfactorily?
  5. Reliability — Does the system operate consistently, and can it recover when failures occur?
  6. Security — Does the system protect information and resist threats (confidentiality, integrity, authenticity, accountability)?
  7. Maintainability — Can the system be analyzed, modified, and tested efficiently as it evolves?
  8. Portability — Can the system be transferred and adapted to different environments?

Additionally, ISO/IEC 25010 includes a quality-in-use model, which describes how well the system enables users to achieve their goals in real usage contexts.

Product quality vs. quality-in-use:

In the SQuaRE family, product quality describes properties of the software/system (what it does and how well it is built), while quality-in-use describes real user outcomes in context (how effectively, efficiently, and satisfactorily users achieve goals, and how risk is experienced in real use).

Good test strategies connect both layers: testing and review activities produce evidence about product quality, while quality-in-use is validated through real usage signals (telemetry, support data, user research, and feedback). Product-quality evidence should predict quality-in-use outcomes, and those predictions should be checked against real-world signals.

How to measure quality-in-use:

Quality-in-use focuses on real-world user outcomes, not just system capabilities. When mapping risks to quality-in-use, define how you will measure user effectiveness, efficiency, and satisfaction: for example, task completion rates (effectiveness), time on task (efficiency), and satisfaction scores from surveys or in-product feedback (satisfaction).
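As an illustrative sketch (the session data and metric definitions here are hypothetical, not prescribed by the standard), such measures can be computed from session telemetry:

```python
from statistics import mean

# Hypothetical session records: (task_completed, seconds_on_task, satisfaction_1_to_5)
sessions = [
    (True, 42.0, 4),
    (False, 95.0, 2),
    (True, 38.5, 5),
    (True, 51.0, 4),
]

# Effectiveness: share of sessions in which the user achieved the goal
effectiveness = sum(1 for done, _, _ in sessions if done) / len(sessions)

# Efficiency: mean time on task, counting successful sessions only
efficiency = mean(t for done, t, _ in sessions if done)

# Satisfaction: mean self-reported score
satisfaction = mean(s for _, _, s in sessions)

print(f"effectiveness={effectiveness:.2f}, "
      f"efficiency={efficiency:.1f}s, satisfaction={satisfaction:.2f}")
```

Whatever metrics you choose, define them before testing begins so that quality-in-use validation in later steps compares against agreed baselines rather than post-hoc interpretations.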

How to map risks:

For each explicit risk statement from Step 1, identify which ISO/IEC 25010 quality characteristic(s) would be violated if the risk materializes. A single risk can map to multiple characteristics (and often should, if the impact spans multiple dimensions).
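One lightweight way to keep this mapping explicit and queryable is a simple table in code (the risk names here are abbreviated, illustrative stand-ins for full risk statements):

```python
# Map each risk statement (abbreviated) to the ISO/IEC 25010 characteristics
# it threatens. A risk may map to several characteristics when its impact
# spans multiple dimensions.
risk_to_characteristics = {
    "payment regression fails silently": ["Functional suitability", "Reliability"],
    "checkout slow under peak load": ["Performance efficiency"],
    "third-party API contract change": ["Compatibility/Interoperability", "Reliability"],
}

# Invert the mapping to see which risks put pressure on each characteristic —
# useful when deciding where testing effort should concentrate.
characteristic_to_risks: dict[str, list[str]] = {}
for risk, chars in risk_to_characteristics.items():
    for c in chars:
        characteristic_to_risks.setdefault(c, []).append(risk)

print(characteristic_to_risks["Reliability"])
```

The inverted view makes it easy to spot characteristics that accumulate risk from several sources, which often signals where a single testing investment pays off across multiple risks.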

Mapping heuristics: start from the consequence clause of the risk statement (“resulting in …”) and ask which quality characteristic, if upheld, would have prevented that outcome; when the consequence spans several dimensions, map the risk to each affected characteristic.

Examples:

| Risk Statement | Quality Characteristic(s) |
| --- | --- |
| “If the system experiences peak load during checkout, then response times may exceed 2 seconds, resulting in customer abandonment and lost revenue” | Performance efficiency |
| “If a regression defect is introduced in the payment module, then transactions may fail silently, resulting in lost revenue and customer trust” | Functional suitability, Reliability |
| “If the third-party API changes its contract without notice, then our integration may fail, resulting in service disruption” | Compatibility/Interoperability, Reliability |
| “If user input is not properly sanitized, then SQL injection attacks may occur, resulting in data breach and regulatory penalties” | Security, Functional suitability |
| “If the codebase becomes too complex, then new features take 3x longer to implement, resulting in delayed releases” | Maintainability |
| “If the UI is not accessible, then users with disabilities cannot complete purchases, resulting in legal compliance issues and lost revenue” | Usability, Quality-in-use |
| “If users cannot successfully complete checkout, then purchase abandonment increases, resulting in lost revenue and poor user experience” | Quality-in-use, Functional suitability |

Common pitfalls:

Prioritize risks by risk exposure (likelihood × impact)

Once risks are mapped to quality characteristics, prioritize them by risk exposure = likelihood × impact. Define impact using the criteria that matter in your context (financial, safety, legal, operational, reputational). If multiple criteria matter, combine them with explicit weights that reflect your organization’s priorities and risk tolerance.

This reduces arbitrary prioritization by making trade-offs visible (for example, avoiding a focus on rare catastrophes at the expense of frequent, moderate losses, or the opposite).

Multi-criteria impact: Impact should reflect your context. In commercial SaaS, financial impact often dominates. In safety-critical domains (aviation, medical devices), safety impact dominates. In regulated industries, legal and regulatory impact may dominate. In platform and reliability contexts, operational and reputational impacts can be non-linear (for example, trust effects after repeated incidents). Choose impact criteria that match your organization’s risk tolerance and stakeholder priorities.

Engineering impact as adjustment: Treat engineering impact (speed/effort) as an adjustment factor that reflects delivery constraints (or a tie-breaker when exposures are similar), not as the primary driver of risk exposure.

Risk exposure calculation:

Calculate risk exposure as likelihood × impact, where impact is multi-criteria based on your context:

Impact score (0-10): Choose the impact criteria relevant to your context (financial, safety, legal, operational, reputational) and score the credible worst-case impact of the risk against those criteria.

Likelihood score (0-10): Based on the probability that the risk will materialize.

Risk exposure = Likelihood × Impact

In multi-criteria contexts, define Weighted impact first, then compute exposure:

Weighted impact = Σ (weightᵢ × impactᵢ), with the weights summing to 1
Risk exposure = Likelihood × Weighted impact

Engineering impact (speed/effort) as adjustment: Use engineering impact from Step 1 as a separate adjustment factor (or tie-breaker) when exposures are similar. It reflects delivery constraints and is not part of the risk exposure formula.

Examples:
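A minimal sketch of the exposure calculation, with purely illustrative weights and scores (your own criteria and weights come from Step 1 and your organizational context):

```python
def weighted_impact(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion impact scores (0-10) using weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

def risk_exposure(likelihood: float, impact: float) -> float:
    """Risk exposure = likelihood (0-10) x impact (0-10)."""
    return likelihood * impact

# Illustrative multi-criteria scores for one risk in a commercial context
weights = {"financial": 0.6, "legal": 0.3, "reputational": 0.1}
scores = {"financial": 8, "legal": 4, "reputational": 6}

impact = weighted_impact(scores, weights)  # 0.6*8 + 0.3*4 + 0.1*6 = 6.6
print(risk_exposure(likelihood=7, impact=impact))
```

Keeping the weights explicit in code (or a shared spreadsheet) makes the trade-offs auditable: anyone can see why one risk outranked another, and the weights can be revisited in Step 4 against real outcomes.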

Scoring is a decision aid, not a measurement of reality: Avoid false precision. The goal is consistent ranking and a traceable rationale, not “7.2 vs 7.4” debates. Risk exposure helps compare risks, but small score differences are rarely meaningful. Recalibrate your scoring in Step 4 using real outcomes so estimates improve over time.

Lifecycle adjustment:

Consider the lifecycle stage when prioritizing risks: in an early prototype, functional suitability and usability risks typically carry the most weight, while in production, reliability, security, and performance efficiency risks gain priority.

Prioritization example (commercial SaaS context):

| Risk | Financial Impact | Impact Score | Likelihood | Likelihood Score | Risk Exposure (Likelihood × Impact) | Quality Characteristic |
| --- | --- | --- | --- | --- | --- | --- |
| Payment module regression | $50K/day | 7 | High (frequent changes) | 8 | 56 | Functional suitability, Reliability |
| Performance degradation | $10K/hour | 9 | Medium (under load) | 5 | 45 | Performance efficiency |
| Security vulnerability | $2M fines + $500K churn | 10 | Low (rare attack) | 2 | 20 | Security |
| Code complexity | $200K contract penalty | 8 | High (ongoing issue) | 9 | 72 | Maintainability |

Note: Risk exposure = likelihood × impact. In this commercial context, financial impact is used. Engineering impact (speed/effort) can be used as an adjustment factor when risk exposures are similar.
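Using the impact and likelihood scores from the example table, the ranking can be reproduced in a few lines:

```python
# (risk, impact_score, likelihood_score) taken from the example table
risks = [
    ("Payment module regression", 7, 8),
    ("Performance degradation", 9, 5),
    ("Security vulnerability", 10, 2),
    ("Code complexity", 8, 9),
]

# Risk exposure = likelihood x impact; sort highest exposure first
ranked = sorted(
    ((name, impact * likelihood) for name, impact, likelihood in risks),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, exposure in ranked:
    print(f"{name}: {exposure}")
```

Note how code complexity, despite a lower impact score than the security vulnerability, ranks first because its likelihood is so much higher; this is exactly the kind of trade-off the exposure calculation is meant to surface.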

Common pitfalls:

Risk acceptance and thresholds

Not all risks can be eliminated, and some may remain high even after investment in controls and testing. In many standards-driven or regulated contexts, you need explicit risk acceptance criteria and documented decisions for any risk you choose to accept.

Define risk acceptance thresholds:

Establish thresholds that define when a risk is acceptable as-is (with monitoring), requires mitigation through testing and controls, or is unacceptable and must be escalated for formal acceptance.

Thresholds should reflect your organization’s risk tolerance and context. Safety-critical and regulated systems typically use lower thresholds than commercial SaaS, and may require formal sign-off and documented justification even at moderate exposure.
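As a sketch, with made-up threshold values (set your own based on your risk tolerance and regulatory context):

```python
def classify(exposure: float,
             accept_below: float = 20,
             escalate_at: float = 60) -> str:
    """Classify a risk exposure score (0-100 on a 0-10 x 0-10 scale)
    against illustrative acceptance thresholds."""
    if exposure < accept_below:
        return "acceptable (document and monitor)"
    if exposure < escalate_at:
        return "mitigate (testing and controls required)"
    return "unacceptable (escalate for formal sign-off)"

print(classify(12))
print(classify(45))
print(classify(72))
```

In a safety-critical or regulated setting, the same structure applies but with lower thresholds and a mandatory documentation step even for risks in the middle band.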

Document risk acceptance decisions:

For any risk that remains above your acceptance threshold after investment in controls and testing, document the acceptance decision: the rationale, the accountable owner who signed off, any compensating controls or monitoring, and a review date.

This ensures that “risk-based testing” means deliberate, traceable decisions rather than unmanaged uncertainty.

Output of Step 2

By the end of Step 2, you should have:

  - Each risk from Step 1 mapped to one or more ISO/IEC 25010 quality characteristics
  - A risk exposure score (likelihood × impact) for each risk, using context-appropriate impact criteria
  - A prioritized risk list ordered by exposure, adjusted for lifecycle stage
  - Defined acceptance thresholds and documented decisions for any risks you choose to accept

Connection to Step 3: With prioritized risks mapped to quality characteristics, you can now select testing types aligned to those characteristics, choose appropriate test levels, balance static and dynamic work, set test design techniques and coverage targets, and choose test practices that deliver evidence efficiently.