Cyber Risk Guy

IDENTIFY: Improvement (ID.IM)

Building continuous improvement processes for cybersecurity risk identification and management in startup environments.

Author
David McDonald
Read Time
14 min
Published
August 8, 2025
Updated
August 8, 2025

Learning Objectives

By the end of this lesson, you will be able to:

  • Establish continuous improvement processes for cybersecurity risk identification and management
  • Learn effectively from security incidents, near-misses, and assessment findings
  • Build feedback loops that systematically enhance risk management capabilities
  • Create learning cultures that drive security improvement and innovation
  • Measure improvement progress and demonstrate risk management maturity growth

Introduction: Learning to Learn

The biggest difference between startups that excel at cybersecurity and those that struggle isn’t the initial security posture—it’s how quickly they learn and improve. Every incident, every assessment, every customer security questionnaire is an opportunity to get better. But only if you have processes to capture those lessons and turn them into improvements.

Most improvement frameworks were designed for large organizations with dedicated quality teams and formal process improvement methodologies. Startups need something more practical: lightweight processes that capture learning without bureaucracy and drive real improvements without disrupting business operations.

This lesson shows you how to build continuous improvement into your risk identification and management processes, creating a learning organization that gets better at security with every passing month.

Understanding ID.IM: Improvement

NIST CSF 2.0 ID.IM Outcomes

ID.IM-01: Improvements are identified from evaluations

ID.IM-02: Improvements are identified from security tests and exercises, including those done in coordination with suppliers and relevant third parties

ID.IM-03: Improvements are identified from execution of operational processes, procedures, and activities

ID.IM-04: Incident response plans and other cybersecurity plans that affect operations are established, communicated, maintained, and improved

Startup Improvement Philosophy

Fail Fast, Learn Faster:

  • Treat failures as learning opportunities, not blame events
  • Focus on systematic causes rather than individual mistakes
  • Share lessons broadly to prevent recurrence
  • Celebrate learning and improvement, not just success

Pragmatic Over Perfect:

  • Small improvements consistently beat big initiatives occasionally
  • Focus on improvements that deliver immediate value
  • Build improvement into existing processes, don’t create new ones
  • Measure improvement impact, not just activity

Culture Over Process:

  • Create psychological safety for reporting issues
  • Reward problem identification and solution development
  • Make improvement everyone’s responsibility
  • Lead by example with leadership engagement in improvement

Learning from Security Events

Incident Learning Framework

Types of Learning Events:

  • Security Incidents: Actual breaches, compromises, or violations
  • Near Misses: Events that could have become incidents but didn’t
  • Security Findings: Vulnerabilities, audit findings, assessment results
  • External Events: Industry incidents, peer experiences, threat intelligence
  • Operational Events: System failures, process breakdowns, human errors

Post-Incident Review Process

Immediate Phase (Within 48 hours):

  1. Incident Stabilization: Ensure incident is contained and resolved
  2. Evidence Preservation: Collect logs, screenshots, and documentation
  3. Timeline Construction: Document sequence of events and decisions
  4. Initial Impact Assessment: Understand scope and business impact

Analysis Phase (Within 1 week):

  1. Root Cause Analysis: Identify systematic causes beyond immediate triggers
  2. Control Gap Analysis: Understand what controls failed or were missing
  3. Response Evaluation: Assess effectiveness of incident response
  4. Improvement Identification: Define specific improvements needed

Implementation Phase (Within 30 days):

  1. Improvement Planning: Develop specific action plans with owners
  2. Quick Wins: Implement immediate improvements and fixes
  3. Systematic Changes: Plan longer-term process and control improvements
  4. Communication: Share lessons learned across organization
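
The three phases above run on fixed clocks (48 hours, 1 week, 30 days). As a minimal sketch, assuming you track the time an incident was contained, the phase deadlines can be derived mechanically; the function and phase names here are illustrative, not part of the lesson:

```python
from datetime import datetime, timedelta

# Review-phase offsets from the text: immediate (48h), analysis (1 week),
# implementation (30 days). Keys are hypothetical labels.
PHASES = {
    "immediate_review": timedelta(hours=48),
    "analysis_complete": timedelta(weeks=1),
    "improvements_implemented": timedelta(days=30),
}

def review_deadlines(contained_at: datetime) -> dict:
    """Return a due date for each post-incident review phase."""
    return {phase: contained_at + offset for phase, offset in PHASES.items()}

deadlines = review_deadlines(datetime(2025, 8, 1, 9, 0))
print(deadlines["immediate_review"])  # 2025-08-03 09:00:00
```

Feeding these dates into whatever task tracker you already use keeps the review from slipping without adding a new process.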

Blameless Post-Mortems

Psychological Safety Principles:

  • Focus on systems and processes, not individuals
  • Assume positive intent and good faith efforts
  • Recognize that humans make mistakes in complex systems
  • Create safe environments for honest discussion

Post-Mortem Meeting Structure:

## Post-Incident Review Meeting Agenda

### Meeting Guidelines (5 minutes)
- This is a blameless review focused on learning
- We're examining systems, not blaming individuals
- Everyone's perspective is valuable and welcome
- Goal is improvement, not punishment

### Incident Overview (10 minutes)
- What happened and when?
- What was the business impact?
- How was it discovered and resolved?

### Timeline Review (15 minutes)
- Walk through the incident timeline
- Identify decision points and actions
- Note what went well and what didn't

### Root Cause Analysis (20 minutes)
- What were the contributing factors?
- What controls failed or were missing?
- What systematic issues were revealed?

### Improvement Planning (20 minutes)
- What specific improvements will prevent recurrence?
- Who owns each improvement action?
- What are the timelines and success criteria?

### Lessons Learned (10 minutes)
- What did we learn about our systems?
- What did we learn about our processes?
- How will we share these lessons broadly?

Near-Miss Learning

Why Near-Misses Matter:

  • Provide learning without actual damage
  • Often reveal systematic vulnerabilities
  • Occur more frequently than actual incidents
  • Create teaching moments without crisis pressure

Near-Miss Reporting System:

  • Anonymous reporting options to encourage disclosure
  • Simple reporting forms focusing on facts, not fault
  • Regular review and pattern analysis
  • Recognition for near-miss reporting

Near-Miss Analysis Process:

  1. Scenario Analysis: What could have happened?
  2. Barrier Analysis: What prevented the incident?
  3. Vulnerability Assessment: What weaknesses were exposed?
  4. Improvement Opportunity: How can we strengthen defenses?

Building Feedback Loops

Internal Feedback Mechanisms

Security Metrics and KPIs:

  • Track improvement trends over time
  • Identify areas needing additional focus
  • Measure effectiveness of improvements
  • Demonstrate progress to stakeholders

Key Improvement Metrics:

  • Time to Detect: Average time to identify security issues
  • Time to Resolve: Average time to fix identified issues
  • Repeat Rate: Frequency of similar issues recurring
  • Implementation Rate: Percentage of improvements completed
  • Effectiveness Rate: Impact of improvements on risk reduction
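
Two of these metrics, time to resolve and repeat rate, fall out of a simple issue log. A minimal sketch, assuming issue records shaped as (detected, resolved, fingerprint) tuples; the field names and sample data are illustrative:

```python
from datetime import date
from statistics import mean

# Hypothetical issue log: (detected, resolved, recurrence fingerprint).
issues = [
    (date(2025, 1, 3), date(2025, 1, 5), "phishing"),
    (date(2025, 2, 10), date(2025, 2, 12), "phishing"),
    (date(2025, 3, 1), date(2025, 3, 8), "s3-misconfig"),
]

# Time to Resolve: average days from detection to fix.
time_to_resolve = mean((resolved - detected).days for detected, resolved, _ in issues)

# Repeat Rate: share of distinct issue types that occurred more than once.
fingerprints = [fp for _, _, fp in issues]
repeats = sum(fingerprints.count(fp) > 1 for fp in set(fingerprints))
repeat_rate = repeats / len(set(fingerprints))

print(f"Avg time to resolve: {time_to_resolve:.1f} days")  # 3.7 days
print(f"Repeat rate: {repeat_rate:.0%}")  # 50%
```

Trends in these two numbers over a few quarters say more about improvement than any single snapshot.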

Employee Feedback Channels:

  • Security suggestion box (anonymous option)
  • Regular security surveys and assessments
  • Team retrospectives with security components
  • Skip-level meetings for security feedback

Process Feedback Loops:

```mermaid
graph LR
    A[Risk Identification] --> B[Risk Assessment]
    B --> C[Risk Treatment]
    C --> D[Implementation]
    D --> E[Monitoring]
    E --> F[Measurement]
    F --> G[Improvement]
    G --> A
```

External Feedback Integration

Customer Security Feedback:

  • Security questionnaire responses and concerns
  • Customer incident reports and complaints
  • Security audit findings and recommendations
  • Customer satisfaction surveys with security components

Auditor and Assessor Feedback:

  • Compliance audit findings and observations
  • Penetration testing results and recommendations
  • Security assessment reports and maturity ratings
  • Certification body feedback and requirements

Industry and Peer Learning:

  • Industry association security benchmarks
  • Peer company security practices and lessons
  • Security conference learnings and insights
  • Threat intelligence and industry incidents

Vendor and Partner Feedback:

  • Security tool effectiveness assessments
  • Managed service provider recommendations
  • Technology partner security insights
  • Professional service consultant observations

Testing and Exercise Programs

Security Testing Framework

Types of Security Testing:

Technical Testing:

  • Vulnerability Scanning: Automated identification of technical vulnerabilities
  • Penetration Testing: Simulated attacks to identify exploitable vulnerabilities
  • Security Code Review: Manual and automated code security analysis
  • Configuration Review: Security configuration and hardening assessment

Process Testing:

  • Tabletop Exercises: Discussion-based scenario walkthroughs
  • Functional Exercises: Hands-on simulation of security processes
  • Full-Scale Exercises: Comprehensive incident response simulations
  • Red Team Exercises: Adversarial testing of security controls

Compliance Testing:

  • Control Testing: Verification of security control effectiveness
  • Policy Compliance: Assessment of policy adherence and enforcement
  • Regulatory Testing: Validation of regulatory requirement compliance
  • Audit Preparation: Pre-audit testing and gap identification

Testing Program Development

Startup Testing Maturity Levels:

Level 1: Basic (0-10 employees)

  • Quarterly vulnerability scans using free tools
  • Annual third-party security assessment
  • Informal incident response discussions
  • Basic phishing simulation tests

Level 2: Developing (10-25 employees)

  • Monthly automated vulnerability scanning
  • Semi-annual penetration testing
  • Quarterly tabletop exercises
  • Regular phishing and security awareness testing

Level 3: Mature (25-50 employees)

  • Continuous vulnerability management
  • Quarterly penetration testing with rotating scope
  • Monthly security exercises and simulations
  • Comprehensive security testing program

Level 4: Advanced (50+ employees)

  • Real-time vulnerability identification and management
  • Continuous penetration testing and red teaming
  • Regular full-scale incident response exercises
  • Advanced security testing with threat emulation

Exercise Planning and Execution

Tabletop Exercise Template:

## Tabletop Exercise: [Scenario Name]

### Exercise Objectives
- Test incident response procedures
- Evaluate team communication and coordination
- Identify gaps in processes and controls
- Build team confidence and capability

### Scenario Description
[Realistic scenario relevant to your business and threat landscape]

### Inject Timeline
1. **Initial Detection:** [How the incident is discovered]
2. **Initial Assessment:** [What the team learns first]
3. **Complication:** [Additional challenge or escalation]
4. **Resolution Options:** [Possible response paths]

### Discussion Questions
- How would we detect this in reality?
- Who needs to be notified and when?
- What are our response priorities?
- What tools and resources do we need?
- How do we communicate with stakeholders?

### Exercise Evaluation
- Response time and decision quality
- Communication effectiveness
- Process adherence and gaps
- Resource requirements and availability
- Improvement opportunities identified

Improvement Implementation

Improvement Prioritization Framework

Impact vs. Effort Matrix:

| Impact / Effort | Low Effort | Medium Effort | High Effort |
|-----------------|------------|---------------|-------------|
| High Impact | Quick Wins (Do First) | Strategic Improvements (Plan) | Major Initiatives (Evaluate) |
| Medium Impact | Easy Improvements (Do Soon) | Balanced Improvements (Schedule) | Question Value (Reconsider) |
| Low Impact | Nice to Have (If Time) | Low Priority (Defer) | Not Worth It (Skip) |
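
The matrix is easy to encode so that triage is consistent from one review to the next. A minimal sketch; the labels mirror the matrix, while the function name and lowercase keys are assumptions for illustration:

```python
# Impact/effort triage table, keyed by (impact, effort).
MATRIX = {
    ("high", "low"): "Quick Wins (Do First)",
    ("high", "medium"): "Strategic Improvements (Plan)",
    ("high", "high"): "Major Initiatives (Evaluate)",
    ("medium", "low"): "Easy Improvements (Do Soon)",
    ("medium", "medium"): "Balanced Improvements (Schedule)",
    ("medium", "high"): "Question Value (Reconsider)",
    ("low", "low"): "Nice to Have (If Time)",
    ("low", "medium"): "Low Priority (Defer)",
    ("low", "high"): "Not Worth It (Skip)",
}

def triage(impact: str, effort: str) -> str:
    """Map an impact/effort rating onto a priority bucket."""
    return MATRIX[(impact.lower(), effort.lower())]

print(triage("High", "Low"))  # Quick Wins (Do First)
```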

Prioritization Criteria:

  • Risk Reduction: How much does this reduce our risk exposure?
  • Cost Effectiveness: What’s the ROI of this improvement?
  • Implementation Feasibility: How difficult is this to implement?
  • Business Alignment: How well does this support business goals?
  • Regulatory Requirements: Is this required for compliance?

Improvement Tracking and Management

Improvement Register Template:

## Security Improvement Register

| ID | Improvement | Source | Priority | Owner | Due Date | Status | Impact |
|----|-------------|--------|----------|-------|----------|--------|--------|
| IMP-001 | Implement MFA | Incident #23 | High | IT Manager | 2024-03-01 | In Progress | High |
| IMP-002 | Security training | Audit finding | Medium | HR Lead | 2024-04-15 | Planned | Medium |
| IMP-003 | Update incident response | Exercise | High | Security | 2024-02-15 | Complete | High |

### Improvement Metrics
- Total improvements identified: ___
- Improvements completed: ___
- Completion rate: ___%
- Average time to implement: ___ days
- Risk reduction achieved: ___%
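
The metrics at the bottom of the register can be computed rather than filled in by hand. A minimal sketch, assuming register rows shaped like the table above; the dictionary field names are illustrative:

```python
# Hypothetical register rows; "days_to_implement" is known only once complete.
register = [
    {"id": "IMP-001", "status": "In Progress", "days_to_implement": None},
    {"id": "IMP-002", "status": "Planned", "days_to_implement": None},
    {"id": "IMP-003", "status": "Complete", "days_to_implement": 21},
]

total = len(register)
done = [r for r in register if r["status"] == "Complete"]
completion_rate = len(done) / total
avg_days = sum(r["days_to_implement"] for r in done) / len(done)

print(f"Total improvements identified: {total}")
print(f"Completion rate: {completion_rate:.0%}")  # 33%
print(f"Avg time to implement: {avg_days:.0f} days")  # 21 days
```

Even a spreadsheet version of this calculation beats updating the figures manually, because stale metrics quietly undermine trust in the register.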

Change Management for Improvements

Communication Strategy:

  • Explain why improvements are needed (the problem)
  • Describe what will change (the solution)
  • Clarify how it affects different teams (the impact)
  • Provide support for the transition (the help)

Implementation Approach:

  • Pilot Testing: Test improvements with small groups first
  • Phased Rollout: Implement gradually across organization
  • Training and Support: Provide necessary education and resources
  • Feedback Collection: Gather input during implementation
  • Adjustment and Refinement: Modify based on feedback

Building a Learning Culture

Cultural Elements for Continuous Improvement

Leadership Commitment:

  • Leaders actively participate in improvement activities
  • Improvement efforts are resourced and supported
  • Learning from failures is celebrated, not punished
  • Security improvements are regular agenda items

Team Engagement:

  • Everyone is encouraged to identify improvements
  • Security champions drive improvement in their areas
  • Cross-functional collaboration on improvements
  • Regular recognition for improvement contributions

Systematic Learning:

  • Structured processes for capturing lessons
  • Regular reviews of improvement effectiveness
  • Knowledge sharing across teams and departments
  • External learning from industry and peers

Knowledge Management

Security Knowledge Repository:

  • Incident reports and lessons learned
  • Security procedures and playbooks
  • Training materials and resources
  • Best practices and guidelines
  • Threat intelligence and research

Knowledge Sharing Mechanisms:

  • Security lunch-and-learn sessions
  • Internal security newsletters
  • Team presentations on security topics
  • Security tip of the week
  • Peer mentoring and knowledge transfer

Innovation and Experimentation

Security Innovation Framework:

  • Dedicated time for security research and experimentation
  • Budget for trying new security tools and approaches
  • Safe environment for testing security ideas
  • Sharing of successful innovations across organization

Experimentation Process:

  1. Hypothesis: Define what you want to test and why
  2. Design: Plan small-scale experiment with success criteria
  3. Execute: Run experiment with careful monitoring
  4. Measure: Collect data on effectiveness and impact
  5. Learn: Extract lessons and decide on broader implementation

Hands-On Exercise: Build Your Improvement Program

Step 1: Current State Assessment

Learning Mechanisms:

  • Do you conduct post-incident reviews? [Yes/No]
  • Do you track security improvements? [Yes/No]
  • Do you measure improvement effectiveness? [Yes/No]
  • Do you share lessons learned broadly? [Yes/No]

Improvement Culture: Rate your organization (1-5 scale):

  • Leadership support for improvement: ___
  • Team engagement in improvement: ___
  • Safety to report problems: ___
  • Learning from failures: ___

Step 2: Improvement Framework Design

Post-Incident Process:

  • Who leads post-incident reviews? ___________
  • When are reviews conducted? ___________
  • How are lessons documented? ___________
  • How are improvements tracked? ___________

Feedback Channels: Select which channels you’ll implement:

  • Employee suggestion system
  • Regular security surveys
  • Team retrospectives
  • Customer feedback integration
  • Auditor recommendation tracking

Testing Program: Define your testing schedule:

  • Vulnerability scanning: ___________
  • Penetration testing: ___________
  • Tabletop exercises: ___________
  • Security awareness testing: ___________

Step 3: Implementation Plan

Quick Wins (Next 30 days):

  1. ___________
  2. ___________
  3. ___________

Medium-term Goals (Next 90 days):

  1. ___________
  2. ___________
  3. ___________

Long-term Objectives (Next year):

  1. ___________
  2. ___________
  3. ___________

Step 4: Success Metrics

Improvement Metrics:

  • Number of improvements implemented: ___
  • Time to implement improvements: ___
  • Risk reduction from improvements: ___
  • Repeat incident rate: ___

Cultural Metrics:

  • Security suggestion submissions: ___
  • Post-incident review participation: ___
  • Improvement completion rate: ___
  • Team satisfaction with learning: ___

Real-World Example: SaaS Startup Improvement Journey

Company: 38-employee project management SaaS platform
Challenge: Frequent security issues, slow improvement, blame culture

Initial State (Month 0):

  • Security incidents: 3-4 per month
  • No formal post-incident reviews
  • Improvements ad-hoc and rarely completed
  • Team afraid to report security issues

Phase 1: Foundation (Months 1-3)

Actions Taken:

  • Implemented blameless post-mortem process
  • Created simple improvement tracking spreadsheet
  • Started monthly security improvement meetings
  • Launched anonymous security suggestion box

Early Results:

  • First post-mortem revealed 8 improvement opportunities
  • 50% increase in security issue reporting
  • Completed 5 quick-win improvements
  • Team engagement starting to improve

Phase 2: Systematization (Months 4-9)

Actions Taken:

  • Automated vulnerability scanning and tracking
  • Quarterly tabletop exercises started
  • Customer security feedback integration
  • Security champion program launched

Improvement Projects:

  1. Automated security testing in CI/CD pipeline
  2. Centralized logging and monitoring
  3. Incident response playbook development
  4. Security awareness training program

Results:

  • Security incidents reduced to 1-2 per month
  • 85% of improvements completed on schedule
  • Customer security satisfaction: 4.2/5.0
  • Team confidence in security improved significantly

Phase 3: Optimization (Months 10-18)

Advanced Improvements:

  • Machine learning for anomaly detection
  • Automated incident response workflows
  • Continuous security testing program
  • Peer learning network established

Cultural Transformation:

  • Security suggestions: 3-5 per month from team
  • 100% participation in security exercises
  • Security improvements celebrated publicly
  • Proactive security risk identification

Final Outcomes:

Quantitative Results:

  • 75% reduction in security incidents
  • 90% faster incident resolution
  • 95% improvement implementation rate
  • $300,000 in avoided incident costs

Qualitative Results:

  • Strong security learning culture established
  • Team actively engaged in security improvement
  • Customer trust and confidence increased
  • Security as competitive differentiator

Key Success Factors:

  • Blameless culture encouraged honest learning
  • Simple processes that didn’t burden teams
  • Quick wins built momentum and engagement
  • Leadership commitment to improvement
  • Measurement demonstrated value of improvements

Common Improvement Challenges

Challenge: “We Don’t Have Time for Improvement”

Solution:

  • Build improvement into existing processes
  • Focus on quick wins that save time later
  • Automate improvement tracking and reporting
  • Show ROI of improvements in time saved

Challenge: “People Don’t Report Problems”

Solution:

  • Create psychological safety through blameless reviews
  • Reward problem identification and reporting
  • Share how reported issues led to improvements
  • Lead by example with leadership transparency

Challenge: “Improvements Don’t Stick”

Solution:

  • Build improvements into tools and automation
  • Provide adequate training and support
  • Monitor improvement effectiveness
  • Adjust based on feedback and results

Challenge: “We Can’t Measure Improvement”

Solution:

  • Start with simple metrics like incident frequency
  • Track completion of improvement actions
  • Use before/after comparisons
  • Focus on trends rather than absolute numbers

Key Takeaways

  1. Learning Culture Beats Perfect Process: Focus on creating safety and engagement for continuous learning
  2. Every Event is a Learning Opportunity: Incidents, near-misses, and assessments all provide valuable lessons
  3. Small Improvements Compound: Consistent small improvements outperform occasional large initiatives
  4. Measurement Drives Improvement: Track progress to demonstrate value and maintain momentum
  5. Improvement is Everyone’s Job: Engage the entire organization in identifying and implementing improvements

Knowledge Check

  1. What’s the most important element of effective post-incident reviews?

    • A) Detailed technical analysis
    • B) Identifying who was at fault
    • C) Blameless focus on systematic improvements
    • D) Comprehensive documentation
  2. How often should startups conduct security improvement reviews?

    • A) Daily
    • B) Weekly
    • C) Monthly
    • D) Annually
  3. What’s the best source of security improvement ideas?

    • A) External consultants only
    • B) Security team only
    • C) Multiple sources including employees, customers, and assessments
    • D) Industry best practices only


Congratulations! You’ve completed the IDENTIFY function of the NIST Cybersecurity Framework 2.0. In the next phase, we’ll explore the PROTECT function, learning how to implement safeguards to ensure delivery of critical services and limit the impact of potential cybersecurity events.


David McDonald

I'm David McDonald, the Cyber Risk Guy. I'm a cybersecurity consultant helping organizations build resilient, automated, cost effective security programs.
