How to create the foundation for a strong security behavior and culture program

Most security teams want the same thing: a workforce that feels confident, supported, and capable.

Yet many organizations have built systems where employees carry most of the responsibility, even when they have the least influence over the conditions around them.

This tension is not new, but it’s becoming harder to ignore.

Perspectives from safety science, such as Human and Organizational Performance (HOP), offer a way to make sense of it: if security depends on perfect attention or navigating confusing workflows, the issue isn’t the person. It’s the system.

This article explores what we’re learning, through ongoing research and hands-on experience, about building a stronger foundation for security behavior and culture programs. One that supports people instead of overwhelming them.

And why, especially now, that foundation matters more than ever.

How we ended up here: the state of the security industry

The security industry didn’t start with the idea that “people are the weakest link.” That line emerged slowly, as data piled up and was interpreted in a very particular way.

The Verizon Data Breach Investigations Report (DBIR) has long shown that a majority of breaches involve what it calls “the human element”: everything from misdelivery and misconfiguration to social engineering. Recent editions place this anywhere from 60% to 74%.

SANS’s 2025 Security Awareness Report adds another layer: nearly 80% of organizations say social engineering is their top human risk, far ahead of anything else.

Bar chart from the SANS 2025 Security Awareness Report showing the top reported human risks: social engineering attacks rank far above incorrect handling of sensitive data and weak passwords or poor authentication

Taken at face value, the conclusion seems obvious: people are the problem.

But when these numbers are repeated year after year without context, they start having an impact on security culture. They influence budgets, policies, and even the tone of our awareness programs.

What often gets lost is why these human-related events happen:

  • Cloud and SaaS environments have become more complex than many non-specialists can safely configure.

  • Workloads have increased; inboxes are overflowing; decisions are made under time pressure.

  • Policies and compliance requirements have grown faster than teams can simplify them.

  • Technical limitations and organizational constraints quietly push responsibility downward.

The result is a familiar imbalance: a system that assumes people can catch what technology misses, even when they’re overloaded, unsupported, or simply trying to get their work done.

In practice, many organizations unintentionally place the greatest burden on employees, while the systems meant to support them remain fragile.

This is the opposite of what safety science teaches us. There is a misalignment between where risk is absorbed and where responsibility is placed.

Cybersecurity is not the first field to grapple with the tension between human behavior and system complexity. Aviation, healthcare, and other safety-critical industries have faced this challenge for decades—and still do.

A turning point came when they embraced a new way of understanding human error. That framework is called Human and Organizational Performance (HOP).

HOP: Human and Organizational Performance

HOP wasn’t created for information security. It emerged from fields where human error carries real physical consequences, and leaders had to understand why good people make mistakes and why some systems fail more often than others.

The result is a set of principles that feel surprisingly familiar to anyone working with security behavior today:

  1. People make mistakes. Error is a normal part of human work. If a single mistake causes major harm, the system wasn’t designed to handle being used by humans.

  2. Blame fixes nothing. Focusing on the person closest to the outcome may feel satisfying, but it prevents learning and hides deeper system weaknesses.

  3. Learning is the key to improvement. Incidents, near misses, mis-clicks, and reports are signals. When organizations treat them as learning opportunities, they strengthen the system over time.

  4. Context drives behavior. People’s actions make sense within the conditions they are operating under, such as workload, pressure, ambiguity, tools, and incentives, even if they don’t make sense in hindsight.

  5. How we respond matters. The way organizations respond to mistakes determines whether people speak up, report early, and help improve the system—or stay silent and work around it.

In cybersecurity, these principles help shift the focus away from “Why did this person fail?” and toward a more useful question: “What was happening around them at the moment it happened?”

These ideas are still finding their footing in cybersecurity, but the resonance is unmistakable.

Teams in the security awareness community are already discussing how much of their day-to-day work is shaped not by maliciousness or carelessness, but by conditions: unclear responsibilities, unpredictable workflows, or tools not designed for how people work.

How HOP helps us see cybersecurity differently

Inside Secure Practice, we’ve been asking ourselves why the same types of human-related incidents continue to appear even when awareness efforts improve. HOP gives us a useful lens for exploring that question by focusing on the environment around people, not just their individual choices.

The goal is not to introduce a new theory, but to understand how one HOP principle can help you design security programs that work better with how people operate at work.

Cybersecurity has been interpreting incident patterns through a lens that safety science abandoned years ago:

In cybersecurity, we often treat the user as a hazard. But the real hazard is the external threat exploiting human and system vulnerabilities. Employees are the ones exposed to it.

– Solveig W. Pettersen, Head of Research, Secure Practice

This isn’t just a semantic shift; it changes where we look for solutions.

If the hazard is external (phishing campaigns, misconfigurable systems, unsafe defaults), then:

  • Leaders and system owners carry the responsibility for reducing exposure

  • Technical and organizational controls become the primary levers

  • Awareness and individual vigilance become the last line of defense, not the first

It also explains several contradictions that security teams experience daily.

HOP gives language to something security practitioners have been sensing: most incidents reflect system design more than individual decision-making.

So, the question becomes:

How do we redesign the system so that people are not the ones catching everything that slips through?

The Hierarchy of Controls: what it is, and why it matters for cybersecurity

Once you understand HOP’s principles, the next step is figuring out how to redesign the environment so that people aren’t carrying the entire load. Safety science has long offered a way to think about this imbalance: the Hierarchy of Controls.

The Hierarchy of Controls is a foundational concept in occupational safety. Agencies like OSHA and NIOSH use it to help organizations decide where to invest their effort when trying to reduce risk.

In this article, we see it as one of the practical ways to operationalize the first HOP principle: that error is normal, and systems must absorb it. It directly influences how we design controls, training, and accountability in security programs.

The model is simple, but powerful: interventions are ranked from most effective to least effective based on how much they rely on human behavior.

The classic Hierarchy of Controls diagram from occupational safety, showing five layers from most to least effective

What this model tells us is that:

  1. Focusing most of the effort at the top of the hierarchy (elimination, substitution, and engineering controls) yields the strongest, most sustainable risk reductions.

  2. Relying more heavily on the bottom of the hierarchy (training and personal vigilance) could lead to more incidents, more variability, and more human fatigue.

This doesn’t mean training or personal responsibility are unimportant. It means they are the least effective layers for sustained risk reduction, because they depend on people performing complex tasks consistently and perfectly, often under pressure.

The hierarchy isn’t a perfect fit, since the nature of digital threats is different from physical hazards. But the logic behind it resonates strongly with the patterns we see in our incident reviews, customer conversations, and community discussions.

When we look at this model through a cybersecurity lens, it pushes us to ask questions we don’t ask often enough, such as:

  • What can we remove so people don’t encounter the risk at all?

  • What can we replace so the safer choice becomes the default?

  • What can we engineer so errors are caught before they cause harm?

  • Where do policies and procedures clarify, and where do they add friction?

  • Which behaviors should truly depend on individual skill or vigilance?

Together, HOP + the Hierarchy of Controls create a coherent path:

  • Understand why people struggle (HOP)

  • Redesign the environment so fewer struggles land on them (Hierarchy of Controls)

  • Only then build behavior and culture programs on top of this foundation, reinforcing what the system already makes easy

This is how we rebuild the pyramid the right way up: create a strong foundation first, then build behavior and culture programs that reinforce it, rather than asking people to carry the risk alone.

A practical 5-step framework for building a balanced security behavior and culture program

An effective security behavior and culture program strengthens the processes and technology that absorb risk before employees ever feel its pressure.

It’s the shift from “people must avoid every threat” to “the environment helps them succeed,” supported by security tools that simplify decisions, metrics that track learning over time, and policies that evolve with real workflows.

A strong security behavior and culture program brings them together, and the framework below shows how.

As we’ve been adapting the Hierarchy of Controls to cybersecurity, one thing has become clear: this isn’t a “copy-paste” model. It’s a way to shift the order of thinking so that we strengthen the system before placing expectations on people.

The goal is simple: reduce the number of moments where employees must rely on high effort, high judgment, or perfect attention to keep the organization safe.

Below is the emerging, actionable framework we’ve been developing, based on what we’ve learned from supporting real organizations through Secure Practice.

  1. Eliminate (remove exposure entirely). Examples: decommission systems, close ports, remove unused accounts, reduce data. Why it matters: fewer hazards mean fewer human decisions and fewer errors.

  2. Substitute (replace risky pathways with safer defaults). Examples: move approvals out of email, use SSO/passkeys, safer communication channels. Why it matters: reduces dependence on judgment and vigilance.

  3. Engineer (strengthen technical controls that absorb risk). Examples: segmentation, centralized patching, IDS/IPS, malware detection, meaningful logs. Why it matters: the system carries more weight, so people carry less.

  4. Organize (align processes, roles, and guidance with reality). Examples: role-based access, updated procedures, living policies, role-specific learning. Why it matters: clarity reduces friction; expectations become actionable.

  5. Personal controls (build skills, habits, and confidence). Examples: awareness, secure habits, reporting culture, psychological safety. Why it matters: people perform best when supported, not overloaded.

1. Start at the top: reduce exposure before asking people to behave differently

This layer rarely shows up in cybersecurity conversations, yet it quietly has the greatest impact.

When you remove unnecessary exposure, you remove entire categories of mistakes people could make.

This often means asking very practical questions:

  • Which systems or integrations can be turned off or retired?

  • Do we still store data that no longer serves a purpose?

  • Are there old accounts, mailboxes, or domains still alive “just in case”?

  • What could we safely disconnect without disrupting work?

Our adaptation of the Hierarchy of Controls places elimination at the very top.

Disconnect unused services, decommission legacy systems, close redundant ports, reduce data surfaces—every hazard removed is a decision you no longer place on your colleagues. People can’t mishandle data that no longer exists.

This step also makes the rest of your program sharper: fewer false alarms, less noise, and clearer training needs.
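The “old accounts” question above can even be partially automated. As a minimal sketch (the export format, column names, and the 180-day threshold are illustrative assumptions, not any product’s API), a script can flag accounts with no recent login as candidates for review:

```python
from datetime import datetime, timedelta

# Illustrative threshold: accounts untouched for 180+ days get flagged.
DORMANT_AFTER = timedelta(days=180)

def flag_dormant(rows, today):
    """Return account names with no login within the threshold window.

    Each row is a dict with hypothetical "account" and "last_login"
    (ISO date) keys, e.g. from an identity-provider export.
    """
    dormant = []
    for row in rows:
        last_login = datetime.fromisoformat(row["last_login"])
        if today - last_login > DORMANT_AFTER:
            dormant.append(row["account"])
    return dormant

accounts = [
    {"account": "old-service", "last_login": "2024-01-10"},
    {"account": "jane.doe", "last_login": "2025-06-01"},
]
print(flag_dormant(accounts, datetime(2025, 6, 15)))  # → ['old-service']
```

A flagged account is a prompt for a human decision (retire it or document why it stays), not an automatic deletion.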

2. Replace risky pathways with safer defaults

Once the environment is lighter, the next step is redesigning fragile workflows so that fewer decisions depend on judgment under pressure.

Each substitution removes a moment where colleagues would otherwise need to “figure it out” under pressure.

Tools like MailRisk sit at the boundary between substitution and personal controls.

At first glance, they can resemble personal protective equipment: the individual still interacts with the message. But the critical distinction is where the cognitive burden sits.

Personal controls rely on people recognizing danger and making the right decision under pressure. The responsibility for interpretation remains with the individual.

MailRisk reduces that burden. It surfaces risk signals automatically, provides immediate context, and standardizes the response pathway. Instead of asking employees to independently assess whether something is malicious, it shifts much of that interpretation into the system and creates a safe, low-friction default action.

Illustration of a MailRisk prompt on a suspicious email: MailRisk is an email-checking and reporting tool that alerts employees to suspicious messages and provides instant feedback on why an email is dangerous

When a control depends on vigilance, it sits low in the hierarchy. When it reduces interpretation and embeds the safer action into the workflow, it moves upward.

That reduction in cognitive load decreases how much training is required and lowers the emotional cost of getting something “wrong.”
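To make the distinction concrete, here is a toy sketch of what “shifting interpretation into the system” can look like. This is not MailRisk’s implementation; the signals, word list, and trusted-domain check are invented for illustration:

```python
# Invented, simplistic risk signals for the sake of the example.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify"}

def risk_signals(sender_domain, body, trusted_domains):
    """Surface risk signals so the reader isn't judging alone."""
    signals = []
    if sender_domain not in trusted_domains:
        signals.append("external sender")
    if URGENCY_WORDS & set(body.lower().split()):
        signals.append("urgency language")
    return signals

def default_action(signals):
    # The safe, low-friction default: report when anything looks off.
    return "report for review" if signals else "no action needed"

sig = risk_signals("pay-now.example", "Verify your account immediately",
                   trusted_domains={"corp.example"})
print(sig, "->", default_action(sig))
```

The point is the shape, not the heuristics: the system names the signals and offers one obvious safe action, instead of leaving the whole interpretation to the recipient.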

3. Strengthen the system so people are protected by default

Once workflows are safer, the technical environment becomes the next buffer between threats and people.

Engineering controls involve designing systems that catch or contain problems before they land in someone’s inbox or browser. For example:

  • Segmentation that limits how far incidents can spread

  • Centralized patching

  • IDS/IPS and malware detection

  • Meaningful, structured logging

  • Automated investigation tools

These controls turn many high-risk moments into non-events.

A practical example is browser isolation. When risky links open in a safe, isolated environment by default, employees don’t have to analyze URLs, hover over links, or second-guess their instincts. Even if they click, the system contains the threat before it reaches them.

That’s engineering out uncertainty, not asking people to memorize clues. Of course, these controls still need to be calibrated to the type of users and the context they work in.

When protective measures become too rigid or misaligned with real workflows, people will find ways around them—often through shadow IT or unsafe workarounds.

4. Organize work so secure behavior flows with the system

Organizational controls focus on clarity in roles, expectations, processes, and communication. This is where security behavior and culture truly begin to take shape.

Examples include:

  • Updated procedures that reflect real workflows

  • Role-based access

  • Segregation of duties

  • Policies that evolve with the work

  • Training connected to actual responsibilities

When expectations make sense, people feel aligned rather than overloaded.

Measurement tools like human risk metrics help here by showing which groups understand certain risks well and where people need more clarity, all at a privacy-respecting, aggregated level.

Instead of sending the same training to everyone, you can focus your efforts where it’s genuinely needed.
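As a sketch of what privacy-respecting, aggregated measurement can look like (department names and events are invented, and no individual is singled out), reporting rates can be computed per group and the lowest-rate group prioritized for extra support:

```python
from collections import defaultdict

# Hypothetical aggregated events: (department, reported_simulated_phish).
events = [
    ("Finance", True), ("Finance", False), ("Finance", False),
    ("Engineering", True), ("Engineering", True),
    ("Sales", False), ("Sales", False), ("Sales", False), ("Sales", True),
]

def report_rates(events):
    """Share of simulated phishing emails reported, per department."""
    totals = defaultdict(lambda: [0, 0])  # dept -> [reported, total]
    for dept, reported in events:
        totals[dept][0] += int(reported)
        totals[dept][1] += 1
    return {d: reported / total for d, (reported, total) in totals.items()}

rates = report_rates(events)
# Target support where the reporting rate is lowest.
focus = min(rates, key=rates.get)
print("Lowest reporting rate:", focus)
```

In practice the aggregation happens inside the measurement tool; the sketch only shows why group-level rates, not individual scores, are enough to decide where training effort should go.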

Short, gamified e-learning resources also support this layer by giving people relevant guidance in small, engaging, and rewarding moments rather than long, generalized sessions.

Dashboard view from Secure Practice’s gamified e-learning platform, showing achievements, recent activity, and a leaderboard that tracks learning progress

Good organizational controls make secure behavior easier, not more demanding.

5. Support people with skills, habits, and confidence

After the system and structure are doing their part, personal controls finally become effective.

This is the layer most organizations start with, but it’s the layer that depends most heavily on context and support. Here we’re talking about:

  • Security awareness: clear, role-relevant guidance that helps people recognize situations they’re likely to face in their own workflows.

  • Reporting culture: making it normal and safe to surface uncertainty.

  • Communication tone: supportive, plain-language communication lowers resistance and reduces the fear of “doing it wrong.”

  • Psychological safety: people need to feel they can ask questions, admit mistakes, and request help without judgment.

  • Practice under realistic pressure: scenario-based exercises help teams build confidence before something real happens.

Interactive cybersecurity exercises help teams build this confidence by simulating what real incidents feel like, without assigning blame. People learn how to collaborate, how to escalate, and how to ask questions early—which is often the hardest part.

Participants taking part in a live cybersecurity incident preparedness exercise, working in groups to practice decision-making and coordination in a realistic scenario

At this stage, people aren’t carrying the system—the system is supporting them.

Bonus: Create a feedback loop that strengthens the whole system

Once the hierarchy is in place, you need a way to keep learning visible.

Culture grows from the signals you pay attention to and the adjustments you make. This loop helps you identify:

  • Emerging friction

  • Where people struggle

  • Where expectations need clarifying

  • When a technical fix beats another round of training

  • How reporting behavior evolves

  • Where confidence is growing or fading

Aggregated measurement tools, reporting insights from MailRisk, lightweight learning modules, and collaborative exercises all contribute to this broader picture.

Together, they show how behavior changes over time and where the system itself needs refining.

This is how a security program becomes more resilient: by seeing culture as a shared responsibility that shifts and strengthens through continuous learning, not through perfect individual performance.

Redefining what strong defense means

If there’s one thing this work keeps showing us, it’s that people rarely fail out of carelessness. They struggle when the system around them makes secure choices hard.

Strengthening the base by eliminating exposure, building safer defaults, clarifying processes, and supporting learning doesn’t remove the human side of security. It makes it possible.

It gives people room to breathe, room to ask questions, room to learn within a system that lets them be human and still stay safe.

FAQs about security behavior and culture programs

What is an SBCP?

A Security Behavior and Culture Program (SBCP) is a structured, long-term initiative that helps organizations build stronger, more security-conscious habits across their workforce.

Instead of focusing only on security awareness training, an SBCP takes a holistic approach: reducing exposure, improving processes, shaping employee behavior, and strengthening overall cybersecurity culture so people feel supported, not blamed, when facing cyber threats.

What are the key aspects of security behavior and culture programs in organizations?

An effective SBCP typically includes:

  • Reducing unnecessary exposure to cyber threats

  • Substituting risky workflows with safer defaults

  • Strong technical controls that protect people by default

  • Clear processes that match real work, not idealized flowcharts

  • Behavioral science-informed nudges that make secure choices effortless

  • Security initiatives that align with organizational and business goals

  • Continuous learning through reporting rates, benchmarks, and real-world signals

Together, these elements strengthen the organization’s security posture while improving everyday employee engagement.

How can a security behavior and culture program improve organizational cybersecurity?

A well-designed SBCP improves cybersecurity by shifting weight downward, into systems, processes, and workflows, so employees don’t navigate risk alone.

Over time, it improves:

  • Reporting rates for phishing emails and other threats

  • The organization’s ability to respond to security incidents

  • Awareness of cybersecurity risks at every level

  • Alignment between security initiatives and organizational goals

By making secure behavior easier than insecure alternatives, a strong security culture emerges more naturally.

What challenges do organizations encounter when designing an SBCP?

Common challenges include:

  • Treating cybersecurity training as a one-time fix

  • Relying too heavily on employee vigilance

  • Lack of buy-in from leadership or overworked teams

  • Training programs that ignore real-world workloads

  • Difficulty aligning an SBCP with broader risk management goals

  • Measuring impact beyond completion rates

Many organizations also struggle to bridge the gap between technical controls and employee behavior, which is why mixed teams (security, IT, HR, communications) tend to see better results.

What are the four main security behaviors?

Many programs focus on four foundational behaviors:

  1. Recognizing potential threats (e.g., phishing attempts)

  2. Reporting suspicious activity quickly

  3. Using secure tools and processes consistently

  4. Asking for help early, before something becomes a security incident

These behaviors are strengthened when the system around them is designed well.

What is security awareness training?

Security awareness training is a set of training programs that help employees understand common cyber threats (like phishing emails, social engineering, or malware).

It usually includes short modules, phishing simulations, gamification elements, and real-world examples that guide people toward safer decisions.

Is security awareness training that important?

Yes, but only as part of a broader, system-level effort. Awareness helps people recognize potential threats and take action, but research and case studies show that training alone cannot absorb complex cybersecurity risks.

The strongest results come when awareness is combined with supportive processes, safer defaults, real-time reporting channels, and a security culture that encourages learning rather than perfection.

What are the seven dimensions of security culture?

Different models exist, but many CISOs and security professionals rely on dimensions such as:

  • Attitudes toward cybersecurity

  • Behaviors (e.g., reporting suspicious activity)

  • Cognition (understanding of cybersecurity risks)

  • Communication (how information flows)

  • Compliance with policies

  • Norms around secure work

  • Responsibilities and role clarity

These dimensions help organizations track cultural progress through sensible benchmarks and not just completion rates.

What are the first signs of a successful shift in cybersecurity culture?

Early signs often appear quietly:

  • More questions being asked before risky decisions

  • Earlier reporting of suspicious emails or potential threats

  • Clearer communication between teams

  • Fewer workarounds and less shadow IT

  • Employees feeling more confident, not more anxious

  • Improvement in small, repeated behaviors rather than big events

These signals often show up before major metrics move, and they’re reliable indicators that a strong security culture is taking root.
