NAT and HRO; Summary

The NAT and HRO theories both oversimplify the causes of accidents. HRO underestimates the problems of uncertainty, while NAT recognizes the difficulty of dealing with uncertainty but underestimates and oversimplifies the potential ways to cope with it. Both theories treat redundancy as the only way to handle risk.

Limitations of Both NAT and HRO

Perrow's contribution with NAT was to identify interactive complexity and tight coupling as critical factors that should not be discounted. His top-down, system view of accidents, versus the bottom-up, component-reliability view of the HRO theorists, is critical to understanding and preventing future accidents. While the HRO theorists do offer more suggestions, most of them are inapplicable to complex systems or oversimplify the problems involved.

A top-down, systems approach to safety

First, it is important to recognize the difference between reliability and safety. HRO researchers talk about a “culture of reliability” where it is assumed that if each person and component in the system operates reliably, there will be no accidents.

Highly reliable systems are not necessarily safe and highly safe systems are not necessarily reliable. Reliability and safety are different qualities and should not be confused. In fact, these two qualities often conflict. Increasing reliability may decrease safety and increasing safety may decrease reliability.

Reliability in engineering is defined as the probability that a component satisfies its specified behavioral requirements over time and under given conditions. If a human operator does not follow the specified procedures, then they are not operating reliably. In some cases that can lead to an accident. In other cases, it may prevent an accident when the specified procedures turn out to be unsafe under the circumstances.

If the goal is to increase safety, then we should be talking about enhancing the safety culture, not the reliability culture. The safety culture is that part of organizational culture that reflects the general attitude and approaches to safety and risk management. Aircraft carriers do have a very strong safety culture, and many of the aspects of this culture observed by the HRO researchers can and should be copied by other organizations, but labeling these characteristics as "reliability" is misleading and can lead to misunderstanding what is needed to increase safety in complex, tightly coupled systems.

Safety is an emergent or system property, not a component property. Determining whether a plant is acceptably safe is not possible by examining a single valve in the plant (although conclusions can be reached about the valve’s reliability). Safety can be determined only by the relationship between the valve behavior and the other plant components and often the external environment of the plant—that is, in the context of the whole. A component and its specified behavior may be perfectly safe in one system but not when used in another.

Sources:

Shrivastava, S., Sonpar, K. & Pazzaglia, F. (2009): "Normal Accident Theory versus High Reliability Theory: A Resolution and Call for an Open Systems View of Accidents"

Marais, K., Dulac, N. & Leveson, N.: "Beyond Normal Accidents and High Reliability Organizations: The Need for an Alternative Approach to Safety in Complex Systems", MIT

NAT and HRO; HRO

As promised, this post is about High Reliability Organizations (HRO). If you haven't read the previous post about Normal Accident Theory (NAT), we recommend reading it first.

Around the same time Perrow articulated his Normal Accident Theory (NAT), another stream of research emerged. Scholars from the Berkeley campus of the University of California came together to study how organizations that operate complex, high-hazard technologies manage to remain accident-free for long periods. This group and other scholars later became what we know as the HRO school.

High reliability organizations operate in complex, high-hazard domains for extended periods without serious accidents or catastrophic failures.

Just like NAT, HRO theory relates to the two dimensions – interactive complexity and loose/tight coupling – which Perrow claimed together determine a system's susceptibility to accidents.
Interactive complexity refers to the presence of unfamiliar, unplanned, or unexpected sequences of events in a system that are either not visible or not immediately comprehensible. A tightly coupled system is one that is highly interdependent: each part of the system is tightly linked to many other parts, so a change in one part can rapidly affect the status of other parts. Tightly coupled systems respond quickly to perturbations, but this response may be disastrous. Loosely coupled or decoupled systems have fewer or less tight links between parts and can therefore absorb failures or unplanned behavior without destabilization. An HRO is hypercomplex – it combines an extreme variety of components, systems, and levels with very tight coupling, meaning reciprocal interdependence across many units and levels.

Why reliability?

Although HRT scholars have abandoned attempts to explicitly define reliability, they appear to agree that reliability is the ability to maintain and execute error-free operations. 

HRO scholars report that HROs emphasize the following conditions as necessary, but not sufficient, for ensuring reliability: a strategic prioritization of safety, careful attention to design and procedures, a limited degree of trial-and-error learning, redundancy, decentralized decision making, continuous training (often through simulation), and strong cultures that encourage vigilance and responsiveness to potential accidents.

Why is this important?

It is important to recognize that standardization is necessary but not sufficient for achieving resilient and reliable health care systems. High reliability is an ongoing process or an organizational frame of mind, not a specific structure. Examples include health care organizations aiming to become highly reliable in their practice, air traffic control systems, nuclear power plants, and NASA. In each case, even a minor error could have catastrophic consequences.

Your organization

Even though your organization might not hold lives in its hands, all organizations still face risks to profits, customer satisfaction, and reputation. The behaviors of HROs can be very instructive for those trying to figure out how to error-proof processes, avoid surprises, and deliver the desired outcome every single time.

Sources:

Shrivastava, S., Sonpar, K. & Pazzaglia, F. (2009): "Normal Accident Theory versus High Reliability Theory: A Resolution and Call for an Open Systems View of Accidents"

Marais, K., Dulac, N. & Leveson, N.: "Beyond Normal Accidents and High Reliability Organizations: The Need for an Alternative Approach to Safety in Complex Systems", MIT

NAT and HRO; NAT

Over the following three weeks, we will elaborate on and evaluate the theoretical debate between two dominant schools on the origins of accidents and reliability: Normal Accident Theory (NAT) and High Reliability Theory (HRT). This week, we start with NAT.

NAT

Background

Charles Perrow's initial formulation of what has come to be known as Normal Accident Theory (NAT) was developed in the aftermath of the accident at the Three Mile Island nuclear power plant in 1979. Perrow introduced the idea that in some technological systems, accidents are inevitable or "normal". He defined two related dimensions – interactive complexity and loose/tight coupling – which he claimed together determine a system's susceptibility to accidents.
Interactive complexity refers to the presence of unfamiliar, unplanned, or unexpected sequences of events in a system that are either not visible or not immediately comprehensible. A tightly coupled system is one that is highly interdependent: each part of the system is tightly linked to many other parts, so a change in one part can rapidly affect the status of other parts. Tightly coupled systems respond quickly to perturbations, but this response may be disastrous. Loosely coupled or decoupled systems have fewer or less tight links between parts and can therefore absorb failures or unplanned behavior without destabilization.

According to the theory, systems and organizations characterized by both interactive complexity and tight coupling will experience accidents that cannot be foreseen or prevented. Such accidents are called system accidents.

Why is this important?

But how do we get from nuclear meltdowns to offshore or construction projects? Once you start to dissect them, some of the projects we manage exhibit both of these features in abundance.

Our industry constantly deals with risk-analysis scenarios, foul weather, temporary infrastructure and communications, staff unfamiliar with their role or location, large-scale deployment of team members with low levels of training, contractors, the supply chain, and much more. Any single one of these elements has the potential to suffer a failure that might interact unexpectedly with another part of the system. There are so many variables for a project team to deal with – especially when you add the unpredictability of human behavior into the mix – that it's easy to imagine the interaction of dozens of potential accident scenarios over the course of a single project day. It is Perrow's "complexity" in a nutshell.

Perrow's conclusion – that accidents are inevitable in these systems, and that systems for which accidents would have extremely serious consequences therefore should not be built – is, however, overly pessimistic. The argument advanced is essentially that efforts to improve safety in interactively complex, tightly coupled systems all involve increasing complexity and therefore only make accidents more likely. The flaw in the argument is that the only solution he considers for improving safety is redundancy. An alternative is an organization where decisions are taken at the lowest appropriate level and coordination happens at the highest necessary level. In such systems, staff at all relevant levels are trained and – just as importantly – empowered and supported to take potentially economy- and life-saving decisions.

Of course, this is easy to write and far harder to achieve: such is the complexity of the systems in which we operate that the impacts of decisions by individuals throughout the chain can have far-reaching effects, themselves adding to the problems we seek to resolve. Through planning and training, however, key roles and positions can be identified where the fast pace of tight coupling can be matched.

Next week we will elaborate on and evaluate High Reliability Organizations, so stay tuned!

Sources:

Shrivastava, S., Sonpar, K. & Pazzaglia, F. (2009): "Normal Accident Theory versus High Reliability Theory: A Resolution and Call for an Open Systems View of Accidents"

Marais, K., Dulac, N. & Leveson, N.: "Beyond Normal Accidents and High Reliability Organizations: The Need for an Alternative Approach to Safety in Complex Systems", MIT

Risk Society; part 3

In the last two posts, we have circled around Beck's theory of the risk society, but we have not yet paid attention to the good deal of criticism it has received over time.

Robert Dingwall has argued that “Risk Society” was influenced more by German cultural and intellectual traditions than by a careful analysis of risks over time and across societal contexts. In his view, risk society theory cannot be generally applicable because it is rooted in one scholar’s delimited perspective, derived from one society’s historical experience.
Deborah Lupton has pointed out, among other things, that Beck’s writings reveal an ontological confusion about risk, in that he sometimes adopts a realist approach to risk while at other times he comes across as a social constructionist. Sociological theorist Jeffrey Alexander has strongly criticized Beck, Giddens, and Lash for their claims regarding reflexive modernization.

Anthony Elliot has faulted Beck along various lines, for example for ignoring important dimensions of how risk is perceived, but more importantly for assuming that risk is a central feature of contemporary social life and that it is increasing — a position Elliot calls "excessivist."

Most relevant for this discussion is one criticism of the risk society thesis: whether, as Beck claims, the risks societies face today are so large that they are qualitatively different from those that existed in the pre-modern and modern eras. Just like Elliot, Bryan Turner challenges this assumption, asking instead:

“[W]ere the epidemics of syphilis and bubonic plague in earlier periods any different from the modern environmental illnesses to which Beck draws our attention? That is, do Beck’s criteria of risk, such as their impersonal and unobservable nature, really stand up to historical scrutiny?

The devastating plagues of earlier centuries were certainly global, democratic, and general . . . [and] with the spread of capitalist colonialism, it is clearly the case that in previous centuries many aboriginal peoples such as those of North America and Australia were engulfed by environmental, medical and political catastrophes that wiped out entire populations."

The Black Death killed an estimated 30 percent of the population of Europe. In the 20th century, the waves of influenza that spread worldwide in 1918 and 1919 killed between 50 and 100 million people, or up to 6 percent of the world's population at the time. The last century also saw the emergence of the scourge of AIDS. In 2008, an estimated 33 million people worldwide were infected with HIV, with 67 percent of those cases concentrated in Africa, where numbers were expected to rise. And by the time of writing, the COVID-19 pandemic had killed almost 4 million people.

Clearly the potential for catastrophe on a worldwide scale existed both prior to and independent of the technologies of late modern society. Beck did not just underestimate non-technological risks like epidemics. He also appears to have been so preoccupied with the perils associated with the technologies he viewed as risky that he overlooked two of the world’s most significant human-induced threats, climate change and financial risk, as well as risks associated with terrorism.

Source:

Tierney, K. (2015): “The social roots of risk: How vulnerable are we?”, find the link here

Risk Society; part 2

In the last post, we began to hear about what Ulrich Beck calls the "risk society", which we will elaborate on further today.

To sum up, the risk society is defined, according to Beck, as "a systematic way of dealing with hazards and insecurities induced and introduced by modernization itself". To continue from last week, here is some further vocabulary taken from Beck's work, guiding us through the risk society in a more positive way.

Risk community
“Risk is not, in other words, the catastrophe, but the anticipation of the catastrophe. It is not a personal anticipation, it is a social construction. Today, people are aware that risks are transnational, and they are starting to believe in the possibility of an enormous catastrophe, like radical climate change or a terror attack. For this sole reason we find ourselves tied to others, beyond borders, religions, cultures. In one way or the other, risk produces a certain community of destination and, perhaps, even a worldwide public space”.


Culture of uncertainty
“What we need is a “culture of uncertainty”, which must clearly be distinguished from “culture of residual risk” on one hand, and the culture of “risk-free” or “security” on the other. The key to a culture of uncertainty is in the availability to openly talk of how we face risks, in the availability to recognize the difference between quantitative risks and non-quantitative uncertainties: in the availability to negotiate between various rationalities, rather than engaging in mutual condemnation; in the availability to raise modern taboos on rational bases; and – last but not least – in recognizing the importance of demonstrating to the collective will that we are acting in a responsible way in terms of the losses that will always happen, despite all precautions.

A culture of uncertainty that will no longer recklessly talk of “residual risk” because every interlocutor will recognize that risks are only residual if they happen to others, that the aim of a democratic community is to take on a common responsibility. The culture of uncertainty, however, is also different compared to a “culture of security”. By this I mean a culture where absolute security is considered a right towards which all society should lean. Such a culture would choke any innovation in its cage”.

Risk globalization
“Within the reflection of the modernization processes, productive forces have lost their innocence. The growth of technological-economic “progress” is ever more obscured by the production of risks. “Initially, they might be legitimized as “hidden side effects”. But with their universalization, with the criticism carried out by the public opinion and the (anti)scientific analysis, risks definitely emerge from latency and acquire a new and central meaning for social and political conflict.”

By this, we need to accept insecurity as an element of society.

For further reading

  • Beck, Ulrich (1992). Risk Society: Towards a New Modernity. Translated by Ritter, Mark. London: Sage Publications. ISBN 978-0-8039-8346-5.
  • Beck, Ulrich (2020) “Review of Risk Society: Towards a New Modernity”

Risk Society

We face risk everywhere we go in our everyday lives. These are technological risks that, because they are produced by society, are considered preventable; a contrast to natural risks, which have traditionally been studied by hazards researchers. That said, both types of risk are affected by social, economic, political, and cultural systems. In this way they are both "social" in their form.

According to the German sociologist Ulrich Beck, we are all exposed to risk because we live in the risk society. Not because we live in a more dangerous world than before, but because risk has become the center of public debate.
This post is about the risk society we live in, and why we need to understand it, according to Ulrich Beck. Beck's research implies that we can only anticipate danger and overcome fears if we understand that risk has now become the center of life for each and every one of us. We live in a time where risk is on the global horizon, and companies as well as individuals are guided by it.
According to Beck, acting responsibly is not only necessary; it also brings a strategic advantage. The ability to anticipate a risk means not turning emergencies into social panic, or fears into catastrophes.

To guide us through the risk society, Beck teaches us to make positive changes in the ways and practices of strategic decision-making.

Anticipating
“From a sociological perspective, the concept of risk is always a matter of anticipation. Risk is the anticipation of disaster in the now, in order to prevent that disaster from happening or even worse. Anticipating a risk means putting the potential danger in perspective. The anticipation of the disaster puts a strain on the most steadfast certainties, but offers everyone the chance to produce significant changes, kickstarting new energies”.

An unexpected event
“Even if there is no catastrophe, we find ourselves in the middle of a social development in which the expectation of the unexpected, the waiting for possible risks increasingly dominates the scene of our lives: individual risks and collective risks. A new phenomenon that becomes a stress factor for the institutions of law, finance, for the political system and even for families’ everyday lives. Being able to live in the risk society means anticipating the unexpected”.

Freedom within risk 
Risk is a constructive part of the social insecurity we live in. We cannot, however, define it in absolute terms: "Every insecurity is always relative to the context and concrete risks that a person or a society needs to deal with".

Beck concludes: “we need to accept insecurity as an element of our freedom. It might seem paradoxical, but this is also a form of democratization: it is the choice, that is continually renewed, between various possible options. The change stems from this choice”. 

In the next post, we’ll learn even more about the risk society according to Ulrich Beck!

For further reading:

– Beck, Ulrich (1992). Risk Society: Towards a New Modernity. Translated by Ritter, Mark. London: Sage Publications. ISBN 978-0-8039-8346-5.

– Beck, Ulrich (2020) “Review of Risk Society: Towards a New Modernity”

Does your company have a Contingency Plan?

Good strategies always include a Business Contingency Plan (CP), in case the original plan backfires and does not work as expected. In that situation you need a CP to achieve the same goal as planned; the CP works as your 'plan B'.

Let us see why you need a business contingency plan and how to create one in a few simple steps!

What is a CP?

But first, let’s define what a contingency plan is.

A contingency plan is a proactive strategy that describes the course of action the management and staff of an organization need to take in response to an event that could possibly happen in the future. A CP deals, in other words, with likelihoods and possibilities that we cannot predict with certainty.

What is the purpose of a CP?

A CP helps you stay prepared for unforeseen events and minimize their impact. The purpose of a contingency plan is to help your business resume normal operations after a disruptive event. A CP can also help organizations recover from accidents, manage risk, avoid negative publicity, and handle employee injuries.
When your primary plan doesn't work, you execute plan B. This lets your business react faster to unexpected events.

How to make a CP?

An effective CP is based on good research and brainstorming. The four steps below show you how to develop a business contingency plan to help you prepare for the unexpected.

1) Identify the risks

Before you can prepare for an event, you need to know what you are preparing for. You therefore need to identify the major events that could have a negative impact on the course of your business and on key resources, such as your employees, IT systems, and machines. Think of all the possible risks in your organization. As you brainstorm, it is worth involving employees from other teams to ensure that you are preparing for risks across the entire organization, not only in your own team.

Tip: use a mind map to organize and categorize the risks you gather from the brainstorming session!

2) Prioritize the risks

Once the list is created, you need to start prioritizing the risks based on the threat they pose. Make sure you spend your time preparing for events that have a high chance of occurring, rather than for events that are unlikely to happen.

Tip: To determine which risks are more likely to occur, use a risk impact scale!
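The risk-impact scale can be sketched in a few lines of code. This is only a minimal illustration, not a prescribed method: the 1–5 scales and the example risks below are assumptions made for the sketch.

```python
# Minimal risk-impact scale: score = likelihood x impact, both rated 1-5.
# The risks and ratings below are illustrative assumptions, not real data.
risks = [
    {"name": "Supplier bankruptcy", "likelihood": 2, "impact": 5},
    {"name": "IT system outage",    "likelihood": 4, "impact": 4},
    {"name": "Key employee leaves", "likelihood": 3, "impact": 2},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Highest score first: prepare for these events before the rest.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["name"]}: {risk["score"]}')
```

Whatever scale you choose, the point is the same: a simple, consistent score lets you compare risks from different teams on one list.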

3) Develop contingency plans                     

Once you have created a prioritized list, it’s time to put a plan together to mitigate those risks. As you write a contingency plan, it should include visuals or a step-by-step guide that outlines what to do once the event has happened and how to keep your business running. Include a list of everyone, both inside and outside of the organization, who needs to be contacted should the event occur, along with up-to-date contact information.

Tip: we recommend you begin with the threats you consider high priority!

4) Maintain the plan

Even after you've developed a CP, the process doesn't stop there. Once you have completed the contingency plans, make sure that:

  • The CP is quickly accessible to all employees and stakeholders
  • You communicate the plan to everyone who could potentially be affected
  • You review your plan frequently (personnel, operational, and technological changes can make the plan ineffective, which means you may need to make some changes)

Benefits of a CP

Without a backup plan, you're exposing yourself to unnecessary risks. Here we have listed some of the most important benefits of a CP that you cannot ignore:

  • Helps your business react quickly to negative events
  • A CP lists the actions that need to be taken, so everyone knows what to do without wasting time panicking
  • Allows you to minimize damage and loss of production

What is the CP planning process in your organization? Let us know in the comment section below!

Sources:

The Danish template for CP

For inspiration, take a look at CP templates.

Risk strategy; risk transfer, sharing and spreading

This post focuses on the last of the risk management strategies we have introduced over the past weeks. Take a look at the previous posts to get the full overview!

The final and most debated goal of risk management strategies is, according to Senior Disaster Management Specialist Damon P. Coppola, risk transfer, sharing, or spreading. The concept is not actually to reduce the risk, but to dilute its consequence or likelihood across a large group of people such that each suffers an average consequence. Risk transfer involves moving the risk to a third party or entity, even though this may include giving up some control. By outsourcing, moving the risk to an insurance agency, or leasing property, your organization is not solely responsible when something goes wrong.
The most common form of risk transfer is insurance, which includes reinsurance. Insurance reduces the financial consequence of a hazard's risk by eliminating the monetary loss associated with property damage. Insurers charge a calculated premium that is priced according to the hazard's expected frequency and consequence. Payment of the premium guarantees the repayment of losses to impacted participants if the insured hazard occurs. In this way, the cost of the secondary hazards is shared by, or spread across, all participants through the payment of premiums. Risk transfer safeguards the project team against unpredictable risks such as weather, political unrest, or COVID-19, which are outside the project team's control.
Note: Risk management may seem superfluous at the beginning of a project. When a project manager is beginning a new project, it is indeed difficult to consider what could go wrong, especially if the project team is overconfidence-biased (as described in our earlier post). Therefore, risk management must be considered an absolute priority from the beginning of the project!

Risk transfer does not always result in lower costs. Instead, risk transfer is the best strategy when you can reduce future damage. Insurance costs money, but it may end up being more cost-effective than having the risk occur and being solely responsible for reparations.
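The premium logic described above – pricing according to a hazard's expected frequency and consequence – can be sketched as an expected-annual-loss calculation. The numbers and the loading factor here are illustrative assumptions, not how any real insurer prices policies:

```python
def annual_premium(event_probability: float, expected_loss: float,
                   loading: float = 1.3) -> float:
    """Expected annual loss times a loading factor that covers the
    insurer's costs and margin. All inputs are illustrative."""
    return event_probability * expected_loss * loading

# E.g. a 2% yearly chance of a storm causing 500,000 in property damage:
premium = annual_premium(0.02, 500_000)
print(round(premium, 2))  # 13000.0
```

The policyholder trades a small, certain cost (the premium) for protection against a large, uncertain one, which is exactly the "dilution" of consequence across many participants described above.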

Risk sharing means sharing the risk impacts or liability among suppliers, partners, contractors, or companies through a contract. This sharing enables them to reduce risks around capacity and the risk of price fluctuations. For instance, if a power supply fails in an expensive server, causing a loss of revenue for a customer, you could ask for and receive a replacement power supply.

Summary of risk management strategies

Avoid, accept, transfer, or reduce consequence or likelihood: for each risk you encounter, you and your organization will have to deal with it somehow. A pre-assessment or risk analysis enables more options than just a major construction recall.

Within your organization’s risk management framework, you should be aware of the different strategies along with understanding the guidelines for their implementation. Engineers and managers make decisions concerning risks every day, throughout the organization. Providing a set of clear strategies along with guidance allows the entire organization to appropriately mitigate risks daily.  

Feel free to comment, or contact us for more information!

Source:

Coppola, D. (2015): “Introduction to international disaster management”

Risk and culture

Culture is based on the unique human ability to classify experiences, code such classifications, and pass such abstractions on. Cultural theory aims to understand why different people and social groups fear different risks. More specifically, the theory claims that this is largely determined by social aspects and cultural adherence. The basis of cultural theory is anthropologists Mary Douglas and Michael Thompson's grid-group typology (1978).

Why is this important? 

Cultural theory draws focus away from concepts such as risk and safety, and towards social institutions. To deal with risk in a reasonable manner you must understand the underlying mechanisms. 
You can use this theory and model to understand cultures in countries and companies, and hence decide how to influence them. Also seek to understand your own culture, which may be different, as well as the multiple cultures that may come into conflict at times.

The grid/group typology 

Grid 
Grid refers to the degree to which individuals' choices are circumscribed by their position in society. At one end of the spectrum, people are homogeneous in their abilities, work, and activity, and can easily interchange roles. This makes them less dependent on one another. At the other end of the spectrum, there are distinct roles and positions within the group, with specialization and different accountability. There are also different degrees of entitlement depending on position, and there may be a different balance of exchange between and across individuals. This makes it useful to share and organize together.

Group  
Group refers to the degree of solidarity among members of the society and how strongly people are bonded together. On one hand, there are separate individuals, perhaps with a common reason to group together, though with less of a sense of unity and connection. On the other hand, some people have a connected sense of identity, relating more personally to one another.

The grid/group model
If the two dimensions are placed in a two-axis system, from low to high, four outcomes occur. These represent four different kinds of social environments, or biases, so to speak. They are termed the individualistic, egalitarian, hierarchical, and fatalistic worldviews, and each has a self-preserving pattern of risk perceptions.

Individualistic

Individualists experience low grid and low group. They value individual initiative in the marketplace and fear threats that might obstruct their individual freedom. In general, individualists see risk as an opportunity as long as it does not limit freedom. Self-regulation is a critical principle here: if one person takes advantage of others, power differences arise and a fatalistic culture would develop.

Egalitarian

Egalitarians experience low grid and high group. The good of the group comes before the good of any individual, because everyone is equal. They fear developments that may increase the inequalities among people. Furthermore, they tend to be skeptical of expert knowledge, because they suspect that experts and strong institutions might misuse their authority.

Hierarchic

Hierarchists experience high grid and high group. A hierarchist society has a well-defined role for each member. Hierarchists believe in the need for a well-defined system of rules, and fear social deviance (such as crime) that disrupts those rules. Hierarchists have a great deal of faith in expert knowledge.  

Fatalistic

Fatalists experience high grid and low group. Fatalists take little part in social life, yet they feel tied to and regulated by social groups they do not belong to. This makes the fatalist quite indifferent to risk – what the person fears or not is mostly decided by others. The fatalist would rather be unaware of dangers, since they are assumed to be unavoidable anyway.

These four worldviews, or ways of life, make up the central part of the cultural theory. 
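Since the model is just a two-axis lookup, the four quadrants described above can be encoded directly. This sketch simplifies each dimension to a binary high/low value purely for illustration:

```python
# Grid/group typology: (grid, group) -> worldview, per the four quadrants.
WORLDVIEWS = {
    ("low",  "low"):  "individualistic",
    ("low",  "high"): "egalitarian",
    ("high", "high"): "hierarchical",
    ("high", "low"):  "fatalistic",
}

def classify(grid: str, group: str) -> str:
    """Return the cultural-theory worldview for a grid/group pair."""
    return WORLDVIEWS[(grid, group)]

print(classify("low", "high"))  # egalitarian
```

In reality both dimensions are continua, so treating them as binary is a deliberate simplification of the typology.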

Note: In addition to the four worldviews described above, cultural theory also recognizes a fifth way of life that does not fit this pattern: the hermit, who withdraws from social involvement altogether.

Sources

  • Thompson, M., Ellis, R., and Wildavsky, A., (1990) Cultural Theory, Westview Press, Colorado, CO 

Vulnerability Assessment

Understanding your vulnerabilities is just as vital as risk assessment, because vulnerabilities can lead to risks. If there is a universal imperative when it comes to mitigating vulnerabilities, it's to analyze them before you try to fix them. The more complex they are, the more critical this assessment step becomes.

What is a vulnerability assessment? 

A vulnerability assessment refers to the process of defining, identifying, classifying, and then prioritizing all the vulnerabilities that exist in the various infrastructures and applications within the company.

With an effective vulnerability assessment, your organization has the tools needed to understand its security weaknesses, assess the risks associated with those weaknesses, and put protections in place that reduce the likelihood of those risks occurring.

How to perform a vulnerability assessment

There are three general steps that your company can follow:

  1. Identify and rank the vulnerabilities
  2. Document the vulnerabilities
  3. Create guidance

Step 1: Identify and rank the vulnerabilities

In this step you should define the risk and critical value for each action. You can construct a matrix with columns for the vulnerability, a possible scenario, the probability of the event, and the impact of such an event.

Tip: Focus on what matters most!
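The matrix from step 1 can be kept as simple structured data, which also makes step 3's guidance easy to derive. This is a hypothetical sketch: the entries, the 1–5 scales, and the score threshold are all assumptions made for illustration.

```python
# One row per vulnerability: possible scenario, probability and impact (1-5).
# Entries and the threshold below are illustrative assumptions.
matrix = [
    {"vulnerability": "Unpatched web server",  "scenario": "Remote exploit",
     "probability": 4, "impact": 5},
    {"vulnerability": "Weak office door lock", "scenario": "Break-in",
     "probability": 2, "impact": 3},
    {"vulnerability": "No backup power",       "scenario": "Outage halts systems",
     "probability": 3, "impact": 4},
]

for row in matrix:
    score = row["probability"] * row["impact"]
    # Crude guidance rule: scores of 12+ call for new or additional measures.
    row["action"] = "new/additional measures" if score >= 12 else "monitor"
    print(row["vulnerability"], score, row["action"])
```

Documenting the matrix in a form like this (step 2) also means the findings can be reproduced and re-scored the next time the assessment is reviewed.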

Step 2: Document the vulnerabilities

The purpose of this step is to document the vulnerabilities so that you can easily identify and reproduce the findings in the future.

Step 3: Create guidance

Use the resulting profile to provide a clear graphical outline of which actions are associated with the greatest vulnerabilities, and likewise which ones to consider new or additional measures against.

Tip: Your vulnerability assessment should be reviewed and updated on a regular basis or when changes have been made!

Note: The vulnerability profile cannot stand alone. It should be done along with a risk assessment; (re)read the post about risk assessment.


Sources

Snedaker, S. & Rima, C. (2014): "Risk Assessment; Vulnerability Assessment", in Business Continuity and Disaster Recovery Planning for IT Professionals, 2nd edition

Balbix: “Brief overview of vulnerability assessment”, available online: https://www.balbix.com/insights/vulnerability-assessments-drive-enhanced-security-and-cyber-resilience/