

Swarmbots, hivenets, and other stinging insects

03.23.18

With the rise of artificial intelligence, more and more malware programs are starting to think together. Fortinet recently released a report that highlights some terms we need to start paying attention to:

Bot
A “bot” is an automated program that, in this case, runs against IP addresses to find specific vulnerabilities and exploit them. Once it finds a vulnerability, it can insert malware such as ransomware or Trojans (a type of malware disguised as legitimate software) into the vulnerable device. These programs adapt to what they find in order to infect a system, and then make themselves invisible.

Swarmbot
Now, think about thousands of different bots, attacking one target at the same time. That’s a swarm, or in the latest lingo, a swarmbot. Imagine a swarmbot attacking any available access into your network. This is a bot on steroids.

Hivenet
A “hivenet” is a self-learning cluster of compromised devices that share information and customize attacks. Hivenets direct swarmbots based on what they learn during an attack. They represent a significant advance in malware development, and are now considered by some to be a kind of artificial intelligence. The danger lies in a hivenet’s ability to think during an attack.

Where do they run? Everywhere.
Bots and hives can run on any compromised internet-connected device. This includes webcams, baby cams, DVRs, home routers, refrigerators, drones, “smart” TVs, and very soon (if not already) mobile phones and tablets. Anything that has an IP address and is not secured is vulnerable.

With some 2.9 billion botnet communications per quarter that we know of, attacks aren’t just theory anymore — they’re inevitable.

Organizations have heating and cooling systems, physical security systems, security cameras and multiple types of devices now accessible from the internet. Even community water, electric and telecommunications systems are vulnerable to attack — if they are accessible.

What can you do? Take care of your business—at home and at work.
At home, how many devices do you own with an IP address? In the era of smart homes, it can add up quickly. Vendors are fast to jump on the “connect from anywhere” bandwagon, but not so fast to secure their devices. How many offered updates to the device’s software in the last year? How would you know? Do any of the products address communications security? If the answer is “none,” you are at risk.

When assessing security at work, all organizations need to consider smart devices and industrial control systems that are internet accessible, including phone systems, web conferencing devices, heating and cooling systems, fire systems, even elevators. What has an IP address? Vulnerable areas have expanded exponentially in the name of convenience and cost saving. Those devices may turn out to be far more expensive than their original price tag: remember the Target data breach? A firewall will not be sufficient protection if a compromised vendor has access.
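
One practical first step in answering “What has an IP address?” is a simple reachability check against devices you own. Below is a minimal Python sketch; the port list and addresses are illustrative assumptions, and you should only probe networks you are authorized to test:

```python
import socket

# Common service ports worth checking on your own devices
# (a hypothetical selection; adjust for your environment).
COMMON_PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https", 554: "rtsp"}

def open_ports(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the named services reachable on `host` via a plain TCP connect."""
    found = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(f"{name} ({port})")
    return found

# Example: inventory a device on your own LAN (address is illustrative).
# print(open_ports("192.168.1.23"))
```

A device answering on telnet (port 23), for instance, is a strong hint that it ships with a legacy, likely insecure, management interface.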

Evaluate the Risks of Internet Accessibility
It may be great if you can see who is ringing your doorbell at home from your office, but only if you are sure you are the only one who can do that. Right now, my home is very “stupid,” and I like it that way. I worry about my wireless garage door opener, but at least someone has to be at my house to compromise it. My home firewall is commercial grade because most small office/home office routers are abysmally insecure and easily hacked. Good security costs money.

It may be more convenient for third-party vendors to access your internal equipment from their offices, but how secure are their offices? (There is really no way to know, except by sending someone like me in). Is your organization monitoring outgoing traffic from your network through your firewall? That’s how you discover a compromised device. Someone needs to pay attention to that traffic. You may not host valuable information, but if you have 300 unsecured devices, you can easily become part of a swarm.
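
As a rough illustration of the outbound-traffic monitoring described above, here is a Python sketch that counts connections per internal host in a hypothetical firewall log format and flags unusually chatty devices. The log format, threshold, and addresses are all assumptions for illustration:

```python
from collections import Counter

def flag_chatty_hosts(log_lines, threshold=100):
    """Count outbound connections per internal source IP and flag hosts
    whose volume exceeds `threshold` -- a crude tripwire for a device
    that may have joined a swarm.

    Assumes a hypothetical log line format: "<src_ip> -> <dst_ip>:<port>".
    """
    counts = Counter()
    for line in log_lines:
        src = line.split("->")[0].strip()
        counts[src] += 1
    return [(ip, n) for ip, n in counts.most_common() if n > threshold]

# Toy data: one webcam phoning home far more often than the laptop.
log = ["10.0.0.7 -> 203.0.113.9:6667"] * 150 + ["10.0.0.2 -> 198.51.100.4:443"] * 3
print(flag_chatty_hosts(log))  # [('10.0.0.7', 150)]
```

Real deployments would, of course, read from the firewall’s actual export format and baseline per-device behavior over time, but even this crude counting surfaces the kind of anomaly someone needs to be paying attention to.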

Be Part of the Solution
Each one of us needs to eliminate or upgrade the devices that can become bots. At home, check your devices and install better security, in the same way you would upgrade locks on doors and windows to deter burglars. Turn off your computers when they are not in use. Ensure your anti-virus software is current on every device that has an operating system. Being small is no longer safe. Every device will matter.


All teams experience losing streaks, and all franchise dynasties lose some luster. Nevertheless, the game must go on. What can coaches do? The answer: be prepared, be patient, and be PR savvy. Business managers should keep these three P’s in mind as they read Chapter 8 in BerryDunn’s Cybersecurity Playbook for Management, which highlights how organizations can recover from incidents.

In the last chapter, we discussed incident response. What’s the difference between incident response and incident recovery?

RG: Incident response refers to detecting and identifying an incident—and hopefully eradicating the source or cause of the incident, such as malware. Incident recovery refers to getting things back to normal after an incident. They are different sides of the same resiliency coin.

I know you feel strongly that organizations should have incident response plans. Should organizations also have incident recovery plans?

RG: Absolutely. Have a recovery plan for each type of possible incident. Otherwise, how will your organization know if it has truly recovered from an incident? Having incident recovery plans will also help prevent knee-jerk decisions or reactions that could unintentionally cover up or destroy an incident’s forensic evidence.

In the last chapter, you stated managers and their teams can reference or re-purpose National Institute of Standards and Technology (NIST) special publications when creating incident response plans. Is it safe to assume you also suggest referencing or re-purposing NIST special publications when creating incident recovery plans?

RG: Yes. But keep in mind that incident recovery plans should also mesh with, or reflect, any business impact analyses developed by your organization. This way, you will help ensure that your incident recovery plans prioritize what needs to be recovered first—your organization’s most valuable assets.

That said, I should mention that cybersecurity attacks don’t always target an organization’s most valuable assets. Sometimes, cybersecurity attacks simply raise the “misery index” for a business or group by disrupting a process or knocking a network offline.

Besides having incident recovery plans, what else can managers do to support incident recovery?

RG: Similar to what we discussed in the last chapter, managers should make sure that internal and external communications about the incident and the resulting recovery are consistent, accurate, and within the legal requirements for your business or industry. Thus, having a good incident recovery communication plan is crucial. 

When should managers think about bringing in a third party to help with incident recovery?

RG: That’s a great question. I think this decision really comes down to the confidence you have in your team’s skills and experience. An outside vendor can give you a lot of different perspectives, but your internal team knows the business. That said, this is one area where it doesn’t hurt to have an outside perspective: recovery is so important, and we often don’t perceive ourselves as the outside world does.

This decision also depends on the scale of the incident. If your organization is trying to recover from a pretty significant or high-impact breach or outage, you shouldn’t hesitate to call someone. Also, check to see if your organization has cybersecurity insurance. If your organization has cybersecurity insurance, then your insurance company is likely going to tell you whether or not you need to bring in an outside team. Your insurance company will also likely help coordinate outside resources, such as law enforcement and incident recovery teams.

Do you think most organizations should have cybersecurity insurance? 

RG: In this day and age? Yes. But organizations need to understand that, once they sign up for cybersecurity insurance, they’re going to be scrutinized by the insurance company—under the microscope, so to speak—and that they’ll need to take their “cybersecurity health” very seriously.

Organizations need to really pay attention to what they’re paying for. My understanding is that many different types of cybersecurity insurance have very high premiums and deductibles. So, in theory, you could have a $1 million insurance policy, but a $250,000 deductible. And keep in mind that even a simple incident can cost more than $1 million in damages. Not surprisingly, I know of many organizations signing up for $10 million insurance policies. 

How can managers improve internal morale and external reputation during the recovery process?

RG: Well, leadership sets the tone. It’s like in sports—if a coach starts screaming and yelling, then it is likely that the players will start screaming and yelling. So set expectations for measured responses and reactions. 

Check in on a regular basis with your internal security team, or whoever is conducting incident recovery within your organization. Are team members holding up under pressure? Are they tired? Have you pushed them to the point where they are fatigued and making mistakes? The morale of these team members will, in part, dictate the morale of others in the organization.

Another element that can affect morale is—for lack of a better word—idleness resulting from an incident. If you have a department that can’t work due to an incident, and you know that it’s going to take several days to get things back to normal, you may not want department members coming into work and just sitting around. Think about it. At some point, these idle department members are going to grumble and bicker, and eventually affect the wider morale. 

As for improving external reputation? I don’t think it really matters, honestly, because I don’t think most people really, truly care. Why? Because everyone is vulnerable, and attacks happen all the time. At this point, cyberattacks seem to be part of the normal course and rhythm of business. Look at all the major breaches that have occurred over the past couple of years. There’s always some immediate, short-term fallout, but there’s been very little long-term fallout. Now, that being said, it is possible for organizations to suffer a prolonged PR crisis after an incident. How do you avoid this? Keep communication consistent—and limit interactions between employees and the general public. One of the worst things that can happen after an incident is for a CEO to say, “Well, we’re not sure what happened,” and then for an employee to tweet exactly what happened. Mixed messages are PR death knells.

Let’s add some context. Can you identify a business or group that, in your opinion, has handled the incident recovery process well?

RG: You know, I can’t, and for a very good reason. If a business or group does a really good job at incident recovery, then the public quickly forgets about the incident—or doesn’t even hear about it in the first place. Conversely, I can identify many businesses or groups that have handled the incident recovery process poorly, typically from a PR perspective.

Any final thoughts about resiliency?

RG: Yes. As you know, over the course of this blog series, I have repeated the idea that IT is not the same as security. These are two different concepts that should be tackled by two different teams—or approached in their appropriate context. Similarly, managers need to remember that resiliency is not an IT process—it’s a business process. You can’t just shove off resiliency responsibilities onto your IT team. As managers, you need to get directly involved with resiliency, just as you need to get directly involved with maturity, capacity, and discovery. 

So, we’ve reached the end of this blog series. Above all else, what do you hope managers will gain from it? 

RG: First, the perspective that to understand your organization’s cybersecurity is to truly understand your organization and its business. And I predict that some managers will be able to immediately improve business processes once they better grasp the cybersecurity environment. Second, the perspective that cybersecurity is ultimately the responsibility of everyone within an organization. Sure, having a dedicated security team is great, but everyone—from the CEO to the intern—plays a part. Third, the perspective that effective cybersecurity is effective communication. A siloed, closed-door approach will not work. And finally, the perspective that cybersecurity is always changing, so it’s a best practice to keep reading and learning about it. Anyone with questions should feel free to reach out to me directly.

Article
Incident recovery: Cybersecurity playbook for management #8

Artificial Intelligence, or AI, is no longer the exclusive tool of well-funded government entities and defense contractors, let alone a plot device in science fiction film and literature. Instead, AI is becoming as ubiquitous as the personal computer. The opportunities of what AI can do for internal audit are almost as endless as the challenges this disruptive technology represents.

To understand how AI will influence internal audit, we must first understand what AI is. The concept of AI—a technology that can perceive the world directly and respond to what it perceives—is often attributed to Alan Turing, though the term “Artificial Intelligence” was coined much later, in 1956, at Dartmouth College in Hanover, New Hampshire. Turing was a British scientist who developed the machine that cracked the Nazis’ Enigma code. Turing thought of AI as a machine that could convince a human that it, too, was human. Turing’s humble description of AI is as simple as it is elegant. Fast-forward some 60 years, and AI is all around us, applied in novel ways almost every day. Just consider autonomous self-driving vehicles, facial recognition systems that can spot a fugitive in a crowd, search engines that tailor our online experience, and even Pandora, which analyzes our tastes in music.

Today, in practice and in theory, there are four types of AI. Type I AI may be best represented by IBM’s Deep Blue, a chess-playing computer that made headlines in 1997 when it won a match against Russian chess champion Garry Kasparov. Type I AI is reactive. Deep Blue can beat a chess champion because it evaluates every piece on the chessboard, calculates all possible moves, then predicts the optimal move among all possibilities. Type I AI is really nothing more than a super calculator, processing data much faster than the human mind can. This is what gives Type I AI an advantage over humans.
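
Deep Blue’s evaluate-everything approach is, at its core, exhaustive game-tree search. As a toy illustration in Python (not Deep Blue’s actual algorithm, which also relied on heavy pruning and hand-tuned evaluation functions), here is a minimal minimax:

```python
def minimax(node, maximizing=True):
    """Exhaustively score a game tree, as reactive Type I AI does:
    no memory, no learning -- just evaluate every possible line of play.

    `node` is either a numeric leaf score or a list of child nodes
    (a deliberately toy representation of a game state).
    """
    if isinstance(node, (int, float)):  # leaf: a terminal position's value
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game tree: the maximizer picks the branch whose
# worst-case reply (chosen by the minimizer) is best.
tree = [[3, 12], [2, 4], [14, 5]]
print(minimax(tree))  # prints 5
```

The “super calculator” point is visible here: the advantage comes purely from evaluating every branch faster than a human could, not from anything resembling understanding.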

Type II AI, which we find in autonomous cars, is also reactive. For example, it applies brakes when it predicts a collision; but, it has a low form of memory as well. Type II AI can briefly remember details, such as the speed of oncoming traffic or the distance between the car and a bicyclist. However, this memory is volatile. When the situation has passed, Type II AI deletes the data from its memory and moves on to the next challenge down the road.

Type II AI's simple form of memory management and the ability to “learn” from the world in which it resides is a significant advancement.

The leap from Type II AI to Type III AI has yet to occur. Type III AI will not only incorporate awareness of the world around it, but will also be able to predict the responses and motivations of other entities and objects, and understand that emotions and thoughts are the drivers of behavior. Taking the autonomous car analogy to the next step, Type III AI vehicles will interact with the driver. By conducting a simple assessment of the driver’s emotions, the AI will be able to suggest a soothing playlist to ease the driver's tensions during his or her commute, reducing the likelihood of aggressive driving. Lastly, Type IV AI, a milestone that will likely be reached at some point over the next 20 or 30 years, will be self-aware. Not only will Type IV AI soothe the driver, it will interact with the driver as if it were another human riding along for the drive; think of “HAL” in Arthur C. Clarke’s 2001: A Space Odyssey.

So what does this all mean to internal auditors?
While it may be a bit premature to predict AI’s impact on the internal audit profession, AI is already being used to predict control failures in institutions with robust cybersecurity programs. When malicious code is detected and certain conditions are met, AI-enabled devices can either divert the malicious traffic away from sensitive data, or even shut off access completely until an incident response team has had time to investigate the nature of the attack and take appropriate actions. This may seem a rather rudimentary use of AI, but in large financial institutions or manufacturing facilities, minutes count—and equal dollars. Allowing AI to cut off access to a line of business that may cost the company money (and its reputation) is a significant leap of faith, and not for the faint of heart. Next generation AI-enabled devices will have even more capabilities, including behavioral analysis, to predict a user’s intentions before gaining access to data.

In the future, internal audit staff will no doubt train AI to seek conditions that require deeper analysis, or even predict when a control will fail. Yet AI will be able to facilitate the internal audit process in other ways. Consider AI’s role in data quality. Advances in inexpensive data storage (e.g., the cloud) have allowed the creation and aggregation of volumes of data subject to internal audit, making the testing of the data’s completeness, integrity, and reliability a challenging task considering the sheer volume of data. Future AI will be able to continuously monitor this data, alerting internal auditors not only of the status of data in both storage and motion, but also of potential fraud and disclosures.
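
As a toy example of the continuous data-quality monitoring described here, the following Python sketch fingerprints records with SHA-256 hashes and reports anything altered, added, or removed between audit passes (the record format is a deliberate simplification):

```python
import hashlib

def snapshot(records):
    """Fingerprint each record so later runs can detect tampering.
    `records` maps a record ID to its raw bytes (a toy representation)."""
    return {rid: hashlib.sha256(data).hexdigest() for rid, data in records.items()}

def detect_changes(baseline, records):
    """Compare a fresh snapshot against the baseline and report record IDs
    that were altered, added, or removed since the last audit pass."""
    current = snapshot(records)
    altered = [r for r in baseline if r in current and current[r] != baseline[r]]
    added = [r for r in current if r not in baseline]
    removed = [r for r in baseline if r not in current]
    return {"altered": altered, "added": added, "removed": removed}

# Example: one invoice is quietly modified between audit passes.
before = {"inv-001": b"amount=100", "inv-002": b"amount=250"}
base = snapshot(before)
after = {"inv-001": b"amount=900", "inv-002": b"amount=250"}
print(detect_changes(base, after))  # {'altered': ['inv-001'], 'added': [], 'removed': []}
```

Hash comparison covers integrity and completeness cheaply; the AI-driven version the article anticipates would layer anomaly detection on top to flag records that are unchanged but suspicious.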

The analysis won’t stop there.
AI will measure the performance of the data in meeting organizational objectives, and suggest where efficiencies can be gained by focusing technical and human resources to where the greatest risks to the organization exist in near real-time. This will allow internal auditors to develop a common operating picture of the day-to-day activities in their business environments, alerting internal audit when something doesn’t quite look right and requires further investigation.

As promising as AI is, the technology comes with some ethical considerations. Because AI is created by humans, it is not always free of human flaws. For instance, AI can become unpredictably biased. AI used in facial recognition systems has made racial judgments based on certain common facial characteristics. In addition, AI that gathers data from multiple sources that span a person’s financial status, credit status, education, and individual likes and dislikes could be used to profile certain groups for nefarious intentions. Moreover, AI has the potential to be weaponized in ways that we have yet to comprehend.

There is also the question of how internal auditors will be able to audit AI. Keeping AI safe from internal fraudsters and external adversaries is going to be paramount. AI’s ability to think and act faster than humans will challenge all of us to create novel ways of designing and testing controls to measure AI’s performance. This, in turn, will likely make partnerships with consultants that can fill knowledge gaps even more valuable. 

Challenges and pitfalls aside, AI will likely have a tremendous positive effect on the internal audit profession by simultaneously identifying risks and evaluating processes and control design. In fact, it is quite possible that the first adopters of AI in many organizations may not be the cybersecurity departments at all, but rather the internal auditor’s office. As a result, future internal auditors will become highly technical professionals and perhaps trailblazers in this new and amazing technology.

Article
Artificial intelligence and the future of internal audit

The world of professional sports is rife with instability and insecurity. Star athletes leave or become injured; coaching staff make bad calls or public statements. The ultimate strength of a sports team is its ability to rebound. The same holds true for other groups and businesses. Chapter 7 in BerryDunn’s Cybersecurity Playbook for Management looks at how organizations can prepare for, and respond to, incidents.

The final two chapters of this Cybersecurity Playbook for Management focus on the concept of resiliency. What exactly is resiliency?

RG: Resiliency refers to an organization’s ability to keep the lights on—to keep producing—after an incident. An incident is anything that disrupts normal operations, such as a malicious cyberattack or an innocent IT mistake.

Among security professionals, attitudes toward resiliency have changed recently. Consider the fact that the U.S. Department of Defense (DOD) has come out and said, in essence, that cyberwarfare is a war that it cannot win—because cyberwarfare is so complex and so nuanced. The battlefield changes daily and the opponents have either a lot of resources or a lot of time on their hands. Therefore, the DOD is placing an emphasis on responding and recovering from incidents, rather than preventing them.

That’s sobering.

RG: It is! And businesses and organizations should take note of this attitude change. Protection, which was once the start and endpoint for security, has given way to resiliency.

When and why did this attitude change occur?

RG: Several years ago, security experts started to grasp just how clever certain nation states, such as China and Russia, were at using malicious software. If you had to point to one significant event, it was likely the 2013 Target breach.

What are some examples of incidents that managers need to prepare for?

RG: Examples range from external breaches and insider threats to instances of malfeasance or incompetence. Different types of incidents can lead to the same types of results—yet you can’t treat incidents with a single broad view. Managers should work with their teams to create incident response plans that reflect the threats associated with their specific line of business. A handful of general incident response plans isn’t going to cut it.

Managers need to work with their teams to develop a specific incident response plan for each specific type of incident. Why? Well, think of it this way: Your response to a careless employee should be different from your response to a malicious employee, for a whole host of legal reasons.

Incident response is not a cookie-cutter process. In fact, it is quite the opposite. This is one of the reasons I highly suggest that security teams include staff members with liberal arts backgrounds. I’m generalizing, but these people tend to be creative. And when you’re responding to incidents, you want people who can look at a problem or situation from a global or external perspective, not just a technical or operational perspective. These team members can help answer questions such as, what does the world see when they look at our organization? What organizational information might be valuable to, or targeted by, malicious actors? You’ll get some valuable fresh perspectives.

How short or long should the typical incident response plan be?

RG: They can be as short as needed; I often see good incident response plans no more than three or four pages in length. However, it is important that incident response plans are task oriented, so that it is clear who does what next. And when people follow an incident response plan, they should physically or digitally check off each activity, then record each activity.
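
The check-off-and-record discipline described here can be modeled in a few lines. This Python sketch (the step names are hypothetical) timestamps each completed step, leaving the audit trail described above:

```python
from datetime import datetime, timezone

class ResponseChecklist:
    """Task-oriented incident response plan: each step is checked off
    and timestamped as it is completed, leaving an audit trail."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.log = []  # (step, completed-at) pairs, in completion order

    def complete(self, step):
        if step not in self.steps:
            raise ValueError(f"not in the plan: {step}")
        if step not in (s for s, _ in self.log):  # ignore duplicate check-offs
            self.log.append((step, datetime.now(timezone.utc)))

    def remaining(self):
        done = {s for s, _ in self.log}
        return [s for s in self.steps if s not in done]

plan = ResponseChecklist([
    "Isolate the affected host",
    "Notify the incident response lead",
    "Preserve forensic evidence",
    "Begin eradication",
])
plan.complete("Isolate the affected host")
print(plan.remaining())
```

In practice this lives in help desk or workflow software rather than custom code, but the data model is the same: an ordered plan, a completion log, and a clear view of who does what next.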

What system or software do you recommend for recording incidents and responses?

RG: There are all types of help desk software you can use, including free and open source software. I recommend using help desk software with workflow capabilities so your team can assign and track tasks.

Any other tips for developing incident response plans?

RG: First, managers should work with, and solicit feedback from, different data owners and groups within the organization—such as IT, HR, and Legal—when developing incident response plans. If you create these documents in a vacuum, they will be useless.

Second, managers and their teams should take their time and develop the most “solid” incident response plans possible. Don’t rush the process. The effectiveness of your incident response plans will be critical in assessing your organization’s ability to survive a breach. Because of this, you should be measuring your response plans through periodic testing, like conducting tabletop exercises.

Third, keep your organization’s customers in mind when developing these plans. You want to make sure external communications are consistent, accurate, and within the legal requirements for your business or industry. The last thing you want is customers receiving conflicting messages about the incident. Not only can this cause unnecessary grief for you, it can also cause an immeasurable loss of customer confidence.

Are there any decent incident response plans in the public domain that managers and their teams can adapt for their own purposes?

RG: Yes. My default reference is the National Institute of Standards and Technology (NIST). NIST has many special publications that describe the incident response process, how to develop a solid plan, and how to test your plan.

Should organizations have dedicated incident response teams?
RG: Definitely. Larger organizations usually have the resources and ability to staff these teams internally. Smaller organizations may want to consider hiring a reputable third party to act as an incident response team. The key with hiring a third party? Don’t wait until an incident occurs! If you wait, you’re going to panic, and make panic-based decisions. Be proactive and hire a third party on retainer.

That said, even larger organizations should consider hiring a third party on an annual basis to review incident response plans and processes. Why? Because every organization can grow complacent, and complacency kills. A third party can help gauge the strengths and weaknesses of your internal incident response teams, and provide suggestions for general or specific training. A third party can also educate your organization about the latest and greatest cyber threats.

Should managers empower their teams to conduct internal “hackathons” in order to test incident response?

RG: Sure! It’s good practice, and it can be a lot of fun for team members. There are a few caveats. First, don’t call it a “hackathon.” The word can elicit negative reactions from upper management—whose support you really need. Call it “active testing” or “continuous improvement exercises.” These activities allow team members to think creatively, and they are opportunities to boost cybersecurity knowledge. Second, be prepared for pushback. Some managers worry that if team members gain more cybersecurity skills, they’ll eventually leave the organization for another, higher-paying job. I think you should be committed to the growth of your team members; it’ll only make your organization more secure.

What are some best practices managers should follow when reporting incidents to their leadership?

RG: Keep the update quick, brief, and to the point. Leave out the technical jargon, and keep everything in a business context. This way, leadership can grasp the ramifications of the event and understand what matters. Be prepared to outline how you’re responding and what actions leadership can take to support the incident response team and protect the business. In the last chapter, I mentioned what I call the General Colin Powell method of reporting, and I suggest using that method when informing leadership: tell them what you know, what you don’t know, what you think, and what you recommend. Have answers, or at least a plan.

Above all else, don’t scare leadership. If you present them with panic, you’re going to get panic back. Be a calm voice in the storm. Management will respond better, as will your team.

Another thing to keep in mind is different business leaders have different responses to this sort of news. An elected official, for example, might react differently than the CEO of a private company, simply due to possible political fallout. Keep this context in mind when reporting incidents. It can help you craft the message.

How much organization-wide communication should there be about incidents?

RG: That’s a great question, but a tough one to answer. Transparency is good, but it can also unintentionally lead to further incidents. Do you really want to let your whole organization know about an exploitable weakness? Also, employees can spread information about incidents on social media, which can actually lead to the spread of misinformation. If you are in doubt about whether or not to inform the entire organization about an incident, refer to your Legal Department. In general, organization-wide communication should be direct: We’ve had an incident; these are the facts; this is what you are allowed to say on social media; and this is what you’re not allowed to say on social media.

Another great but tough question: When do you tell the public about an incident? For this type of communication, you’re going to need buy-in from various sources: leadership, Legal, HR, and your PR team or external PR partners. You have to make sure the public messaging is consistent. Otherwise, citizens and the media will try to poke holes in your official story. And that can lead to even more issues.

So what’s next?

RG: Chapter 8 will focus on how managers can help their organizations recover from a cybersecurity incident.

Read Incident recovery: Cybersecurity playbook for management #8 here.

Article
Incident response: Cybersecurity playbook for management #7

Any sports team can pull off a random great play. Only the best sports teams, though, can pull off great plays consistently — and over time. The secret to this lies in the ability of the coaching staff to manage the team on a day-to-day basis, while also continually selling their vision to the team’s ownership. Chapter Six in BerryDunn’s Cybersecurity Playbook for Management looks at how managers can achieve similar success through similar actions.

The title of this chapter is “The Workflow.” What are we talking about today?

RG: In previous chapters, we’ve walked managers through cybersecurity concepts like maturity, capacity, and discovery. Today, we’re going to discuss how you can foster a consistent and repeatable cybersecurity program — the cybersecurity workflow, if you will. And for managers, this is where game planning begins. To achieve success, they need to effectively oversee their team on a day-to-day basis, and continually sell the cybersecurity program to the business leadership for whom they work — the board or CEO.

Let’s dive right in. How exactly do managers oversee a cybersecurity program on a day-to-day basis?

RG: First, get out of the way, and let your team do its work. By this point, you should know what your team is capable of. Therefore, you need to trust your team. Yet you should always verify. If your team recommends purchasing new software, have your team explain, in business terms, the reasons for the purchase. Then verify those reasons. Operationalizing tools, for example, can be difficult and costly, so make sure your team puts together a road map with measurable outcomes before you agree to buy any tools — even if they sound magical!

Second, empower your team by facilitating open dialogue. If your team brings you bad news, listen to the bad news — otherwise, you’ll end up alienating people. Know that your team is going to find things within your organization’s “auditable universe” that are going to make you uncomfortable from a cybersecurity point of view. Nevertheless, you need to encourage your team to share the information, so don’t overreact.

Third, give your team a communication structure that squelches a crisis-mode mentality — “Everything’s a disaster!” In order to do that, make sure your team gives every weakness or issue they discover a risk score, and log the score in a risk register. That way, you can prioritize what is truly important.
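The scoring-and-register idea above can be pictured in a few lines of code. This is a minimal sketch, assuming the common likelihood-times-impact scoring convention; the field names, scales, and example findings are illustrative, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row in a simple cybersecurity risk register (illustrative fields)."""
    finding: str
    owner: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # A common convention: risk score = likelihood x impact (max 25).
        return self.likelihood * self.impact

def prioritize(register: list[RiskEntry]) -> list[RiskEntry]:
    """Sort the register so the highest-scoring risks come first."""
    return sorted(register, key=lambda r: r.score, reverse=True)

# Hypothetical findings logged by the team.
register = [
    RiskEntry("Unpatched VPN appliance", "IT", likelihood=4, impact=5),
    RiskEntry("Shared admin password", "Security", likelihood=3, impact=4),
    RiskEntry("Stale test account", "IT", likelihood=2, impact=2),
]

for entry in prioritize(register):
    print(f"{entry.score:>2}  {entry.finding} (owner: {entry.owner})")
```

Ranking every finding this way, rather than treating each one as a disaster, is exactly the crisis-mode squelch described above: the 20s get attention before the 4s.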

Fourth, resolve conflicts between different people or groups on your team. Take, for example, conflict between IT staff and security staff (read more here). It is a common issue, as there is natural friction between these groups, so be ready to deal with it. IT is focused on running operations, while security is focused on protecting operations. Sometimes, protection mechanisms can disrupt operations. Therefore, managers need to act as peacemakers between the two groups. Don’t show favoritism toward one group or the other, and don’t get involved in nebulous conversations regarding which group has “more skin in the game.” Instead, focus on what is best for your organization from a business perspective. The business perspective ultimately trumps either IT or security concerns.

Talk about communication for a moment. Managers often come from business backgrounds, while technical staff often come from IT backgrounds. How do you foster clear communication across this divide?
RG: Have people talk in simple terms. Require everyone on your team to use plain language to describe what they know or think. I recommend using what I call the Colin Powell method of reporting:

1. Tell me what you know.
2. Tell me what you don’t know.
3. Tell me what you think.
4. Tell me what you recommend.

When you ask team members questions in personal terms — “Tell me what you know” — you tend to receive easy-to-understand, jargon-free answers.

Something that we really haven’t talked about in this series is cybersecurity training. Do you suggest managers implement regular cybersecurity training for their team?
RG: This is complicated, and my response will likely be a little controversial to many. Yes, most organizations should require some sort of cybersecurity training. But I personally would not invest a lot of time or money into cybersecurity training beyond the basics for most users and specific training for technical staff. Instead, I would plan to spend more money on resiliency — responding to, and recovering from, a cybersecurity attack or incident. (We’ll talk about resiliency more in the next two chapters.) Why? Well, you can train people all day long, but it only takes one person to be malicious, or to make an innocent mistake, to cause a cybersecurity attack or incident. Let’s look at my point from a different perspective. Pretend you’re the manager of a bank, and you have some money to spend on security. Are you going to spend that money on training your employees how to identify a robber? Or are you going to spend that money on a nice, state-of-the-art vault?

Let’s shift from talking about staff to talking about business leadership. How do managers sell the cybersecurity program to them?
RG: Use business language, not technical language. For instance, a CEO may not necessarily care much about the technical behavior of a specific malware, but they are going to really care about the negative effects that malware can have on the business.

Also, keep the conversation short, simple, and direct. Leadership doesn’t have time to hear about all you’re doing. Leadership wants progress updates and a clear sense of how the cybersecurity program is helping the business. I suggest discussing three to four high-priority security risks, and summarizing how you and your team are addressing those risks.

And always remember that in times of crisis, those who keep a cool head tend to gain the most support. Therefore, when talking to the board or CEO, don’t be the bearer of “doom and gloom.” Be calm, positive, empowering, and encouraging. Provide a solution. And make leadership part of the solution by reminding them that they, too, have cybersecurity responsibilities, such as communicating the value of the cybersecurity program to the organization — internal PR, in other words.

How exactly should a manager communicate this info to leadership? Do you suggest one-on-one chats, reports, or presentations?
RG: This all depends on leadership. You know, some people are verbal learners; some people are visual learners. It might take some trial and error to figure out the best medium for conveying your information, and that’s OK. Remember: cybersecurity is an ongoing process, not a one-and-done event. However, if you are going to pursue the one-on-one chat route, just be prepared, materials-wise. If leadership asks for a remediation plan, then you better have that remediation plan ready to present!

What is one of the biggest challenges that managers face when selling cybersecurity programs to leadership?
RG: One of the biggest challenges is addressing questions about ROI, because there often are no quantifiable financial ROIs for cybersecurity. But organizations have to protect themselves. So the question is, how much money is your organization willing to spend to protect itself? Or, in other words, how much risk can your organization reduce — and does this reduction justify the cost?

One possible way to communicate the value of cybersecurity to leadership is to compare it to other necessary elements within the organization, such as HR. What is the ROI of HR? Who knows? But do you really want your organization to lack an HR department? Think of all the possible logistical and legal issues that could swamp your organization without one. It’s terrifying to think about! And the same goes for cybersecurity.

We’ve talked about how managers should communicate with their team and with business leadership. What about the organization as a whole?
RG: Sure! Regular email updates are great, especially if you keep them “light,” so to speak. Don’t get into the minutiae. That said, I also think a little bit of secrecy goes a long way. Organizations need to be aware of, and vigilant toward, insider threats. Loose lips sink ships, you know? Gone are the days when a person works for an organization for 30+ years. Employees come and go pretty frequently. As a result, the concept of company loyalty has changed. So make sure your organization-wide updates don’t give away too much cybersecurity information.

So what’s next?
RG: Chapter 7 will focus on how managers can help their organizations respond to a cybersecurity attack or incident.

Read Incident response: Cybersecurity playbook for management #7 now.

Article
The workflow: Cybersecurity playbook for management #6

A professional sports team is an ever-changing entity. To have a general perspective on the team’s fluctuating strengths and weaknesses, a good coach needs to trust and empower their staff to discover the details. Chapter 5 in BerryDunn’s Cybersecurity Playbook for Management looks at how discovery can help managers understand their organization’s ever-changing IT environment. 

What is discovery, and how does it connect to capacity?
RG: Discovery is the process of mapping your organization’s capacity—people, processes, and tools—so you understand what your organization’s IT environment contains. In other words, it’s the auditing of your IT environment.

Of course, the most valuable thing within your IT environment, other than the people who access it, is the “thing” that drives your business. Often this thing is data, but it could be proprietary processes or machinery. For the purposes of this blog, we’ll focus on data. Discovery naturally answers questions such as:

• What in our IT environment is important to our business?
• How is it being used?
• Who has access to it, and how can we better protect it? 

How can managers tackle discovery?
RG: First, you need to understand that discovery requires accepting the fact that the environment is always evolving. Discovery is not a one-and-done process—it never ends. People introduce new things, like updated software, into IT environments all the time. Your IT environment is an always-shifting playing field. Think of Amazon’s Alexa devices. When someone plugs one into your internal wireless network, they’ve just expanded your attack surface by introducing a new device with its own set of vulnerabilities.

Second, you have to define the “auditable universe” by establishing manageable boundaries in direct proportion to your discovery team’s capabilities. I often see solicitations for proposals that ask for discovery of all assets in an IT environment. That could include a headquarters building, 20 satellite offices, and remote workers, and is going to take a long time to assess. I recently heard of a hospital discovering 41,000 internet-connected devices on their network—mostly Internet of Things (IoT) resources, such as heart monitors. Originally, the hospital had only been aware of about one-third of these devices. Keeping your boundaries realistic and manageable can prevent your team from being overwhelmed.

Third, managers should refrain from getting directly involved with discovery, because it’s a pretty technical and time-consuming process. You should task a team to conduct discovery, and provide that team with adequate tools. There are a lot of good tools that can help map networks and manage assets; we’ll talk about them later in this blog. Managers should mainly concern themselves with the results of discovery and trust in the team’s ability to competently map out the IT environment. Remember, the IT environment is always evolving, so even as the results roll in, things are changing.

Who should managers select for the discovery team?
RG: Ideally, various groups of people. For instance, it makes sense for HR staff to conduct the people part of discovery. Likewise, it makes sense for data owners—staff responsible for certain data—to conduct the process part of discovery, and for IT staff to conduct the tool part.

However, I should point out that if you have limited internal resources, then the IT staff can conduct all three parts of discovery, working closely with all stakeholders. IT staff will have a pretty good sense of where data is held within the organization’s IT environment, and they will develop an understanding of what is important to the organization.

Could an organization’s security staff conduct discovery?
RG: Interestingly enough, security staff don’t always have day-to-day interactions with data. They are more focused on overall data protection strategies and tactics. Therefore, it makes more sense to leverage other staff, but the results of discovery (e.g., knowing where data resides, understanding the sensitivity of data) need to be shared with security staff. Ultimately, this knowledge will help security staff better protect your data.

What about hiring external resources to conduct discovery?
RG: It depends on what you’re trying to do. If the goal of discovery is to comply with some sort of regulatory standard or framework, then yes, hiring external resources makes sense. These resources could come in and, using the discovery process, conduct a formal assessment. It may also make sense to hire external resources if you’re short-staffed, or if you have a complex environment with undocumented data repositories, processes, and tools. Yet in each of these scenarios, the external resources will only be able to provide a point-in-time baseline. 

Otherwise, I recommend leveraging your internal staff. An internal discovery team should be able to handle the task if adequately staffed and resourced, and team members will learn a lot in the process. And as discovery never really ends, do you want to have to perpetually hire external resources?

People make up a big part of capacity. Should the discovery team focus on people and their roles in this process?
RG: Yes! It sounds odd that people and their roles are included in discovery, but it is important to know who is using and touching your data. At a minimum, the discovery team needs to conduct background checks. (This is one example of where HR staff need to be part of the discovery process.)

How can the discovery team best map processes?
RG: The discovery team has to review each process with the respective data owner. Now, if you are asking the data owners themselves to conduct discovery, then you should have them illustrate their own workflows. There are various process mapping tools, such as Microsoft Visio, that data owners can use for this.

The discovery team needs to acknowledge that data owners often perform their processes correctly through repetition—the problems or potential vulnerabilities stem from an inherently flawed or insecure process, or from having one person in charge of too many processes. Managers should watch out for this. I’ll give you a perfect example of the latter situation. I once helped a client walk through the process of system recovery.

During the process, we discovered that the individual responsible for system recovery also had the ability to manipulate database records and to print checks. In theory, that person could have cut themselves a check and then erased its history from the system. That’s a big problem!

Other times, data owners perform their processes correctly, but inadvertently use compromised or corrupted tools, such as free software downloaded from the internet. The discovery team has to identify needed policy and procedure changes to prevent these situations from happening.

Your mention of vulnerable software segues nicely to the topic of tools. How can the discovery team best map the technologies the organization uses?
RG: Technology is inherently flawed. You can’t go a week without hearing about a new vulnerability in a widely used system or application. I suggest researching network scanning tools for identifying hosts within your network; vulnerability testing tools for identifying technological weaknesses or gaps; and penetration testing tools for simulating cyber-attacks to assess cybersecurity defenses.
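To make the first of those tool categories concrete, here is a toy sketch of what a network/port scanner does at its core: attempt connections and note what answers. Real scanners automate this across thousands of hosts and ports; this stripped-down version uses only Python’s standard library, and should only ever be pointed at hosts you are authorized to test (here, a local listener we create ourselves):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; True if something is listening.

    A minimal illustration of what host/port scanning tools automate
    at scale. Only probe hosts and ports you are authorized to test.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a listener we control: bind an ephemeral local port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", open_port))  # prints True: something is listening
listener.close()
```

Vulnerability and penetration testing tools build on this same primitive, adding service fingerprinting and known-weakness checks on top of the basic "who answers?" question.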

Let’s assume a manager has tasked a team to conduct discovery. What’s the next step?
RG: If you recall, in the previous blog I discussed the value of adopting a cybersecurity risk register, which is a document used to list the organization’s cybersecurity risks, record required risk mitigation actions, and identify who “owns” the risk. The next step is for your discovery team to start completing the risk register. The manager uses this risk register, and subsequent discussions with the team, to make corresponding business decisions to improve cybersecurity, such as purchasing new tools—and to measure the progress of mitigating any vulnerabilities identified in the discovery process. A risk register can become an invaluable resource planning tool for managers.

For discovery purposes, what’s the best format for a cybersecurity risk register?
RG: There are very expensive programs an organization can use to create a risk register. Some extremely large banking companies use the RSA Archer GRC platform. However, you can build a very simple risk register in Excel. An Excel spreadsheet would work well for small and some mid-sized organizations, but there are other relatively inexpensive solutions available. I say this because managers should aim for simplicity. You don’t want the discovery team getting bogged down by a complex risk register.
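As a sketch of just how simple that spreadsheet-style register can be, the following writes a CSV file that Excel (or any spreadsheet program) opens directly. The column names and sample row are illustrative assumptions, not a prescribed format:

```python
import csv
from pathlib import Path

# Columns loosely modeled on the register described above; names are illustrative.
FIELDS = ["risk_id", "description", "owner", "likelihood", "impact", "score", "mitigation"]

def write_register(path: Path, rows: list[dict]) -> None:
    """Write the risk register as a CSV file openable in Excel."""
    with path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for row in rows:
            # Derive the score so it stays consistent with likelihood x impact.
            row = dict(row, score=int(row["likelihood"]) * int(row["impact"]))
            writer.writerow(row)

rows = [
    {"risk_id": "R-001", "description": "No MFA on VPN", "owner": "Security",
     "likelihood": 4, "impact": 5, "mitigation": "Roll out MFA"},
]
write_register(Path("risk_register.csv"), rows)
```

A plain file like this keeps the discovery team focused on filling in risks rather than wrestling with a complex platform, which is exactly the simplicity argument above.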

Finally, what are some discovery resources and reference guides that managers should become familiar with and utilize?
RG: I recommend the National Institute of Standards and Technology (NIST) Special Publication series. They outline very specific and detailed discovery methodologies you can use to improve your discovery process.

So what’s next?
RG: Chapter 6 will focus on synthesizing maturity, capacity, and discovery to create a resilient organization from a cybersecurity point of view.

Read The workflow: Cybersecurity playbook for management #6 here.

Article
Discovery: Cybersecurity playbook for management #5

Just as sports teams need to bring in outside resources — a new starting pitcher, for example, or a free agent QB — in order to get better and win more games, most organizations need to bring in outside resources to win the cybersecurity game. Chapter 4 in our Cybersecurity Playbook for Management looks at how managers can best identify and leverage these outside resources, known as external capacity.

In your last blog, you mentioned that external capacity refers to outside resources — people, processes, and tools — you hire or purchase to improve maturity. So let’s start with people. What advice would you give managers for hiring new staff?
RG: I would tell them to search for new staff within their communities of interest. For instance, if you’re in financial services, use the Financial Services Information Sharing and Analysis Center (FS-ISAC) as a resource. If you’re in government, look to the Multi-State Information Sharing and Analysis Center (MS-ISAC). Perhaps more importantly, I would tell managers what NOT to do.

First, don’t get caught up in the certification trap. There are a lot of people out there who are highly qualified on paper, but who don’t have a lot of real-world experience. Make sure you find people with relevant experience.

Second, don’t blindly hire fresh talent. If you need to hire a security strategist, don’t hire someone fresh out of college. While they might know security theories, they’re not going to know much about business realities.

Third, vet your prospective hires. Run national background checks on them, and contact their references. While there is a natural tendency to trust people, especially cybersecurity professionals, you need to be smart, as there are lots of horror stories out there. I once worked for a bank in Europe that had hired new security and IT staff. The bank noticed a pattern: these workers would work for six or seven months, and then just disappear. Eventually, it became clear that this was an act of espionage. The bank was ripe for acquisition, and a second bank used these workers to gather intelligence so it could make a takeover attempt. Every organization needs to be extremely cautious.

Finally, don’t try to hire catchall staff. People in management often think: “I want someone to come in and rewrite all of our security policies and procedures, and oversee strategic planning, and I also want them to work on the firewall.” It doesn’t work that way. A security strategist is very different from a firewall technician, and the two have completely different areas of focus. Security strategists focus on the high-level relationship between business processes and outside threats, not technical operations. Another point to consider: if you really need someone to work on your firewall, look at your internal capacity first. You probably already have staff who can handle that. Save your budget for other resources.

You have previously touched upon the idea that security and IT are two separate areas.
RG: Yes. And managers need to understand that. Ideally, an organization should have a Security Department and an IT Department. Obviously, IT and Security work hand-in-glove, but there is a natural friction between the two, and that is for good reason. IT is focused on running operations, while security is focused on protecting them. Sometimes, protection mechanisms can disrupt operations or impede access to critical resources.

For example, two-factor authentication slows down the time to access data. This friction often upsets end users and IT staff alike; people want to work unimpeded, so a balance has to be struck between resource availability and safeguarding the system itself. Simply put, IT sometimes cares less about security and more about keeping end users happy — and while that is important, security is equally important.

What’s your view on hiring consultants instead of staff?
RG: There are plenty of good security consultants out there. Just be smart. Vet them. Again, run national background checks, and contact their references. Confirm the consultant is bonded and insured. And don’t give them the keys to the kingdom. Be judicious when providing them with administrative passwords, and distinguish them in the network so you can keep an eye on their activity. Tell the consultant that everything they do has to be auditable. Unfortunately, there are consultants who will set up shop and pursue malicious activities. It happens — particularly when organizations hire consultants through a third-party hiring agency. Sometimes, these agencies don’t conduct background checks on consultants, and instead expect the client to.

The consultant also needs to understand your business, and you need to know what to expect for your money. Let’s say you want to hire a consultant to implement a new firewall. Firewalls are expensive and challenging to implement. Will the consultant simply implement the firewall and walk away? Or will the consultant also teach and train your team in using and modifying the firewall? You need to know this up front. Ask questions, and agree in writing on the scope of the engagement — before the engagement begins.

What should managers be aware of when they hire consultants to implement new processes?
RG: Make sure that the consultant understands the perspectives of IT, security, and management, because the end result of a new process is always a business result, and new processes have to make financial sense.

Managers need to leverage the expertise of consultants to help make process decisions. I’ll give you an example. In striving to improve their cybersecurity maturity, many organizations adopt a cybersecurity risk register, which is a document used to list the organization’s cybersecurity risks, record actions required to mitigate those risks, and identify who “owns” the risk. However, organizations usually don’t know best practices for using a risk register. This sort of tool can easily become complex and unruly, and people lose interest when extracting data from a register becomes difficult or time-consuming.

A consultant can help train staff in processes that maximize a risk register’s utility. Furthermore, there’s often debate about who owns certain risks. A consultant can objectively arbitrate who owns each risk. They can identify who needs to do X, and who needs to do Y, ultimately saving time, improving staff efficiency, and greatly improving your chances of project success.

Your mention of a cybersecurity risk register naturally leads us to the topic of tools. What should managers know about purchasing or implementing new technology?
RG: As I mentioned in the last blog, organizations often buy tools, yet rarely maximize their potential. So before managers give the green light to purchase new tools, they should consider ways of leveraging existing tools to perform more, and more effective, processes.

If a manager does purchase a new tool, they should purchase one that is easy to use. Long learning curves can be problematic, especially for smaller organizations. I recommend managers seek out tools that automate cybersecurity processes, making the processes more efficient.

For example, you may want to consider tools that perform continuous vulnerability scans or that automatically analyze data logs for anomalies. These tools may look expensive at first glance, but you have to consider how much it would cost to hire multiple staff members to look for vulnerabilities or anomalies.

And, of course, managers should make sure that a new tool will truly improve their organization’s safeguards against cyber-attack. Ask yourself and your staff: Will this tool really reduce our risk?

Finally, managers need to consider eliminating tools that aren’t working or being used. I once worked with an organization that had expensive cybersecurity tools that simply didn’t function well. When I asked why it kept them, I was told that the person responsible for them was afraid that a breach would occur if they were removed. Meanwhile, these tools were costing the organization around $60,000 a month. That’s real money. The lesson: let business goals, and not fear, dictate your technology decisions.

So, what’s next?
RG: So far in this series we have covered the concepts of maturity and capacity. Next, we’re going to look at the concept of discovery. Chapter 5 will focus on internal audit strategies that you can use to determine, or discover, whether or not your organization is using tools and processes effectively.

Read Discovery: Cybersecurity playbook for management #5 now.

Article
External capacity: Cybersecurity playbook for management #4

It may be hard to believe some seasons, but every professional sports team currently has the necessary resources — talent, plays, and equipment — to win. The challenge is to identify and leverage them for maximum benefit. And every organization has the necessary resources to improve its cybersecurity. Chapter 3 in BerryDunn’s Cybersecurity Playbook for Management looks at how managers can best identify and leverage these resources, known collectively as internal capacity.

The previous two chapters focused on using maturity models to improve an organization’s cybersecurity. The next two are about capacity. What is the difference, and connection, between maturity and capacity, and why is it important? 
RG: Maturity refers to the “as is” state of an organization’s cybersecurity program compared to its desired “to be” state. Capacity refers to the resources an organization can use to reach the “to be” state. There are two categories of capacity: external and internal. External capacity refers to outside resources — people, processes, and tools — you can hire or purchase to improve maturity. (We’ll discuss external capacity more in our next installment.) Internal capacity refers to in-house people, processes, and tools you can leverage to improve maturity. 

Managers often have an unclear picture of how to use resources to improve cybersecurity. This is mainly because of the many demands found in today's business environments. I recommend managers conduct internal capacity planning. In other words, they need to assess the internal capacity needed to increase cybersecurity maturity. Internal capacity planning can answer three important questions:

1. What are the capabilities of our people?
2. What processes do we need to improve?
3. What tools do we have that can help improve processes and strengthen staff capability?

What does the internal capacity planning process look like?
RG: Internal capacity planning is pretty easy to conduct, but there’s no standard model. It’s not a noun, like a formal report. It’s a verb — an act of reflection. It’s a subjective assessment of your team members’ abilities and their capacity to perform a set of required tasks to mature the cybersecurity program. These are not easy questions to ask, and the answers can be equally difficult to obtain. This is why you should be honest in your assessment and urge your people to be honest with themselves as well. Without this candor, your organization will spin its wheels reaching its desired “to be” state.

Let’s start with the “people” part of internal capacity. How can managers assess staff?
RG: It’s all about communication. Talk to your staff, listen to them, and get a sense of who has the ability and desire to improve cybersecurity maturity in certain subject areas or domains, like Risk Management or Event and Incident Response. If you work at a small organization, start by talking to your IT manager or director. This person may not have a lot of cybersecurity experience, but he or she will have a lot of operational risk experience. IT managers and directors tend to gravitate toward security because it’s part of their overall responsibilities. Involving them also ensures they have a voice in the maturing process.

In the end, you need to match staff expertise and skillsets to the maturity subject areas or domains you want to improve. While an effective manager already has a sense of staff expertise and skillsets, you can add a SWOT analysis to clarify staff strengths, weaknesses, opportunities, and threats.

The good news: In my experience, most organizations have staff who will take to new maturity tasks pretty quickly, so you don’t need to hire a bunch of new people.

What’s the best way to assess processes?
RG: Again, it’s all about communication. Talk to the people currently performing the processes, listen to them, and confirm they are giving you honest feedback. You can have all the talent in the world, and all the tools in the world — but if your processes are terrible, your talent and tools won’t connect. I’ve seen organizations with millions of dollars’ worth of tools without the right people to use them, and vice versa. In both situations, processes suffer. Processes are the connective tissue between people and tools. And keep in mind, even if your current processes are good, most tend to grow stale. Once you assess them, you’ll probably need to develop some new processes or improve the ones in place.

How should managers and staff develop new processes?
RG: Developing new processes can be difficult; we’re talking change, right? As a manager, you have to make sure the staff tasked with developing them are savvy enough to ensure the new processes actually improve your organization’s maturity. Developing a new process with little or no connection to maturity is a waste of time and money. Just because measuring maturity is iterative doesn’t mean your approach to maturing cybersecurity has to be. You need to take a holistic approach across a wide range of cybersecurity domains or subject areas. Avoid quick, one-and-done processes. New processes should be functional, repeatable, and sustainable; if not, you’ll overburden your team. And remember, it takes time to develop new processes. If you have an IT staff that’s already struggling to keep up with their operational responsibilities, and you ask them to develop a new process, you’re going to get a lot of pushback. You and the IT staff may need to get creative — or look toward outside resources, which we’ll discuss in chapter 4.

What’s the best way to assess tools?
RG: Many organizations buy many tools but rarely maximize their potential. And on occasion, organizations buy tools and never install them. The best way to assess tools is to select staff to first inventory the organization’s tools, and then analyze them to see how they can help improve maturity for a certain domain or subject area. Ask questions: Are we really getting the maximum outputs those tools offer? Are they being used as intended?

I’ll give you an example. There’s a company called SolarWinds that creates excellent IT management tools. I have found many organizations use SolarWinds tools in very specific, but narrow, ways. If your organization has SolarWinds tools, I suggest reaching out to your IT staff to see if the organization is leveraging the tools to the greatest extent possible. SolarWinds tools can do so much that many organizations rarely leverage all of their valuable features.

What are some pitfalls to avoid when conducting internal capacity planning?
RG: Don’t assign maturity tasks to people who have been with the organization for a really long time and are very set in their ways, because they may be reluctant to change. As improving maturity is a disruptive process, you want to assign tasks to staff eager to implement change. If you are delegating the supervision of the maturity project, don’t delegate it to a technology-oriented person. Instead, use a business-oriented person. This person doesn’t need to know a lot about cybersecurity — but they need to know, from a business perspective, why you need to implement the changes. Otherwise, your changes will be more technical in nature than strategic. Finally, don’t delegate the project to someone who is already fully engaged on other projects. You want to make sure this person has time to supervise the project.

Is there ever a danger of receiving incorrect information about resource capacity?
RG
: Yes, but you’ll know really quickly if a certain resource doesn’t help improve your maturity. It will be obvious, especially when you run the maturity model again. Additionally, there is a danger of staff advocating for the purchase of expensive tools your organization may not really need to manage the maturity process. Managers should insist that staff strongly and clearly make the case for such tools, illustrating how they will close specific maturity gaps.

When purchasing tools, a good rule of thumb is: will you get three times the return on investment? Will the tool decrease cost or time by three times, or quantifiably reduce risk by three times? This ties into the larger idea that cybersecurity is ultimately a function of business, not a function of IT. It also conveniently ties into external capacity, the topic of chapter 4.
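The "three times" rule of thumb above can be sketched as a simple check. This is an illustrative example only; the function name and dollar figures are hypothetical, not drawn from any real tool or from BerryDunn's methodology.

```python
# Illustrative sketch of the "three times the return on investment" rule
# of thumb for tool purchases. Figures and the helper name are hypothetical.

def passes_3x_rule(annual_tool_cost: float, annual_benefit: float) -> bool:
    """Return True if the projected annual benefit (cost savings, time
    savings, or quantified risk reduction) is at least 3x the tool's cost."""
    return annual_benefit >= 3 * annual_tool_cost

# A $20,000/year tool expected to return $45,000/year falls short of the
# $60,000 bar; one returning $75,000/year clears it.
print(passes_3x_rule(20_000, 45_000))  # False
print(passes_3x_rule(20_000, 75_000))  # True
```

The point of the check is less the arithmetic than the discipline: staff advocating for a purchase should be able to fill in the benefit figure and defend it.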

Read our next cybersecurity playbook article, External capacity: Cybersecurity playbook for management #4, here.

Article
Tapping your internal capacity for better results: Cybersecurity playbook for management #3

It’s one thing for coaching staff to see the need for a new quarterback or pitcher. Selecting and onboarding this talent is a whole new ballgame. Various questions have to be answered before moving forward: How much can we afford? Are they a right fit for the team and its playing style? Do the owners approve?

Management has to answer similar questions when selecting and implementing a cybersecurity maturity model. Those questions form the basis of this blog – chapter 2 in BerryDunn’s Cybersecurity Playbook for Management.

What are the main factors a manager should consider when selecting a maturity model?
RG: All stakeholders, including management, should be able to easily understand the model. It should be affordable for your organization to implement, and its outcomes achievable. It has to be flexible. And it has to match your industry. It doesn’t make a lot of sense to have an IT-centric maturity model if you’re not an extremely high-tech organization. What are you and your organization trying to accomplish by implementing maturity modeling? If you are trying to improve the confidentiality of data in your organization’s systems, then the maturity model you select should have a data confidentiality domain or subject area.

Managers should reach out to their peer groups to see which maturity models industry partners and associates use successfully. For example, Municipality A might look at what Municipality B is doing, and think: “How is Municipality B effectively managing cybersecurity for less money than we are?” Hint: there’s a good chance they’re using an effective maturity model. Therefore, Municipality A should probably select and implement that model. But you also have to be realistic, and know certain other factors—such as location and the ability to acquire talent—play a role in effective and affordable cybersecurity. If you’re a small town, you can’t compare yourself to a state capital.

There’s also the option of simply using the Cybersecurity Capability Maturity Model (C2M2), correct?
RG: Right. C2M2, developed by the U.S. Department of Energy, is easily scalable and can be tailored to meet specific needs. It also has a Risk Management domain to help ensure that an organization’s cybersecurity strategy supports its enterprise risk management strategy.

Once a manager has identified a maturity model that best fits their business or organization, how do they implement it?
RG: STEP ONE: Get executive-level buy-in. It’s critical that executive management understands why maturity modeling is crucial to an organization's security. Explain to them how maturity modeling will help ensure the organization is spending money correctly and appropriately on cybersecurity. By sponsoring the effort, providing adequate resources, and accepting the final results, executive management plays a critical role in the process. In turn, you need to listen to executive management to know their priorities, issues, and resource constraints. When facilitating maturity modeling, don’t drive toward a predefined outcome. Understand what executive management is comfortable implementing—and what the business or organization can afford.

STEP TWO: Identify leads who are responsible for each domain or subject area of the maturity model. Explain to these leads why the organization is implementing maturity modeling, expected outcomes, and how their input is invaluable to the effort’s success. Generally speaking, the leads responsible for subject areas are very receptive to maturity modeling, because—unlike an audit—a maturity model is a resource that allows staff to advocate their needs and to say: “These are the resources I need to achieve effective cybersecurity.”

STEP THREE: Have either management or these subject area leads communicate the project details to the lower levels of the organization, and solicit feedback, because staff at these levels often have unique insight into how best to manage the details.

STEP FOUR: Get to work. This work will look a little different from one organization to another, because every organization has its own processes, but overall you need to run the maturity model—that is, use the model to assess the organization and discover where it measures up in each subject area or domain. Afterwards, conduct work sessions; collect suggestions and recommendations for reaching specific maturity levels; determine what it’s going to cost to increase maturity; get approval from executive management to spend the money on the necessary changes; and create a Plan of Action and Milestones (POA&M). Then move forward and tick off each milestone.
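Running the model and deriving a POA&M can be thought of as a gap analysis per domain. The sketch below is a minimal illustration of that idea; the domain names, numeric levels, and target values are hypothetical and are not taken from C2M2 or any other specific model.

```python
# Hypothetical gap analysis: compare current maturity scores against target
# levels per domain, and list the gaps that would feed a POA&M.
# Domains and levels are illustrative, not drawn from any real model.

current = {"Risk Management": 1, "Incident Response": 2, "Access Control": 3}
target  = {"Risk Management": 3, "Incident Response": 3, "Access Control": 3}

# Keep only domains below target, largest gap first.
gaps = {d: target[d] - current[d] for d in target if target[d] > current[d]}
for domain, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {gap} level(s) below target")
```

A worksheet like this, with a cost estimate and milestone date attached to each gap, is essentially what the POA&M formalizes.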

Do you suggest selecting an executive sponsor or an executive steering committee to oversee the implementation?
RG: Absolutely. You just want to make sure the executive sponsors or steering committee members have both the ability and the authority to implement changes necessary for the modeling effort.

Should management consider hiring vendors to help implement their cybersecurity maturity models?
RG: Sure. Most organizations can implement a maturity model on their own, but the good thing about hiring a vendor is that a vendor brings objectivity to the process. Within your organization, you’re probably going to find erroneous assumptions, differing opinions about what needs to be improved, and bias regarding who is responsible for the improvements. An objective third party can help navigate these assumptions, opinions, and biases. Just be aware some vendors will push their own maturity models, because their models require or suggest organizations buy the vendors’ software. While most vendor software is excellent for improving maturity, you want to make sure the model you’re using fits your business objectives and is affordable. Don’t lose sight of that.

How long does it normally take to implement a maturity model?

RG: It depends on a variety of factors and is different for every organization. Keep in mind some maturity levels are fairly easy to reach, while others are harder and more expensive. It goes without saying that well-managed organizations implement maturity models more rapidly than poorly managed organizations.

What should management do after implementation?
RG: Run the maturity model again, and see where the organization currently measures up for each subject area or domain. Do you need to conduct a maturity model assessment every year? No, but you do want to track the results over time to confirm improvements are occurring. My suggestion is to conduct a full maturity model assessment every three years.

One final note: make sure to maintain the effort. If you’re going to spend time and money implementing a maturity model, then make the changes, and continue to reassess maturity levels. Make sure the process becomes part of your organization’s overall strategic plan. Document and institutionalize maturity modeling. Otherwise, the organization is in danger of losing this knowledge when the people who spearheaded the effort retire or pursue new opportunities elsewhere.

What’s next?
RG: Over the next couple of blogs, we’ll move away from talking about maturity modeling and begin talking about the role capacity plays in cybersecurity. Blog #3 will instruct managers on how to conduct an internal assessment to determine if their organizations have the people, processes, and technologies they need for effective cybersecurity.

Read our next cybersecurity playbook article, Tapping your internal capacity for better results: Cybersecurity playbook for management #3, here.

Article
Selecting and implementing a maturity model: Cybersecurity playbook for management #2

For professional baseball players who get paid millions to swing a bat, going through a slump is daunting. The mere thought of a slump conjures up frustration, anxiety and humiliation, and in extreme cases, the possibility of job loss.

The concept of a slump transcends sports. Just glance at the recent headlines about Yahoo, Equifax, Deloitte, and the Democratic National Committee. Data breaches occur on a regular basis. Like a baseball team experiencing a downswing, these organizations need to make adjustments, tough decisions, and major changes. Most importantly, they need to realize that cybersecurity is no longer the exclusive domain of Chief Information Security Officers and IT departments. Cybersecurity is the responsibility of all employees and managers: it takes a team.

When a cybersecurity breach occurs, people tend to focus on what goes wrong at the technical level. They often fail to see that cybersecurity begins at the strategic level. With this in mind, I am writing a blog series to outline the activities managers need to take to properly oversee cybersecurity, and remind readers that good cybersecurity takes a top-down approach. Consider the series a cybersecurity playbook for management. This Q&A blog — chapter 1 — highlights a basic concept of maturity modeling.

Let’s start with the basics. What exactly is a maturity model?
RG
: A maturity model is a framework that assesses certain elements in an organization, and provides direction to improve these elements. There are project management, quality management, and cybersecurity maturity models.

Cybersecurity maturity modeling is used to set a cybersecurity target for management. It’s like creating and following an individual development program. It provides definitive steps to take to reach a maturity level that you’re comfortable with — both from a staffing perspective, and from a financial perspective. It’s a logical road map to make a business or organization more secure.

What are some well-known maturity models that agencies and companies use?
RG
: One of the first, and most popular, is the Program Review for Information Security Management Assistance (PRISMA), which is still in use today. Another is the Capability Maturity Model Integration (CMMI) model, which focuses on technology. Then there are some commercial maturity models, such as the Gartner Maturity Model, that organizations can pay to use.

The model I prefer is the Cybersecurity Capability Maturity Model (C2M2), developed by the U.S. Department of Energy. I like C2M2 because it maps directly to the standards of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), a prominent industry benchmark. C2M2 is easily understandable and digestible, it scales to the size of the organization, and it is constantly updated to reflect the most recent U.S. government standards. So, it’s relevant to today’s operational environment.

Communication is one of C2M2’s strengths. Because there is a mechanism in the model requiring management to engage and support the technical staff, it facilitates communication and feedback at not just the operational level, but at the tactical level, and more significantly, the management level, where well-designed security programs start.

What’s the difference between process-based and capability-based models?
RG
: Process-based models focus on performance or technical aspects — for example, how mature are processes for access controls? Capability-based models focus on management aspects — is management adequately training people to manage access controls?

C2M2 combines the two approaches. It provides practical steps your organization can take, both operationally and strategically. Not only does it provide the technical team with direction on what to do on a daily basis to help ensure cybersecurity, it also provides management with direction to help ensure that strategic goals are achieved.

Looking at the bigger picture, what does an organization look like from a managerial point of view?
RG
: First, a mature organization communicates effectively. Management knows what is going on in their environment.

Most organizations have very competent staff. However, staff members don’t always coordinate with one another. I once did some security work for a company that had an insider threat. The insider threat was detected and dismissed from the company, but management didn’t know the details of why or how the situation occurred. Had there been an incident response plan in place (one of the dimensions C2M2 measures) — or even some degree of cybersecurity maturity in the company — they would’ve had clearly defined steps to take to handle the insider threat, and management would have been aware from an early stage. When management did find out about the insider threat, it became a much bigger issue than it had to be, and wasted time and resources. At the same time, the insider threat exposed the company to a high degree of risk. Because upper management was unaware, they were unable to make a strategic decision on how to act or react to the threat.

That’s the beauty of C2M2. It takes into account the responsibilities of both technical staff and management, and has a built-in communication plan that enables the team to work proactively instead of reactively, and shares cybersecurity initiatives between both management and technical staff.

Second, management in a mature organization knows they can’t protect everything in the environment — but they have a keen awareness of what is really important. Maturity modeling forces management to look at operations and identify what is critical and what really needs to be protected. Once management knows what is important, they can better align resources to meet particular challenges.

Third, in a mature organization, management knows they have a vital role to play in supporting the staff who address the day-to-day operational and technical tasks that ultimately support the organization’s cybersecurity strategy.

What types of businesses, not-for-profits, and government agencies should practice maturity modeling?
RG
: All of them. I’ve been in this industry a long time, and I always hear people say: “We’re too small; no one would take any interest in us.”

I conducted some work for a four-person firm that had been hired by the U.S. military. My company discovered that the firm had suffered a breach, and the four of them couldn’t believe it because they thought they were too small to be breached. It doesn’t matter what the size of your company is: if you have something someone finds valuable, they’re going to try to steal it. Even very small companies should use cybersecurity maturity models to reduce risk and help focus their limited resources on what is truly important. That’s maturity modeling: reducing risk by using approaches that make the most sense for your organization.

What’s management’s big takeaway?
RG
: Cybersecurity maturity modeling aligns your assets with your funding and resources. One of the most difficult challenges for every organization is finding and retaining experienced security talent. Because maturity modeling outlines what expertise is needed where, it can help match the right talent to roles that meet the established goals.

So what’s next?
RG
: In our next installment, we’ll analyze what a successful maturity modeling effort looks like. We’ll discuss the approach, what the outcome should be, and who should be involved in the process. We’ll discuss internal and external cybersecurity assessments, and incident response and recovery.

You can read our next chapter, Selecting and implementing a maturity model: Cybersecurity playbook for management #2, here.

Article
Maturity modeling: Cybersecurity playbook for management #1