
Cyberattacks in higher education—How prepared are you?

08.30.19

In light of recent cyberattacks on higher education institutions across the US, more and more colleges and universities are discovering they are not immune. Security by obscurity is no longer an effective approach—all institutions are potential targets. Colleges and universities must take action to ensure processes and documentation are in place to prepare for, and respond appropriately to, a potential cybersecurity incident.

BerryDunn’s Rick Gamache recently published several blog articles on incident response that are relevant to the recent cyberattacks. Below I have provided several of his points tailored to higher education leaders to help them prepare for cybersecurity incidents at their institutions.

What are some examples of incidents that managers need to prepare for?

Examples range from external breaches and insider threats to instances of malfeasance or incompetence. Different types of incidents can lead to similar results, yet you can't respond to them all the same way. Managers should work with their teams to create incident response plans that reflect the threats specific to higher education institutions. A handful of generic incident response plans isn't going to cut it.

Managers need to work with their teams to develop a specific incident response plan for each specific type of incident. Why? Well, think of it this way: Your response to a careless employee should be different from your response to a malicious employee, for a whole host of legal reasons. Incident response is not a cookie-cutter process. In fact, it is quite the opposite. This is one of the reasons I highly suggest security teams include staff members outside of IT. When you’re responding to incidents, you want people who can look at a problem or situation from an external perspective, not just a technical or operational perspective within IT. These team members can help answer questions such as, what does the world see when they look at our institution? What institutional information might be valuable to, or targeted by, malicious actors? You’ll get some valuable fresh perspectives.

How short or long should the typical incident response plan be?

I often see good incident response plans no more than three or four pages in length. However, it is important that incident response plans are task oriented, so that it is clear who does what next. And when people follow an incident response plan, they should physically or digitally check off each activity, then record each activity.
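To make that concrete, here is a minimal sketch (in Python, with hypothetical step names) of a task-oriented checklist in which each completed activity is checked off and recorded, as described above:

```python
from datetime import datetime, timezone

# Hypothetical task-oriented incident response checklist: each step is
# checked off and recorded with a timestamp and the person who did it.
class ResponseChecklist:
    def __init__(self, incident_type, steps):
        self.incident_type = incident_type
        self.steps = list(steps)   # ordered tasks: who does what next
        self.log = []              # record of each completed activity

    def complete(self, step, performed_by):
        if step not in self.steps:
            raise ValueError(f"unknown step: {step}")
        self.log.append({
            "step": step,
            "performed_by": performed_by,
            "completed_at": datetime.now(timezone.utc).isoformat(),
        })

    def remaining(self):
        done = {entry["step"] for entry in self.log}
        return [s for s in self.steps if s not in done]

# Example step names are illustrative, not a recommended plan.
plan = ResponseChecklist("ransomware", [
    "Isolate affected systems",
    "Notify incident response lead",
    "Preserve forensic evidence",
    "Engage legal counsel",
])
plan.complete("Isolate affected systems", performed_by="IT on-call")
print(plan.remaining())  # the three steps still open
```

Even a three-page plan benefits from this kind of structure: the log doubles as the activity record the responder is asked to keep.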

What system or software do you recommend for recording incidents and responses?

There are all types of help desk software you can use, including free and open source software. I recommend using help desk software with workflow capabilities, so your team can assign and track tasks.

Any other tips for developing incident response plans?

First, managers should work with, and solicit feedback from, colleagues across the academic and administrative areas of the institution when developing incident response plans. If you create these documents in a vacuum, they will be useless.

Second, managers and their teams should take their time and develop the most “solid” incident response plans possible. Don’t rush the process. The effectiveness of your incident response plans will be critical in assessing your institution’s ability to survive a breach. Because of this, you should be measuring your response plans through periodic testing, like conducting tabletop exercises.

Third, keep your students and external stakeholders in mind when developing these plans. You want to make sure external communications are consistent, accurate, and within the legal requirements for your institution. The last thing you want is students and stakeholders receiving conflicting messages about the incident. 

Are there any decent incident response plans in the public domain that managers and their teams can adapt for their own purposes?

Yes. My default reference is the National Institute of Standards and Technology (NIST). NIST has many special publications that describe the incident response process, how to develop a solid plan, and how to test your plan.

Should institutions have dedicated incident response teams?

Definitely. Institutions should identify and staff teams using internal resources. Some institutions may want to consider hiring a reputable third party to act as an incident response team. The key with hiring a third party? Don’t wait until an incident occurs! If you wait, you’re going to panic, and make panic-based decisions. Be proactive and hire a third party on retainer.

That said, institutions should consider hiring a third party on an annual basis to review incident response plans and processes. Why? Because every institution can grow complacent, and complacency kills. A third party can help gauge the strengths and weaknesses of your internal incident response teams, and provide suggestions for general or specific training. A third party can also educate your institution about the latest and greatest cyber threats.

Should managers empower their teams to conduct internal “hackathons” in order to test incident response?

Sure! It’s good practice, and it can be a lot of fun for team members. There are a few caveats. First, don’t call it a hackathon. The word can elicit negative or concerned reactions. Call it “active testing” or “continuous improvement exercises.” These activities allow team members to think creatively, and are opportunities for them to boost their cybersecurity knowledge. Second, be prepared for pushback. Some managers worry if team members gain more cybersecurity skills, then they’ll eventually leave the institution for another, higher-paying job. I think you should be committed to the growth of your team members―it’ll only make your institution more secure.

What are some best practices managers should follow when reporting incidents to their leadership?

Keep the update quick, brief, and to the point. Leave all the technical jargon out, and keep everything in an institutional context. This way leadership can grasp the ramifications of the event and understand what matters. Be prepared to outline how you’re responding and what actions leadership can take to support the incident response team and protect the institution. In the last chapter, I mentioned what I call the General Colin Powell method of reporting, and I suggest using that method when informing leadership. Tell them what you know, what you don’t know, what you think, and what you recommend. Have answers, or at least a plan.

How much institution-wide communication should there be about incidents?

That’s a great question, but a tough one to answer. Transparency is good, but it can also unintentionally lead to further incidents. Do you really want to let your whole institution know about an exploitable weakness? Also, employees can spread information about incidents on social media, which can actually lead to the spread of misinformation. If you are in doubt about whether or not to inform the entire institution about an incident, refer to your Legal Department. In general, institution-wide communication should be direct: We’ve had an incident; these are the facts; this is what you are allowed to say on social media; and this is what you’re not allowed to say on social media.

Another great but tough question: When do you tell the public about an incident? For this type of communication, you’re going to need buy-in from various sources: senior leadership, Legal, HR, and your PR team or external PR partners. You have to make sure the public messaging is consistent. Otherwise, citizens and the media will try to poke holes in your official story. And that can lead to even more issues.

What are the key takeaways for higher education leaders?

Here are key takeaways to help higher education leaders prepare for and respond appropriately to cybersecurity incidents:

  1. Understand your institution’s current cybersecurity environment. 
    Questions to consider: Do you have a Chief Information Security Officer (CISO) and/or a dedicated cybersecurity team at your institution? Have you conducted the appropriate audits and assessments to understand your institution’s vulnerabilities and risks?
  2. Ensure you are prepared for cybersecurity incidents. 
    Questions to consider: Do you have a cybersecurity plan with the appropriate response, communication, and recovery plans/processes? Are you practicing your plan by walking through tabletop exercises? Do you have incident response teams?

Higher education faces a growing threat of cyberattacks, and it’s no longer a matter of if, but when. Leaders can help mitigate the risk to their institutions by proactively preparing incident response plans and communication plans, and by conducting tabletop exercises. If you need help creating an incident response plan, or wish to speak with us about preparing for cybersecurity threats, please reach out to us.
 


Read this if you are a CIO, CFO, Provost, or President at a higher education institution.

In my conversations with CIO friends over the past weeks, it is obvious that the COVID-19 pandemic has forced a lot of change for institutions. Information technology is the underlying foundation for supporting much of this change, and as such, IT leaders face a variety of new demands now and into the future. Here are important considerations going forward.

Swift impact to IT and rapid response

The COVID-19 pandemic has had a significant impact on higher education. At its onset, institutions found themselves quickly pivoting to work from home (WFH), remote campus operations, and remote instruction within a few weeks and, in some cases, a few days. Most CIOs I spoke with indicated that they were prepared, to some extent, thanks to cloud services and online class offerings already in place—it was mostly a matter of scaling those services across the entire campus and being prepared for returning students and faculty on the heels of an extended spring break.

Services that were not in place required creative and rapid deployment to meet the new demand. For example, one CIO mentioned the capability to have staff accept calls from home. The need for softphones to accommodate student service and helpdesk calls at staff homes required rapid purchase, deployment, and training.

Most institutions have laptop loan programs in place but not scaled to the size needed during this pandemic. Students who choose to attend college on campus are now forced to attend school from home and may not have the technology they need. The need for laptop loans increased significantly. Some institutions purchased and shipped laptops directly to students’ homes. 

CIO insights about people

CIOs shared seeing positive outcomes with their staff. Almost all of the CIOs I spoke with mentioned how the pandemic has spawned creativity and problem solving across their organizations. In some cases, past staffing challenges were put on hold as managers and staff have stepped up and engaged constructively. Some other positive changes shared by CIOs:

  • Communication has improved—a more intentional exchange, a greater sense of urgency, and problem solving have created opportunities for staff to get engaged during video calls.
  • Teams focusing on high priority initiatives and fewer projects have yielded successful results. 
  • People feel a stronger connection with each other because they are uniting behind a common purpose.

Perhaps this has reduced the noise that most staff seem to hear daily about competing priorities and incoming requests that seem to never end.

Key considerations and a framework for IT leaders 

It is too early to fully understand the impact on IT during this phase of the pandemic. However, we are beginning to see budgetary concerns that will impact all institutions in some way. As campuses work to get their budgets settled, cuts could affect most departments—IT included. In light of the increased demand for technology, cuts could be less than anticipated to help ensure critical services and support are uninterrupted. Other future impacts to IT will likely include:

  • Support for a longer term WFH model and hybrid options
  • Opportunities for greater efficiencies and possible collaborative agreements between institutions to reduce costs
  • Increased budgets for online services, licenses, and technologies
  • Need for remote helpdesk support, library services, and staffing
  • Increased training needs for collaborative and instructional software
  • Increased need for change management to help support and engage staff in the new ways of providing services and support
  • Re-evaluation of organizational structure and roles to right-size and refocus positions in a more virtual environment
  • Security and risk management implications with remote workers
  • Accessibility to systems and classes

IT leaders should examine these potential changes over the next three to nine months using a phased approach. The diagram below describes two phases of impact and areas of focus for consideration. 

Higher Education IT Leadership Phases

As IT leaders continue to support their institutions through these phases, focusing on meeting the needs of faculty, staff, and students will be key in the success of their institutions. Over time, as IT leaders move from surviving to thriving, they will have opportunities to be strategic and create new ways of supporting teaching and learning. While it remains to be seen what the future holds, change is here. 

How prepared are you to support your institution? 

If we can help you navigate through these phases, have perspective to share, or any questions, please contact us. We’re here to help.

COVID-19: Key considerations for IT leaders in Higher Ed

Editor’s note: If you are a higher education CFO, CIO, CTO, or other C-suite leader, this blog is for you.

The Gramm-Leach-Bliley Act (GLBA) has been in the news recently as the Federal Trade Commission (FTC) has agreed to extend a deadline for public comment regarding proposed changes to the Safeguards Rule. Here’s what you need to know.

GLBA, also known as the Financial Modernization Act, is a 1999 federal law providing rules to financial institutions for protecting consumer information. Colleges and universities fall under this act because they conduct financial activities (e.g., administration of financial aid, loans, and other financial services).

Under the Safeguards Rule, financial institutions must develop, implement, and maintain a comprehensive information security program that consists of safeguards to handle customer information.

Proposed changes

The FTC is proposing five modifications to the Safeguards Rule. The proposed changes would:

  • Provide more detailed guidance to impacted institutions regarding how to develop and implement specific aspects of an overall information security program.
  • Improve the accountability of an institution’s information security programs.
  • Exempt small businesses from certain requirements.
  • Expand the definition of “financial institutions” to include entities engaged in activities that the Federal Reserve Board determines to be incidental to financial activities.
  • Include the definition of “financial institutions” and related examples in the rule itself rather than cross-referencing them from a related FTC rule (Privacy of Consumer Financial Information Rule).

Potential impacts for your institution

The notice of proposed changes, published in the Federal Register (Volume 84, Number 65), would, once approved by the FTC, add more prescriptive rules that could have a significant impact on your institution. For example, these rules would require institutions to:

  1. Expand existing security programs with additional resources.
  2. Produce additional documentation.
  3. Create and implement additional policies and procedures.
  4. Offer various forms of training and education for security personnel.

The proposed rules could require institutions to increase their commitment in time and staffing, and may create hardships for institutions with limited or challenging resources.

Prepare now

While these changes are not final and the FTC is requesting public comment, here are some things you can do to prepare for these potential changes:

  • Evaluate whether your institution is compliant with the current Safeguards Rule.
  • Identify gaps between your current status and the proposed changes.
  • Perform a risk assessment.
  • Ensure there is an employee designated to lead the information security program.
  • Monitor the FTC site for final Safeguards Rule updates.
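As an illustrative sketch of the first two steps, an institution could track each safeguard element against its current status; the element names below are paraphrased examples, not the rule’s official text:

```python
# Illustrative self-assessment against Safeguards Rule elements.
# Element names are paraphrased examples, not official rule text.
safeguards_status = {
    "Designated information security program lead": True,
    "Written information security program": True,
    "Periodic risk assessment": False,
    "Employee security awareness training": False,
    "Oversight of service providers": True,
}

# Gap identification: any element not yet in place.
gaps = [element for element, in_place in safeguards_status.items()
        if not in_place]
for element in gaps:
    print(f"Gap: {element}")
```

A simple inventory like this makes it easy to see where to focus before any final rule lands.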

In the meantime, reach out to us if you would like to discuss the impact GLBA will have on your institution or if you would like assistance with any of the recommendations above. You can view a comprehensive list of potential changes here.

Source: Federal Trade Commission. Safeguards Rule. Federal Register, Vol. 84, No. 65. FTC.gov. April 4, 2019. https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/safeguards-rule

Higher ed: GLBA is the new four-letter word, but it's not as bad as you think

Focus on the people: How higher ed institutions can successfully make an ERP system change

The enterprise resource planning (ERP) system is the heart of an institution’s business, maintaining all aspects of day-to-day operations, from student registration to staff payroll. Many institutions have used the same ERP systems for decades and face challenges to meet the changing demands of staff and students. As new ERP vendors enter the marketplace with new features and functionality, institutions are considering a change. Some things to consider:

  1. Don’t just focus on the technology and make change management an afterthought. Transitioning to a new ERP system takes considerable effort, and has the potential to go horribly wrong if sponsorship, good planning, and communication channels are not in place. The new technology is the easy part of a transition—the primary challenge is often rooted in people’s natural resistance to change.  
  2. Overcoming resistance to change requires a thoughtful and intentional approach that focuses on change at the individual level. Understanding this helps leadership focus their attention and energy to best raise awareness and desire for the change.
  3. One effective tool that provides a good framework for successful change is the Prosci ADKAR® model. This framework has five distinct phases that align with ERP change: Awareness of the need for change, Desire to participate and support the change, Knowledge of how to change, Ability to implement the change, and Reinforcement to sustain it.

These phases provide an approach for developing change management activities, preparing leadership to lead and sponsor change, and supporting employees through the implementation of the change.

The three essential steps to leveraging this framework:

  1. Perform a baseline assessment to establish an understanding of how ready the organization is for an ERP change
  2. Provide sponsorship, training, and communication to drive employee adoption
  3. Prepare and support activities to implement, celebrate, and sustain participation throughout the ERP transition

Following this approach with a change management framework such as the Prosci ADKAR® model can help an organization prepare, guide, and adopt ERP change more easily and successfully. 
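As a sketch of what a baseline assessment (step 1) might look like in practice, one could average survey scores for each ADKAR phase and find the first phase that falls below a readiness threshold; the scores and threshold here are hypothetical:

```python
# Hypothetical ADKAR baseline: 1-5 survey scores averaged per phase.
# The first phase scoring below the threshold is the "barrier point"
# where change management attention should focus first.
PHASES = ["Awareness", "Desire", "Knowledge", "Ability", "Reinforcement"]

def barrier_point(scores, threshold=3.0):
    for phase in PHASES:
        if scores.get(phase, 0.0) < threshold:
            return phase
    return None  # no barrier found: the organization is ready

survey = {"Awareness": 4.2, "Desire": 2.6, "Knowledge": 3.8,
          "Ability": 3.1, "Reinforcement": 2.9}
print(barrier_point(survey))  # Desire
```

In this sketch, staff are aware of the ERP change but desire lags, so sponsorship and communication efforts would start there.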

If you’re considering a change, but need to prepare your institution for a healthy ERP transition using change management, chart yourself on this ADKAR framework—what is your organization’s change readiness? Do you have appropriate buy-in? What problems will you face?

You now know that this framework can help your changes stick, and have an idea of where you might face resistance. We’re certified Prosci ADKAR® practitioners and have experience guiding Higher Ed leaders like you through these steps. Get in touch—we’re happy to help and have the experience and training to back it up. Please contact the team with any questions you may have.

1. Prosci ADKAR®, from http://www.prosci.com

Perspectives of an Ex-CIO

All teams experience losing streaks, and all franchise dynasties lose some luster. Nevertheless, the game must go on. What can coaches do? The answer: be prepared, be patient, and be PR savvy. Business managers should keep these three P’s in mind as they read Chapter 8 in BerryDunn’s Cybersecurity Playbook for Management, which highlights how organizations can recover from incidents.

In the last chapter, we discussed incident response. What’s the difference between incident response and incident recovery?

RG: Incident response refers to detecting and identifying an incident—and hopefully eradicating the source or cause of the incident, such as malware. Incident recovery refers to getting things back to normal after an incident. They are different sides of the same resiliency coin.

I know you feel strongly that organizations should have incident response plans. Should organizations also have incident recovery plans?

RG: Absolutely. Have a recovery plan for each type of possible incident. Otherwise, how will your organization know if it has truly recovered from an incident? Having incident recovery plans will also help prevent knee-jerk decisions or reactions that could unintentionally cover up or destroy an incident’s forensic evidence.

In the last chapter, you stated managers and their teams can reference or re-purpose National Institute of Standards and Technology (NIST) special publications when creating incident response plans. Is it safe to assume you also suggest referencing or re-purposing NIST special publications when creating incident recovery plans?

RG: Yes. But keep in mind that incident recovery plans should also mesh with, or reflect, any business impact analyses developed by your organization. This way, you will help ensure that your incident recovery plans prioritize what needs to be recovered first—your organization’s most valuable assets.

That said, I should mention that cybersecurity attacks don’t always target an organization’s most valuable assets. Sometimes, cybersecurity attacks simply raise the “misery index” for a business or group by disrupting a process or knocking a network offline.

Besides having incident recovery plans, what else can managers do to support incident recovery?

RG: Similar to what we discussed in the last chapter, managers should make sure that internal and external communications about the incident and the resulting recovery are consistent, accurate, and within the legal requirements for your business or industry. Thus, having a good incident recovery communication plan is crucial. 

When should managers think about bringing in a third party to help with incident recovery?

RG: That’s a great question. I think this decision really comes down to the confidence you have in your team’s skills and experience. An outside vendor can give you a lot of different perspectives, but your internal team knows the business. This is one area where an outside perspective doesn’t hurt, because recovery is so important and we often don’t perceive ourselves the way the outside world does.

This decision also depends on the scale of the incident. If your organization is trying to recover from a pretty significant or high-impact breach or outage, you shouldn’t hesitate to call someone. Also, check to see if your organization has cybersecurity insurance. If your organization has cybersecurity insurance, then your insurance company is likely going to tell you whether or not you need to bring in an outside team. Your insurance company will also likely help coordinate outside resources, such as law enforcement and incident recovery teams.

Do you think most organizations should have cybersecurity insurance? 

RG: In this day and age? Yes. But organizations need to understand that, once they sign up for cybersecurity insurance, they’re going to be scrutinized by the insurance company—under the microscope, so to speak—and that they’ll need to take their “cybersecurity health” very seriously.

Organizations need to really pay attention to what they’re paying for. My understanding is that many different types of cybersecurity insurance have very high premiums and deductibles. So, in theory, you could have a $1 million insurance policy, but a $250,000 deductible. And keep in mind that even a simple incident can cost more than $1 million in damages. Not surprisingly, I know of many organizations signing up for $10 million insurance policies. 

How can managers improve internal morale and external reputation during the recovery process?

RG: Well, leadership sets the tone. It’s like in sports—if a coach starts screaming and yelling, then it is likely that the players will start screaming and yelling. So set expectations for measured responses and reactions. 

Check in on a regular basis with your internal security team, or whoever is conducting incident recovery within your organization. Are team members holding up under pressure? Are they tired? Have you pushed them to the point where they are fatigued and making mistakes? The morale of these team members will, in part, dictate the morale of others in the organization.

Another element that can affect morale is—for lack of a better word—idleness resulting from an incident. If you have a department that can’t work due to an incident, and you know that it’s going to take several days to get things back to normal, you may not want department members coming into work and just sitting around. Think about it. At some point, these idle department members are going to grumble and bicker, and eventually affect the wider morale. 

As for improving external reputation? I don’t think it really matters, honestly, because I don’t think most people really, truly care. Why? Because everyone is vulnerable, and attacks happen all the time. At this point, cyberattacks seem to be part of the normal course and rhythm of business. Look at all the major breaches that have occurred over the past couple of years. There’s always some immediate, short-term fallout, but there’s been very little long-term fallout. Now, that being said, it is possible for organizations to suffer a prolonged PR crisis after an incident. How do you avoid this? Keep communication consistent—and limit interactions between employees and the general public. One of the worst things that can happen after an incident is for a CEO to say, “Well, we’re not sure what happened,” and then for an employee to tweet exactly what happened. Mixed messages are PR death knells.

Let’s add some context. Can you identify a business or group that, in your opinion, has handled the incident recovery process well?

RG: You know, I can’t, and for a very good reason. If a business or group does a really good job at incident recovery, then the public quickly forgets about the incident—or doesn’t even hear about it in the first place. Conversely, I can identify many businesses or groups that have handled the incident recovery process poorly, typically from a PR perspective.

Any final thoughts about resiliency?

RG: Yes. As you know, over the course of this blog series, I have repeated the idea that IT is not the same as security. These are two different concepts that should be tackled by two different teams—or approached in their appropriate context. Similarly, managers need to remember that resiliency is not an IT process—it’s a business process. You can’t just shove off resiliency responsibilities onto your IT team. As managers, you need to get directly involved with resiliency, just as you need to get directly involved with maturity, capacity, and discovery. 

So, we’ve reached the end of this blog series. Above all else, what do you hope managers will gain from it? 

RG: First, the perspective that to understand your organization’s cybersecurity is to truly understand your organization and its business. And I predict that some managers will be able to immediately improve business processes once they better grasp the cybersecurity environment. Second, the perspective that cybersecurity is ultimately the responsibility of everyone within an organization. Sure, having a dedicated security team is great, but everyone—from the CEO to the intern—plays a part. Third, the perspective that effective cybersecurity is effective communication. A siloed, closed-door approach will not work. And finally, the perspective that cybersecurity is always changing, so it’s a best practice to keep reading and learning about it. Anyone with questions should feel free to reach out to me directly.

Incident recovery: Cybersecurity playbook for management

Artificial Intelligence, or AI, is no longer the exclusive tool of well-funded government entities and defense contractors, let alone a plot device in science fiction film and literature. Instead, AI is becoming as ubiquitous as the personal computer. The opportunities of what AI can do for internal audit are almost as endless as the challenges this disruptive technology represents.

To understand how AI will influence internal audit, we must first understand what AI is. The concept of AI—a technology that can perceive the world directly and respond to what it perceives—is often attributed to Alan Turing, though the term “artificial intelligence” was coined much later, in 1956, at Dartmouth College in Hanover, New Hampshire. Turing was a British scientist who developed the machine that cracked the Nazis’ Enigma code. He thought of AI as a machine that could convince a human that it, too, was human. Turing’s humble description of AI is as simple as it is elegant. Fast-forward some 60 years, and AI is all around us, applied in novel ways almost every day. Just consider autonomous self-driving vehicles, facial recognition systems that can spot a fugitive in a crowd, search engines that tailor our online experience, and even Pandora, which analyzes our tastes in music.

Today, in practice and in theory, there are four types of AI. Type I AI may be best represented by IBM’s Deep Blue, a chess-playing computer that made headlines in 1997 when it won a match against world chess champion Garry Kasparov. Type I AI is reactive. Deep Blue can beat a chess champion because it evaluates every piece on the chessboard, calculates all possible moves, then predicts the optimal move among all possibilities. Type I AI is really nothing more than a super calculator, processing data much faster than the human mind can. This is what gives Type I AI an advantage over humans.
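Deep Blue’s search was vastly more sophisticated, but the core idea (evaluate possible positions, assume the opponent responds optimally, and pick the best move) can be sketched with a toy minimax search over a hand-built game tree:

```python
# Toy minimax over a hand-built game tree: leaves hold scores for the
# maximizing player; internal nodes alternate between our move (max)
# and the opponent's reply (min).
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: an evaluated position
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# A tiny two-ply tree: three candidate moves, each answered by the
# opponent's best (for us, worst) reply.
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree, maximizing=True)
print(best)  # 3: the opponent minimizes each branch, we take the max
```

Scale this idea up by many orders of magnitude (plus heuristics and specialized hardware) and you have the essence of a Type I reactive system.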

Type II AI, which we find in autonomous cars, is also reactive. For example, it applies brakes when it predicts a collision; but, it has a low form of memory as well. Type II AI can briefly remember details, such as the speed of oncoming traffic or the distance between the car and a bicyclist. However, this memory is volatile. When the situation has passed, Type II AI deletes the data from its memory and moves on to the next challenge down the road.

Type II AI's simple form of memory management and the ability to “learn” from the world in which it resides is a significant advancement.

The leap from Type II AI to Type III AI has yet to occur. Type III AI will not only incorporate the awareness of the world around it, but will also be able to predict the responses and motivations of other entities and objects, and understand that emotions and thoughts are the drivers of behavior. Taking the autonomous car analogy to the next step, Type III AI vehicles will interact with the driver. By conducting a simple assessment of the driver’s emotions, the AI will be able to suggest a soothing playlist to ease the driver’s tensions during his or her commute, reducing the likelihood of aggressive driving.

Lastly, Type IV AI—a milestone that will likely be reached at some point over the next 20 or 30 years—will be self-aware. Not only will Type IV AI soothe the driver, it will interact with the driver as if it were another human riding along for the drive; think of “HAL” in Arthur C. Clarke’s 2001: A Space Odyssey.

So what does this all mean to internal auditors?
While it may be a bit premature to predict AI’s impact on the internal audit profession, AI is already being used to predict control failures in institutions with robust cybersecurity programs. When malicious code is detected and certain conditions are met, AI-enabled devices can either divert the malicious traffic away from sensitive data, or even shut off access completely until an incident response team has had time to investigate the nature of the attack and take appropriate actions. This may seem a rather rudimentary use of AI, but in large financial institutions or manufacturing facilities, minutes count—and minutes equal dollars. Allowing AI to cut off access to a line of business, a move that may cost the company money (and its reputation), is a significant leap of faith, and not for the faint of heart. Next-generation AI-enabled devices will have even more capabilities, including behavioral analysis, to predict a user’s intentions before the user gains access to data.
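The "detect, then divert or block" logic described above can be sketched in a few lines of plain Python. This is a toy illustration only, not any real product's behavior; the function name, the 0.8 threshold, and the action labels are all hypothetical.

```python
# Hypothetical containment rule: when an anomaly score crosses a threshold
# AND the asset holds sensitive data, access is cut entirely until an
# incident response team clears it; otherwise traffic is merely diverted.

def containment_action(anomaly_score: float, holds_sensitive_data: bool,
                       threshold: float = 0.8) -> str:
    """Return the action an AI-enabled device might take for one flow."""
    if anomaly_score < threshold:
        return "allow"                   # traffic looks normal
    if holds_sensitive_data:
        return "block_pending_review"    # shut off access completely
    return "divert"                      # reroute away from sensitive data

print(containment_action(0.3, True))     # allow
print(containment_action(0.9, False))    # divert
print(containment_action(0.95, True))    # block_pending_review
```

The "leap of faith" in the text lives in that middle branch: the rule trades availability for containment, which is exactly the business decision leadership must sign off on in advance.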

In the future, internal audit staff will no doubt train AI to seek conditions that require deeper analysis, or even predict when a control will fail. Yet AI will be able to facilitate the internal audit process in other ways. Consider AI’s role in data quality. Advances in inexpensive data storage (e.g., the cloud) have allowed organizations to create and aggregate enormous volumes of data subject to internal audit, making testing that data’s completeness, integrity, and reliability a challenging task. Future AI will be able to continuously monitor this data, alerting internal auditors not only to the status of data at rest and in motion, but also to potential fraud and improper disclosures.
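The completeness and integrity checks mentioned above can be illustrated without any AI at all; the sketch below flags incomplete and duplicate records in a batch. The record layout and field names are invented for the example, not drawn from any audit system.

```python
# Illustrative data-quality checks of the kind a monitoring tool might run
# continuously: count records with missing required fields and duplicate IDs.

def quality_report(records, required_fields):
    """Summarize completeness and duplicate counts for audit follow-up."""
    incomplete = sum(1 for r in records
                     if any(r.get(f) in (None, "") for f in required_fields))
    ids = [r.get("id") for r in records]
    duplicates = len(ids) - len(set(ids))
    return {"rows": len(records), "incomplete": incomplete,
            "duplicates": duplicates}

batch = [
    {"id": 1, "amount": 120.0, "approved_by": "jdoe"},
    {"id": 2, "amount": None,  "approved_by": "asmith"},  # missing amount
    {"id": 2, "amount": 75.5,  "approved_by": "jdoe"},    # duplicate id
]
print(quality_report(batch, ["amount", "approved_by"]))
# {'rows': 3, 'incomplete': 1, 'duplicates': 1}
```

An AI-enabled version would learn which anomalies matter rather than relying on fixed rules, but the auditor's question is the same: is this data complete, consistent, and trustworthy?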

The analysis won’t stop there.
AI will measure the performance of the data in meeting organizational objectives, and suggest where efficiencies can be gained by focusing technical and human resources to where the greatest risks to the organization exist in near real-time. This will allow internal auditors to develop a common operating picture of the day-to-day activities in their business environments, alerting internal audit when something doesn’t quite look right and requires further investigation.

As promising as AI is, the technology comes with some ethical considerations. Because AI is created by humans, it is not always vacant of human flaws. For instance, AI can become unpredictably biased. AI used in facial recognition systems has made racial judgments based on certain common facial characteristics. In addition, AI that gathers data from multiple sources that span a person’s financial status, credit status, education, and individual likes and dislikes could be used to profile certain groups for nefarious intentions. Moreover, AI has the potential to be weaponized in ways that we have yet to comprehend.

There is also the question of how internal auditors will be able to audit AI. Keeping AI safe from internal fraudsters and external adversaries is going to be paramount. AI’s ability to think and act faster than humans will challenge all of us to create novel ways of designing and testing controls to measure AI’s performance. This, in turn, will likely make partnerships with consultants that can fill knowledge gaps even more valuable. 

Challenges and pitfalls aside, AI will likely have a tremendous positive effect on the internal audit profession by simultaneously identifying risks and evaluating processes and control design. In fact, it is quite possible that the first adopters of AI in many organizations may not be the cybersecurity departments at all, but rather the internal auditor’s office. As a result, future internal auditors will become highly technical professionals and perhaps trailblazers in this new and amazing technology.

Article
Artificial intelligence and the future of internal audit

The world of professional sports is rife with instability and insecurity. Star athletes leave or become injured; coaching staff make bad calls or public statements. The ultimate strength of a sports team is its ability to rebound. The same holds true for other groups and businesses. Chapter 7 in BerryDunn’s Cybersecurity Playbook for Management looks at how organizations can prepare for, and respond to, incidents.

The final two chapters of this Cybersecurity Playbook for Management focus on the concept of resiliency. What exactly is resiliency?
RG: Resiliency refers to an organization’s ability to keep the lights on—to keep producing—after an incident. An incident is anything that disrupts normal operations, such as a malicious cyberattack or an innocent IT mistake.

Among security professionals, attitudes toward resiliency have changed recently. Consider the fact that the U.S. Department of Defense (DOD) has come out and said, in essence, that cyberwarfare is a war that it cannot win—because cyberwarfare is so complex and so nuanced. The battlefield changes daily and the opponents have either a lot of resources or a lot of time on their hands. Therefore, the DOD is placing an emphasis on responding and recovering from incidents, rather than preventing them.

That’s sobering.
RG: It is! And businesses and organizations should take note of this attitude change. Protection, which was once the start and endpoint for security, has given way to resiliency.

When and why did this attitude change occur?
RG: Several years ago, security experts started to grasp just how clever certain nation states, such as China and Russia, were at using malicious software. If you had to point to one significant event, it is likely the 2013 Target breach.

What are some examples of incidents that managers need to prepare for?
RG: Examples range from external breaches and insider threats to instances of malfeasance or incompetence. Different types of incidents can lead to the same types of results, yet you can't take a broad, one-size-fits-all view of incidents. Managers should work with their teams to create incident response plans that reflect the threats associated with their specific line of business. A handful of general incident response plans isn’t going to cut it.

Managers need to work with their teams to develop a specific incident response plan for each specific type of incident. Why? Well, think of it this way: Your response to a careless employee should be different from your response to a malicious employee, for a whole host of legal reasons.

Incident response is not a cookie-cutter process. In fact, it is quite the opposite. This is one of the reasons I highly suggest that security teams include staff members with liberal arts backgrounds. I’m generalizing, but these people tend to be creative. And when you’re responding to incidents, you want people who can look at a problem or situation from a global or external perspective, not just a technical or operational perspective. These team members can help answer questions such as, what does the world see when they look at our organization? What organizational information might be valuable to, or targeted by, malicious actors? You’ll get some valuable fresh perspectives.

How short or long should the typical incident response plan be?
RG: They can be as short as needed; I often see good incident response plans no more than three or four pages in length. However, it is important that incident response plans are task oriented, so that it is clear who does what next. And when people follow an incident response plan, they should physically or digitally check off each activity, then record each activity.
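The "check off, then record" discipline described above amounts to a very simple data structure. The sketch below is a minimal illustration, with invented step names and roles; real plans would live in help desk or workflow software, as discussed next.

```python
# Minimal sketch of a task-oriented incident response checklist: each step
# is checked off and timestamped, leaving an auditable record of who did
# what, and when. Steps and assignee names are illustrative only.
from datetime import datetime, timezone

class ResponsePlan:
    def __init__(self, steps):
        self.pending = list(steps)   # ordered "who does what next"
        self.log = []                # the recorded activities

    def complete(self, step, performed_by):
        self.pending.remove(step)
        self.log.append({
            "step": step,
            "by": performed_by,
            "at": datetime.now(timezone.utc).isoformat(),
        })

plan = ResponsePlan([
    "Isolate affected host",
    "Notify incident response lead",
    "Preserve logs for forensics",
])
plan.complete("Isolate affected host", "it-oncall")
print(plan.pending)          # two steps remain
print(plan.log[0]["step"])   # completed step, with actor and timestamp
```

The point is not the code but the habit: every completed activity leaves a record that auditors, lawyers, and leadership can reconstruct after the fact.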

What system or software do you recommend for recording incidents and responses?
RG: There are all types of help desk software you can use, including free and open source software. I recommend using help desk software with workflow capabilities so your team can assign and track tasks.

Any other tips for developing incident response plans?
RG: First, managers should work with, and solicit feedback from, different data owners and groups within the organization—such as IT, HR, and Legal—when developing incident response plans. If you create these documents in a vacuum, they will be useless.

Second, managers and their teams should take their time and develop the most “solid” incident response plans possible. Don’t rush the process. The effectiveness of your incident response plans will be critical in assessing your organization’s ability to survive a breach. Because of this, you should be measuring your response plans through periodic testing, like conducting tabletop exercises.

Third, keep your organization’s customers in mind when developing these plans. You want to make sure external communications are consistent, accurate, and within the legal requirements for your business or industry. The last thing you want is customers receiving conflicting messages about the incident. This can not only cause unnecessary grief for you, but also an immeasurable loss of customer confidence.

Are there any decent incident response plans in the public domain that managers and their teams can adapt for their own purposes?
RG: Yes. My default reference is the National Institute of Standards and Technology (NIST). NIST has many special publications that describe the incident response process, how to develop a solid plan, and how to test your plan.

Should organizations have dedicated incident response teams?
RG: Definitely. Larger organizations usually have the resources and ability to staff these teams internally. Smaller organizations may want to consider hiring a reputable third party to act as an incident response team. The key with hiring a third party? Don’t wait until an incident occurs! If you wait, you’re going to panic, and make panic-based decisions. Be proactive and hire a third party on retainer.

That said, even larger organizations should consider hiring a third party on an annual basis to review incident response plans and processes. Why? Because every organization can grow complacent, and complacency kills. A third party can help gauge the strengths and weaknesses of your internal incident response teams, and provide suggestions for general or specific training. A third party can also educate your organization about the latest and greatest cyber threats.

Should managers empower their teams to conduct internal “hackathons” in order to test incident response?
RG: Sure! It’s good practice, and it can be a lot of fun for team members. There are a few caveats. First, don’t call it a “hackathon.” The word can elicit negative reactions from upper management—whose support you really need. Call it “active testing” or “continuous improvement exercises.” These activities allow team members to think creatively, and are opportunities for them to boost their cybersecurity knowledge. Second, be prepared for pushback. Some managers worry if team members gain more cybersecurity skills, then they’ll eventually leave the organization for another, higher-paying job. I think you should be committed to the growth of your team members; it’ll only make your organization more secure.

What are some best practices managers should follow when reporting incidents to their leadership?
RG: Keep the update quick, brief, and to the point. Leave all the technical jargon out, and keep everything in a business context. This way leadership can grasp the ramifications of the event and understand what matters. Be prepared to outline how you’re responding and what actions leadership can take to support the incident response team and protect the business. In the last chapter, I mentioned what I call the General Colin Powell method of reporting, and I suggest using that method when informing leadership. Tell them what you know, what you don’t know, what you think, and what you recommend. Have answers, or at least a plan.

Above all else, don’t scare leadership. If you present them with panic, you’re going to get panic back. Be a calm voice in the storm. Management will respond better, as will your team.

Another thing to keep in mind is that different business leaders have different responses to this sort of news. An elected official, for example, might react differently than the CEO of a private company, simply due to possible political fallout. Keep this context in mind when reporting incidents. It can help you craft the message.

How much organization-wide communication should there be about incidents?
RG: That’s a great question, but a tough one to answer. Transparency is good, but it can also unintentionally lead to further incidents. Do you really want to let your whole organization know about an exploitable weakness? Also, employees can spread information about incidents on social media, which can actually lead to the spread of misinformation. If you are in doubt about whether or not to inform the entire organization about an incident, refer to your Legal Department. In general, organization-wide communication should be direct: We’ve had an incident; these are the facts; this is what you are allowed to say on social media; and this is what you’re not allowed to say on social media.

Another great but tough question: When do you tell the public about an incident? For this type of communication, you’re going to need buy-in from various sources: leadership, Legal, HR, and your PR team or external PR partners. You have to make sure the public messaging is consistent. Otherwise, citizens and the media will try to poke holes in your official story. And that can lead to even more issues.

So what’s next?
RG: Chapter 8 will focus on how managers can help their organizations recover from a cybersecurity incident.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Article
Incident response: Cybersecurity playbook for management

Cloud services are becoming increasingly ubiquitous, and are rapidly changing how companies and organizations conduct their day-to-day business.

Many higher education institutions currently utilize cloud services for learning management systems (LMS) and student email systems. Yet there are some common misunderstandings and assumptions about cloud services, especially among higher education administrative leaders who may lack IT knowledge. The following information will provide these leaders with a better understanding of cloud services and how to develop a cloud services strategy.

What are cloud services?

Cloud services are internet-based technology services provided and/or hosted by offsite vendors. Cloud services can include a variety of applications, resources, and services, and are designed to be easily scalable, cost effective, and fully managed by the cloud services vendor.

What are the different types?

Cloud services are generally categorized by what they provide. Today, there are four primary types of cloud services:

[Image: Cloud Service Types]

Cloud services can be further categorized by how they are provided:

  1. Private cloud services are dedicated to only one client. Security and control are the biggest benefits of using a private cloud service.
  2. Public cloud services are shared across multiple clients. Cost effectiveness is the biggest benefit of public cloud services because resources are shared among a large number of clients.
  3. Hybrid cloud services are combinations of on-premise software and cloud services. The value of hybrid cloud services is the ability to adopt new cloud services (private or public) slowly while maintaining on-premise services that continue to provide value.

How do cloud services benefit higher education institutions?

Higher education administrative leaders should understand that cloud services provide multiple benefits.
Some examples:

[Image: Cloud services benefits for higher education]


What possible problems do cloud services present to higher education institutions?

At the dawn of the cloud era, many of the problems were technical or operational in nature. As cloud services have become more sophisticated, the problems have become more security and business related. Today, higher education institutions have to tackle challenges such as cybersecurity/disaster recovery, data ownership, data governance, data compliance, and integration complexities.

While these problems and questions may be daunting, they can be overcome with strong leadership and best-practice policies, processes, and controls.

How can higher education administrative leaders develop a cloud services strategy?

You should work closely with IT leadership to complete this five-step planning checklist to develop a cloud services strategy: 

1. Identify new services to be added or consolidated; build a business case and identify the return on investment (ROI) for moving to the cloud, in order to answer:

   • What cloud services does your institution already have?
   • What services should you consider replacing with cloud services, and why?
   • How are data decisions being made?

2. Identify design, technical, network, and security requirements (e.g., private or public; are there cloud services already in place that can be expanded upon, such as a private cloud service?), in order to answer:

   • Is your IT staff ready to migrate, manage, and support cloud services?
   • Do your business processes align with using cloud services?
   • Do cloud service-provided policies align with your institution’s security policies?
   • Do you have the in-house expertise to integrate cloud services with existing on-premise services?

3. Decide where data will be stored and how it will be governed (e.g., on-premise, off-premise data center, cloud), in order to answer:

   • Who owns the data in the institution’s cloud, and where does it reside?
   • Who is accountable for data decisions?

4. Integrate with current infrastructure; ensure the cloud strategy easily allows scalability for expansion and additional services, in order to answer:

   • What integration points will you have between on-premise and cloud applications or services, and can the institution easily implement, manage, and support them?

5. Identify business requirements — budget, timing, practices, policies, and controls required for cloud services and compliance — in order to answer:

   • Will your business model need to change in order to support a different cost model for cloud services (i.e., less capital for equipment purchases every three to five years versus a steady monthly/yearly operating cost model for cloud services)?
   • Does your institution understand the current state and federal compliance and privacy regulations as they relate to data?
   • Do you have a contingency plan if your primary cloud services provider goes out of business?
   • Do your contracts align with institutional, state, and federal guidelines?

Need assistance?

BerryDunn’s higher education team focuses on advising colleges and universities in improving services, reducing costs, and adding value. Our team is well qualified to assist in understanding the cloud “skyscape.” If your institution seeks to maximize the value of cloud services or develop a cloud services strategy, please contact me.

Article
Cloud services 101: An almanac for higher education leaders

Any sports team can pull off a random great play. Only the best sports teams, though, can pull off great plays consistently — and over time. The secret to this lies in the ability of the coaching staff to manage the team on a day-to-day basis, while also continually selling their vision to the team’s ownership. Chapter Six in BerryDunn’s Cybersecurity Playbook for Management looks at how managers can achieve similar success through similar actions.

The title of this chapter is “The Workflow.” What are we talking about today?
RG: In previous chapters, we’ve walked managers through cybersecurity concepts like maturity, capacity, and discovery. Today, we’re going to discuss how you can foster a consistent and repeatable cybersecurity program — the cybersecurity workflow, if you will. And for managers, this is where game planning begins. To achieve success, they need to effectively oversee their team on a day-to-day basis, and continually sell the cybersecurity program to the business leadership for whom they work — the board or CEO.

Let’s dive right in. How exactly do managers oversee a cybersecurity program on a day-to-day basis?
RG: Get out of the way, and let your team do its work. By this point, you should know what your team is capable of. Therefore, you need to trust your team. Yet you should always verify. If your team recommends purchasing new software, have your team explain, in business terms, the reasons for the purchase. Then verify those reasons. Operationalizing tools, for example, can be difficult and costly, so make sure they put together a road map with measurable outcomes before you agree to buy any tools — even if they sound magical!

Second, empower your team by facilitating open dialogue. If your team brings you bad news, listen to the bad news — otherwise, you’ll end up alienating people. Know that your team is going to find things within your organization’s “auditable universe” that are going to make you uncomfortable from a cybersecurity point of view. Nevertheless, you need to encourage your team to share the information, so don’t overreact.

Third, give your team a communication structure that squelches a crisis-mode mentality — “Everything’s a disaster!” In order to do that, make sure your team gives every weakness or issue they discover a risk score, and log the score in a risk register. That way, you can prioritize what is truly important.
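The risk-register discipline above is easy to make concrete. The sketch below uses a common likelihood x impact scoring scheme on 1-5 scales; the scales, findings, and field names are illustrative assumptions, not a prescribed methodology.

```python
# Minimal risk-register sketch: every finding gets a likelihood x impact
# score, and the register is sorted so the team works on what is truly
# important first, rather than treating everything as a disaster.

def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact   # simple 1-25 scale

register = [
    {"finding": "Unpatched VPN appliance", "likelihood": 4, "impact": 5},
    {"finding": "Stale contractor account", "likelihood": 3, "impact": 3},
    {"finding": "Missing door badge log",   "likelihood": 2, "impact": 2},
]
for item in register:
    item["score"] = risk_score(item["likelihood"], item["impact"])

register.sort(key=lambda i: i["score"], reverse=True)
print([(i["finding"], i["score"]) for i in register])
```

Even a spreadsheet version of this gives managers what the text asks for: a shared, prioritized view that squelches crisis-mode thinking.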

Fourth, resolve conflicts between different people or groups on your team. Take, for example, conflict between IT staff and security staff (read more here). It is a common issue, as there is natural friction between these groups, so be ready to deal with it. IT is focused on running operations, while security is focused on protecting operations. Sometimes, protection mechanisms can disrupt operations. Therefore, managers need to act as peacemakers between the two groups. Don’t show favoritism toward one group or another, and don’t get involved in nebulous conversations regarding which group has “more skin in the game.” Instead, focus on what is best for your organization from a business perspective. The business perspective ultimately trumps either IT or security concerns.

Talk about communication for a moment. Managers often come from business backgrounds, while technical staff often come from IT backgrounds. How do you foster clear communication across this divide?
RG: Have people talk in simple terms. Require that everyone on your team use plain language to describe what they know or think. I recommend using what I call the Colin Powell method of reporting:

1. Tell me what you know.
2. Tell me what you don’t know.
3. Tell me what you think.
4. Tell me what you recommend.

When you ask team members questions in personal terms — “Tell me what you know”—you tend to receive easy-to-understand, non-jargon answers.

Something that we really haven’t talked about in this series is cybersecurity training. Do you suggest managers implement regular cybersecurity training for their team?
RG: This is complicated, and my response will likely be a little controversial to many. Yes, most organizations should require some sort of cybersecurity training. But I personally would not invest a lot of time or money into cybersecurity training beyond the basics for most users and specific training for technical staff. Instead, I would plan to spend more money on resiliency — responding to, and recovering from, a cybersecurity attack or incident. (We’ll talk about resiliency more in the next two chapters.) Why? Well, you can train people all day long, but it only takes one person to be malicious, or to make an innocent mistake, that leads to a cybersecurity attack or incident. Let’s look at my point from a different perspective. Pretend you’re the manager of a bank, and you have some money to spend on security. Are you going to spend that money on training your employees how to identify a robber? Or are you going to spend that money on a nice, state-of-the-art vault?

Let’s shift from talking about staff to talking about business leadership. How do managers sell the cybersecurity program to them?
RG: Use business language, not technical language. For instance, a CEO may not necessarily care much about the technical behavior of a specific malware, but they are going to really care about the negative effects that malware can have on the business.

Also, keep the conversation short, simple, and direct. Leadership doesn’t have time to hear about all you’re doing. Leadership wants progress updates and a clear sense of how the cybersecurity program is helping the business. I suggest discussing three to four high-priority security risks, and summarizing how you and your team are addressing those risks.

And always remember that in times of crisis, those who keep a cool head tend to gain the most support. Therefore, when talking to the board or CEO, don’t be the bearer of “doom and gloom.” Be calm, positive, empowering, and encouraging. Provide a solution. And make leadership part of the solution by reminding them that they, too, have cybersecurity responsibilities, such as communicating the value of the cybersecurity program to the organization — internal PR, in other words.

How exactly should a manager communicate this info to leadership? Do you suggest one-on-one chats, reports, or presentations?
RG: This all depends on leadership. You know, some people are verbal learners; some people are visual learners. It might take some trial and error to figure out the best medium for conveying your information, and that’s OK. Remember: cybersecurity is an ongoing process, not a one-and-done event. However, if you are going to pursue the one-on-one chat route, just be prepared, materials-wise. If leadership asks for a remediation plan, then you better have that remediation plan ready to present!

What is one of the biggest challenges that managers face when selling cybersecurity programs to leadership?
RG: One of the biggest challenges is addressing questions about ROI, because there often are no quantifiable financial ROIs for cybersecurity. But organizations have to protect themselves. So the question is, how much money is your organization willing to spend to protect itself? Or, in other words, how much risk can your organization reduce — and does this reduction justify the cost?

One possible way to communicate the value of cybersecurity to leadership is to compare it to other necessary elements within the organization, such as HR. What is the ROI of HR? Who knows? But do you really want your organization to lack an HR department? Think of all the possible logistic and legal issues that could swamp your organization without an HR department. It’s terrifying to think about! And the same goes for cybersecurity.

We’ve talked about how managers should communicate with their team and with business leadership. What about the organization as a whole?
RG: Sure! Regular email updates are great, especially if you keep them “light,” so to speak. Don’t get into minutia. That said, I also think a little bit of secrecy goes a long way. Organizations need to be aware of, and vigilant toward, insider threats. Loose lips sink ships, you know? Gone are the days when a person works for an organization for 30+ years. Employees come and go pretty frequently. As a result, the concept of company loyalty has changed. So make sure your organization-wide updates don’t give away too much cybersecurity information.

So what’s next?
RG: Chapter 7 will focus on how managers can help their organizations respond to a cybersecurity attack or incident.

Article
The workflow: Cybersecurity playbook for management

A professional sports team is an ever-changing entity. To have a general perspective on the team’s fluctuating strengths and weaknesses, a good coach needs to trust and empower their staff to discover the details. Chapter 5 in BerryDunn’s Cybersecurity Playbook for Management looks at how discovery can help managers understand their organization’s ever-changing IT environment. 

What is discovery, and how does it connect to capacity?
RG: Discovery is the process of mapping your organization’s capacity—people, processes, and tools—so you understand what is in your organization’s IT environment. In other words, it’s the auditing of your IT environment.

Of course, the most valuable thing within your IT environment, other than the people who access it, is the “thing” that drives your business. Often this thing is data, but it could be proprietary processes or machinery. For the purposes of this blog, we’ll focus on data. Discovery naturally answers questions such as:

• What in our IT environment is important to our business?
• How is it being used?
• Who has access to it, and how can we better protect it? 

How can managers tackle discovery?
RG: First, you need to understand that discovery requires accepting the fact that the environment is always evolving. Discovery is not a one-and-done process—it never ends. People introduce new things, like updated software, into IT environments all the time. Your IT environment is an always-shifting playing field. Think of Amazon’s Alexa devices. When someone plugs one into your internal wireless network, they’ve just expanded the attack surface available to a hacker by introducing a new device with its own set of vulnerabilities.

Second, you have to define the “auditable universe” by establishing manageable boundaries in direct proportion to your discovery team’s capabilities. I often see solicitations for proposals that ask for discovery of all assets in an IT environment. That could include a headquarters building, 20 satellite offices, and remote workers, and is going to take a long time to assess. I recently heard of a hospital discovering 41,000 internet-connected devices on their network—mostly Internet of Things (IoT) resources, such as heart monitors. Originally, the hospital had only been aware of about one-third of these devices. Keeping your boundaries realistic and manageable can prevent your team from being overwhelmed.

Third, managers should refrain from getting directly involved in discovery because it’s a pretty technical and time-consuming process. You should task a team to conduct discovery, and provide the discovery team with adequate tools. There are a lot of good tools that can help map networks and manage assets; we’ll talk about them later in this blog. Managers should mainly concern themselves with the results of discovery and trust in the team’s ability to competently map out the IT environment. Remember, the IT environment is always evolving, so even as the results roll in, things are changing.
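Because discovery never ends, one useful habit is diffing successive asset inventories so newly appeared (or vanished) devices stand out. The sketch below shows the idea with made-up hostnames; in practice the inventories would come from a network-mapping or asset-management tool, not hand-typed sets.

```python
# Discovery never ends: diff two asset inventories to flag devices that
# appeared or disappeared between scans. Hostnames are hypothetical.

def diff_inventory(previous: set, current: set):
    return {"new": sorted(current - previous),
            "gone": sorted(previous - current)}

last_week = {"hr-laptop-01", "lms-server", "printer-2f"}
this_week = {"hr-laptop-01", "lms-server", "smart-speaker-lobby"}

print(diff_inventory(last_week, this_week))
# {'new': ['smart-speaker-lobby'], 'gone': ['printer-2f']}
```

A smart speaker showing up between scans is exactly the Alexa scenario above: a new device, a new set of vulnerabilities, and something the discovery team should investigate.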

Who should managers select for the discovery team?
RG: Ideally, various groups of people. For instance, it makes sense for HR staff to conduct the people part of discovery. Likewise, it makes sense for data owners—staff responsible for certain data—to conduct the process part of discovery, and for IT staff to conduct the tool part.

However, I should point out that if you have limited internal resources, then the IT staff can conduct all three parts of discovery, working closely with all stakeholders. IT staff will have a pretty good sense of where data is held within the organization’s IT environment, and they will develop an understanding of what is important to the organization.

Could an organization’s security staff conduct discovery?
RG: Interestingly enough, security staff don’t always have day-to-day interactions with data. They are more focused on overall data protection strategies and tactics. Therefore, it makes more sense to leverage other staff, but the results of discovery (e.g., knowing where data resides, understanding the sensitivity of data) need to be shared with security staff. Ultimately, this knowledge will help security staff better protect your data.

What about hiring external resources to conduct discovery?
RG: It depends on what you’re trying to do. If the goal of discovery is to comply with some sort of regulatory standard or framework, then yes, hiring external resources makes sense. These resources could come in and, using the discovery process, conduct a formal assessment. It may also make sense to hire external resources if you’re short-staffed, or if you have a complex environment with undocumented data repositories, processes, and tools. Yet in each of these scenarios, the external resources will only be able to provide a point-in-time baseline. 

Otherwise, I recommend leveraging your internal staff. An internal discovery team should be able to handle the task if adequately staffed and resourced, and team members will learn a lot in the process. And as discovery never really ends, do you want to have to perpetually hire external resources?

People make up a big part of capacity. Should the discovery team focus on people and their roles in this process?
RG: Yes! It sounds odd that people and their roles are included in discovery, but it is important to know who is using and touching your data. At a minimum, the discovery team needs to conduct background checks. (This is one example of where HR staff need to be part of the discovery process.)

How can the discovery team best map processes?
RG: The discovery team has to review each process with the respective data owner. Now, if you are asking the data owners themselves to conduct discovery, then you should have them illustrate their own workflows. There are various process mapping tools, such as Microsoft Visio, that data owners can use for this.

The discovery team needs to acknowledge that data owners often perform their processes correctly through sheer repetition. The problems or potential vulnerabilities usually stem from a process that is inherently flawed or insecure, or from one person being in charge of too many processes. Managers should watch out for this. I'll give you a perfect example of the latter situation. I once helped a client walk through the process of system recovery.

During the process, we discovered that the individual responsible for system recovery also had the ability to manipulate database records and to print checks. In theory, that person could have cut themselves a check and then erased its history from the system. That's a big problem!

Other times, data owners perform their processes correctly, but inadvertently use compromised or corrupted tools, such as free software downloaded from the internet. The discovery team has to identify needed policy and procedure changes to prevent these situations from happening.
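One practical safeguard against corrupted downloads is checking a file against the publisher's published checksum before anyone runs it. As a rough sketch (the function names here are illustrative, not from any specific tool Rick mentions), a few lines of Python can compute and compare a SHA-256 hash:

```python
# Sketch: verify a downloaded file against a publisher-supplied SHA-256 hash.
# Function names are illustrative; many vendors publish these hashes alongside downloads.
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    """Return True only if the file's hash matches the expected value."""
    return sha256_of(path) == expected_hex.lower()
```

If the hashes don't match, the file was altered or corrupted somewhere between the publisher and the user, and it shouldn't be installed.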

Your mention of vulnerable software segues nicely to the topic of tools. How can the discovery team best map the technologies the organization uses?
RG: Technology is inherently flawed. You can't go a week without hearing about a new vulnerability in a widely used system or application. I suggest researching network scanning tools for identifying hosts within your network; vulnerability testing tools for identifying technological weaknesses or gaps; and penetration testing tools for simulating cyberattacks to assess cybersecurity defenses.
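To make the idea of network scanning concrete: at its simplest, a scanner probes a host to see which services answer. The sketch below, using only Python's standard library, checks which TCP ports on a host accept a connection. It is a toy illustration, not a substitute for dedicated scanning tools, and the function name is made up for this example:

```python
# Minimal illustration of port discovery: probe a set of TCP ports on a host
# and record which ones accept a connection. Real discovery work would use
# dedicated tools; this just shows the underlying mechanic.
import socket

def scan_host(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port is open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `scan_host("127.0.0.1", [22, 80, 443])` returns whichever of those ports are listening on the local machine. Only scan networks you own or are authorized to test.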

Let’s assume a manager has tasked a team to conduct discovery. What’s the next step?
RG: If you recall, in the previous blog I discussed the value of adopting a cybersecurity risk register, which is a document used to list the organization’s cybersecurity risks, record required risk mitigation actions, and identify who “owns” the risk. The next step is for your discovery team to start completing the risk register. The manager uses this risk register, and subsequent discussions with the team, to make corresponding business decisions to improve cybersecurity, such as purchasing new tools—and to measure the progress of mitigating any vulnerabilities identified in the discovery process. A risk register can become an invaluable resource planning tool for managers.

For discovery purposes, what’s the best format for a cybersecurity risk register?
RG: There are very expensive programs an organization can use to create a risk register; some extremely large banking companies use the RSA Archer GRC platform. However, you can build a very simple risk register in Excel. A spreadsheet works well for small and many mid-sized organizations, and other relatively inexpensive solutions are available as well. I say this because managers should aim for simplicity: you don't want the discovery team getting bogged down by a complex risk register.
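As an illustration of how simple such a register can be, the sketch below builds one as a CSV file that opens directly in Excel. The column names and the likelihood-times-impact scoring scheme are common conventions, not a formal standard, and the function names are invented for this example:

```python
# A deliberately simple risk register kept as a CSV file, so it can be
# opened and edited in Excel. Columns and scoring are illustrative.
import csv

COLUMNS = ["ID", "Risk description", "Likelihood (1-5)", "Impact (1-5)",
           "Score", "Risk owner", "Mitigation action", "Status"]

def add_risk(rows, risk_id, description, likelihood, impact,
             owner, mitigation, status="Open"):
    """Append one risk entry; the score is simply likelihood x impact."""
    rows.append([risk_id, description, likelihood, impact,
                 likelihood * impact, owner, mitigation, status])

def write_register(path, rows):
    """Write the header row and all risk entries to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        writer.writerows(rows)
```

A discovery team could call `add_risk` for each finding (for instance, unmanaged IoT devices on the network) and hand the resulting file to the manager, who tracks mitigation progress by updating the Status column.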

Finally, what are some discovery resources and reference guides that managers should become familiar with and utilize?
RG: I recommend the National Institute of Standards and Technology (NIST) Special Publication series. These publications outline specific, detailed methodologies you can use to improve your discovery process.

So what’s next?
RG: Chapter 6 will focus on synthesizing maturity, capacity, and discovery to create a resilient organization from a cybersecurity point of view.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Discovery: Cybersecurity playbook for management