
Blog

Trusting privileged accounts in the age of data breaches

05.21.19

Who has the time or resources to keep tabs on everything that everyone in an organization does? No one. Therefore, you naturally need to trust (at least on a certain level) the actions and motives of various personnel. At the top of your “trust level” are privileged users—such as system and network administrators and developers—who keep vital systems, applications, and hardware up and running. Yet, according to the 2019 Centrify Privileged Access Management in the Modern Threatscape survey, 74% of data breaches involved the use of privileged accounts. The survey also revealed that of the organizations responding:

  • 52% do not use password vaulting—password vaulting can help privileged users keep track of long, complex passwords for multiple accounts in an encrypted storage vault.
  • 65% still share the use of root and other privileged accounts—when the use of root accounts is required, users should invoke commands that inherit the privileges of the account (e.g., via sudo) without actually using the account. This ensures that “who” used the account can be tracked.
  • Only 21% have implemented multi-factor authentication—the obvious benefit of multi-factor authentication is stronger assurance when authenticating users, and in many sectors it is also becoming a compliance requirement.
  • Only 47% have implemented complete auditing and monitoring—thorough auditing and monitoring are vital to securing privileged accounts.

So how does one even begin to trust privileged accounts in today’s environment? 

1. Start with an inventory

To best manage and monitor your privileged accounts, start by finding and cataloguing all assets (servers, applications, databases, network devices, etc.) within the organization. This will be beneficial in all areas of information security such as asset management, change control and software inventory tracking. Next, inventory all users of each asset and ensure that privileged user accounts:

  • Require privileges granted be based on roles and responsibilities
  • Require strong and complex passwords (exceeding those of normal users)
  • Have passwords that expire often (30 days recommended)
  • Implement multi-factor authentication
  • Are not shared with others and are not used for normal activity (the user of the privileged account should have a separate account for non-privileged or non-administrative activities)

If an account is only required for a service or application, disable the account’s ability to log in from the server console and from across the network.
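As a rough illustration of the password requirements above, the following Python sketch (standard library only; the 16-character minimum and the character-class rules are assumptions for illustration, not a mandated policy) generates and validates privileged-account passwords:

```python
import secrets
import string

def meets_privileged_policy(password, min_length=16):
    """Check a password against an example privileged-account policy:
    longer than typical user passwords, with full character-class coverage."""
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

def generate_privileged_password(length=20):
    """Generate a long, random password suitable for a privileged account."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if meets_privileged_policy(candidate):
            return candidate
```

A password vault would then store the generated value; the point of the check is simply that privileged-account passwords should exceed, not merely match, normal-user requirements.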

2. Monitor—then monitor some more

The next step is to monitor the use of the identified privileged accounts. Enable event logging on all systems and aggregate the logs to a log monitoring system or a Security Information and Event Management (SIEM) system that alerts in real time when privileged accounts are active. Configure the system to alert you when privileged accounts access sensitive data or alter database structure. Report any changes to device configurations, file structure, code, and executable programs. If these changes do not correlate to an approved change request, treat them as incidents and investigate.
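The alerting logic a SIEM correlation rule applies can be sketched in a few lines. The account names and the key=value log format below are hypothetical; a real deployment would use the SIEM's own rule engine against its normalized event schema:

```python
PRIVILEGED_ACCOUNTS = {"root", "dbadmin", "netadmin"}  # hypothetical inventory

def privileged_activity_alerts(log_lines):
    """Scan aggregated auth-log lines and flag any activity by a
    privileged account, the way a SIEM correlation rule might."""
    alerts = []
    for line in log_lines:
        # Parse simple key=value pairs out of the log line
        fields = dict(part.split("=", 1) for part in line.split() if "=" in part)
        user = fields.get("user")
        if user in PRIVILEGED_ACCOUNTS:
            alerts.append({"user": user, "host": fields.get("host"), "raw": line})
    return alerts
```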

Consider software that analyzes user behavior and identifies deviations from normal activity. Privileged accounts accessing data or systems outside their normal routine could indicate malicious activity or a database attack from a compromised privileged account.
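At its core, this behavior analysis amounts to comparing observed activity against a per-account baseline. A minimal sketch (the baseline structure and account names are illustrative only; commercial tools build the baseline statistically):

```python
def deviations_from_baseline(baseline, observed_accesses):
    """Flag privileged-account accesses to systems outside each
    account's established baseline of normal activity.

    baseline: dict mapping account -> set of systems it normally touches
    observed_accesses: iterable of (account, system) pairs
    """
    findings = []
    for account, system in observed_accesses:
        if system not in baseline.get(account, set()):
            findings.append((account, system))
    return findings
```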

3. Secure the event logs

Finally, ensure that none of your privileged accounts have access to the logs being used for monitoring, nor have the ability to alter or delete those logs. In addition to real time monitoring and alerting, the log management system should have the ability to produce reports for periodic review by information security staff. The reports should also be archived for forensic purposes in the event of a breach or compromise.
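One common way to make archived logs tamper-evident, even to an administrator, is to hash-chain the entries so that altering or deleting any entry invalidates everything after it. A minimal sketch using Python's standard library (real log management systems implement this, plus signing, internally):

```python
import hashlib

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain, message):
    """Append a log entry whose hash covers the previous entry's hash,
    so any later alteration breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    digest = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    chain.append({"message": message, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every hash; return True only if no entry was altered."""
    prev_hash = GENESIS
    for entry in chain:
        expected = hashlib.sha256((prev_hash + entry["message"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```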

Gain further assistance (and peace of mind) 

BerryDunn understands how privileged accounts should be monitored and audited. We can help your organization assess your current event management process and make recommendations if improvements are needed. Contact our team.


Best practices for financial institution contracts with technology providers

As the financial services sector moves in an increasingly digital direction, you cannot overstate the need for robust and relevant information security programs. Financial institutions place more reliance than ever on third-party technology vendors to support core aspects of their business, and in turn place more reliance on those vendors to meet the industry’s high standards for information security. These include standards in the Gramm-Leach-Bliley Act, Sarbanes-Oxley Section 404, and regulations established by the Federal Financial Institutions Examination Council (FFIEC).

On April 2, 2019, the FDIC issued Financial Institution Letter (FIL) 19-2019, which outlines important requirements and considerations for financial institutions regarding their contracts with third-party technology service providers. In particular, FIL-19-2019 urges financial institutions to address how their business continuity and incident response processes integrate with those of their providers, and what that could mean for customers.

Common gaps in technology service provider contracts

As auditors of IT controls, we review lots of contracts between financial institutions and their technology service providers. When it comes to recommending areas for improvement, our top observations include:

  • No right-to-audit clause
    Including a right-to-audit clause encourages transparency and provides greater assurance that vendors are providing services, and charging for them, in accordance with their contract.
  • Unclear and/or inadequate rights and responsibilities around service disruptions
    In the event of a service incident, time and transparency are vital. Contracts that lack clear and comprehensive standards, both for the vendor and financial institution, regarding business continuity and incident response expose institutions to otherwise avoidable risk, including slow or substandard communications.
  • No defined recovery standards
    Explicitly defined recovery standards are essential to ensuring both parties know their role in responding to and recovering from a disaster or other technology outage.

FIL-19-2019 also reminds financial institutions that they need to properly inform regulators when they undertake contracts or relationships with technology service providers. The Bank Service Company Act requires financial institutions to inform regulators in writing when receiving third-party services like sorting and posting of checks and deposits, computation and posting of interest, preparation and mailing of statements, and other functions involving data processing, Internet banking, and mobile banking services.

Writing clearer contracts that strengthen your institution

Financial institutions should review their contracts, especially those that are longstanding, and make necessary updates in accordance with FDIC guidelines. As operating environments continue to evolve, older contracts, often renewed automatically, are particularly easy to overlook. You also need to review business continuity and incident response procedures to ensure they address all services provided by third parties.

Senior management and the Board of Directors hold ultimate responsibility for managing a financial institution’s relationship with its technology service providers. Management should inform board members of any and all services that the institution receives from third parties to help them better understand the operating environment and information security needs.

Not sure what to look for when reviewing contracts? Some places to start include:

  • Establish your right-to-audit
    All contracts should include a right-to-audit clause, which preserves your ability to access and audit vendor records relating to their performance under contract. Most vendors will provide documentation of due diligence upon request, such as System and Organization Control (SOC) 1 or 2 reports detailing their financial and IT security controls.

    Many right-to-audit clauses also include a provision allowing your institution to conduct its own audit procedures. At a minimum, don’t hesitate to perform occasional walk-throughs of your vendor’s facilities to confirm that your contract’s provisions are being met.
  • Ensure connectivity with outsourced data centers
    If you outsource some or all of your core banking systems to a hosted data center, place added emphasis on your institution’s business continuity plan to ensure connectivity, such as through the use of multiple internet or dedicated telecommunications circuits. Data vendors should, by contract, be prepared to assist with alternative connectivity.
  • Set standards for incident response communications 
    Clear expectations for incident response are crucial to helping you quickly and confidently manage the impact of a service incident on your customers and information systems. Vendor contracts should include explicit requirements for how and when vendors will communicate in the event of any issue or incident that affects your ability to serve your customers. You should also review and update contracts after each incident to address any areas of dissatisfaction with vendor communications.
  • Ensure regular testing of defined disaster recovery standards
    While vendor contracts don’t need to detail every aspect of a service provider’s recovery standards, they should ensure those standards will meet your institution’s needs. Contracts should guarantee that the vendor periodically tests, reviews, and updates their recovery standards, with input from your financial institution.

    Your data center may also offer regular disaster recovery and failover testing. If they do, your institution should participate in it. If they don’t, work with the vendor to conduct annual testing of your ability to access your hosted resources from an alternate site.

As financial institutions increasingly look to third-party vendors to meet their evolving technology needs, it is critical that management and the board understand which benefits—and related risks—those vendors present. By taking time today to align your vendor contracts with the latest FFIEC, FDIC, and NCUA standards, your institution will be better prepared to manage risk tomorrow.

For more help gaining control over risk and cybersecurity, see our blog on sustainable solutions for educating your Board of Directors and creating a culture of cybersecurity awareness.
 

Blog
Are your vendor contracts putting you at risk?

RANSOMWARE UPDATE: It happened again. Another ransomware attack hit very large corporations around the globe. Much like WannaCry, a worm spread through entire networks, encrypting data and locking users out of systems.

How did it work? A hacking tool known as EternalBlue (disclosed as part of a leak of the NSA’s hacking tools) exploits a vulnerability in the Windows implementation of the Server Message Block (SMB) protocol. The ransomware uses EternalBlue to spread through networks.

Here’s the kicker — Microsoft released a patch for the vulnerability EternalBlue exploits in March. The attack could have been prevented if companies had maintained and enforced requirements to stay current with patches. As we see more of these attacks, remember to test system backups and your ability to restore systems from those backups. These backups will be critical if your organization falls victim to a similar attack. Restore your data, and make sure your systems are fully patched. For more information, get the Top 10 IT Security Risks E-book here.

The adage “You can’t afford to advertise, but you can’t afford not to” could well be applied to patch management. The WannaCry/WannaCrypt ransomware attack that spread across the globe has highlighted the need to keep the Windows OS up to date. Many organizations still underfund their patch management efforts, and some still operate under the “I don’t want to be first” method of patching. This attack illustrates the need for swift patching: exploits are coming out too fast to wait and see if a patch is safe. And for those still running Windows XP (unsupported since April 2014), it was just a matter of time before this happened.

Organizations and individuals that deployed basic and timely patch management were protected from this attack. Microsoft released the fix in March of this year, before the exploit first appeared in April.

In this attack, ransomware is only half the story

The WannaCry code could have worked even on patched and updated computers, if it had been installed directly. Cybercriminals may target a specific organization and find ways to get ransomware installed on internal systems, such as database servers. This can be done through social engineering techniques (e.g., tricking an employee into opening an attachment or clicking a link in an email). But the criminal groups behind this attack combined the ransomware with a worm that exploited an SMB vulnerability in the Windows operating system. The worm allowed the malware to spread through unpatched computers (desktops and servers) in over 100 countries.

Unfortunately, the vulnerability was allegedly discovered by the National Security Agency in 2014 but was never reported to Microsoft because the agency decided to keep the exploit as part of its cyber toolkit to spy on others. That toolkit was stolen and leaked in April 2017 by a group calling itself the Shadow Brokers.

Regardless of who carries the blame, the bottom line is that this disaster could have been prevented with basic upgrade and patching policies. Organizations that are hesitant to install patches for any reason may want to rethink this practice and base it on a realistic assessment of risk.

Patch management: no different from other important policies

Patch management should blend with other policies such as Backup and Recovery, Incident Response, and Disaster Recovery. Consider and assess the risk of applying patches as you would any other threat and plan for it accordingly. What you need to do:

  • Make sure the backup can recover the server in a tolerable amount of time (based on risk assessment or business impact analysis).
  • Perform backups before patching.
  • Include actions in your incident response and disaster recovery plans to recover a system impacted by a bad or improperly installed patch.
  • Invest in patching, backup, and recovery infrastructure at your organization (remember the “can’t afford not to” advertising analogy).

 
Here are actions to take to protect your organization from the next WannaCry/WannaCrypt-style outbreak:

  • Block Server Message Block (SMB) ports (particularly TCP ports 139 and 445) from external hosts, along with User Datagram Protocol (UDP) ports 137 and 138, from the local network to the wide area network.
  • Keeping anti-virus software up to date should be standard practice.
  • Whitelist the applications allowed to run on your servers and desktops using AppLocker or other whitelisting products.
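After applying the port-blocking rules above, it is worth verifying from an outside host that the SMB ports are actually unreachable. A small Python sketch of such a check (a simple TCP probe; a blocked or filtered port will be refused or time out):

```python
import socket

SMB_TCP_PORTS = [139, 445]

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_smb_ports(host):
    """Return the SMB TCP ports still reachable on the given host."""
    return [p for p in SMB_TCP_PORTS if port_open(host, p)]
```

Running `exposed_smb_ports` against your public address from an external network should return an empty list if the firewall rules are in place.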

No one security measure is foolproof. But patch management is an important and fundamental action and should be at the forefront of any organization’s layered security defense methodology. If you would like help analyzing your current patch management or security policies, our IT experts can share specific insights to help your organization.


Learn more about the top IT Security Risks for 2017.

Blog
You think patch management makes you "wanna' cry"? Try "not patching".

Read this if you are a state Medicaid Director, State Medicaid Chief Information Officer, State Medicaid Project Manager, State Procurement Officer, or work in a State Medicaid Program Integrity Unit.

The Centers for Medicare & Medicaid Services (CMS) issued a Payment Error Rate Measurement (PERM) Final Rule on July 5, 2017, that made several changes to the PERM requirements. One important change was the updates to the Medicaid Eligibility Quality Control (MEQC) requirement. 

The Final Rule restructures the MEQC program into a pilot program that requires states to conduct eligibility reviews during the two years between PERM cycles. CMS has also introduced the potential for imposing disallowances or reductions in federal funding percentage (FFP) as a result of PERM eligibility error rates that do not meet the national standard. One measure states can use to lessen the chance of this happening is to successfully carry out the requirements of the MEQC pilot. 

What states should know―important points to keep in mind regarding MEQC reviews:

  • Each state must have a team in place to conduct MEQC reviews. The individuals responsible for the MEQC reviews and associated activities must be separate from the state agencies and personnel responsible for Medicaid and Children’s Health Insurance Program (CHIP) policy and operations, including eligibility determinations.
  • States can apply for federal funding to help cover the costs of the MEQC activities. CMS encourages states to partner with a contractor in conducting the MEQC reviews.
  • The deadline to submit the state planning document to CMS is November 1 following the end of your state’s PERM cycle. If you are a Cycle 2 state, your MEQC planning document is due by November 1, 2019. 
  • If you are a Cycle 1 state, you are (or should be) currently undergoing the MEQC reviews.
  • There are minimum sample size requirements for the MEQC review period: 400 negative cases and 400 active cases (consisting of both Medicaid and CHIP cases) over a period of 12 months.
  • Upon conclusion of all MEQC reviews, states must submit a final findings report along with a corrective action plan that addresses all error findings identified during the MEQC review period.

CMS encourages states to utilize federal funding to carry out and fulfill MEQC requirements. BerryDunn has staff with experience in preparing Advanced Planning Documents (APD) and can assist your state in submitting an APD request to CMS for these MEQC activities. 

Check out the previously released blog, “PERM: Prepared or Not Prepared?” and stay tuned for upcoming blogs about specific PERM topics, including the financial impacts of PERM, and how each review phase will affect your state.   

For questions or to find out more, contact our team.

Blog
PERM: Does MEQC affect states?

Federal contractors with the Centers for Medicare & Medicaid Services (CMS) have begun performing Payment Error Rate Measurement (PERM) reviews under the Final Rule issued in July 2017—a rule that many states may not realize could negatively impact their Medicaid budgets.

PERM is a complex process—states must focus on several activities over a recurring three-year period—and states may not have the resources needed to make PERM requirements a priority. However, with the Final Rule, this PERM eligibility review could have financial implications. 

After freezing the eligibility measurement for four years while undergoing pilot review, CMS has established new requirements for the eligibility review component and made significant changes to the data processing and medical record review components. As part of the Final Rule, CMS may implement reductions in the amount of federal funding provided to a state’s Medicaid and Children’s Health Insurance Program (CHIP) programs based on the error rates identified from the eligibility reviews. 

Since the issuance of the Final Rule in July 2017, Cycle 1 states are the first group of states to undergo a PERM cycle, including reviews of the data processing, medical record, and eligibility components. These states are wrapping up the final review activities, and Cycle 2 states are in the early stages of their PERM reviews.

How can your state prepare?

Whether your state is a Cycle 1, Cycle 2, or Cycle 3 state, there are multiple activities your Medicaid departments should engage in throughout each three-year period during and between PERM cycles: 

  • Analyzing prior errors cited or known issues, along with the root cause of the error
  • Identifying remedies to reduce future errors
  • Preparing and submitting required questionnaires and documents to the federal contractors for an upcoming review cycle
  • Assisting federal contractors with current reviews and findings
  • Preparing for and undergoing Medicaid Eligibility Quality Control (MEQC) planning and required reviews
  • Corrective action planning

Is your state ready?

We’ve compiled a few basic questions to gauge your state’s readiness for the PERM review cycle:

  • Do you have measures in place to ensure all eligibility factors under review are identifiable and that all federal and state regulations are being met? The eligibility review contractor (ERC) will reestablish eligibility for all beneficiaries sampled for review. This process involves confirming all verification requirements are in the case file, income requirements are met, placement in an accurate eligibility category has taken place, and the timeframe for processing all determinations meets federal and state regulations. 
  • Do you have up-to-date policy and procedures in place for determining and processing Medicaid or CHIP eligibility of an individual? Ensuring eligibility policies and procedures meet federal requirements is just as important as ensuring the processing of applications, including both system and manual actions, meet the regulations. 
  • Do you have up-to-date policy, procedures, and system requirements in place to ensure accurate processing of all Medicaid/CHIP claims? Reviewers will confirm the accuracy of all claim payments based on state and federal regulations. Errors are often cited due to the claims processing system allowing claims to pay that do not meet regulations.
  • Do you have a dedicated team in place to address all PERM requirements to ensure a successful review cycle? This includes staff to answer questions, address review findings, and respond to requests for additional information. During a review cycle, the federal contractors will cite errors based on their best understanding of policies and/or ability to locate required documentation. Responding to requests for information or reviewing and responding to findings in a timely manner should be a priority to ensure accurate findings. 
  • Have you communicated all PERM requirements and updates to policy changes to all Medicaid/CHIP providers? Providers play two integral roles in the success of a PERM review cycle. Providers must understand all claims submission requirements in order to accurately submit claims. Additionally, the medical record review component relies on providers responding to the request for the medical records on a sampled claim. Failure to respond will result in an error. Therefore, states must maintain communication with providers to stress the importance of responding to these requests.
  • Have you begun planning for the MEQC requirement? Following basic requirements identified by CMS during your state’s MEQC period, your state must submit a case planning document to CMS for approval prior to the MEQC review period. After the MEQC review, your state should be prepared to issue findings reports, including a corrective action plan as it relates to MEQC findings.

Need help piloting your state’s PERM review process?

BerryDunn has subject matter experts experienced in conducting PERM reviews, including a thorough understanding of all three PERM review components—eligibility, data processing, and medical record reviews. 

We would love to work with your state to see that measures are in place that will help ensure the lowest possible improper payment error rate. Stay tuned for upcoming blogs where we will discuss other PERM topics, including MEQC requirements, the financial impacts of PERM, and additional details related to each phase of PERM. For questions or to find out more, please email me.
 

Blog
PERM: Prepared or not prepared?

LIBOR is leaving—is your financial institution ready to make the most of it?

In July 2017, the UK’s Financial Conduct Authority announced the phasing out of the London Interbank Offered Rate, commonly known as LIBOR, by the end of 2021 [1]. With less than two years to go, US federal regulators are urging financial institutions to start assessing their LIBOR exposure and planning their transition. Here we offer some general impacts of the phasing out, some specific actions your institution can take to prepare, and, finally, background on how we got here (see Background at right).

How will the phase-out impact financial institutions?

The Federal Reserve estimates roughly $200 trillion in LIBOR-indexed notional value transactions in the cash and derivatives market [2]. LIBOR is used to help price a variety of financial services products, including $3.4 trillion in business loans and $1.3 trillion in consumer loans, as well as derivatives, swaps, and other credit instruments. Even excluding loans and financial instruments set to mature before 2021—estimated by the FDIC at 82% of the above $200 trillion—LIBOR exposure is still significant [3].

A financial institution’s ability to lend money is largely dependent on the stability of its capital position. For institutions with a significant amount of LIBOR-indexed assets and liabilities, the transition means less certainty in expected future cash flows and a less stable capital position, which could prompt institutions to deny loans they might otherwise have approved. A change in expected cash flows could also have several indirect consequences: criticized assets, assessed for impairment based on their expected future cash flows, could require a specific reserve due to a lower present value of those cash flows.

The importance of fallback language in loan agreements

Fallback language in loan agreements plays a pivotal role in financial institutions’ ability to manage their LIBOR-related financial results. Most loan agreements include language that provides guidance for determining an alternate reference rate to “fall back” on in the event the loan’s original reference rate is discontinued. However, if this language is non-existent, contains fallbacks that are no longer adequate, or lacks certain key provisions, it can create unexpected issues when it comes time for financial institutions to reprice their LIBOR loans. Here are some examples:

  • Non-existent or inadequate fallbacks
    According to the Alternative Reference Rates Committee, a group of private-market participants convened by the Federal Reserve to help ensure a successful LIBOR transition, "Most contracts referencing LIBOR do not appear to have envisioned a permanent or indefinite cessation of LIBOR and have fallbacks that would not be economically appropriate" [4].

    For instance, industry regulators have warned that without updated fallback language, the discontinuation of LIBOR could prompt some variable-rate loans to become fixed-rate [2], causing unanticipated changes in interest rate risk for financial institutions. In a declining rate environment, this may prove beneficial as loans at variable rates become fixed. But in a rising rate environment, the resulting shrink in net interest margins would have a direct and adverse impact on the bottom line.

  • No spread adjustment
    Once LIBOR is discontinued, LIBOR-indexed loans will need to be repriced at a new reference rate, which could be well above or below LIBOR. If loan agreements don’t provide for an adjustment of the spread between LIBOR and the new rate, that could prompt unexpected changes in the financial position of both borrowers and lenders [3]. Take, for instance, a loan made at the Secured Overnight Financing Rate (SOFR), generally considered the likely replacement for USD LIBOR. Since SOFR tends to be lower than three-month LIBOR, a loan agreement using it that does not allow for a spread adjustment would generate lower loan payments for the borrower, which means less interest income for the lender.

    Not allowing for a spread adjustment on reference rates lower than LIBOR could also cause a change in expected prepayments—say, for instance, if borrowers with fixed-rate loans decide to refinance at adjustable rates—which would impact post-CECL allowance calculations like the weighted-average remaining maturity (WARM) method, which uses estimated prepayments as an input.
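The effect of a missing spread adjustment can be shown with simple arithmetic. The figures below are hypothetical, chosen only to illustrate the basis between three-month LIBOR and SOFR:

```python
def annual_interest(principal, reference_rate, margin, spread_adjustment=0.0):
    """Annual interest on a floating-rate loan priced at
    reference rate + spread adjustment + contractual margin."""
    return principal * (reference_rate + spread_adjustment + margin)

# Hypothetical figures for illustration only
principal = 1_000_000
libor_3m = 0.0260   # assumed three-month LIBOR
sofr = 0.0240       # assumed SOFR, typically below three-month LIBOR
margin = 0.0200     # contractual margin over the reference rate

income_libor = annual_interest(principal, libor_3m, margin)
# Falling back to SOFR with no spread adjustment cuts the lender's income
income_sofr_flat = annual_interest(principal, sofr, margin)
# A spread adjustment equal to the LIBOR-SOFR basis keeps income whole
income_sofr_adj = annual_interest(principal, sofr, margin,
                                  spread_adjustment=libor_3m - sofr)
```

On these assumed rates, repricing at SOFR without a spread adjustment costs the lender $2,000 of annual interest per $1 million of principal; the adjustment restores parity.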

What can your financial institution do to prepare?

The Federal Reserve and the SEC have urged financial institutions to immediately evaluate their LIBOR exposure and expedite their transition. Though the FDIC has expressed no intent to examine financial institutions for the status of LIBOR planning or critique loans based on use of LIBOR [3], Federal Reserve supervisory teams have been including the LIBOR transition in their regular monitoring of large financial institutions [5]. The SEC has also encouraged companies to provide investors with robust disclosures regarding their LIBOR transition, which may include the notional value of LIBOR exposure [2].

Financial institutions should start by analyzing their LIBOR exposure beyond 2021. If you don’t expect significant exposure, further analysis may be unnecessary. However, if you do expect significant future LIBOR exposure, your institution should conduct stress testing using LIBOR as an isolated variable by running hypothetical transition scenarios and assessing the potential financial impact.

Closely examine and assess fallback language in loan agreements. For existing loan agreements, you may need to make amendments, which could require consent from counterparties [2]. For new loan agreements maturing beyond 2021, lenders should consider selecting an alternate reference rate. New contract language for financial instruments and residential mortgages is currently being drafted by the International Swaps and Derivatives Association and the Federal Housing Finance Agency, respectively [3]—both of which may prove helpful in updating loan agreements.

Lenders should also consider their underwriting policies. Loan underwriters will need to adjust the spread on new loans to accurately reflect the price of risk, because volatility and market tendencies of alternate loan reference rates may not mirror LIBOR’s. What’s more, SOFR lacks abundant historical data for use in analyzing volatility and market tendencies, making accurate loan pricing more difficult.

Conclusion: Start assessing your LIBOR risk soon

The cessation of LIBOR brings challenges and opportunities that will require in-depth analysis and difficult decisions. Financial institutions and consumers should heed the advice of regulators and start assessing their LIBOR risk now. Those that do will not only be better prepared―but also better positioned―to capitalize on the opportunities the transition presents.

Need help assessing your LIBOR risk and preparing to transition? Contact BerryDunn’s financial services specialists.

[1] https://www.washingtonpost.com/business/2017/07/27/acdd411c-72bc-11e7-8c17-533c52b2f014_story.html?utm_term=.856137e72385
[2] Thomson Reuters Checkpoint Newsstand, April 10, 2019
[3] https://www.fdic.gov/regulations/examinations/supervisory/insights/siwin18/si-winter-2018.pdf
[4] https://bankingjournal.aba.com/2019/04/libor-transition-panel-recommends-fallback-language-for-key-instruments/
[5] https://www.reuters.com/article/us-usa-fed-libor/fed-urges-u-s-financial-industry-to-accelerate-libor-transition-idUSKCN1RM25T

Blog
When one loan rate closes, another opens

All teams experience losing streaks, and all franchise dynasties lose some luster. Nevertheless, the game must go on. What can coaches do? The answer: be prepared, be patient, and be PR savvy. Business managers should keep these three P’s in mind as they read Chapter 8 in BerryDunn’s Cybersecurity Playbook for Management, which highlights how organizations can recover from incidents.

In the last chapter, we discussed incident response. What’s the difference between incident response and incident recovery?

RG: Incident response refers to detecting and identifying an incident—and hopefully eradicating the source or cause of the incident, such as malware. Incident recovery refers to getting things back to normal after an incident. They are different sides of the same resiliency coin.

I know you feel strongly that organizations should have incident response plans. Should organizations also have incident recovery plans?

RG: Absolutely. Have a recovery plan for each type of possible incident. Otherwise, how will your organization know if it has truly recovered from an incident? Having incident recovery plans will also help prevent knee-jerk decisions or reactions that could unintentionally cover up or destroy an incident’s forensic evidence.

In the last chapter, you stated managers and their teams can reference or re-purpose National Institute of Standards and Technology (NIST) special publications when creating incident response plans. Is it safe to assume you also suggest referencing or re-purposing NIST special publications when creating incident recovery plans?

RG: Yes. But keep in mind that incident recovery plans should also mesh with, or reflect, any business impact analyses developed by your organization. This way, you will help ensure that your incident recovery plans prioritize what needs to be recovered first—your organization’s most valuable assets.

That said, I should mention that cybersecurity attacks don’t always target an organization’s most valuable assets. Sometimes, cybersecurity attacks simply raise the “misery index” for a business or group by disrupting a process or knocking a network offline.

Besides having incident recovery plans, what else can managers do to support incident recovery?

RG: Similar to what we discussed in the last chapter, managers should make sure that internal and external communications about the incident and the resulting recovery are consistent, accurate, and within the legal requirements for your business or industry. Thus, having a good incident recovery communication plan is crucial. 

When should managers think about bringing in a third party to help with incident recovery?

RG: That’s a great question. I think this decision really comes down to the confidence you have in your team’s skills and experience. An outside vendor can offer a lot of different perspectives, but your internal team knows the business. Still, this is one area where an outside perspective doesn’t hurt, because recovery is so important and we often don’t perceive ourselves as the outside world does.

This decision also depends on the scale of the incident. If your organization is trying to recover from a pretty significant or high-impact breach or outage, you shouldn’t hesitate to call someone. Also, check to see if your organization has cybersecurity insurance. If your organization has cybersecurity insurance, then your insurance company is likely going to tell you whether or not you need to bring in an outside team. Your insurance company will also likely help coordinate outside resources, such as law enforcement and incident recovery teams.

Do you think most organizations should have cybersecurity insurance? 

RG: In this day and age? Yes. But organizations need to understand that, once they sign up for cybersecurity insurance, they’re going to be scrutinized by the insurance company—under the microscope, so to speak—and that they’ll need to take their “cybersecurity health” very seriously.

Organizations need to really pay attention to what they’re paying for. My understanding is that many different types of cybersecurity insurance have very high premiums and deductibles. So, in theory, you could have a $1 million insurance policy, but a $250,000 deductible. And keep in mind that even a simple incident can cost more than $1 million in damages. Not surprisingly, I know of many organizations signing up for $10 million insurance policies. 
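The policy math RG describes can be sketched in a few lines. This is a simplified model with hypothetical figures; real policies add sub-limits, exclusions, and coinsurance:

```python
def out_of_pocket(damages, policy_limit, deductible):
    """Organization's cost after a claim (simplified): the deductible comes
    first, the insurer covers the remainder up to the policy limit, and
    anything beyond limit + deductible falls back on the organization."""
    insurer_pays = max(0.0, min(damages - deductible, policy_limit))
    return damages - insurer_pays

# The $1M-policy / $250K-deductible scenario from the text:
# even a $1M incident still leaves the organization paying its full deductible.
cost = out_of_pocket(damages=1_000_000, policy_limit=1_000_000, deductible=250_000)
```

Running the same model with larger damages shows why organizations look at higher policy limits: everything above limit plus deductible is theirs to absorb.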

How can managers improve internal morale and external reputation during the recovery process?

RG: Well, leadership sets the tone. It’s like in sports—if a coach starts screaming and yelling, then it is likely that the players will start screaming and yelling. So set expectations for measured responses and reactions. 

Check in on a regular basis with your internal security team, or whoever is conducting incident recovery within your organization. Are team members holding up under pressure? Are they tired? Have you pushed them to the point where they are fatigued and making mistakes? The morale of these team members will, in part, dictate the morale of others in the organization.

Another element that can affect morale is—for lack of a better word—idleness resulting from an incident. If you have a department that can’t work due to an incident, and you know that it’s going to take several days to get things back to normal, you may not want department members coming into work and just sitting around. Think about it. At some point, these idle department members are going to grumble and bicker, and eventually affect the wider morale. 

As for improving external reputation? I don’t think it really matters, honestly, because I don’t think most people really, truly care. Why? Because everyone is vulnerable, and attacks happen all the time. At this point in time, cyberattacks seem to be part of the normal course and rhythm of business. Look at all the major breaches that have occurred over the past couple of years. There’s always some immediate, short-term fallout, but there’s been very little long-term fallout. Now, that being said, it is possible for organizations to suffer a prolonged PR crisis after an incident. How do you avoid this? Keep communication consistent—and limit interactions between employees and the general public. One of the worst things that can happen after an incident is for a CEO to say, “Well, we’re not sure what happened,” and then for an employee to tweet exactly what happened. Mixed messages are PR death knells.

Let’s add some context. Can you identify a business or group that, in your opinion, has handled the incident recovery process well?

RG: You know, I can’t, and for a very good reason. If a business or group does a really good job at incident recovery, then the public quickly forgets about the incident—or doesn’t even hear about it in the first place. Conversely, I can identify many businesses or groups that have handled the incident recovery process poorly, typically from a PR perspective.

Any final thoughts about resiliency?

RG: Yes. As you know, over the course of this blog series, I have repeated the idea that IT is not the same as security. These are two different concepts that should be tackled by two different teams—or approached in their appropriate context. Similarly, managers need to remember that resiliency is not an IT process—it’s a business process. You can’t just shove off resiliency responsibilities onto your IT team. As managers, you need to get directly involved with resiliency, just as you need to get directly involved with maturity, capacity, and discovery. 

So, we’ve reached the end of this blog series. Above all else, what do you hope managers will gain from it? 

RG: First, the perspective that to understand your organization’s cybersecurity is to truly understand your organization and its business. And I predict that some managers will be able to immediately improve business processes once they better grasp the cybersecurity environment. Second, the perspective that cybersecurity is ultimately the responsibility of everyone within an organization. Sure, having a dedicated security team is great, but everyone—from the CEO to the intern—plays a part. Third, the perspective that effective cybersecurity is effective communication. A siloed, closed-door approach will not work. And finally, the perspective that cybersecurity is always changing, so it’s a best practice to keep reading and learning about it. Anyone with questions should feel free to reach out to me directly.

Blog
Incident recovery: Cybersecurity playbook for management

Artificial Intelligence, or AI, is no longer the exclusive tool of well-funded government entities and defense contractors, let alone a plot device in science fiction film and literature. Instead, AI is becoming as ubiquitous as the personal computer. The opportunities AI presents for internal audit are almost as endless as the challenges this disruptive technology poses.

To understand how AI will influence internal audit, we must first understand what AI is. The concept of AI—a technology that can perceive the world directly and respond to what it perceives—is often attributed to Alan Turing, the British scientist who developed the machine that cracked the Nazis’ Enigma code, though the term “Artificial Intelligence” was coined much later, in 1956, at Dartmouth College in Hanover, New Hampshire. Turing thought of AI as a machine that could convince a human that it, too, was human. Turing’s humble description of AI is as simple as it is elegant. Fast-forward some 60 years, and AI is all around us, being applied in novel ways almost every day. Just consider self-driving vehicles, facial recognition systems that can spot a fugitive in a crowd, search engines that tailor our online experience, and even Pandora, which analyzes our tastes in music.

Today, in practice and in theory, there are four types of AI. Type I AI may be best represented by IBM’s Deep Blue, a chess-playing computer that made headlines in 1997 when it won a match against Russian chess champion Garry Kasparov. Type I AI is reactive. Deep Blue can beat a chess champion because it evaluates every piece on the chessboard, calculates all possible moves, then predicts the optimal move among all possibilities. Type I AI is really nothing more than a super calculator, processing data much faster than the human mind can. This is what gives Type I AI an advantage over humans.

Type II AI, which we find in autonomous cars, is also reactive. For example, it applies brakes when it predicts a collision; but, it has a low form of memory as well. Type II AI can briefly remember details, such as the speed of oncoming traffic or the distance between the car and a bicyclist. However, this memory is volatile. When the situation has passed, Type II AI deletes the data from its memory and moves on to the next challenge down the road.

Type II AI’s simple form of memory management and its ability to “learn” from the world in which it resides is a significant advancement.

The leap from Type II AI to Type III AI has yet to occur. Type III AI will not only incorporate awareness of the world around it, but will also be able to predict the responses and motivations of other entities and objects, and understand that emotions and thoughts are the drivers of behavior. Taking the autonomous car analogy to the next step, Type III AI vehicles will interact with the driver. By conducting a simple assessment of the driver’s emotions, the AI will be able to suggest a soothing playlist to ease the driver’s tensions during his or her commute, reducing the likelihood of aggressive driving. Lastly, Type IV AI, a milestone that will likely be reached at some point over the next 20 or 30 years, will be self-aware. Not only will Type IV AI soothe the driver, it will interact with the driver as if it were another human riding along for the drive; think of “HAL” in Arthur C. Clarke’s 2001: A Space Odyssey.

So what does this all mean to internal auditors?
While it may be a bit premature to predict AI’s impact on the internal audit profession, AI is already being used to predict control failures in institutions with robust cybersecurity programs. When malicious code is detected and certain conditions are met, AI-enabled devices can either divert the malicious traffic away from sensitive data, or even shut off access completely until an incident response team has had time to investigate the nature of the attack and take appropriate actions. This may seem a rather rudimentary use of AI, but in large financial institutions or manufacturing facilities, minutes count—and equal dollars. Allowing AI to cut off access to a line of business that may cost the company money (and its reputation) is a significant leap of faith, and not for the faint of heart. Next generation AI-enabled devices will have even more capabilities, including behavioral analysis, to predict a user’s intentions before gaining access to data.

In the future, internal audit staff will no doubt train AI to seek conditions that require deeper analysis, or even predict when a control will fail. Yet AI will be able to facilitate the internal audit process in other ways. Consider AI’s role in data quality. Advances in inexpensive data storage (e.g., the cloud) have allowed the creation and aggregation of vast volumes of data subject to internal audit, making testing that data’s completeness, integrity, and reliability a challenging task given its sheer volume. Future AI will be able to continuously monitor this data, alerting internal auditors not only of the status of data both at rest and in motion, but also of potential fraud and disclosures.

The analysis won’t stop there. AI will measure the performance of the data in meeting organizational objectives, and suggest where efficiencies can be gained by focusing technical and human resources where the greatest risks to the organization exist, in near real-time. This will allow internal auditors to develop a common operating picture of the day-to-day activities in their business environments, alerting internal audit when something doesn’t quite look right and requires further investigation.

As promising as AI is, the technology comes with some ethical considerations. Because AI is created by humans, it is not always free of human flaws. For instance, AI can become unpredictably biased. AI used in facial recognition systems has made racial judgments based on certain common facial characteristics. In addition, AI that gathers data from multiple sources spanning a person’s financial status, credit status, education, and individual likes and dislikes could be used to profile certain groups for nefarious purposes. Moreover, AI has the potential to be weaponized in ways that we have yet to comprehend.

There is also the question of how internal auditors will be able to audit AI. Keeping AI safe from internal fraudsters and external adversaries is going to be paramount. AI’s ability to think and act faster than humans will challenge all of us to create novel ways of designing and testing controls to measure AI’s performance. This, in turn, will likely make partnerships with consultants that can fill knowledge gaps even more valuable. 

Challenges and pitfalls aside, AI will likely have a tremendous positive effect on the internal audit profession by simultaneously identifying risks and evaluating processes and control design. In fact, it is quite possible that the first adopters of AI in many organizations may not be the cybersecurity departments at all, but rather the internal auditor’s office. As a result, future internal auditors will become highly technical professionals and perhaps trailblazers in this new and amazing technology.

Blog
Artificial intelligence and the future of internal audit

The world of professional sports is rife with instability and insecurity. Star athletes leave or become injured; coaching staff make bad calls or public statements. The ultimate strength of a sports team is its ability to rebound. The same holds true for other groups and businesses. Chapter 7 in BerryDunn’s Cybersecurity Playbook for Management looks at how organizations can prepare for, and respond to, incidents.

The final two chapters of this Cybersecurity Playbook for Management focus on the concept of resiliency. What exactly is resiliency?

RG: Resiliency refers to an organization’s ability to keep the lights on—to keep producing—after an incident. An incident is anything that disrupts normal operations, such as a malicious cyberattack or an innocent IT mistake.

Among security professionals, attitudes toward resiliency have changed recently. Consider the fact that the U.S. Department of Defense (DOD) has said, in essence, that cyberwarfare is a war it cannot win—because cyberwarfare is so complex and so nuanced. The battlefield changes daily, and the opponents have either a lot of resources or a lot of time on their hands. Therefore, the DOD is placing an emphasis on responding to, and recovering from, incidents, rather than preventing them.

That’s sobering.

RG: It is! And businesses and organizations should take note of this attitude change. Protection, which was once the start and endpoint for security, has given way to resiliency.

When and why did this attitude change occur?

RG: Several years ago, security experts started to grasp just how clever certain nation-states, such as China and Russia, were at using malicious software. If you could point to one significant event, the 2013 Target breach is likely it.

What are some examples of incidents that managers need to prepare for?

RG: Examples range from external breaches and insider threats to instances of malfeasance or incompetence. Different types of incidents lead to the same types of results—yet you can’t take a one-size-fits-all view of incidents. Managers should work with their teams to create incident response plans that reflect the threats associated with their specific line of business. A handful of general incident response plans isn’t going to cut it.

Managers need to work with their teams to develop a specific incident response plan for each specific type of incident. Why? Well, think of it this way: Your response to a careless employee should be different from your response to a malicious employee, for a whole host of legal reasons.

Incident response is not a cookie-cutter process. In fact, it is quite the opposite. This is one of the reasons I highly suggest that security teams include staff members with liberal arts backgrounds. I’m generalizing, but these people tend to be creative. And when you’re responding to incidents, you want people who can look at a problem or situation from a global or external perspective, not just a technical or operational perspective. These team members can help answer questions such as: What does the world see when it looks at our organization? What organizational information might be valuable to, or targeted by, malicious actors? You’ll get some valuable fresh perspectives.

How short or long should the typical incident response plan be?

RG: They can be as short as needed; I often see good incident response plans of no more than three or four pages. However, it is important that incident response plans are task-oriented, so it is clear who does what next. And when people follow an incident response plan, they should physically or digitally check off each activity, then record it.

What system or software do you recommend for recording incidents and responses?

RG: There are all types of help desk software you can use, including free and open source software. I recommend using help desk software with workflow capabilities so your team can assign and track tasks.
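As a minimal illustration of the task-oriented, check-off-and-record workflow RG describes (the task names, assignees, and fields are hypothetical, not a prescribed format):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ResponseTask:
    description: str
    assignee: str
    done: bool = False
    completed_at: Optional[datetime] = None

    def check_off(self) -> None:
        # Record when the step was completed so the response leaves an audit trail
        self.done = True
        self.completed_at = datetime.now(timezone.utc)

plan = [
    ResponseTask("Isolate affected host from the network", assignee="netops"),
    ResponseTask("Preserve disk image for forensics", assignee="security"),
    ResponseTask("Notify Legal and HR", assignee="manager"),
]
plan[0].check_off()
open_tasks = [t.description for t in plan if not t.done]
```

Help desk or workflow software provides the same thing at scale: every activity is assigned, checked off, and timestamped, so "who does what next" is never in doubt.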

Any other tips for developing incident response plans?

RG: First, managers should work with, and solicit feedback from, different data owners and groups within the organization—such as IT, HR, and Legal—when developing incident response plans. If you create these documents in a vacuum, they will be useless.

Second, managers and their teams should take their time and develop the most “solid” incident response plans possible. Don’t rush the process. The effectiveness of your incident response plans will be critical in assessing your organization’s ability to survive a breach. Because of this, you should be measuring your response plans through periodic testing, like conducting tabletop exercises.

Third, keep your organization’s customers in mind when developing these plans. You want to make sure external communications are consistent, accurate, and within the legal requirements for your business or industry. The last thing you want is customers receiving conflicting messages about the incident. This can not only cause unnecessary grief for you, but also an unmeasurable loss of customer confidence.

Are there any decent incident response plans in the public domain that managers and their teams can adapt for their own purposes?

RG: Yes. My default reference is the National Institute of Standards and Technology (NIST). NIST has many special publications that describe the incident response process, how to develop a solid plan, and how to test your plan.

Should organizations have dedicated incident response teams?
RG: Definitely. Larger organizations usually have the resources and ability to staff these teams internally. Smaller organizations may want to consider hiring a reputable third party to act as an incident response team. The key with hiring a third party? Don’t wait until an incident occurs! If you wait, you’re going to panic, and make panic-based decisions. Be proactive and hire a third party on retainer.

That said, even larger organizations should consider hiring a third party on an annual basis to review incident response plans and processes. Why? Because every organization can grow complacent, and complacency kills. A third party can help gauge the strengths and weaknesses of your internal incident response teams, and provide suggestions for general or specific training. A third party can also educate your organization about the latest and greatest cyber threats.

Should managers empower their teams to conduct internal “hackathons” in order to test incident response?

RG: Sure! It’s good practice, and it can be a lot of fun for team members. There are a few caveats. First, don’t call it a “hackathon.” The word can elicit negative reactions from upper management—whose support you really need. Call it “active testing” or “continuous improvement exercises.” These activities allow team members to think creatively, and are opportunities for them to boost their cybersecurity knowledge. Second, be prepared for pushback. Some managers worry that if team members gain more cybersecurity skills, they’ll eventually leave the organization for another, higher-paying job. I think you should be committed to the growth of your team members; it’ll only make your organization more secure.

What are some best practices managers should follow when reporting incidents to their leadership?

RG: Keep the update quick, brief, and to the point. Leave all the technical jargon out, and keep everything in a business context. This way leadership can grasp the ramifications of the event and understand what matters. Be prepared to outline how you’re responding and what actions leadership can take to support the incident response team and protect the business. In the last chapter, I mentioned what I call the General Colin Powell method of reporting, and I suggest using that method when informing leadership. Tell them what you know, what you don’t know, what you think, and what you recommend. Have answers, or at least a plan.

Above all else, don’t scare leadership. If you present them with panic, you’re going to get panic back. Be a calm voice in the storm. Management will respond better, as will your team.

Another thing to keep in mind is different business leaders have different responses to this sort of news. An elected official, for example, might react differently than the CEO of a private company, simply due to possible political fallout. Keep this context in mind when reporting incidents. It can help you craft the message.

How much organization-wide communication should there be about incidents?

RG: That’s a great question, but a tough one to answer. Transparency is good, but it can also unintentionally lead to further incidents. Do you really want to let your whole organization know about an exploitable weakness? Also, employees can spread information about incidents on social media, which can actually lead to the spread of misinformation. If you are in doubt about whether or not to inform the entire organization about an incident, refer to your Legal Department. In general, organization-wide communication should be direct: We’ve had an incident; these are the facts; this is what you are allowed to say on social media; and this is what you’re not allowed to say on social media.

Another great but tough question: When do you tell the public about an incident? For this type of communication, you’re going to need buy-in from various sources: leadership, Legal, HR, and your PR team or external PR partners. You have to make sure the public messaging is consistent. Otherwise, citizens and the media will try to poke holes in your official story. And that can lead to even more issues.

So what’s next?

RG: Chapter 8 will focus on how managers can help their organizations recover from a cybersecurity incident.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Incident response: Cybersecurity playbook for management

Any sports team can pull off a random great play. Only the best sports teams, though, can pull off great plays consistently — and over time. The secret to this lies in the ability of the coaching staff to manage the team on a day-to-day basis, while also continually selling their vision to the team’s ownership. Chapter Six in BerryDunn’s Cybersecurity Playbook for Management looks at how managers can achieve similar success through similar actions.

The title of this chapter is “The Workflow.” What are we talking about today?

RG: In previous chapters, we’ve walked managers through cybersecurity concepts like maturity, capacity, and discovery. Today, we’re going to discuss how you can foster a consistent and repeatable cybersecurity program — the cybersecurity workflow, if you will. And for managers, this is where game planning begins. To achieve success, they need to effectively oversee their team on a day-to-day basis, and continually sell the cybersecurity program to the business leadership for whom they work — the board or CEO.

Let’s dive right in. How exactly do managers oversee a cybersecurity program on a day-to-day basis?

RG: Get out of the way, and let your team do its work. By this point, you should know what your team is capable of. Therefore, you need to trust your team. Yet you should always verify. If your team recommends purchasing new software, have your team explain, in business terms, the reasons for the purchase. Then verify those reasons. Operationalizing tools, for example, can be difficult and costly, so make sure they put together a road map with measurable outcomes before you agree to buy any tools — even if they sound magical!

Second, empower your team by facilitating open dialogue. If your team brings you bad news, listen to the bad news — otherwise, you’ll end up alienating people. Know that your team is going to find things within your organization’s “auditable universe” that are going to make you uncomfortable from a cybersecurity point of view. Nevertheless, you need to encourage your team to share the information, so don’t overreact.

Third, give your team a communication structure that squelches a crisis-mode mentality — “Everything’s a disaster!” In order to do that, make sure your team gives every weakness or issue they discover a risk score, and log the score in a risk register. That way, you can prioritize what is truly important.

Fourth, resolve conflicts between different people or groups on your team. Take, for example, conflict between IT staff and security staff. It is a common issue, as there is natural friction between these groups, so be ready to deal with it. IT is focused on running operations, while security is focused on protecting operations. Sometimes, protection mechanisms can disrupt operations. Therefore, managers need to act as peacemakers between the two groups. Don’t show favoritism toward one group or another, and don’t get involved in nebulous conversations regarding which group has “more skin in the game.” Instead, focus on what is best for your organization from a business perspective. The business perspective ultimately trumps either IT or security concerns.
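The risk-register idea in the third point above can be sketched as follows. The likelihood-times-impact scoring and the sample findings are illustrative assumptions; use whatever scoring model your organization has adopted:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def risk_score(self) -> int:
        # One common convention: score = likelihood x impact
        return self.likelihood * self.impact

def prioritize(register):
    """Highest-scoring findings first, so the team works on what truly matters."""
    return sorted(register, key=lambda f: f.risk_score, reverse=True)

# Hypothetical register entries
register = [
    Finding("Unpatched VPN appliance", likelihood=4, impact=5),
    Finding("Shared admin password", likelihood=3, impact=4),
    Finding("Stale test account", likelihood=2, impact=2),
]
top = prioritize(register)[0]
```

Logging every weakness with a score, rather than announcing each one as a crisis, is what lets a team rank issues calmly instead of treating everything as a disaster.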

Talk about communication for a moment. Managers often come from business backgrounds, while technical staff often come from IT backgrounds. How do you foster clear communication across this divide?

RG: Have people talk in simple terms. Require that everyone on your team use plain language to describe what they know or think. I recommend using what I call the Colin Powell method of reporting:

1. Tell me what you know.
2. Tell me what you don’t know.
3. Tell me what you think.
4. Tell me what you recommend.

When you ask team members questions in personal terms — “Tell me what you know”—you tend to receive easy-to-understand, non-jargon answers.

Something that we really haven’t talked about in this series is cybersecurity training. Do you suggest managers implement regular cybersecurity training for their team?

RG: This is complicated, and my response will likely be a little controversial to many. Yes, most organizations should require some sort of cybersecurity training. But I personally would not invest a lot of time or money into cybersecurity training beyond the basics for most users and specific training for technical staff. Instead, I would plan to spend more money on resiliency — responding to, and recovering from, a cybersecurity attack or incident. (We’ll talk about resiliency more in the next two chapters.) Why? Well, you can train people all day long, but it only takes one person being malicious, or making an innocent mistake, to cause a cybersecurity attack or incident. Let’s look at my point from a different perspective. Pretend you’re the manager of a bank, and you have some money to spend on security. Are you going to spend that money on training your employees how to identify a robber? Or are you going to spend that money on a nice, state-of-the-art vault?

Let’s shift from talking about staff to talking about business leadership. How do managers sell the cybersecurity program to them?

RG: Use business language, not technical language. For instance, a CEO may not necessarily care much about the technical behavior of a specific malware, but they are going to really care about the negative effects that malware can have on the business.

Also, keep the conversation short, simple, and direct. Leadership doesn’t have time to hear about all you’re doing. Leadership wants progress updates and a clear sense of how the cybersecurity program is helping the business. I suggest discussing three to four high-priority security risks, and summarizing how you and your team are addressing those risks.

And always remember that in times of crisis, those who keep a cool head tend to gain the most support. Therefore, when talking to the board or CEO, don’t be the bearer of “doom and gloom.” Be calm, positive, empowering, and encouraging. Provide a solution. And make leadership part of the solution by reminding them that they, too, have cybersecurity responsibilities, such as communicating the value of the cybersecurity program to the organization — internal PR, in other words.

How exactly should a manager communicate this info to leadership? Do you suggest one-on-one chats, reports, or presentations?

RG: This all depends on leadership. You know, some people are verbal learners; some people are visual learners. It might take some trial and error to figure out the best medium for conveying your information, and that’s OK. Remember: cybersecurity is an ongoing process, not a one-and-done event. However, if you are going to pursue the one-on-one chat route, just be prepared, materials-wise. If leadership asks for a remediation plan, then you better have that remediation plan ready to present!

What is one of the biggest challenges that managers face when selling cybersecurity programs to leadership?
RG: One of the biggest challenges is addressing questions about ROI, because there often are no quantifiable financial ROIs for cybersecurity. But organizations have to protect themselves. So the question is, how much money is your organization willing to spend to protect itself? Or, in other words, how much risk can your organization reduce — and does this reduction justify the cost?

One possible way to communicate the value of cybersecurity to leadership is to compare it to other necessary elements within the organization, such as HR. What is the ROI of HR? Who knows? But do you really want your organization to lack an HR department? Think of all the possible logistic and legal issues that could swamp your organization without an HR department. It’s terrifying to think about! And the same goes for cybersecurity.

We’ve talked about how managers should communicate with their team and with business leadership. What about the organization as a whole?
RG
: Sure! Regular email updates are great, especially if you keep them “light,” so to speak. Don’t get into minutia. That said, I also think a little bit of secrecy goes a long way. Organizations need to be aware of, and vigilant toward, insider threats. Loose lips sink ships, you know? Gone are the days when a person works for an organization for 30+ years. Employees come and go pretty frequently. As a result, the concept of company loyalty has changed. So make sure your organization-wide updates don’t give away too much cybersecurity information.

So what’s next?
RG:
Chapter 7 will focus on how managers can help their organizations respond to a cybersecurity attack or incident.

Blog
The workflow: Cybersecurity playbook for management

A professional sports team is an ever-changing entity. To have a general perspective on the team’s fluctuating strengths and weaknesses, a good coach needs to trust and empower their staff to discover the details. Chapter 5 in BerryDunn’s Cybersecurity Playbook for Management looks at how discovery can help managers understand their organization’s ever-changing IT environment. 

What is discovery, and how does it connect to capacity?
RG: Discovery is the process of mapping your organization’s capacity—people, processes, and tools—so you understand what your organization’s IT environment has. In other words, it’s the auditing of your IT environment.

Of course, the most valuable thing within your IT environment, other than the people who access it, is the “thing” that drives your business. Often this thing is data, but it could be proprietary processes or machinery. For the purposes of this blog, we’ll focus on data. Discovery naturally answers questions such as:

• What in our IT environment is important to our business?
• How is it being used?
• Who has access to it, and how can we better protect it? 

How can managers tackle discovery?
RG: First, you need to understand discovery requires accepting the fact that the environment is always evolving. Discovery is not a one-and-done process—it never ends. People introduce new things, like updated software, into IT environments all the time. Your IT environment is an always-shifting playing field. Think of Amazon’s Alexa devices. When someone plugs one into your internal wireless network, they’ve expanded your attack surface by introducing a new device with its own set of vulnerabilities.

Second, you have to define the “auditable universe” by establishing manageable boundaries in direct proportion to your discovery team’s capabilities. I often see solicitations for proposals that ask for discovery of all assets in an IT environment. That could include a headquarters building, 20 satellite offices, and remote workers, and is going to take a long time to assess. I recently heard of a hospital discovering 41,000 internet-connected devices on their network—mostly Internet of Things (IoT) resources, such as heart monitors. Originally, the hospital had only been aware of about one-third of these devices. Keeping your boundaries realistic and manageable can prevent your team from being overwhelmed.

Third, your managers should refrain from getting directly involved with discovery because it’s a pretty technical and time-consuming process. You should task a team to conduct discovery, and provide the discovery team with adequate tools. There are a lot of good tools that can help map networks and manage assets; we’ll talk about them later in this blog. Managers should mainly concern themselves with the results of discovery and trust in the team’s ability to competently map out the IT environment. Remember, the IT environment is always evolving, so even as the results roll in, things are changing.

Who should managers select for the discovery team?
RG: Ideally, various groups of people. For instance, it makes sense for HR staff to conduct the people part of discovery. Likewise, it makes sense for data owners—staff responsible for certain data—to conduct the process part of discovery, and for IT staff to conduct the tool part.

However, I should point out that if you have limited internal resources, then the IT staff can conduct all three parts of discovery, working closely with all stakeholders. IT staff will have a pretty good sense of where data is held within the organization’s IT environment, and they will develop an understanding of what is important to the organization.

Could an organization’s security staff conduct discovery?
RG: Interestingly enough, security staff don’t always have day-to-day interactions with data. They are more focused on overall data protection strategies and tactics. Therefore, it makes more sense to leverage other staff, but the results of discovery (e.g., knowing where data resides, understanding the sensitivity of data) need to be shared with security staff. Ultimately, this knowledge will help security staff better protect your data.

What about hiring external resources to conduct discovery?
RG: It depends on what you’re trying to do. If the goal of discovery is to comply with some sort of regulatory standard or framework, then yes, hiring external resources makes sense. These resources could come in and, using the discovery process, conduct a formal assessment. It may also make sense to hire external resources if you’re short-staffed, or if you have a complex environment with undocumented data repositories, processes, and tools. Yet in each of these scenarios, the external resources will only be able to provide a point-in-time baseline. 

Otherwise, I recommend leveraging your internal staff. An internal discovery team should be able to handle the task if adequately staffed and resourced, and team members will learn a lot in the process. And as discovery never really ends, do you want to have to perpetually hire external resources?

People make up a big part of capacity. Should the discovery team focus on people and their roles in this process?
RG: Yes! It sounds odd that people and their roles are included in discovery, but it is important to know who is using and touching your data. At a minimum, the discovery team needs to conduct background checks. (This is one example of where HR staff need to be part of the discovery process.)

How can the discovery team best map processes?
RG: The discovery team has to review each process with the respective data owner. Now, if you are asking the data owners themselves to conduct discovery, then you should have them illustrate their own workflows. There are various process mapping tools, such as Microsoft Visio, that data owners can use for this.

The discovery team needs to acknowledge that data owners often perform their processes correctly through repetition—the problems or potential vulnerabilities stem from an inherently flawed or insecure process, or having one person in charge of too many processes. Managers should watch out for this. I’ll give you a perfect example of the latter sort of situation. I once helped a client walk through the process of system recovery.

During the process we discovered that the individual responsible for system recovery also had the ability to manipulate database records and to print checks. In theory, that person could have been able to cut themselves a check and then erase its history from the system. That’s a big problem!

Other times, data owners perform their processes correctly, but inadvertently use compromised or corrupted tools, such as free software downloaded from the internet. The discovery team has to identify needed policy and procedure changes to prevent these situations from happening.

Your mention of vulnerable software segues nicely to the topic of tools. How can the discovery team best map the technologies the organization uses?
RG: Technology is inherently flawed. You can’t go a week without hearing about a new vulnerability in a widely used system or application. I suggest researching network scanning tools for identifying hosts within your network; vulnerability testing tools for identifying technological weaknesses or gaps; and penetration testing tools for simulating cyber-attacks to assess cybersecurity defenses.
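To make the idea behind network scanning concrete, here is a minimal Python sketch of the basic building block such tools automate at scale: checking which TCP ports on a host accept connections. This is an illustrative example only, not a substitute for a real scanner, and should only ever be run against hosts you are authorized to test.

```python
import socket


def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False


def scan_ports(host, ports):
    """Return the subset of ports that accept TCP connections."""
    return [p for p in ports if is_port_open(host, p)]
```

A real network scanning tool adds host discovery, service fingerprinting, and parallelism on top of this simple check; vulnerability scanners then match what they find against databases of known weaknesses.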

Let’s assume a manager has tasked a team to conduct discovery. What’s the next step?
RG: If you recall, in the previous blog I discussed the value of adopting a cybersecurity risk register, which is a document used to list the organization’s cybersecurity risks, record required risk mitigation actions, and identify who “owns” the risk. The next step is for your discovery team to start completing the risk register. The manager uses this risk register, and subsequent discussions with the team, to make corresponding business decisions to improve cybersecurity, such as purchasing new tools—and to measure the progress of mitigating any vulnerabilities identified in the discovery process. A risk register can become an invaluable resource planning tool for managers.

For discovery purposes, what’s the best format for a cybersecurity risk register?
RG: There are very expensive programs an organization can use to create a risk register. Some extremely large banking companies use the RSA Archer GRC platform. However, you can build a very simple risk register in Excel. An Excel spreadsheet would work well for small and some mid-sized organizations, but there are other relatively inexpensive solutions available. I say this because managers should aim for simplicity. You don’t want the discovery team getting bogged down by a complex risk register.
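As a concrete starting point, here is a minimal sketch of such a spreadsheet-friendly register built in Python. The columns are hypothetical examples, not a prescribed schema; adapt them to whatever your discovery team actually tracks. The resulting CSV file opens directly in Excel.

```python
import csv
from dataclasses import dataclass, asdict, fields


@dataclass
class Risk:
    # Hypothetical columns; adjust to your organization's needs.
    risk_id: str
    description: str
    owner: str        # who "owns" the risk
    likelihood: str   # e.g., low / medium / high
    impact: str       # e.g., low / medium / high
    mitigation: str   # required risk mitigation action
    status: str       # e.g., open / in progress / closed


def write_register(path, risks):
    """Write the risk register as a CSV file that Excel can open directly."""
    columns = [f.name for f in fields(Risk)]
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=columns)
        writer.writeheader()
        for risk in risks:
            writer.writerow(asdict(risk))
```

Keeping the register this simple means the discovery team spends its time identifying risks, not fighting the tool.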

Finally, what are some discovery resources and reference guides that managers should become familiar with and utilize?
RG: I recommend the National Institute of Standards and Technology (NIST) Special Publication series. They outline very specific and detailed discovery methodologies you can use to improve your discovery process.

So what’s next?
RG: Chapter 6 will focus on synthesizing maturity, capacity, and discovery to create a resilient organization from a cybersecurity point of view.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Discovery: Cybersecurity playbook for management

Over the course of its day-to-day operations, every organization acquires, stores, and transmits Protected Health Information (PHI), including names, email addresses, phone numbers, account numbers, and social security numbers.

Yet the security of each organization’s PHI varies dramatically, as does its need for compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Organizations that meet the definition of a covered entity or business associate under HIPAA must comply with requirements to protect the privacy and security of health information.

Noncompliance can have devastating consequences for an organization, including:

  • Civil violations, with fines ranging from $100 to $50,000 per violation
  • Criminal penalties, with fines ranging from around $50,000 to $250,000, plus imprisonment

All it takes is just one security or privacy breach. As breaches of all kinds continue to rise, this may be the perfect time to evaluate the health of your organization’s HIPAA compliance. To keep in compliance and minimize your risk of a breach, your organization should have:

  • An up-to-date and comprehensive HIPAA security and privacy plan
  • Comprehensive HIPAA training for employees
  • Staff who are aware of all PHI categories
  • Sufficiently encrypted devices and strong password policies
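As one small, concrete example of the last item, here is a sketch of an automated password-policy check. The specific rules below are illustrative assumptions, not HIPAA requirements; your security and privacy plan should define the actual policy.

```python
import re


def meets_policy(password, min_length=12):
    """Check a password against an illustrative policy: minimum length
    plus upper, lower, digit, and symbol character classes."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"\d", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(bool(c) for c in checks)
```

A check like this can be wired into account-provisioning scripts so weak passwords are rejected before they ever protect PHI.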

HIPAA Health Check: A Thorough Diagnosis

If your organization doesn’t have these safeguards in place, it’s time to start preparing for the worst — and undergo a HIPAA health check.

Organizations need to understand what they have in place, and where they need to bolster their practice. Here are a variety of fact-finding methods and tools we recommend, including (but not limited to):

  • Administrative, technical, and physical risk analyses
  • Policy, procedure, and business documentation reviews
  • Staff surveys and interviews
  • IT audits and testing of data security

Once you have diagnosed your organization’s “as-is” status, you need to move your organization toward the “to-be” status — that is, toward HIPAA compliance — by:

  • Prioritizing your HIPAA security and privacy risks
  • Developing tactics to mitigate those risks
  • Providing tools and tactics for security and privacy breach prevention and minimization
  • Creating or updating policies, procedures, and business documents, including a HIPAA security and privacy plan

As each organization is different, there are many factors to consider as you go through these processes, and customize your approach to the HIPAA-compliance needs of your organization.

The Road to Wellness

An ounce of prevention is worth a pound of cure. Don’t let a security or privacy breach jump-start the compliance process. Reach out to us for a HIPAA health check. Contact us if you have any questions on how to get your organization on the road to wellness.

Blog
How healthy is your organization's HIPAA compliance?

With the rise of artificial intelligence, most malware programs are starting to think together. Fortinet recently released a report that highlights some terms we need to start paying attention to:

Bot
A “bot” is an automated program that, in this case, runs against IP addresses to find specific vulnerabilities and exploit them. Once it finds the vulnerability, it has the ability to insert malware such as ransomware or Trojans (a type of malware disguised as legitimate software) into the vulnerable device. These programs adapt to what they find in order to infect a system and then make themselves invisible.

Swarmbot
Now, think about thousands of different bots, attacking one target at the same time. That’s a swarm, or in the latest lingo, a swarmbot. Imagine a swarmbot attacking any available access into your network. This is a bot on steroids.

Hivenet
A “hivenet” is a self-learning cluster of compromised devices that share information and customize attacks. Hivenets direct swarmbots based on what they learn during an attack. They represent a significant advance in malware development, and are now considered by some to be a kind of artificial intelligence. The danger lies in a hivenet’s ability to think during an attack.

Where do they run? Everywhere.
Bots and hives can run on any compromised internet-connected devices. This includes webcams, baby cams, DVRs, home routers, refrigerators, drones, “smart” TVs, and very soon (if not already) mobile phones and tablets. Anything that has an IP address and is not secured is vulnerable.

With some 2.9 billion botnet communications per quarter that we know of, attacks aren’t just theory anymore — they’re inevitable.

Organizations have heating and cooling systems, physical security systems, security cameras and multiple types of devices now accessible from the internet. Even community water, electric and telecommunications systems are vulnerable to attack — if they are accessible.

What can you do? Take care of your business—at home and at work.
At home, how many devices do you own with an IP address? In the era of smart homes, it can add up quickly. Vendors are fast to jump on the “connect from anywhere” bandwagon, but not so fast to secure their devices. How many offered updates to the device’s software in the last year? How would you know? Do any of the products address communications security? If the answer is “none,” you are at risk.

When assessing security at work, all organizations need to consider smart devices and industrial control systems that are Internet accessible, including phone systems, web conferencing devices, heating and cooling systems, fire systems, even elevators. What has an IP address? Vulnerable areas have expanded exponentially in the name of convenience and cost savings. Those devices may turn out to be far more expensive than their original price tag. Remember the Target data breach? A firewall will not be sufficient protection if a compromised vendor has access.

Evaluate the Risks of Internet Accessibility
It may be great if you can see who is ringing your doorbell at home from your office, but only if you are sure you are the only one who can do that. Right now, my home is very “stupid,” and I like it that way. I worry about my wireless garage door opener, but at least someone has to be at my house to compromise it. My home firewall is commercial grade because most small office/home office routers are abysmally insecure, and are easily hacked. Good security costs money.

It may be more convenient for third-party vendors to access your internal equipment from their offices, but how secure are their offices? (There is really no way to know, except by sending someone like me in). Is your organization monitoring outgoing traffic from your network through your firewall? That’s how you discover a compromised device. Someone needs to pay attention to that traffic. You may not host valuable information, but if you have 300 unsecured devices, you can easily become part of a swarm.
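As a sketch of what paying attention to outgoing traffic can mean in practice, the snippet below flags outbound connections to destinations that are not on an allowlist. The log format, field layout, and function name are hypothetical assumptions for illustration; every firewall vendor has its own log schema.

```python
def flag_unexpected_outbound(log_lines, allowed_destinations):
    """Return entries whose destination IP is not on the allowlist.

    Assumes a hypothetical log format: "timestamp src_ip dst_ip dst_port".
    """
    suspicious = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip malformed lines
        timestamp, src_ip, dst_ip, dst_port = parts
        if dst_ip not in allowed_destinations:
            suspicious.append((timestamp, src_ip, dst_ip, dst_port))
    return suspicious
```

Even a crude report like this surfaces the telltale sign of a bot: an internal device quietly talking to an address no one recognizes.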

Be Part of the Solution
Each one of us needs to eliminate or upgrade the devices that can become bots. At home, check your devices and install better security, in the same way you would upgrade locks on doors and windows to deter burglars. Turn off your computers when they are not in use. Ensure your anti-virus software is current on every device that has an operating system. Being small is no longer safe. Every device will matter.

Blog
Swarmbots, hivenets, and other stinging insects

Just as sports teams need to bring in outside resources — a new starting pitcher, for example, or a free agent QB — in order to get better and win more games, most organizations need to bring in outside resources to win the cybersecurity game. Chapter 4 in our Cybersecurity Playbook for Management looks at how managers can best identify and leverage these outside resources, known as external capacity.

In your last blog, you mentioned that external capacity refers to outside resources — people, processes, and tools — you hire or purchase to improve maturity. So let’s start with people. What advice would you give managers for hiring new staff?
RG: I would tell them to search for new staff within their communities of interest. For instance, if you’re in financial services, use the Financial Services Information Sharing and Analysis Center (FS-ISAC) as a resource. If you’re in government, look to the Multi-State Information Sharing and Analysis Center (MS-ISAC). Perhaps more importantly, I would tell managers what NOT to do.

First, don’t get caught up in the certification trap. There are a lot of people out there who are highly qualified on paper, but who don’t have a lot of real-world experience. Make sure you find people with relevant experience.

Second, don’t blindly hire fresh talent. If you need to hire a security strategist, don’t hire someone right out of college just getting started. While they might know security theories, they’re not going to know much about business realities.

Third, vet your prospective hires. Run national background checks on them, and contact their references. While there is a natural tendency to trust people, especially cybersecurity professionals, you need to be smart, as there are lots of horror stories out there. I once worked for a bank in Europe that had hired new security and IT staff. The bank noticed a pattern: these workers would work for six or seven months, and then just disappear. Eventually, it became clear that this was an act of espionage. The bank was ripe for acquisition, and a second bank used these workers to gather intelligence so it could make a takeover attempt. Every organization needs to be extremely cautious.

Finally, don’t try to hire catchall staff. People in management often think: “I want someone to come in and rewrite all of our security policies and procedures, and oversee strategic planning, and I also want them to work on the firewall.” It doesn’t work that way. A security strategist is very different from a firewall technician — the two roles have completely different areas of focus. Security strategists focus on the high-level relationship between business processes and outside threats, not technical operations. Another point to consider: if you really need someone to work on your firewall, look at your internal capacity first. You probably already have staff who can handle that. Save your budget for other resources.

You have previously touched upon the idea that security and IT are two separate areas.
RG
: Yes. And managers need to understand that. Ideally, an organization should have a Security Department and an IT Department. Obviously, IT and Security work hand-in-glove, but there is a natural friction between the two, and that is for good reason. IT is focused on running operations, while security is focused on protecting them. Sometimes, protection mechanisms can disrupt operations or impede access to critical resources.

For example, two-factor authentication slows down the time to access data. This friction often upsets both end users and IT staff alike; people want to work unimpeded, so a balance has to be struck between resource availability and safeguarding the system itself. Simply put, IT sometimes cares less about security and more about keeping end users happy — and while that is important, security is equally important.

What’s your view on hiring consultants instead of staff?
RG
: There are plenty of good security consultants out there. Just be smart. Vet them. Again, run national background checks, and contact their references. Confirm the consultant is bonded and insured. And don’t give them the keys to the kingdom. Be judicious when providing them with administrative passwords, and distinguish them in the network so you can keep an eye on their activity. Tell the consultant that everything they do has to be auditable. Unfortunately, there are consultants who will set up shop and pursue malicious activities. It happens — particularly when organizations hire consultants through a third-party hiring agency. Sometimes, these agencies don’t conduct background checks on consultants, and instead expect the client to.

The consultant also needs to understand your business, and you need to know what to expect for your money. Let’s say you want to hire a consultant to implement a new firewall. Firewalls are expensive and challenging to implement. Will the consultant simply implement the firewall and walk away? Or will the consultant not only implement the firewall, but also teach and train your team in using and modifying the firewall? You need to know this up front. Ask questions and agree, in writing, on the scope of the engagement — before the engagement begins.

What should managers be aware of when they hire consultants to implement new processes?
RG
: Make sure that the consultant understands the perspectives of IT, security, and management, because the end result of a new process is always a business result, and new processes have to make financial sense.

Managers need to leverage the expertise of consultants to help make process decisions. I’ll give you an example. In striving to improve their cybersecurity maturity, many organizations adopt a cybersecurity risk register, which is a document used to list the organization’s cybersecurity risks, record actions required to mitigate those risks, and identify who “owns” the risk. However, organizations usually don’t know best practices for using a risk register. This sort of tool can easily become complex and unruly, and people lose interest when extracting data from a register becomes difficult or time-consuming.

A consultant can help train staff in processes that maximize a risk register’s utility. Furthermore, there’s often debate about who owns certain risks. A consultant can objectively arbitrate who owns each risk. They can identify who needs to do X, and who needs to do Y, ultimately saving time, improving staff efficiency, and greatly improving your chances of project success.

Your mention of a cybersecurity risk register naturally leads us to the topic of tools. What should managers know about purchasing or implementing new technology?
RG
: As I mentioned in the last blog, organizations often buy tools, yet rarely maximize their potential. So before managers give the green light to purchase new tools, they should consider ways of leveraging existing tools to perform more, and more effective, processes.

If a manager does purchase a new tool, they should purchase one that is easy to use. Long learning curves can be problematic, especially for smaller organizations. I recommend managers seek out tools that automate cybersecurity processes, making the processes more efficient.

For example, you may want to consider tools that perform continuous vulnerability scans or that automatically analyze data logs for anomalies. These tools may look expensive at first glance, but you have to consider how much it would cost to hire multiple staff members to look for vulnerabilities or anomalies.
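To make the anomaly idea concrete, here is a deliberately naive sketch that flags hours whose log-event counts deviate sharply from the average. Commercial tools use far more sophisticated models; the function name and threshold here are illustrative assumptions only.

```python
from statistics import mean, stdev


def anomalous_hours(counts_per_hour, threshold_sigmas=3.0):
    """Flag hour indexes whose event count deviates from the mean by
    more than threshold_sigmas standard deviations."""
    if len(counts_per_hour) < 2:
        return []
    mu = mean(counts_per_hour)
    sigma = stdev(counts_per_hour)
    if sigma == 0:
        return []  # perfectly uniform traffic; nothing stands out
    return [i for i, count in enumerate(counts_per_hour)
            if abs(count - mu) > threshold_sigmas * sigma]
```

The point of the example is the economics: a few lines of automation watch every hour of every day, a task that would otherwise consume multiple staff members.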

And, of course, managers should make sure that a new tool will truly improve their organization’s safeguards against cyber-attack. Ask yourself and your staff: Will this tool really reduce our risk?

Finally, managers need to consider eliminating tools that aren’t working or being used. I once worked with an organization that had expensive cybersecurity tools that simply didn’t function well. When I asked why it kept them, I was told that the person responsible for them was afraid that a breach would occur if they were removed. Meanwhile, these tools were costing the organization around $60,000 a month. That’s real money. The lesson: let business goals, and not fear, dictate your technology decisions.

So, what’s next?
RG
: So far in this series we have covered the concepts of maturity and capacity. Next, we’re going to look at the concept of discovery. Chapter 5 will focus on internal audit strategies that you can use to determine, or discover, whether or not your organization is using tools and processes effectively.

Blog
External capacity: Cybersecurity playbook for management

It may be hard to believe some seasons, but every professional sports team currently has the necessary resources — talent, plays, and equipment — to win. The challenge is to identify and leverage them for maximum benefit. And every organization has the necessary resources to improve its cybersecurity. Chapter 3 in BerryDunn’s Cybersecurity Playbook for Management looks at how managers can best identify and leverage these resources, known collectively as internal capacity.

The previous two chapters focused on using maturity models to improve an organization’s cybersecurity. The next two are about capacity. What is the difference, and connection, between maturity and capacity, and why is it important? 
RG: Maturity refers to the “as is” state of an organization’s cybersecurity program compared to its desired “to be” state. Capacity refers to the resources an organization can use to reach the “to be” state. There are two categories of capacity: external and internal. External capacity refers to outside resources — people, processes, and tools — you can hire or purchase to improve maturity. (We’ll discuss external capacity more in our next installment.) Internal capacity refers to in-house people, processes, and tools you can leverage to improve maturity. 

Managers often have an unclear picture of how to use resources to improve cybersecurity. This is mainly because of the many demands found in today's business environments. I recommend managers conduct internal capacity planning. In other words, they need to assess the internal capacity needed to increase cybersecurity maturity. Internal capacity planning can answer three important questions:

1. What are the capabilities of our people?
2. What processes do we need to improve?
3. What tools do we have that can help improve processes and strengthen staff capability?

What does the internal capacity planning process look like?
RG
: Internal capacity planning is pretty easy to conduct, but there’s no standard model. It’s not a noun, like a formal report. It’s a verb — an act of reflection. It’s a subjective assessment of your team members’ abilities and their capacity to perform a set of required tasks to mature the cybersecurity program. These are not easy questions to ask, and the answers can be equally difficult to obtain. This is why you should be honest in your assessment and urge your people to be honest with themselves as well. Without this candor, your organization will spin its wheels reaching its desired “to be” state.

Let’s start with the “people” part of internal capacity. How can managers assess staff?
RG: It’s all about communication. Talk to your staff, listen to them, and get a sense of who has the ability and desire to improve cybersecurity maturity in certain subject areas or domains, like Risk Management or Event and Incident Response. If you work at a small organization, start by talking to your IT manager or director. This person may not have a lot of cybersecurity experience, but he or she will have a lot of operational risk experience. IT managers and directors tend to gravitate toward security because it’s a part of their overall responsibilities. It also ensures they have a voice in the maturing process.

In the end, you need to match staff expertise and skillsets to the maturity subject areas or domains you want to improve. While an effective manager already has a sense of staff expertise and skillsets, you can add a SWOT analysis to clarify staff strengths, weaknesses, opportunities, and threats.

The good news: In my experience, most organizations have staff who will take to new maturity tasks pretty quickly, so you don’t need to hire a bunch of new people.

What’s the best way to assess processes?
RG
: Again, it’s all about communication. Talk to the people currently performing the processes, listen to them, and confirm they are giving you honest feedback. You can have all the talent in the world, and all the tools in the world — but if your processes are terrible, your talent and tools won’t connect. I’ve seen organizations with millions of dollars’ worth of tools without the right people to use the tools, and vice versa. In both situations, processes suffer. They are the connective tissue between people and tools. And keep in mind, even if your current processes are good, most tend to grow stale. Once you assess, you probably need to develop some new processes or improve the ones in place.

How should managers and staff develop new processes?
RG
: Developing new processes can be difficult; we’re talking change, right? As a manager, you have to make sure the staff tasked with developing them are savvy enough to ensure the processes improve your organization’s maturity. Just developing a new process, with little or no connection to maturity, is a waste of time and money. Just because measuring maturity is iterative doesn’t mean your approach to maturing cybersecurity has to be. You need to take a holistic approach across a wide range of cybersecurity domains or subject areas. Avoid any quick, one-and-done processes. New processes should be functional, repeatable, and sustainable; if not, you’ll overburden your team. And remember, it takes time to develop new processes. If you have an IT staff that’s already struggling to keep up with their operational responsibilities, and you ask them to develop a new process, you’re going to get a lot of pushback. You and the IT staff may need to get creative — or look toward outside resources, which we’ll discuss in chapter 4.

What’s the best way to assess tools?
RG: Many organizations buy many tools but rarely maximize their potential. And on occasion, organizations buy tools but never install them. The best way to assess tools is to select staff to first inventory the organization’s tools, and then analyze them to see how they can help improve maturity for a certain domain or subject area. Ask questions: Are we really getting the maximum outputs those tools offer? Are they being used as intended?
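The inventory-then-analyze step might start with a simple catalog like the following sketch. The tool names, fields, and 50% utilization threshold are all invented for illustration:

```python
# Hypothetical tool inventory: names, deployment status, the maturity
# domain each tool should support, and a rough utilization estimate
inventory = [
    {"tool": "SIEM",         "installed": True,  "domain": "Monitoring",      "utilization": 0.40},
    {"tool": "Vuln scanner", "installed": True,  "domain": "Risk Assessment", "utilization": 0.90},
    {"tool": "DLP suite",    "installed": False, "domain": "Data Protection", "utilization": 0.00},
]

# Flag shelfware and underused tools before buying anything new
for t in inventory:
    if not t["installed"]:
        print(f"{t['tool']}: purchased but never installed")
    elif t["utilization"] < 0.5:
        print(f"{t['tool']}: installed but underused ({t['utilization']:.0%})")
```

Even a rough catalog like this surfaces the two problems described above: tools that were never installed, and tools used in narrow ways far below their potential.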

I’ll give you an example. There’s a company called SolarWinds that creates excellent IT management tools. I have found many organizations use SolarWinds tools in very specific, but narrow, ways. If your organization has SolarWinds tools, I suggest reaching out to your IT staff to see if the organization is leveraging the tools to the greatest extent possible. SolarWinds can do so much that many organizations rarely leverage all of its valuable features.

What are some pitfalls to avoid when conducting internal capacity planning?
RG: Don’t assign maturity tasks to people who have been with the organization for a really long time and are very set in their ways, because they may be reluctant to change. As improving maturity is a disruptive process, you want to assign tasks to staff eager to implement change. If you are delegating the supervision of the maturity project, don’t delegate it to a technology-oriented person. Instead, use a business-oriented person. This person doesn’t need to know a lot about cybersecurity — but they need to know, from a business perspective, why you need to implement the changes. Otherwise, your changes will be more technical in nature than strategic. Finally, don’t delegate the project to someone who is already fully engaged on other projects. You want to make sure this person has time to supervise the project.

Is there ever a danger of receiving incorrect information about resource capacity?
RG: Yes, but you’ll know really quickly if a certain resource doesn’t help improve your maturity. It will be obvious, especially when you run the maturity model again. Additionally, there is a danger of staff advocating for the purchase of expensive tools your organization may not really need to manage the maturity process. Managers should insist that staff strongly and clearly make the case for such tools, illustrating how they will close specific maturity gaps.

When purchasing tools, a good rule of thumb is: are you going to get three times the return on investment? Will the tool decrease cost or time by three times, or quantifiably reduce risk by three times? This ties into the larger idea that cybersecurity is ultimately a function of business, not a function of IT. It also conveniently ties into external capacity, the topic for chapter four.
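The three-times rule of thumb can be sketched as a quick check. The dollar figures and the helper name `passes_roi_rule` are hypothetical, purely for illustration:

```python
def passes_roi_rule(projected_benefit: float, total_cost: float,
                    multiplier: float = 3.0) -> bool:
    """Rule-of-thumb check: does the projected benefit (cost or time saved,
    or quantified risk reduction) return at least `multiplier` times the
    tool's total cost? Figures are illustrative, not prescriptive."""
    if total_cost <= 0:
        raise ValueError("total_cost must be positive")
    return projected_benefit >= multiplier * total_cost

# Hypothetical example: a $20,000 tool projected to save $75,000 in analyst time
print(passes_roi_rule(75_000, 20_000))  # True: 75,000 >= 3 * 20,000
```

The hard part in practice is not the arithmetic but quantifying the benefit side honestly, which is exactly why managers should insist staff make that case explicitly.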

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Tapping your internal capacity for better results: Cybersecurity playbook for management

It’s one thing for coaching staff to see the need for a new quarterback or pitcher. Selecting and onboarding this talent is a whole new ballgame. Various questions have to be answered before moving forward: How much can we afford? Are they a right fit for the team and its playing style? Do the owners approve?

Management has to answer similar questions when selecting and implementing a cybersecurity maturity model. Those questions form the basis of this blog, chapter 2 in BerryDunn’s Cybersecurity Playbook for Management.

What are the main factors a manager should consider when selecting a maturity model?
RG: All stakeholders, including management, should be able to easily understand the model. It should be affordable for your organization to implement, and its outcomes achievable. It has to be flexible. And it has to match your industry. It doesn’t make a lot of sense to have an IT-centric maturity model if you’re not an extremely high-tech organization. Ask yourself: what are you and your organization trying to accomplish by implementing maturity modeling? If you are trying to improve the confidentiality of data in your organization’s systems, then the maturity model you select should have a data confidentiality domain or subject area.

Managers should reach out to their peer groups to see which maturity models industry partners and associates use successfully. For example, Municipality A might look at what Municipality B is doing, and think: “How is Municipality B effectively managing cybersecurity for less money than we are?” Hint: there’s a good chance they’re using an effective maturity model. Therefore, Municipality A should probably select and implement that model. But you also have to be realistic, and know certain other factors—such as location and the ability to acquire talent—play a role in effective and affordable cybersecurity. If you’re a small town, you can’t compare yourself to a state capital.

There’s also the option of simply using the Cybersecurity Capability Maturity Model (C2M2), correct?
RG: Right. C2M2, developed by the U.S. Department of Energy, is easily scalable and can be tailored to meet specific needs. It also has a Risk Management domain to help ensure that an organization’s cybersecurity strategy supports its enterprise risk management strategy.

Once a manager has identified a maturity model that best fits their business or organization, how do they implement it?
RG: STEP ONE: get executive-level buy-in. It’s critical that executive management understands why maturity modeling is crucial to an organization's security. Explain to them how maturity modeling will help ensure the organization is spending money correctly and appropriately on cybersecurity. By sponsoring the effort, providing adequate resources, and accepting the final results, executive management plays a critical role in the process. In turn, you need to listen to executive management to know their priorities, issues, and resource constraints. When facilitating maturity modeling, don’t drive toward a predefined outcome. Understand what executive management is comfortable implementing—and what the business or organization can afford.

STEP TWO: Identify leads who are responsible for each domain or subject area of the maturity model. Explain to these leads why the organization is implementing maturity modeling, expected outcomes, and how their input is invaluable to the effort’s success. Generally speaking, the leads responsible for subject areas are very receptive to maturity modeling, because—unlike an audit—a maturity model is a resource that allows staff to advocate their needs and to say: “These are the resources I need to achieve effective cybersecurity.”

STEP THREE: Have either management or these subject area leads communicate the project details to the lower levels of the organization, and solicit feedback, because staff at these levels often have unique insight on how best to manage the details.

STEP FOUR: Get to work. This work will look a little different from one organization to another, because every organization has its own processes, but overall you need to run the maturity model — that is, use the model to assess the organization and discover where it measures up in each subject area or domain. Afterwards, conduct work sessions, collect suggestions and recommendations for reaching specific maturity levels, determine what it’s going to cost to increase maturity, get approval from executive management to spend the money to make the necessary changes, and create a Plan of Action and Milestones (POA&M). Then move forward and tick off each milestone.
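The run-assess-plan loop in that final step can be sketched in miniature. The domains, the 0-3 scale, and the scores below are invented for illustration; a real model such as C2M2 defines maturity levels far more richly than a single number per domain:

```python
# Hypothetical maturity scores per domain on a 0-3 scale
current = {"Risk Management": 1, "Incident Response": 0, "Access Control": 2}
target  = {"Risk Management": 2, "Incident Response": 2, "Access Control": 2}

# Gap analysis: which domains fall short of the approved target level?
gaps = sorted(
    ({"domain": d, "current": current[d], "target": target[d]}
     for d in current if current[d] < target[d]),
    key=lambda g: g["target"] - g["current"],
    reverse=True,
)

# Each gap becomes a POA&M line item to estimate, fund, and track
for g in gaps:
    print(f"{g['domain']}: level {g['current']} -> {g['target']}")
```

Rerunning the same comparison after remediation (as discussed below) is what lets you show executive management, year over year, that the spending closed specific gaps.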

Do you suggest selecting an executive sponsor or an executive steering committee to oversee the implementation?
RG: Absolutely. You just want to make sure the executive sponsors or steering committee members have both the ability and the authority to implement changes necessary for the modeling effort.

Should management consider hiring vendors to help implement their cybersecurity maturity models?
RG: Sure. Most organizations can implement a maturity model on their own, but the good thing about hiring a vendor is that a vendor brings objectivity to the process. Within your organization, you’re probably going to find erroneous assumptions, differing opinions about what needs to be improved, and bias regarding who is responsible for the improvements. An objective third party can help navigate these assumptions, opinions, and biases. Just be aware some vendors will push their own maturity models, because their models require or suggest organizations buy the vendors’ software. While most vendor software is excellent for improving maturity, you want to make sure the model you’re using fits your business objectives and is affordable. Don’t lose sight of that.

How long does it normally take to implement a maturity model?

RG: It depends on a variety of factors and is different for every organization. Keep in mind some maturity levels are fairly easy to reach, while others are harder and more expensive. It goes without saying that well-managed organizations implement maturity models more rapidly than poorly managed organizations.

What should management do after implementation?
RG: Run the maturity model again, and see where the organization currently measures up in each subject area or domain. Do you need to conduct a maturity model assessment every year? No, but you should track the results year over year to confirm improvements are occurring. My suggestion is to conduct a maturity model assessment every three years.

One final note: make sure to maintain the effort. If you’re going to spend time and money implementing a maturity model, then make the changes, and continue to reassess maturity levels. Make sure the process becomes part of your organization’s overall strategic plan. Document and institutionalize maturity modeling. Otherwise, the organization is in danger of losing this knowledge when the people who spearheaded the effort retire or pursue new opportunities elsewhere.

What’s next?
RG: Over the next couple of blogs, we’ll move away from talking about maturity modeling and begin talking about the role capacity plays in cybersecurity. Blog #3 will instruct managers on how to conduct an internal assessment to determine if their organizations have the people, processes, and technologies they need for effective cybersecurity.


Blog
Selecting and implementing a maturity model: Cybersecurity playbook for management

Good Practices Are Not Enough

When it comes to IT security, more than one CEO running a small organization has told me they have really good people taking care of “all that.” These CEOs choose to believe their people perform good practices. That may be true, but who defines good practices, and how and when are they administered? If “security is everyone’s job,” then nobody is responsible for getting specific things done. Good practices require consistency, and consistency requires structure.

From an audit perspective, a control not written down does not exist. Why? Because it can’t be tested, measured, or validated. An IT Auditor can’t assess controls if they were never defined. Verbal instruction carries by far the most risk. “I told him to do that,” doesn’t pass the smell test in court.

Why Does it Matter?

Because it’s not IT’s job to write policies. Their job is to implement IT decisions made by management. They’re not at the right level to make decisions that impact the entire organization. Why should small organizations concern themselves with developing policies and procedures? Here are two very good reasons:

1. Regulatory Requirements
2. Lawsuits

No matter how small your organization, if you have a corporate network (even cloud-based) and you store credit card transactions, personal health information, client financial information or valuable intellectual property, being aware of state and federal regulatory requirements for protecting that information is vital. It is the responsibility of management to research and develop a management framework for addressing risk.

Lawsuits happen when information is stolen and/or employees are terminated for inappropriate activities. If you have no policies that mandate what is and isn’t acceptable, and what the penalties are for violations, a terminated employee may have grounds for a wrongful termination lawsuit. This is yet another reason policy should be written by management, not the IT department.

If confidential data you are responsible for is stolen and clients sue you, standing up in court and saying “We don’t have any written policies or procedures,” is a sure way to have both significant financial losses and a negative impact on your reputation. For a small organization, that could mean going out of business.

Even if data is stolen from a third-party vendor who stores your data, your organization owns the data and is responsible for ensuring the data is secure with the vendor and meets organizational requirements. Do you have a vendor management policy? If you work with vendors, you need one.

Consider, too, that every organization expects to grow its business. The longer management doesn’t pay attention to policies and procedures, the more difficult it becomes to develop and implement them.

Medium and Large Organizations Need to Pay Attention, too

A policy document provides a framework for defining activities and decision-making by everyone in the organization. A policy contains standards for the organization, and outlines penalties for non-performance. The organization’s management team or board of directors must drive their creation.

Policies also maintain accountability in the eyes of internal and external stakeholders. Even the smallest organization wants its customers and employees to have confidence that the organization is protecting important information. By defining the necessary controls for running business operations that address risk and compliance requirements (and reviewing them annually), your management team demonstrates a commitment to good practices.

Procedures are the “How”

Procedures don’t belong in a policy. Departments need to be able to design their own procedures to meet policy requirements and definitions. HR will have procedures for employee privacy and financial information, finance must manage credit card, student, banking or client financial documentation, and IT will need to develop specific technical procedures to document their compliance with policy.

If all those procedures are in a policy, it makes for unwieldy policy documents that management must review and approve. Departments need to change and update their procedures quickly in order to remain effective. For example, a policy may mandate the minimum number of characters in a password, but IT needs to develop the procedures to implement that requirement on many platforms and devices.
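The password example makes the policy/procedure split concrete. A minimal sketch, assuming a hypothetical 12-character minimum; the function name and the platforms mentioned in the comment are illustrative:

```python
# Policy level: the mandated requirement (value here is hypothetical)
MIN_PASSWORD_LENGTH = 12

def meets_length_policy(password: str) -> bool:
    """Procedure-level check enforcing the policy's length requirement.
    Real procedures would differ per platform (Active Directory, Linux
    PAM, SaaS admin consoles) and also cover complexity and lockout."""
    return len(password) >= MIN_PASSWORD_LENGTH

print(meets_length_policy("correct horse battery"))  # True (21 characters)
print(meets_length_policy("hunter2"))                # False (7 characters)
```

Notice that only the single constant comes from policy; everything else is a departmental implementation detail that IT can change without sending the policy back to management for re-approval.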

What is a “Plan” Used For?

Consider that organizations commonly have a Business Continuity Plan as well as an Incident Response Plan. How is a “plan” different from a policy or procedure?

A plan (for example, an Information Security Plan or a Privacy Plan) is a collection of related procedures with a specific focus. I have seen these collections called “programs,” but most organizations use “plan” (plus, the federal government uses that term). The term “program” implies a beginning and an end, and it tends to be a little too generic (think “School Lunch Program”).

Three Ways Not to Develop Policies, Procedures and Plans

1. Getting templates from the Internet. A Google search delivers an overwhelming number of approaches, examples, and material. Policy templates found online may not be applicable to your organization’s purpose, or may require so much editing that they defeat the template’s purpose.

2. Borrowing from organizational peers. This can endlessly replicate one poorly developed approach to documentation.

3. Writing policies and procedures totally focused on meeting one regulatory requirement. This frequently necessitates a total re-write as soon as the next regulation comes along.

Consider the Unique Aspects of Your Organization

What electronic information does your organization consider valuable? During an assessment with a state university, we discovered that the farm research the agriculture school was performing was extremely valuable. While we started out with questions about student health and financial information, the university realized the research data was equally critical. The information might not have federal or state regulations attached to it, but if it is valuable to your organization, you need to protect it. By not taking a one-size-fits-all approach to our assessment, we were able to meet the university’s specific needs.

Multiple Departments or Locations? Standardize.

Whether your organization is a university, non-profit organization, government agency, medical center or business, you frequently have sub-entities. Each sub-entity or location may have different terms for different functions. For example, at a recent engagement for another university, Information Security “Programs,” “Plans” and “Policies” meant different things on different campuses. This caused confusion on the part of all stakeholders. It also showed a lack of cohesion in the approach to security of the university as a whole. Standardizing language is one of the best ways to have everyone in the organization on the same page, even if the documents are unique to a location, agency or site. This makes planning, implementation, and system upgrade projects run more effectively.

Demonstrate Competence

No matter what terms your organization chooses, using consistent terms is a good way to demonstrate a thoughtful approach. Everyone needs to be talking the same language. Having documents that specify management decisions provides assurance to internal and external stakeholders. Good policies, procedures and plans can mean the difference between a manageable crisis and a business failure.

To receive IT security updates, please sign up here.

Blog
Policies, procedures, and plans—defining the language of your organization

Is your organization a service provider that hosts or supports sensitive customer data, such as personal health information (PHI) or personally identifiable information (PII)? If so, you need to be aware of a recent decision by the American Institute of Certified Public Accountants (AICPA) that may affect how your organization manages its systems and data.

In April, the AICPA’s Assurance Executive Committee decided to replace the five Trust Service Principles (TSPs) with Trust Services Criteria (TSC), requiring service organizations to completely rework their internal controls, and present SOC 2 findings in a revised format. This switch may sound frustrating or intimidating, but we can help you understand the difference between the principles and the criteria.

The SOC 2 Today
Service providers design and implement internal controls to protect customer data and comply with certain regulations. Typically, a service provider hires an independent auditor to conduct an annual Service Organization Control (SOC) 2 examination to help ensure that controls work as intended. Among other things, the resulting SOC 2 report assures stakeholders (customers and business partners) the organization is reducing data risk and exposure.

Currently, SOC 2 reports focus on five Trust Services Principles (TSP):

  • Security: Information and systems are protected against unauthorized access, unauthorized disclosure of information, and damage to systems that can compromise the availability, integrity, confidentiality, and privacy of information or systems — and affect the entity's ability to meet its objectives.

  • Availability: Information and systems are available for operation and use to meet the entity's objectives.

  • Processing Integrity: System processing is complete, valid, accurate, timely, and authorized to meet the entity's objectives.

  • Confidentiality: Information designated as confidential is protected to meet the entity's objectives.

  • Privacy: Personal information is collected, used, retained, disclosed, and disposed of to meet the entity's objectives.

New SOC 2 Format
The TSC directly relate to the 17 principles found in the Committee of Sponsoring Organizations of the Treadway Commission (COSO) 2013 Framework for evaluating internal controls, and include additional criteria related to COSO Principle 12. The new TSC are:

  • Control Environment: Emphasis on ethical values, board oversight, authority and responsibilities, workforce competence, and accountability.
  • Risk Assessment: Emphasis on the risk assessment process, how to identify and analyze risks, fraud-related risks, and how changes in risk impact internal controls.
  • Control Activities: Emphasis on how you develop controls to mitigate risk, how you develop technology controls, and how you deploy controls throughout the organization through policies and procedures.
  • Information and Communication: Emphasis on how you communicate information within the organization, and to internal and external parties.
  • Monitoring: Emphasis on how you evaluate internal controls, and how you communicate and address any control deficiencies.

Points of Focus
The AICPA has provided nearly 300 Points of Focus (POF): supporting controls that organizations should consider when addressing the TSC. The POF offer guidance and considerations for controls that address the specifics of the TSC, but they are not required.

What You Need to Do
Organizations now have some work to do to meet the new guidelines. The good news: there’s still plenty of time to make necessary changes. You can use the current TSP format before December 15, 2018. Any SOC 2 report presented after December 15, 2018, must incorporate the new TSC format. The AICPA has provided a mapping spreadsheet to help service organizations move from the TSP to the TSC format.

Contact Chris Ellingwood to learn more about how we can help you gain control of your SOC 2 reporting efforts. 
 

Blog
The SOC 2 update — how will it affect you?

For professional baseball players who get paid millions to swing a bat, going through a slump is daunting. The mere thought of a slump conjures up frustration, anxiety and humiliation, and in extreme cases, the possibility of job loss.

The concept of a slump transcends sports. Just glance at the recent headlines about Yahoo, Equifax, Deloitte, and the Democratic National Committee. Data breaches occur on a regular basis. Like a baseball team experiencing a downswing, these organizations need to make adjustments, tough decisions, and major changes. Most importantly, they need to realize that cybersecurity is no longer the exclusive domain of Chief Information Security Officers and IT departments. Cybersecurity is the responsibility of all employees and managers: it takes a team.

When a cybersecurity breach occurs, people tend to focus on what goes wrong at the technical level. They often fail to see that cybersecurity begins at the strategic level. With this in mind, I am writing a blog series to outline the activities managers need to take to properly oversee cybersecurity, and remind readers that good cybersecurity takes a top-down approach. Consider the series a cybersecurity playbook for management. This Q&A blog — chapter 1 — highlights a basic concept of maturity modeling.

Let’s start with the basics. What exactly is a maturity model?
RG: A maturity model is a framework that assesses certain elements in an organization, and provides direction to improve these elements. There are project management, quality management, and cybersecurity maturity models.

Cybersecurity maturity modeling is used to set a cybersecurity target for management. It’s like creating and following an individual development program. It provides definitive steps to take to reach a maturity level that you’re comfortable with — both from a staffing perspective, and from a financial perspective. It’s a logical road map to make a business or organization more secure.

What are some well-known maturity models that agencies and companies use?
RG: One of the first, and most popular, is the Program Review for Information Security Management Assistance (PRISMA), which is still in use today. Another is the Capability Maturity Model Integration (CMMI) model, which focuses on technology. Then there are some commercial maturity models, such as the Gartner Maturity Model, that organizations can pay to use.

The model I prefer is the Cybersecurity Capability Maturity Model (C2M2), developed by the U.S. Department of Energy. I like C2M2 because it directly maps to the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) compliance, which is a prominent industry standard. C2M2 is easily understandable and digestible, it scales to the size of the organization, and it is constantly updated to reflect the most recent U.S. government standards. So, it’s relevant to today’s operational environment.

Communication is one of C2M2’s strengths. Because there is a mechanism in the model requiring management to engage and support the technical staff, it facilitates communication and feedback at not just the operational level, but at the tactical level, and more significantly, the management level, where well-designed security programs start.

What’s the difference between process-based and capability-based models?
RG: Process-based models focus on performance or technical aspects — for example, how mature are processes for access controls? Capability-based models focus on management aspects — is management adequately training people to manage access controls?

C2M2 combines the two approaches. It provides practical steps your organization can take, both operationally and strategically. Not only does it provide the technical team with direction on what to do on a daily basis to help ensure cybersecurity, it also provides management with direction to help ensure that strategic goals are achieved.

Looking at the bigger picture, what does an organization look like from a managerial point of view?
RG: First, a mature organization communicates effectively. Management knows what is going on in their environment.

Most organizations have very competent staff, but staff members don’t always coordinate with others. I once did some security work for a company that had an insider threat. The insider threat was detected and dismissed from the company, but management didn’t know the details of why or how the situation occurred. Had there been an incident response plan in place (one of the dimensions C2M2 measures), or even some degree of cybersecurity maturity in the company, they would’ve had clearly defined steps to take to handle the insider threat, and management would have been aware from an early stage. When management did find out about the insider threat, it became a much bigger issue than it had to be, and wasted time and resources. At the same time, the insider threat exposed the company to a high degree of risk. Because upper management was unaware, they were unable to make a strategic decision on how to act or react to the threat.

That’s the beauty of C2M2. It takes into account the responsibilities of both technical staff and management, and has a built-in communication plan that enables the team to work proactively instead of reactively, and shares cybersecurity initiatives between both management and technical staff.

Second, management in a mature organization knows they can’t protect everything in the environment — but they have a keen awareness of what is really important. Maturity modeling forces management to look at operations and identify what is critical and what really needs to be protected. Once management knows what is important, they can better align resources to meet particular challenges.

Third, in a mature organization, management knows they have a vital role to play in supporting the staff who address the day-to-day operational and technical tasks that ultimately support the organization’s cybersecurity strategy.

What types of businesses, not-for-profits, and government agencies should practice maturity modeling?
RG: All of them. I’ve been in this industry a long time, and I always hear people say: “We’re too small; no one would take any interest in us.”

I conducted some work for a four-person firm that had been hired by the U.S. military. My company discovered that the firm had suffered a breach, and the four of them couldn’t believe it; they thought they were too small to be breached. It doesn’t matter what the size of your company is: if you have something someone finds valuable, they’re going to try to steal it. Even very small companies should use cybersecurity maturity models to reduce risk and help focus their limited resources on what is truly important. That’s maturity modeling: reducing risk by using approaches that make the most sense for your organization.

What’s management’s big takeaway?
RG: Cybersecurity maturity modeling aligns your assets with your funding and resources. One of the most difficult challenges for every organization is finding and retaining experienced security talent. Because maturity modeling outlines what expertise is needed where, it can help match the right talent to roles that meet the established goals.

So what’s next?
RG: In our next installment, we’ll analyze what a successful maturity modeling effort looks like. We’ll discuss the approach, what the outcome should be, and who should be involved in the process. We’ll discuss internal and external cybersecurity assessments, and incident response and recovery.


Blog
Maturity modeling: Cybersecurity playbook for management
