

Five IT risks everyone should be aware of

09.11.19

Editor’s note: While this blog is not technical in nature, you should read it if you are involved in IT security, auditing, or the management of organizations, and you take part in strategic planning and business activities where compliance and controls must be considered.

As we find ourselves in a fast-moving, strong business growth environment, there is no better time to consider the controls needed to enhance your IT security as you implement new, high-demand technology and software to allow your organization to thrive and grow. Here are five risks you need to address if you want to build or maintain strong IT security.

1. Third-party risk management―It’s still your fault

We rely daily on our business partners and vendors to make the work we do happen. When it comes to IT, third-party vendors are a potential weak link in the information security chain and may expose your organization to risk. And though a data breach may be the fault of a third party, you are still responsible for it. A breach or exposure of customer information can leave you owing customers and clients answers and explanations you may not have.

Though software as a service (SaaS) providers, along with other third-party IT services, have been around for well over a decade now, we still neglect our businesses by not considering and addressing third-party risk. These third-party providers likely store, maintain, and access company data, which could contain your customers' personally identifiable information (names, Social Security numbers, dates of birth, addresses), financial information (credit card or banking information), and healthcare information.

While many third-party providers have comprehensive security programs in place to protect that sensitive information, a 2017 study found that 30% of data breaches were caused by employee error or occurred while data was under the control of third-party vendors.1 This reemphasizes that when data leaves your control, it is at risk of exposure.

In many cases, procurement and contracting policies already include contract language that establishes IT security requirements for third parties; however, those requirements are often not enforced, and the supporting documentation is collected, put in a file, and never reviewed. What can you do about it?

Improved vendor management

It is paramount that all organizations (no matter their size) have a comprehensive vendor management program in place that goes beyond contracting requirements to defend against third-party risk. Such a program includes:

  1. An inventory of all third parties used, with a criticality and risk ranking for each. Criticality should be assigned using a "critical, high, medium, or low" scoring matrix (the sketch after this list shows one way to structure such an inventory).
  2. At the time of onboarding or RFP, a standardized approach for evaluating whether potential vendors have sufficient IT security controls in place. This may be done through an IT questionnaire, review of a System and Organization Controls (SOC) report or other audits/certifications, and/or a policy review. Additional research may be conducted that focuses on management and the company's financial stability.
  3. Based on the steps in #2, a vendor risk assessment using a high, medium, and low scoring approach. Higher-risk vendors should have specific concerns addressed in contracts and be subject to more in-depth annual due diligence procedures.
  4. Annual reporting to senior management and/or the board on the vendors used by the organization, the services they perform, their risk, and the ways the organization monitors them.
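
To make the inventory and risk ranking concrete, here is a minimal sketch of how a vendor register and criticality rule might be expressed. It is illustrative only; the vendors, attributes, scoring weights, and thresholds are hypothetical placeholders that would need to be replaced with your own criticality definitions.

```python
# Minimal sketch of a vendor inventory with a documented criticality rule.
# All vendors, attributes, and scoring thresholds below are hypothetical examples.

VENDORS = [
    {"name": "Payroll SaaS",    "data_access": "pii",  "business_impact": 5, "soc_report": True},
    {"name": "Email Marketing", "data_access": "pii",  "business_impact": 2, "soc_report": False},
    {"name": "Office Supplies", "data_access": "none", "business_impact": 1, "soc_report": False},
]

def criticality(vendor):
    """Assign a critical/high/medium/low ranking from data access and business impact."""
    score = vendor["business_impact"] + (3 if vendor["data_access"] != "none" else 0)
    if not vendor["soc_report"]:
        score += 1  # no independent audit evidence raises the ranking
    if score >= 7:
        return "critical"
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

for v in VENDORS:
    print(f'{v["name"]:20} {criticality(v)}')
```

The particular weights matter less than the principle: criticality is assigned by a documented, repeatable rule rather than ad hoc judgment, so rankings are comparable across vendors and over time.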

2. Regulation and privacy laws―They are coming 

2018 saw the implementation of the European Union’s General Data Protection Regulation (GDPR), the first major data privacy law to reach any organization that possesses, handles, or has access to EU citizens' personal information. Enforcement has started, and the Information Commissioner’s Office has begun fining some of the world’s most famous companies, including substantial fines to Marriott International and British Airways of approximately $125 million and $183 million, respectively.2 Gone are the days when regulations lacked the teeth to force companies into compliance.

Thanks in part to other major data breaches in which hundreds of millions of consumers' private information was lost or stolen (e.g., Experian), more regulation is coming. Although there is little expectation of a federal US requirement for data protection, individual states and other regulating organizations are introducing requirements. Each new regulation seeks to protect consumer privacy, but the specifics and enforcement of each differ.

Expected to be most impactful in 2019 is the California Consumer Privacy Act, which applies to organizations that handle, collect, or process consumer information and do business in the state of California (you do not have to be located in California to fall under its enforcement umbrella).

In 2018, Maine passed the toughest law in the country restricting telecommunications providers from selling consumer information. Massachusetts’ long-standing privacy and data breach laws were amended with stronger requirements in January 2019. Additional privacy and breach laws are in discussion or on the table in many states, including Colorado, Delaware, Ohio, Oregon, Vermont, and Washington, amongst others.

Preparation and awareness are key

All organizations, no matter their line of business, must be aware of and understand current laws and proposed legislation. New laws are expected to address not only the protection of customer data, but also employee information. All organizations should monitor proposed legislation and be aware of the potential enforceable requirements. The good news is that there are many resources available and, in most cases, new laws include grace periods that give organizations time to develop a complete understanding of the requirements and implement needed controls.

3. Data management―Time to cut through the clutter 

We all work with people who have thousands of emails in their inbox (in some cases, dating back several years). Those users’ biggest fears may start to come to fruition―that their “organizational” approach of not deleting anything may come to an end with a simple email and data retention policy put in place by their employer. 

The amount of data we generate in a day is massive. Forbes estimates that we generate 2.5 quintillion bytes of data each day and that 90% of all the world’s data was generated in the last two years alone.3 While data is a gold mine for analytics and market research, it is also an increasing liability and security risk. 

Inc. Magazine reports that 73% of the data available to us goes unused.4 Within that data could be personally identifiable information (such as Social Security numbers, names, and addresses); financial information (bank accounts, credit cards, etc.); and/or confidential business data. That data is valuable to hackers and corporate spies, and in many cases its existence and location are unknown to the organizations that hold it.

In addition to the security risk all this data poses, it may also expose an organization to liability in the event of a lawsuit or investigation. Emails and other communications are a favorite target of subpoenas and investigations and should be deleted within your defined retention period, often 90 days (including deleted items folders).

Take an inventory before you act

Organizations should first complete a full data inventory to understand what types of data they maintain and handle, and where and how they store that data. Next, organizations can develop a data retention policy that meets their needs. Moving older data to backup or archival storage media may reduce the need to store and maintain large amounts of data on internal systems.
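
In support of the data inventory, it can help to know where old, unused files accumulate. The following is a minimal sketch, assuming a POSIX-style file system and a hypothetical seven-year threshold; it only reports candidates for review against your retention policy and deletes nothing.

```python
# Minimal sketch: report files older than a retention threshold so they can be
# reviewed against the organization's data retention policy. Nothing is deleted.
import os
import sys
import time

RETENTION_DAYS = 7 * 365          # hypothetical threshold; set from your own policy
CUTOFF = time.time() - RETENTION_DAYS * 86400

def stale_files(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < CUTOFF:
                    yield path
            except OSError:
                continue  # skip files we cannot stat

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in stale_files(root):
        print(path)
```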

4. Doing the basics right―The simple things work 

Across industries and regardless of organization size, the most common problem we see is the absence of basic IT security controls. Every organization, no matter its size, should work to ensure it has controls in place. Some must-haves:

  • Established IT security policies
  • Routine, monitored patch management practices for all servers and workstations (a minimal patch-status check is sketched after this list)
  • Change management controls (for both software and hardware changes)
  • Anti-virus/malware on all servers and workstations
  • Specific IT security risk assessments 
  • User access reviews
  • System logging and monitoring 
  • Employee security training
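
To make one of these basics concrete, patch management starts with knowing what is pending on each system. The sketch below is a minimal illustration that assumes Debian/Ubuntu servers where the apt package manager is available; other platforms have their own equivalents, and real patch management also covers scheduling, testing, and verification.

```python
# Minimal sketch: list packages with pending updates on a Debian/Ubuntu host,
# as one input to a routine, monitored patch management process.
import subprocess

def pending_updates():
    # "apt list --upgradable" prints one upgradable package per line after a header.
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=False,
    )
    lines = result.stdout.splitlines()
    return [line.split("/")[0] for line in lines if "/" in line]

if __name__ == "__main__":
    packages = pending_updates()
    print(f"{len(packages)} packages with pending updates")
    for name in packages:
        print(" -", name)
```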

Go back to the basics 

We often see organizations that focus on new and emerging technologies but have not taken the time to put basic security controls in place. Simple deterrents help thwart hackers. I often tell my clients that a locked car scares away most ill-willed people, but a thief can still smash the window.

Smaller organizations that are not able to implement basic IT security measures in-house can consider using third-party security providers. In our experience, small organizations are held to the same data security and privacy expectations by their customers as larger competitors, and they need to be able to provide assurance that controls are in place.

5. Employee retention and training 

Unemployment rates are at an all-time low, and the demand for IT security experts at an all-time high. In fact, Monster.com reported that in 2019 the unemployment rate for IT security professionals is 0%.5 

Organizations should be highly focused on employee retention and training to keep current employees up-to-speed on technology and security trends. One study found that only 15% of IT security professionals were not looking to switch jobs within one year.6  

Surprisingly, money is not the top factor for turnover―68% of respondents prioritized working for a company that takes their opinions seriously.6 

For years we have told our clients they need to create and foster a culture of security from the top down, and that IT security must be considered more than just an overhead cost. It needs to align with overall business strategy and goals. Organizations need to create designated roles and responsibilities for security that provide your security personnel with a sense of direction―and the ability to truly protect the organization, their people, and the data. 

Training and support go a long way

Offering training to security personnel allows them to stay abreast of current topics, but it also shows those employees you value their knowledge and the work they do. You need to train technology workers to be aware of new threats, and on techniques to best defend and protect from such risks. 

Reducing turnover rate of IT personnel is critical to IT security success. Continuously having to retrain and onboard employees is both costly and time-consuming. High turnover impacts your culture and also hampers your ability to grow and expand a security program. 

Making the effort to empower and train all employees is a powerful way to demonstrate your appreciation and support of the employees within your organization—and keep your data more secure.  

We can help

Considering the risks above and ensuring you have a stable, established IT security program in place will help your organization adapt to technology changes and create more than just an IT security program: a culture of security-minded employees.

Our team of security and control experts can help your organization create and implement the controls needed to address emerging IT risks. For more information, contact the team.
 

Sources:
[1] https://iapp.org/news/a/surprising-stats-on-third-party-vendor-risk-and-breach-likelihood/  
[2] https://resources.infosecinstitute.com/first-big-gdpr-fines/
[3] https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/#458b58860ba9
[4] https://www.inc.com/jeff-barrett/misusing-data-could-be-costing-your-business-heres-how.html
[5] https://www.monster.com/career-advice/article/tech-cybersecurity-zero-percent-unemployment-1016
[6] https://www.securitymagazine.com/articles/88833-what-will-improve-cyber-talent-retention


Is your organization a service provider that hosts or supports sensitive customer data (e.g., protected health information (PHI) or personally identifiable information (PII))? If so, you need to be aware of a recent decision by the American Institute of Certified Public Accountants (AICPA) that may affect how your organization manages its systems and data.

In April, the AICPA’s Assurance Services Executive Committee decided to replace the five Trust Services Principles (TSP) with Trust Services Criteria (TSC), requiring service organizations to completely rework their internal controls and present SOC 2 findings in a revised format. This switch may sound frustrating or intimidating, but we can help you understand the difference between the principles and the criteria.

The SOC 2 Today
Service providers design and implement internal controls to protect customer data and comply with certain regulations. Typically, a service provider hires an independent auditor to conduct an annual Service Organization Control (SOC) 2 examination to help ensure that controls work as intended. Among other things, the resulting SOC 2 report assures stakeholders (customers and business partners) the organization is reducing data risk and exposure.

Currently, SOC 2 reports focus on five Trust Services Principles (TSP):

  • Security: Information and systems are protected against unauthorized access, unauthorized disclosure of information, and damage to systems that can compromise the availability, integrity, confidentiality, and privacy of information or systems — and affect the entity's ability to meet its objectives.

  • Availability: Information and systems are available for operation and use to meet the entity's objectives.

  • Processing Integrity: System processing is complete, valid, accurate, timely, and authorized to meet the entity's objectives.

  • Confidentiality: Information designated as confidential is protected to meet the entity's objectives.

  • Privacy: Personal information is collected, used, retained, disclosed, and disposed of to meet the entity's objectives.

New SOC 2 Format
The TSC directly relate to the 17 principles found in the Committee of Sponsoring Organizations of the Treadway Commission (COSO) 2013 Framework for evaluating internal controls, and include additional criteria related to COSO Principle 12. The new TSC are:

  • Control Environment: Emphasis on ethical values, board oversight, authority and responsibilities, workforce competence, and accountability.
  • Risk Assessment: Emphasis on the risk assessment process, how to identify and analyze risks, fraud-related risks, and how changes in risk affect internal controls.
  • Control Activities: Emphasis on how you develop controls to mitigate risk, how you develop technology controls, and how you deploy controls across the organization through policies and procedures.
  • Information and Communication: Emphasis on how you communicate information within the organization and to internal and external parties.
  • Monitoring: Emphasis on how you evaluate internal controls and how you communicate and address any control deficiencies.

Points of Focus
The AICPA has also provided nearly 300 Points of Focus (POF), supporting considerations and example controls that organizations should weigh when addressing the TSC. The POF offer guidance for controls that address the specifics of the TSC, but they are not required.

Organizations now have some work to do to meet the guidelines. The good news: there’s still plenty of time to make necessary changes. You can use the current TSP format before December 15, 2018. Any SOC 2 report presented after December 15, 2018, must incorporate the new TSC format. The AICPA has provided a mapping spreadsheet to help service organizations move from the TSP to the TSC format.

Contact Chris Ellingwood to learn more about how we can help you gain control of your SOC 2 reporting efforts. 
 

Blog
The SOC 2 update — how will it affect you?

As the technology we use for work and at home becomes increasingly intertwined, security issues that affect one also affect the other and we must address security risks at both levels.

This year’s top security risks are the first in our series that are relevant both to us as consumers of technology and to us as business owners and security administrators. Our homes and offices connect to devices, referred to as the Internet of Things (IoT), that make our lives and jobs easier and more efficient, but securing those devices from outside access is becoming paramount to IT security.

Many of this year’s risks focus on deception. Through deception, hackers can get information and access to systems, which can harm our wallets and our businesses.

In our 2017 Top 10 IT Security Risks e-book we share with you how to understand these emerging risks, the consequences and impacts these risks may have on your business, and approaches to help mitigate the risks and their impact. Some of the key ways to address these risks are:           

  1. Do your homework — change your default passwords (the one that came with your wireless router, for instance), and make sure that your Amazon Alexa, Google Home, or other smart devices have complex passwords. In addition, turning off devices when they are not in use, or when you are gone, helps secure your home.
  2. If you work from home, or have employees who do, set up and use secure connections with dual (multi-factor) authentication to help protect your networks (a minimal sketch follows this list). Remote employees should be required to use the same security measures as on-site employees.
  3. Protect your smartphone at work and at play—smartphones have become one of our most important possessions, and we use the same device for both work and personal applications, yet we don’t protect them as well as we should. Password protection is step one. Consider installing antivirus software on corporate smartphones and using container apps for corporate email and documents. These apps allow users to securely connect to a company’s server and reduce the possible exposure of data.
  4. Train, inform, repeat. Create a vigilant workforce—through continuous and consistent training and information sharing, you can reduce the occurrence of phishing, hacking, and other attacks against your systems.
  5. Conduct IT security risk assessments annually to help you identify gaps, fix them, and prepare for any incidents that may occur.
  6. Monitor and protect your reputation through tools that identify news about your company and help you understand the sources of the information.
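
As a small illustration of the dual (multi-factor) authentication mentioned in item 2, the sketch below verifies a time-based one-time password (TOTP) as a second factor using the third-party pyotp library. This is for illustration only; in practice your VPN, identity provider, or remote access gateway typically handles this for you.

```python
# Minimal sketch of time-based one-time password (TOTP) verification as a
# second authentication factor. Requires the third-party "pyotp" package.
import pyotp

# In practice the shared secret is generated once at enrollment and stored
# securely per user; it is generated here only for demonstration.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning secret (enroll this in an authenticator app):", secret)
print("Current code:", totp.now())

# At login, verify the code the user typed alongside their password.
user_code = totp.now()  # stand-in for user input
print("Second factor accepted:", totp.verify(user_code, valid_window=1))
```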

Our 2017 Top 10 IT Security Risks takes a deeper look at the IoT and other risk issues that pose a threat this year, and what you can do to minimize your own and your organization’s IT security risks.

Blog
The 2017 top IT security risks: Everything is connected

During my lunch in sunny Florida while traveling for business, enjoying a nice reprieve from another cold Maine winter, I checked my social media account. I noticed several postings about people having nothing to do at work because their company’s systems were down, the result of a major outage at one of Amazon Web Services’ (AWS) data centers and web hosting operations. Company sites were down, either directly or indirectly, through a software as a service (SaaS) provider hosted at the AWS data center.

The crash lasted four hours and affected hundreds of thousands of sites, including Airbnb, Expedia, Netflix, Quora, Slack, and others. The impact of such crashes can be devastating to organizations that rely on their websites for revenue, such as online retailers, and to users of SaaS providers that rely on a hosted system to conduct day-to-day business.

We advise clients who are considering hosting services in the cloud to weigh the option seriously and understand the potential challenges of doing so. Here are some steps you can take to reduce the impact of future outages and the loss of valuable uptime:

  1. Know the risks and weigh them against the benefits. Ask questions about the system you are thinking of having hosted. Is the system critical to the business? Without the system, do you lose revenue and productivity? Is the company providing the SaaS service hosting its own systems, or are they hosted at a data center like AWS? Does the SaaS provider have failover sites at other, separate data centers that are geographically distant from one another?
  2. Have a backup plan. If your business conducts e-commerce or needs a SaaS service to function, consider hosting your web servers and other data at two different providers (a minimal monitoring sketch follows this list). Though costly, this greatly reduces the impact of downtime.
  3. Consider hosting it yourself. In some cases, we advise against relying on a third-party hosted data center. We do this when the criticality of the function is so high that having your own full-time dedicated support personnel, with multiple internet service providers available, allows you to address outages in-house and reduce their likelihood.
  4. Have a service level agreement. Having a service level agreement with the hosted third party establishes expectations for uptime and downtime. In many instances where uptime is critical, you may consider incorporating liquidated damage clauses (fines and penalties) for downtime. Often when revenue is involved, the hosted party will take deeper measures to ensure uptime.
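
As a small illustration of the backup plan in step 2, the sketch below checks a primary and a secondary endpoint and reports which is reachable. It uses the third-party requests library, and the URLs are placeholders; real failover (DNS changes, load balancer rules, data replication) is considerably more involved.

```python
# Minimal sketch: check a primary and a secondary (backup) endpoint so you know
# whether failing over is even possible. Uses the third-party "requests" package.
import requests

ENDPOINTS = {
    "primary":   "https://www.example.com/health",    # placeholder URLs
    "secondary": "https://backup.example.com/health",
}

def check(url, timeout=5):
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    status = {name: check(url) for name, url in ENDPOINTS.items()}
    for name, ok in status.items():
        print(f"{name}: {'up' if ok else 'DOWN'}")
    if not status["primary"] and status["secondary"]:
        print("Primary is down; initiate failover to the secondary provider.")
```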

These types of outages are rare but significant, and while most organizations should not be scrambling to host their own systems and cancel all hosted agreements, it’s a good idea to take a hard look at your cybersecurity and IT risk management plan. Then, like me, when the clouds clear and you are in warm and sunny Florida, you can take a long lunch and enjoy the day.

Blog
When the skies clear: Web-hosting outage hits Amazon data centers

Editor’s note: If you are a higher education CFO, CIO, CTO or other C-suite leader, this blog is for you.

The Gramm-Leach-Bliley Act (GLBA) has been in the news recently as the Federal Trade Commission (FTC) has agreed to extend a deadline for public comment regarding proposed changes to the Safeguards Rule. Here’s what you need to know.

GLBA, also known as the Financial Modernization Act, is a 1999 federal law providing rules to financial institutions for protecting consumer information. Colleges and universities fall under this act because they conduct financial activities (e.g., administration of financial aid, loans, and other financial services).

Under the Safeguards Rule, financial institutions must develop, implement, and maintain a comprehensive information security program that consists of safeguards to handle customer information.

Proposed changes

The FTC is proposing five modifications to the Safeguards Rule. The proposed changes would:

  • Provide more detailed guidance to impacted institutions regarding how to develop and implement specific aspects of an overall information security program.
  • Improve the accountability of an institution’s information security programs.
  • Exempt small businesses from certain requirements.
  • Expand the definition of “financial institutions” to include entities engaged in activities that the Federal Reserve Board determines to be incidental to financial activities.
  • Propose to include the definition of “financial institutions” and related examples in the rule itself rather than cross-reference them from a related FTC rule (Privacy of Consumer Financial Information Rule).

Potential impacts for your institution

The notice of proposed changes, published in the Federal Register (Volume 84, Number 65), would, once approved by the FTC, add more prescriptive rules that could have a significant impact on your institution. For example, these rules would require institutions to:

  1. Expand existing security programs with additional resources.
  2. Produce additional documentation.
  3. Create and implement additional policies and procedures.
  4. Offer various forms of training and education for security personnel.

The proposed rules could require institutions to increase their commitment of time and staffing, and may create hardships for institutions with limited resources.

Prepare now

While these changes are not final and the FTC is requesting public comment, here are some things you can do to prepare for these potential changes:

  • Evaluate whether your institution is compliant with the current Safeguards Rule.
  • Identify gaps between current status and proposed changes.
  • Perform a risk assessment.
  • Ensure there is an employee designated to lead the information security program.
  • Monitor the FTC site for final Safeguards Rule updates.

In the meantime, reach out to us if you would like to discuss the impact GLBA will have on your institution or if you would like assistance with any of the recommendations above. You can view a comprehensive list of potential changes here.

Source: Federal Trade Commission. Safeguards Rule. Federal Register, Vol. 84, No. 65. FTC.gov. April 4, 2019. https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/safeguards-rule

Blog
Higher ed: GLBA is the new four-letter word, but it's not as bad as you think

In light of the recent cyberattacks in higher education across the US, more and more institutions are finding themselves no longer immune to these activities. Security by obscurity is no longer an effective approach—all  institutions are potential targets. Colleges and universities must take action to ensure processes and documentation are in place to prepare for and respond appropriately to a potential cybersecurity incident.

BerryDunn’s Rick Gamache recently published several blog articles on incident response that are relevant to the recent cyberattacks. Below I have provided several of his points tailored to higher education leaders to help them prepare for cybersecurity incidents at their institutions.

What are some examples of incidents that managers need to prepare for?

Examples range from external breaches and insider threats to instances of malfeasance or incompetence. Different types of incidents can lead to the same types of results, yet you can’t rely on a broad, generic view of incidents. Managers should work with their teams to create incident response plans that reflect the threats associated with higher education institutions. A handful of general incident response plans isn’t going to cut it.

Managers need to work with their teams to develop a specific incident response plan for each specific type of incident. Why? Well, think of it this way: Your response to a careless employee should be different from your response to a malicious employee, for a whole host of legal reasons. Incident response is not a cookie-cutter process. In fact, it is quite the opposite. This is one of the reasons I highly suggest security teams include staff members outside of IT. When you’re responding to incidents, you want people who can look at a problem or situation from an external perspective, not just a technical or operational perspective within IT. These team members can help answer questions such as, what does the world see when they look at our institution? What institutional information might be valuable to, or targeted by, malicious actors? You’ll get some valuable fresh perspectives.

How short or long should the typical incident response plan be?

I often see good incident response plans no more than three or four pages in length. However, it is important that incident response plans are task oriented, so that it is clear who does what next. And when people follow an incident response plan, they should physically or digitally check off each activity, then record each activity.

What system or software do you recommend for recording incidents and responses?

There are all types of help desk software you can use, including free and open source software. I recommend using help desk software with workflow capabilities, so your team can assign and track tasks.

Any other tips for developing incident response plans?

First, managers should work with, and solicit feedback from, people across the academic and administrative areas within the institution when developing incident response plans. If you create these documents in a vacuum, they will be useless.

Second, managers and their teams should take their time and develop the most “solid” incident response plans possible. Don’t rush the process. The effectiveness of your incident response plans will be critical in assessing your institution’s ability to survive a breach. Because of this, you should be measuring your response plans through periodic testing, like conducting tabletop exercises.

Third, keep your students and external stakeholders in mind when developing these plans. You want to make sure external communications are consistent, accurate, and within the legal requirements for your institution. The last thing you want is students and stakeholders receiving conflicting messages about the incident. 

Are there any decent incident response plans in the public domain that managers and their teams can adapt for their own purposes?

Yes. My default reference is the National Institute of Standards and Technology (NIST). NIST has many special publications that describe the incident response process, how to develop a solid plan, and how to test your plan.

Should institutions have dedicated incident response teams?

Definitely. Institutions should identify and staff teams using internal resources. Some institutions may want to consider hiring a reputable third party to act as an incident response team. The key with hiring a third party? Don’t wait until an incident occurs! If you wait, you’re going to panic, and make panic-based decisions. Be proactive and hire a third party on retainer.

That said, institutions should consider hiring a third party on an annual basis to review incident response plans and processes. Why? Because every institution can grow complacent, and complacency kills. A third party can help gauge the strengths and weaknesses of your internal incident response teams, and provide suggestions for general or specific training. A third party can also educate your institution about the latest and greatest cyber threats.

Should managers empower their teams to conduct internal “hackathons” in order to test incident response?

Sure! It’s good practice, and it can be a lot of fun for team members. There are a few caveats. First, don’t call it a hackathon. The word can elicit negative or concerned reactions. Call it “active testing” or “continuous improvement exercises.” These activities allow team members to think creatively, and are opportunities for them to boost their cybersecurity knowledge. Second, be prepared for pushback. Some managers worry if team members gain more cybersecurity skills, then they’ll eventually leave the institution for another, higher-paying job. I think you should be committed to the growth of your team members―it’ll only make your institution more secure.

What are some best practices managers should follow when reporting incidents to their leadership?

Keep the update quick, brief, and to the point. Leave all the technical jargon out, and keep everything in an institutional context. This way leadership can grasp the ramifications of the event and understand what matters. Be prepared to outline how you’re responding and what actions leadership can take to support the incident response team and protect the institution. In the last chapter, I mentioned what I call the General Colin Powell method of reporting, and I suggest using that method when informing leadership. Tell them what you know, what you don’t know, what you think, and what you recommend. Have answers, or at least a plan.

How much institution-wide communication should there be about incidents?

That’s a great question, but a tough one to answer. Transparency is good, but it can also unintentionally lead to further incidents. Do you really want to let your whole institution know about an exploitable weakness? Also, employees can spread information about incidents on social media, which can actually lead to the spread of misinformation. If you are in doubt about whether or not to inform the entire institution about an incident, refer to your Legal Department. In general, institution-wide communication should be direct: We’ve had an incident; these are the facts; this is what you are allowed to say on social media; and this is what you’re not allowed to say on social media.

Another great but tough question: When do you tell the public about an incident? For this type of communication, you’re going to need buy-in from various sources: senior leadership, Legal, HR, and your PR team or external PR partners. You have to make sure the public messaging is consistent. Otherwise, citizens and the media will try to poke holes in your official story. And that can lead to even more issues.

What are the key takeaways for higher education leaders?

Here are key takeaways to help higher education leaders prepare for and respond appropriately to cybersecurity incidents:

  1. Understand your institution’s current cybersecurity environment. 
    Questions to consider: Do you have a Chief Information Security Officer (CISO) and/or a dedicated cybersecurity team at your institution? Have you conducted the appropriate audits and assessments to understand your institution’s vulnerabilities and risks?
  2. Ensure you are prepared for cybersecurity incidents. 
    Questions to consider: Do you have a cybersecurity plan with the appropriate response, communication, and recovery plans/processes? Are you practicing your plan by walking through tabletop exercises? Do you have incident response teams?

Higher education continues to face growing threats of cybersecurity attacks – and it’s no longer a matter of if, but when. Leaders can help mitigate the risk to their institutions by proactively planning with incident response plans, communication plans, and table-top exercises. If you need help creating an incident response plan or wish to speak to us regarding preparing for cybersecurity threats, please reach out to us.
 

Blog
Cyberattacks in higher education—How prepared are you?

Editor’s note: read this if you are a Maine business owner or officer.

New state law aligns with federal rules for partnership audits

On June 18, 2019, the State of Maine enacted Legislative Document 1819, House Paper 1296, An Act to Harmonize State Income Tax Law and the Centralized Partnership Audit Rules of the Federal Internal Revenue Code of 1986.

Just like it says, LD 1819 harmonizes Maine with updated federal rules for partnership audits by shifting state tax liability from individual partners to the partnership itself. It also establishes new rules for who can—and can’t—represent a partnership in audit proceedings, and what that representative’s powers are.

Classic tunes—The Tax Equity and Fiscal Responsibility Act of 1982

Until recently, the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA) set federal standards for IRS audits of partnerships and those entities treated as partnerships for income tax purposes (LLCs, etc.). Those rules changed, however, following passage of the Bipartisan Budget Act of 2015 (BBA) and the Protecting Americans from Tax Hikes Act of 2015 (PATH Act). Changes made by the BBA and PATH Act included:

  • Replacing the Tax Matters Partner (TMP) with a Partnership Representative (PR);
  • Generally establishing the partnership, and not individual partners, as liable for any imputed underpayment resulting from an audit, meaning current partners can be held responsible for the tax liabilities of past partners; and
  • Imputing tax on the net audit adjustments at the highest individual or corporate tax rates.

Unlike TEFRA, the BBA and PATH Act granted Partnership Representatives sole authority to act on behalf of a partnership for a given tax year. Individual partners, who previously held limited notification and participation rights, were now bound by their PR’s actions.

Fresh beats—new tax liability laws under LD 1819

LD 1819 echoes key provisions of the BBA and PATH Act by shifting state tax liability from individual partners to the partnership itself and replacing the Tax Matters Partner with a Partnership Representative.

Eligibility requirements for PRs are also less stringent than those for TMPs. PRs need only demonstrate a “substantial presence in the US” and do not need to be a partner in the partnership; a CFO or another person involved in the business may serve, for example. Additionally, partnerships may have different PRs at the federal and state levels, provided they establish reasonable qualifications and procedures for designating someone other than the partnership’s federal-level PR as its state-level PR.

LD 1819 applies to Maine partnerships for tax years beginning on or after January 1, 2018. Any additional tax, penalties, and/or interest arising from audit are due no later than 180 days after the IRS’ final determination date, though some partnerships may be eligible for a 60-day extension. In addition, LD 1819 requires Maine partnerships to file a completed federal adjustments report.

Partnerships should review their partnership agreements in light of these changes to ensure the goals of the partnership and the individual partners are reflected in the case of an audit. 

Remix―Significant changes coming to the Maine Capital Investment Credit 

Passage of LD 1671 on July 2, 2019 will usher in a significant change to the Maine Capital Investment Credit, a popular credit which allows businesses to claim a tax credit for qualifying depreciable assets placed in service in Maine on which federal bonus depreciation is claimed on the taxpayer's federal income tax return. 

Effective for tax years beginning on or after January 1, 2020, the credit is reduced to a rate of 1.2%. This is a significant reduction from the current credit percentages of 9% for corporate taxpayers and 7% for all other taxpayers. The change is intended to provide fairness to companies conducting business in-state relative to their out-of-state counterparts. Taxpayers continue to have the option to waive the credit and claim depreciation recapture in a future year for the portion of accelerated federal bonus depreciation disallowed by Maine in the year the asset is placed in service.
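
Purely as a hypothetical illustration of the scale of the change, the figures below assume a $100,000 credit base and simply apply the old and new percentages; they are not tax advice and ignore the waiver and recapture considerations described above.

```python
# Hypothetical illustration of the rate change only; not tax advice.
credit_base   = 100_000               # assumed credit base, purely for illustration
old_corporate = 0.09  * credit_base   # 9% rate for corporate taxpayers (current law)
old_other     = 0.07  * credit_base   # 7% rate for all other taxpayers (current law)
new_credit    = 0.012 * credit_base   # 1.2% rate for tax years beginning on/after 1/1/2020

print(f"Old credit (corporate): ${old_corporate:,.0f}")   # $9,000
print(f"Old credit (other):     ${old_other:,.0f}")       # $7,000
print(f"New credit:             ${new_credit:,.0f}")      # $1,200
```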

As a result of this meaningful reduction in the credit, taxpayers who have historically claimed the credit will want to discuss with their tax advisors whether it makes sense to continue claiming the credit for 2020 and beyond.
 

Blog
Maine tax law changes: Music to the ears, or not so much?

Read this if you are an Institutional Research (IR) Director, a Registrar, or are in the C-Suite.

In my last blog, I defined the what and the why of data governance, and outlined the value of data governance in higher education environments. I also asserted data isn’t the problem―the real culprit is our handling of the data (or rather, our deferral of data responsibility to others).

While I remain convinced that data isn’t the problem, recent experiences in the field have confirmed the fact that data governance is problematic. So much, in fact, that I believe data governance defies a “solid,” point-in-time solution. Discouraged? Don’t be. Just recalibrate your expectations, and pursue an adaptive strategy.

This starts with developing data governance guiding principles, with three initial points to consider: 

  1. Key stakeholders should develop your institution’s guiding principles. The team should include representatives from areas such as the office of the Registrar, Human Resources, Institutional Research, and other significant producers and consumers of institutional data. 
  2. The focus of your guiding principles must be on the strategic outcomes your institution is trying to achieve, and the information needed for data-driven decision-making.
  3. Specific guiding principles will vary from institution to institution; effective data governance requires both structure and flexibility.

Here are some baseline principles your institution may want to adopt and modify to suit your particular needs.

  • Data governance entails iterative processes, attention to measures and metrics, and ongoing effort. The institution’s governance framework should be transparent, practical, and agile. This ensures that governance is seen as beneficial to data management and not an impediment.
  • Governance is an enabler. The institution’s work should help accomplish objectives and solve problems aligned with strategic priorities.
  • Work with the big picture in mind. Start from the vantage point that data is an institutional asset. Without an institutional asset mentality it’s difficult to break down the silos that make data valuable to the organization.
  • The institution should identify data trustees and stewards who will lead the data governance efforts at your institution.
    • Data trustees should have responsibility over data, and have the highest level of responsibility for custodianship of data.
    • Data stewards should act on behalf of data trustees, and be accountable for managing and maintaining data.
  • Data quality needs to be baked into the governance process. The institution should build data quality into every step of capture and entry; this increases user confidence in the integrity of the data.
  • The institution should develop working agreements for sharing and accessing data across organizational lines, and strive for processes and documentation that are consistent, manageable, and effective. This helps projects run smoothly, with consistent results every time.
  • The institution should pay attention to building security into the data usage cycle. An institution’s security measures and practices need to be inherent in the day-to-day management of data, and balanced with the working agreements mentioned above. This keeps data secure and protected for the entire organization.
  • Agreed-upon rules and guidelines should be developed to support the data governance structure and decision-making. The institution should define and use pragmatic approaches and practical plans that reward sustainability and collaboration, building a successful roadmap for the future.

Next Steps

Are you curious about additional guiding principles? Contact me. In the meantime, keep your eyes peeled for a future blog that digs deeper into the roles of data trustees and stewards.
 

Blog
Governance: It's good for your data

Proposed House bill brings state income tax standards to the digital age

On June 3, 2019, the US House of Representatives introduced H.R. 3063, also known as the Business Activity Tax Simplification Act of 2019, which seeks to modernize tax laws for the sale of personal property, and clarify physical presence standards for state income tax nexus as it applies to services and intangible goods. But before we can catch up on today, we need to go back in time—great Scott!

Fly your DeLorean back 60 years (you’ve got one, right?) and you’ll arrive at the signing of Public Law 86-272: the Interstate Income Act of 1959. Established in response to the Supreme Court’s ruling on Northwestern States Portland Cement Co. v. Minnesota, P.L. 86-272 allows a business to enter a state, or send representatives, for the purposes of soliciting orders for the sale of tangible personal property without being subject to a net income tax.

But now, in 2019, personal property is increasingly intangible—eBooks, computer software, electronic data and research, digital music, movies, and games, and the list goes on. To catch up, H.R. 3063 seeks to expand on 86-272’s protection and adds “all other forms of property, services, and other transactions” to that exemption. It also redefines business activities of independent contractors to include transactions for all forms of property, as well as events and gathering of information.

Under the proposed bill, taxpayers meet the standards for physical presence in a taxing jurisdiction if they:

  1.  Are an individual physically located in or have employees located in a given state; 
  2. Use the services of an agent to establish or maintain a market in a given state, provided such agent does not perform the same services in the same state for any other person or taxpayer during the taxable year; or
  3. Lease or own tangible personal property or real property in a given state.

The proposed bill excludes from the above criteria taxpayers who have a presence in a state for fewer than 15 days, or whose presence is established in order to conduct “limited or transient business activity.”

In addition, H.R. 3063 expands the definition of “net income tax” to include “other business activity taxes.” This would provide protection from tax in states such as Texas, Ohio, and others that impose an alternative method of taxing the profits of businesses.

H.R. 3063, a measure that would only apply to state income and business activity tax, is in direct contrast to the recent overturn of Quill Corp. v. North Dakota, a sales and use tax standard. Quill required a physical presence but was overturned by the decision in South Dakota v. Wayfair, Inc. Since the Wayfair decision, dozens of states have passed legislation to impose their sales tax regime on out of state taxpayers without a physical presence in the state.

If enacted, the changes made via H.R. 3063 would apply to taxable periods beginning on or after January 1, 2020. For more information: https://www.congress.gov/bill/116th-congress/house-bill/3063/text?q=%7B%22search%22%3A%5B%22hr3063%22%5D%7D&r=1&s=2
 

Blog
Back to the future: Business activity taxes!

Best practices for financial institution contracts with technology providers

As the financial services sector moves in an increasingly digital direction, the need for robust and relevant information security programs cannot be overstated. Financial institutions place more reliance than ever on third-party technology vendors to support core aspects of their business, and in turn rely on those vendors to meet the industry’s high standards for information security, including those in the Gramm-Leach-Bliley Act, Sarbanes-Oxley Section 404, and the regulations established by the Federal Financial Institutions Examination Council (FFIEC).

On April 2, 2019, the FDIC issued Financial Institution Letter (FIL) 19-2019, which outlines important requirements and considerations for financial institutions regarding their contracts with third-party technology service providers. In particular, FIL-19-2019 urges financial institutions to address how their business continuity and incident response processes integrate with those of their providers, and what that could mean for customers.

Common gaps in technology service provider contracts

As auditors of IT controls, we review lots of contracts between financial institutions and their technology service providers. When it comes to recommending areas for improvement, our top observations include:

  • No right-to-audit clause
    Including a right-to-audit clause encourages transparency and provides greater assurance that vendors are providing services, and charging for them, in accordance with their contract.
  • Unclear and/or inadequate rights and responsibilities around service disruptions
    In the event of a service incident, time and transparency are vital. Contracts that lack clear and comprehensive standards, both for the vendor and financial institution, regarding business continuity and incident response expose institutions to otherwise avoidable risk, including slow or substandard communications.
  • No defined recovery standards
    Explicitly defined recovery standards are essential to ensuring both parties know their role in responding and recovering from a disaster or other technology outage.

FIL-19-2019 also reminds financial institutions that they need to properly inform regulators when they undertake contracts or relationships with technology service providers. The Bank Service Company Act requires financial institutions to inform regulators in writing when receiving third-party services like sorting and posting of checks and deposits, computation and posting of interest, preparation and mailing of statements, and other functions involving data processing, Internet banking, and mobile banking services.

Writing clearer contracts that strengthen your institution

Financial institutions should review their contracts, especially those that are longstanding, and make necessary updates in accordance with FDIC guidelines. As operating environments continue to evolve, older contracts, often renewed automatically, are particularly easy to overlook. You also need to review business continuity and incident response procedures to ensure they address all services provided by third-parties.

Senior management and the Board of Directors hold ultimate responsibility for managing a financial institution’s relationships with its technology service providers. Management should inform board members of any and all services the institution receives from third parties to help the board better understand the operating environment and information security needs.

Not sure what to look for when reviewing contracts? Some places to start include:

  • Establish your right-to-audit
    All contracts should include a right-to-audit clause, which preserves your ability to access and audit vendor records relating to their performance under contract. Most vendors will provide documentation of due diligence upon request, such as System and Organization Control (SOC) 1 or 2 reports detailing their financial and IT security controls.

    Many right-to-audit clauses also include a provision allowing your institution to conduct its own audit procedures. At a minimum, don’t hesitate to perform occasional walk-throughs of your vendor’s facilities to confirm that your contract’s provisions are being met.
  • Ensure connectivity with outsourced data centers
    If you outsource some or all of your core banking systems to a hosted data center, place added emphasis on your institution’s business continuity plan to ensure connectivity, such as through the use of multiple internet or dedicated telecommunications circuits. Data vendors should, by contract, be prepared to assist with alternative connectivity.
  • Set standards for incident response communications 
    Clear expectations for incident response are crucial  to helping you quickly and confidently manage the impact of a service incident on your customers and information systems. Vendor contracts should include explicit requirements for how and when vendors will communicate in the event of any issue or incident that affects your ability to serve your customers. You should also review and update contracts after each incident to address any areas of dissatisfaction with vendor communications.
  • Ensure regular testing of defined disaster recovery standards
    While vendor contracts don’t need to detail every aspect of a service provider’s recovery standards, they should ensure those standards will meet your institution’s needs. Contracts should guarantee that the vendor periodically tests, reviews, and updates their recovery standards, with input from your financial institution.

    Your data center may also offer regular disaster recovery and failover testing. If they do, your institution should participate in it. If they don’t, work with the vendor to conduct annual testing of your ability to access your hosted resources from an alternate site.

As financial institutions increasingly look to third-party vendors to meet their evolving technology needs, it is critical that management and the board understand which benefits—and related risks—those vendors present. By taking time today to align your vendor contracts with the latest FFIEC, FDIC, and NCUA standards, your institution will be better prepared to manage risk tomorrow.

For more help gaining control over risk and cybersecurity, see our blog on sustainable solutions for educating your Board of Directors and creating a culture of cybersecurity awareness.
 

Blog
Are your vendor contracts putting you at risk?

Focus on the people: How higher ed institutions can successfully make an ERP system change

The enterprise resource planning (ERP) system is the heart of an institution’s business, maintaining all aspects of day-to-day operations, from student registration to staff payroll. Many institutions have used the same ERP systems for decades and face challenges to meet the changing demands of staff and students. As new ERP vendors enter the marketplace with new features and functionality, institutions are considering a change. Some things to consider:

  1. Don’t just focus on the technology and make change management an afterthought. Transitioning to a new ERP system takes considerable effort, and has the potential to go horribly wrong if sponsorship, good planning, and communication channels are not in place. The new technology is the easy part of a transition—the primary challenge is often rooted in people’s natural resistance to change.  
  2. Overcoming resistance to change requires a thoughtful and intentional approach that focuses on change at the individual level. Understanding this helps leadership focus their attention and energy to best raise awareness and desire for the change.
  3. One effective tool that provides a good framework for successful change is the Prosci ADKAR® model. This framework has five distinct phases that align with ERP change: Awareness, Desire, Knowledge, Ability, and Reinforcement.

These phases provide an approach for developing activities for change management, preparing leadership to lead and sponsor change and supporting employees through the implementation of the change.

The three essential steps to leveraging this framework:

  1. Perform a baseline assessment to establish an understanding of how ready the organization is for an ERP change
  2. Provide sponsorship, training, and communication to drive employee adoption
  3. Prepare and support activities to implement, celebrate, and sustain participation throughout the ERP transition

Following this approach with a change management framework such as the Prosci ADKAR® model can help an organization prepare, guide, and adopt ERP change more easily and successfully. 

If you’re considering a change, but need to prepare your institution for a healthy ERP transition using change management, chart yourself on this ADKAR framework—what is your organization’s change readiness? Do you have appropriate buy-in? What problems will you face?

You now know that this framework can help your changes stick, and have an idea of where you might face resistance. We’re certified Prosci ADKAR® practitioners and have experience guiding Higher Ed leaders like you through these steps. Get in touch—we’re happy to help and have the experience and training to back it up. Please contact the team with any questions you may have.

1 Prosci ADKAR®, from http://www.prosci.com

Blog
Perspectives of an Ex-CIO

Who has the time or resources to keep tabs on everything that everyone in an organization does? No one. Therefore, you naturally need to trust (at least on a certain level) the actions and motives of various personnel. At the top of your “trust level” are privileged users—such as system and network administrators and developers—who keep vital systems, applications, and hardware up and running. Yet, according to the 2019 Centrify Privileged Access Management in the Modern Threatscape survey, 74% of data breaches occurred using privileged accounts. The survey also revealed that of the organizations responding:

  • 52% do not use password vaulting—password vaulting can help privileged users keep track of long, complex passwords for multiple accounts in an encrypted storage vault.
  • 65% still share the use of root and other privileged access—when the use of root privileges is required, users should invoke commands that inherit the privileges of the account (e.g., via sudo) without actually logging in to the account. This ensures that “who” used the account can be tracked.
  • Only 21% have implemented multi-factor authentication—the obvious benefit of multi-factor authentication is stronger user authentication, but in many sectors it is also becoming a compliance requirement.
  • Only 47% have implemented complete auditing and monitoring—thorough auditing and monitoring are vital to securing privileged accounts.

So how does one even begin to trust privileged accounts in today’s environment? 

1. Start with an inventory

To best manage and monitor your privileged accounts, start by finding and cataloguing all assets (servers, applications, databases, network devices, etc.) within the organization. This will be beneficial in all areas of information security such as asset management, change control and software inventory tracking. Next, inventory all users of each asset and ensure that privileged user accounts:

  • Are granted privileges based on roles and responsibilities
  • Require strong, complex passwords (exceeding those required of normal users)
  • Have passwords that expire often (30 days recommended)
  • Use multi-factor authentication
  • Are not shared with others and are not used for normal activity (the user of the privileged account should have a separate account for non-privileged or non-administrative activities)

If an account is only required for a service or application, disable its ability to log in from the server console and from across the network.
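
If your account inventory lives in a spreadsheet export or a simple database, even a short script can flag accounts that fall outside these requirements. The Python sketch below is illustrative only; the field names and the 30-day threshold mirror the list above but are otherwise assumptions about how you might structure the data:

    from datetime import date

    # Hypothetical inventory records for privileged accounts.
    accounts = [
        {"name": "svc_backup", "role_based": True, "mfa": True,
         "password_set": date(2019, 6, 1), "shared": False,
         "interactive_login": True, "service_account": True},
    ]

    MAX_PASSWORD_AGE_DAYS = 30  # expiry recommendation from the list above

    def policy_findings(acct):
        """Return a list of policy violations for one privileged account."""
        today = date.today()
        findings = []
        if not acct["role_based"]:
            findings.append("privileges not assigned by role/responsibility")
        if not acct["mfa"]:
            findings.append("multi-factor authentication not enabled")
        if (today - acct["password_set"]).days > MAX_PASSWORD_AGE_DAYS:
            findings.append("password older than 30 days")
        if acct["shared"]:
            findings.append("account is shared")
        if acct["service_account"] and acct["interactive_login"]:
            findings.append("service account allows interactive login")
        return findings

    for acct in accounts:
        for finding in policy_findings(acct):
            print(f"{acct['name']}: {finding}")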

2. Monitor—then monitor some more

The next step is to monitor the use of the identified privileged accounts. Enable event logging on all systems and aggregate to a log monitoring system or a Security Information and Event Management (SIEM) system that alerts in real time when privileged accounts are active. Configure the system to alert you when privileged accounts access sensitive data or alter database structure. Report any changes to device configurations, file structure, code, and executable programs. If these changes do not correlate to an approved change request, treat them as incidents and investigate.  

Consider software that analyzes user behavior and identifies deviations from normal activity. Privileged accounts that are accessing data or systems not part of their normal routine could be the indication of malicious activity or a database attack from a compromised privileged account. 
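
Commercial SIEM and user behavior analytics tools express this logic in their own rule languages, but the underlying idea is straightforward. As a rough, illustrative sketch (the accounts, systems, and event fields below are invented for the example), the check amounts to comparing each privileged account's activity against a baseline of what it normally touches:

    # Hypothetical baseline: systems each privileged account normally accesses.
    baseline = {
        "dba_jsmith": {"finance-db01", "finance-db02"},
        "netadmin_kdoe": {"core-switch01", "edge-fw01"},
    }

    # Hypothetical events pulled from a log aggregator or SIEM.
    events = [
        {"account": "dba_jsmith", "target": "hr-db01", "action": "SELECT * FROM payroll"},
        {"account": "netadmin_kdoe", "target": "core-switch01", "action": "config change"},
    ]

    def flag_deviations(events, baseline):
        """Yield events where a privileged account touches a system outside its baseline."""
        for event in events:
            normal_targets = baseline.get(event["account"], set())
            if event["target"] not in normal_targets:
                yield event

    for alert in flag_deviations(events, baseline):
        print(f"ALERT: {alert['account']} accessed {alert['target']} ({alert['action']})")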

3. Secure the event logs

Finally, ensure that none of your privileged accounts have access to the logs being used for monitoring, or the ability to alter or delete those logs. In addition to real-time monitoring and alerting, the log management system should be able to produce reports for periodic review by information security staff. The reports should also be archived for forensic purposes in the event of a breach or compromise.

Gain further assistance (and peace of mind) 

BerryDunn understands how privileged accounts should be monitored and audited. We can help your organization assess your current event management process and make recommendations if improvements are needed. Contact our team.

Blog
Trusting privileged accounts in the age of data breaches

“The world is one big data problem,” says MIT scientist and visionary Andrew McAfee.

That’s a daunting (though hardly surprising) quote for many in data-rich sectors, including higher education. Yet blaming data is like blaming air for a malfunctioning wind turbine. Data is a valuable asset that can make your institution move.

To many of us, however, data remains a four-letter word. The real culprit behind the perceived data problem is our handling and perception of data and the role it can play in our success—that is, the relegating of data to a select, responsible few, who are usually separated into hardened silos. For example, a common assumption in higher education is that the IT team can handle it. Not so. Data needs to be viewed as an institutional asset, consumed by many and used by the institution for the strategic purposes of student success, scholarship, and more.

The first step in addressing your “big” data problem? Data governance.

What is data governance?

There are various definitions, but the one we use with our clients is “the ongoing and evolutionary process driven by leaders to establish principles, policies, business rules, and metrics for data sharing.”

Please note that the phrase “IT” does not appear anywhere in this definition.

Why is data governance necessary? For many reasons, including:

  1. Data governance enables analytics. Without data governance, analytics initiatives produce inconsistent results, making it difficult to gain value from them. A critical first step in any data analytics initiative is to make sure that definitions are widely accepted and standards have been established. This step allows decision makers to have confidence in the data being analyzed to describe, predict, and improve operations.
     
  2. Data governance strengthens privacy, security, and compliance. Compliance requirements for both public and private institutions constantly evolve. The more data-reliant your world becomes, the more protected your data needs to be. If an organization does not implement security practices as part of its data governance framework, it becomes easier to fall out of compliance. 
     
  3. Data governance supports agility. How many times have reports for basic information (part-time faculty or student FTEs per semester, for example) been requested, reviewed, and returned for further clarification or correction? And that’s just within your department! Now add multiple requests from the perspective of different departments, and you’re surely going through multiple iterations to create that report. That takes time and effort. By strengthening your data governance framework, you can streamline reporting processes by increasing the level of trust you have in the information you are seeking.

Understanding the value of data governance is the easy part. The real trick is implementing a sustainable data governance framework that recognizes that data is an institutional asset and not just a four-letter word.

Stay tuned for part two of this blog series: The how of data governance in higher education. In the meantime, reach out to me if you would like to discuss additional data governance benefits for your institution.

Blog
Data is a four-letter word. Governance is not.

Not-for-profit board members need to wear many hats for the organization they serve. Every board member begins their term with a different set of skills, often chosen specifically for those unique abilities. As board members, we often assist the organization in raising money and as such, it is important for all members of the board to be fluent in the language of fundraising. Here are some basic definitions you need to know, and the differences between them.

Gifts with donor restriction

While many organizations can use all donations for their operating costs, many donors prefer to specify how―or when―they can use the donation. Gift restrictions come in several forms:

1.    Purpose-restricted gifts are, as their name implies, for a specific use. These can be in response to a request from your organization for that specific purpose or the donor can indicate its purpose when they make the gift. Consider how you solicit gifts from donors to be sure you don’t inadvertently apply restrictions. Not all gifts need to (or even should) be accepted by an organization, so take care in considering if specific restrictions are in line with your mission. 

2.    Time-restricted gifts can come with or without a restricted purpose. You can treat gifts for future periods as revenue today, though the funds would be considered restricted for use until the time restrictions have lapsed. These are often in the form of pledges of gifts for the future, but can also be actual donations provided today for use in coming years.

3.    Some donors prefer the earnings of their gift be available for use, while their actual donation be held in perpetuity. These are often in the form of endowments and specific restrictions may or may not be placed by the donor on the endowment’s earnings. Laws can differ from state-to-state for the treatment of those earnings, but your investment policy should govern the spending from these earnings.

The bottom line? Restricted-purpose gifts must be used for that restricted purpose.

Gifts without restriction are always welcomed by organizations. The board has the ability to direct the spending of these gifts and may designate funds for a future purpose; unlike gifts with donor restrictions, the board retains the discretion to change its own designations.

Whether raising money or reviewing financial information, understanding fundraising language is key for board members to make the most out of donations. See A CPA’s guide to starting a capital campaign and Accounting 101 for development directors blogs for more information. Have questions or want to learn more? Please contact Emily Parker or Sarah Belliveau.

Blog
The language of fundraising: A primer for NFP board members

Of all the changes that came with the sweeping Tax Cuts and Jobs Act (TCJA) in late 2017, none has prompted as big a response from our clients as the changes the TCJA makes to the qualified parking deduction. Then, last month, the IRS issued its long-awaited guidance on this code section in the form of Notice 2018-99.

We've taken a look at both the original provisions and the new guidance, and have collected the salient points and the things we think you need to consider this tax season. For not-for-profit organizations, visit my article here. For-profit companies can read here.

Blog
IRS guidance on qualified parking: Our take

As a new year is upon us, many people think about “out with the old and in with the new”. For those of us who think about technology, and in particular, blockchain technology, the new year brings with it the realization that blockchain is here to stay (at least in some form). Therefore, higher education leaders need to familiarize themselves with some of the technology’s possible uses, even if they don’t need to grasp the day-to-day operational requirements. Here’s a high-level perspective of blockchain to help you answer some basic questions.

Are blockchain and bitcoin interchangeable terms?

No, they aren’t. Bitcoin is an electronic currency that uses blockchain technology (first developed circa 2008 to record bitcoin transactions). Since 2008, many companies and organizations have used blockchain technology for a multitude of purposes.

What is a blockchain?

In its simplest terms, a blockchain is a decentralized, digital list (“chain”) of timestamped records (“blocks”) that are connected, secured by cryptography, and updated by participant consensus.
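
To make the definition concrete, here is a minimal, illustrative Python sketch (not how any production blockchain is implemented) showing how each block records a timestamp, some data, and the hash of the previous block, so altering an earlier block breaks the chain:

    import hashlib
    import time

    def block_hash(index, timestamp, data, previous_hash):
        """Hash the block's contents with SHA-256 so any change is detectable."""
        record = f"{index}|{timestamp}|{data}|{previous_hash}"
        return hashlib.sha256(record.encode()).hexdigest()

    def new_block(index, data, previous_hash):
        timestamp = time.time()
        return {
            "index": index,
            "timestamp": timestamp,
            "data": data,
            "previous_hash": previous_hash,
            "hash": block_hash(index, timestamp, data, previous_hash),
        }

    # Build a small chain; each block points to the hash of the one before it.
    # The "credential" entries are hypothetical higher-ed examples.
    chain = [new_block(0, "genesis block", "0")]
    chain.append(new_block(1, "credential issued to student A", chain[-1]["hash"]))
    chain.append(new_block(2, "credential verified by employer B", chain[-1]["hash"]))

    # Tampering with block 1 would break every later block's previous_hash link.
    print(chain[2]["previous_hash"] == chain[1]["hash"])  # True until someone tampers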

What is cryptography?

Cryptography refers to converting unencrypted information into encrypted information—and vice versa—to both protect data and authenticate users.

What are the pros of using blockchain?

Because blockchain technology is inherently decentralized, you can reduce the need for “middleman” entities (e.g., financial institutions or student clearinghouses). This, in turn, can lower transactional costs and other expenses, as well as cybersecurity risks—as hackers often like to target large, info-rich, centralized databases.

Decentralization removes central points of failure. In addition, blockchain transactions are generally more secure than other types of transactions, irreversible, and verifiable by the participants. These transaction qualities help prevent fraud, malware attacks, and other risks and issues prevalent today.

What are the cons of using blockchain technology?

Each blockchain transaction requires signature verification and processing, which can be resource-intensive. Furthermore, blockchain technology currently faces strong opposition from certain financial institutions for a variety of reasons. Finally, although blockchains offer a secure platform, they are not impervious to cyberattacks. Blockchain does not guarantee a hacker-proof environment.

How can blockchain benefit higher education institutions?

Blockchain technology can provide higher education institutions with a more secure way of making and recording financial transactions. You can use blockchains to verify and transfer academic credits and certifications, protect student personal identifiable information (PII) while simultaneously allowing students to access and transport their PII, decentralize academic content, and customize learning experiences. At its core, blockchain provides a fresh alternative to traditional methods of identity verification, an ongoing challenge for higher education administration.

As blockchain becomes less of a buzzword and begins to expand beyond the realm of digital currency, colleges and universities need to consider it for common challenges such as identity management, application processing, and student credentialing. If you’d like to discuss the potential benefits blockchain technology provides, please contact me.

Blog
Higher education and blockchain 101: It's not just for bitcoin anymore

All teams experience losing streaks, and all franchise dynasties lose some luster. Nevertheless, the game must go on. What can coaches do? The answer: be prepared, be patient, and be PR savvy. Business managers should keep these three P’s in mind as they read Chapter 8 in BerryDunn’s Cybersecurity Playbook for Management, which highlights how organizations can recover from incidents.

In the last chapter, we discussed incident response. What’s the difference between incident response and incident recovery?

RG: Incident response refers to detecting and identifying an incident—and hopefully eradicating the source or cause of the incident, such as malware. Incident recovery refers to getting things back to normal after an incident. They are different sides of the same resiliency coin.

I know you feel strongly that organizations should have incident response plans. Should organizations also have incident recovery plans?

RG: Absolutely. Have a recovery plan for each type of possible incident. Otherwise, how will your organization know if it has truly recovered from an incident? Having incident recovery plans will also help prevent knee-jerk decisions or reactions that could unintentionally cover up or destroy an incident’s forensic evidence.

In the last chapter, you stated managers and their teams can reference or re-purpose National Institute of Standards and Technology (NIST) special publications when creating incident response plans. Is it safe to assume you also suggest referencing or re-purposing NIST special publications when creating incident recovery plans?

RG: Yes. But keep in mind that incident recovery plans should also mesh with, or reflect, any business impact analyses developed by your organization. This way, you will help ensure that your incident recovery plans prioritize what needs to be recovered first—your organization’s most valuable assets.

That said, I should mention that cybersecurity attacks don’t always target an organization’s most valuable assets. Sometimes, cybersecurity attacks simply raise the “misery index” for a business or group by disrupting a process or knocking a network offline.

Besides having incident recovery plans, what else can managers do to support incident recovery?

RG: Similar to what we discussed in the last chapter, managers should make sure that internal and external communications about the incident and the resulting recovery are consistent, accurate, and within the legal requirements for your business or industry. Thus, having a good incident recovery communication plan is crucial. 

When should managers think about bringing in a third party to help with incident recovery?

RG: That’s a great question. I think this decision really comes down to the confidence you have in your team’s skills and experience. An outside vendor can give you a lot of different perspectives, but your internal team knows the business. This is one area where it doesn’t hurt to have an outside perspective, because recovery is so important and we often don’t perceive ourselves as the outside world does.

This decision also depends on the scale of the incident. If your organization is trying to recover from a pretty significant or high-impact breach or outage, you shouldn’t hesitate to call someone. Also, check to see if your organization has cybersecurity insurance. If your organization has cybersecurity insurance, then your insurance company is likely going to tell you whether or not you need to bring in an outside team. Your insurance company will also likely help coordinate outside resources, such as law enforcement and incident recovery teams.

Do you think most organizations should have cybersecurity insurance? 

RG: In this day and age? Yes. But organizations need to understand that, once they sign up for cybersecurity insurance, they’re going to be scrutinized by the insurance company—under the microscope, so to speak—and that they’ll need to take their “cybersecurity health” very seriously.

Organizations need to really pay attention to what they’re paying for. My understanding is that many different types of cybersecurity insurance have very high premiums and deductibles. So, in theory, you could have a $1 million insurance policy, but a $250,000 deductible. And keep in mind that even a simple incident can cost more than $1 million in damages. Not surprisingly, I know of many organizations signing up for $10 million insurance policies. 

How can managers improve internal morale and external reputation during the recovery process?

RG: Well, leadership sets the tone. It’s like in sports—if a coach starts screaming and yelling, then it is likely that the players will start screaming and yelling. So set expectations for measured responses and reactions. 

Check in on a regular basis with your internal security team, or whoever is conducting incident recovery within your organization. Are team members holding up under pressure? Are they tired? Have you pushed them to the point where they are fatigued and making mistakes? The morale of these team members will, in part, dictate the morale of others in the organization.

Another element that can affect morale is—for lack of a better word—idleness resulting from an incident. If you have a department that can’t work due to an incident, and you know that it’s going to take several days to get things back to normal, you may not want department members coming into work and just sitting around. Think about it. At some point, these idle department members are going to grumble and bicker, and eventually affect the wider morale. 

As for improving external reputation? I don’t think it really matters, honestly, because I don’t think most people really, truly care. Why? Because everyone is vulnerable, and attacks happen all the time. At this point in time, cyberattacks seem to be part of the normal course and rhythm of business. Look at all the major breaches that have occurred over the past couple of years. There’s always some immediate, short-term fallout, but there’s been very little long-term fallout. Now, that being said, it is possible for organizations to suffer a prolonged PR crisis after an incident. How do you avoid this? Keep communication consistent—and limit interactions between employees and the general public. One of the worst things that can happen after an incident is for a CEO to say, “Well, we’re not sure what happened,” and then for an employee to tweet exactly what happened. Mixed messages are PR death knells.

Let’s add some context. Can you identify a business or group that, in your opinion, has handled the incident recovery process well?

RG: You know, I can’t, and for a very good reason. If a business or group does a really good job at incident recovery, then the public quickly forgets about the incident—or doesn’t even hear about it in the first place. Conversely, I can identify many businesses or groups that have handled the incident recovery process poorly, typically from a PR perspective.

Any final thoughts about resiliency?

RG: Yes. As you know, over the course of this blog series, I have repeated the idea that IT is not the same as security. These are two different concepts that should be tackled by two different teams—or approached in their appropriate context. Similarly, managers need to remember that resiliency is not an IT process—it’s a business process. You can’t just shove off resiliency responsibilities onto your IT team. As managers, you need to get directly involved with resiliency, just as you need to get directly involved with maturity, capacity, and discovery. 

So, we’ve reached the end of this blog series. Above all else, what do you hope managers will gain from it? 

RG: First, the perspective that to understand your organization’s cybersecurity is to truly understand your organization and its business. And I predict that some managers will be able to immediately improve business processes once they better grasp the cybersecurity environment. Second, the perspective that cybersecurity is ultimately the responsibility of everyone within an organization. Sure, having a dedicated security team is great, but everyone—from the CEO to the intern—plays a part. Third, the perspective that effective cybersecurity is effective communication. A siloed, closed-door approach will not work. And finally, the perspective that cybersecurity is always changing, so it’s a best practice to keep reading and learning about it. Anyone with questions should feel free to reach out to me directly.

Blog
Incident recovery: Cybersecurity playbook for management

As 2018 is about to come to a close, organizations with fiscal year ends after December 15, 2018, are poised to start implementing the new not-for-profit reporting standard. Here are three areas to address before the close of the fiscal year to set your organization up for a smooth and successful transition, and keep in compliance:

  1. Update and approve policies—organizations need to both change certain disclosures and add new ones. The policies in place at the end of the year will be pivotal in creating the framework within which to draft these new disclosures (for example, treatment of board designations, underwater endowments, and liquidity).
  2. Functional expense reporting—if you have not historically reported expenses by natural and functional classification, develop the methodology for cost allocation. If you already have a framework in place, revisit it to determine if this still fits your organization. Finally, determine where you will present this information in the financial statements.
  3. Internal investment costs—be sure you have a methodology to segregate the organization’s internal investment costs such as internal staff time (remember, this is the cost to generate the income, not account for it) and consider the overall disclosure.

While the implementation of the new reporting standard will not be without cost (both internal costs and audit costs), if your organization considers this an opportunity to better tell your story, the end result will be a much more useful financial narrative. Don’t forget to include the BerryDunn implementation whitepaper in your implementation strategy.

We at BerryDunn are helping organizations gain momentum with a personal touch, through our not-for-profit reporting checkup. This checkup includes initial recast of the prior financial statements to the new format, a personalized review of the checklist to identify opportunities for success, and consideration of the footnotes to be updated. Contact me and find out how you can join the list of organizations getting ahead of the new standard.

Blog
Three steps to ace the new not-for-profit reporting standard

IRS Notice 2018-67 Hits the Charts
Last week, in addition to The Eagles’ Greatest Hits (1971-1975) album becoming the highest-selling album of all time, overtaking Michael Jackson’s Thriller, the IRS issued Notice 2018-67, its first formal guidance on Internal Revenue Code Section 512(a)(6), one of two major code sections added by the Tax Cuts and Jobs Act of 2017 that directly impact tax-exempt organizations. Will it, too, be a big hit? It remains to be seen.

Section 512(a)(6) specifically deals with the reporting requirements for not-for-profit organizations carrying on multiple unrelated business income (UBI) activities. Here, we will summarize the notice and help you to gain an understanding of the IRS’s thoughts and anticipated approaches to implementing §512(a)(6).

While there have been some (not so quiet) grumblings from the not-for-profit sector about guidance on Code Section 512(a)(7) (aka the parking lot tax), unfortunately we still have not seen anything yet. With Notice 2018-67’s release last week, we’re optimistic that guidance may be on the way and will let you know as soon as we see anything from the IRS.

Before we dive in, it’s important to note last week’s notice is just that—a notice, not a Revenue Procedure or other substantive legislation. While the notice can, and should, be relied upon until we receive further guidance, everything in it is open to public comment and/or subject to change. With that, here are some highlights:

No More Netting
512(a)(6) requires the organization to calculate unrelated business taxable income (UBTI), including for purposes of determining any net operating loss (NOL) deduction, separately with respect to each such trade or business. The notice requires this separate reporting (or silo-ing) of activities in order to determine activities with net income from those with net losses.

Under the old rules, if an organization had two UBI activities in a given year (e.g., one with $1,000 of net income and another with a $1,000 net loss), you could simply net the two together on Form 990-T and report $0 UBTI for the year. That is no longer the case. From now on, the activity with the current-year loss must effectively be ignored, prompting the organization to report $1,000 as taxable UBI, and pay the associated federal and state income taxes, while the activity with the $1,000 loss will get “hung up” as an NOL specific to that activity and carried forward until said activity generates net income.

Separate Trade or Business
So, how does one distinguish (or silo) a separate trade or business from another? The Treasury Department and IRS intend to propose some regulations in the near future, but for now recommend that organizations use a “reasonable good-faith interpretation”, which for now includes using the North American Industry Classification System (NAICS) in order to determine different UBI activities.

For those not familiar, the NAICS categorizes different lines of business with a six-digit code. For example, the NAICS code for renting* out a residential building or dwelling is 531110, while the code for operating a potato farm is 111211. While distinguishing residential rental activities from potato farming activities might be rather straightforward, the waters become muddier if an organization rents both a residential property and a nonresidential property (NAICS code 531120). Does this mean the organization has two separate UBI rental activities, or can both be grouped together as rental activities? The notice does not provide anything definitive, but rather requests public comments; we expect to see something more concrete once the public comment period is over.

*In the above example, we’re assuming the rental properties are debt-financed, prompting a portion of the rental activity to be treated as UBI.

UBI from Partnership Investments (Schedule K-1)
Notice 2018-67 does address how to categorize/group unrelated business income for organizations that receive more than one partnership K-1 with UBI reported. In short, if the Schedule K-1s the organization receives can meet either of the tests below, the organization may treat the partnership investments as a single activity/silo for UBI reporting purposes. The notice offers the following:

De Minimis Test
You can aggregate UBI from multiple K-1s together as long as the exempt organization directly holds no more than 2% of the profits interest and no more than 2% of the capital interest. These percentages can be found on the face of the Schedule K-1 from the partnership, and the notice states those percentages as shown can be used for this determination. Additionally, the notice allows organizations to use an average of beginning-of-year and end-of-year percentages for this determination.

Ex: If an organization receives a K-1 with UBI reported, and the beginning-of-year profit and capital percentages are 3% while the end-of-year percentages are 1%, the average for the year is 2% ((3% + 1%) / 2 = 2%). In this example, the K-1 meets the de minimis test.

There is a bit of a caveat here—when determining an exempt organization's partnership interest, the interest of a disqualified person (i.e. officers, directors, trustees, substantial contributors, and family members of any of those listed here), a supporting organization, or a controlled entity in the same partnership will be taken into account. Organizations need to review all K-1s received and inquire with the appropriate person(s) to determine if they meet the terms of the de minimis test.
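
If your organization receives many K-1s, the averaging math can be scripted easily. The Python sketch below is illustrative only: it applies the averaging and 2% thresholds described above, and assumes the preparer has already folded in any interests attributed from disqualified persons, supporting organizations, or controlled entities:

    def de_minimis_test(profit_boy, profit_eoy, capital_boy, capital_eoy):
        """Apply the Notice 2018-67 de minimis test to one partnership interest.

        Percentages are expressed as numbers (e.g., 3 for 3%), taken from the
        face of the Schedule K-1; beginning- and end-of-year figures are averaged.
        Interests attributed from disqualified persons, supporting organizations,
        or controlled entities must already be included in these inputs.
        """
        avg_profit = (profit_boy + profit_eoy) / 2
        avg_capital = (capital_boy + capital_eoy) / 2
        return avg_profit <= 2 and avg_capital <= 2

    # The example above: 3% at the beginning of the year, 1% at the end.
    print(de_minimis_test(3, 1, 3, 1))  # True, so this K-1 may be aggregated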

Control Test
If an organization is not able to pass the de minimis test, you may instead use the control test. An organization meets the requirements of the control test if the exempt organization (i) directly holds no more than 20 percent of the capital interest; and (ii) does not have control or influence over the partnership.

When determining control or influence over the partnership, you need to apply all relevant facts and circumstances. The notice states:

“An exempt organization has control or influence if the exempt organization may require the partnership to perform, or may prevent the partnership from performing, any act that significantly affects the operations of the partnership. An exempt organization also has control or influence over a partnership if any of the exempt organization's officers, directors, trustees, or employees have rights to participate in the management of the partnership or conduct the partnership's business at any time, or if the exempt organization has the power to appoint or remove any of the partnership's officers, directors, trustees, or employees.”

As noted above, we recommend your organization review any K-1s you currently receive. It’s important to take a look at Line I1 and make sure your organization is listed there as “Exempt Organization”. All too often we see not-for-profit organizations listed as “Corporations”, which, while usually technically correct, is really the designation for a for-profit corporation and could result in the organization not receiving the information it needs to determine what portion, if any, of income/loss is attributable to UBI.

Net Operating Losses
The notice also provides some guidance regarding the use of NOLs. The good news is that any pre-2018 NOLs are grandfathered under the old rules and can be used to offset total UBTI on Form 990-T.

Conversely, any NOLs generated post-2017 are going to be considered silo-specific, with the intent being that the NOL will only be applicable to the activity which gave rise to the loss. There is also a limitation on post-2017 NOLs: the NOL deduction for a given activity is limited to 80% of that activity’s income for the year. Said another way, an activity that has net UBTI in a given year, even with post-2017 NOLs, will still potentially have an associated tax liability for the year.
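
As a rough illustration of that last point, here is a deliberately simplified Python sketch (it assumes the flat 21% federal rate and ignores state taxes and the specific deduction):

    FEDERAL_RATE = 0.21  # assumed flat corporate rate; state tax ignored for simplicity

    def silo_tax(current_ubti, post_2017_nol_carryforward):
        """Estimate federal tax for one UBI silo under the 80% NOL limitation."""
        if current_ubti <= 0:
            return 0.0
        # The NOL deduction cannot exceed 80% of the silo's current-year income.
        allowed_nol = min(post_2017_nol_carryforward, 0.80 * current_ubti)
        taxable = current_ubti - allowed_nol
        return taxable * FEDERAL_RATE

    # An activity with $10,000 of UBTI and ample post-2017 NOLs still owes tax on $2,000.
    print(silo_tax(10_000, 50_000))  # roughly 420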

Obviously, Notice 2018-67 provides a good baseline of general information, but the details will be forthcoming, and only then will we know if they have a hit. Hopefully the IRS will not Take It To The Limit in terms of issuing formal guidance on 512(a)(6) & (7). Until further IRS guidance arrives, folks in the not-for-profit sector will not be able to Take It Easy or have any semblance of a Peaceful Easy Feeling. Stay tuned.

Blog
Tax-exempt organizations: The wait is over, sort of

Artificial Intelligence, or AI, is no longer the exclusive tool of well-funded government entities and defense contractors, nor merely a plot device in science fiction film and literature. Instead, AI is becoming as ubiquitous as the personal computer. The opportunities of what AI can do for internal audit are almost as endless as the challenges this disruptive technology represents.

To understand how AI will influence internal audit, we must first understand what AI is. The concept of AI—a technology that can perceive the world directly and respond to what it perceives—is often attributed to Alan Turing, though the term “Artificial Intelligence” was coined much later, in 1956, at Dartmouth College in Hanover, New Hampshire. Turing was a British scientist who developed the machine that cracked the Nazis’ Enigma code. Turing thought of AI as a machine that could convince a human that it also was human. Turing’s humble description of AI is as simple as it is elegant. Fast-forward some 60 years and AI is all around us, being applied in novel ways almost every day. Just consider autonomous self-driving vehicles, facial recognition systems that can spot a fugitive in a crowd, search engines that tailor our online experience, and even Pandora, which analyzes our tastes in music.

Today, in practice and in theory, there are four types of AI. Type I AI may be best represented by IBM’s Deep Blue, a chess-playing computer that made headlines in 1997 when it won a match against Russian chess champion Garry Kasparov. Type I AI is reactive. Deep Blue can beat a chess champion because it evaluates every piece on the chessboard, calculates all possible moves, then predicts the optimal move among all possibilities. Type I AI is really nothing more than a super calculator, processing data much faster than the human mind can. This is what gives Type I AI an advantage over humans.

Type II AI, which we find in autonomous cars, is also reactive. For example, it applies brakes when it predicts a collision; but, it has a low form of memory as well. Type II AI can briefly remember details, such as the speed of oncoming traffic or the distance between the car and a bicyclist. However, this memory is volatile. When the situation has passed, Type II AI deletes the data from its memory and moves on to the next challenge down the road.

Type II AI’s simple form of memory management and its ability to “learn” from the world in which it resides are a significant advancement.

The leap from Type II AI to Type III AI has yet to occur. Type III AI will not only incorporate awareness of the world around it, but will also be able to predict the responses and motivations of other entities and objects, and understand that emotions and thoughts are the drivers of behavior. Taking the autonomous car analogy to the next step, Type III AI vehicles will interact with the driver. By conducting a simple assessment of the driver’s emotions, the AI will be able to suggest a soothing playlist to ease the driver’s tensions during his or her commute, reducing the likelihood of aggressive driving. Lastly, Type IV AI–a milestone that will likely be reached at some point over the next 20 or 30 years—will be self-aware. Not only will Type IV AI soothe the driver, it will interact with the driver as if it were another human riding along for the drive; think of “HAL” in Arthur C. Clarke’s 2001: A Space Odyssey.

So what does this all mean to internal auditors?
While it may be a bit premature to predict AI’s impact on the internal audit profession, AI is already being used to predict control failures in institutions with robust cybersecurity programs. When malicious code is detected and certain conditions are met, AI-enabled devices can either divert the malicious traffic away from sensitive data, or even shut off access completely until an incident response team has had time to investigate the nature of the attack and take appropriate actions. This may seem a rather rudimentary use of AI, but in large financial institutions or manufacturing facilities, minutes count—and equal dollars. Allowing AI to cut off access to a line of business that may cost the company money (and its reputation) is a significant leap of faith, and not for the faint of heart. Next generation AI-enabled devices will have even more capabilities, including behavioral analysis, to predict a user’s intentions before gaining access to data.

In the future, internal audit staff will no doubt train AI to seek conditions that require deeper analysis, or even predict when a control will fail. Yet AI will be able to facilitate the internal audit process in other ways. Consider AI’s role in data quality. Advances in inexpensive data storage (e.g., the cloud) have allowed the creation and aggregation of volumes of data subject to internal audit, making the testing of the data’s completeness, integrity, and reliability a challenging task considering the sheer volume of data. Future AI will be able to continuously monitor this data, alerting internal auditors not only of the status of data in both storage and motion, but also of potential fraud and disclosures.

The analysis won’t stop there.
AI will measure the performance of the data in meeting organizational objectives, and suggest where efficiencies can be gained by focusing technical and human resources to where the greatest risks to the organization exist in near real-time. This will allow internal auditors to develop a common operating picture of the day-to-day activities in their business environments, alerting internal audit when something doesn’t quite look right and requires further investigation.

As promising as AI is, the technology comes with some ethical considerations. Because AI is created by humans, it is not always free of human flaws. For instance, AI can become unpredictably biased: AI used in facial recognition systems has made racial judgments based on certain common facial characteristics. In addition, AI that gathers data from multiple sources spanning a person’s financial status, credit status, education, and individual likes and dislikes could be used to profile certain groups for nefarious purposes. Moreover, AI has the potential to be weaponized in ways that we have yet to comprehend.

There is also the question of how internal auditors will be able to audit AI. Keeping AI safe from internal fraudsters and external adversaries is going to be paramount. AI’s ability to think and act faster than humans will challenge all of us to create novel ways of designing and testing controls to measure AI’s performance. This, in turn, will likely make partnerships with consultants that can fill knowledge gaps even more valuable. 

Challenges and pitfalls aside, AI will likely have a tremendous positive effect on the internal audit profession by simultaneously identifying risks and evaluating processes and control design. In fact, it is quite possible that the first adopters of AI in many organizations may not be the cybersecurity departments at all, but rather the internal auditor’s office. As a result, future internal auditors will become highly technical professionals and perhaps trailblazers in this new and amazing technology.

Blog
Artificial intelligence and the future of internal audit

The world of professional sports is rife with instability and insecurity. Star athletes leave or become injured; coaching staff make bad calls or public statements. The ultimate strength of a sports team is its ability to rebound. The same holds true for other groups and businesses. Chapter 7 in BerryDunn’s Cybersecurity Playbook for Management looks at how organizations can prepare for, and respond to, incidents.

The final two chapters of this Cybersecurity Playbook for Management focus on the concept of resiliency. What exactly is resiliency?

RG: Resiliency refers to an organization’s ability to keep the lights on—to keep producing—after an incident. An incident is anything that disrupts normal operations, such as a malicious cyberattack or an innocent IT mistake.

Among security professionals, attitudes toward resiliency have changed recently. Consider the fact that the U.S. Department of Defense (DOD) has come out and said, in essence, that cyberwarfare is a war that it cannot win—because cyberwarfare is so complex and so nuanced. The battlefield changes daily and the opponents have either a lot of resources or a lot of time on their hands. Therefore, the DOD is placing an emphasis on responding and recovering from incidents, rather than preventing them.

That’s sobering.

RG: It is! And businesses and organizations should take note of this attitude change. Protection, which was once the start and endpoint for security, has given way to resiliency.

When and why did this attitude change occur?

RG: Several years ago, security experts started to grasp just how clever certain nation states, such as China and Russia, were at using malicious software. If you could point to one significant event, likely the 2013 Target breach is it.

What are some examples of incidents that managers need to prepare for?

RG: Examples range from external breaches and insider threats to instances of malfeasance or incompetence. Different types of incidents lead to the same types of results—yet you can’t have a broad view of incidents. Managers should work with their teams to create incident response plans that reflect the threats associated with their specific line of business. A handful of general incident response plans isn’t going to cut it.

Managers need to work with their teams to develop a specific incident response plan for each specific type of incident. Why? Well, think of it this way: Your response to a careless employee should be different from your response to a malicious employee, for a whole host of legal reasons.

Incident response is not a cookie-cutter process. In fact, it is quite the opposite. This is one of the reasons I highly suggest that security teams include staff members with liberal arts backgrounds. I’m generalizing, but these people tend to be creative. And when you’re responding to incidents, you want people who can look at a problem or situation from a global or external perspective, not just a technical or operational perspective. These team members can help answer questions such as, what does the world see when they look at our organization? What organizational information might be valuable to, or targeted by, malicious actors? You’ll get some valuable fresh perspectives.

How short or long should the typical incident response plan be?

RG: They can be as short as needed; I often see good incident response plans no more than three or four pages in length. However, it is important that incident response plans are task oriented, so that it is clear who does what next. And when people follow an incident response plan, they should physically or digitally check off each activity, then record each activity.

What system or software do you recommend for recording incidents and responses?

RG: There are all types of help desk software you can use, including free and open source software. I recommend using help desk software with workflow capabilities so your team can assign and track tasks.

Any other tips for developing incident response plans?

RG: First, managers should work with, and solicit feedback from, different data owners and groups within the organization—such as IT, HR, and Legal—when developing incident response plans. If you create these documents in a vacuum, they will be useless.

Second, managers and their teams should take their time and develop the most “solid” incident response plans possible. Don’t rush the process. The effectiveness of your incident response plans will be critical in assessing your organization’s ability to survive a breach. Because of this, you should be measuring your response plans through periodic testing, like conducting tabletop exercises.

Third, keep your organization’s customers in mind when developing these plans. You want to make sure external communications are consistent, accurate, and within the legal requirements for your business or industry. The last thing you want is customers receiving conflicting messages about the incident. This can cause unnecessary grief for you, and can also cause an immeasurable loss of customer confidence.

Are there any decent incident response plans in the public domain that managers and their teams can adapt for their own purposes?

RG: Yes. My default reference is the National Institute of Standards and Technology (NIST). NIST has many special publications that describe the incident response process, how to develop a solid plan, and how to test your plan.

Should organizations have dedicated incident response teams?
RG: Definitely. Larger organizations usually have the resources and ability to staff these teams internally. Smaller organizations may want to consider hiring a reputable third party to act as an incident response team. The key with hiring a third party? Don’t wait until an incident occurs! If you wait, you’re going to panic, and make panic-based decisions. Be proactive and hire a third party on retainer.

That said, even larger organizations should consider hiring a third party on an annual basis to review incident response plans and processes. Why? Because every organization can grow complacent, and complacency kills. A third party can help gauge the strengths and weaknesses of your internal incident response teams, and provide suggestions for general or specific training. A third party can also educate your organization about the latest and greatest cyber threats.

Should managers empower their teams to conduct internal “hackathons” in order to test incident response?

RG: Sure! It’s good practice, and it can be a lot of fun for team members. There are a few caveats. First, don’t call it a “hackathon.” The word can elicit negative reactions from upper management—whose support you really need. Call it “active testing” or “continuous improvement exercises.” These activities allow team members to think creatively, and are opportunities for them to boost their cybersecurity knowledge. Second, be prepared for pushback. Some managers worry if team members gain more cybersecurity skills, then they’ll eventually leave the organization for another, higher-paying job. I think you should be committed to the growth of your team members; it’ll only make your organization more secure.

What are some best practices managers should follow when reporting incidents to their leadership?

RG: Keep the update quick, brief, and to the point. Leave all the technical jargon out, and keep everything in a business context. This way leadership can grasp the ramifications of the event and understand what matters. Be prepared to outline how you’re responding and what actions leadership can take to support the incident response team and protect the business. In the last chapter, I mentioned what I call the General Colin Powell method of reporting, and I suggest using that method when informing leadership. Tell them what you know, what you don’t know, what you think, and what you recommend. Have answers, or at least a plan.

Above all else, don’t scare leadership. If you present them with panic, you’re going to get panic back. Be a calm voice in the storm. Management will respond better, as will your team.

Another thing to keep in mind is different business leaders have different responses to this sort of news. An elected official, for example, might react differently than the CEO of a private company, simply due to possible political fallout. Keep this context in mind when reporting incidents. It can help you craft the message.

How much organization-wide communication should there be about incidents?

RG: That’s a great question, but a tough one to answer. Transparency is good, but it can also unintentionally lead to further incidents. Do you really want to let your whole organization know about an exploitable weakness? Also, employees can spread information about incidents on social media, which can actually lead to the spread of misinformation. If you are in doubt about whether or not to inform the entire organization about an incident, refer to your Legal Department. In general, organization-wide communication should be direct: We’ve had an incident; these are the facts; this is what you are allowed to say on social media; and this is what you’re not allowed to say on social media.

Another great but tough question: When do you tell the public about an incident? For this type of communication, you’re going to need buy-in from various sources: leadership, Legal, HR, and your PR team or external PR partners. You have to make sure the public messaging is consistent. Otherwise, citizens and the media will try to poke holes in your official story. And that can lead to even more issues.

So what’s next?

RG: Chapter 8 will focus on how managers can help their organizations recover from a cybersecurity incident.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Incident response: Cybersecurity playbook for management

Cloud services are becoming ever more prevalent, and are rapidly changing how companies and organizations conduct their day-to-day business.

Many higher education institutions currently utilize cloud services for learning management systems (LMS) and student email systems. Yet there are some common misunderstandings and assumptions about cloud services, especially among higher education administrative leaders who may lack IT knowledge. The following information will provide these leaders with a better understanding of cloud services and how to develop a cloud services strategy.

What are cloud services?

Cloud services are internet-based technology services provided and/or hosted by offsite vendors. Cloud services can include a variety of applications, resources, and services, and are designed to be easily scalable, cost effective, and fully managed by the cloud services vendor.

What are the different types?

Cloud services are generally categorized by what they provide. Today, there are four primary types of cloud services:

[Graphic: Cloud service types]

Cloud services can be further categorized by how they are provided:

  1. Private cloud services are dedicated to only one client. Security and control is the biggest value for using a private cloud service.
  2. Public cloud services are shared across multiple clients. Cost effectiveness is the best value of public cloud services because resources are shared among a large number of clients.
  3. Hybrid cloud services are combinations of on-premise software and cloud services. The value of hybrid cloud services is the ability to adopt new cloud services (private or public) slowly while maintaining on-premise services that continue to provide value.

How do cloud services benefit higher education institutions?

Higher education administrative leaders should understand that cloud services provide multiple benefits.
Some examples:

[Graphic: Cloud services benefits for higher education]


What possible problems do cloud services present to higher education institutions?

At the dawn of the cloud era, many of the problems were technical or operational in nature. As cloud services have become more sophisticated, the problems have become more security and business related. Today, higher education institutions have to tackle challenges such as cybersecurity/disaster recovery, data ownership, data governance, data compliance, and integration complexities.

While these problems and questions may be daunting, they can be overcome with strong leadership and best-practice policies, processes, and controls.

How can higher education administrative leaders develop a cloud services strategy?

You should work closely with IT leadership to complete this five-step planning checklist to develop a cloud services strategy: 

  1. Identify new services to be added or consolidated; build a business case and identify the return on investment (ROI) for moving to the cloud, in order to answer:
     • What cloud services does your institution already have?
     • What services should you consider replacing with cloud services, and why?
     • How are data decisions being made?
  2. Identify design, technical, network, and security requirements (e.g., private or public; are there cloud services already in place that can be expanded upon, such as a private cloud service), in order to answer:
     • Is your IT staff ready to migrate, manage, and support cloud services?
     • Do your business processes align with using cloud services?
     • Do cloud service-provided policies align with your institution’s security policies?
     • Do you have the in-house expertise to integrate cloud services with existing on-premise services?
  3. Decide where data will be stored and how it will be governed (e.g., on-premise, off-premise data center, cloud), in order to answer:
     • Who owns the data in the institution’s cloud, and where?
     • Who is accountable for data decisions?
  4. Integrate with current infrastructure; ensure the cloud strategy easily allows scalability for expansion and additional services, in order to answer:
     • What integration points will you have between on-premise and cloud applications or services, and can the institution easily implement, manage, and support them?
  5. Identify business requirements — budget, timing, practices, policies, and controls required for cloud services and compliance, in order to answer:
     • Will your business model need to change in order to support a different cost model for cloud services (i.e., less capital for equipment purchases every three to five years versus a steady monthly/yearly operating cost model for cloud services)?
     • Does your institution understand the current state and federal compliance and privacy regulations as they relate to data?
     • Do you have a contingency plan if your primary cloud services provider goes out of business?
     • Do your contracts align with institutional, state, and federal guidelines?

Need assistance?

BerryDunn’s higher education team focuses on advising colleges and universities in improving services, reducing costs, and adding value. Our team is well qualified to assist in understanding the cloud “skyscape.” If your institution seeks to maximize the value of cloud services or develop a cloud services strategy, please contact me.

Blog
Cloud services 101: An almanac for higher education leaders

Any sports team can pull off a random great play. Only the best sports teams, though, can pull off great plays consistently — and over time. The secret to this lies in the ability of the coaching staff to manage the team on a day-to-day basis, while also continually selling their vision to the team’s ownership. Chapter Six in BerryDunn’s Cybersecurity Playbook for Management looks at how managers can achieve similar success through similar actions.

The title of this chapter is “The Workflow.” What are we talking about today?

RG: In previous chapters, we’ve walked managers through cybersecurity concepts like maturity, capacity, and discovery. Today, we’re going to discuss how you can foster a consistent and repeatable cybersecurity program — the cybersecurity workflow, if you will. And for managers, this is where game planning begins. To achieve success, they need to effectively oversee their team on a day-to-day basis, and continually sell the cybersecurity program to the business leadership for whom they work — the board or CEO.

Let’s dive right in. How exactly do managers oversee a cybersecurity program on a day-to-day basis?

RG: Get out of the way, and let your team do its work. By this point, you should know what your team is capable of. Therefore, you need to trust your team. Yet you should always verify. If your team recommends purchasing new software, have your team explain, in business terms, the reasons for the purchase. Then verify those reasons. Operationalizing tools, for example, can be difficult and costly, so make sure they put together a road map with measurable outcomes before you agree to buy any tools — even if they sound magical!

Second, empower your team by facilitating open dialogue. If your team brings you bad news, listen to the bad news — otherwise, you’ll end up alienating people. Know that your team is going to find things within your organization’s “auditable universe” that are going to make you uncomfortable from a cybersecurity point of view. Nevertheless, you need to encourage your team to share the information, so don’t overreact.

Third, give your team a communication structure that squelches a crisis-mode mentality — “Everything’s a disaster!” In order to do that, make sure your team gives every weakness or issue they discover a risk score, and log the score in a risk register. That way, you can prioritize what is truly important.
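
A common way to assign those scores is a simple likelihood-times-impact rating. The Python sketch below is illustrative only (the five-point scales and sample findings are assumptions), but it shows how a lightweight risk register stays sortable so the highest-risk items surface first:

    # Hypothetical findings, each rated 1-5 for likelihood and impact.
    risk_register = [
        {"finding": "Unpatched public-facing web server", "likelihood": 4, "impact": 5},
        {"finding": "Shared admin password on test system", "likelihood": 3, "impact": 3},
        {"finding": "Missing visitor log at branch office", "likelihood": 2, "impact": 1},
    ]

    for item in risk_register:
        item["score"] = item["likelihood"] * item["impact"]  # simple likelihood x impact

    # Review the highest-scoring items first instead of treating everything as a crisis.
    for item in sorted(risk_register, key=lambda r: r["score"], reverse=True):
        print(f"{item['score']:>2}  {item['finding']}")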

Fourth, resolve conflicts between different people or groups on your team. Take, for example, conflict between IT staff and security staff (read more here). It is a common issue, as there is natural friction between these groups, so be ready to deal with it. IT is focused on running operations, while security is focused on protecting operations. Sometimes, protection mechanisms can disrupt operations. Therefore, managers need to act as peacemakers between the two groups. Don’t show favoritism toward one group or another, and don’t get involved in nebulous conversations regarding which group has “more skin in the game.” Instead, focus on what is best for your organization from a business perspective. The business perspective ultimately trumps both IT and security concerns.

Talk about communication for a moment. Managers often come from business backgrounds, while technical staff often come from IT backgrounds. How do you foster clear communication across this divide?
RG: Have people talk in simple terms. Require everyone on your team to use plain language to describe what they know or think. I recommend using what I call the Colin Powell method of reporting:

1. Tell me what you know.
2. Tell me what you don’t know.
3. Tell me what you think.
4. Tell me what you recommend.

When you ask team members questions in personal terms — “Tell me what you know”—you tend to receive easy-to-understand, non-jargon answers.

Something that we really haven’t talked about in this series is cybersecurity training. Do you suggest managers implement regular cybersecurity training for their team?
RG: This is complicated, and my response will likely be a little controversial to many. Yes, most organizations should require some sort of cybersecurity training. But I personally would not invest a lot of time or money into cybersecurity training beyond the basics for most users and specific training for technical staff. Instead, I would plan to spend more money on resiliency — responding to, and recovering from, a cybersecurity attack or incident. (We’ll talk about resiliency more in the next two chapters.) Why? Well, you can train people all day long, but it only takes one person being malicious, or making an innocent mistake, to cause a cybersecurity attack or incident. Let’s look at my point from a different perspective. Pretend you’re the manager of a bank, and you have some money to spend on security. Are you going to spend that money on training your employees how to identify a robber? Or are you going to spend that money on a nice, state-of-the-art vault?

Let’s shift from talking about staff to talking about business leadership. How do managers sell the cybersecurity program to them?
RG: Use business language, not technical language. For instance, a CEO may not necessarily care much about the technical behavior of a specific piece of malware, but they are going to really care about the negative effects that malware can have on the business.

Also, keep the conversation short, simple, and direct. Leadership doesn’t have time to hear about all you’re doing. Leadership wants progress updates and a clear sense of how the cybersecurity program is helping the business. I suggest discussing three to four high-priority security risks, and summarizing how you and your team are addressing those risks.

And always remember that in times of crisis, those who keep a cool head tend to gain the most support. Therefore, when talking to the board or CEO, don’t be the bearer of “doom and gloom.” Be calm, positive, empowering, and encouraging. Provide a solution. And make leadership part of the solution by reminding them that they, too, have cybersecurity responsibilities, such as communicating the value of the cybersecurity program to the organization — internal PR, in other words.

How exactly should a manager communicate this info to leadership? Do you suggest one-on-one chats, reports, or presentations?
RG: This all depends on leadership. You know, some people are verbal learners; some people are visual learners. It might take some trial and error to figure out the best medium for conveying your information, and that’s OK. Remember: cybersecurity is an ongoing process, not a one-and-done event. However, if you are going to pursue the one-on-one chat route, just be prepared, materials-wise. If leadership asks for a remediation plan, then you better have that remediation plan ready to present!

What is one of the biggest challenges that managers face when selling cybersecurity programs to leadership?
RG: One of the biggest challenges is addressing questions about ROI, because there often are no quantifiable financial ROIs for cybersecurity. But organizations have to protect themselves. So the question is, how much money is your organization willing to spend to protect itself? Or, in other words, how much risk can your organization reduce — and does this reduction justify the cost?

One possible way to communicate the value of cybersecurity to leadership is to compare it to other necessary elements within the organization, such as HR. What is the ROI of HR? Who knows? But do you really want your organization to lack an HR department? Think of all the possible logistical and legal issues that could swamp your organization without an HR department. It’s terrifying to think about! And the same goes for cybersecurity.

We’ve talked about how managers should communicate with their team and with business leadership. What about the organization as a whole?
RG: Sure! Regular email updates are great, especially if you keep them “light,” so to speak. Don’t get into minutiae. That said, I also think a little bit of secrecy goes a long way. Organizations need to be aware of, and vigilant toward, insider threats. Loose lips sink ships, you know? Gone are the days when a person works for an organization for 30+ years. Employees come and go pretty frequently. As a result, the concept of company loyalty has changed. So make sure your organization-wide updates don’t give away too much cybersecurity information.

So what’s next?
RG: Chapter 7 will focus on how managers can help their organizations respond to a cybersecurity attack or incident.

Blog
The workflow: Cybersecurity playbook for management

For over four years the business community has been discussing the impact Accounting Standards Codification (ASC) 606, Revenue from Contracts with Customers, will have on financial reporting. As you evaluate the impact this standard will have on a manufacturer’s financial reporting practices, there are certain provisions of ASC 606 you should consider.

Then: Prior to ASC 606, manufacturers generally recognized revenue when persuasive evidence of an arrangement existed, delivery had occurred, the fees were fixed or determinable, and collection was reasonably assured. For most, this typically occurred when a product shipped and the title to the product transferred to the customer.

Now: Under ASC 606, effective for annual reporting periods beginning after December 15, 2018 for non-public entities (December 15, 2017 for public entities), an entity should recognize revenue to depict the transfer of promised goods or services to customers in an amount that reflects the consideration to which the entity expects to be entitled in exchange for those goods or services. Under this core principle, an entity should:

  1. Identify its contracts with its customers,
  2. Identify performance obligations (promises) in the contract,
  3. Determine the transaction price,
  4. Allocate the transaction price to the performance obligations in the contract (see the brief sketch after this list); and
  5. Recognize revenue when (or as) the entity satisfies the performance obligation. 
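As an illustration of step 4, here is a minimal sketch in Python, using hypothetical prices, of allocating a single transaction price across performance obligations in proportion to their standalone selling prices, the general allocation approach ASC 606 describes; real contracts can involve discounts, variable consideration, and other complexities not shown here:

  # Hypothetical example: a $100,000 contract covering a custom machine plus
  # one year of maintenance, allocated by relative standalone selling prices.
  transaction_price = 100_000.00

  standalone_prices = {
      "custom machine": 95_000.00,
      "one-year maintenance": 15_000.00,
  }

  total_standalone = sum(standalone_prices.values())

  allocation = {
      obligation: round(transaction_price * price / total_standalone, 2)
      for obligation, price in standalone_prices.items()
  }

  print(allocation)
  # {'custom machine': 86363.64, 'one-year maintenance': 13636.36}

Revenue allocated to each obligation is then recognized at the point in time, or over the period, in which that obligation is satisfied (step 5).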

Who does it impact, and how?

For some manufacturers, ASC 606 will not impact their financial reporting practices since they satisfy their performance obligation when the product is shipped and the title has transferred to the customer. However, entities that manufacture highly specialized products may be required to recognize revenue over time if the entity’s performance creates an asset without an alternative use to the entity, and the entity has an enforceable right to compensation for performance completed to date.

Limitations

To determine if a product has an alternative use, the entity must assess whether it is restricted contractually from redirecting the asset for another use during production, or whether there are practical limitations on the entity’s ability to redirect the product for another use. A contractual limitation must be substantive for the product to be determined not to have an alternative use (e.g., the customer can enforce its rights to delivery of the product). A restriction is not substantive if the product is largely interchangeable with other products the entity could transfer between customers without incurring a significant loss.

A practical limitation exists if the entity’s ability to redirect the product for another use results in significant economic losses, either from significant rework costs or having to sell the product at a loss. The alternative use assessment should be done at contract inception based on the product in its completed state, and not during the production process. Therefore, the point in time during production when a product becomes customized and not generic is irrelevant. If it is determined there is no alternative use, the entity has satisfied this criterion and must evaluate its enforceable right to compensation for performance completed to date.

Definitions and Distinctions

ASC 606 defines a contract as “an agreement between two or more parties that creates enforceable rights and obligations.” Accordingly, the definition of a contract may include, but is not limited to, a Purchase Order, Agreement for the Sale of Goods, Bill of Sale, or Independent Contractor Agreement. In applying this definition to business operations and revenue recognition, an entity must consider its individual business practices, and possibly individual customer arrangements, in determining enforceability.

Once it is determined that the entity has an enforceable right to a payment, the amount of payment must also be considered. The amount that would “compensate” an entity for performance to date should be the estimated selling price of the goods or services transferred to date (for example, recovery of costs incurred plus a reasonable profit margin) rather than compensation for only the entity’s potential loss of profit if the contract were to be terminated. Accordingly, a payment that only covers the entity’s costs incurred to date or for the entity’s potential loss of profit if the contract was terminated does not allow for the recognition of revenue over time.

Compensation for a reasonable profit margin need not equal the profit margin expected if the contract was fulfilled as promised. Once the “enforceable right to compensation for performance completed to date” requirement has been met, an entity will then assess the appropriate method of recognizing revenue over a period of time using input or output methods, as provided under ASC 606.

For manufacturers of highly specialized products, there may not be a simple answer for determining appropriate revenue recognition policies for each customer contract, and evaluating the impact can be a challenging endeavor.

Next steps

If you would like guidance in analyzing the impact ASC 606 will have on a manufacturer’s financial reporting practices, including the potential impact it may have on bank covenants, borrowing base calculations, etc., please contact one of our dedicated commercial industry practice professionals.
 

Blog
New revenue recognition rules: Evaluating the impact on manufacturers

The late science fiction writer (and college professor) Isaac Asimov once said: “I do not fear computers. I fear the lack of them.” Had Asimov worked in higher ed IT management, he might have added: “but above all else, I fear the lack of computer staff.”

Indeed, it can be a challenge for higher education institutions to recruit and retain IT professionals. Private companies often pay more in a good economy, and in certain areas of the nation, open IT positions at colleges and universities outnumber available, qualified IT workers. According to one study from 2016, almost half of higher education IT workers are at risk of leaving the institutions they serve, largely for better opportunities and more supportive workplaces. Understandably, IT leadership fears an uncertain future of vacant roles—yet there are simple tactics that can help you improve the chances of filling open positions.

Emphasize the whole package

You need to leverage your institution’s strengths when recruiting IT talent. Innovation, project leadership, and responsibility for supporting the mission of the institution are all attributes worth promoting when recruiting. Your institution should also sell quality of life, which can be much more attractive than corporate culture. Many candidates are attracted to the energy and activity of college campuses, in addition to the numerous social and recreational outlets colleges provide.

Benefit packages are another strong asset for recruiting top talent. Schools need to ensure potential candidates know about the paid leave, retirement benefits, and educational assistance available to employees and their family members. These added perks will pique the interest of many candidates who might otherwise have only looked at salary during the process.

Use the right job title

Some current school vacancies have very specific job titles, such as “Portal Administrator” or “Learning Multimedia Developer.” However, this specificity can limit visibility on popular job posting sites, reducing the number of qualified applicants. Job titles, such as “Web Developer” and “Java Developer,” can yield better search results. Furthermore, some current vacancies include a number or level after the job title (e.g., “System Administrator 2”), which also limits visibility on these sites. By removing these indicators, you can significantly increase the applicant pool.

Focus on service, not just technology

Each year, institutions deploy an increasing number of Software as a Service (SaaS) and hosted applications. As higher education institutions invest more in these applications, they need fewer personnel for day-to-day technology maintenance support. In turn, this allows IT organizations to focus limited resources on services that identify and analyze technology solutions, provide guidance to optimize technology investments, and manage vendor relationships. IT staff with soft skills will become even more valuable to your institution as they engage in more people- and process-centric efforts.

Fill in the future

It may seem like science fiction, but by revising your recruiting and retention tactics, your higher education institution can improve its chances of filling IT positions in a competitive job market. In a future blog, I’ll provide ideas for cultivating staff from your institution via student workers and upcoming graduates. If you’d like to discuss additional staffing tactics, send me an email.

Blog
No science fiction: Tactics for recruiting and retaining higher education IT positions

A professional sports team is an ever-changing entity. To have a general perspective on the team’s fluctuating strengths and weaknesses, a good coach needs to trust and empower their staff to discover the details. Chapter 5 in BerryDunn’s Cybersecurity Playbook for Management looks at how discovery can help managers understand their organization’s ever-changing IT environment. 

What is discovery, and how does it connect to capacity?
RG: Discovery is the process of mapping your organization’s capacity—people, processes, and tools—so you understand what is in your organization’s IT environment. In other words, it’s the auditing of your IT environment.

Of course, the most valuable thing within your IT environment, other than the people who access it, is the “thing” that drives your business. Often this thing is data, but it could be proprietary processes or machinery. For the purposes of this blog, we’ll focus on data. Discovery naturally answers questions such as:

• What in our IT environment is important to our business?
• How is it being used?
• Who has access to it, and how can we better protect it? 

How can managers tackle discovery?
RG: First, you need to understand that discovery requires accepting the fact that the environment is always evolving. Discovery is not a one-and-done process—it never ends. People introduce new things, like updated software, into IT environments all the time. Your IT environment is an always-shifting playing field. Think of Amazon’s Alexa devices. When someone plugs one into your internal wireless network, they’ve just expanded your attack surface for a hacker by introducing a new device with its own set of vulnerabilities.

Second, you have to define the “auditable universe” by establishing manageable boundaries in direct proportion to your discovery team’s capabilities. I often see solicitations for proposals that ask for discovery of all assets in an IT environment. That could include a headquarters building, 20 satellite offices, and remote workers, and is going to take a long time to assess. I recently heard of a hospital discovering 41,000 internet-connected devices on their network—mostly Internet of Things (IoT) resources, such as heart monitors. Originally, the hospital had only been aware of about one-third of these devices. Keeping your boundaries realistic and manageable can prevent your team from being overwhelmed.

Third, as a manager, you should refrain from getting directly involved with discovery because it’s a pretty technical and time-consuming process. You should task a team to conduct discovery, and provide the discovery team with adequate tools. There are a lot of good tools that can help map networks and manage assets; we’ll talk about them later in this blog. Managers should mainly concern themselves with the results of discovery and trust in the team’s ability to competently map out the IT environment. Remember, the IT environment is always evolving, so even as the results roll in, things are changing.

Who should managers select for the discovery team?
RG: Ideally, various groups of people. For instance, it makes sense for HR staff to conduct the people part of discovery. Likewise, it makes sense for data owners—staff responsible for certain data—to conduct the process part of discovery, and for IT staff to conduct the tool part.

However, I should point out that if you have limited internal resources, then the IT staff can conduct all three parts of discovery, working closely with all stakeholders. IT staff will have a pretty good sense of where data is held within the organization’s IT environment, and they will develop an understanding of what is important to the organization.

Could an organization’s security staff conduct discovery?
RG: Interestingly enough, security staff don’t always have day-to-day interactions with data. They are more focused on overall data protection strategies and tactics. Therefore, it makes more sense to leverage other staff, but the results of discovery (e.g., knowing where data resides, understanding the sensitivity of data) need to be shared with security staff. Ultimately, this knowledge will help security staff better protect your data.

What about hiring external resources to conduct discovery?
RG: It depends on what you’re trying to do. If the goal of discovery is to comply with some sort of regulatory standard or framework, then yes, hiring external resources makes sense. These resources could come in and, using the discovery process, conduct a formal assessment. It may also make sense to hire external resources if you’re short-staffed, or if you have a complex environment with undocumented data repositories, processes, and tools. Yet in each of these scenarios, the external resources will only be able to provide a point-in-time baseline. 

Otherwise, I recommend leveraging your internal staff. An internal discovery team should be able to handle the task if adequately staffed and resourced, and team members will learn a lot in the process. And as discovery never really ends, do you want to have to perpetually hire external resources?

People make up a big part of capacity. Should the discovery team focus on people and their roles in this process?
RG: Yes! It sounds odd that people and their roles are included in discovery, but it is important to know who is using and touching your data. At a minimum, the discovery team needs to conduct background checks. (This is one example of where HR staff need to be part of the discovery process.)

How can the discovery team best map processes?
RG: The discovery team has to review each process with the respective data owner. Now, if you are asking the data owners themselves to conduct discovery, then you should have them illustrate their own workflows. There are various process mapping tools, such as Microsoft Visio, that data owners can use for this.

The discovery team needs to acknowledge that data owners often perform their processes correctly through repetition—the problems or potential vulnerabilities stem from an inherently flawed or insecure process, or from having one person in charge of too many processes. Managers should watch out for this. I’ll give you a perfect example of the latter sort of situation. I once helped a client walk through the process of system recovery.

During the process we discovered that the individual responsible for system recovery also had the ability to manipulate database records and to print checks. In theory, that person could have been able to cut themselves a check and then erase its history from the system. That’s a big problem!

Other times, data owners perform their processes correctly, but inadvertently use compromised or corrupted tools, such as free software downloaded from the internet. The discovery team has to identify needed policy and procedure changes to prevent these situations from happening.

Your mention of vulnerable software segues nicely to the topic of tools. How can the discovery team best map the technologies the organization uses?
RG: Technology is inherently flawed. You can’t go a week without hearing about a new vulnerability in a widely used system or application. I suggest researching network scanning tools for identifying hosts within your network; vulnerability testing tools for identifying technological weaknesses or gaps; and penetration testing tools for simulating cyber-attacks to assess cybersecurity defenses.
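As a rough illustration of what host discovery involves under the hood, here is a minimal sketch in Python that checks which hosts on a small, hypothetical address range answer on a few common ports. Real scanning and vulnerability tools are far more capable, and anything like this should only be run against networks you are authorized to assess:

  import socket

  # Hypothetical internal address range and a few common service ports.
  HOSTS = [f"192.168.1.{i}" for i in range(1, 11)]
  PORTS = [22, 80, 443, 3389]

  def open_ports(host, ports, timeout=0.5):
      """Return the ports on `host` that accept a TCP connection."""
      found = []
      for port in ports:
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
              s.settimeout(timeout)
              if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                  found.append(port)
      return found

  for host in HOSTS:
      responding = open_ports(host, PORTS)
      if responding:
          print(f"{host}: responds on {responding}")

The point of a sketch like this is simply to show that discovery produces an inventory of what is answering on your network; dedicated tools add vulnerability checks, scheduling, and reporting on top of that.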

Let’s assume a manager has tasked a team to conduct discovery. What’s the next step?
RG: If you recall, in the previous blog I discussed the value of adopting a cybersecurity risk register, which is a document used to list the organization’s cybersecurity risks, record required risk mitigation actions, and identify who “owns” the risk. The next step is for your discovery team to start completing the risk register. The manager uses this risk register, and subsequent discussions with the team, to make corresponding business decisions to improve cybersecurity, such as purchasing new tools—and to measure the progress of mitigating any vulnerabilities identified in the discovery process. A risk register can become an invaluable resource planning tool for managers.

For discovery purposes, what’s the best format for a cybersecurity risk register?
RG: There are very expensive programs an organization can use to create a risk register. Some extremely large banking companies use the RSA Archer GRC platform. However, you can build a very simple risk register in Excel. An Excel spreadsheet would work well for small and some mid-sized organizations, but there are other relatively inexpensive solutions available. I say this because managers should aim for simplicity. You don’t want the discovery team getting bogged down by a complex risk register.
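For readers who want to see how little is actually required, here is a minimal sketch in Python that scripts exactly this kind of simple spreadsheet register; it assumes the third-party openpyxl library is installed, and the columns and sample entries are hypothetical (building the same sheet by hand in Excel works just as well):

  from openpyxl import Workbook  # third-party library for writing .xlsx files

  wb = Workbook()
  ws = wb.active
  ws.title = "Risk Register"

  # Keep the columns simple: what the risk is, how severe it is, who owns it,
  # and what is being done about it.
  ws.append(["ID", "Risk description", "Risk score", "Owner", "Mitigation action", "Status"])
  ws.append([1, "Unsupported operating system on file server", 20, "IT Director",
             "Migrate to a supported OS", "In progress"])
  ws.append([2, "No security awareness training for new hires", 12, "HR Manager",
             "Add training to onboarding checklist", "Planned"])

  wb.save("risk_register.xlsx")

The point is simplicity: if the register is easy to update, the discovery team will actually keep it current.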

Finally, what are some discovery resources and reference guides that managers should become familiar with and utilize?
RG: I recommend the National Institute of Standards and Technology (NIST) Special Publication series. They outline very specific and detailed discovery methodologies you can use to improve your discovery process.

So what’s next?
RG: Chapter 6 will focus on synthesizing maturity, capacity, and discovery to create a resilient organization from a cybersecurity point of view.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Discovery: Cybersecurity playbook for management

Over the course of its day-to-day operations, every organization acquires, stores, and transmits Protected Health Information (PHI), including names, email addresses, phone numbers, account numbers, and social security numbers.

Yet the security of each organization’s PHI varies dramatically, as does its need for compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Organizations that meet the definition of a covered entity or business associate under HIPAA must comply with requirements to protect the privacy and security of health information.

Noncompliance can have devastating consequences for an organization, including:

  • Civil violations, with fines ranging from $100 to $50,000 per violation
  • Criminal penalties, with fines ranging from around $50,000 to $250,000, plus imprisonment

All it takes is just one security or privacy breach. As breaches of all kinds continue to rise, this may be the perfect time to evaluate the health of your organization’s HIPAA compliance. To keep in compliance and minimize your risk of a breach, your organization should have:

  • An up-to-date and comprehensive HIPAA security and privacy plan
  • Comprehensive HIPAA training for employees
  • Staff who are aware of all PHI categories
  • Sufficiently encrypted devices and strong password policies

HIPAA Health Check: A Thorough Diagnosis

If your organization doesn’t have these safeguards in place, it’s time to start preparing for the worst — and undergo a HIPAA health check.

Organizations need to understand what they have in place, and where they need to bolster their practices. We recommend a variety of fact-finding methods and tools, including (but not limited to):

  • Administrative, technical, and physical risk analyses
  • Policy, procedure, and business documentation reviews
  • Staff surveys and interviews
  • IT audits and testing of data security

Once you have diagnosed your organization’s “as-is” status, you need to move your organization toward the “to-be” status — that is, toward HIPAA compliance — by:

  • Prioritizing your HIPAA security and privacy risks
  • Developing tactics to mitigate those risks
  • Providing tools and tactics for security and privacy breach prevention and minimization
  • Creating or updating policies, procedures, and business documents, including a HIPAA security and privacy plan

As each organization is different, there are many factors to consider as you go through these processes, and customize your approach to the HIPAA-compliance needs of your organization.

The Road to Wellness

An ounce of prevention is worth a pound of cure. Don’t let a security or privacy breach jump-start the compliance process. Reach out to us for a HIPAA health check, or contact us if you have any questions about getting your organization on the road to wellness.

Blog
How healthy is your organization's HIPAA compliance?

With the rise of artificial intelligence, malware programs are starting to think together. Fortinet recently released a report that highlights some terms we need to start paying attention to:

Bot
A “bot” is an automated program that, in this case, runs against IP addresses to find specific vulnerabilities and exploit them. Once it finds the vulnerability, it has the ability to insert malware such as ransomware or Trojans (a type of malware disguised as legitimate software) into the vulnerable device. These programs adapt to what they find in order to infect a system and then make themselves invisible.

Swarmbot
Now, think about thousands of different bots, attacking one target at the same time. That’s a swarm, or in the latest lingo, a swarmbot. Imagine a swarmbot attacking any available access into your network. This is a bot on steroids.

Hivenet
A “hivenet” is a self-learning cluster of compromised devices that share information and customize attacks. Hivenets direct swarmbots based on what they learn during an attack. They represent a significant advance in malware development, and are now considered by some to be a kind of artificial intelligence. The danger lies in a hivenet’s ability to think during an attack.

Where do they run? Everywhere.
Bots and hives can run on any compromised internet-connected device. This includes webcams, baby cams, DVRs, home routers, refrigerators, drones, “smart” TVs, and very soon (if not already) mobile phones and tablets. Anything that has an IP address and is not secured is vulnerable.

With some 2.9 billion botnet communications per quarter that we know of, attacks aren’t just theory anymore — they’re inevitable.

Organizations have heating and cooling systems, physical security systems, security cameras and multiple types of devices now accessible from the internet. Even community water, electric and telecommunications systems are vulnerable to attack — if they are accessible.

What can you do? Take care of your business—at home and at work.
At home, how many devices do you own with an IP address? In the era of smart homes, it can add up quickly. Vendors are fast to jump on the “connect from anywhere” bandwagon, but not so fast to secure their devices. How many offered updates to the device’s software in the last year? How would you know? Do any of the products address communications security? If the answer is “none,” you are at risk.

When assessing security at work, all organizations need to consider smart devices and industrial control systems that are internet accessible, including phone systems, web conferencing devices, heating and cooling systems, fire systems, even elevators. What has an IP address? Vulnerable areas have expanded exponentially in the name of convenience and cost savings. Those devices may turn out to be far more expensive than their original price tag: remember the Target data breach? A firewall will not be sufficient protection if a compromised vendor has access.

Evaluate the Risks of Internet Accessibility
It may be great if you can see who is ringing your doorbell at home from your office, but only if you are sure you are the only one who can do that. Right now, my home is very “stupid,” and I like it that way. I worry about my wireless garage door opener, but at least someone has to be at my house to compromise it. My home firewall is commercial grade because most small office/home office routers are abysmally insecure, and are easily hacked. Good security costs money.

It may be more convenient for third-party vendors to access your internal equipment from their offices, but how secure are their offices? (There is really no way to know, except by sending someone like me in). Is your organization monitoring outgoing traffic from your network through your firewall? That’s how you discover a compromised device. Someone needs to pay attention to that traffic. You may not host valuable information, but if you have 300 unsecured devices, you can easily become part of a swarm.

Be Part of the Solution
Each one of us needs to eliminate or upgrade the devices that can become bots. At home, check your devices and install better security, in the same way you would upgrade locks on doors and windows to deter burglars. Turn off your computers when they are not in use. Ensure your anti-virus software is current on every device that has an operating system. Being small is no longer safe. Every device will matter.

Blog
Swarmbots, hivenets, and other stinging insects

Just as sports teams need to bring in outside resources — a new starting pitcher, for example, or a free agent QB — in order to get better and win more games, most organizations need to bring in outside resources to win the cybersecurity game. Chapter 4 in our Cybersecurity Playbook for Management looks at how managers can best identify and leverage these outside resources, known as external capacity.

In your last blog, you mentioned that external capacity refers to outside resources — people, processes, and tools — you hire or purchase to improve maturity. So let’s start with people. What advice would you give managers for hiring new staff?
RG: I would tell them to search for new staff within their communities of interest. For instance, if you’re in financial services, use the Financial Services Information Sharing and Analysis Center (FS-ISAC) as a resource. If you’re in government, look to the Multi-State Information Sharing and Analysis Center (MS-ISAC). Perhaps more importantly, I would tell managers what NOT to do.

First, don’t get caught up in the certification trap. There are a lot of people out there who are highly qualified on paper, but who don’t have a lot of real-world experience. Make sure you find people with relevant experience.

Second, don’t blindly hire fresh talent. If you need to hire a security strategist, don’t hire someone right out of college who is just getting started. While they might know security theories, they’re not going to know much about business realities.

Third, vet your prospective hires. Run national background checks on them, and contact their references. While there is a natural tendency to trust people, especially cybersecurity professionals, you need to be smart, as there are lots of horror stories out there. I once worked for a bank in Europe that had hired new security and IT staff. The bank noticed a pattern: these workers would work for six or seven months, and then just disappear. Eventually, it became clear that this was an act of espionage. The bank was ripe for acquisition, and a second bank used these workers to gather intelligence so it could make a takeover attempt. Every organization needs to be extremely cautious.

Finally, don’t try to hire catchall staff. People in management often think: “I want someone to come in and rewrite all of our security policies and procedures, and oversee strategic planning, and I also want them to work on the firewall.” It doesn’t work that way. A security strategist is very different from a firewall technician, and the two roles have completely different areas of focus. Security strategists focus on the high-level relationship between business processes and outside threats, not technical operations. Another point to consider: if you really need someone to work on your firewall, look at your internal capacity first. You probably already have staff who can handle that. Save your budget for other resources.

You have previously touched upon the idea that security and IT are two separate areas.
RG: Yes. And managers need to understand that. Ideally, an organization should have a Security Department and an IT Department. Obviously, IT and Security work hand-in-glove, but there is a natural friction between the two, and that is for good reason. IT is focused on running operations, while security is focused on protecting them. Sometimes, protection mechanisms can disrupt operations or impede access to critical resources.

For example, two-factor authentication slows down the time to access data. This friction often upsets both end users and IT staff alike; people want to work unimpeded, so a balance has to be struck between resource availability and safeguarding the system itself. Simply put, IT sometimes cares less about security and more about keeping end users happy — and while that it is important, security is equally important.

What’s your view on hiring consultants instead of staff?
RG: There are plenty of good security consultants out there. Just be smart. Vet them. Again, run national background checks, and contact their references. Confirm the consultant is bonded and insured. And don’t give them the keys to the kingdom. Be judicious when providing them with administrative passwords, and give them distinct accounts on the network so you can keep an eye on their activity. Tell the consultant that everything they do has to be auditable. Unfortunately, there are consultants who will set up shop and pursue malicious activities. It happens — particularly when organizations hire consultants through a third-party hiring agency. Sometimes, these agencies don’t conduct background checks on consultants, and instead expect the client to.

The consultant also needs to understand your business, and you need to know what to expect for your money. Let’s say you want to hire a consultant to implement a new firewall. Firewalls are expensive and challenging to implement. Will the consultant simply implement the firewall and walk away? Or will the consultant not only implement the firewall, but also teach and train your team in using and modifying the firewall? You need to know this up front. Ask questions and agree, in writing, on the scope of the engagement — before the engagement begins.

What should managers be aware of when they hire consultants to implement new processes?
RG: Make sure that the consultant understands the perspectives of IT, security, and management, because the end result of a new process is always a business result, and new processes have to make financial sense.

Managers need to leverage the expertise of consultants to help make process decisions. I’ll give you an example. In striving to improve their cybersecurity maturity, many organizations adopt a cybersecurity risk register, which is a document used to list the organization’s cybersecurity risks, record actions required to mitigate those risks, and identify who “owns” the risk. However, organizations usually don’t know best practices for using a risk register. This sort of tool can easily become complex and unruly, and people lose interest when extracting data from the register becomes difficult or time-consuming.

A consultant can help train staff in processes that maximize a risk register’s utility. Furthermore, there’s often debate about who owns certain risks. A consultant can objectively arbitrate who owns each risk. They can identify who needs to do X, and who needs to do Y, ultimately saving time, improving staff efficiency, and greatly improving your chances of project success.

Your mention of a cybersecurity risk register naturally leads us to the topic of tools. What should managers know about purchasing or implementing new technology?
RG: As I mentioned in the last blog, organizations often buy tools, yet rarely maximize their potential. So before managers give the green light to purchase new tools, they should consider ways of leveraging existing tools to perform more, and more effective, processes.

If a manager does purchase a new tool, they should purchase one that is easy to use. Long learning curves can be problematic, especially for smaller organizations. I recommend managers seek out tools that automate cybersecurity processes, making the processes more efficient.

For example, you may want to consider tools that perform continuous vulnerability scans or that automatically analyze data logs for anomalies. These tools may look expensive at first glance, but you have to consider how much it would cost to hire multiple staff members to look for vulnerabilities or anomalies.
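As a tiny illustration of the kind of automation those tools provide, here is a minimal sketch in Python, with hypothetical log lines, that counts failed logins per source address and flags any source over a simple threshold; commercial products do far more, but the underlying idea is the same:

  from collections import Counter

  # Hypothetical log lines; a real tool would collect these from your systems.
  log_lines = [
      "FAILED LOGIN user=admin src=203.0.113.7",
      "FAILED LOGIN user=admin src=203.0.113.7",
      "LOGIN OK user=jsmith src=192.168.1.20",
      "FAILED LOGIN user=root src=203.0.113.7",
      "FAILED LOGIN user=admin src=198.51.100.4",
  ]

  THRESHOLD = 3  # flag sources with this many failed attempts or more

  failures = Counter(
      line.split("src=")[1].strip()
      for line in log_lines
      if line.startswith("FAILED LOGIN")
  )

  for source, count in failures.items():
      if count >= THRESHOLD:
          print(f"Anomaly: {count} failed logins from {source}")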

And, of course, managers should make sure that a new tool will truly improve their organization’s safeguards against cyber-attack. Ask yourself and your staff: Will this tool really reduce our risk?

Finally, managers need to consider eliminating tools that aren’t working or being used. I once worked with an organization that had expensive cybersecurity tools that simply didn’t function well. When I asked why it kept them, I was told that the person responsible for them was afraid that a breach would occur if they were removed. Meanwhile, these tools were costing the organization around $60,000 a month. That’s real money. The lesson: let business goals, and not fear, dictate your technology decisions.

So, what’s next?
RG: So far in this series we have covered the concepts of maturity and capacity. Next, we’re going to look at the concept of discovery. Chapter 5 will focus on internal audit strategies that you can use to determine, or discover, whether or not your organization is using tools and processes effectively.

Blog
External capacity: Cybersecurity playbook for management

As a leader in a higher education institution, you'll be familiar with this paradox: Every solution can lead to more problems, and every answer can lead to more questions. It’s like navigating an endless maze. When it comes to mobile apps, the same holds true. So, the question: Should your institution have a mobile app? The answer? Absolutely.

Mobile devices, not computers, are how millennials communicate, gather information, stay informed, and engage. Millennials, on average, spend 90 hours per month on mobile apps, not including web searches and website visits.

Students are no exception. A 2016 Nielsen study showed that 98% of millennials aged 18–24, and 97% of millennials aged 25–34, owned a smartphone, while a 2017 comScore report stated that one out of five millennials no longer uses desktop devices, including laptops. Mobile apps have quickly filled the desktop void, and as students grow more reliant on mobile technology, colleges and universities are in the mix, creating apps to bolster student engagement.

So should you create an app? Here are seven questions you should answer before building one. Welcome to the labyrinth! But don’t be frustrated—working through these questions will help you avoid dead ends and overspending.

1. Is a mobile app part of your IT strategy? Including a mobile app in your IT strategy minimizes confusion at all levels about the objectives of mobile app implementation. It also helps dictate whether an institution needs multiple mobile apps for various functions, or a primary app that connects users with other functionality. If an institution has multiple campuses, should you align all campuses with a single app, or will each campus develop its own?

2. What will the app do? Mobile apps can perform a multitude of functions, but for the initial implementation, select a few key functions in one main area, such as academics or student life. Institutions can then add functionality in the future as mobile adoption grows, and demand for more functions increases.

3. Who will use the app? Mobile apps certainly improve engagement throughout the student life cycle—from prospect to student to alumni—but they also present opportunities for increased faculty, staff, and community engagement. And while institutions should identify the immediate audience of the app, they should also identify future users, based upon functionality.

4. Who will manage the app? Institutions should determine who is going to manage the mobile app, and how. The discussion should focus on access, content, and functionality. Is the institution going to manage everything in house, from development to release to support, or will a mobile app vendor provide this support under contract? Depending on your institution, these discussions will vary.

5. What data will the app use? Like any new software system, an app is only as good as its supporting data. It’s important to assess the systems to integrate with the mobile app, and determine if the systems’ data is up-to-date and ready for integration. Consider the use of application program interfaces, or APIs. APIs allow apps and platforms to interact with one another. They can enable social media, news, weather, and entertainment apps to connect with your institution’s app, enhancing the user experience with additional content (see the brief sketch after this list).

6. How much data security does your app need? Depending on the functionality of the app you create, you will need varying degrees of security, including user authentication safeguards and other protections to keep information safe.

7. How much can you spend for the app? Your institution should decide how much you will spend on initial app development, with an eye toward including maintenance and development costs for future functionality. Complexity increases costs, so you will need to budget accordingly. Include budget planning for updates and functionality improvements after launch.
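To make the API idea in question 5 concrete, here is a minimal sketch in Python using the requests library; the URL, endpoint, and fields are hypothetical placeholders rather than a real campus service, and a production integration would follow the vendor's documented API, authentication, and error handling:

  import requests  # third-party HTTP library

  # Hypothetical endpoint: a campus events feed the mobile app could display.
  API_URL = "https://api.example.edu/v1/campus-events"

  response = requests.get(API_URL, params={"limit": 5}, timeout=10)
  response.raise_for_status()

  # Assume the (hypothetical) API returns a list of event objects.
  for event in response.json():
      print(event["title"], "-", event["starts_at"])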

You will also need to establish a timeline for the project and roll out. And note that apps deployed toward the end of the academic year experience less adoption than apps deployed at the beginning of the academic year.

Once your institution answers these questions, you will be off to a good start. And as I stated earlier, every answer to a question can lead to more questions. If your institution needs help navigating the mobile app labyrinth, please reach out to me.

Blog
The mobile app labyrinth: Seven questions higher education institutions should ask

It may be hard to believe some seasons, but every professional sports team currently has the necessary resources — talent, plays, and equipment — to win. The challenge is to identify and leverage them for maximum benefit. And every organization has the necessary resources to improve its cybersecurity. Chapter 3 in BerryDunn’s Cybersecurity Playbook for Management looks at how managers can best identify and leverage these resources, known collectively as internal capacity.

The previous two chapters focused on using maturity models to improve an organization’s cybersecurity. The next two are about capacity. What is the difference, and connection, between maturity and capacity, and why is it important? 
RG: Maturity refers to the “as is” state of an organization’s cybersecurity program compared to its desired “to be” state. Capacity refers to the resources an organization can use to reach the “to be” state. There are two categories of capacity: external and internal. External capacity refers to outside resources — people, processes, and tools — you can hire or purchase to improve maturity. (We’ll discuss external capacity more in our next installment.) Internal capacity refers to in-house people, processes, and tools you can leverage to improve maturity. 

Managers often have an unclear picture of how to use resources to improve cybersecurity. This is mainly because of the many demands found in today's business environments. I recommend managers conduct internal capacity planning. In other words, they need to assess the internal capacity needed to increase cybersecurity maturity. Internal capacity planning can answer three important questions:

1. What are the capabilities of our people?
2. What processes do we need to improve?
3. What tools do we have that can help improve processes and strengthen staff capability?

What does the internal capacity planning process look like?
RG: Internal capacity planning is pretty easy to conduct, but there’s no standard model. It’s not a noun, like a formal report. It’s a verb — an act of reflection. It’s a subjective assessment of your team members’ abilities and their capacity to perform a set of required tasks to mature the cybersecurity program. These are not easy questions to ask, and the answers can be equally difficult to obtain. This is why you should be honest in your assessment and urge your people to be honest with themselves as well. Without this candor, your organization will spin its wheels reaching its desired “to be” state.

Let’s start with the “people” part of internal capacity. How can managers assess staff?
RG: It’s all about communication. Talk to your staff, listen to them, and get a sense of who has the ability and desire to improve cybersecurity maturity in certain subject areas or domains, like Risk Management or Event and Incident Response. If you work at a small organization, start by talking to your IT manager or director. This person may not have a lot of cybersecurity experience, but he or she will have a lot of operational risk experience. IT managers and directors tend to gravitate toward security because it’s a part of their overall responsibilities. It also ensures they have a voice in the maturing process.

In the end, you need to match staff expertise and skillsets to the maturity subject areas or domains you want to improve. While an effective manager already has a sense of staff expertise and skillsets, you can add a SWOT analysis to clarify staff strengths, weaknesses, opportunities, and threats.

The good news: In my experience, most organizations have staff who will take to new maturity tasks pretty quickly, so you don’t need to hire a bunch of new people.

What’s the best way to assess processes?
RG: Again, it’s all about communication. Talk to the people currently performing the processes, listen to them, and confirm they are giving you honest feedback. You can have all the talent in the world, and all the tools in the world — but if your processes are terrible, your talent and tools won’t connect. I’ve seen organizations with millions of dollars’ worth of tools without the right people to use the tools, and vice versa. In both situations, processes suffer. They are the connective tissue between people and tools. And keep in mind, even if your current processes are good, most tend to grow stale. Once you assess them, you will probably need to develop some new processes or improve the ones in place.

How should managers and staff develop new processes?
RG: Developing new processes can be difficult; we’re talking change, right? As a manager, you have to make sure the staff tasked with developing them are savvy enough to ensure the new processes actually improve your organization’s maturity. Developing a new process with little or no connection to maturity is a waste of time and money. Just because measuring maturity is iterative doesn’t mean your approach to maturing cybersecurity has to be. You need to take a holistic approach across a wide range of cybersecurity domains or subject areas. Avoid any quick, one-and-done processes. New processes should be functional, repeatable, and sustainable; if not, you’ll overburden your team. And remember, it takes time to develop new processes. If you have an IT staff that’s already struggling to keep up with their operational responsibilities, and you ask them to develop a new process, you’re going to get a lot of pushback. You and the IT staff may need to get creative — or look toward outside resources, which we’ll discuss in chapter 4.

What’s the best way to assess tools?
RG: Many organizations buy many tools but rarely maximize their potential. And on occasion, organizations buy tools but never install them. The best way to assess tools is to select staff to first inventory the organization’s tools, and then analyze them to see how they can help improve maturity for a certain domain or subject area. Ask questions: Are we really getting the maximum outputs those tools offer? Are they being used as intended?

I’ll give you an example. There’s a company called SolarWinds that creates excellent IT management tools. I have found many organizations use SolarWinds tools in very specific, but narrow, ways. If your organization has SolarWinds tools, I suggest reaching out to your IT staff to see if the organization is leveraging the tools to the greatest extent possible. SolarWinds can do so much that many organizations rarely leverage all of its valuable features.

What are some pitfalls to avoid when conducting internal capacity planning?
RG: Don’t assign maturity tasks to people who have been with the organization for a really long time and are very set in their ways, because they may be reluctant to change. As improving maturity is a disruptive process, you want to assign tasks to staff eager to implement change. If you are delegating the supervision of the maturity project, don’t delegate it to a technology-oriented person. Instead, use a business-oriented person. This person doesn’t need to know a lot about cybersecurity — but they need to know, from a business perspective, why you need to implement the changes. Otherwise, your changes will be more technical in nature than strategic. Finally, don’t delegate the project to someone who is already fully engaged on other projects. You want to make sure this person has time to supervise the project.

Is there ever a danger of receiving incorrect information about resource capacity?
RG: Yes, but you’ll know really quickly if a certain resource doesn’t help improve your maturity. It will be obvious, especially when you run the maturity model again. Additionally, there is a danger of staff advocating for the purchase of expensive tools your organization may not really need to manage the maturity process. Managers should insist that staff strongly and clearly make the case for such tools, illustrating how they will close specific maturity gaps.

When purchasing tools, a good rule of thumb is: are you going to get three times the return on investment? Will the tool decrease cost or time by three times, or quantifiably reduce risk by three times? This ties into the larger idea that cybersecurity is ultimately a function of business, not a function of IT. It also conveniently ties in with external capacity, the topic for chapter four.
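As a rough illustration of that rule of thumb, here is a minimal sketch in Python, with hypothetical figures, comparing a tool's annual cost against the annual value of the time, cost, or quantified risk it is expected to remove:

  # Hypothetical figures: annual cost of a tool versus the annual value of what
  # it is expected to save (staff time, avoided cost, or quantified risk reduction).
  tool_cost = 20_000.00
  expected_annual_benefit = 75_000.00

  ratio = expected_annual_benefit / tool_cost
  print(f"Return ratio: {ratio:.1f}x")  # 3.8x in this hypothetical case
  print("Meets the 3x rule of thumb" if ratio >= 3 else "Falls short of the 3x rule of thumb")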

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Tapping your internal capacity for better results: Cybersecurity playbook for management

It’s one thing for coaching staff to see the need for a new quarterback or pitcher. Selecting and onboarding this talent is a whole new ballgame. Various questions have to be answered before moving forward: How much can we afford? Are they a right fit for the team and its playing style? Do the owners approve?

Management has to answer similar questions when selecting and implementing a cybersecurity maturity model. Those questions form the basis of this blog – chapter 2 in BerryDunn’s Cybersecurity Playbook for Management.

What are the main factors a manager should consider when selecting a maturity model?
RG: All stakeholders, including management, should be able to easily understand the model. It should be affordable for your organization to implement, and its outcomes achievable. It has to be flexible. And it has to match your industry. It doesn’t make a lot of sense to have an IT-centric maturity model if you’re not an extremely high-tech organization. What are you and your organization trying to accomplish by implementing maturity modeling? If you are trying to improve the confidentiality of data in your organization’s systems, then the maturity model you select should have a data confidentiality domain or subject area.

Managers should reach out to their peer groups to see which maturity models industry partners and associates use successfully. For example, Municipality A might look at what Municipality B is doing, and think: “How is Municipality B effectively managing cybersecurity for less money than we are?” Hint: there’s a good chance they’re using an effective maturity model. Therefore, Municipality A should probably select and implement that model. But you also have to be realistic, and know certain other factors—such as location and the ability to acquire talent—play a role in effective and affordable cybersecurity. If you’re a small town, you can’t compare yourself to a state capital.

There’s also the option of simply using the Cybersecurity Capability Maturity Model (C2M2), correct?
RG: Right. C2M2, developed by the U.S. Department of Energy, is easily scalable and can be tailored to meet specific needs. It also has a Risk Management domain to help ensure that an organization’s cybersecurity strategy supports its enterprise risk management strategy.

Once a manager has identified a maturity model that best fits their business or organization, how do they implement it?
RG: Step one: Get executive-level buy-in. It’s critical that executive management understands why maturity modeling is crucial to an organization's security. Explain to them how maturity modeling will help ensure the organization is spending money correctly and appropriately on cybersecurity. By sponsoring the effort, providing adequate resources, and accepting the final results, executive management plays a critical role in the process. In turn, you need to listen to executive management to know their priorities, issues, and resource constraints. When facilitating maturity modeling, don’t drive toward a predefined outcome. Understand what executive management is comfortable implementing—and what the business or organization can afford.

STEP TWO: Identify leads who are responsible for each domain or subject area of the maturity model. Explain to these leads why the organization is implementing maturity modeling, expected outcomes, and how their input is invaluable to the effort’s success. Generally speaking, the leads responsible for subject areas are very receptive to maturity modeling, because—unlike an audit—a maturity model is a resource that allows staff to advocate their needs and to say: “These are the resources I need to achieve effective cybersecurity.”

STEP THREE: Have either management or these subject area leads communicate the project details to the lower levels of the organization and solicit feedback, because staff at these levels often have unique insight into how best to manage the details.

STEP FOUR: Get to work. This work will look a little different from one organization to another, because every organization has its own processes, but overall you need to run the maturity model—that is, use the model to assess the organization and discover how it measures up in each subject area or domain. Afterwards, conduct work sessions, collect suggestions and recommendations for reaching specific maturity levels, determine what it’s going to cost to increase maturity, get approval from executive management to spend the money to make the necessary changes, and create a Plan of Action and Milestones (POA&M). Then move forward and tick off each milestone.
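
To make “running the model” a little more concrete, here is a minimal sketch that compares current and target maturity levels by domain and turns the gaps into plan-of-action items. The domain names, levels, and targets are illustrative placeholders, not values from C2M2 or any other specific model.

    # Hypothetical current and target maturity levels (0-3 scale) for each domain.
    current = {"Risk Management": 1, "Incident Response": 0, "Access Control": 2}
    target  = {"Risk Management": 2, "Incident Response": 2, "Access Control": 2}

    # Build a simple Plan of Action and Milestones (POA&M) from the gaps.
    poam = [
        {"domain": d, "current": current[d], "target": target[d], "gap": target[d] - current[d]}
        for d in current
        if current[d] < target[d]
    ]

    # Work the largest gaps first; each entry becomes a milestone to cost out and track.
    for item in sorted(poam, key=lambda i: i["gap"], reverse=True):
        print(f"{item['domain']}: raise maturity from {item['current']} to {item['target']}")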

Do you suggest selecting an executive sponsor or an executive steering committee to oversee the implementation?
RG: Absolutely. You just want to make sure the executive sponsors or steering committee members have both the ability and the authority to implement changes necessary for the modeling effort.

Should management consider hiring vendors to help implement their cybersecurity maturity models?
RG: Sure. Most organizations can implement a maturity model on their own, but the good thing about hiring a vendor is that a vendor brings objectivity to the process. Within your organization, you’re probably going to find erroneous assumptions, differing opinions about what needs to be improved, and bias regarding who is responsible for the improvements. An objective third party can help navigate these assumptions, opinions, and biases. Just be aware some vendors will push their own maturity models, because their models require or suggest organizations buy the vendors’ software. While most vendor software is excellent for improving maturity, you want to make sure the model you’re using fits your business objectives and is affordable. Don’t lose sight of that.

How long does it normally take to implement a maturity model?

RG: It depends on a variety of factors and is different for every organization. Keep in mind some maturity levels are fairly easy to reach, while others are harder and more expensive. It goes without saying that well-managed organizations implement maturity models more rapidly than poorly managed organizations.

What should management do after implementation?
RG: Run the maturity model again, and see where the organization currently measures up for each subject area or domain. Do you need to conduct a maturity model assessment every year? No, but you want to make sure you’re tracking the results year over year in order to make sure improvements are occurring. My suggestion is to conduct a maturity model assessment every three years.

One final note: make sure to maintain the effort. If you’re going to spend time and money implementing a maturity model, then make the changes, and continue to reassess maturity levels. Make sure the process becomes part of your organization’s overall strategic plan. Document and institutionalize maturity modeling. Otherwise, the organization is in danger of losing this knowledge when the people who spearheaded the effort retire or pursue new opportunities elsewhere.

What’s next?
RG: Over the next couple of blogs, we’ll move away from talking about maturity modeling and begin talking about the role capacity plays in cybersecurity. Blog #3 will instruct managers on how to conduct an internal assessment to determine if their organizations have the people, processes, and technologies they need for effective cybersecurity.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Selecting and implementing a maturity model: Cybersecurity playbook for management

For professional baseball players who get paid millions to swing a bat, going through a slump is daunting. The mere thought of a slump conjures up frustration, anxiety and humiliation, and in extreme cases, the possibility of job loss.

The concept of a slump transcends sports. Just glance at the recent headlines about Yahoo, Equifax, Deloitte, and the Democratic National Committee. Data breaches occur on a regular basis. Like a baseball team experiencing a downswing, these organizations need to make adjustments, tough decisions, and major changes. Most importantly, they need to realize that cybersecurity is no longer the exclusive domain of Chief Information Security Officers and IT departments. Cybersecurity is the responsibility of all employees and managers: it takes a team.

When a cybersecurity breach occurs, people tend to focus on what goes wrong at the technical level. They often fail to see that cybersecurity begins at the strategic level. With this in mind, I am writing a blog series to outline the steps managers need to take to properly oversee cybersecurity, and to remind readers that good cybersecurity takes a top-down approach. Consider the series a cybersecurity playbook for management. This Q&A blog — chapter 1 — highlights the basic concept of maturity modeling.

Let’s start with the basics. What exactly is a maturity model?
RG: A maturity model is a framework that assesses certain elements in an organization, and provides direction to improve these elements. There are project management, quality management, and cybersecurity maturity models.

Cybersecurity maturity modeling is used to set a cybersecurity target for management. It’s like creating and following an individual development program. It provides definitive steps to take to reach a maturity level that you’re comfortable with — both from a staffing perspective, and from a financial perspective. It’s a logical road map to make a business or organization more secure.

What are some well-known maturity models that agencies and companies use?
RG: One of the first, and most popular, is the Program Review for Information Security Management Assistance (PRISMA), which is still in use today. Another is the Capability Maturity Model Integration (CMMI) model, which focuses on technology. Then there are some commercial maturity models, such as the Gartner Maturity Model, that organizations can pay to use.

The model I prefer is the Cybersecurity Capability Maturity Model (C2M2), developed by the U.S. Department of Energy. I like C2M2 because it maps directly to compliance guidance from the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), a prominent industry standard. C2M2 is easily understandable and digestible, it scales to the size of the organization, and it is constantly updated to reflect the most recent U.S. government standards. So, it’s relevant to today’s operational environment.

Communication is one of C2M2’s strengths. Because there is a mechanism in the model requiring management to engage and support the technical staff, it facilitates communication and feedback at not just the operational level, but at the tactical level, and more significantly, the management level, where well-designed security programs start.

What’s the difference between process-based and capability-based models?
RG: Process-based models focus on performance or technical aspects — for example, how mature are the processes for access controls? Capability-based models focus on management aspects — is management adequately training people to manage access controls?

C2M2 combines the two approaches. It provides practical steps your organization can take, both operationally and strategically. Not only does it provide the technical team with direction on what to do on a daily basis to help ensure cybersecurity, it also provides management with direction to help ensure that strategic goals are achieved.

Looking at the bigger picture, what does an organization look like from a managerial point of view?
RG: First, a mature organization communicates effectively. Management knows what is going on in their environment.

Most organizations have very competent staff. However, staff members don’t always coordinate with one another. I once did some security work for a company that had an insider threat. The insider threat was detected and dismissed from the company, but management didn’t know the details of why or how the situation occurred. Had there been an incident response plan in place (one of the dimensions C2M2 measures), or even some degree of cybersecurity maturity in the company, they would have had clearly defined steps to take to handle the insider threat, and management would have been aware from an early stage. When management did find out about the insider threat, it became a much bigger issue than it had to be, and it wasted time and resources. At the same time, the insider threat exposed the company to a high degree of risk. Because upper management was unaware, they were unable to make a strategic decision on how to act or react to the threat.

That’s the beauty of C2M2. It takes into account the responsibilities of both technical staff and management, and has a built-in communication plan that enables the team to work proactively instead of reactively, and shares cybersecurity initiatives between both management and technical staff.

Second, management in a mature organization knows they can’t protect everything in the environment — but they have a keen awareness of what is really important. Maturity modeling forces management to look at operations and identify what is critical and what really needs to be protected. Once management knows what is important, they can better align resources to meet particular challenges.

Third, in a mature organization, management knows they have a vital role to play in supporting the staff who address the day-to-day operational and technical tasks that ultimately support the organization’s cybersecurity strategy.

What types of businesses, not-for-profits, and government agencies should practice maturity modeling?
RG: All of them. I’ve been in this industry a long time, and I always hear people say: “We’re too small; no one would take any interest in us.”

I conducted some work for a four-person firm that had been hired by the U.S. military. My company discovered that the firm had a breach and the four of them couldn’t believe it because they thought they were too small to be breached. It doesn’t matter what the size of your company is: if you have something someone finds very valuable, they’re going to try to steal it. Even very small companies should use cybersecurity models to reduce risk and help focus their limited resources on what is truly important. That’s maturity modeling: reducing risk by using approaches that make the most sense for your organization.

What’s management’s big takeaway?
RG: Cybersecurity maturity modeling aligns your assets with your funding and resources. One of the most difficult challenges for every organization is finding and retaining experienced security talent. Because maturity modeling outlines what expertise is needed where, it can help match the right talent to roles that meet the established goals.

So what’s next?
RG: In our next installment, we’ll analyze what a successful maturity modeling effort looks like. We’ll discuss the approach, what the outcome should be, and who should be involved in the process. We’ll also cover internal and external cybersecurity assessments, and incident response and recovery.

To find out when we post our next cybersecurity playbook article, please sign up to receive updates here.

Blog
Maturity modeling: Cybersecurity playbook for management

Because we’ve been through this process many times, we’ve learned a few lessons and determined some best practices. Here are some tips to help you promote a positive post go-live experience.

The road to go-live is paved with good intentions. When an organization identifies a need to procure a new or upgraded system, that road can be long. It requires extensive planning, building a business case, defining requirements, procuring the system, testing it, and implementing it. Not to mention preparing your team to start using it. You’ve worked really hard to get to this point, and it feels like you’re about to cross the finish line. Well, grab some Gatorade because you’re not quite there yet. Post go-live is your cool-down, and it’s an important part of the race.

Preparation is key.
If you haven’t built a go-live plan into your overall implementation plan, you may see stress levels rise significantly in the days and weeks leading up to go-live. Just as a runner prepares for a big race, a project lead must adequately prepare the team to begin using the new system, while still handling unexpected obstacles. While there are many questions you should ask as you prepare to go live, you need to gain buy-in on the plan from the beginning and manage it to ensure follow-through.

Have your contract and deliverables handy.
Your system vendor implementation team will look to hand you off to their support team soon after go-live. It is crucial that you review your contract to ensure all of the agreed-upon functionality is up and running and all contracted deliverables have been provided and approved. Don’t transition to support until you’ve had enough time to see the system through significant processes (e.g., payroll, month-end close). In the period immediately after go-live, the vendor implementation team is your best resource to help address any issues that surface, so it’s a good idea to have easy access to them.

Encourage use and feedback.
Functional leads and project champions need to continue communications past go-live to encourage use and provide a mechanism for addressing feedback. Employing change management best practices will go a long way in ensuring you use the system properly — and to its best capabilities.

Plan ahead for expanded use and future issues. 
Because a system implementation can be extremely resource-intensive, it is common to defer some functionality for implementation at a later date (e.g., citizen and vendor self-service). In addition, we sometimes see issues arise during significant operational milestones (e.g., renewal processing, year-end close). Have a plan in place for how you will address both known and unknown issues as they arise.

While there is no silver bullet to solve all of the potential go-live woes, you can promote a smooth transition from a legacy system to a new system by implementing these tips. The time you spend up front will help offset many headaches down the road, promote end-user engagement, and ensure you’re getting the most from your investment.

Blog
We're live! Now what?

We all know them. In fact, you might be one of them — people who worry the words “go live” will lead to job loss (theirs). This feeling is not entirely irrational. When an organization is ready to go live from an existing legacy system to a new enterprise system, stress levels rise and doubts emerge: What can go wrong? How much time will be lost? Are we really ready for this?

We’re here to help. Here is a list of go-live essentials to help you mitigate stress and assess your readiness. While not all-encompassing, it’s a good place to start. Here’s what you need:

  1. A detailed project plan which specifies all of the implementation tasks
    A project plan is one of the most important parts of an implementation. A detailed plan that identifies all of the implementation tasks along with an assigned resource for each task is critical to success. The implementation vendor and the organization should develop this plan together to get buy-in from both teams.
  2. A completed system configuration
    New system configuration is one of the most time-consuming aspects of a technology implementation. If you don’t complete the configuration in a timely manner, it will impact your go-live date. To stay on schedule, configure the new system based on its own best practices — not on how the existing system was set up.
  3. External system interface identification
    While replacement of some external systems may be a goal of an implementation, there may be situations where external systems are not replaced or the organization has to send and/or receive data from external organizations. And while new systems have advanced interface technology capabilities, the external systems may not share these capabilities. Therefore, it is imperative that you identify external system interfaces to avoid gaps in functionality.
  4. Testing, testing, testing
    End-to-end testing or User Acceptance Testing (UAT) is often overlooked. It involves completing testing scenarios for each module to ensure appropriate system configuration. While the timing of UAT may vary, allow adequate time to identify solutions to issues that may result from UAT.
  5. Data conversion validation
    When you begin using a new system, it’s best to ensure you’re working with clean, up-to-date data. Identify data conversion tasks in the project plan and include multiple data conversion passes. You must also determine if the existing data is actually worth converting. When you complete the data conversion, check for accuracy.
  6. End user training
    You must train all end users to ensure proper utilization across the organization. Don’t underestimate the amount of time needed for end user training. It is also important to provide a feedback mechanism for end users to determine if the training was successful.
  7. A go-live cutover plan
    The overall project plan may indicate go-live as an activity. List specific activities to complete as part of go-live. You can build these tasks into the project plan or maintain them as a separate checklist to promote a smooth transition.
  8. Support structure
    Establish an internal support structure when preparing for go-live to help address issues that may arise. Most organizations take time to configure and test the system and provide training to end users prior to go-live. Questions will arise as part of this process — establish a process to track and address these questions.

Technology implementations can significantly impact your organization, and it’s common for stress levels to rise during the go-live process. But with the right assessment and preparation, you can lessen that impact and reduce staff stress. Our experienced, objective advisors work with public and private sector organizations across the country to oversee large enterprise projects from inception to successful completion. Please reach out to us to learn more about preparing for your next big project.

Blog
Don't worry, just assess: Eight tips for reducing go-live stress

As we begin the second year of Uniform Guidance, here’s what we’ve learned from year one, and some strategies you can use to approach various challenges, all told from a runner's point of view.

A Runner’s Perspective

As I began writing this article, the parallels between strategies that I use when competing in road races — and the strategies that we have used in navigating the Uniform Guidance — started to emerge. I’ve been running competitively for six years, and one of the biggest lessons I’ve learned is that implementing real-time adjustments to various challenges that pop up during a race makes all the difference between crossing — or falling short of — the finish line. This lesson also applies to implementing Uniform Guidance. On your mark, get set, go!

Challenge #1: Unclear Documentation

Federal awarding agencies have been unclear in the documentation of original awards and funding increments, making it hard to know which standards to follow: the previous cost circulars or the Uniform Guidance.

Racing Strategy: Navigate Decision Points

Take the time to ask for directions. In a long race, if you’re apprehensive about what’s ahead, stop and ask a volunteer at the water station, or anywhere else along the route.

If there is a question about the route you need to take in order to remain compliant with the Uniform Guidance, it’s your responsibility to reach out to the respective agency single audit coordinators or program officials. Unlike in a race, where you have to ask questions on the fly, it’s best to document your Uniform Guidance questions and answers via email, and make sure to retain that documentation. Taking the time to make sure you’re headed in the right direction will save you energy and time in the long run.

Challenge #2: Subrecipient Monitoring

The responsibilities of pass-through entities (PTEs) have significantly increased under the Uniform Guidance with respect to subaward requirements. Under OMB Circular A-133, the guidance was not very explicit on what monitoring procedures needed to be completed with regard to subrecipients. However, it was clear that monitoring to some extent was a requirement.

Racing Strategy: Keep a Healthy Pace

Take the role of “pacer” in your relationships with subrecipients. In a long-distance race, pacers ensure a fast time and avoid excessive tactical racing. By taking on this role, you can more efficiently fulfill your responsibilities under the Uniform Guidance.

Under the Uniform Guidance, a PTE must:

  • Perform risk assessments on its subrecipients to determine where to devote the most time with its monitoring procedures.
  • Provide ongoing monitoring, including site visits; provide technical assistance and training as necessary; and arrange for agreed-upon procedures to the extent needed.
  • Verify subrecipients have been audited under Subpart F of the Uniform Guidance, if they meet the threshold.
  • Report and follow up on any noncompliance at the subrecipient level.
The time you spend determining the energy you need to expend, and the support you need to lend to your subrecipients, will help your team perform at a healthy pace and reach the finish line together.
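
As a hedged illustration of the risk assessment in the first bullet above, here is a minimal sketch that ranks subrecipients so monitoring time goes where the risk is highest. The factors, weights, and thresholds are illustrative assumptions, not requirements from the Uniform Guidance.

    # Illustrative risk factors for each subrecipient: award size, prior audit
    # findings, and whether the subrecipient is new to federal awards.
    subrecipients = {
        "Subrecipient A": {"award": 900_000, "prior_findings": 2, "new": False},
        "Subrecipient B": {"award": 50_000,  "prior_findings": 0, "new": True},
    }

    def risk_rank(info):
        """Assign a simple high/medium/low rank from illustrative thresholds."""
        score = 0
        score += 2 if info["award"] > 750_000 else 1 if info["award"] > 250_000 else 0
        score += info["prior_findings"]
        score += 1 if info["new"] else 0
        return "high" if score >= 3 else "medium" if score >= 1 else "low"

    for name, info in subrecipients.items():
        # Devote the most monitoring time (site visits, agreed-upon procedures)
        # to the subrecipients ranked "high".
        print(name, "->", risk_rank(info))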

Challenge #3: Procurement Standards

The procurement standards within the Uniform Guidance are similar to those under OMB Circular A-102, which applied to state and local governments. They are likely to have a bigger impact on those entities that were subject to OMB Circular A-110, which applied to higher education institutions, hospitals, and other not-for-profit organizations.

Racing Strategy: Choose the Right Equipment

Do your research before procuring goods and services. In the past, serious runners had limited options when it came to buying new shoes and food to boost energy. With the rise of e-commerce, we can now purchase everything faster and cheaper online than we can at our local running store. But is this really an improvement?

Under A-110, we were guided to make prudent decisions, but the requirements were less stringent. Now, under Uniform Guidance, we must follow prescribed guidelines.

Summarized below are some of the differences between A-110 and the Uniform Guidance:

Competition
  • A-110: Procurement transactions shall be conducted in a manner to provide, to the maximum extent practical, open and free competition.
  • Uniform Guidance: Procurement transactions must be conducted in a manner providing full and open competition consistent with the standards of this section.

Procurement
  • A-110: Organizations must establish written procurement procedures, which avoid purchasing unnecessary items, determine whether lease or purchase is most economical and practical, and in solicitations provide requirements for awards.
  • Uniform Guidance: Organizations must use one of the methods provided in this section:
    1. Procurement by Micro Purchase (<$3,000)
    2. Procurement by Small Purchase Procedures (<$150,000)
    3. Procurement by Sealed Bids
    4. Procurement by Competitive Proposal
    5. Procurement by Noncompetitive Proposal

While the process is more stringent under the Uniform Guidance, you still have the opportunity to choose the vendor or product best suited to the job. Just make sure you have the documentation to back up your decision.
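
To make the dollar thresholds in the comparison above concrete, here is a minimal sketch that maps a purchase amount to one of the Uniform Guidance methods. It only automates the two dollar-based methods and lumps the rest together; the thresholds are the ones shown above and may be updated by the government over time.

    def procurement_method(amount):
        """Map a purchase amount to a Uniform Guidance procurement method,
        using the dollar thresholds shown in the comparison above."""
        if amount < 3_000:
            return "Micro purchase"
        if amount < 150_000:
            return "Small purchase procedures"
        return "Sealed bids or competitive proposal (noncompetitive only if justified)"

    print(procurement_method(1_200))    # Micro purchase
    print(procurement_method(48_000))   # Small purchase procedures
    print(procurement_method(400_000))  # Sealed bids or competitive proposal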

A Final Thought
Obviously, this article is not an all-inclusive list of the changes reflected in the Uniform Guidance. Yet we hope that it does provide direction as you look for new grant awards and revisit internal policies and procedures.

And here’s one last tip: Do you know the most striking parallel that I see between running a race and implementing the Uniform Guidance? The value of knowing yourself.

It’s important to know what your challenges are, and to have the self-awareness to see when and where you will need help. And if you ever need someone to help you navigate, set the pace, or provide an objective perspective on purchasing equipment, let us know. We’re with you all the way to the finish line.


Blog
A runner's guide to uniform guidance, year two

With the most recent overhaul to the Form 990, Return of Organization Exempt From Income Tax, the IRS has made clear its intention to increase the transparency of a not-for-profit organization’s mission and activities and to promote active governance. To that point, the IRS asks whether a copy has been provided to an organization’s board prior to filing and requires organizations to describe the process, if any, that its board undertakes to review the 990.

This lack of ambiguity aside, it is just good governance to have an understanding of the information included in your organization’s Form 990. After all, it is available to anyone who wants a copy. But the volume of information included in a typical return can be daunting.

Where do you even start? Let’s take a look at the key components of a Form 990 that warrant at least a read-through:

  • Income and expense activity (Page 1 and Schedule D) – Does this agree with, or reconcile to, the financial reporting of the organization?
  • Narratives on Page 2 – Do they accurately describe your mission and “tell your story”?
  • Questions in Part VI about governance, management, and disclosures – If any governance or policy questions are answered in the negative, have you given consideration to implementing changes?
  • Part VII – Board information and key employee/contractor compensation – Is the list complete? Does the information agree with compensation set by the board? Does it seem appropriate in light of responsibilities and the organization’s activities?

Depending on how questions were answered earlier in the Form 990, several schedules may be required. Key schedules include:

  • Schedule C – Political and lobbying expenditures
  • Schedule F – Foreign transactions and investments reported (alternative investments may have pass-through foreign activity)
  • Schedule J – Detailed compensation reporting for employees whose package exceeds $150,000
  • Schedule L – Transactions with officers, board members, and key employees (conflict-of-interest disclosures)

In addition to the Form 990, an organization may be required to file a Form 990-T, Exempt Organization Business Income Tax Return, if it earns unrelated business income. In general, it’s good practice to review the Form 990 with the organization’s management or tax preparer to be able to ask questions as they arise.

Filing and reviewing the Form 990 can be more than a compliance exercise. It’s an opportunity for a good conversation about your mission, policies, and compensation: a “health check-up” that can benefit more areas than just compliance. Understanding your not-for-profit’s operations and being an engaged and informed board member are essential to effectively fulfilling your fiduciary responsibilities.

Blog
Good governance: Understanding your organization's Form 990
