Community IT Innovators Nonprofit Technology Topics

IT Incident Response for Nonprofits Pt 2

March 11, 2022 Community IT Innovators Season 3 Episode 12

You get a frantic call from a staff member – your nonprofit organization has been hacked. Now what? What do you do first? What should your IT security incident response be?

Join Community IT Innovators CTO Matt Eshleman and President and CEO Johan Hammerstrom for a worst-case scenario walk-through, part 2. They discuss several possible breaches – email, Office 365, servers, etc. Matt shares his experience responding to credential hacks and other situations, and presents a checklist you should include in your security policies.

Forewarned is forearmed, and if you haven’t been hacked yet, it never hurts to know what you would do. Participants who had experienced a hack were invited to share their experiences with us, and what they wish they’d done differently in their response to get their organization back on its feet more quickly.

We have several webinars and resources on creating your security policy and training your staff available here. This webinar is specific to planning your response in the event you experience a breach.

Presenters: 
President and CEO Johan Hammerstrom has always been interested in using technology as a force for good that can improve our world. He started at Community IT in 1999 as a Network Administrator. Since that time, Johan has been a Network Engineer, a Team Lead, the Director of Services, Vice President of Services, Chief Operating Officer, and, beginning July 2015, President and CEO. Working directly with over 200 nonprofit organizations to help them plan around and use technology to accomplish their missions has been one of the most positive and rewarding experiences of his life.

As the Chief Technology Officer at Community IT, Matthew Eshleman is responsible for shaping Community IT’s strategy in assessing and recommending technology solutions to clients. With a deep background in network infrastructure technology, he fundamentally understands how technology works and interoperates, both in the office and in the cloud.
Matt joined Community IT as an intern in the summer of 2000 and, after finishing his dual degrees in Computer Science and Computer Information Systems at Eastern Mennonite University, rejoined Community IT as a network administrator in January of 2001. Matt has steadily progressed at Community IT and, while working full time, received his MBA from the Carey Business School at Johns Hopkins University. Matt is a frequent speaker on cybersecurity for nonprofits.

_______________________________
Start a conversation :)

Thanks for listening.


IT Security Incident Response for Nonprofits 

October 2018



Johan Hammerstrom: Good afternoon, and welcome to the October 2018 Community IT Innovators webinar. Thank you for joining us for today's webinar on security incident response. 


My name is Johan Hammerstrom, and I'm the president and CEO of Community IT and the moderator for this series. Before we begin, I just want to tell you a little bit more about our company. Community IT is a 100% employee owned company. Our team of nearly 40 staff is dedicated to helping nonprofit organizations advance their missions through the effective use of technology. We are technology experts and we have been consistently named one of the top 200 managed service providers in North America according to MSP Mentor. 


Today, it's my great pleasure to welcome our chief technology officer Matthew Eshleman, who will be the presenter for today's webinar. Hello, Matt.


Matthew:  Hello, Johan. Thanks for the introduction and it's good to be back following up our webinar last month on encryption with this one on security incident response. I think there's going to be a lot of great content covered today and I'm really looking forward to getting started.


Johan Hammerstrom:  Great. Thank you, Matt. One last note before we begin: if you have any questions for us, don't hesitate to send them in at any time. We do our best to answer all questions during the webinar, and you can use the chat feature in GoToMeeting to submit your questions. We will also be sending out a link to the recording and slides after the webinar today. So let's go ahead and get started.



Agenda


Matthew:  So for our agenda today, we'll be covering a couple different things. 


We'll start off by just some basic definitions of 

  • what we mean when we talk about a breach as the precipitating event for a security incident response. 
  • We'll talk a little bit about the security background and the environment that we find ourselves in, which influences the approach that we're taking. 
  • We'll talk a little bit about compliance; that’s a big driver that will impact the security response.
  • Then we'll talk through our security incident response checklist. 


This is going to be a pretty practical guide on how we think of security incident responses.


We'll talk about our own process and practices and then relate that to organizations as they think about their own response and capabilities or requirements.


Johan Hammerstrom:  Thank you, Matt. That's great. I think the first question to start with is: what is a breach?


Matthew:  So what is a breach? 


The technical definition that we have here on the screen is from the US-CERT team. They have a very precise definition of a security incident: the act of violating an explicit or implied security policy, according to NIST Special Publication 800-61, which is linked at the end of our presentation. That's actually a really helpful background reading document as you think about how you want to start defining or architecting your own incident response. It's going to be overblown for most organizations; it's really geared toward government agencies and big enterprises. But it's a great resource.


That definition relies on the existence of a security policy that, while generally understood, varies among organizations.


There are great policy templates. They're rather generic, and it's really important to tailor them towards your unique needs as an organization. 


When we think about security incidents, and we had a lot of great feedback on the survey on the front end of this webinar, those are 

  • attempts to gain unauthorized access to a system or its data, your file system or database, 
  • unwanted disruption or denial of service. This comes up quite prominently in some other areas with website denial of service attacks, 
  • disruption to accessing the systems,
  • unauthorized usage of the system for processing or storage of data.


We don't see it quite as much anymore, but hacks or attacks in the past would host movie files or other content; an attacker would just use your computer for storage. The definition also covers making changes to system hardware, firmware, or software without the owner's knowledge, instruction, or consent. 


So that's the big official definition that can be used. We find that it's helpful to focus on these four top-level categories here. 


A slightly more practical, less complex definition is that a security incident involves unauthorized or unexpected access to, or use of, an organization's IT systems. 


That's really a relevant definition for the organizations that we're working with, small and midsize organizations, where we're really focused on unauthorized access, compromised user accounts, phishing attacks, that  kind of thing, as a way to define what we're talking about when we are referring to a security incident.



Breach Examples


Some of the breach examples that are good use cases to play through your scenarios would be: 

  • what happens when we have a compromised user account? What does that mean for us as an organization? What if that compromised account belongs to an intern? Do we have a different approach if it belongs to our CFO or maybe our CEO? 
  • Breach examples could be malware on a computer used to steal credentials, encrypt data, or exfiltrate information, as another vector. What is our response? What risk does that place on us as an organization?
  • Ransomware in a file system. Maybe two years ago this would've been the number one concern; it was a little less common this past year, but it's still a very real and very damaging example of a breach where you lose control of your data. Maybe nothing leaves the organization, but you no longer have access to or control of your data. 
  • Forwarding emails or files outside of an organization. This is actually an attack that we're seeing quite frequently when we have compromised accounts. We notice that email is being forwarded to some random Gmail address outside of the organization. Maybe the password's been reset, so the attacker no longer has persistent access, but they're still receiving information. 
  • Unauthorized access to a database particularly for those organizations that are doing donation processing. What risk does that present? 
  • And then, your public facing website, what happens whenever that is either defaced or manipulated without the knowledge or consent of the organization?


Johan Hammerstrom:  I heard a story, Matt, about a large manufacturing company and someone hacked into their system. They weren't aware of it, but they found an Excel spreadsheet where they stored all the routing and account numbers for all of their suppliers and the hackers basically changed those numbers without anybody noticing. The manufacturer thought they were paying their suppliers for all the raw materials they were buying and it turns out all the money was going into overseas accounts. It was many months before they realized what had happened and they only found out because their suppliers weren't getting paid and it was a big mess. 


Unauthorized access to a database can be as simple as an Excel spreadsheet where you're storing critical financial information. It just goes to show the extent to which organized crime is behind hacking now, and how sophisticated it has become in pursuing its real objective, which is getting money.


Matthew:  Yeah. And I think that's a great example that highlights the risk to organizations. Hacking isn't something that only targets somebody else with more resources. Any organization is going to have some financial processing. We certainly see that with all the spear phishing emails that go to CFOs, accountants, and accounts payable people. If attackers can get into your system and update some spreadsheets that include payment information, that's even better for them than having to talk to somebody. I think it shows how important it is to secure your systems, maintain the integrity of those file systems, and understand what's happening on your network.


We have some other security presentations that cover defending your network in more depth, so I encourage you to reference those as well. 


Let's go ahead and go on and talk a little bit more about the security landscape that we are facing.



Security Environment


The first point here is that a data breach can almost be assumed at this point. For these presentations, we're not even including the laundry list of all the data breaches that have occurred in the last week, month, or year, because they're just so numerous and overwhelming. 


So the new mindset is: assume breach. With that mindset, how can we better protect ourselves in this world? 


Along with that drumbeat of data breaches is a higher expectation of privacy and security from the stakeholders. As a consumer or a member of an organization, I'm entrusting them with information about myself. And so I do have a high standard for expecting that information to be kept private.  


I think that flows into increasing external policy requirements that apply very broadly. GDPR, the European General Data Protection Regulation, has a wide reach: it touches not only organizations based in the EU, but also organizations here in the US that deal with EU data subjects, whether those are people living in the EU or EU residents who reside elsewhere. 


So that has increased the compliance playing field. We have HIPAA for those organizations that are doing healthcare, PCI compliance for organizations that are dealing with any type of credit card processing. So there are these external policy compliance requirements that are established and I don't think we're going to see that compliance standard lowered over time.


The graph here is from the Verizon Data Breach Investigations Report, a report that Verizon does every year. It's actually really fascinating reading from a big telecom company that you might not expect to write such incisive commentary. But they've been doing this report for, I think, well over 10 years, and they're able to track trends. 


This graph represents the sources of the data breaches that are being reported. We can see that hacking is still the top, or most common, threat, but it declined in 2017, along with malware, and that certainly mirrors our own experience.


Of the 4,000 endpoints that we manage, I think we've maybe had one or two cases of malware on an endpoint, whereas we've had multiple compromised accounts or other things that would fall into this hacking category. Spear phishing would be represented in the social category, the purple line there. So the prevalence of social-based attacks, which rely not on sophisticated malware or some other special technique but on coercion or trickery to get access into systems, is really growing, probably because it's easier to do. It costs less and it's less complicated than trying to write some malware. You can just trick somebody into giving up their username and password.


And I think that's borne out in the next slide, which highlights the top 20 action varieties in breaches. 

  • The number one category is the use of stolen credentials. 
  • The second one is RAM scraper; that's essentially just malware that steals data contents out of RAM. So that's the second most prevalent case, 
  • and then we have phishing as number three. For anybody who's been on the receiving end, yes, it's a huge threat. It's a big problem and it's really very hard to defend against technically. But that's where user training comes in. That's a conversation for another day, but suffice it to say that stolen credentials, malware, and social attacks are really the avenues that attackers are using to get into an organization.


Johan Hammerstrom:  Matt, we had a question about if you could just provide a definition of phishing and then also spear phishing?


Matthew:  Sure. Phishing is whenever you receive an email that appears to be legit, but is in fact masking the identity of the original sender in order to get you to  take some action: click on a link, open up an attachment. 


So that's a general category. Whenever you receive a message that at first glance appears legitimate, but then when you take a closer look, it isn't from who it says it's supposed to be. The link being displayed is not the link that you're actually going to. Those are all hallmarks of what's considered a phishing email. 


Spear phishing takes that to the next level and combines the rather generic approach of phishing with specific knowledge about the person receiving the message.


Spear phishing attacks may play two people in an organization off each other. So you may have a specifically crafted message that appears to be coming from the executive director of the organization directed toward the accountant at the organization. And, I think this goes to show the financial incentive and motivation that these adversaries have. They're doing research. It's not just some anonymous system that's doing this. These are real people sitting behind real computers doing this work. They're going to your website, they see your executive director is Sue, they see the CFO is Mary and they're creating these identities to try to trick the person into making some decision, usually around a financial transaction.


Johan Hammerstrom:  Yep. Great. Thank you.


Matthew:  I think the other hallmark that we're aware of in the security landscape is just how quickly these attacks move. This is from the CrowdStrike global threat report. CrowdStrike is a vendor that makes next-generation endpoint detection and response software.


Their analysis says that, from the point when you have a compromise on the network, somebody clicks on a malicious link that installs some malware on their systems, the adversary is moving laterally within that network in under two hours. 


From the time somebody clicks on something and something bad happens on one computer, you've got less than two hours to try to evict that person from the network before they start working and spreading throughout the network. 


Detection at this point is in the weeks and months category, while the adversaries are  getting started within a couple of hours.


Johan Hammerstrom:  Yeah. I'm going to talk about this slide. This is what I'm calling the security focus cycle. This webinar, the title is “incident response,” and we'll get to our checklist of things that are important to do when you're faced with a security incident. 


In the same way that, if we did a webinar on disaster recovery, we would start by talking about what you can do to prepare for the disaster and not just launch into, “Well here's what you need to do now that the disaster has happened.” You can think about security incidents in the same way. It's really important that your incident response begins now, before you're actually faced with the incident. 


It’s a combination of human nature and all the other priorities that we have in our organizations, but it's pretty common for our security focus to follow this chart on this slide where there's very low focus on security until an incident happens.


And the incident may be that the ED's email account has been hacked, or it's been discovered that someone is doing a “living off the land” attack and has embedded themselves on the network. Someone in the organization has made a wire transfer that they shouldn't have made. Something happens that brings security into focus. It could be that the files on the network have been encrypted. So there's an immediate pain that the organization is feeling, and when that happens, the immediate response is to make the pain go away, whatever it takes. How can we get the infiltrator out of the email account? How can we remove them from the network? How can we restore our files? How can we get our money back? That's the immediate reaction.


And then once the immediate pain goes away, there's a period of time and it could be a couple days, it could be a week or two. It could be a month even, where there's a general sense that, well, we need to make sure that this doesn't happen again. 


But then after that, there's inevitably a return to the status quo, so it's really important in preparing for incident response that an organization's security focus be ratcheted up in advance and be maintained at a high level throughout. 


If that doesn't happen, then you're just putting a bandaid over the problem, and it's only a matter of time before there's another security incident. Also importantly, you may be making the immediate pain go away, but that's not the same thing as responding effectively to the security incident. So the first step in responding to a future security incident is developing and creating security awareness now. Security awareness begins with all of these things. Do you want to talk through this, Matt?


Matthew:  Yeah. For those of you that have attended these webinars before, this may be a familiar graphic. 



Cybersecurity Readiness


This is representative of our view of all of the elements of a successful security plan:


  • one that's rooted in written and updated policies. 
  • From there we can build on security training and awareness. These are fundamentally people issues.
  • Building on that, we want to have the pillars of strong passwords, 
  • working antivirus, 
  • a backup and disaster recovery plan, and then 
  • a patching strategy.
  • Then, once you get those foundational things, it may be appropriate to look at encryption as part of your security plan.
  • Then layer on some predictive intelligence or AI tools on top of that. 


But, fundamentally things are rooted in a policy and procedures approach.


Johan Hammerstrom:  Yeah. 

Security readiness really gets built by starting with the policy and then using that to create training for staff and for the organization as a whole, and then putting the tools in place to prevent or predict or detect security incidents as they're happening. 

But as we've seen, the nature of security incidents has become more and more credential-based, where someone's credentials have been stolen and the hacker is basically logging in to their email account with their permissions. There's no flag other than funny things starting to happen. 


Many of the security breaches that we've dealt with over the past year were not detected by some fancy tool. They were detected by an alert staff person who noticed something strange going on with their email or with one of their co-worker’s email accounts. That's why training is so important and really comes out of policy and if you have those building blocks in place, then you can add on more advanced things like encryption and artificial intelligence.


Matthew: As we mentioned, the main thrust of this presentation is really on creating that policy. So let's go ahead and talk about why we need an incident response policy.



Why Do I Need an Incident Response Policy?


I've been talking policy, policy, policy, and it really is critical, because a policy provides for a systematic response so the appropriate actions are taken. Incident response is a really stressful and chaotic time and process in an organization. It feels really threatening; something has been taken from the organization and there's a lot of stress to get it resolved quickly. 


So it's very important that you have a well-articulated plan in place so you understand the actions that need to be taken and you're not trying to figure this all out in the moment.


We want to move away from a knee-jerk reaction to a deliberate response. The process that we're going to talk through will help build that framework for you and your organization to really think through: What are the systems that are in place? What are the risks? What are all these elements that we need to have in place beforehand? 


We know, going back to the beginning of the presentation, that we actually need to assume breach. We know something bad is going to happen to us. We're not going to just stick our head in the sand and ignore it. Something bad is going to happen. So how can we make sure that we're protected and have a good process in place to respond?



These are what we would consider the five elements of an incident response policy.


There are going to be some template resources provided as links at the end of the presentation that can fill out some of this in a little bit more detail, but these are the five common elements that you'll find in most of the templates that are out there. 


  1. The first one is to define the stakeholders and so we'll talk through that in a little bit more detail, both from what it looks like for you and your organization and then also what it looks like for us as a service provider being responsible for technically responding to these breaches. 
  2. You want to have a way to categorize your risk. Since you should already have an information systems list or database, or some way of articulating all the systems that you have, that's a great place to assign some risk levels. What are the critical systems? Which systems are not critical and don't need that bunker mentality? (A hypothetical example follows after this list.) 
  3. And then talk about those response steps. What are we going to do whenever executive emails get compromised? What are we going to do if the file server gets compromised? What are we going to do when our database gets compromised? 
  4. Those response steps can be informed by your reporting requirements. This is the legal compliance component. We are not lawyers, nor do we pretend to be, even in webinars. This is something that is critical for you to work through with your legal counsel to understand what your obligations are from a legal standpoint, if and when your organization does experience one of these issues.
  5. Finally, incorporate the lessons learned. As we've built up our incident response plan it's definitely undergone multiple revisions. It's something that we do after responding to each breach to make sure that we're updating our systems and processes and make sure that we're incorporating the new lessons that we learn from every response so that we have a really thorough and systematic response when responding to issues.
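

To make element 2 concrete, here is a hypothetical sketch of a small systems inventory with assigned owners and risk levels, kept in PowerShell so it can be exported to a CSV your policy points to; every system name, owner, rating, and file path below is illustrative, not a recommendation:

    # Hypothetical systems inventory with risk levels and owners (all values illustrative).
    $inventory = @(
        [pscustomobject]@{ System = 'Office 365 email';  Owner = 'Operations Director';    Risk = 'High';   ReportingTrigger = 'Board, insurer' }
        [pscustomobject]@{ System = 'Donor database';    Owner = 'Development Director';   Risk = 'High';   ReportingTrigger = 'Donors, PCI, insurer' }
        [pscustomobject]@{ System = 'Public website';    Owner = 'Communications Manager'; Risk = 'Medium'; ReportingTrigger = 'None' }
    )

    # Save it somewhere your incident response policy references.
    $inventory | Export-Csv -Path .\systems-inventory.csv -NoTypeInformation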


For the stakeholders at Community IT, whenever we're in the response role, it's pretty straightforward. We have two roles: a client communication role and a technical lead. 


We've found it really helpful to segment out those roles so the person who’s in communication with the client is able to focus on the client communication and managing the response. Then we have a separate person or team of people that are involved in the actual technical response. We want to keep those people focused on the technical work and then have a dedicated person focusing on the client communication. At Community IT, we have a two-tiered technical response.


For an organization that is on the receiving end of a data breach, there could be a number of different players. 


  • The primary contact at the organization who works with the IT partner, 
  • your legal counsel should or will likely need to be involved. They may be able to free up funds to respond to breaches. They may want to handle the communication between your organization and your IT provider, so that those communications are privileged. 
  • The business system owner of the system that's been compromised will probably need to be involved. 
  • You may have a board reporting requirement that may be part of that.
  • If you're an organization that has HIPAA compliance or maybe other compliance mandates, you may even have a compliance officer role that would need to be involved. 

These may be unique for your organization, but then, it's important to articulate all of the different roles that are going to be involved as part of a security incident response.


Johan Hammerstrom:  Yeah, it's impossible to stress the importance of this too much, because like you said, Matt, in the midst of an incident, if you don't have clearly defined roles, it's easy for people to either step on each other's toes or let things fall through the cracks. 


For example, if an organization's fundraising database has been compromised, you probably want to have someone who's assessing the level of compromise, restoring services if they have been taken down, and restoring data, if it's been corrupted in any way. 


So that's one role, but you also need to communicate to your donors, to your financial stakeholders for the organization. Who's going to be responsible for that and what message will go out? 


That's another thing. If credit card information was included in the breach, then there's some PCI compliance issues to worry about. Who's going to take ownership of that? 


With any given system, you could imagine there's a different set of key stakeholders and it's good to know in advance who those are and to have that conversation. 



Matthew:  Picking up on what Johan was saying in terms of different systems having different stakeholders, there also would be different levels of risk associated with all of these. 



Risk Levels


There could be risks to operations and productivity. Maybe it is just, “Hey, we can't access these files because they've been encrypted by ransomware.” That could just be an impact on productivity and operations. 


It could be a risk to privacy, of data being publicly exposed.


You could have compliance risks associated with credit card numbers being taken or personally identifiable information being exposed. 


Each system and its business owner will also have a unique risk profile. Defining that upfront can help inform your incident response practices, if and when that system does get compromised.



Response


At Community IT, once we've been made aware of a security issue from the client's perspective or perhaps through some of our own monitoring tools, 


  • Our very first step is to notify the client. 


We want to make the organization aware they have a security incident on their hands, just to start the conversation, but also so the organization itself can start its incident response plan. It may involve bringing in legal counsel to handle communication so that we keep things privileged. So the first step is just to notify. 


  • Once we do the notification, we're already working on containment, because the breakout time is so quick. We are really driven to try to stop an infection or a breach from spreading.


So once an attacker gets a toehold in the network, they can start to move laterally. So it's very critical that we stop that immediate infection or incursion in the network as soon as possible, so no further damage can be caused. 


  • Once we feel like we've stopped the infection, then we will go ahead and start remediation.


After we do the remediation process, 

  • then we really want to understand the infection vector. 


There are new and evolving techniques that we encounter every day and every week in how these breaches occur, so we want to understand what system failed. Was it something that we were aware of and didn't close the door on? Perhaps a user account got compromised because the user was reusing passwords and didn't have multifactor authentication enabled. That's one scenario. Maybe it’s something more sophisticated: a vulnerability in a web application or some other public resource that was compromised.


So we really want to understand how that incursion occurred so we can remediate it as well. 


  • And then finally, and I think this is something that we've become more aware of is that we need to exhaustively investigate related systems and accounts. 


As Johan mentioned earlier, sometimes these breaches are hard to identify. They're not very apparent. A compromised user account may not be apparent right away. So once we do identify a compromised user account, we then also want to look a lot more broadly to make sure that there are no other compromised accounts.


We're not just putting out one fire, but instead we're taking a look back. As part of our lessons learned, what we’ve found is that whenever there is one compromised user account there are often others, and others that people aren't aware of.


So we've been really intentional about remediating not only the initial or known account that was compromised, but also really digging into the network to make sure that there aren't other accounts that have been compromised or other systems that have been infected with viruses that are just lying dormant.


The incident response has this loop. Once we've put out the initial fire and stopped the immediate pain, we also want to be really thorough to make sure that there's nothing that's been left behind, and that we are doing everything we can to completely remediate the network and evict whatever adversary got in.


Johan Hammerstrom:  And I think it's important to note, Matt, that obviously these steps, the response, involve quite a bit of communication, and if the communication system itself has been compromised, then we don't use that system. 


So if we have reason to believe that a client's email has been compromised, we don't email them that they've been compromised. We call them on the phone and let them know. So it's important, as much as possible, to refrain from using the compromised systems until they've been secured.


Matthew:  Yes. That's a good thing to highlight. It would typically be over the phone, via  pre-established direct call numbers to the primary contacts that are in that role.



Reporting


Once there's been an assessment of what has been attacked or which system has been compromised, then there is this question of reporting. 


In our role as the technical responders, we're not doing the reporting. It's the organization's responsibility. This is driven by an organization's compliance and legal policies, and potentially handled by legal counsel. 


Johan, I think you had some insights into this area in terms of reporting and what organizations should be aware of in timeliness or completeness.


Johan Hammerstrom:  Yeah. Those are generally driven by the compliance requirements that the organization might be under. It's important to be aware of that early on. For example, HIPAA, personal health information is protected by HIPAA compliance requirements. 


In the event of a suspected breach, what's known as the business associate is required to notify the organization within 24 hours. They in turn have some timelines involved in assessing the breach and then making a report to HHS. So it's important to be aware not only of what your organization's requirements are in terms of reporting to other authorities, but also what your partners’ requirements are in reporting to you. They need to be aware that they need to comply with the regulations in terms of reporting timelines. 


Related to that, this is a good time to mention one of the questions that came in. 


It was a comment actually, that you also want to contact your insurance company. In many cases you'll need to file a claim due to exposure of credit card information. So, understand what your insurance company requires, understand the level of your cyber security insurance and make sure that you're complying with their requirements as well.



Lessons Learned


Johan Hammerstrom:  All right. So how can you incorporate the lessons learned? Failure is a part of life. We all get to experience it, unfortunately from time to time. But if we can learn from our failures and our mistakes, then we can improve. Ideally, if you recall that security chart I showed you, at the end of some painful experience, your level of security focus has gone up a little bit.


Matthew:  Yeah. I think that's a lesson that we have learned. While incorporating good security practices can be a big ask whenever there's a low sense of urgency, in the aftermath of a security issue there's often a lot more willingness to adopt security controls and improve the organization's posture. 


But if you wait too long, a lot of that sense of urgency goes away. So, it is important to incorporate these lessons learned from responding to one of these incidents relatively soon after  that fire has been put out so that that feeling is fresh and there's still incentive to act. 


If you wait too long, we tend to fall back into a slump of out of sight, out of mind. 


Some typical lessons learned after a breach would be to 

  • update that system's inventory. Maybe a new system had been added by marketing, and got compromised. We weren't really aware of it. We weren't aware of the impact to the organization. We weren't aware of the risk profile. So let's go ahead and review all the systems to make sure that our system's inventory list is up to date. We have an adequate risk profile. We've got an assigned business owner. We know what's going to go on there. 
  • It's a good time to implement multifactor authentication. I can't let one presentation go without touting this as a very effective way to reduce compromised accounts. Implement multifactor authentication (see the sketch after this list).


So hey, remember how painful that process was when we had to remediate those accounts? All right, we're going to use this as an opportunity to improve our security by implementing multi factor authentication. 


  • And then, let's go ahead and edit our incident response policy. Maybe we have a new technology support provider. Maybe based on their incident response, we need to find a new technology support provider. Maybe our legal counsel’s changed. 


Let's go ahead and use this as an opportunity to update that response policy so that we're in a better position to respond the next time that something like this happens.
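

As a rough illustration of that multifactor authentication lesson, here is a minimal sketch using the legacy MSOnline PowerShell module, which was the common per-user approach around the time of this webinar; the module, the all-licensed-users scope, and the values shown are assumptions to adapt, and newer tenants would more likely use security defaults or conditional access policies:

    # Sketch: require per-user MFA for every licensed user via the legacy MSOnline module.
    # (Assumption: this module still fits your tenant; security defaults or conditional
    # access policies are the more current routes.)
    Connect-MsolService

    $mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
    $mfa.RelyingParty = "*"
    $mfa.State = "Enabled"

    Get-MsolUser -All | Where-Object { $_.IsLicensed } | ForEach-Object {
        Set-MsolUser -UserPrincipalName $_.UserPrincipalName -StrongAuthenticationRequirements @($mfa)
    }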



Action Steps


Action steps and what you should be thinking about doing as you move away from this webinar is 


  • spend some time in the next week to review the incident response templates that are going to be included here at the bottom. There’s the very thorough NIST document that provides a really great depth of information and a really great framework. But it's immense and maybe overkill. 

There are some other ones from SANS and also Thycotic, a security vendor that we use, whose management software has a really solid template that can be used as a baseline for your own organization's unique needs. 


  • Understand your compliance requirements. Maybe that should have been the first thing, but understand the actual legal framework for your organization. Do you have mandated reporting? Do you have GDPR compliance requirements? Do you have PCI compliance requirements? Do you actually understand the mandates they impose on your organization? How might that change your security response policy? 


  • And then finally, talk to your IT partners. Unless you have a very large, in-house IT team it's likely that you will want or need an external IT firm that has expertise in this area. One that can provide you with a high degree of confidence that an issue has been resolved. There may be different or multiple partners to handle the systems that you have at your organization. 


Those are really three fundamental steps to consider doing in the next week. Review the templates if you don't have something in place already. Understand your compliance requirements and make sure you've got a good grasp of them. Maybe your reporting requirements are non-existent, or maybe you just owe a good-faith response if your organization is compromised, but it's important to understand those requirements ahead of the process instead of after the fact.


As Johan mentioned, both the slides and the recordings will be made available to all registered attendees and will be up on our website, so you can come back to these resources if you don't have time to jot these down now. If there's any additional questions, I think we can go ahead and take them at this time. Is that right Johan?


Johan Hammerstrom:  Yeah, we have some great questions. We want to take some time to get to those. And yes, as Matt said, we will be posting the recording of this webinar, as well as the slides on our YouTube and Slideshare accounts. We will be posting links to both of those on our website and we'll include links to these resources on our website as well. 


All of that will be emailed out to you after the webinar, but you can also go to communityit.com and navigate to the webinar page. All of that information will be there as well. 


We'll work our way back through the questions. 



Q and A


There was a question as to whether or not bots can be used to install forwarding rules on a compromised email account. There was an organization that had an email account in Office 365 that was compromised. The password was changed almost immediately, but they didn't discover until several months later that a forwarding rule had been set up, which I presume forwarded all of the email this account received to an adversary’s account.


Matthew:  We've encountered a number of cases where yes, there were strange forwarding rules established and the user didn't remember or identify any specific account compromise. A  username and password combination would've had to be made public at some point for that to happen. I don't think simply by clicking on some malicious link that an auto forwarding rule would be created. However, I don't want to rule out anything. 


I will say that now in Office 365, and this is maybe in Gmail as well, but I know in Office 365, depending on your version, there are now some automated alert templates that actually address this specific scenario because it's been such a problem.


You can configure an alert to a specified administrator email address if a new forwarding rule is created. You want that notification because the real challenge is this: you have a compromised account, a forwarding rule was created, and you don't really notice anything because there are no bounces. It's just all your email going out the door. 


Even if you change your password, even if you turn on multifactor authentication, that forwarding rule is still in place. In Office 365, there are some really handy PowerShell commands that you can run that will comb through your entire directory and return any forwarding rules that are in place. So that's helpful; that's actually part of our response process. 
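

For reference, a sketch of the kind of sweep Matt describes, using the standard Exchange Online cmdlets; exactly which rules you treat as suspicious is a judgment call, and the filtering below is only an assumption:

    # Sketch: sweep all mailboxes for mailbox-level forwarding and for inbox rules
    # that forward or redirect mail (the pattern attackers commonly leave behind).
    # Assumes the ExchangeOnlineManagement module and an admin account.
    Connect-ExchangeOnline

    # Forwarding configured on the mailbox itself
    Get-Mailbox -ResultSize Unlimited |
        Where-Object { $_.ForwardingAddress -or $_.ForwardingSmtpAddress } |
        Select-Object UserPrincipalName, ForwardingAddress, ForwardingSmtpAddress, DeliverToMailboxAndForward

    # Inbox rules that forward or redirect messages
    Get-Mailbox -ResultSize Unlimited | ForEach-Object {
        Get-InboxRule -Mailbox $_.UserPrincipalName |
            Where-Object { $_.ForwardTo -or $_.ForwardAsAttachmentTo -or $_.RedirectTo } |
            Select-Object MailboxOwnerId, Name, Enabled, ForwardTo, RedirectTo
    }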


Those are particularly tricky to nail down and figure out because nothing appears wrong other than, maybe a bounced message every once in a while.


Johan Hammerstrom:  You talked early on about assuming the worst in the incident response process. If you discover that your credentials have been hacked, if you discover that someone has access to your email account, assume that they've taken all of your email. That’s very stressful and frightening to assume, but I think you have to assume the worst. Assume that they've found ways to keep compromising your email. These forwarding rules are a great example of that. 


That speaks to the need to do a thorough and exhaustive search of all your systems to see what sorts of exfiltration methods they may have put in place to continue essentially breaching the system even after they've been denied direct access to it.


So that's an important step that leads into the next question, which has to do with encryption. I'll mention at the outset that Matt did a great webinar on encryption back in June, and anyone who didn't get a chance to attend that webinar, I strongly encourage you to go to our YouTube channel or our website and find it. It really was a great primer on encryption and how it works.


Encryption, I think, is still considered to be the final frontier of effective security. So even if someone gets access to, say your Dropbox account, if you've encrypted all of your files on the Dropbox account, they may have all the files, but they're going to look like gibberish to them. You're still the only one who has the key to decrypt those files. 


The question is, when you add encryption hardware, does the hardware make it more difficult to connect devices together, like routers, printers, cable boxes, et cetera?


More broadly, encryption always involves a trade-off between security and convenience. What's your sense right now, Matt, of where the right place to draw that line is?


Matthew:  There are a lot of different standards, and a lot of different places where encryption can be applied. 


There's different scenarios for different cases. Most websites now encourage the use of HTTPS as opposed to HTTP. That ensures the integrity of the communication between you and a website. 


There can be encryption for your computer. FileVault for Mac or BitLocker for Windows ensures that the contents of the disk are encrypted. So if that disk is taken, if your computer's stolen and somebody takes the hard drive out and puts it in another computer, they can't read it.


A lot of the big email providers are now encrypting the tunnel between themselves whenever a message is sent. You'll notice this in Gmail or Office 365. Messages are sent essentially through SSL; TLS is the actual encryption standard. The tunnel is encrypted, so that doesn't carry any additional configuration overhead. 


We are starting to see BitLocker or desktop encryption become a lot more popular because there are now tools in place that make it relatively fast. There's not very much overhead associated with it and a lot of the encryption is handled actually by a special chip that's on the computer. So that may add a few dollars to the cost, but is rather inconsequential.
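

As a small illustration of how little friction is involved now, here is a sketch using Windows' built-in BitLocker cmdlets; the drive letter, encryption method, and protector choices are assumptions, and in a managed environment the recovery key would typically be escrowed to Active Directory or an MDM tool rather than kept locally:

    # Sketch: check BitLocker status, add a recovery password, and enable encryption
    # on the system drive. Assumes a TPM and an elevated PowerShell session on a
    # Windows edition that includes BitLocker.
    Get-BitLockerVolume | Select-Object MountPoint, VolumeStatus, ProtectionStatus, EncryptionMethod

    Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector
    Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -UsedSpaceOnly -TpmProtector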


Once we get beyond encrypting the actual devices and encrypting the communication, then we're talking about encrypting the data within a system. That can add some overhead. But there are pretty sophisticated tools and technology that a lot of providers are making available that allow you to encrypt columns or fields in your SQL database. If you have a need to store medical information, you can encrypt those fields so that information is not readable if it leaves the trusted system.


Johan Hammerstrom:  Yeah. When it comes to your data and your information systems, there's really not a one-size-fits-all approach. Some of your data is very sensitive and needs to be highly secured, and you need to make accessing it as inconvenient as possible. But other data and systems may not require that level of protection. 


Just like in your house, you might put your coin collection or your checkbook in a fire safe. It's hard to get to those items, but you wouldn't want to put your TV remote control in the fire safe as well, because every time you wanted to turn on the TV, you'd have to unlock the safe, and that's annoying. The same is true for your data. Some of your data is probably worth putting a lot of barriers around and making sure that it's really secure. Other aspects of your data probably don't matter too much. 


Increasingly with email, it's probably best to assume that email is a compromised system and if you have some information that really needs to be secure and confidential, you don't transmit it by email. If you do, you use a native encryption system so that you can ensure that if your email gets hacked, all they're seeing is encrypted emails. If they don't have the key to decrypt the email, it's not going to do them any good.


All right. So one last question, Matt. This goes back to the stakeholders concept. As part of your incident response plan and policy you want to identify in advance the different stakeholders that you might have. We brainstormed a fairly long and extensive list of stakeholders. 


The question is, for smaller nonprofits, the list of stakeholders that we mentioned might be the whole organization or even larger than the number of staff in the organization. What do you recommend for smaller nonprofits in those cases?


Matthew:  These are organizational policies that need to come from the top. Ideally, in a small organization, the executive director is aware of and involved in making these decisions so that they're not surprised. They're going to be ultimately accountable. 


So, even in a small organization of just a few people, this is not something that just the admin assistant who works with the IT provider has to do. This is something that the executive director needs to be responsible for developing. 


For organizations that have somewhat larger executive teams, this would naturally fall at the executive team level, with an operations person and/or a finance person being actively involved in the creation and development of these policies. 


The person handling the communication, the business owner, and the person interacting with legal counsel may all be the same person, and that's fine. But it is important to actually define those roles and who's going to fill them even if it's just the same person.


Johan Hammerstrom:  Yeah. I think it's the multiple hats issue that a lot of small nonprofits face, where a small number of staff people have to wear multiple hats. I can envision a situation where, maybe the executive director is taking the lead on constituent outreach and PR, and the CFO or the director of operations is taking the lead on compliance and contacting the insurance company, contacting the bank, if necessary. Delegating the responsibilities in that way. 


It's important to emphasize that when there is an incident, especially in a small organization, it becomes an all hands event because of the rapid response required to deal with it effectively. It needs to become something that everyone is involved with on some level.




Well, thank you, Matt. Before we go, I want to just mention that this is the last Community IT webinar of 2018. We always take a break in December, but we will be back in January of 2019. 


Next month, our partners at Build Consulting are going to be presenting their final webinar of 2018, which is going to be on selecting nonprofit software. Spoiler alert: the technology comes last. You’ll want to join Build Consulting for an Ask the Experts panel discussion next month. 

And just a quick program note: that webinar is going to be on the second Wednesday of November, November 14th, at four o'clock. We decided not to do it the day before Thanksgiving. So be sure to sign up for that. 


All right. Well, thank you Matt, for your time. This was great and I wish everyone out there a happy Halloween. Have a good month and we look forward to you joining us next month.


Matthew:  Thank you. Take care.