Implementing an Incident Response Process With Teeth
Del A. Russ, GCFA, CISSP August 18, 2011
Welcome
Speaker: Del A. Russ, GCFA, CISSP
Title: Implementing an Electronic Incident Response Process with Teeth
Abstract: Most mature Infosec organizations have an Electronic Incident Response Process of some sort, and many are well-founded upon best practices that have existed for years. Even with decent processes in place, however, real-time IR engagements can still be painful experiences for those involved, plagued with miscommunications, disorder, preventable mistakes, and/or poor decision making that ultimately drives the duration and cost of responding through the roof. How your IR process is actually implemented makes a big difference! This presentation will drill into the root causes for why most IR processes lack the "teeth" necessary to promote fast, accurate, balanced, and authoritative responses, and demonstrate tactics that can be directly applied to help guarantee improved handling of incidents from the time of initial response to full follow-through! The material is suitable for Infosec professionals at all levels, technical or managerial.
Bio: Del A. Russ is currently employed as a Senior IT Security Analyst at Xerox Corporation, where he has been involved in numerous Information Security programs since 2001. Mr. Russ founded Xerox Information Management's Computer Forensics Program in 2005, and the Xerox Electronic Incident Response Program (EIRP), which he managed from 2007-2010. He has participated directly in the handling of hundreds of electronic security incidents at all levels of complexity and severity. Del's other expertise is in Threat Management programs and solutions, including Network-Based Vulnerability Scanning (NBVS), Data Leak Protection (DLP), Intrusion Detection Systems (IDS), Log Monitoring Systems (LMS), and other related areas. Prior to entering the Information Security field, he spent ten years in Software Engineering and IT Consulting, primarily with Computer Sciences Corporation (CSC). Mr. Russ holds a Bachelor of Science degree in Computer Science from the State University of New York at Buffalo, with a minor in Psychology. He holds GCFA and CISSP professional certifications.
Topics
- Incident Response Problems
- 10 Big New Teeth for Your IR Process
- Taking Control: What to Implement
- New Operational Definition of an "Incident"
- Observed Benefits & Caveats
- Open Discussion / Q&A
Incident Response Problems
Are we having fun yet?
You may have been here before… Even when following the basics of a good IR process:
- High stress, antagonistic participation
- Slow: difficult to make confident decisions
- Lack of trust, biased decision making
- Exceeding/ignoring authority
- Costly technical mistakes
- Disorganized data collection
- Frequent over-engagement relative to the risk involved
- Frequent under-engagement relative to the risk involved
Post-Mortem Root Causes to Target:
- Ambiguity / inconsistency
- Confusion
- Disorderly procedures
- Lack of control
- Mistrust
- Incomplete fact collection
- Poor decisions
- Disjointed sub-processes: Physical Security vs. Infosec; real-time IR vs. forensics as a "back-office" function
- Unclear roles
- Authority & bias issues
High-level processes lack "teeth" sufficient for good execution.
IR Teeth: What to Implement
(1) Measure Evidence Strength
Q: What are you really looking at?
Rating scheme: follow the courts. See "Computer Records and the Federal Rules of Evidence", Orin S. Kerr; USA Bulletin (March 2001); U.S. Department of Justice: http://www.usdoj.gov/criminal/cybercrime/usamarch2001_4.htm
Example (stronger means less human bias in content):
- Physical (strong)
- Electronically Generated (strong)
- Electronically Stored (contains human content)
- Verbal / Hearsay / Opinion (weak)
A rating-scheme sketch follows the examples below.
(1) Measure Evidence Strength - Examples
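As a minimal sketch of the rating scheme, the court-derived ladder above can be encoded as an ordered enum so every evidence article gets a comparable strength score. The names and helper below are illustrative assumptions, not part of any standard.

```python
# Kerr's evidence-strength ladder as an ordered enum.
# Higher value = less human bias in content = stronger evidence.
from enum import IntEnum

class EvidenceStrength(IntEnum):
    VERBAL_HEARSAY_OPINION = 1    # weakest: recollections, opinions
    ELECTRONICALLY_STORED = 2     # human-authored content (email, documents)
    ELECTRONICALLY_GENERATED = 3  # machine-produced records (logs, audit trails)
    PHYSICAL = 4                  # strongest: seized hardware, original media

def strongest(articles: dict[str, EvidenceStrength]) -> str:
    """Return the name of the most probative evidence article collected."""
    return max(articles, key=articles.get)

# Usage: rate each article as it is collected, then weigh them.
articles = {
    "helpdesk interview notes": EvidenceStrength.VERBAL_HEARSAY_OPINION,
    "firewall syslog export": EvidenceStrength.ELECTRONICALLY_GENERATED,
}
print(strongest(articles))  # -> firewall syslog export
```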
(2) Define Incident Types / Subtypes
- No industry-standard convention; opinions vary
- Vendors' use of basic terminology varies
- Most incidents have more than one type
- Distinguish the means of exploitation from the impacts
- Consider how threats are practically managed
- One type leads to another: interrelationships matter (a taxonomy sketch follows the examples below)
(2) Incident Types / Subtypes - Example
(2) Incident Types – Interrelationship
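Because no industry-standard convention exists, the taxonomy below is purely a hypothetical sketch of the idea: keep means-types separate from impact-types, and record which means commonly lead to which impacts so one verified type prompts the next check. All names are illustrative.

```python
# Hypothetical type/subtype taxonomy separating means from impacts.
MEANS = {
    "Unauthorized Access": ["Stolen Credentials", "Vulnerability Exploit"],
    "Malicious Code": ["Worm", "Trojan", "Rootkit"],
    "Sabotage": ["Insider Action", "Physical Tampering"],
}
IMPACTS = {
    "Denial of Service": ["Outage", "Degradation"],
    "Unauthorized Disclosure": ["PII Breach", "IP Theft"],
}

# One type leads to another: record the interrelationships so a verified
# means-type prompts the team to check for the impacts it typically produces.
LEADS_TO = {
    "Unauthorized Access": ["Unauthorized Disclosure", "Denial of Service"],
    "Sabotage": ["Denial of Service"],
}

def next_checks(verified_means: str) -> list[str]:
    """Given a verified means-type, list impact types to investigate next."""
    return LEADS_TO.get(verified_means, [])

print(next_checks("Sabotage"))  # -> ['Denial of Service']
```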
(3) Architecture-Based Data Collection
The "Architecture-Timeline View":
- A visual depiction of the incident scene using a logical architecture diagram
- A dataflow diagram plus evidence articles
- Numerous value-added uses
- Captures IT Assets, Org/People, and Geographic data all at one time
- Every Incident can be defined as a set of ordered Events
(3) Architecture-Timeline View - Example
(3) Architecture-Timeline View (cont.)
Electronic Incident: a sequence of one or more Events involving IT Assets and/or Data, and potential violations of policy/law.
Event attributes:
- Source (Person, IT Asset, Location)
- Destination (Person, IT Asset, Location)
- Environment traversed (Persons, IT Assets, Locations)
- Action Taken
- Relative order of the Event within the overall incident
- Incident Type / Subtype implied (Verified vs. Potential; Attempted vs. Successful)
Source, Environment, and Destination attributes carry:
- Inherent Data Risk (PII, SPII, etc.)
- Means (i.e. tactic, exploit, etc.)
- Vulnerability Targeted (if any)
- Evidence Articles (Electronically Generated, Stored, or Verbal/Hearsay)
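The Event attributes above map naturally onto a record type. Below is a minimal sketch as a Python dataclass; the field names and types are my own assumptions, meant only to show that an incident reduces to an ordered list of such Events.

```python
# Sketch of an Event record for the Architecture-Timeline View.
from dataclasses import dataclass, field

@dataclass
class Event:
    order: int                    # relative position within the incident
    source: str                   # person, IT asset, or location
    destination: str
    environment: list[str]        # persons/assets/locations traversed
    action: str                   # action taken
    incident_type: str            # implied type/subtype
    verified: bool = False        # Verified vs. Potential
    successful: bool = False      # Successful vs. Attempted
    evidence: list[str] = field(default_factory=list)

# An incident is then just an ordered set of such Events:
incident = sorted([
    Event(2, "workstation-042", "EX-Bogus", ["corp LAN"], "service crash",
          "Denial of Service", verified=True, successful=True),
    Event(1, "ext-host", "EX-Bogus", ["VPN"], "login with stolen creds",
          "Unauthorized Access"),
], key=lambda e: e.order)
```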
(4) Risk Assessment Checklist
Have an objective table / checklist pre-defined (a sketch follows below):
- First consider impacts to people: life, health, well-being, social standing, etc.
- Criminal implications
- Laws & regulations that apply
- PR implications and long-term fallout
- Business / monetary: estimate potential cost impacts, if possible
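A sketch of what such a pre-defined checklist might look like in practice, with the categories above evaluated the same way on every incident; the questions and helper are illustrative assumptions.

```python
# Hypothetical pre-defined risk checklist, applied identically each time
# so the assessment stays objective.
RISK_CHECKLIST = [
    ("People",   "Any impact to life, health, well-being, or social standing?"),
    ("Criminal", "Could the activity constitute a criminal offense?"),
    ("Legal",    "Do breach/disclosure laws or regulations apply?"),
    ("PR",       "Is there public-relations exposure or long-term fallout?"),
    ("Business", "Are there estimable monetary or operational cost impacts?"),
]

def assess(answers: dict[str, bool]) -> list[str]:
    """Return the checklist categories flagged as applicable."""
    return [cat for cat, _question in RISK_CHECKLIST if answers.get(cat)]

print(assess({"Legal": True, "Business": True}))  # -> ['Legal', 'Business']
```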
(5) Quantify Evidence Collection - Example
Natural cost/skill divide-lines (i.e. "Tiers"). Example:
- Tier-A: Simple. High-level research, lookups, etc.; discussion notes with witnesses
- Tier-B: Sample collection from systems involved. Logs, file samples, registry, etc.; interview statements from tech support staff and operations
- Tier-C: Network Forensics. Logical drive imaging on running systems (no outage)
- Tier-D: Full Evidence Collection & Preservation. Seize laptops, shut down systems, take physical hard drive images, etc.; formally interview perpetrators for a signed admission, etc.
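Since the tiers form a strict cost/skill ordering, they can be modeled as an ordered enum so the team records the current tier and ascends one step at a time by explicit decision. A minimal sketch, with tier summaries taken from the example above:

```python
# Tiered evidence-collection model as an ordered enum.
from enum import IntEnum

class CollectionTier(IntEnum):
    A = 1  # simple research, witness discussion notes
    B = 2  # log/file/registry samples, staff interview statements
    C = 3  # network forensics, logical imaging of running systems
    D = 4  # full seizure, physical images, formal perpetrator interviews

def ascend(current: CollectionTier) -> CollectionTier:
    """Move up exactly one tier; never skips tiers without a review cycle."""
    return CollectionTier(min(current + 1, CollectionTier.D))

tier = CollectionTier.B
print(ascend(tier).name)  # -> C
```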
(6) Quantify People Involvement - Example
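The engagement-level table on this slide did not survive the transcript. The sketch below reconstructs the idea from the interim report later in the deck, where Level-2 engages Infosec plus field operations and Level-3 adds senior management and legal counsel; the exact rosters are assumptions.

```python
# Hypothetical personnel-engagement levels, quantified so ascending a
# level is an explicit, recorded decision rather than an ad-hoc invite.
ENGAGEMENT_LEVELS = {
    1: ["Infosec analyst on point"],
    2: ["Infosec analyst on point", "field operations / tech support"],
    3: ["Infosec analyst on point", "field operations / tech support",
        "senior management", "legal counsel"],
}

def roster(level: int) -> list[str]:
    """Who must be engaged at a given personnel-engagement level."""
    return ENGAGEMENT_LEVELS[level]

print(roster(2))
```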
(7) Quantify Response Time
Participants in IR usually have other things to do, so clarify the aggressiveness of the IR actions required. The Risk Assessment factors should justify the speed required. Example:
- 24/7: work non-stop until certain steps are completed (i.e. at least until the incident is Contained and volatile evidence is preserved ASAP)
- Business Hours: highest-priority work during normal hours
- Normal Operations: balance IR tasks with other day-to-day priorities
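A sketch of letting the risk assessment, rather than the loudest voice, justify the required speed: map the flagged risk categories to one of the three tempos above. The thresholds are illustrative assumptions.

```python
# Derive the required response tempo from the risk categories flagged
# by the checklist in tooth (4).
def response_tempo(flagged_risks: list[str]) -> str:
    if "People" in flagged_risks or "Criminal" in flagged_risks:
        return "24/7"              # non-stop until contained, volatiles preserved
    if flagged_risks:
        return "Business Hours"    # highest priority during normal hours
    return "Normal Operations"     # balance with day-to-day priorities

print(response_tempo(["Legal", "Business"]))  # -> Business Hours
```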
(8) Structure Your Action Plan
The SANS Institute's IR phases are a good foundation: "Computer Security Incident Handling", v2.3.1, Stephen Northcutt; http://security.gmu.edu/ComputerIncidentHandling.pdf. Include Forensic IR (FIR) steps! Most of you may be familiar with this model already:
- Identification
- Containment (include Forensic Incident Response (FIR) tasks here)
- Eradication
- Recovery
- Reporting (include back-office lab forensic analysis & reports)
- Follow-Up
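A sketch of the phased action plan as an ordered checklist with the FIR tasks embedded where the slide places them; the task wording is illustrative.

```python
# Phased action plan with forensic (FIR) tasks attached to their phases.
PHASES = [
    ("Identification", []),
    ("Containment",    ["FIR: preserve volatile evidence before containment changes state"]),
    ("Eradication",    []),
    ("Recovery",       []),
    ("Reporting",      ["FIR: fold back-office lab forensic analysis into reports"]),
    ("Follow-Up",      []),
]

def action_plan() -> list[str]:
    """Flatten phases (and their embedded FIR tasks) into one ordered plan."""
    plan = []
    for phase, fir_tasks in PHASES:
        plan.append(phase)
        plan.extend(f"  {t}" for t in fir_tasks)
    return plan

print("\n".join(action_plan()))
```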
(9) Define Reporting Types & Points
Management will push for status. Don't mislead, overstate, or speculate; the managerial instinct is to take control and initiate actions, so set realistic expectations early on. Scale the level of detail as necessary for the target audience. Examples:
- Brief Interim Status Summary (current state): a structured email update after every IR team engagement/cycle
- Executive Summary Report: final, for management; delivered after most IR phases are complete, but before Follow-Ups
- Executive Summary w/ Full Forensic Report: final, for technical staff, law enforcement, attorneys, etc.; issued as the last step in the Reporting phase
(10) Formal / Rigid IR Facilitation
Dedicate a person to the Facilitator role. The Facilitator:
- Runs the IR "machine"
- Explains the process quickly to field personnel and managers up front: this is a totally new experience for most victims, who do not understand the risks/impacts of poor IR actions
- Runs conference calls using checklists, data collection forms, and tight procedures
- Maintains control over participant actions
- Leads, and earns the trust of, those involved
Summary: 10 New IR "Teeth"
1. Measurement of Evidence Strength
2. Clear Definition of Incident Types / Subtypes
3. Collection of Data via the Architecture-Timeline View
4. Using a Risk Assessment Checklist
5. Quantifying the Extent of Evidence Collection
6. Quantifying the Level of Personnel Engagement
7. Quantifying the Response Time for Actions Taken
8. Having a Structured Action Plan with FIR Built In
9. Defining Report Types (and Setting Expectations)
10. Formally Facilitating IR Engagements
New Operational Definition of “Incident”
"Stateful" Nature of Incidents
Observations:
- Incidents are stateful and have a life-cycle: complex, multi-faceted, progressional
- A good process meets this reality: "defensible point-in-time decision making"
- Actions are based upon (a) what you already know, and (b) what the current Incident Type leads to (e.g. an Infosec Control Violation leading to Unauthorized Access)
Incident Review Cycle (or Session)
The Facilitator should identify the current state attributes:
- Evidence articles, rated for strength
- Chronological listing of Events: what the current evidence points to, and also what may have happened before and after
- Incident Types involved (Verified vs. Potential; Attempted vs. Successful)
- Risk factors that apply
- Current evidence-collection Tier, and which one is appropriate to ascend to next
Incident Review Cycle (cont.)
(attributes, continued)
- Current personnel engagement Level, and which level should be ascended to next
- Tasks in the Action Plan, with names, target completion dates, etc.
- Response times to be taken by the IR team
A state-record sketch tying these attributes together follows below.
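Pulling the review-cycle attributes together, a single state record that the Facilitator updates each cycle might look like the sketch below; the field names reuse the earlier sketches and are assumptions throughout.

```python
# One mutable state record per incident, refreshed at every review cycle.
from dataclasses import dataclass, field

@dataclass
class IncidentState:
    events: list = field(default_factory=list)          # chronological Event records
    incident_types: dict = field(default_factory=dict)  # type -> "verified"/"potential"
    risk_flags: list = field(default_factory=list)      # checklist categories that apply
    collection_tier: str = "A"                          # current tier, A..D
    engagement_level: int = 1                           # current personnel level
    tempo: str = "Normal Operations"                    # required response time
    action_items: list = field(default_factory=list)    # (task, owner, due date)

# Each review cycle re-derives tier/level/tempo from the updated evidence,
# which is what makes the point-in-time decisions defensible later.
```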
Example Interim Incident Report
Date of Last Review Cycle: 8/18/2011
Facilitator: Del Russ
IR Team Formed: John Doe, Jane Plain, Ricky Henderson, Art Vandelay
Incident: Suspicious Unexpected Outage on EX-Bogus Application
Summary of Evidence and Incident Types Identified:
- Strong electronic evidence exists which Verifies that the outage on EX-Bogus was a Denial of Service (DoS) incident invoked via Sabotage by an employee.
- Weak evidence exists suggesting that an Unauthorized Access event may have preceded the outage, via Use of Stolen Logon Credentials. Further Verification is being pursued at this time.
- Strong evidence exists to suggest that the employee Attempted an Unauthorized Disclosure of his own login credentials to an untrusted outside party.
Example Interim Incident Report (cont.)
Risk Factors Identified:
- EX-Bogus processes customer PII; national and state breach/disclosure laws apply
- Federal Computer Fraud & Abuse law may apply: a felony criminal offense
- The business outage has resulted in 400 staff-hours of impact and $75,000 in lost revenue to date
Current Operating State:
- Tier-A and Tier-B evidence collection and forensics have been applied thus far; no forced system outages yet
- Level-2 Personnel Engagement has been in effect, with Infosec and field operations triaging the scene
- We have been working 24/7 to secure evidence, contain, and recover from the initial outage
Next Steps (Action Plan):
- John Doe will continue RCA and determine whether evidence exists to Verify the account Disclosure violation
- The Level-3 team will be engaged: senior management and legal counsel are needed to determine whether disclosure should/will occur, or whether law enforcement is to be engaged
- Evidence Collection will plan for Tier-C network-based means, awaiting a business decision by the Level-3 team on whether to incur a full outage for hard drive imaging on EX-Bogus
NOTE: An Executive Summary report will not be possible until the analysis is complete, perhaps in another 2-3 weeks. Further interim email reports of this sort will occur 2x per week until then.
Next Review Cycle: Mon 8/22/11
Ascending Capability Further…
- Escalation criteria can be developed ahead of time; formulate a decision matrix, e.g.:
  - Under what conditions should Tier-D (Full) forensic acquisition occur?
  - What necessitates 24/7 responsiveness?
  - When should Level-3 DA be involved or not?
- This eliminates bias!
- Predictive incident modeling becomes possible…
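A hypothetical decision matrix answering exactly the questions above: pre-agreed rules evaluated mechanically against the current incident state, so the escalation answer is the same no matter who is on the call. The conditions and thresholds are illustrative.

```python
# Pre-agreed escalation rules: (condition on incident state, escalation).
RULES = [
    (lambda s: "Criminal" in s["risks"], "Tier-D forensic acquisition"),
    (lambda s: "People" in s["risks"], "24/7 responsiveness"),
    (lambda s: s["types"].get("Unauthorized Disclosure") == "verified",
               "Engage Level-3 (management/legal)"),
]

def escalations(state: dict) -> list[str]:
    """Evaluate every pre-agreed rule against the current incident state."""
    return [action for cond, action in RULES if cond(state)]

state = {"risks": ["Legal", "Criminal"],
         "types": {"Denial of Service": "verified"}}
print(escalations(state))  # -> ['Tier-D forensic acquisition']
```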
Benefits And Caveats
First, the Caveats…
- It takes time to implement all of this
- There is a learning curve for the Facilitator and the IR team
- It changes the habits and culture of IR: "individual contributor" decision making ends, and the process dictates direction, not individuals
- It forces everyone to remain honest!
Primary Benefits Observed
- Handle ANY electronic incident optimally
- Earns the trust of tough IR participants
- Reduces the rate of costly mistakes
- Reduces friction; more order & control
- Promotes better understanding by the managers who are accountable
- Balances the response to the inherent risk: less over-engagement, less under-engagement
Some Additional Benefits
- Incident Types are useful as a metrics framework for many areas of Threat Management
- The Architecture-Timeline View is useful in final reports, summarizing for all audiences
- Predictive potential: incident modeling
- Point-in-time defensible decision making works well in uncharted incident terrain; new situations are handled consistently
- Scalable
- Drives us closer to industry standardization?