Chapter 6 Implement Threat Control Measures
After identifying vulnerabilities and threats • Implement threat control measures
Identify level of protection • Factor 1 – Initial risk exposure • Vulnerability and threat analyses (Chapter 5) produced the initial risk exposure • Identify the severity and likelihood of each vulnerability/threat pair within the critical threat zones • Threat control measures are implemented to reduce the initial risk exposure to the desired target.
Identify level of protection • Factor 2 – Identify the IA-critical and IA-related system entities and functions • IA-critical – an entity or function whose performance is essential to the safe, reliable, and secure operation and support of a system • IA-related – a system or entity that performs or controls functions which prevent or minimize the effect of failure of an IA-critical system or entity • See Exhibit 4, page 132 • Correlate these entities and functions to the initial risk exposure • What is the risk to the IA-critical and IA-related functions?
Identify level of protection • Factor 3 – Specify Must Work Functions (MWFs) and Must Not Work Functions (MNWFs) • MWF – software that, if not performed, or performed incorrectly, inadvertently, or out of sequence, could result in a hazard or allow a hazardous condition to exist • MNWF – a sequence of events or commands that is prohibited because it would result in a system hazard.
Identify level of protection • A hazard and operability (HAZOP) study identifies MNWFs • Neglecting to specify MNWFs creates an opportunity for serious vulnerabilities • See Exhibit 5, page 134
Factor 4 – Entity control analysis (Exhibit 6, page 79) • Carefully review the entity control analysis to eliminate any points of failure • Review which entities control the MWFs and MNWFs
Level of protection • Factor 5 – Time element • The time element should not be ignored • Two aspects of the time element to evaluate: • the time window during which the protection is needed • the time interval during which the proposed threat control measures will be effective
Level of protection • Factor 6 – Privacy • Re-examine privacy issues in light of the system design, operation, and operational environment • Ensure that corporate or organizational assets, intellectual property, and information are protected
Level of protection • From the analysis and synthesis of the six factors: • the needed risk exposure reduction is identified • the estimated level of protection is identified • the IA integrity level is defined for the system
Level of protection • The IA integrity level represents the level of IA integrity that must be achieved or demonstrated to maintain the IA risk exposure at or below its acceptable level. • There are five levels of IA integrity: • 4 – Very High • 3 – High • 2 – Medium • 1 – Low • 0 – None
IA integrity levels • Used to: 1) prioritize the distribution of IA resources, and 2) select appropriate threat control measures based on the type, level, and extent of protection needed • IA integrity levels reflect confidence that a system will achieve and maintain the required safety, security, and reliability under all stated conditions, so that the risk exposure is maintained at or below the target • They are a measure of the robustness of a system’s IA features and the process(es)
Evaluate Controllability • Any aspect that enhances safety, security, and reliability must be considered for threat control • Examine people – people have the potential to influence system integrity in a positive manner • Controllability is a measure of the ability of human action to control the situation following a failure • Controllability is a human-assisted form of fault tolerance or failing safe/secure • Design provisions include manual override, emergency shutdown, critical bypass, etc. • See Exhibit 6, page 138
Evaluate Operational Procedures • Procedures are developed for each operational mode/state: • normal operations • abnormal operations • recovery • If developed and followed correctly, operational procedures contribute to system integrity; the opposite is also true.
Contingency Planning and Disaster Recovery • An integral part of risk management and implementing threat control measures • Contingency plans identify alternative strategies to be followed, or actions to be taken, to ensure ongoing mission success should unknown, uncertain, or unforeseen events occur • Contingency planning assumes worst-case scenarios.
Contingency Planning • Steps in contingency planning • Identify all internal and external system entities and the degree of control over each: system definition and entity control analysis (Chapter 4) • Identify what could go wrong with a system and its entities: the failure points/modes and loss/compromise scenarios • Vulnerability/threat characterizations, transaction paths, and critical threat zones (Chapter 5) are analyzed during this process • See Exhibit 7, page 141 and Exhibit 8, page 142
Contingency Planning • An appropriate response for each contingency is defined, consistent with the IA goals and IA integrity level • Alternative courses of action are defined and alternative system resources identified • Priorities are established for restoring and maintaining critical functionality • The availability of alternative sources, services, and resources is specified • See Exhibit 9, page 144
Contingency Planning • Assign responsibility for deploying the alternative courses of action and resources • Next, the maximum time interval within which the responsive action must be invoked is defined • Identify secondary courses of action/resources to invoke if the maximum time interval for the primary response is exceeded.
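A minimal sketch in Python (the scenario, responses, and field names are hypothetical) of how each contingency might be recorded with its primary response, responsible party, maximum response interval, and secondary response:

```python
from dataclasses import dataclass

# Hypothetical record for one contingency: what to do, who deploys it, how long
# the primary response is allowed to take, and what to fall back to.
@dataclass
class Contingency:
    failure_scenario: str          # loss/compromise scenario being planned for
    primary_response: str          # alternative course of action / resources
    responsible_party: str         # who deploys the response
    max_response_seconds: int      # maximum time interval for the primary response
    secondary_response: str        # invoked if the primary interval is exceeded

plan = [
    Contingency("primary data center offline", "fail over to warm standby site",
                "operations duty officer", 900, "restore from off-site backups"),
]
```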
Contingency Planning • Plans must be communicated and staff must be trained • Practice drills should be conducted regularly to: • familiarize staff with the plan’s provisions • uncover any defects in the plan • Contingency plans should be reviewed, updated, and revalidated at fixed intervals.
Perception Management • System owners want users to perceive that the system is safe, secure, and reliable • This has obvious benefits for the organization • It also serves as a deterrent to potential attackers when the system is perceived to be difficult to attack • Do not go overboard, however, and make the system appear to be a challenge to attackers.
Perception Management • Deploy decoys that look authentic: • decoy servers • decoy screens • decoy files/data • decoy passwords • A security trap to lure would-be attackers • Lures attackers away from critical data • An effective method of blocking a DoS attack
IA Design Techniques and Features • Threat control measures are primarily implemented through • design techniques and features • operational procedures • contingency plans • physical security practices
IA Design Techniques and Features • Threat control measures are chosen in response to specific vulnerabilities, hazards, and threats • The goal of threat control measures is to reduce the initial risk exposure to at or below the target • Design techniques and features are a collection of methods by which a system (or component) is designed, and capabilities that are added to a system, to enhance IA integrity • See Exhibit 10, page 146
Access control • A design feature that prevents unauthorized and unwarranted access to systems, applications, data, and resources • Access controls should be operative at all layers of the OSI and other networking protocol stacks • Access control mechanisms are activated immediately after authentication.
IA Design Technique - Access control • An initiator (person or process) requests to perform an operation on a target resource • Access control mechanisms mediate the request based on predefined access control rules • Access control rights are defined for each initiator/resource combination; access privileges are defined for each initiator/operation combination.
Access control • Access control can be defined in three ways: • Access control lists – specify the approved initiator(s) for each (group of) target(s) • Access capability lists – specify the target(s) accessible to a (group of) initiator(s) • Security labels – each initiator and target is assigned one or more security labels (confidential, secret, top secret, etc.); the labels define access control rights and privileges
Access control • Developing access control rules • First, start with a general list of all initiators, their operations, and the resources each uses • Develop a matrix of initiators and resources indicating the operations performed on each resource – the access control list • Rotate the matrix to develop the access capability list – the resources and operations per initiator • Security labels – groups of initiators with a given security clearance have the same access control rights/privileges • See Exhibit 14, page 157
Access control • Invoke a default “access denied” response if the system encounters an unknown or undefined state • Access control rules should be regularly reviewed, updated, and revalidated • Protect the files defining the access control rules from unauthorized access and modification • Define who has the right to update/modify the access control rules, in both normal and abnormal situations.
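A minimal Python sketch (the initiators, resources, and operations are hypothetical) of the access control matrix described above, showing how it can be read as an access control list per resource, rotated into an access capability list per initiator, and mediated with a default "access denied" for any unknown combination:

```python
# Hypothetical access control matrix: (initiator, resource) -> permitted operations.
ACCESS_MATRIX = {
    ("payroll_clerk", "payroll_db"): {"read", "update"},
    ("auditor",       "payroll_db"): {"read"},
    ("backup_proc",   "payroll_db"): {"read", "copy"},
}

def access_control_list(resource):
    """Approved initiators (and their operations) for one target resource."""
    return {init: ops for (init, res), ops in ACCESS_MATRIX.items() if res == resource}

def capability_list(initiator):
    """Targets (and operations) accessible to one initiator -- the 'rotated' matrix."""
    return {res: ops for (init, res), ops in ACCESS_MATRIX.items() if init == initiator}

def mediate(initiator, operation, resource):
    """Mediate a request; any unknown or undefined combination defaults to 'access denied'."""
    permitted = ACCESS_MATRIX.get((initiator, resource), set())
    return operation in permitted   # default deny

print(mediate("auditor", "update", "payroll_db"))   # False -- not an approved operation
print(capability_list("payroll_clerk"))             # {'payroll_db': {'read', 'update'}}
```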
Access control • Access control rights - time of access: • User/process may be allowed to access certain system resources only at certain times during the day. • User/process may only be allowed to access system resources during a specified time interval after their identity has been authenticated. • Time-sensitive information may only be accessed “not before” or “not after” specific dates and times. • E-mail, public keys, and other security tokens may have built-in (hidden) self-destruct dates and macros.
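A brief sketch, under assumed policy fields (allowed hours, not-before/not-after dates, and a maximum session age after authentication), of how the time-of-access rules above might be enforced:

```python
from datetime import datetime, time

# Hypothetical time-of-access policy for one initiator/resource pair.
POLICY = {
    "allowed_hours": (time(8, 0), time(18, 0)),           # business hours only
    "not_before":    datetime(2024, 1, 1),                # embargoed until this date
    "not_after":     datetime(2024, 12, 31, 23, 59, 59),  # expires after this date
    "max_session":   30 * 60,                             # seconds since authentication
}

def time_of_access_ok(now, authenticated_at, policy=POLICY):
    start, end = policy["allowed_hours"]
    in_hours   = start <= now.time() <= end
    in_window  = policy["not_before"] <= now <= policy["not_after"]
    fresh_auth = (now - authenticated_at).total_seconds() <= policy["max_session"]
    return in_hours and in_window and fresh_auth   # deny unless every condition holds
```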
Access control • Access control also includes physical access control: • control of, and accountability for, portable systems and media • physical access to desktop PCs, servers, cable plant, shared printers, archives, and hardcopy outputs
IA Design Technique - Account for all possible logic states • A method to prevent a system from entering unknown or undefined states, which are potentially unstable and can compromise IA integrity • All logic states are defined for each critical decision point or command • Once the logic states have been identified, an appropriate response is defined for each, for example: • continue normal operations • trigger an alarm • request further input/clarification • emergency shutdown
Account for all possible logic states • Implement an OTHERWISE or default clause to trap exceptions or transient faults • This technique should be applied to all types of software: system software, application software, firmware, etc. • Also useful for uncovering missing and incomplete requirements • See Exhibit 15, page 160
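A minimal illustration (the states and responses are hypothetical) of enumerating all logic states at a critical decision point and trapping unknown states with an OTHERWISE/default clause:

```python
# Hypothetical handler for an IA-critical decision point. Every expected state is
# enumerated, and an OTHERWISE/default clause traps unknown or undefined states
# instead of letting the system drift into an unstable condition.
def handle_state(state):
    if state == "NOMINAL":
        return "continue normal operations"
    elif state == "SENSOR_DISAGREEMENT":
        return "request further input/clarification"
    elif state == "INTRUSION_SUSPECTED":
        return "trigger alarm"
    elif state == "LOSS_OF_CONTROL":
        return "emergency shutdown"
    else:
        # OTHERWISE clause: unknown/undefined state or transient fault
        return "trigger alarm"
```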
IA Design Technique - Audit trail • Provides several IA integrity functions: • capturing information about which people/processes accessed what system resources, and when • capturing information about system states and transitions, and triggering alarms if necessary • developing normal system and user profiles for intrusion detection systems • providing information with which to reconstruct events during accident/incident investigations
Audit trail • The audit trail provides real-time and historical logs of system states, transitions, and resource usage • When a system compromise is suspected, a security alarm is triggered • The alarm contents and the primary and secondary recipients are defined during implementation.
Audit trail • Components of a security alarm • Identity of the resource experiencing the security event • Date/ timestamp of the security event • Security event type (integrity violation, operational violation, physical violation, security features violations, etc.) • Parameters triggering the alarm • Security alarm severity (indeterminate, critical, major, minor, warning) • Source that detected the event • Service user who requested the service that led to the generation of the alarm • Service provider that provided the service that led to the generation of the alarm
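A sketch of a security alarm record carrying the components listed above; the field names and example values are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical security alarm record with the components listed on the slide above.
@dataclass
class SecurityAlarm:
    resource: str                # identity of the resource experiencing the event
    event_type: str              # e.g. "integrity violation", "operational violation"
    severity: str                # indeterminate | critical | major | minor | warning
    trigger_parameters: dict     # parameters that tripped the alarm
    detected_by: str             # source that detected the event
    service_user: str            # user who requested the service
    service_provider: str        # provider that delivered the service
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

alarm = SecurityAlarm(
    resource="payroll_db", event_type="integrity violation", severity="major",
    trigger_parameters={"failed_logins": 7}, detected_by="audit daemon",
    service_user="jsmith", service_provider="db_frontend")
```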
Audit trail • An audit trail consumes system resources; thus, carefully determine which events to record and how frequently they should be recorded • A determination also has to be made about the interval at which the audit trail should be archived and overwritten • Protect audit trails from unauthorized access.
IA Design Feature - Authentication • Accurate authentication is an essential first layer of protection • Access control, audit trail, and intrusion detection functions depend on authentication • Authentication methods: • unilateral • mutual • digital certificates • Kerberos • data origin • peer entity • smartcards • biometrics
Authentication • Unilateral authentication – when a user logs onto a system, the user is authenticated to the system but the system is not authenticated to the user • Mutual authentication – both parties (users, processes, or systems) are authenticated to each other before any transactions take place, e.g., in e-commerce • A challenge-response protocol is commonly used to perform mutual authentication.
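A minimal challenge-response sketch using a pre-shared key and HMAC; the parties and key handling are hypothetical and simplified (real protocols add identities, nonce management, and key distribution):

```python
import hashlib
import hmac
import os

# Each side proves knowledge of a shared secret key without ever transmitting it.
SHARED_KEY = os.urandom(32)

def respond(key, challenge):
    return hmac.new(key, challenge, hashlib.sha256).digest()

# A authenticates B ...
challenge_from_a = os.urandom(16)
response_from_b  = respond(SHARED_KEY, challenge_from_a)
b_is_authentic   = hmac.compare_digest(response_from_b, respond(SHARED_KEY, challenge_from_a))

# ... and B authenticates A with its own challenge, so the authentication is mutual.
challenge_from_b = os.urandom(16)
response_from_a  = respond(SHARED_KEY, challenge_from_b)
a_is_authentic   = hmac.compare_digest(response_from_a, respond(SHARED_KEY, challenge_from_b))
```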
Authentication • Data origin authentication ensures that messages received are indeed from the claimed senders and not from an intruder who hijacked the session • Data origin authentication is initiated after an association setup is established and may be applied to all messages or to selected messages • A smartcard is a physical security token that a user presents during the authentication process.
Authentication • A biometric system is a pattern recognition system • It establishes the authenticity of a specific physiological or behavioral characteristic possessed by a user • Nine types of biometric systems: • fingerprint • iris • retina • face • hand • ear • body odor • voice • signature
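A brief sketch of per-message data origin authentication, assuming a session key established at association setup; a MAC plus a sequence number lets the receiver reject forged or replayed messages:

```python
import hashlib
import hmac
import json

# Hypothetical per-message protection: the sender MACs the message body (including a
# sequence number) with the session key; a hijacker without the key cannot forge or
# replay traffic undetected.
def protect(session_key, seq, payload):
    body = json.dumps({"seq": seq, "payload": payload}).encode()
    tag = hmac.new(session_key, body, hashlib.sha256).hexdigest()
    return body, tag

def verify(session_key, expected_seq, body, tag):
    expected_tag = hmac.new(session_key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected_tag):
        return False                                   # forged or corrupted message
    return json.loads(body)["seq"] == expected_seq     # otherwise a stale/replayed message
```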
IA Design Technique - Block Recovery • Provides correct functional operation in the presence of one or more errors • Implemented to increase the integrity of modules that perform critical functions • Each critical module has a primary and a secondary module • See Exhibit 17, page 168 • After the system has been reset, normal operation continues • Forward block recovery is used for anticipated errors; backward block recovery for unanticipated errors.
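A minimal block recovery sketch (the primary/secondary modules, acceptance test, and checkpoint are hypothetical): the primary block's result is accepted only if it passes an acceptance test; otherwise the state is rolled back to the checkpoint and the secondary block is tried before normal operation resumes:

```python
# Minimal block recovery sketch for one critical module.
def block_recovery(primary, secondary, acceptance_test, checkpoint):
    state = dict(checkpoint)          # save state before executing the critical block
    try:
        result = primary(state)
        if acceptance_test(result):
            return result             # primary block succeeded
    except Exception:
        pass                          # unanticipated error in the primary block
    state = dict(checkpoint)          # backward recovery: roll state back to checkpoint
    result = secondary(state)         # try the secondary (alternate) module
    if not acceptance_test(result):
        raise RuntimeError("both blocks failed -- escalate per contingency plan")
    return result
```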
IA Design Feature - Confinement • Restricts an untrusted program from accessing system resources and executing system processes • Goal – non-interference between independent functions that utilize shared resources, and no unintended inter-component communication • Interference includes: • data corruption – overwriting vital data stored in common memory and used by trusted components • denial of service to critical resources – untrusted components prevent or delay execution of critical shared resources, or consume too much CPU processing time
Confinement • Confinement can be implemented by: • restricting a process from reading data it has written • limiting executable privileges to the minimum needed to perform functions; for example, child processes do not inherit the privileges of the parent • Mandatory Access Control (MAC) • Domain and Type Enforcement (DTE) – a domain is associated with each subject (user or process), a type is associated with each object (system resource), and a matrix defines the permitted interactions • wrappers – encapsulate data, hiding it from anyone other than the intended recipient
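A minimal domain-and-type-enforcement sketch with hypothetical domains, types, and operations; the matrix defines the only permitted interactions, and everything else is confined (denied by default):

```python
# Hypothetical DTE matrix: a domain is associated with each subject, a type with each
# object, and only the listed (domain, type) -> operations interactions are permitted.
DTE_MATRIX = {
    ("untrusted_app",  "public_data"): {"read"},
    ("trusted_daemon", "public_data"): {"read", "write"},
    ("trusted_daemon", "audit_log"):   {"append"},
    # note: no entry lets "untrusted_app" touch "audit_log" at all
}

def confined_access(domain, obj_type, operation):
    return operation in DTE_MATRIX.get((domain, obj_type), set())   # default deny

assert confined_access("untrusted_app", "public_data", "read")
assert not confined_access("untrusted_app", "audit_log", "append")
```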
IA Design - Defense in Depth • Provides several overlapping, successive limiting barriers with respect to a risk threshold • The threshold can only be exceeded if all barriers fail • Reflects common sense: everything is done to prepare for known potential hazards and vulnerabilities • See Exhibit 18, page 171
IA Design – Defensive programming • Prevents system failures or compromises by detecting errors in control flow, data flow, and data during execution and reacting in a predetermined and acceptable manner • Applied to all IA-critical and IA-related functions
Defensive programming • Approached from two directions • First, potential software design errors are compensated for: • range, plausibility, and dimension checks are performed at procedure entry and before executing critical commands • read-only and read-write parameters are separated to prevent overwriting
Defensive programming • Second, failures in the operating environment are anticipated: • perform control flow sequence checks to detect anomalous behavior, such as invalid state transitions • regularly verify hardware and software procedures • conduct plausibility checks on critical input, intermediate, and output variables before acting upon them • Verifying all actions and transitions beforehand is a preventive strategy
Plausibility Checks • Enhance IA integrity by verifying the validity and legitimacy of critical parameters before acting upon them • Detect faults in the execution cycle and prevent them from progressing into failures • The values of parameters that affect IA-critical and IA-related functions are checked • See examples on page 191
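A short sketch combining the defensive programming and plausibility checks described above; the parameter ranges, states, and transition table are hypothetical:

```python
# Range and plausibility checks at procedure entry, plus a control-flow sequence check,
# before acting on a critical command.
VALID_TRANSITIONS = {"IDLE": {"ARMED"}, "ARMED": {"FIRING", "IDLE"}, "FIRING": {"IDLE"}}

def command_valve(current_state, requested_state, temperature_c, pressure_kpa):
    # plausibility/range checks on critical inputs before acting upon them
    if not (-40.0 <= temperature_c <= 150.0):
        raise ValueError("temperature reading implausible")
    if not (0.0 <= pressure_kpa <= 5000.0):
        raise ValueError("pressure reading implausible")
    # control-flow sequence check: only predefined state transitions are allowed
    if requested_state not in VALID_TRANSITIONS.get(current_state, set()):
        raise RuntimeError("anomalous state transition blocked")
    return requested_state   # safe to act on the command
```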
IA Design Technique - Degraded-mode Operations • Purpose: ensure that the functionality of critical functions is maintained in the presence of one or more failures • In the event of anomalous behavior, suspected attacks, or compromise, IA-critical and IA-related functions can rarely be allowed simply to cease operating • Priorities are established for maintaining critical functions and dropping less critical ones • The total system (hardware, software, and communications equipment) is considered when planning degraded-mode operations
Degraded-mode Operations • Tied directly to the operational procedures of the contingency plan • Specify the IA-critical and IA-related functions during the requirements and design phases • Define the criteria for transitioning into degraded-mode operations • Define the maximum time period the system is allowed to remain in degraded mode
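A minimal sketch of priority-based degraded-mode operation (the functions, priorities, and time limit are hypothetical): less critical functions are shed while IA-critical ones continue, and the time in degraded mode is bounded:

```python
# Hypothetical functions ranked by criticality; 1 = IA-critical, higher = less critical.
FUNCTION_PRIORITY = {
    "flight_control": 1,
    "navigation": 2,
    "telemetry_downlink": 3,
    "cabin_entertainment": 4,
}
MAX_DEGRADED_SECONDS = 600         # maximum time the system may remain in degraded mode

def functions_to_keep(degradation_level):
    """Keep only functions whose priority number is <= the allowed level."""
    return [f for f, p in FUNCTION_PRIORITY.items() if p <= degradation_level]

# e.g. after a major failure, run IA-critical and IA-related functions only:
print(functions_to_keep(degradation_level=2))   # ['flight_control', 'navigation']
```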