
Security Vulnerabilities: From Analysis to Detection and Masking Techniques

  1. Security Vulnerabilities: From Analysis to Detection and Masking Techniques Zbigniew Kalbarczyk Coordinated Science Laboratory University of Illinois at Urbana-Champaign Email: kalbar@crhc.uiuc.edu http://www.crhc.uiuc.edu/DEPEND

  2. Objectives and Approach
  • Objective
    • Vulnerability avoidance: formal-reasoning techniques for automated (or semi-automated) identification and removal of vulnerabilities from the code
    • Runtime detection: mechanisms for detecting and/or masking security attacks that exploit residual vulnerabilities in the application code
  • Approach
    • Analyze raw data on security vulnerabilities and attacks
    • Generate models depicting security threats
    • Apply formal methods to uncover security vulnerabilities
    • Implement defensive techniques at the compiler, operating system, and hardware levels

  3. Modeling and Analysis of Vulnerabilities: Breakdown of Vulnerabilities (Bugtraq)

  4. Vulnerability Distributions Across Operating Systems
  • Locations of observed vulnerabilities
    • Majority of the vulnerabilities occurred in applications: RedHat Linux (79%), Windows 2000 (77%), and Solaris (90%)
    • 10% to 20% of vulnerabilities are present in the underlying operating systems

  5. Observations from Vulnerability Analysis (1)
  • Exploiting a vulnerability involves multiple vulnerable operations on several objects
  • Example: Sendmail debugging function signed integer overflow (Bugtraq #3163)
    • Operation 1: manipulate the input integer
    • Operation 2: manipulate the function pointer

  6. Observations from Vulnerability Analysis (2a)
  • Exploits must pass through multiple elementary activities, each providing an opportunity for performing a security check

  7. Observations from Vulnerability Analysis (2b)
  • Three instances of signed integer overflow are classified into three different categories
  • This indicates that execution of the corresponding application includes at least three activities:
    • get an input integer
    • use the integer as the index into an array
    • execute code referred to by a function pointer or a return address

  8. Observations from Vulnerability Analysis (3)
  • For each elementary activity, the vulnerability data and corresponding code inspections allow us to define a predicate which, if violated, results in a security vulnerability
  • Example: Sendmail debugging function signed integer overflow
    • Integer index x is assumed to be in the range [0, 100]
    • The implementation checks only that x ≤ 100
    • Vulnerability: x can be a negative index and underflow the array
    • Correct predicate: 0 ≤ x ≤ 100 (see the sketch below)
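
  A minimal C sketch of the flawed check versus the correct predicate described above; the names (tTvect, set_debug_level_*) and the 101-entry table are illustrative, not taken from the actual sendmail sources:

      /* Only the upper bound is enforced, so a negative x indexes memory
       * below the array and can overwrite a nearby function pointer. */
      static unsigned char tTvect[101];            /* debug levels 0..100 */

      void set_debug_level_buggy(int x, int i)
      {
          if (x > 100)                             /* IMPL check: x <= 100 only */
              return;
          tTvect[x] = (unsigned char)i;            /* x < 0 underflows the array */
      }

      void set_debug_level_fixed(int x, int i)
      {
          if (x < 0 || x > 100)                    /* SPEC predicate: 0 <= x <= 100 */
              return;
          tTvect[x] = (unsigned char)i;
      }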

  9. Primitive FSM
  • We define a primitive FSM (pFSM) to depict an elementary activity; it specifies a predicate (SPEC) that should be guaranteed in order to ensure security
  • [Diagram: a pFSM has a Check State, an Accept State, and a Reject State; transitions are labeled SPEC_ACCEPT / SPEC_REJECT for what the predicate requires and IMPL_ACCEPT / IMPL_REJECT for what the implementation actually does]
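
  As a rough illustration (not code from the study), the SPEC/IMPL distinction of a pFSM can be written down in a few lines of C: the vulnerable inputs are exactly those the implemented check accepts but the SPEC predicate rejects.

      #include <stdbool.h>

      typedef enum { CHECK_STATE, ACCEPT_STATE, REJECT_STATE } pfsm_state;

      /* Where the elementary activity should go according to SPEC. */
      static pfsm_state spec_transition(bool spec_holds)
      {
          return spec_holds ? ACCEPT_STATE : REJECT_STATE;
      }

      /* A vulnerability exists for inputs where the implemented check
       * (IMPL) accepts although SPEC would reject. */
      static bool exposes_vulnerability(bool spec_holds, bool impl_accepts)
      {
          return impl_accepts && !spec_holds;
      }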

  10. Sendmail Debugging Function Signed Integer Overflow (Bugtraq #3163)
  • [FSM overview diagram]
    • Op 1: write an integer to an array location, via elementary activity 1 (get user input) and elementary activity 2 (assign debug level); a function pointer can be overwritten
    • Op 2: manipulate the function pointer, via elementary activity 3; the attacker's malicious code is executed

  11. Elementary Activity 1 of Sendmail Vulnerability
  • Elementary activity 1: get user input, i.e., get strings str_x and str_i and convert them to integers x and i
  • [pFSM1 diagram] Get str_x and str_i; SPEC requires the integer represented by str_x to be ≤ 2^31; only then is the conversion of str_x and str_i to integers x and i well defined (a larger value wraps to a negative x)
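
  A small, hedged example of the pFSM1 reject branch: on a typical platform with 32-bit int, a decimal string representing a value of 2^31 or more ends up negative once forced into a signed int. The specific string and conversion calls below are illustrative, not sendmail's parsing code.

      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
          const char *str_x = "4294967165";        /* attacker-chosen digits */
          long parsed = strtol(str_x, NULL, 10);   /* 4294967165 on an LP64 system */
          int x = (int)parsed;                     /* truncated to 32 bits: typically -131 */
          printf("parsed=%ld  x=%d\n", parsed, x);
          return 0;
      }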

  12. Elementary Activity 2 of Sendmail Vulnerability
  • Elementary activity 2: assign the debug level
  • [pFSM2 diagram] After str_x and str_i are converted to integers x and i, the implementation rejects only x > 100, whereas SPEC rejects x < 0 or x > 100; when 0 ≤ x ≤ 100 the assignment tTvect[x] = i is safe, but a negative x corrupts a function pointer (psetuid)

  13. Elementary Activity 3 of Sendmail Vulnerability
  • Elementary activity 3: manipulation of the function pointer psetuid, which may have been corrupted in activity 2
  • [pFSM3 diagram] When the sendmail program starts, psetuid is loaded into memory and the code it refers to is executed; if psetuid is unchanged, legitimate code runs, but if psetuid was changed, the attacker's malicious code is executed

  14. Summary: the FSM Model of the Sendmail Vulnerability
  • Operation 1: write integer i to tTvect[x]
    • pFSM1: get text strings str_x and str_i; SPEC requires the integer represented by str_x to be ≤ 2^31 before converting str_i and str_x to integers i and x
    • pFSM2: the implementation rejects only x > 100, whereas SPEC rejects x < 0 or x > 100; executing tTvect[x] = i when 0 ≤ x ≤ 100 is violated corrupts the function pointer
  • Operation 2: manipulate the function pointer
    • pFSM3: load the function pointer and execute the code referred to by addr_setuid; if addr_setuid was changed, the malicious code (MCode) is executed instead

  15. NULL HTTPD Heap Overflow Vulnerabilities (Bugtraq #5774, #6255)
  • Op 1: read user input from a socket into a heap buffer
    • pFSM1: get (contentLen, input); SPEC requires contentLen >= 0, but a negative contentLen is accepted and calloc allocates PostData[1024 + contentLen]
    • pFSM2: copy the input from the socket; SPEC requires length(input) <= size(PostData), otherwise the buffer overflows
  • Op 2: allocate and free the buffer
    • pFSM3: calloc is called, and when the buffer is freed the allocator executes B->fd->bk = B->bk; SPEC requires B->fd and B->bk to be unchanged, but the overflow sets B->fd = &addr_free - (offset of field bk) and B->bk = MCode, so a function pointer is corrupted (see the unlink sketch below)
  • Op 3: manipulate the function pointer
    • pFSM4: addr_free is loaded into memory during program initialization and executed when free() is called; SPEC requires addr_free to be unchanged, but after the corruption the attacker's malicious code is executed
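
  For reference, the pFSM3 step above corresponds to the classic dlmalloc-style unlink operation. A stripped-down sketch (field names illustrative) shows why attacker-controlled fd/bk pointers in a freed chunk turn into an arbitrary memory write:

      struct chunk {
          struct chunk *fd;        /* forward pointer in the free list */
          struct chunk *bk;        /* backward pointer in the free list */
      };

      /* Consolidation step performed when chunk B is taken off its free
       * list: two writes go through B->fd and B->bk.  If the overflow has
       * set B->fd = &addr_free - (offset of bk) and B->bk = MCode, the
       * first write replaces addr_free with the address of the malicious
       * code. */
      static void unlink_chunk(struct chunk *B)
      {
          B->fd->bk = B->bk;
          B->bk->fd = B->fd;
      }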

  16. Operation 1 of NULL HTTPD: Read POST data from the socket into an allocated buffer PostData

      Get contentLen                               /* can be negative */
      PostData = calloc(contentLen + 1024, sizeof(char));
      x = 0; rc = 0;
      pPostData = PostData;
      do {
          rc = recv(sock, pPostData, 1024, 0);
          if (rc == -1) {
              closeconnect(sid, 1);
              return;
          }
          pPostData += rc;
          x += rc;
      } while ((rc == 1024) || (x < contentLen));

  • pFSM1: get (contentLen, input), where contentLen is an integer and input is the string to be read from the socket; SPEC requires contentLen >= 0, but the code accepts a negative contentLen and callocs PostData[1024 + contentLen]
  • pFSM2: copy input from the socket to PostData by the recv() call; SPEC requires length(input) <= size(PostData), which the undersized buffer cannot guarantee
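
  A hedged sketch of what the SPEC transitions of pFSM1 and pFSM2 ask for in code: reject a negative (or unreasonably large) contentLen and cap each recv() so the copy can never run past the calloc'd buffer. The function name and the 1 MB limit are assumptions for illustration, not NULL HTTPD's actual fix.

      #include <stdlib.h>
      #include <sys/types.h>
      #include <sys/socket.h>

      char *read_post_data_checked(int sock, long contentLen)
      {
          if (contentLen < 0 || contentLen > (1L << 20))   /* pFSM1 SPEC: contentLen >= 0 (and bounded) */
              return NULL;

          char *PostData = calloc((size_t)contentLen + 1024, sizeof(char));
          if (PostData == NULL)
              return NULL;

          size_t x = 0;
          while (x < (size_t)contentLen) {                 /* pFSM2 SPEC: never read past PostData */
              size_t want = (size_t)contentLen - x;
              if (want > 1024)
                  want = 1024;
              ssize_t rc = recv(sock, PostData + x, want, 0);
              if (rc <= 0) {
                  free(PostData);
                  return NULL;
              }
              x += (size_t)rc;
          }
          return PostData;
      }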

  17. Classes of Modeled Vulnerabilities
  • Signed Integer Overflow
  • Heap Overflow
  • Stack Overflow
  • Format String Vulnerabilities
  • File Race Conditions
  • Input Validation Vulnerabilities (some)

  18. Common pFSM Types
  • Object type check: verify whether the input object is of the type that the operation is defined on
  • Content and attribute check: verify whether the contents and the attributes of the object meet the security guarantee
  • Reference consistency check: verify whether the binding between an object and its reference is preserved from the time the object is checked to the time the operation is applied to it (illustrated below)
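
  The reference consistency check is the one defeated by file race conditions (TOCTOU). A small, hedged C illustration: instead of checking a path and then opening it (two separate resolutions of the same name), the check is performed on the descriptor that will actually be used, so the binding between object and reference cannot change in between.

      #include <fcntl.h>
      #include <sys/stat.h>
      #include <unistd.h>

      /* Open 'path' only if it is a regular file, checking the object that
       * was actually opened rather than re-resolving the name. */
      int open_regular_file(const char *path)
      {
          int fd = open(path, O_RDONLY | O_NOFOLLOW);
          if (fd < 0)
              return -1;

          struct stat st;
          if (fstat(fd, &st) < 0 || !S_ISREG(st.st_mode)) {
              close(fd);       /* type/attribute check failed on the opened object */
              return -1;
          }
          return fd;           /* fd refers to exactly the object that was checked */
      }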

  19. Lessons Learned
  • The finite state machine model for vulnerability analysis
    • forces rigorous reasoning about vulnerabilities and exploits
    • pinpoints opportunities for effective security checks
  • Demonstrates that exploits (e.g., format string, integer overflow, heap overflow, and buffer overflow) succeed because of
    • predictable program memory layout
    • unprotected control data
  • A common characteristic of these vulnerabilities: they all allow unauthorized tampering with control information
  • Successful exploitation can force a program to execute malicious code
    • > 66% of all CERT advisories (2000-2003)

  20. How Do Attacks Work?
  • Two conditions for a successful attack
    • Injecting malicious code/data at address m in the application's memory
    • Changing control data at address p to point to m
  • [Diagram: msg.fptr at address p initially holds printf; overflowing msg.buf[96] plants malicious code at m and overwrites msg.fptr with m]

      struct message {
          char buf[96];
          int (*fptr)(char*);
      };
      struct message msg;

      int get_message(…) {
          msg.fptr = printf;     /* control data stored at address p */
          gets(msg.buf);         /* unbounded read: can fill buf with malicious code and overwrite fptr with m */
          msg.fptr(msg.buf);     /* indirect call now transfers control to the attacker's code at m */
      }

  21. Characteristics of Attacks
  • Key to a successful attack
    • Correctly determine the runtime location of control information, e.g., a return address or a function pointer
  • An attacker's approach
    • Determine the type/version of the target OS and application
    • Configure a pilot system to mimic the target system
    • Construct and test the attack using the pilot system
  • Why can the runtime location of control information be determined?
    • Memory layout is fixed and addresses are highly predictable
    • Lack of diversity in modern systems: OS memory management is fixed
    • Once compiled and deployed, a program's memory layout remains fixed across the application's lifetime
    • This uniformity becomes an unnecessary burden (see the sketch below)
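
  The predictability argument can be seen with a few lines of C (illustrative only): with a fixed memory layout, e.g., address-space randomization disabled, the printed addresses are identical on every run, so values learned on a pilot system carry over to the target; under per-invocation randomization they change every time.

      #include <stdio.h>
      #include <stdlib.h>

      int global_var;                          /* static data segment */

      int main(void)
      {
          int stack_var;                       /* user stack */
          void *heap_ptr = malloc(16);         /* user heap */

          printf("stack %p  heap %p  data %p  code %p\n",
                 (void *)&stack_var, heap_ptr,
                 (void *)&global_var, (void *)&main);
          free(heap_ptr);
          return 0;
      }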

  22. Transparent Runtime Randomization
  • Introduce diversity into a system
    • Dynamically randomize the memory layout of a program
    • Each invocation has a different layout
  • Defeating attacks
    • Breaks the memory-layout assumption
    • Makes it hard to determine m and p
  • Implementation: transparent to the application
    • Modify the Linux dynamic program loader
    • Position-independent regions: stack, heap, shared libraries
    • Position-dependent region: global offset table (GOT)

  23. Position-Independent Regions
  • Different sections sit at different fixed locations by default
  • Change the kernel/loader routines (part of the process initialization modules)
  • A random offset is applied to the different regions
  • [Memory-layout diagram, high to low addresses]
    • 0xC0000000 to 0xFFFFFFFF: kernel space
    • 0xBFFFFFFC - Rand: user stack
    • 0x40000000 ± Rand: shared libraries
    • End_of_bss + Rand: user heap
    • 0x08048000 upward: user code, static data, bss

  24. Position-Dependent Region
  • [Diagram: GOT relocation]
    • The GOT is copied to a new, randomly placed location on the heap (got_printf, holding the address of printf(), becomes new_got_printf)
    • The PLT is changed accordingly: plt_printf goes from "jmp *got_printf" to "jmp *new_got_printf"
    • Program code is unchanged; it still executes "call plt_printf"

  25. Evaluation – Effectiveness
  • Experimental evaluation
    • Security test-bed
    • Real-world applications with different types of vulnerabilities
    • Real attacks
  • Results
    • Without TRR, the attacks succeed in obtaining a local or remote shell
    • With TRR, the attacks cause the vulnerable program to crash
    • A crash is an acceptable outcome compared to the program being hijacked

  26. Runtime Detection: Back to Data
  • Many vulnerabilities (> 66%) are due to pointer taintedness: a pointer value is derived directly or indirectly from user input (illustrated below)
  • Preventing pointer taintedness can defeat many real-world attacks
    • stack buffer overflow, format string, heap corruption, integer overflow
  • Avoidance of pointer taintedness
    • uncover/remove vulnerabilities by source code analysis
  • Detection of pointer taintedness
    • check pointers at runtime
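
  A minimal, hedged illustration of the definition (variable names are invented, and whether p really sits where the overflow can reach it depends on the compiler's stack layout): after the unbounded copy, the pointer that is later dereferenced may hold bytes that came from user input, i.e., it is tainted.

      #include <string.h>

      void handle_request(const char *user_input)
      {
          char buf[16];
          char *p = buf;

          strcpy(buf, user_input);   /* no bounds check: bytes beyond buf[15] can overwrite p */
          *p = 'A';                  /* if p was overwritten, this writes through a pointer
                                        derived from user input: a tainted-pointer dereference */
      }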

  27. Avoidance of Pointer Taintedness
  • Formally define pointer taintedness
  • Develop a theorem-proving technique to analyze C source code at the machine-code level
  • A significant portion of vulnerabilities (> 33%) are due to errors in library functions or incorrect invocations of library functions
    • Extract a set of preconditions for each analyzed function
    • Satisfaction of the preconditions implies no possibility of pointer taintedness inside the function (see the sketch below)
  • Evaluation
    • Analyze strcpy(), printf(), free(), and the socket-read functions of HTTP servers
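
  To make the idea of extracted preconditions concrete, here is a paraphrase (not the paper's exact formulas) of the kind of condition the analysis would produce for strcpy(dst, src), stated as assertions on a hypothetical wrapper; in the actual approach these are proof obligations discharged statically, not checks executed at runtime.

      #include <assert.h>
      #include <string.h>

      /* Illustrative preconditions for strcpy(dst, src):
       *  - dst and src are valid (and untainted) pointers;
       *  - the destination buffer can hold src plus its terminator. */
      char *checked_strcpy(char *dst, size_t dst_size, const char *src)
      {
          assert(dst != NULL && src != NULL);
          assert(strlen(src) + 1 <= dst_size);
          return strcpy(dst, src);
      }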

  28. Extracting Security Specifications by Theorem Prover
  • The C source code of a library function is automatically translated into a formal semantic representation
  • Theorem generation: for each pointer dereference in an assignment, generate a theorem stating that the pointer is not tainted
  • Theorem proving: produces a set of sufficient conditions that imply the validity of the theorems; these conditions are the security specifications of the analyzed function

  29. Runtime Pointer Taintedness Detection
  • A processor architectural-level mechanism to detect pointer taintedness
    • Implemented a taintedness-aware memory system
    • Extended ALU instructions to propagate taintedness in memory
  • Evaluation using network applications and/or SPEC benchmarks
    • Effective in detecting attacks that exploit memory corruption
    • Transparent to applications; precompiled binaries can run unmodified
    • No known false alarms
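
  A software sketch of the mechanism (the actual proposal is an architectural extension, so this is only an analogy with invented names): a shadow taint bit accompanies each value, is set when data arrives from an untrusted source, is propagated by ALU-style operations, and is checked whenever a value is about to be used as an address.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct { uint32_t value; bool tainted; } word_t;

      static word_t from_user_input(uint32_t v) { return (word_t){ v, true  }; }
      static word_t constant(uint32_t v)        { return (word_t){ v, false }; }

      /* ALU operations OR together the taint bits of their operands. */
      static word_t alu_add(word_t a, word_t b)
      {
          return (word_t){ a.value + b.value, a.tainted || b.tainted };
      }

      /* Detection point: an address about to be dereferenced must be untainted. */
      static void check_pointer(word_t addr)
      {
          if (addr.tainted) {
              fprintf(stderr, "pointer taintedness detected at 0x%08x\n",
                      (unsigned int)addr.value);
              abort();
          }
      }

      int main(void)
      {
          word_t base   = constant(0x08048000u);
          word_t offset = from_user_input(0xdeadbeefu);   /* e.g., bytes read from a socket */
          check_pointer(alu_add(base, offset));           /* aborts: tainted address */
          return 0;
      }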
