Security in Computing, Chapter 3: Program Security. Summary created by Kirk Scott
3.1 Secure Programs • 3.2 Non-Malicious Program Errors • 3.3 Viruses and Other Malicious Code • 3.4 Targeted Malicious Code • 3.5 Controls Against Program Threats • 3.6 Summary of Program Threats and Controls
Treatment of topics will be selective. • You’re responsible for everything, but I will only be presenting things that I think merit some discussion • You should think of this chapter and the following ones as possible sources for presentation topics
3.1 Secure Programs • In general, what does it mean for a program to be secure? • It supports/has: • Confidentiality • Integrity • Availability
How are confidentiality, integrity, and availability measured? • Given the ability to measure, what is a “passing” score? • Conformance with formal software specifications? • Some match-up with “fitness for purpose”? • Some absolute measure?
“Fitness for purpose” is the “winner” of the previous list • But it is still undefined • What is the value of software, and what constitutes adequate protection? • How do you know that it is good enough, i.e., fit for the purpose? • (This will remain an unanswered question, but that won’t stop us from forging ahead.)
Code Quality—Finding and Fixing Faults • When measuring code quality, whether quality in general or quality for security: • Empirically, the more faults found, the more there are yet to be found • The number of faults found tends to be a negative, not a positive, indicator of code quality
Find and fix is a bad model generally: • You may not be looking for, and therefore may not find, serious faults • Even if you do, you are condemned to trying to fix the code after the fact • After-the-fact fixes, or patches, tend to have bad characteristics
Patches focus on the immediate problem, ignoring its context and overall meaning • There is a tendency to fix in one spot, not everywhere that this fault or type of fault occurs • Patches frequently have non-obvious side-effects elsewhere • Patches often cause another fault or failure elsewhere • Frequently, patching can’t be accomplished without affecting functionality or performance
What’s the Alternative to Find and Fix? • Security has to be a concern from the start in software development • Security has to be designed into a system • Such software won’t go down the road of find and fix • The question remains of how to accomplish this
The book presents some terminology for talking about software security • This goes back to something familiar to programmers • Program bugs are of three general kinds: • Misunderstanding the problem • Faulty program logic • Syntax errors
IEEE Terminology • Human error can cause any one or more of these things • In IEEE quality terminology, these are known as faults • Faults are an internal, developer-oriented view of the design and implementation of security in software
The IEEE terminology also identifies software failures • These are departures from required behavior • In effect, these are run-time manifestations of faults • They may actually be discovered during walk-throughs rather than at run time
Note that specifications as well as implementations can be faulty • In particular, specifications may not adequately cover security requirements • Therefore, software may “fail” even though it’s in conformance with specifications • Failures are an external, user-oriented view of the design and implementation of security in software
Book Terminology • The framework presented by the book, beginning in chapter 1, is based on this alternative terminology: • Vulnerability: This is defined as a weakness in a system that can be exploited for harm. • This seems roughly analogous to a fault. • It is more general than the three types of programming errors listed earlier • However, it’s more specific to security • It concerns something that is internal to the system
Program Security Flaw: This is defined as inappropriate program behavior caused by a vulnerability. • This seems roughly analogous to a failure. • However, this inappropriate behavior in and of itself may not constitute a security breach. • It is something that could be exploited. • It concerns something that could be evident or taken advantage of externally.
The Interplay between Internal and External • Both the internal and external perspectives are important • Evident behavior problems give a sign that something inside has to be fixed • However, some faults may cause bad behavior which isn’t obvious, rarely occurs, isn’t noticed, or isn’t recognized to be bad • The developer has to foresee things on the internal side as well as react to things on the external side
Classification of Faults/Vulnerabilities • Intentional: A bad actor may intentionally introduce faulty code into a software system • Unintentional: More commonly, developers write problematic code unintentionally • The code has a security vulnerability and attackers find a way of taking advantage of it
Challenges to Writing Secure Code • The size and complexity of code is a challenge • Size alone increases the number of possible points of vulnerability • The interaction of multiple pieces of code leads to many more possible vulnerabilities • Specifications are focused on functional requirements: • What the code is supposed to do
It is essentially impossible to list and test all of the things that code should not allow. • This leaves lots of room both for honest mistakes and bad actors
Changing technology is also both a boon and a bane • The battle of keeping up is no less difficult in security than in other areas of computing • Time is spent putting out today’s fires with today’s technologies while tomorrow’s are developing • On the other hand, some of tomorrow’s technologies will help with security as well as being sources of new concerns
Six Kinds of Unintentional Flaws • Intentionally introduced malicious code will be covered later. • Here is a classification of 6 broad categories of unintentional flaws in software security: • 1. Identification and authorization errors (hopefully self-explanatory) • 2. Validation errors—incomplete or inconsistent checks of permissions
3. Domain errors—errors in controlling access to data • 4. Boundary condition errors—errors on the first or last case in software • 5. Serialization and aliasing errors—errors in program flow order • 6. General logic errors—any other exploitable problem in the logic of software design
3.2 Non-Malicious Program Errors • There are three broad classes of non-malicious errors that have security effects: • 1. Buffer overflows • 2. Incomplete mediation • 3. Time-of-check to time-of-use errors
Buffer Overflows • The simple idea of an overflow can be illustrated with an out-of-bounds array access • In general, in a language like C, the following is possible: • char sample[10]; • sample[10] = 'B'; • Similar undesirable things can be even more easily and less obviously accomplished when using pointers (addresses) to access memory, as sketched below
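A minimal sketch of the pointer variant (the function name is hypothetical, and the out-of-bounds write is deliberate):

    /* Nothing in C stops a pointer from walking past the end of
       an array; the compiler performs no bounds checking. */
    void pointer_overflow_demo(void)
    {
        char sample[10];
        char *p = sample;

        for (int i = 0; i <= 10; i++) {  /* 11 iterations for a 10-char array */
            *p = 'B';                    /* the last write lands out of bounds */
            p++;
        }
    }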
Cases to Consider • 1. The array/buffer is in user space. • A. The out of bounds access only steps on user space. • It may or may not trash user data/code, causing problems for that process. • B. The out-of-bounds location (index 10, just past the end of the array) falls outside of the process’s allocation. • The O/S should kill the process for violating memory restrictions.
2. The array/buffer is in system space. • Suppose buffer input takes this form: • while(more to read) • { • sample[i] = getNextChar(); • i++; • }
There’s no natural boundary on what the user might submit into the buffer. • The input could end up trashing/replacing data/code in the system memory space. • This is a big vulnerability. • The book outlines two common ways that attackers can take advantage of it.
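Before looking at those attacks, here is a sketch of the same loop with a bound added; more_to_read() and getNextChar() stand in for the slide’s pseudocode, and BUF_SIZE is assumed to be the buffer’s declared size:

    #define BUF_SIZE 10

    char sample[BUF_SIZE];
    int i = 0;

    /* Stop at the buffer's capacity as well as at end of input,
       so oversized input cannot write past the end of the array. */
    while (more_to_read() && i < BUF_SIZE) {
        sample[i] = getNextChar();
        i++;
    }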
Attack 1: On the System Code • Given knowledge of the relative position of the buffer and system code in memory • The buffer is overflowed to replace valid system code with something else • A primitive attack would just kill the system code, causing a system crash
A sophisticated attack would replace valid system code with altered system code • The altered code may consist of correct code with additions or modifications • The modifications could have any effect desired by the attacker, since they will run as system code
The classic version of this attack would modify the system code so that it granted higher level (administrator) privileges to a user process • Game over—the attacker has just succeeded in completely hijacking the system and at this point can do anything else desired
Attack 2: On the Stack • Given knowledge of the relative position of the buffer and the system stack • The buffer is overflowed to replace valid values in the stack with something else • Again, a primitive attack would just cause a system crash
A more sophisticated attack would change either the calling address or the return address of one of the procedure calls on the stack. • It’s also possible that false code would be loaded • Changing the addresses changes the execution path • This makes it possible to run false code under system privileges
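A standard illustration of the pattern (not from the book; the function names are hypothetical and a 16-byte buffer is assumed):

    #include <string.h>

    /* buf lives on the stack near the saved return address.
       strcpy() copies until it finds a '\0', so input longer than
       15 characters overwrites adjacent stack contents and can
       eventually reach the return address. */
    void vulnerable(const char *untrusted)
    {
        char buf[16];
        strcpy(buf, untrusted);   /* no length check: the classic flaw */
    }

    /* Bounding the copy to the buffer's size closes the hole. */
    void safer(const char *untrusted)
    {
        char buf[16];
        strncpy(buf, untrusted, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
    }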
The book refers to a paper giving details on this kind of attack • If you had a closed system, you could experiment with things like this • The book says a day or two’s worth of analysis would be sufficient to craft such an attack
Do not try anything like this over the Web unless you have an unrequited desire for a same-sex room-mate in a federal facility • The above comment explains why this course is limited in detail and not so much fun • All of the fun stuff is illegal • There are plenty of resources on the Internet for the curious, but “legitimate” sources, like textbooks, have to be cautious about what they reveal
A General Illustration of the Idea • Parameter passing on the Web illustrates buffer overflows • Web servers accept parameter lists in URL format • The different parameters are parsed and copied into their respective buffers/variables • A user can cause an overflow if the receiver wasn’t coded to prevent it.
Essentially, buffer overflows have existed from the dawn of programming • In the good old, innocent days they were just an obscure nuisance known only to programmers • In the evil present, they are much more. • They form the basis for attacks whose goals are as varied as the attackers.
Incomplete Mediation • Technically, incomplete mediation means that data is exposed somewhere in the pathway between submission and acceptance • The ultimate problem is the successful submission and acceptance of bad data • The cause of the problem is the break, or lack of security in the pathway
The book uses the same kind of scenario used to illustrate buffer overflow • Suppose a form in a browser takes in dates and phone numbers • These are forwarded to a Web server in the form of a URL
The developer may put data validation checks into the client side code • However, the URL can be edited or a fake URL can be generated and forwarded to the server • This thwarts the validation checks and any security they were supposed to provide
What Can Go Wrong? • If the developer put the validation checks into the browser code, most likely the server code doesn’t contain checks. • Parameters of the wrong data type or with out-of-range values can have bad effects • They may cause the server code to generate bad results • They may also cause the server code to crash
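As a sketch of what a server-side re-check might look like, here is a hypothetical validator for a month parameter that arrives as a string; the server applies it regardless of any checking done in the browser code:

    #include <errno.h>
    #include <stdlib.h>

    /* Returns 1 if param is a clean integer in 1..12, 0 otherwise.
       Rejects empty strings, trailing junk, and out-of-range values. */
    int valid_month(const char *param)
    {
        char *end;
        errno = 0;
        long m = strtol(param, &end, 10);
        if (errno != 0 || end == param || *end != '\0')
            return 0;                 /* not a well-formed integer */
        return m >= 1 && m <= 12;     /* range check */
    }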
An Example from the Book • The book’s example shows a more insidious kind of problem • A company built an e-commerce site where the code on the browser side showed the customer the price • That code also forwarded the price back to the server for processing
The code was exposed in a URL and could be edited • “Customers” (a.k.a., thieves) could have edited the price before submitting the online purchase • The obvious solution was to use the secure price on the server side and then show the customer the result
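For instance (hypothetical URL and values): a purchase submitted as http://www.example.com/order?custID=101&price=205.50 could be edited and resubmitted as http://www.example.com/order?custID=101&price=1.50, and a server that trusts the price parameter would happily process the sale at the edited price.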
There are several things to keep in mind in situations like this: • Is there a way of doing complete mediation? • I.e., can data/parameters be protected when they are “in the pathway”? • If not, can complete validation checking be done in the receiving code? • In light of the example, you might also ask, is there a way of keeping all of the vital data and code limited to the server side where it is simply inaccessible?
Time-of-Check to Time-of-Use Errors • This has the weird acronym of TOCTTOU
In a certain sense, TOCTTOU problems are just a special kind of mediation problem • They arise in a communication or exchange between two parties • By definition, the exchange takes place sequentially, over the course of time • If the exchange involves the granting of access permission, for example, security problems can result
Example • Suppose data file access requests are submitted by a requester to a granter in this form: • Requester id + file id • Suppose that the access management system appends an approval indicator, granting access, and the request is stored for future servicing
The key question is where it’s stored • Is it stored in a secure, system-managed queue? • If so, no problem should result • Or is it given back to, or stored in user space? • If so, then it is exposed and the user may edit it • It would be possible to change the requester id, the file id, or both between the time of check and the time of use
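The book’s scenario involves a stored request descriptor, but the same class of error shows up in a well-known POSIX pattern, sketched here with a hypothetical file name:

    #include <fcntl.h>
    #include <unistd.h>

    /* Time of check: access() tests permissions on the name.
       Time of use: open() resolves the name a second time.
       Between the two calls, an attacker can swap /tmp/report
       for a symlink to a file the caller should not reach. */
    int open_checked_badly(void)
    {
        if (access("/tmp/report", R_OK) == 0)      /* check */
            return open("/tmp/report", O_RDONLY);  /* use   */
        return -1;
    }

    /* Safer pattern: open first, then examine the object actually
       opened (e.g., fstat() on the descriptor), so the check and
       the use refer to the same object. */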