Learn how information is stored as bits in a computer from the ground up, exploring logic gates, Boolean expressions, memory units, and scaling up memory technology. Dive deep into the fundamentals of computer memory systems.
CMSC 100: From the Bottom Up: It's All Just Bits
Professor Marie desJardins
mariedj@cs.umbc.edu
Thursday, September 10, 2009
Just Bits
• Inside the computer, all information is stored as bits
• A "bit" is a single unit of information
• Each "bit" is set to either zero or one
• How do we get complex systems like Google, Matlab, and our cell phone apps? Abstraction!
Storing Bits
• How are these "bits" stored in the computer?
• A bit is just an electrical signal or voltage (by convention: "low voltage" = 0; "high voltage" = 1)
• A circuit called a "flip-flop" can store a single bit
  • A flip-flop can be "set" (using an electrical signal) to either 0 or 1
  • The flip-flop will hold that value until it receives a new signal telling it to change
• Bits can be operated on using gates (which "compute" a function of two or more bits)
• Later we'll talk about how a flip-flop can be built out of gates
Manipulating Bits
• What does a bit value "mean"?
  • 0 = FALSE [off, no]
  • 1 = TRUE [on, yes]
• Just as in regular algebra, we can think of "variables" that represent a single bit
• If X is a Boolean variable, then the value of X is either:
  • 0 [FALSE, off, no] or
  • 1 [TRUE, on, yes]
• In algebra, we have operations that we can perform on numbers:
  • Unary operations: Negate, square, square-root, …
  • Binary operations: Add, subtract, multiply, divide, …
• What are the operations you can imagine performing on bits?
  • Unary: ??
  • Binary: ??
• Thought problem: How many unary operations are there? How many binary operations? (one way to count them is sketched below)
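One way to check the thought problem is to count truth tables by brute force. The sketch below is not from the original slides; it is a minimal Python illustration of the counting argument (a unary truth table has 2 rows, a binary one has 4, and each row's output can independently be 0 or 1):

```python
from itertools import product

# A unary Boolean operation assigns an output bit to each of the 2 possible
# inputs; a binary operation assigns an output bit to each of the 4 possible
# input pairs.
def count_operations(num_inputs):
    rows = 2 ** num_inputs          # number of rows in the truth table
    return 2 ** rows                # each row's output can be 0 or 1

print("Unary operations: ", count_operations(1))   # 2^2 = 4
print("Binary operations:", count_operations(2))   # 2^4 = 16

# For example, here are all 4 unary operations written out as tables:
for outputs in product([0, 1], repeat=2):
    print("f(0) =", outputs[0], "  f(1) =", outputs[1])
```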
Remember Function Tables?
Boolean Operations: "Truth Tables"
• Negation (NOT): flips its input -- NOT 0 = 1, NOT 1 = 0
• Conjunction (AND): output is 1 only when both inputs are 1
• Disjunction (OR): output is 1 when at least one input is 1
• Or equivalently, the same tables can be written with TRUE/FALSE in place of 1/0
Logic Gates
• In hardware, Boolean operations are implemented using circuits called logic gates
• XOR: Exclusive or (one input is TRUE, but not both)
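The XOR gate is itself expressible in terms of the basic AND, OR, and NOT operations. A minimal Python sketch (the function name xor_gate is mine, just for illustration):

```python
# XOR is true when exactly one input is true: (A OR B) AND NOT (A AND B)
def xor_gate(a, b):
    return (a or b) and not (a and b)

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", xor_gate(a, b))
```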
Boolean Expressions
• What would a truth table look like for the expression: (A ∧ B) ∨ (B ∧ C)?
(Boolean) Algebraic Laws EC
• DeMorgan's Theorem
• Analogous to the distributive law in regular algebra
• ¬(A ∧ B) = ¬A ∨ ¬B
• ¬(A ∨ B) = ¬A ∧ ¬B
Logic Circuits
• We can implement any logical expression just by assembling the associated logic gates in the right order
• What would a logic circuit look like for the expression: (A ∧ B) ∨ (B ∧ C)?
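Assuming the expression is (A ∧ B) ∨ (B ∧ C) as written above, a short Python sketch can print its truth table, which is also a handy way to check a hand-built logic circuit:

```python
from itertools import product

# Evaluate (A AND B) OR (B AND C) for every combination of inputs and
# print the resulting truth table.
def expr(a, b, c):
    return (a and b) or (b and c)

print(" A  B  C | output")
for a, b, c in product([0, 1], repeat=3):
    print(f" {a}  {b}  {c} |   {expr(a, b, c)}")
```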
Implementing a Flip-Flop
Setting the Output to 1
Setting the Output to 1 (cont.)
What Else Can We Do?
• What happens if we put a zero on both inputs?
• …a one on the upper input and a zero on the lower input?
• …a zero on the upper input and a one on the lower input?
• …a one on both inputs?
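One common way to build a flip-flop is from two cross-coupled NOR gates (an "SR latch"); the gate arrangement in the slides' figures may differ, so treat this Python sketch as an illustration of the general idea rather than the circuit shown in class. It traces the set, hold, and reset cases asked about above:

```python
# Minimal simulation of an SR latch built from two cross-coupled NOR gates.
def nor(a, b):
    return int(not (a or b))

def settle(s, r, q, q_bar):
    """Feed the inputs through the cross-coupled gates until the outputs stop changing."""
    for _ in range(10):                       # a few passes is plenty
        new_q = nor(r, q_bar)
        new_q_bar = nor(s, q)
        if (new_q, new_q_bar) == (q, q_bar):
            break
        q, q_bar = new_q, new_q_bar
    return q, q_bar

q, q_bar = 0, 1                               # start with Q = 0
q, q_bar = settle(1, 0, q, q_bar)             # S=1, R=0: set
print("after set:   Q =", q)                  # Q = 1
q, q_bar = settle(0, 0, q, q_bar)             # S=0, R=0: hold
print("after hold:  Q =", q)                  # Q = 1 (the bit is remembered!)
q, q_bar = settle(0, 1, q, q_bar)             # S=0, R=1: reset
print("after reset: Q =", q)                  # Q = 0
# Putting a one on both inputs at once is normally avoided: it drives both
# outputs to 0, and the result after releasing the inputs is unpredictable.
```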
Memory & Abstraction
• There are other circuits that will also implement a flip-flop
• These are sometimes called SRAM (Static Random Access Memory)
  • …meaning that once the circuit is "set" to 1 or 0, it will stay that way until a new signal is used to re-set it
• DRAM (Dynamic Random Access Memory):
  • Uses a capacitor to store the charge (has to be refreshed periodically)
• BUT…
• Abstraction tells us that (for most purposes) it really doesn't matter how we implement memory -- we just know that we can store (and retrieve) "a bit" at a time
Storing Information
• One bit can't tell you much… (just 2 possible values)
• Usually we group 8 bits together into one "byte"
• How many possible values (combinations) are there for one byte?
• A byte can just be thought of as an 8-digit binary (base 2) number
• Michael Littman's octopus counting video
• Low-order or least significant bit == ones place
• Next bit would be the "10s place" in base 10 -- what about base 2?
• High-order or most significant bit in a byte == ?? place
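To see the byte-as-binary-number idea concretely, here is a small Python sketch (the example bit pattern 10110101 is arbitrary, not from the slides):

```python
# Interpreting a byte as a base-2 number: each bit position is a power of
# two, just as each digit position in base 10 is a power of ten.
byte = "10110101"

value = 0
for position, bit in enumerate(reversed(byte)):   # position 0 = ones place
    value += int(bit) * 2 ** position
print(value)                 # 181
print(int(byte, 2))          # same answer using Python's built-in parser
print(2 ** 8)                # 256 possible values for one byte
```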
Orders of Magnitude
• One 0/1 ("no/yes") "bit" is the basic unit of memory
• Eight (2^3) bits = one byte
• 1,024 (2^10) bytes = one kilobyte (1K)*
• 1,024K = 1,048,576 (2^20) bytes = one megabyte (1M)
• 1,024M (2^30 bytes) = one gigabyte (1G)
• 1,024G (2^40 bytes) = one terabyte (1T)
• 1,024T (2^50 bytes) = one petabyte (1P)
• ... 2^80 bytes = one yottabyte (1Y?)
Scaling Up Memory
• Computer chip:
  • Many millions of circuits
  • Etched onto a silicon wafer using VLSI (Very Large-Scale Integration) technology
• Lots of flip-flops or DRAM devices == memory chip
• Each byte has an address (and we use binary numbers to represent those addresses…)
• An address is represented using a word, which is typically either:
  • 2 bytes (16 bits) -- earliest PCs
    • Only 64K combinations, so memory is limited to 64K bytes (addresses 0 through 65,535)!
  • 4 bytes (32 bits) -- first Pentium chips
    • This brings us up to 4G bytes of memory (addresses 0 through 4,294,967,295)!
  • 8 bytes (64 bits) -- modern Pentium chips
    • Up to 16.8 million terabytes (the largest address is 18,446,744,073,709,551,615!)
• http://cnx.org/content/m13082/latest/
Hexadecimal
• It would be very inconvenient to write out a 64-bit address in binary: 0010100111010110111110001001011000010001110011011110000011100000
• Instead, we group each set of 4 bits together into a hexadecimal (base 16) digit:
  • The digits are 0, 1, 2, …, 9, A (10), B (11), …, E (14), F (15)
  0010 1001 1101 0110 1111 1000 1001 0110 0001 0001 1100 1101 1110 0000 1110 0000
     2    9    D    6    F    8    9    6    1    1    C    D    E    0    E    0
• …which we write, by convention, with a "0x" preceding the number to indicate it's heXadecimal:
  • 0x29D6F89611CDE0E0
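The grouping-by-fours rule is easy to check in Python; this sketch converts the 64-bit address above to hexadecimal:

```python
# Converting the 64-bit binary address above to hexadecimal by grouping
# the bits four at a time.
bits = "0010100111010110111110001001011000010001110011011110000011100000"

hex_digits = ""
for i in range(0, len(bits), 4):
    group = bits[i:i + 4]                 # four bits -> one hex digit
    hex_digits += format(int(group, 2), "X")
print("0x" + hex_digits)                  # 0x29D6F89611CDE0E0

# Python can also do the whole conversion in one step:
print(hex(int(bits, 2)))                  # 0x29d6f89611cde0e0
```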
Other Memory Concepts (read the book!!)
• Mass storage: hard disks, CDs, USB/flash drives…
  • Stores information without a constant supply of electricity
  • Larger than RAM
  • Slower than RAM
  • Often removable
  • Physically often more fragile than RAM
• CDs, hard drives, etc. actually spin and have tracks divided into sectors, read by a read/write head
  • Seek time: Time to move the head to the proper track
  • Latency: Time to wait for the disk to rotate into place
  • Access time: Seek + latency
  • Transfer rate: How many bits/second can be read/written once you've found the right spot
• Flash memory: high capacity, no moving parts, but less reliable for long-term storage
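As a rough illustration of how these quantities combine, the Python sketch below uses made-up drive numbers (not from the slides or any real disk specification):

```python
# Hypothetical numbers, just to show how the pieces combine.
seek_time_ms = 9.0            # time to move the head to the right track
latency_ms = 4.2              # average wait for the disk to rotate into place
transfer_rate_mb_per_s = 100  # how fast data streams once we're in position

access_time_ms = seek_time_ms + latency_ms          # access = seek + latency
read_time_ms = (1 / transfer_rate_mb_per_s) * 1000  # time to read 1 MB

print(f"access time:   {access_time_ms} ms")        # 13.2 ms
print(f"1 MB transfer: {read_time_ms} ms")          # 10.0 ms
```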
Representing Information
• Positive integers: Just use the binary number system
• Negative integers, letters, images, … not so easy!
• There are many different ways to represent information
  • Some are more efficient than others
• … but once we've solved the representation problem, we can use that information without considering how it's represented… via Abstraction!
Representing Characters
• ASCII representation: one byte [actually 7 bits…] == one letter == an integer from 0-127
• No specific reason for this assignment of letters to integers!
• UNICODE is a popular 16-bit representation that supports accented characters like é
• [Chart borrowed from ha.ckers.org]
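Python's built-in ord() and chr() functions expose the character-to-integer mapping directly (ASCII for the characters used here), so the idea is easy to try out:

```python
# Each character is stored as a small integer; ord() gives the integer
# for a character and chr() goes the other way.
for ch in "CMSC":
    print(ch, "->", ord(ch), "->", format(ord(ch), "08b"))

print(chr(65), chr(97))        # 'A' and 'a'
```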
Representing Integers
• Simplest idea ("ones' complement"):
  • Use one bit for a "sign bit": 1 means negative, 0 means positive
  • The other bits are "complemented" (flipped) in a negative number
• So, for example, +23 (in a 16-bit word) is represented as: 0000000000010111 and -23 is represented as: 1111111111101000
• But there are two different ways to say "zero" (0000… and 1111…)
• It's tricky to do simple arithmetic operations like addition in the ones' complement notation
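A small Python sketch of 16-bit ones' complement (the helper name and approach are mine, not from the slides) reproduces the +23/-23 patterns above and shows the "two zeros" problem:

```python
BITS = 16

def ones_complement(n):
    """Return the 16-bit ones'-complement bit pattern for integer n."""
    if n >= 0:
        return format(n, f"0{BITS}b")
    # Negative: take the positive pattern and flip every bit.
    return "".join("1" if b == "0" else "0" for b in format(-n, f"0{BITS}b"))

print(ones_complement(23))    # 0000000000010111
print(ones_complement(-23))   # 1111111111101000
print(ones_complement(0))     # 0000000000000000
# ...but flipping every bit of zero gives 1111111111111111: a second "zero"
```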
Two's Complement EC
• Two's complement is a clever representation that allows binary addition to be performed in an elegant way
Two's Complement (cont.) EC
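The slides work through two's complement in the figures; as a rough sketch of the idea (8 bits for brevity, helper names are mine), negating a number amounts to flipping all the bits and adding 1, after which ordinary binary addition works for negative values and there is only one zero:

```python
# Two's complement, 8-bit sketch.
BITS = 8
MASK = (1 << BITS) - 1          # 0b11111111

def to_twos_complement(n):
    return n & MASK             # keep only the low 8 bits

print(format(to_twos_complement(23), "08b"))    # 00010111
print(format(to_twos_complement(-23), "08b"))   # 11101001

# Adding +23 and -23 with plain binary addition gives zero (the carry
# out of the top bit is simply discarded):
total = (to_twos_complement(23) + to_twos_complement(-23)) & MASK
print(format(total, "08b"))                     # 00000000
```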
Floating Point Numbers
• Non-integers are a problem…
• Remember that any rational number can be represented as a fraction
• …but we probably don't want to do this, since
  • (a) we'd need to use two words for each number (i.e., the numerator and the denominator)
  • (b) fractions are hard to manipulate (add, subtract, etc.)
• Irrational numbers can't be written down at all, of course
• Notice that any representation we choose will by definition have limited precision, since we can only represent 2^32 different values in a 32-bit word
• 1/3 isn't exactly 1/3 (let's try it on a calculator!)
• In general, we also lose precision (introduce errors) when we operate on floating point numbers
• You don't need to know the details of how "floating point" numbers are represented
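A quick demonstration of limited floating-point precision in Python, in the spirit of the 1/3 calculator experiment suggested above:

```python
# Floating point numbers have limited precision, so some values can only
# be stored approximately, and small errors appear when we operate on them.
third = 1 / 3
print(third)                  # 0.3333333333333333 -- not exactly 1/3
print(0.1 + 0.2)              # 0.30000000000000004 -- a tiny error appears
print(0.1 + 0.2 == 0.3)       # False
```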
Summary: Main Ideas
• It's all just bits
• Abstraction
• Boolean algebra
  • TRUE/FALSE
  • Truth tables
  • Logic gates
• Representing numbers
  • Hexadecimal representation
  • Ones' (and two's) complement
  • Floating point numbers (main issues)
• ASCII representation (main idea)
• Types and properties of RAM and mass storage
ACTIVITY (if time)
• Design a one-bit adder (i.e., a logic circuit that adds two 1-bit numbers together)
• X + Y = Z2 Z1
• Z2 is needed since the result may be two binary digits long
• First let's figure out the Boolean expression for each output…
• Then we'll draw the logic circuit
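One standard answer to this activity is a "half adder": the ones-place output Z1 is X XOR Y and the twos-place output Z2 (the carry) is X AND Y. The activity is meant to be worked out in class, so treat the Python sketch below (not from the slides) as just one way to check the four cases:

```python
# A one-bit ("half") adder: Z1 = X XOR Y, Z2 = X AND Y.
def one_bit_adder(x, y):
    z1 = (x or y) and not (x and y)   # ones place: X XOR Y
    z2 = x and y                      # twos place: X AND Y (the carry)
    return int(z2), int(z1)

for x in (0, 1):
    for y in (0, 1):
        z2, z1 = one_bit_adder(x, y)
        print(f"{x} + {y} = {z2}{z1}")
```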