Virtual Memory • Virtual Memory Management in Mach • Labels and Event Processes in Asbestos • Ingar Arntzen
Introduction • Virtual Memory • Decouples processes from the physical address space • Two problems, one solution • Memory Management • Process footprints grow to fill available memory • Isolation & Protection • Processes need to be isolated from other processes • Resources need to be protected from processes • Papers: focus on different issues • (Figure: virtual address → mapping function → physical address)
Memory Management • Problem: not enough main memory • Copy data between main memory and disk • Virtual Memory / Paging • Only parts of a process are needed to execute • Process footprint -> pages • On-demand page-in • Page table • One entry per virtual page • (Figure: virtual address → MMU → physical address; absent pages marked X)
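A minimal sketch of how a single-level page table drives on-demand page-in, as described on this slide. All names (pte_t, translate, handle_page_fault) and sizes are illustrative, not taken from either paper.

```c
/* Toy single-level page table with demand paging. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 1024u                /* toy address space: 4 MB */

typedef struct {
    bool     present;                  /* page resident in main memory? */
    uint32_t frame;                    /* physical frame number if present */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Hypothetical stand-in for the OS page-fault handler: fetch the page
 * from the backing store, pick a frame, and mark the entry present. */
static uint32_t handle_page_fault(uint32_t vpn) {
    static uint32_t next_free_frame = 0;
    /* ... read page `vpn` from disk into the chosen frame ... */
    page_table[vpn].frame   = next_free_frame++;
    page_table[vpn].present = true;
    return page_table[vpn].frame;
}

/* Translate a virtual address, faulting the page in when absent. */
static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    if (vpn >= NUM_PAGES) {
        fprintf(stderr, "out-of-range virtual page %u\n", vpn);
        exit(1);
    }
    uint32_t frame = page_table[vpn].present
                   ? page_table[vpn].frame
                   : handle_page_fault(vpn);   /* absent: demand page-in */
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    printf("vaddr 0x1234 -> paddr 0x%x\n", translate(0x1234));
    printf("vaddr 0x1234 -> paddr 0x%x (now resident)\n", translate(0x1234));
    return 0;
}
```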
Importance • Memory Management & CPU Utilization • Interleave I/O-bound processes • Effective MM => more processes may run concurrently • Decreases the probability that all block at the same time • Future • Processes will continue to compete for physical memory
Isolation & Protection • Isolate processes from each other • Mapping: process address spaces map into disjoint physical address spaces • Resource protection • Mapping: references protected by access-control bits • (Figure: page-table entry fields: page frame #, present/absent, protection, referenced, modified)
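A sketch of a page-table entry carrying the fields named in the figure above. The exact bit layout is hardware-specific; this packing and the permission encoding are illustrative only.

```c
#include <stdint.h>

typedef struct {
    uint32_t frame      : 20;  /* page frame number */
    uint32_t present    : 1;   /* present/absent bit */
    uint32_t referenced : 1;   /* set by the MMU on any access */
    uint32_t modified   : 1;   /* "dirty" bit, set on writes */
    uint32_t protection : 3;   /* read / write / execute permissions */
    uint32_t unused     : 6;
} pte_t;

enum { PROT_READ = 1, PROT_WRITE = 2, PROT_EXEC = 4 };

/* Protection is checked on every reference; e.g. a write to a
 * read-only page raises a protection fault instead of succeeding. */
static int access_allowed(pte_t e, int wanted) {
    return e.present && (e.protection & wanted) == wanted;
}
```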
Research • Page-size tradeoff • Too small: huge page table, frequent page faults • Too big: less concurrency, costly page-in • Typical range: 0.5 – 64 KB • Page Replacement Algorithms • NRU, FIFO, CLOCK, LRU, …
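As an example of the replacement algorithms listed above, here is a sketch of CLOCK (second chance), which sweeps a hand over the frames and evicts the first frame whose referenced bit is clear. The frame table and sizes are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_FRAMES 256

static struct {
    bool in_use;
    bool referenced;   /* cleared as the clock hand sweeps past */
} frame_table[NUM_FRAMES];

static size_t clock_hand = 0;

/* Advance the hand until a frame with referenced == false is found,
 * giving recently used frames a "second chance". */
static size_t choose_victim(void) {
    for (;;) {
        if (!frame_table[clock_hand].in_use ||
            !frame_table[clock_hand].referenced) {
            size_t victim = clock_hand;
            clock_hand = (clock_hand + 1) % NUM_FRAMES;
            return victim;
        }
        frame_table[clock_hand].referenced = false;  /* second chance */
        clock_hand = (clock_hand + 1) % NUM_FRAMES;
    }
}
```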
Research cont. • Efficient page-table implementation • Huge page table • Page size 4 KB, 32-bit address space => 1M entries • Fast page-table lookup • Consulted on every memory reference • Often more than one memory reference per instruction • Design options • Hardware registers: + fast, - expensive, - context-switch penalty • Main memory: + cheap, - extra memory reference, - steals precious memory • Something in between
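A quick check of the slide's arithmetic: a flat page table for a 32-bit address space with 4 KB pages needs 2^32 / 2^12 = 2^20 ≈ 1M entries, i.e. roughly 4 MB per process if each entry is 4 bytes (the 4-byte entry size is an assumption for illustration).

```c
#include <stdio.h>

int main(void) {
    unsigned long long address_space = 1ULL << 32;  /* 32-bit space */
    unsigned long long page_size     = 4096;        /* 4 KB pages   */
    unsigned long long entries       = address_space / page_size;

    printf("entries: %llu (~1M)\n", entries);
    printf("table size: %llu MB at 4 B/entry\n",
           entries * 4 / (1024 * 1024));
    return 0;
}
```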
Research cont. cont. • Optimizations • Multilevel page table • Page out unused parts • Caching page-table entries in hardware • TLB exploits locality. Very effective! • Inverted page table • (physical page frame -> virtual page) • + Smaller, - expensive to search • Software control of hardware • TLB cache management, page-fault handling
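A sketch combining two of the optimizations above: a small software-managed TLB consulted first, backed by a two-level page table so unused parts of the address space cost no table memory. Field splits, sizes, and names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define TLB_ENTRIES 64

typedef struct { bool valid; uint32_t vpn, frame; } tlb_entry_t;
static tlb_entry_t tlb[TLB_ENTRIES];

/* Two-level table: a top-level directory of pointers to second-level
 * tables; a NULL directory slot means the whole region is unmapped. */
static uint32_t *page_directory[1024];   /* 10-bit top index */

static bool lookup(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> 12;        /* 4 KB pages */
    uint32_t offset = vaddr & 0xFFF;

    /* 1. TLB hit: no extra memory reference for the translation. */
    tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];
    if (e->valid && e->vpn == vpn) {
        *paddr = (e->frame << 12) | offset;
        return true;
    }

    /* 2. TLB miss: walk the two-level table (two memory references). */
    uint32_t  dir_idx = vpn >> 10;        /* top 10 bits  */
    uint32_t  tbl_idx = vpn & 0x3FF;      /* next 10 bits */
    uint32_t *table   = page_directory[dir_idx];
    if (table == NULL || table[tbl_idx] == 0)
        return false;                     /* page fault   */

    uint32_t frame = table[tbl_idx];
    *e = (tlb_entry_t){ .valid = true, .vpn = vpn, .frame = frame };
    *paddr = (frame << 12) | offset;
    return true;
}
```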
Research cont. cont. cont. • Result • Multitude of hardware solutions • Multitude of software designs • Discussions • What are the best solutions? • Where to draw the line between HW and SW? • User-level meddling in HW business? • => Context of paper 1
Virtual MM in Mach • Problem • Portability • Strong dependencies between HW and OS • Mach goals • Virtual Memory Management • …on top of diverse HW architectures • Few HW assumptions • Clean HW/SW separation • Easy to port • No performance loss • Approach • Experiences with building and porting Mach
Mach Virtual Memory • Microkernel OS • Integrated message passing and virtual memory • Send = memory remap (cheap!) • Threads may… • Allocate and deallocate virtual memory • Share address spaces • Copy address spaces • Page in and page out
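Roughly how allocation against a task's address space looks through the classic Mach VM interface mentioned above. Headers and exact signatures vary between Mach variants; treat this as a sketch of the vm_allocate / vm_deallocate style rather than code from the paper.

```c
#include <mach/mach.h>
#include <stdio.h>

int main(void) {
    vm_address_t addr = 0;
    vm_size_t    size = 4096;   /* one page */

    /* Ask the kernel to map zero-filled memory anywhere in this task. */
    kern_return_t kr = vm_allocate(mach_task_self(), &addr, size, TRUE);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "vm_allocate failed: %d\n", kr);
        return 1;
    }
    printf("allocated one page at 0x%lx\n", (unsigned long)addr);

    /* Release the region again. */
    vm_deallocate(mach_task_self(), addr, size);
    return 0;
}
```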
Implementation • Address Map (page table) • Ordered linked list of refs to memory objects (e.g. files) • + Only entries for used addresses • - More searching • Page-fault handling • User-level pager services • Message passing between kernel and pager • (Figure: address map → memory object list → main memory)
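A sketch of the address-map idea: a list of entries sorted by start address, each mapping a virtual range onto a memory object, so only regions actually in use have entries. The structure and field names are illustrative, not Mach's actual definitions.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct memory_object memory_object_t;  /* e.g. a file, served by a pager */

typedef struct map_entry {
    uint64_t          start, end;    /* virtual range [start, end)    */
    memory_object_t  *object;        /* who supplies the pages        */
    uint64_t          offset;        /* offset of start in the object */
    struct map_entry *next;          /* list kept sorted by start     */
} map_entry_t;

/* On a page fault, find the entry covering the faulting address; the
 * kernel can then ask the (possibly user-level) pager behind the
 * memory object for the page via message passing. */
static map_entry_t *lookup_entry(map_entry_t *map, uint64_t vaddr) {
    for (map_entry_t *e = map; e != NULL && e->start <= vaddr; e = e->next)
        if (vaddr < e->end)
            return e;
    return NULL;   /* unmapped address: genuine fault */
}
```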
Evaluation • Ported to • VAX, IBM RT PC, SUN3, … • UnixPT, InvertedPT, PT/Segment • Clean separation • HW-dependent and HW-independent parts • TLB not required • But may be used by Mach • Performance • Comparison of UNIX and Mach on different architectures • Mach equal or better • Conclusion • Clean separation is possible and has no cost!
Labels and Event Processes • Isolation and protection in web services • Stateful services with many concurrent users • Isolation between users (not processes) • Goals • Execute user requests in isolated address spaces • Restrict information flow • Avoid leaking private user data • User data only communicated to privileged system parts • Principle of least privilege • Application-specific policies
Labels • Basic idea • Restrict access to the communication primitives, send & recv • A can talk to B if B is sufficiently privileged to receive from A • If B receives a message from A, this may restrict B's ability to speak with others • Labels define send & recv privileges relative to domains (compartments) • Kernel support • Label operations + checking of privileges • Applications define information-flow policies
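A simplified sketch of the label check: each process carries a per-compartment level in its send and receive labels, a message is allowed only if the sender's send level does not exceed the receiver's receive level in any compartment, and receiving tainted data raises the receiver's own send label. The real Asbestos model is richer (default levels, declassification privileges); levels and compartment count here are illustrative.

```c
#include <stdbool.h>

#define NUM_COMPARTMENTS 16

typedef struct {
    int send_level[NUM_COMPARTMENTS];  /* how "tainted" our messages are */
    int recv_level[NUM_COMPARTMENTS];  /* how much taint we accept       */
} labels_t;

/* Kernel check before delivering a message from A to B. */
static bool can_send(const labels_t *a, const labels_t *b) {
    for (int c = 0; c < NUM_COMPARTMENTS; c++)
        if (a->send_level[c] > b->recv_level[c])
            return false;              /* kernel refuses the message */
    return true;
}

/* Receiving tainted data raises the receiver's send label, which is
 * how accepting a message can restrict who B may talk to afterwards. */
static void on_receive(labels_t *b, const labels_t *a) {
    for (int c = 0; c < NUM_COMPARTMENTS; c++)
        if (a->send_level[c] > b->send_level[c])
            b->send_level[c] = a->send_level[c];
}
```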
Event Processes • Execute user requests in isolated address spaces? • Problem • Address spaces are associated with processes • Forking one process per user does not scale! • One reason: huge page tables • Threads scale better, but provide no isolation • Solution • Isolate data from multiple users within one address space • Event-process abstraction • Event handler executes in the context of a given user
Implementation • Base process • Address space divided between event processes • Event processes • Context: receive ports, communication privileges (labels), private user data • Bind to private ports • Scheduled by the kernel within their private context (on incoming messages)
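A sketch of the event-process idea: one base process holds many per-user contexts, and an incoming message is dispatched to the handler running against the context bound to the target port. Names are illustrative, and in Asbestos the isolation and label checks are enforced by the kernel rather than by an in-process table like this one.

```c
#include <stddef.h>

typedef struct {
    int   recv_port;      /* private port this event process listens on */
    void *private_data;   /* per-user state, isolated from other users  */
    /* labels_t labels; */ /* per-event-process communication privileges */
} event_process_t;

#define MAX_EP 1024
static event_process_t eps[MAX_EP];
static size_t          num_eps;

typedef void (*handler_t)(event_process_t *ep, const void *msg, size_t len);

/* On an incoming message, find the event process bound to the target
 * port and run the handler only against that user's context. */
static void dispatch(int port, const void *msg, size_t len, handler_t h) {
    for (size_t i = 0; i < num_eps; i++) {
        if (eps[i].recv_port == port) {
            h(&eps[i], msg, len);
            return;
        }
    }
    /* no event process bound to this port: drop the message */
}
```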
Evaluation • Experiments on a web service • Memory consumption • Extra 1.5 pages (4 KB each) per event process! • Cost of isolation (labels) • Modest overheads on throughput and latency • Throughput decreases with an increasing number of event processes • Database costs due to label-storage growth • Importance • Virtual address spaces are too big • How do we implement small virtual address spaces for lightweight processes?