Lecture 51: Process Address Space (Ch 14 Process Address Space)
My Program:

  #include <stdio.h>
  int i = 1;
  int main(int argc, char *argv[])
  {
      printf("%d", i);
  }

Address map of the running program:

  my code (92KB)   08048000-08049000  text
                   08049000-0804a000  data
  ld (92KB)        40000000-40013000  text
                   40013000-40014000  data
                   40014000-40016000  bss
  libc (1232KB)    4001c000-40109000  text
                   40109000-4010d000  data
                   4010d000-40111000  bss
  stack (8KB)      bfffe000-c0000000  stack

Permissions / Purpose -- these intervals of legal addresses are the "memory areas (VMAs)":
  text: r-xp   data: rw-p   bss: rw-p   stack: rwxp
  r: readable   w: writable   x: executable   s: shared   p: private (copy on write)
Example of Address Mapping

The address map of a running process is /proc/<pid>/maps:

  $ ./a.out > null &
  [1] 673
  $ cat /proc/673/maps
  08048000-08049000 r-xp 00000000 08:21 6160562  /home/trinite/a.out    <- my code
  08049000-0804a000 rw-p 00000000 08:21 6160562  /home/trinite/a.out
  40000000-40013000 r-xp 00000000 08:01 917      /lib/ld-2.1.3.so       <- loader
  40013000-40014000 rw-p 00012000 08:01 917      /lib/ld-2.1.3.so
  40014000-40016000 rw-p 00000000 00:00 0
  4001c000-40109000 r-xp 00000000 08:01 923      /lib/libc-2.1.3.so     <- lib
  40109000-4010d000 rw-p 000ec000 08:01 923      /lib/libc-2.1.3.so
  4010d000-40111000 rw-p 00000000 00:00 0
  bfffe000-c0000000 rwxp fffff000 00:00 0                               <- stack
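A program can also print this listing for itself by reading /proc/self/maps. A minimal sketch (error handling kept short):

  #include <stdio.h>

  /* Print this process's own address map, one VMA per line:
   * start-end, permissions, offset, device, inode, path. */
  int main(void)
  {
      char line[512];
      FILE *maps = fopen("/proc/self/maps", "r");

      if (!maps) {
          perror("fopen");
          return 1;
      }
      while (fgets(line, sizeof(line), maps))
          fputs(line, stdout);
      fclose(maps);
      return 0;
  }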
struct task_struct {
    volatile long state;             /* -1 unrunnable, 0 runnable, >0 stopped */
    struct thread_info *thread_info;
    unsigned long flags;             /* per process flags, defined below */
    int prio, static_prio;
    struct list_head tasks;
    struct mm_struct *mm, *active_mm;
    struct task_struct *parent;      /* parent process */
    struct list_head children;       /* list of my children */
    struct list_head sibling;        /* linkage in my parent's children list */
    struct tty_struct *tty;          /* NULL if no tty */
    /* ipc stuff */
    struct sysv_sem sysvsem;
    /* CPU-specific state of this task */
    struct thread_struct thread;
    /* filesystem information */
    struct fs_struct *fs;
    /* open file information */
    struct files_struct *files;
    /* namespace */
    struct namespace *namespace;
    /* signal handlers */
    struct signal_struct *signal;
    struct sighand_struct *sighand;
};
Diagram: the CPU's current task_struct (reached through thread_info at the bottom of the kernel stack) holds a pointer for each resource (mm, tty, files, fs). The mm field points to the mm_struct describing the address space; its mmap field heads a list of vm_area_structs (text, data, library, stack) linked through vm_next.
struct mm_struct {                   /* memory descriptor of a process */
    struct vm_area_struct *mmap;     /* list of VMAs */
    struct rb_root mm_rb;
    atomic_t mm_users;               /* How many users with user space? */
    atomic_t mm_count;               /* How many references to "mm_struct" */
    int map_count;                   /* number of VMAs */
    struct rw_semaphore mmap_sem;
    spinlock_t page_table_lock;      /* Protects task page tables and .. */
    struct list_head mmlist;         /* List of all active mm's. */
    unsigned long start_code, end_code, start_data, end_data;
    unsigned long start_brk, brk, start_stack;
    unsigned long arg_start, arg_end, env_start, env_end;
    unsigned long rss, total_vm, locked_vm;
    unsigned long def_flags;
    ...
};
Memory Descriptor (mm_struct)

• represents the process's address space
• each process receives a unique mm_struct
• consists of (has pointers to) several VMAs (memory areas)
• processes may share an address space with their children:
  clone() with the CLONE_VM flag creates a "thread" (LWP); a new mm_struct is then not allocated
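This sharing can be seen from user space with the glibc clone() wrapper. A minimal sketch (child_fn and STACK_SIZE are illustrative names): with CLONE_VM the child runs inside the parent's address space, so its write to the global variable is visible to the parent.

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>

  #define STACK_SIZE (1024 * 1024)

  static int shared = 0;            /* lives in the data area of the one address space */

  static int child_fn(void *arg)
  {
      shared = 42;                  /* visible to the parent because of CLONE_VM */
      return 0;
  }

  int main(void)
  {
      char *stack = malloc(STACK_SIZE);
      if (!stack)
          return 1;

      /* CLONE_VM: the child shares the parent's mm_struct instead of getting a copy */
      pid_t pid = clone(child_fn, stack + STACK_SIZE, CLONE_VM | SIGCHLD, NULL);
      if (pid == -1) {
          perror("clone");
          return 1;
      }
      waitpid(pid, NULL, 0);
      printf("shared = %d\n", shared);   /* prints 42 */
      free(stack);
      return 0;
  }

Without CLONE_VM the child would get a copy-on-write copy of the address space, and the parent would still see shared == 0.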
Reaching Memory Areas

From the task_struct, the mm field leads to the per-process mm_struct, which offers two ways to reach the same set of vm_area_structs:
• mmap field: singly linked list of vm_area_structs (to visit every node)
• mm_rb field: balanced binary tree of vm_area_structs (to visit a specific node)

Diagram: the list links the VMAs (my part, libc, ld, stack, ...) through vm_next, while the tree holds the same VMAs keyed by address.
struct mm_struct {                   /* memory descriptor of a process */
    struct vm_area_struct *mmap;     /* singly linked list of VMAs */
    struct rb_root mm_rb;            /* balanced binary tree of VMAs */
    atomic_t mm_users;               /* How many users with user space? */
    atomic_t mm_count;               /* How many references to "mm_struct" */
    int map_count;                   /* number of VMAs */
    struct rw_semaphore mmap_sem;
    spinlock_t page_table_lock;      /* Protects task page tables and .. */
    struct list_head mmlist;         /* List of all active mm's. */
    unsigned long start_code, end_code, start_data, end_data;
    unsigned long start_brk, brk, start_stack;
    unsigned long arg_start, arg_end, env_start, env_end;
    unsigned long rss, total_vm, locked_vm;
    unsigned long def_flags;
    ...
};
VMA (memory area)

Diagram: each vm_area_struct on the mm_struct's mmap list describes one area (text, data, stack, ...). It records the start address, end address, permissions, the backing file, and the operations for the area (page fault handling, adding/removing the area).
Memory Area

struct vm_area_struct {
    unsigned long vm_start;
    unsigned long vm_end;
    struct vm_operations_struct *vm_ops;
    struct mm_struct *vm_mm;
    struct vm_area_struct *vm_next;
    struct file *vm_file;
    ...
};

• vm_start: the initial address in the interval
• vm_end: the final address in the interval
• vm_ops: operations associatedated with a given VMA
• vm_mm: points back to this VMA's associated mm_struct
• vm_next: list of VMAs
• vm_file: the file we map to
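To show how these fields fit together, here is a kernel-side sketch that walks a task's VMA list and prints each interval (a sketch only, assuming an early-2.6-style kernel; a real walk would also take mm->mmap_sem for reading):

  #include <linux/kernel.h>
  #include <linux/sched.h>
  #include <linux/mm.h>

  /* Sketch: walk every memory area of a task through the mmap list. */
  static void dump_memory_areas(struct task_struct *task)
  {
      struct mm_struct *mm = task->mm;      /* NULL for kernel threads */
      struct vm_area_struct *vma;

      if (!mm)
          return;

      for (vma = mm->mmap; vma; vma = vma->vm_next)
          printk("%08lx-%08lx %s\n",
                 vma->vm_start, vma->vm_end,
                 vma->vm_file ? "file-backed" : "anonymous");
  }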
VMA (memory area)

• definition
  • an interval of legal memory addresses
  • where the process has permission/purpose to access
• content
  • text, data, bss, stack, memory-mapped files, ...
• the kernel can dynamically add/delete memory areas
  • e.g. "add a memory-mapped file", "remove shared memory", etc.
• if two VMAs have adjacent addresses and the same permissions, they are merged into one
Other Fields

Diagram: the mm_struct also holds pgd, the pointer to the process's page mapping table (see below); each vm_area_struct's vm_ops field points to a vm_operations_struct:

• nopage - used by the page fault handler when no page is found
• open   - called when the memory area is added to an address space
• close  - called when the memory area is removed from an address space
• ...
Kernel thread - Memory Descriptor • does not have process address space (no user context) • mm field == NULL • But, kernel threads need some data, such as page tables • To provide it, kernel threads use the memory descriptor of a task that ran previously
Address Space & Page Table Size

• Size of Address Space
  • Assume 12 bits for displacement (4 KB page)
  • 16-bit machine
    • 4 bits for page address
    • page table per process: 2^4 entries
  • 32-bit machine
    • 20 bits for page address
    • page table per process: 2^20 entries
  • 64-bit machine
    • 52 bits for page address
    • page table per process: 2^52 entries

The mapping table is too big and too sparse.
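To put numbers on this, a quick back-of-the-envelope check (assuming, for illustration, 4-byte entries on the 32-bit machine and 8-byte entries on the 64-bit machine):

  #include <stdio.h>

  /* Rough size of a single flat page table with 4 KB pages. */
  int main(void)
  {
      unsigned long long e32 = 1ULL << 20;      /* 2^20 entries on 32-bit */
      unsigned long long e64 = 1ULL << 52;      /* 2^52 entries on 64-bit */

      printf("32-bit: %llu entries, %llu MiB per process\n",
             e32, e32 * 4 >> 20);               /* 4 MiB */
      printf("64-bit: %llu entries, %llu PiB per process\n",
             e64, e64 * 8 >> 50);               /* 32 PiB */
      return 0;
  }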
64-bit Address Space per Process

Diagram: the same T/D/L/S areas occupy a tiny fraction of a 16-bit, a 32-bit, and especially a 64-bit address space.

• Assuming a 4 KB page (12 bits for offset)
  • a 32-bit machine needs 2^20 entries for the page table
  • a 64-bit machine needs 2^52 entries for the page table
• too large a space per process, and too sparse
  • too much memory is wasted on (unused) page tables
Diagram: one-level vs. two-level mapping of a 32-bit address.
• one level:  Page_no(20) | Offset(12) -> a single PTE table with 1024 x 1024 PTEs
• two levels: Dir_no(10) | Page_no(10) | Offset(12) -> a 1024-entry directory, each entry
  pointing to a 1024-entry PTE table; for a sparse address space (the T/D/L/S areas) only
  a few PTE tables exist, e.g. 4 x 1024 entries instead of 1024 x 1024
Dir_no(10) | Page_no(10) | Offset(12)
Bits 31-22 index the page directory (1024 entries), bits 21-12 index a page table (1024 entries), and bits 11-0 are the offset within the 4 KB page itself. If a directory entry is NULL, no page table exists for that region.
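A small sketch of that split in C, decomposing a 32-bit virtual address with shifts and masks (field widths follow the 10/10/12 layout above; the sample address is just for illustration):

  #include <stdio.h>
  #include <stdint.h>

  /* Split a 32-bit virtual address into directory index, page-table index,
   * and page offset. */
  int main(void)
  {
      uint32_t vaddr = 0x08048123;              /* e.g. an address in the text area */

      uint32_t dir_no  = (vaddr >> 22) & 0x3FF; /* bits 31-22: 1024 directory entries */
      uint32_t page_no = (vaddr >> 12) & 0x3FF; /* bits 21-12: 1024 PTEs per table    */
      uint32_t offset  =  vaddr        & 0xFFF; /* bits 11-0 : offset in the 4KB page */

      printf("dir=%u page=%u offset=0x%03x\n", dir_no, page_no, offset);
      return 0;
  }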
Paging in Linux

• For 64-bit addresses, one more directory level is used (4 parts):
  • directory --- page global directory, page middle directory
  • page table (PTE)
  • offset
• The size of each part depends on the architecture
• For 32-bit, Linux eliminates the page middle directory
• So the same code can work on 32-bit and 64-bit machines

  hardware (32-bit): Dir_no(10) | Page_no(10) | Offset(12)
  Linux:             global_directory | middle_directory | Page_no | Offset(12)
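A sketch of how the kernel walks these levels for one user address. The helper names follow the three-level scheme of early 2.6 kernels; exact macros differ across versions and architectures, so treat this as an illustration, not the definitive code path:

  #include <linux/mm.h>
  #include <asm/pgtable.h>

  /* Sketch: translate a user virtual address to its struct page by walking
   * pgd -> pmd -> pte. Simplified: no locking, no huge-page handling. */
  static struct page *walk_page_tables(struct mm_struct *mm, unsigned long addr)
  {
      pgd_t *pgd;
      pmd_t *pmd;
      pte_t *pte;
      struct page *page = NULL;

      pgd = pgd_offset(mm, addr);          /* index the page global directory */
      if (pgd_none(*pgd))
          return NULL;

      pmd = pmd_offset(pgd, addr);         /* index the page middle directory */
      if (pmd_none(*pmd))
          return NULL;

      pte = pte_offset_map(pmd, addr);     /* index the page table itself */
      if (pte_present(*pte))
          page = pte_page(*pte);           /* the physical page frame */
      pte_unmap(pte);

      return page;
  }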
Page Mapping Table

Diagram: the mm_struct's pgd field points to the page directory, whose entries point to PTE tables, which in turn point to the physical pages. In parallel, the vm_area_struct list (start address, end address, permission, file, operations such as page fault, add_vma, remove_vma) describes the legal intervals those pages belong to.
Allocating a Memory Descriptor

• During fork(), the memory descriptor is allocated:
  do_fork() -> copy_process() -> copy_mm()
• copy_mm():
  • for a normal process,
    • an mm_struct is allocated
      from the mm_cachep slab cache via allocate_mm()
  • for a thread (CLONE_VM),
    • allocate_mm() is not called
    • the mm field is set to point to the parent's memory descriptor
  (see the sketch below)
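A condensed sketch of that decision, modeled loosely on the kernel's copy_mm(). Error handling and initialization of the new mm are omitted, and dup_mm_contents() is a hypothetical stand-in for the real work of copying VMAs and page tables:

  #include <linux/sched.h>
  #include <linux/errno.h>

  /* Sketch of copy_mm(): share or duplicate the parent's mm_struct. */
  static int copy_mm_sketch(unsigned long clone_flags, struct task_struct *tsk)
  {
      struct mm_struct *oldmm = current->mm;
      struct mm_struct *mm;

      tsk->mm = NULL;
      if (!oldmm)                       /* kernel threads have no mm */
          return 0;

      if (clone_flags & CLONE_VM) {     /* thread: share the address space */
          atomic_inc(&oldmm->mm_users);
          tsk->mm = oldmm;
          return 0;
      }

      mm = allocate_mm();               /* new mm_struct from the mm_cachep slab */
      if (!mm)
          return -ENOMEM;

      dup_mm_contents(mm, oldmm);       /* copy VMAs and page tables (hypothetical helper) */
      tsk->mm = mm;
      return 0;
  }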
Destroying a Memory Descriptor

• exit_mm() -> mmput()
• mmput():
  • decreases mm_users
  • if mm_users reaches zero, mmdrop() is called
• mmdrop():
  • decreases mm_count
  • if mm_count reaches zero, free_mm() is invoked
    • to return the mm_struct to the mm_cachep slab cache
    • via kmem_cache_free()
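The two counters in sketch form (the real mmput() also tears down the VMAs and unlinks the mm from mmlist before dropping the last reference; mm_cachep is the slab cache mentioned above):

  #include <linux/sched.h>
  #include <linux/slab.h>

  static void mmdrop_sketch(struct mm_struct *mm)
  {
      if (atomic_dec_and_test(&mm->mm_count))   /* last reference to the mm_struct */
          kmem_cache_free(mm_cachep, mm);       /* what free_mm() amounts to */
  }

  static void mmput_sketch(struct mm_struct *mm)
  {
      if (atomic_dec_and_test(&mm->mm_users)) { /* last user of the user address space */
          /* real mmput(): also exit_mmap(mm) and unlink mm from mmlist here */
          mmdrop_sketch(mm);
      }
  }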
Manipulating Memory Areas

• Creating a VMA
  • do_mmap()
    • is used by the kernel to create a new VMA
    • is the new interval adjacent to an existing interval?
      • if they share the same permissions, the two intervals are merged into one
      • otherwise, a new VMA is created
  • mmap() system call
    • do_mmap() is exported to user space via the mmap() system call
    • (the real name of the system call is actually mmap2())
Manipulating Memory Areas

• Removing a VMA
  • do_munmap()
    • is used by the kernel to remove a VMA
  • munmap() system call
    • do_munmap() is exported to user space via the munmap() system call

A user-space view of both calls is sketched below.
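A minimal user-space sketch that asks the kernel to add an anonymous VMA with mmap() and to remove it again with munmap():

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t len = 4096;                      /* one 4 KB page */

      /* Add a new VMA: one anonymous, private, read/write page. */
      void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      strcpy(p, "hello from a fresh VMA");    /* touching it goes through the page fault path */
      printf("%s (at %p)\n", (char *)p, p);

      /* Remove the VMA again. */
      if (munmap(p, len) == -1) {
          perror("munmap");
          return 1;
      }
      return 0;
  }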
Manipulating Memory Areas

• find_vma()
  • looks up the first VMA which satisfies (addr < vm_end)
  • i.e. finds the first VMA that contains addr, or that begins at an address greater than addr
• find_vma_prev()
  • same as find_vma(), but also returns a pointer to the previous VMA
• find_vma_intersection()
  • returns the first VMA that overlaps a given address interval

A simplified version of the lookup is sketched below.
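A sketch of that lookup over the mmap list only; the real find_vma() uses the mm_rb tree and caches the most recent result, which is omitted here:

  #include <linux/sched.h>
  #include <linux/mm.h>

  /* Sketch: return the first VMA with addr < vm_end, or NULL if none.
   * The returned VMA either contains addr or starts above it. */
  static struct vm_area_struct *find_vma_sketch(struct mm_struct *mm,
                                                unsigned long addr)
  {
      struct vm_area_struct *vma;

      for (vma = mm->mmap; vma; vma = vma->vm_next)
          if (addr < vma->vm_end)
              return vma;

      return NULL;
  }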