Chapter 5 Memory Management, Memory-Mapped Files, and DLLs
OBJECTIVES (1 of 2) • Upon completion of this chapter, you will be able to: • Describe the Windows memory management architecture and the role of heaps and memory-mapped files • Use multiple independent heaps in applications requiring dynamic memory management • Use Structured Exception Handling to respond to memory allocation errors • Use memory-mapped files
OBJECTIVES (2 of 2) • Determine when to use independent heaps and when to use memory-mapped files, and describe the advantages and disadvantages of each • Describe Windows dynamic link libraries (DLLs) • Describe the differences among static, implicit, and explicit linking • Describe the advantages and disadvantages of each • Use DLLs to load different implementations of the same function
OVERVIEW (1 of 2) • Windows is a 32-bit operating system, so pointers are 4-byte objects • Win64 provides 64-bit pointers • Processes have a private 4GB virtual address space • Half (2GB) is available to a process • The remainder is allocated to shared data and code • Win64 enlarges the VA space; required for many applications • Programs can create independent memory “heaps” • Processes can map files to memory • Processes can share memory through a mapped file • Fast and convenient for some file processing
OVERVIEW (2 of 2) • Dynamic Link Libraries vs. Monolithic Programs • Monolithic approach: gather all the source code, including commonly used modules such as utility functions • Put all the source code in a single project • Build, test, debug, and use the program • Inefficiencies: • The same code is recompiled in every project • All executables include the same object code • Waste of disc space and of physical memory at run time • Maintenance complexity as shared code changes
AGENDA • Part I Memory Management and Heaps • Lab 5–A • Part II Memory-Mapped Files • Lab 5–B • Part III Dynamic Link Libraries • Lab 5–C
Part I Memory Management and Heaps
Memory Management Architecture • Layered, from the application down to the hardware: • Windows program • C library: malloc, free • Heap API: HeapCreate, HeapDestroy, HeapAlloc, HeapFree • MMF API: CreateFileMapping, MapViewOfFile • Virtual Memory API • Windows kernel with the Virtual Memory Manager • Disc & file system • Physical memory
HEAPS (1 of 2) • Pools of memory within the process virtual address space • Every process has a default process heap • A process may have more than one heap. Benefits of separate heaps include: • Fairness (between threads and between uses) • Allocation efficiency (fixed size blocks in each heap) • Deallocation efficiency (you can deallocate a complete data structure with one call) • Locality of reference efficiency
HEAPS (2 of 2) • Every process has a process heap • Every heap has a handle • The programmer can use the process heap or create new ones • HANDLE GetProcessHeap (VOID) • Return: The handle for the process’ heap; NULL on failure
MEMORY MGT. IN MULTIPLE HEAPS • Diagram: the program’s virtual address space contains the process heap (ProcHeap = GetProcessHeap(), from which pRoot is allocated) plus two heaps created with HeapCreate — RecHeap for records and NodeHeap for tree nodes • In a loop, pRec = HeapAlloc (RecHeap) and pNode = HeapAlloc (NodeHeap) allocate from the two heaps • Individual blocks can be released with HeapFree (RecHeap, 0, pRec) and HeapFree (NodeHeap, 0, pNode); HeapDestroy (RecHeap) and HeapDestroy (NodeHeap) release everything in each heap at once
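The diagram’s call sequence can be written as a short C sketch (a minimal sketch; the RECORD and NODE types and the heap sizes are illustrative, not from the slides):

#include <windows.h>

typedef struct _RECORD { char data[64]; } RECORD;
typedef struct _NODE { struct _NODE *left, *right; RECORD *pRec; } NODE;

int main (void)
{
    HANDLE hProcHeap = GetProcessHeap ();              /* default process heap */
    HANDLE hRecHeap  = HeapCreate (0, 0x10000, 0);     /* growable record heap */
    HANDLE hNodeHeap = HeapCreate (0, 0x10000, 0);     /* growable node heap   */

    RECORD **pRoot = (RECORD **) HeapAlloc (hProcHeap, HEAP_ZERO_MEMORY, 100 * sizeof (RECORD *));
    RECORD *pRec   = (RECORD *)  HeapAlloc (hRecHeap,  HEAP_ZERO_MEMORY, sizeof (RECORD));
    NODE   *pNode  = (NODE *)    HeapAlloc (hNodeHeap, HEAP_ZERO_MEMORY, sizeof (NODE));
    pNode->pRec = pRec;
    pRoot[0] = pRec;
    /* ... build and use the data structure ... */

    HeapFree (hProcHeap, 0, pRoot);
    HeapDestroy (hRecHeap);     /* frees every record with one call */
    HeapDestroy (hNodeHeap);    /* frees every node with one call   */
    return 0;
}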
HEAP MANAGEMENT (1 of 2) • HANDLE HeapCreate (DWORD flOptions, DWORD dwInitialSize, DWORD dwMaximumSize) • Return: A heap handle or NULL on failure • dwMaximumSize — How large the heap can become • 0 — “growable heap”; no fixed limit • non-zero — “non-growable heap” • The entire block is allocated from the virtual address space • But only the initial size is committed in the paging file
HEAP MANAGEMENT (2 of 2) • flOptions is a combination of two flags: • HEAP_GENERATE_EXCEPTIONS • HEAP_NO_SERIALIZE • By generating exceptions, you can avoid explicit tests after each heap management call
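Because the objectives include using Structured Exception Handling for allocation errors, here is a minimal sketch of a heap created with HEAP_GENERATE_EXCEPTIONS (the sizes and the message are illustrative); a failed HeapAlloc raises an exception instead of returning NULL:

#include <windows.h>
#include <stdio.h>

int main (void)
{
    HANDLE hHeap = HeapCreate (HEAP_GENERATE_EXCEPTIONS, 0x1000, 0);
    __try {
        char *p = (char *) HeapAlloc (hHeap, 0, 1000);
        /* ... use p without testing it for NULL ... */
        HeapFree (hHeap, 0, p);
    }
    __except (GetExceptionCode () == STATUS_NO_MEMORY ?
              EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        printf ("Heap allocation failed\n");
    }
    HeapDestroy (hHeap);
    return 0;
}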
HEAPS • BOOL HeapDestroy (HANDLE hHeap) • hHeap — a heap generated using HeapCreate • Do not destroy the process’ heap (obtained using GetProcessHeap) • Benefits of HeapDestroy: • No data structure traversal code • No need to deallocate each individual data structure element, which can be time-consuming
MANAGING HEAP MEMORY (1 of 4) • LPVOID HeapAlloc (HANDLE hHeap, DWORD dwFlags, DWORD dwBytes) • Return: A pointer to the allocated memory block (of size dwBytes) or NULL on failure (unless exception generation is specified) • hHeap — Handle from GetProcessHeap or HeapCreate • dwFlags — A combination of: • HEAP_GENERATE_EXCEPTIONS • HEAP_NO_SERIALIZE • HEAP_ZERO_MEMORY — Allocated memory initialized to zero
MANAGING HEAP MEMORY (2 of 4) • BOOL HeapFree (HANDLE hHeap, DWORD dwFlags, LPVOID lpMem) • dwFlags — Should be zero (or HEAP_NO_SERIALIZE) • lpMem — Should be a value returned by HeapAlloc or HeapReAlloc • hHeap — Should be the heap from which lpMem was allocated
MANAGING HEAP MEMORY (3 of 4) • LPVOID HeapReAlloc (HANDLE hHeap, DWORD dwFlags, LPVOID lpMem, DWORD dwBytes) • Return: A pointer to the reallocated block; failure returns NULL or causes an exception • dwFlags — Some essential control options: • HEAP_GENERATE_EXCEPTIONS and HEAP_NO_SERIALIZE • HEAP_ZERO_MEMORY — Only newly allocated memory is initialized • HEAP_REALLOC_IN_PLACE_ONLY — Do not move the block • lpMem — Existing block in hHeap to be reallocated • dwBytes — New block size
MANAGING HEAP MEMORY (4 of 4) • DWORD HeapSize (HANDLE hHeap, DWORD dwFlags, LPVOID lpMem) • Return: The size of the block, or zero on failure
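A minimal sketch tying these calls together (buffer sizes are illustrative): HeapReAlloc grows a block, and HeapSize reports the size the heap records for it.

#include <windows.h>
#include <stdio.h>

int main (void)
{
    HANDLE hHeap = HeapCreate (0, 0, 0);
    char *p = (char *) HeapAlloc (hHeap, HEAP_ZERO_MEMORY, 256);

    /* Grow the block; with HEAP_ZERO_MEMORY only the new bytes are zeroed. */
    p = (char *) HeapReAlloc (hHeap, HEAP_ZERO_MEMORY, p, 512);

    printf ("Block size is now %lu bytes\n", (unsigned long) HeapSize (hHeap, 0, p));

    HeapFree (hHeap, 0, p);
    HeapDestroy (hHeap);
    return 0;
}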
HEAP FLAGS (1 of 2) • HEAP_NO_SERIALIZE • Specified in HeapCreate, HeapAlloc, and other functions • Performance gain (about 15% in tests) as functions do not provide mutual exclusion to threads accessing the heap • Can safely be used if (BUT, BE CAREFUL): • Your process uses only a single thread • Each thread has its own heap(s) that no other thread can access • You provide your own mutual exclusion mechanism to prevent concurrent access to a heap by several threads • You use HeapLock and HeapUnlock
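Following the last bullet, a minimal sketch of a HEAP_NO_SERIALIZE heap guarded explicitly with HeapLock and HeapUnlock (the GuardedAlloc helper name is illustrative):

#include <windows.h>

void *GuardedAlloc (HANDLE hFastHeap, SIZE_T nBytes)
{
    void *p;
    HeapLock (hFastHeap);       /* we supply the mutual exclusion ourselves */
    p = HeapAlloc (hFastHeap, HEAP_NO_SERIALIZE, nBytes);
    HeapUnlock (hFastHeap);
    return p;
}

/* hFastHeap would be created with HeapCreate (HEAP_NO_SERIALIZE, 0, 0). */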
HEAP FLAGS (2 of 2) • HEAP_GENERATE_EXCEPTIONS • Allows you to avoid error tests after each allocation
OTHER HEAP FUNCTIONS • HeapValidate • Determine whether a heap has been corrupted • HeapCompact • Combine adjacent free blocks; decommit large free blocks • HeapWalk • Determine all blocks allocated within a heap
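A minimal sketch of HeapWalk enumerating the allocated (“busy”) blocks in a heap; the DumpHeap name is illustrative:

#include <windows.h>
#include <stdio.h>

void DumpHeap (HANDLE hHeap)
{
    PROCESS_HEAP_ENTRY entry;
    entry.lpData = NULL;                      /* start at the first entry */
    HeapLock (hHeap);                         /* keep the heap stable while walking */
    while (HeapWalk (hHeap, &entry)) {
        if (entry.wFlags & PROCESS_HEAP_ENTRY_BUSY)
            printf ("Allocated block at %p, %lu bytes\n",
                    entry.lpData, (unsigned long) entry.cbData);
    }
    HeapUnlock (hHeap);
}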
LAB 5–A (Part 1 – 1 of 2) • Write a program, sortHP, which reads fixed-size records from a file into a memory-allocated buffer in a heap, where the first 8 characters are a birth date (CCYYMMDD format). The rest of the record is a line of text. • Enter each date in an array, along with a file position. Each array element will contain the date and the file position of the record (which is not fixed length). • Sort the array using the C library qsort function. • Print out the complete file sorted by birth date. • Repeat the process for each file on the command line. Before each new file, destroy the heaps from the preceding file.
LAB 5–A (Part 1 – 2 of 2) • The TestData directory contains two text files with 64-byte records that can be used to test your program. Or, use the RandFile program to generate sortable files of any size.
LAB 5–A (Part 2) • Modify the sort program to create sortBT, which enters the records into a binary search tree and then scans the tree to display the records in order. • Allocate the tree nodes and the data in separate heaps. • Destroy the heaps before sorting the next file, rather than freeing individual tree nodes and data elements. • Test the program with and without heap serialization and determine whether there is a detectable performance difference.
Part II Memory-Mapped Files
MEMORY-MAPPED FILES • Advantages to mapping your virtual memory space directly to normal files rather than the paging file: • You never need to perform direct file I/O • Data structures you create are saved in the file • You can use in-memory algorithms (string processing, sorts, search trees) to process data even though the file may be much larger than available physical memory • There is no need to manage buffers and the file data they contain • Multiple processes can share memory (this is the only way), and the file views will be coherent • There is no need to consume space in the paging file
PROCESS ADDRESS SPACE MAPPED TO A FILE • Diagram: fH = CreateFile () opens the file and mH = CreateFileMapping (fH) creates a mapping object from the file handle • In a loop, MapViewOfFile (mH) maps two views (pRecA, pRecB) of the file into the process address space, the program copies data between records through the pointers (pRecB -> Data = pRecA -> Data), and UnmapViewOfFile releases each view • Finally, CloseHandle (mH) and CloseHandle (fH) close the mapping and file handles
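The diagram’s sequence as a minimal C sketch (the file name is illustrative, and the sketch assumes the file already exists and is not empty):

#include <windows.h>

int main (void)
{
    HANDLE hFile = CreateFile (TEXT ("MyFile"), GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE hMap  = CreateFileMapping (hFile, NULL, PAGE_READWRITE, 0, 0, NULL);

    /* Map the entire file: offset 0, size 0 means "whole file". */
    char *pFile = (char *) MapViewOfFile (hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    pFile[0] = 'X';               /* ordinary memory writes go to the file */

    UnmapViewOfFile (pFile);
    CloseHandle (hMap);
    CloseHandle (hFile);
    return 0;
}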
FILE-MAPPING OBJECTS (1 of 4) • HANDLE CreateFileMapping (HANDLE hFile, LPSECURITY_ATTRIBUTES lpsa, DWORD dwProtect, DWORD dwMaximumSizeHigh, DWORD dwMaximumSizeLow, LPCTSTR lpMapName) • Return: A file-mapping handle or NULL
FILE-MAPPING OBJECTS (2 of 4) • Parameters • hFile — Open file handle; its protection flags must be compatible with dwProtect • lpsa — Security attributes; NULL for now • dwProtect — How you can access the mapped file: • PAGE_READONLY — Pages in the mapped region are read-only • PAGE_READWRITE — Full access if hFile has both GENERIC_READ and GENERIC_WRITE access • PAGE_WRITECOPY — When you change mapped memory, a copy is written to the paging file
FILE-MAPPING OBJECTS (3 of 4) • dwMaximumSizeHigh and dwMaximumSizeLow — Specify the size of the mapping object; 0 for current file size. The file is extended if the current file size is smaller than the map size. • lpMapName — Names the mapping object, allowing other processes to share the object
FILE-MAPPING OBJECTS (4 of 4) • You can also obtain a file-mapping handle by specifying an existing mapping object name • HANDLE OpenFileMapping (DWORD dwDesiredAccess, BOOL bInheritHandle, LPCTSTR lpNameP) • Return: A file mapping handle or NULL • CloseHandle destroys mapping handles
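Named mapping objects are how processes share memory (see the overview). A minimal sketch, assuming a paging-file-backed mapping (INVALID_HANDLE_VALUE as the file handle) and an illustrative object name:

#include <windows.h>

int main (void)
{
    /* Creating process: a 4KB shared block named "MySharedBlock". */
    HANDLE hMap = CreateFileMapping (INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, 4096, TEXT ("MySharedBlock"));
    char *p = (char *) MapViewOfFile (hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    CopyMemory (p, "hello", 6);   /* visible to every process that maps this object */

    /* A cooperating process would instead call:
       HANDLE hMap = OpenFileMapping (FILE_MAP_ALL_ACCESS, FALSE, TEXT ("MySharedBlock"));
       and then map its own view of the same object. */

    UnmapViewOfFile (p);
    CloseHandle (hMap);
    return 0;
}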
MAPPING PROCESS ADDRESS SPACE (1 of 3) • LPVOID MapViewOfFile (HANDLE hMapObject, DWORD dwAccess, DWORD dwOffsetHigh, DWORD dwOffsetLow, DWORD cbMap) • Return: The starting address of the block (file view) or NULL on failure • hMapObject — Identifies a file-mapping object • dwAccess — Must be compatible with the mapping object’s access: • FILE_MAP_WRITE • FILE_MAP_READ • FILE_MAP_ALL_ACCESS
MAPPING PROCESS ADDRESS SPACE (2 of 3) • dwOffsetHigh and dwOffsetLow • Starting location of the mapped file region • Must be a multiple of 64K • Zero offset to map from the beginning of the file • cbMap — Size in bytes of the mapped region • Zero indicates the entire file • Note: The map size is limited by the 32-bit address space
MAPPING PROCESS ADDRESS SPACE (3 of 3) • MapViewOfFileEx is similar, but you can specify an existing address • BOOL UnmapViewOfFile (LPVOID lpBaseAddress) • Releases a file view
FILE-MAPPING LIMITATIONS • Disparity between Windows’s 64-bit file system and 32-bit addressing • With a large file (greater than 4GB) you cannot map everything into virtual memory space • Process data space is limited to 2GB • You cannot use all 2GB; available contiguous blocks will be smaller • When dealing with large files, you must create code that carefully maps and unmaps file regions as you need them
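A minimal sketch of the careful map/unmap pattern the last bullet describes: process a large file in successive views whose offsets are multiples of the allocation granularity (64K on most systems). ProcessLargeFile and ProcessChunk are illustrative names, and the 1MB view size is an arbitrary choice.

#include <windows.h>

static void ProcessChunk (const char *p, DWORD nBytes)
{   /* in-memory processing of one view would go here */
    (void) p; (void) nBytes;
}

void ProcessLargeFile (HANDLE hFile)
{
    LARGE_INTEGER fileSize;
    SYSTEM_INFO si;
    LONGLONG offset = 0;
    HANDLE hMap;
    DWORD chunk;
    char *p;

    GetFileSizeEx (hFile, &fileSize);
    GetSystemInfo (&si);                       /* si.dwAllocationGranularity is 64K */
    hMap = CreateFileMapping (hFile, NULL, PAGE_READONLY, 0, 0, NULL);

    while (offset < fileSize.QuadPart) {
        chunk = 16 * si.dwAllocationGranularity;              /* 1MB views */
        if (offset + chunk > fileSize.QuadPart)
            chunk = (DWORD) (fileSize.QuadPart - offset);
        p = (char *) MapViewOfFile (hMap, FILE_MAP_READ,
                                    (DWORD) (offset >> 32), (DWORD) offset, chunk);
        ProcessChunk (p, chunk);
        UnmapViewOfFile (p);                   /* release this region before the next */
        offset += chunk;
    }
    CloseHandle (hMap);
}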
BASED POINTERS (1 of 2) • If you use pointers in a mapped file region, they should be __based pointers • A conventional pointer refers to a virtual address • That address base will almost certainly be different the next time the file is mapped, or when a new view is created of the same region • The pointer should be based on the view address
BASED POINTERS (2 of 2)

int *pi;
int __based(pi) *bpi, i;    /* bpi is stored as an offset from the base pointer pi */
...
pi = MapViewOfFile (...);   /* pi holds the view's base address */
*pi = 3;
bpi = pi;                   /* bpi records the same location as an offset within the view */
i = *bpi;                   /* dereferenced relative to pi */
...
LAB 5–B (Part 1) • Rewrite the atou (ASCII to UNICODE) program to create atouMM • Use memory mapping only; do not use ReadFile and WriteFile • You do not need to change the main function in atou.c. Instead, change the asc2un.c function to create asc2unMM.c.
LAB 5–B (Part 2) • Rewrite the sort program of the previous section to create sortMM, so that the key records (in the array) are mapped to a “key” file • Do not use file pointers; instead, use based pointers that address locations in a view of the original file • As part of the test of __based pointers, provide a program option that simply uses the saved key file to produce a sorted listing without actually performing a sort. The next slide diagrams the operation. • This is a difficult exercise!
sortMM OPERATION • Diagram: MyFile holds the records, each a key Ki followed by a string Si • sortMM builds MyFile.idx, an array of (Ki, Pi) pairs, where Pi is a based pointer to record i in a view of MyFile • qsort orders the (key, pointer) pairs in MyFile.idx, which can then be used to list MyFile in sorted order • Legend — Ki: Key, Si: String, Pi: Based Pointer
Part III Dynamic Link Libraries
STATIC LIBRARIES • Build one or more libraries as “static libraries” • Link the libraries with each project as needed • Advantages • Simplifies and expedites project building • Disadvantages • Disc and memory space issues • Maintenance requires relinking and redistribution • Different programs may use different library versions • Programs cannot use alternate utility implementations for different situations
DYNAMIC LINK LIBRARIES (1 of 4) • DLLs solve these and other problems very neatly • Library functions are linked at: • Program load time — implicit linking • Program run time — explicit linking • Program image can be much smaller • It does not include the library functions • Multiple programs can share a single DLL • Only a single copy will be loaded into memory • All programs map their process address space to DLL code • Each thread will have its own copy of non-shared storage on the stack
DYNAMIC LINK LIBRARIES (2 of 4) • New versions or alternate implementations: • Supply a new version of the DLL • All programs can use the new version without modification • Explicit linking: • The program decides at run time which library version to use • Different libraries may be alternate implementations of the same function • Or may carry out totally different tasks • Just as separate programs do • The library will run in the same process and thread as the calling program
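A minimal sketch of explicit (run-time) linking with LoadLibrary and GetProcAddress; the DLL name “Utility.dll” and the MyFunction name and signature are illustrative, not from the slides:

#include <windows.h>
#include <stdio.h>

typedef DWORD (WINAPI *MYFUNC) (LPCTSTR);

int main (void)
{
    HMODULE hDll = LoadLibrary (TEXT ("Utility.dll"));   /* choose the version at run time */
    MYFUNC pMyFunction;

    if (hDll == NULL) {
        printf ("Cannot load DLL\n");
        return 1;
    }
    pMyFunction = (MYFUNC) GetProcAddress (hDll, "MyFunction");
    if (pMyFunction != NULL)
        pMyFunction (TEXT ("argument"));      /* runs in this process and thread */

    FreeLibrary (hDll);
    return 0;
}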
DYNAMIC LINK LIBRARIES (3 of 4) • DLLs are used in nearly every operating system • Including UNIX and Windows 3.1 • Windows (all versions) uses DLLs to implement the OS interfaces, among other things • Windows 3.1 DLLs ran in a single address space shared by all processes • Windows DLLs run in the process’ virtual address space
DYNAMIC LINK LIBRARIES (4 of 4) • Multiple Windows processes can share DLL code • Code, when called, runs as part of the calling process and thread • The library can use the calling process’ resources (file handles, ...) • It uses the calling thread’s stack • DLLs must be thread-safe • DLLs can export variables as well as function entry points
IMPLICIT LINKING (1 of 2) • Implicit, or load-time, linking is the easier of the two techniques • Steps: • Collect and build the function source as a DLL • The build process constructs a .LIB library file • A “stub” for the actual code • Place the .LIB file in the project library directory • The build process also constructs a .DLL file
IMPLICIT LINKING (2 of 2) • The .DLL file contains the actual executable image • It is placed in the same directory as the application that uses it • The current working directory is the secondary search location • Then the system directory, the Windows directory, and the PATH • The program loads the DLL during its initialization • You must “export” the function interfaces in the DLL source
EXPORTING AND IMPORTING INTERFACES (1 of 3) • The DLL entry point must be declared • In Microsoft C, use the _declspec (dllexport) storage modifier: • _declspec (dllexport) DWORD MyFunction (...); • The calling program declares that the function is to be imported • Use the _declspec (dllimport) storage modifier
EXPORTING AND IMPORTING INTERFACES (2 of 3) • Standard technique in include file • Use a preprocessor variable such as “MYPROJ_EXPORTS“ • “MYPROJ” is the project name #ifdef MYPROJ_EXPORTS #define LIBSPEC _declspec (dllexport) #else #define LIBSPEC _declspec (dllimport) #endif LIBSPEC DWORD MyFunction (...);
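A minimal sketch of how the macro above is used in practice; the file names, the MyFunction signature, and the MyProj project name are illustrative. The DLL project defines MYPROJ_EXPORTS, so LIBSPEC expands to dllexport there and to dllimport in client projects.

/* MyFunction.h — shared by the DLL and its clients */
#ifdef MYPROJ_EXPORTS
#define LIBSPEC _declspec (dllexport)
#else
#define LIBSPEC _declspec (dllimport)
#endif
LIBSPEC DWORD MyFunction (LPCTSTR);

/* MyFunction.c — built into MyProj.dll with MYPROJ_EXPORTS defined */
#include <windows.h>
#include "MyFunction.h"
LIBSPEC DWORD MyFunction (LPCTSTR msg)
{
    /* ... library work ... */
    return 0;
}

/* Caller.c — links implicitly against the MyProj.lib stub */
#include <windows.h>
#include "MyFunction.h"
int main (void)
{
    return (int) MyFunction (TEXT ("hello"));
}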