Front End vs Back End of a Compiler • The phases of a compiler are collected into a front end and a back end. • The front end consists of those phases that depend primarily on the source program. These normally include lexical and syntactic analysis, semantic analysis, and the generation of intermediate code.
Front End vs Back End of a Compiler (Cont’d) • A certain amount of code optimization can be done by the front end as well.
Front End vs Back End of a Compiler (Cont’d) • The BACK END includes the code optimization phase and the final code generation phase, along with the necessary error handling and symbol table operations.
Front End vs Back End of a Compiler (Cont’d) • The front end analyzes the source program and produces intermediate code, while the back end synthesizes the target program from the intermediate code. • A naive approach to building the front end might run its phases serially.
Front End vs Back End of a Compiler (Cont’d) • It is also tempting to compile several different languages into the same intermediate language and use a common back end for the different front ends, thereby obtaining several compilers for one machine. • However, because of subtle differences in the viewpoints of different languages, there has been only limited success in this direction.
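Front End vs Back End of a Compiler (Cont’d) • The split can be pictured as two functions joined by the intermediate code. Below is a minimal, hypothetical sketch (a toy one-statement language; all names invented), not a real compiler: the front end emits three-address code, the back end turns it into pseudo-assembly, and a second front end for another language could reuse the same back end.

```python
# Toy sketch of the front-end / back-end split (all names hypothetical).
def front_end(source):
    """Front end: analyzes 'x = y + z' only, emitting three-address code."""
    target, expr = source.split("=")
    left, right = expr.split("+")
    return [("add", left.strip(), right.strip(), target.strip())]

def back_end(ir):
    """Back end: synthesizes pseudo-assembly from the intermediate code."""
    lines = []
    for op, a, b, dst in ir:
        lines += [f"MOV {a}, R1", f"{op.upper()} {b}, R1", f"MOV R1, {dst}"]
    return "\n".join(lines)

print(back_end(front_end("cost = price + tax")))
# MOV price, R1
# ADD tax, R1
# MOV R1, cost
```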
Passes • In an implementation of a compiler, portions of one or more phases are combined into a module called a pass.
Passes (cont’d) • Several phases of a compiler are usually implemented in a single pass consisting of reading an input file and writing an output file. • It is common for several phases to be grouped into one pass and for the activity of these phases to be interleaved during the pass.
Passes (cont’d) • For example, lexical analysis, syntax analysis, semantic analysis, and intermediate code generation might be grouped into one pass. • If so, the token stream after lexical analysis may be translated directly into intermediate code.
Passes (cont’d) • A pass reads the source program or the output of the previous pass, makes the transformations specified by its phases, and writes output into an intermediate file, which may then be read by a subsequent pass.
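Passes (cont’d) • As a concrete illustration, here is a minimal sketch of a multi-pass driver; the file names and phase functions are hypothetical stand-ins, with each pass reading the previous pass’s output file and writing a new intermediate file.

```python
# Hypothetical three-pass driver: each pass reads the previous output file.
def run_pass(phase, in_path, out_path):
    with open(in_path) as src, open(out_path, "w") as dst:
        dst.write(phase(src.read()))

def pass1(text): return text  # e.g. analysis phases + intermediate code gen
def pass2(text): return text  # e.g. code optimization
def pass3(text): return text  # e.g. final code generation

with open("prog.src", "w") as f:         # a stand-in source program
    f.write("cost = price + tax\n")

run_pass(pass1, "prog.src", "prog.ir")   # source -> intermediate file
run_pass(pass2, "prog.ir", "prog.opt")   # intermediate -> optimized file
run_pass(pass3, "prog.opt", "prog.asm")  # optimized -> target program
```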
Multi-Pass Compiler • A multi-pass compiler uses less space than a single-pass compiler, since the space occupied by the compiler program for one pass can be reused by the following pass.
Multi-Pass Compiler (cont’d) • A multi-pass compiler is of course slower than a single-pass compiler, because each pass reads and writes an intermediate file.
Multi-Pass Compiler (cont’d) • Thus compilers running on computers with small memory would normally use several passes, while on a computer with a large random-access memory, a compiler with fewer passes is possible.
Reducing the Number of Passes It is desirable to have relatively few passes, since it takes time to read and write intermediate files. On the other hand, if we group several phases into one pass, we may be forced to keep the entire program in memory.
Reducing the Number of Passes (cont’d) This is because one phase may need information in a different order than a previous phase produces it.
Reducing the Number of Passes (cont’d) The internal form of the program may be considerably larger than either the source program or the target program, so this space may not be a trivial matter.
Reducing the Number of Passes (cont’d) For some phases, grouping into one pass presents few problems. For example, the interface between the lexical and syntactic analyzers can often be limited to a single token, as the sketch below illustrates.
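Reducing the Number of Passes (cont’d) As a sketch of that single-token interface (illustrative only; the token classes are invented), the lexer below hands the parser one token at a time on demand, so the two phases interleave in one pass and no token file is ever written:

```python
import re

# One pattern per token class: number, identifier, or any other character.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|(\S))")

def tokens(text):
    """Lazy lexer: the parser pulls tokens one at a time via next()."""
    for m in TOKEN_RE.finditer(text):
        num, name, punct = m.groups()
        if num:
            yield ("NUM", num)
        elif name:
            yield ("ID", name)
        else:
            yield ("PUNCT", punct)

stream = tokens("x = y + 42")
print(next(stream))  # ('ID', 'x'); a parser would consume the rest the same way
```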
Reducing the Number of Passes (cont’d) On the other hand, it is often very hard to perform code generation until the intermediate representation has been completely generated.
Reducing the Number of Passes (cont’d) For example, languages like PL/I and Algol 68 permit variables to be used before they are declared. We cannot generate the target code for a construct if we do not know the types of the variables involved in that construct.
Reducing the Number of Passes (cont’d) • Similarly, most languages allow gotos that jump forward in the code. • We cannot determine the target address of such a jump until we have seen the intervening source code and generated target code for it.
Reducing the Number of Passes (cont’d) • In some cases, it is possible to leave a blank slot for missing information, and fill in the slot when the information becomes available. • In particular, intermediate and target code generation can often be merged into one pass using a technique called "backpatching".
Reducing the Number of Passes (cont’d) We can combine the actions of the passes as follows. Suppose we encounter an assembly statement that is a forward reference, say GOTO target.
Reducing the Number of Passes (cont’d) We generate a skeletal instruction, with the machine operation code for GOTO and blanks for the address. All instructions with blanks for the address of target are kept on a list associated with the symbol-table entry for target.
Reducing the Number of Passes (cont’d) The blanks are filled in when we finally encounter an instruction such as target: MOV bar, R1
Reducing the Number of Passes (cont’d) and determine the value of target; it is the address of the current instruction. We then backpatch, going down the list for target and substituting its address for the blanks in the address fields of all the instructions that need it.
Reducing the Number of Passes (cont’d) This approach is easy to implement if the instructions can be kept in memory until all target addresses can be determined.
Reducing the Number of Passes (cont’d) This approach is a reasonable one for an assembler that can keep all of its output in memory, since the intermediate and final representations of code for an assembler are roughly the same,
Reducing the Number of Passes (cont’d) • and surely of approximately the same length, so backpatching over the length of the entire assembly program is not infeasible. • However, in a compiler with a space-consuming intermediate code, we may need to be careful about the distance over which backpatching occurs.
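Reducing the Number of Passes (cont’d) The following is a minimal sketch of backpatching for forward GOTOs (invented names and pseudo-assembly, not any particular assembler). Instructions are kept in memory; a GOTO to a label not yet seen is emitted with a blank address, and the blank’s position is recorded on a list keyed by the label, standing in for the symbol-table entry. Defining the label fills in every waiting blank:

```python
code = []      # generated instructions, indexed by address
blanks = {}    # label -> addresses of instructions awaiting that label
labels = {}    # label -> resolved address

def emit_goto(label):
    if label in labels:                        # backward reference: known
        code.append(f"GOTO {labels[label]}")
    else:                                      # forward reference: leave blank
        blanks.setdefault(label, []).append(len(code))
        code.append("GOTO ____")

def define_label(label):
    labels[label] = len(code)                  # address of next instruction
    for addr in blanks.pop(label, []):         # backpatch waiting GOTOs
        code[addr] = f"GOTO {labels[label]}"

emit_goto("target")          # forward reference: emitted with a blank
code.append("ADD x, R1")
define_label("target")       # the blank at address 0 is now filled in
code.append("MOV bar, R1")
print(code)                  # ['GOTO 2', 'ADD x, R1', 'MOV bar, R1']
```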
Compiler Construction Tools • A number of tools have been developed specifically to help construct compilers. These tools, variously called compiler-compilers, compiler-generators, or translator-writing systems, produce a compiler from some form of specification of a source language and a target machine language.
Compiler Construction Tools (cont’d) • Largely, they are oriented around a particular model of languages, and they are most suitable for generating compilers for languages similar to the model.
Compiler Construction Tools (cont’d) • For example, it is tempting to assume that lexical analyzers for all languages are essentially the same, except for the particular keywords and signs recognized.
Compiler Construction Tools (cont’d) • Many compiler-compilers do in fact produce fixed lexical analysis routines for use in the generated compiler.
Compiler Construction Tools (cont’d) • These routines differ only in the list of keywords recognized, and this list is all that needs to be supplied by the user. The approach is valid, but may be unworkable if it is required to recognize nonstandard tokens, such as identifiers that may include certain characters other than letters and digits.
Compiler Construction Tools (cont’d) • Some general tools have been created for the automatic design of specific compiler components. • These tools use specialized languages for specifying and implementing the component, and many use algorithms that are quite sophisticated.
Compiler Construction Tools (cont’d) • The most successful tools are those that hide the details of the generation algorithm and produce components that can be easily integrated into the remainder of a compiler.
Compiler Construction Tools (cont’d) The following is a list of some useful compiler construction tools. • Parser Generators • Scanner Generators • Syntax-Directed Translation Engines • Automatic Code Generators • Data-Flow Engines
Parser Generators • These produce syntax analyzers, normally from input that is based on a context-free grammar. In early compilers, syntax analysis consumed not only a large fraction of the running time of a compiler but also a large fraction of the intellectual effort of writing a compiler.
Parser Generators (cont’d) • This phase is now considered one of the easiest to implement. • Many parser generators utilize powerful parsing algorithms that are too complex to be carried out by hand.
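Parser Generators (cont’d) • For flavor, here is a hand-written sketch of the kind of predictive routine a generator might emit from the tiny grammar E -> T ('+' T)*, T -> NUM; the grammar and names are illustrative, not the output of any particular tool.

```python
# Recursive routines mirroring the grammar E -> T ('+' T)*, T -> NUM.
def parse_E(toks, i=0):
    i = parse_T(toks, i)
    while i < len(toks) and toks[i] == "+":
        i = parse_T(toks, i + 1)
    return i

def parse_T(toks, i):
    if i < len(toks) and toks[i].isdigit():
        return i + 1
    raise SyntaxError(f"expected a number at position {i}")

parse_E(["1", "+", "2", "+", "3"])  # accepts; malformed input raises SyntaxError
```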
Scanner Generators • These automatically generate lexical analyzers, normally from a specification based on regular expressions.
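Scanner Generators (cont’d) • A miniature scanner generator can be sketched in a few lines (illustrative only, not the interface of lex or any real tool): given (token name, regular expression) pairs, it builds and returns a lexical analyzer.

```python
import re

def make_scanner(spec):
    """Build a lexer from a regex-based specification, lex-style."""
    master = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in spec))
    def scan(text):
        for m in master.finditer(text):
            if m.lastgroup != "SKIP":          # drop whitespace tokens
                yield (m.lastgroup, m.group())
    return scan

scan = make_scanner([("NUM", r"\d+"), ("ID", r"[A-Za-z_]\w*"),
                     ("OP", r"[+\-*/=]"), ("SKIP", r"\s+")])
print(list(scan("rate = 60 + bonus")))
# [('ID', 'rate'), ('OP', '='), ('NUM', '60'), ('OP', '+'), ('ID', 'bonus')]
```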
Syntax-Directed Translation Engines • These produce collections of routines that walk the parse tree, generating intermediate code.
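Syntax-Directed Translation Engines (cont’d) • A minimal sketch of such a tree walk (the tree shape and names are hypothetical): interior nodes are (operator, left, right) tuples, and the walker emits three-address intermediate code bottom-up.

```python
temp_count = 0
def new_temp():
    """Return a fresh temporary name t1, t2, ..."""
    global temp_count
    temp_count += 1
    return f"t{temp_count}"

def gen(node, out):
    """Walk the tree, append three-address code, return the value's name."""
    if isinstance(node, str):          # a leaf: just a variable name
        return node
    op, left, right = node
    a, b, t = gen(left, out), gen(right, out), new_temp()
    out.append(f"{t} = {a} {op} {b}")
    return t

code = []
gen(("+", "a", ("*", "b", "c")), code)   # the tree for a + b * c
print("\n".join(code))
# t1 = b * c
# t2 = a + t1
```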
Automatic Code Generators • Such a tool takes a collection of rules that define the translation of each operation of the intermediate language into the machine language for the target machine.
Automatic Code Generators (cont’d) • The rules must include sufficient detail to handle the different possible access methods for data; e.g., variables may be in registers, in a fixed (static) location in memory, or allocated a position on a stack.
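Automatic Code Generators (cont’d) • A rule-driven generator might be sketched as follows (the rules, templates, and operand descriptors are all invented for illustration): each rule maps an intermediate-language operation to a target-instruction template, and each operand records its access method, whether register, static memory location, or stack slot.

```python
def operand(kind, where):
    """Render an operand by access method: register, static, or stack."""
    return {"reg":    f"R{where}",
            "static": f"[{where}]",      # fixed (static) memory location
            "stack":  f"[SP+{where}]"}[kind]

RULES = {"add": "ADD {src}, {dst}",      # intermediate op -> target template
         "mov": "MOV {src}, {dst}"}

def generate(op, src, dst):
    return RULES[op].format(src=operand(*src), dst=operand(*dst))

print(generate("add", ("static", "bar"), ("reg", 1)))   # ADD [bar], R1
print(generate("mov", ("stack", 8), ("reg", 2)))        # MOV [SP+8], R2
```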
Data-Flow Engines • Much of the information needed to perform good code optimization involves "data-flow analysis", the gathering of information about how values are transmitted from one part of the program to each other part.
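Data-Flow Engines (cont’d) • As a small taste of data-flow analysis, the sketch below computes live variables over a straight-line instruction list; a real data-flow engine would iterate over a control-flow graph until the sets stabilize, and the instruction encoding here is invented.

```python
def liveness(instructions):
    """Each instruction is (defined_var_or_None, set_of_used_vars).
    Walking backward, a variable is live if it is used later
    before being redefined."""
    live, live_after = set(), []
    for defined, used in reversed(instructions):
        live_after.append(set(live))
        live.discard(defined)   # a definition kills liveness
        live |= used            # uses make variables live
    return list(reversed(live_after))

prog = [("t1", {"b", "c"}),     # t1 = b * c
        ("t2", {"a", "t1"}),    # t2 = a + t1
        (None, {"t2"})]         # use t2
print(liveness(prog))           # live after each: {a, t1}, then {t2}, then {}
```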
[Diagram: the phases of a compiler. The source program passes through the lexical analyzer, syntax analyzer, semantic analyzer, intermediate code generator, code optimizer, and code generator to produce the target program; the symbol-table manager and error handler interact with every phase.]
A compiler operates in phases, each of which transforms the source program from one representation to another. • In practice, some of the phases may be grouped together. • The first three phases form the bulk of the analysis portion of a compiler.
Two other activities, symbol-table management and error handling, are shown interacting with the six phases of lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation. • Informally, we shall also call the symbol-table manager and the error handler phases.