
Chapter 8

Chapter 8. Intermediate Code: Basic Code Generation Techniques. Gang S. Liu, College of Computer Science & Technology, Harbin Engineering University.


Presentation Transcript


  1. Chapter 8 Intermediate Code Basic Code Generation Techniques Gang S. Liu College of Computer Science & Technology Harbin Engineering University

  2. Introduction • The final task of the compiler is to generate executable code for a target machine that represents the semantics of the source code. • This is the most complex phase of a compiler. • It depends on detailed information about • the target architecture, • the structure of the runtime environment, • the OS. • The compiler also attempts to optimize the speed and the size of the target code, taking advantage of special features of the target machine (registers, addressing modes, pipelining, and cache memory). Samuel2005@126.com

  3. Introduction (cont) • Code generation is typically broken into several steps, often involving an abstract form of code called intermediate code. • Two popular forms are • three-address code • P-code

  4. Intermediate Code • A data structure that represents the source program during translation is called an intermediate representation (IR). • So far, the abstract syntax tree has been used as the principal IR. • An abstract syntax tree does not resemble target code. • Example: control flow constructs. • A new form of IR is necessary. • An intermediate representation that closely resembles target code is called intermediate code.

  5. Form of Intermediate Code • Intermediate code is a linearization of the syntax tree. • Intermediate code • can be very high level, representing operations almost as abstractly as the syntax tree, or can closely resemble target code. • may or may not use detailed information about the target machine and runtime environment.

  6. Use of Intermediate Code • Intermediate code is useful • for producing extremely efficient code • in making a compiler more easily retargetable (if the intermediate code is relatively target independent). [Diagram: Source Language 1 and Source Language 2 each translate to the same Intermediate Code, which can then be translated to Target Language 1 or Target Language 2.]

  7. Intermediate code generation sits in the middle of the compiler: it is a bridge that translates the source program into an intermediate representation, which is then translated into target code. The position of intermediate code generation in the compiler is shown in Figure 8.1.

  8. [Figure 8.1: the position of intermediate code generation in the compiler]

  9. There are two advantages of using intermediate code: • first, different target machines can be attached to the same front end, up through the intermediate code generation phase; • second, a machine-independent code optimizer can be applied to the intermediate representation.

  10. Intermediate codes are machine-independent, but they are close to machine instructions. The intermediate code generator converts the given program in a source language into an equivalent program in an intermediate language.

  11. Many different languages can serve as the intermediate language, and the designer of the compiler decides which. Postfix notation, four-address code (quadruples), three-address code, portable code, and assembly code can all be used as an intermediate language. In this chapter, we introduce them in detail.

  12. 8.1 Postfix Notation • If we represent the source program in postfix notation, it is easy to translate into target code, because the target instruction order is the same as the operator order in the postfix notation.

  13. 8.1.1 The definition of postfix notation • The postfix notation for the expression a+b*c is abc*+. The rules for the postfix notation of an expression are as follows: 1. The order of the operands is the same as in the original expression. 2. Each operator follows its operands, and there are no parentheses in postfix notation. 3. The operators appear in the order in which they are calculated.

  14. For example, the postfix notation for the expression a*(b+c/d) is abcd/+*; the translation just follows the steps above. • First, by rule 1, we get the order of the operands: abcd. • Second, by rule 2, the first operator in the operator order is /, because it immediately follows its operands c and d; moreover, by rule 3, / is calculated first. The second operator is +: because of the parentheses in the original expression, + must be calculated earlier than *. The last operator is *, because * is calculated last.

  15. As another example, the postfix notation for the expression a*b+(c-d)/e is ab*cd-e/+. From these examples we can see that it is a bit difficult to translate an expression into its postfix notation by hand, so E. W. Dijkstra of the Netherlands devised a method to solve the problem.
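Once an expression is in postfix form, the rule "each operator follows its operands" means it can be evaluated with a single operand stack and no parentheses. A minimal sketch in Python (the function name eval_postfix and the single-letter-variable environment are assumptions for illustration, not part of the slides):

```python
def eval_postfix(tokens, env):
    """Evaluate a postfix token list using a single operand stack.
    env maps single-letter variable names to their values."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()         # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(env[tok])  # operand: push its value
    return stack[0]

# a*(b+c/d) in postfix is abcd/+* (slide 14's example)
print(eval_postfix(list("abcd/+*"), {"a": 2, "b": 3, "c": 8, "d": 4}))
# 2*(3+8/4) = 10.0
```

Note that the target-code order discussed on slide 12 falls out directly: the operators are applied in exactly the order they appear in the postfix string.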

  16. 8.1.2 E. W. Dijkstra's Method • E. W. Dijkstra's method uses two stacks: one stack stores operands, the other stores operators. The procedure is shown in Figure 8.2, and the steps of the method are as follows:

  17. [Figure 8.2: the two-stack translation procedure]

  18. The expression is scanned from left to right. Before scanning begins, we push the marker # onto the bottom of the operator stack, and similarly append the marker # to the end of the expression to label its terminal. When the two # markers meet, scanning is finished. The steps of scanning are: 1. If the current symbol is an operand, push it onto the operand stack.

  19. 2. If it is an operator, compare it with the operator on top of the operator stack. While the priority of the operator on top of the stack is greater than or equal to that of the scanned operator, pop the top operator to the output. When the priority of the operator on top of the stack is less than that of the scanned operator, push the scanned operator onto the operator stack.

  20. 3. If it is a left parenthesis, push it onto the operator stack, and then compare the operators within the parentheses as above. • If it is a right parenthesis, pop all the operators back to the matching left parenthesis; the parentheses themselves disappear and are not represented in the postfix notation. 4. Return to step 1 until the two # markers meet.
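The steps above can be sketched in Python. This is a minimal sketch, not the slides' own code: it handles only single-character operands, the four binary operators, parentheses, and the # terminal marker; the priority table is an assumption consistent with ordinary arithmetic precedence.

```python
def to_postfix(expr):
    """Two-stack translation of an infix expression to postfix:
    operands go straight to the output, operators wait on a stack.
    '#' marks both the stack bottom and the end of the input."""
    prec = {"#": 0, "(": 1, "+": 2, "-": 2, "*": 3, "/": 3}
    operators = ["#"]               # push '#' onto the stack bottom
    output = []
    for ch in expr + "#":           # append '#' as the terminal marker
        if ch.isalnum():            # step 1: operand -> output
            output.append(ch)
        elif ch == "(":             # step 3: push '(' and continue inside
            operators.append(ch)
        elif ch == ")":             # pop back to the matching '('
            while operators[-1] != "(":
                output.append(operators.pop())
            operators.pop()         # the parentheses themselves disappear
        else:                       # step 2: compare priorities
            while operators[-1] != "#" and prec[operators[-1]] >= prec[ch]:
                output.append(operators.pop())
            if ch == "#":           # step 4: the two '#' markers meet
                break
            operators.append(ch)
    return "".join(output)

print(to_postfix("a+b*c"))      # the slides' Example 8.1
print(to_postfix("a*(b+c/d)"))  # the example from slide 14
```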

  21. Example 8.1 • For the expression a+b*c, the postfix notation is abc*+. From the translation procedure shown in Figure 8.3, we can see that the operator order is *+, which is also the order in which operators are popped from the operator stack, and the order of calculation.

  22. [Figure 8.3: translating a+b*c with the two stacks]

  23. [Figure 8.3, continued]

  24. Three-Address Code • The most basic instruction of three-address code is x = y op z. • The use of the address x differs from the uses of the addresses y and z. • y and z can represent constants and literal values.


  26. Example: 2*a+(b-3). [Syntax tree: + at the root, with left child * over the leaves 2 and a, and right child - over the leaves b and 3.] Left-to-right linearization: t1=2*a, t2=b-3, t3=t1+t2. Right-to-left linearization: t1=b-3, t2=2*a, t3=t2+t1.
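A left-to-right linearization is just a postorder walk of the syntax tree that emits one three-address instruction per interior node. A minimal Python sketch (the tuple encoding of the tree and the function name linearize are illustrative assumptions):

```python
import itertools

def linearize(node, code, counter):
    """Left-to-right postorder walk: emit one three-address
    instruction per interior node, return the node's address."""
    if not isinstance(node, tuple):          # leaf: constant or name
        return str(node)
    op, left, right = node
    l = linearize(left, code, counter)       # left subtree first
    r = linearize(right, code, counter)
    t = "t%d" % next(counter)                # fresh temporary
    code.append("%s=%s%s%s" % (t, l, op, r))
    return t

# 2*a+(b-3) as a nested tuple: (+ over (* 2 a) and (- b 3))
tree = ("+", ("*", 2, "a"), ("-", "b", 3))
code = []
linearize(tree, code, itertools.count(1))
print(code)   # ['t1=2*a', 't2=b-3', 't3=t1+t2']
```

Visiting the right subtree before the left one in the same walk would produce the right-to-left linearization shown on the slide.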

  27. Three-Address Code (cont) • It is necessary to vary the form of three-address code to express all constructs (e.g. t2=-t1). • No standard form exists.

  28. Implementation of Three-Address Code • Each three-address instruction is implemented as a record structure containing several fields. • The entire sequence is an array or a linked list. • The most common implementation requires four fields – a quadruple: • one for the operation and three for addresses. • For instructions that need fewer addresses, one or more address fields are given null or "empty" values.
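A quadruple record can be sketched as follows; this is a minimal illustration (the Quad namedtuple and the use of None for "empty" fields are assumptions, not the slides' implementation), using the first few instructions of the factorial example that appears later in the chapter:

```python
from collections import namedtuple

# One record per instruction: an operation plus three address fields;
# unused address fields hold None ("empty").
Quad = namedtuple("Quad", "op arg1 arg2 result")

quads = [
    Quad("rd",   "x",  None,   None),   # read x
    Quad("gt",   "x",  0,      "t1"),   # t1 = x > 0
    Quad("if_f", "t1", "L1",   None),   # jump to L1 if t1 is false
    Quad("asn",  1,    "fact", None),   # fact = 1
]

for q in quads:
    print(q)
```

Storing the sequence in a Python list corresponds to the array implementation mentioned above; a linked list would allow cheaper insertion and deletion of instructions.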

  29. Factorial Program
  { Sample program
    in TINY language -
    computes factorial }
  read x; { input an integer }
  if 0 < x then { don't compute if x <= 0 }
    fact := 1;
    repeat
      fact := fact * x;
      x := x - 1
    until x = 0;
    write fact { output factorial of x }
  end

  30. Syntax Tree for Factorial Program [figure]

  31. Example: three-address code for the factorial program (TINY source as on slide 29)
  (rd, x, _, _)
  (gt, x, 0, t1)
  (if_f, t1, L1, _)
  (asn, 1, fact, _)
  (lab, L2, _, _)
  (mul, fact, x, t2)
  (asn, t2, fact, _)
  (sub, x, 1, t3)
  (asn, t3, x, _)
  (eq, x, 0, t4)
  (if_f, t4, L2, _)
  (wri, fact, _, _)
  (lab, L1, _, _)
  (halt, _, _, _)

  32. Different Representation • The instructions themselves represent the temporaries. • This reduces the number of address fields from three to two. • Such a representation is called a triple. • The amount of space is reduced. • Major drawback: moving instructions around becomes difficult in an array representation, since operands refer to instructions by position.

  33. Example. The quadruples of slide 31:
  (rd, x, _, _) (gt, x, 0, t1) (if_f, t1, L1, _) (asn, 1, fact, _) (lab, L2, _, _) (mul, fact, x, t2) (asn, t2, fact, _) (sub, x, 1, t3) (asn, t3, x, _) (eq, x, 0, t4) (if_f, t4, L2, _) (wri, fact, _, _) (lab, L1, _, _) (halt, _, _, _)
  The same program as triples (labels are replaced by instruction numbers):
  (0) (rd, x, _) (1) (gt, x, 0) (2) (if_f, (1), (11)) (3) (asn, 1, fact) (4) (mul, fact, x) (5) (asn, (4), fact) (6) (sub, x, 1) (7) (asn, (6), x) (8) (eq, x, 0) (9) (if_f, (8), (4)) (10) (wri, fact, _) (11) (halt, _, _)
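The triple representation, and the cross-instruction references that make reordering difficult, can be sketched in Python. This is a simplified, hypothetical fragment (not the slides' full factorial program); the ("ref", i) encoding of an operand that names instruction i is an assumption for illustration:

```python
# In a triple, each instruction has only an operation and two operands;
# an operand of the form ("ref", i) names the result of instruction i,
# so no explicit temporary names are needed.
triples = [
    ("gt",  "x", 0),                    # (0)  x > 0 ?
    ("if_f", ("ref", 0), ("ref", 5)),   # (1)  jump past (5) if false
    ("asn", 1, "fact"),                 # (2)  fact = 1
    ("mul", "fact", "x"),               # (3)  fact * x
    ("asn", ("ref", 3), "fact"),        # (4)  fact = result of (3)
    ("wri", "fact", None),              # (5)  write fact
]

def uses(triples, i):
    """Indices of instructions whose operands reference instruction i.
    These positional references are what make moving instructions
    around in an array of triples difficult."""
    return [j for j, (op, a, b) in enumerate(triples)
            if ("ref", i) in (a, b)]

print(uses(triples, 3))   # instruction (4) uses the result of (3)
```

Inserting or deleting an instruction shifts every later index, so every ("ref", i) operand after the change would have to be renumbered.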

  34. P-Code • The standard assembly-language code produced by Pascal compilers in the 1970s and 80s. • Designed for a hypothetical stack machine, called the P-machine. • Interpreters were written for actual machines. • This made Pascal compilers easily portable: • only the interpreter must be rewritten for a new platform. • Modifications of P-code are used in a number of compilers, mostly for Pascal-like languages.

  35. P-Machine • Consists of • a code memory • an unspecified data memory for named variables • a stack for temporary data • the registers needed to maintain the stack and support execution.

  36. Example 1: 2*a+(b-3)
  ldc 2 ; load constant 2
  lod a ; load value of variable a
  mpi   ; integer multiplication
  lod b ; load value of variable b
  ldc 3 ; load constant 3
  sbi   ; integer subtraction
  adi   ; integer addition
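The behavior of this P-code sequence can be checked with a tiny interpreter for just the five opcodes used in Example 1. This is a sketch, not a real P-machine implementation; the function name run_pcode and the dictionary-as-data-memory are assumptions:

```python
def run_pcode(program, memory):
    """Execute a small subset of P-code on an operand stack.
    memory maps variable names to integer values."""
    stack = []
    for instr in program:
        op, *args = instr.split()
        if op == "ldc":                       # load constant
            stack.append(int(args[0]))
        elif op == "lod":                     # load value of a variable
            stack.append(memory[args[0]])
        elif op == "mpi":                     # integer multiplication
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "sbi":                     # integer subtraction
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif op == "adi":                     # integer addition
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

prog = ["ldc 2", "lod a", "mpi", "lod b", "ldc 3", "sbi", "adi"]
print(run_pcode(prog, {"a": 5, "b": 7}))   # 2*5 + (7-3) = 14
```

All temporaries live on the stack, which is why no temporary names appear anywhere in the instruction sequence.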

  37. Example 2: x:=y+1
  lda x ; load address of x
  lod y ; load value of y
  ldc 1 ; load constant 1
  adi   ; add
  sto   ; store top to address
        ; below top & pop both

  38. Factorial Program
  lda x    ; load address of x
  rdi      ; read an integer, store to
           ; address on top of the stack
           ; (& pop it)
  lod x    ; load the value of x
  ldc 0    ; load constant 0
  grt      ; pop and compare top two
           ; values, push the Boolean
           ; result
  fjp L1   ; pop Boolean value,
           ; jump to L1 if false
  lda fact ; load address of fact
  ldc 1    ; load constant 1
  sto      ; pop two values, storing
           ; the first to the address
           ; represented by the second
  lab L2   ; definition of label L2
  lda fact ; load address of fact
  lod fact ; load value of fact
  lod x    ; load value of x
  mpi      ; multiply
  sto      ; store top to
           ; address of second &
           ; pop
  lda x    ; load address of x
  lod x    ; load the value of x
  ldc 1    ; load constant 1
  sbi      ; subtract
  sto
  lod x
  ldc 0
  equ      ; test for equality
  fjp L2   ; jump to L2 if false
  lod fact
  wri      ; write
  lab L1
  stp      ; stop

  39. P-Code and Three-Address Code • P-code • is closer to an actual machine. • Its instructions require fewer addresses: • "one-address" or "zero-address" instructions. • It is less compact in terms of the number of instructions. • It is not "self-contained": • instructions operate implicitly on a stack. • All temporary values live on the stack, so there is no need for temporary names.

  40. Generation of Target Code • Involves two standard techniques • Macro expansion • replaces each intermediate code instruction with an equivalent sequence of target code instructions. • Static simulation • straight-line simulation of the effects of the intermediate code, generating target code to match these effects.

  41. Example
  Grammar:
    exp → id = exp | aexp
    aexp → aexp + factor | factor
    factor → (exp) | num | id
  Expression: (x=x+3)+4
  P-code: lda x; lod x; ldc 3; adi; stn; ldc 4; adi
  Three-address code: t1 = x+3; x = t1; t2 = t1+4

  42. Static Simulation • After lda x, lod x, ldc 3, the stack holds (from the top) the constant 3, the value of x, and the address of x. • The adi computes t1=x+3, leaving (from the top) t1 and the address of x on the stack.

  43. Static Simulation (cont) • With t1 on top of the address of x, the stn stores t1 to that address (x=t1) and leaves t1 on the stack.

  44. Static Simulation (cont) • The ldc 4 pushes 4 on top of t1; the final adi computes t2=t1+4, leaving t2 on top of the stack. • The simulation has thus produced the three-address code t1 = x+3; x = t1; t2 = t1+4.
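The static simulation of slides 42-44 can be sketched as a symbolic executor that tracks what kind of value sits in each stack slot and emits one three-address instruction per computed value. A minimal sketch for the handful of opcodes in this example (the tagged-tuple stack entries and the function name static_simulate are assumptions for illustration):

```python
from itertools import count

def static_simulate(pcode):
    """Symbolically execute a straight-line P-code sequence,
    emitting equivalent three-address code."""
    stack, out = [], []
    temps = count(1)
    for instr in pcode:
        op, *args = instr.split()
        if op == "lda":                      # push an address
            stack.append(("addr", args[0]))
        elif op in ("lod", "ldc"):           # push a value (name or const)
            stack.append(("val", args[0]))
        elif op == "adi":                    # pop two values, emit ti = a+b
            b, a = stack.pop(), stack.pop()
            t = "t%d" % next(temps)
            out.append("%s = %s+%s" % (t, a[1], b[1]))
            stack.append(("val", t))
        elif op == "stn":                    # store, keep value on stack
            v = stack.pop()
            addr = stack.pop()
            out.append("%s = %s" % (addr[1], v[1]))
            stack.append(v)
    return out

prog = ["lda x", "lod x", "ldc 3", "adi", "stn", "ldc 4", "adi"]
print(static_simulate(prog))
```

Tracing this reproduces exactly the stack snapshots on slides 42-44 and yields the three instructions t1 = x+3, x = t1, t2 = t1+4.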


  46. Macro Expansion
  t1 = x+3  →  lda t1; lod x; ldc 3; adi; sto
  x = t1    →  lda x; lod t1; sto
  t2 = t1+4 →  lda t2; lod t1; ldc 4; adi; sto
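Because macro expansion translates each three-address instruction in isolation, it can be sketched as a per-instruction template. The sketch below handles only the two instruction shapes in this example, x=y and x=y+z (the function name expand and the string-splitting scheme are illustrative assumptions):

```python
def expand(instr):
    """Macro-expand one three-address instruction of the form
    'x=y' or 'x=y+z' into an equivalent P-code sequence,
    ignoring any context around the instruction."""
    dest, rhs = instr.split("=")
    out = ["lda " + dest]                    # address to store into
    for part in rhs.replace("+", " + ").split():
        if part == "+":
            continue
        # constants use ldc, variables and temporaries use lod
        out.append(("ldc " if part.isdigit() else "lod ") + part)
    if "+" in rhs:
        out.append("adi")
    out.append("sto")
    return out

for taci in ["t1=x+3", "x=t1", "t2=t1+4"]:
    print(taci, "->", expand(taci))
```

Expanding the three instructions independently reproduces the longer sequence on the slide; note it is less efficient than the seven-instruction result of static simulation, since each temporary is redundantly stored and reloaded.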
