In Connection with the Exam
Questions will be answered here.
The order is not chronological. Instead, general questions regarding
the exam are listed first, and the rest are ordered according to
the sequence in which the topics are covered in the course.
 How much am I supposed to cover during the presentation?
 At the final lecture, I will give a sketch of how all
questions may be organized.
It is important to point out that the pace at the exam
should be much higher than the pace at the lectures.
At the lectures, material is explained to students who,
apart from their preparation, do not know the material
in advance and are there to learn it.
At the exam, the material is presented to professionals
with the purpose of demonstrating how much the student knows.
Choose interesting material, i.e., do not give too much of
an introduction and do not dwell on trivial special cases.
 Is there any reason to prepare my own examples if you're giving the examples I should use anyway?

For the topic you draw, you get to give a presentation, and you can
(and I expect you to) use your own examples. After this part, when
I start asking questions to other topics in the curriculum, I will
provide examples to discuss when relevant.
 What does "topdown" and "bottomup" mean?
 These are the traditional terms for two different parsing techniques.
They refer to the way in which the parse tree is built during the
parsing of the input string.
I have used these terms in the lectures, but the book has
chosen other ways of naming the techniques.
In the book, the corresponding terms are
"predictive" (Section 3.2) and "LR" (Section 3.3).
For "topdown", I'm particularly interested in hearing about LL(1)
and for "bottomup", the interesting topic is LR(1).
Don't waste time on explaining LR(0) or SLR, but note that some definitions
and algorithms of relevance to LR(1) are given in those sections.
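To make the difference in tree-building order concrete, here is a small sketch (the grammar and input are invented for the example): for the grammar S -> 'a' S 'b' | ε and the input aabb, a top-down parser expands from the root along a leftmost derivation, while a bottom-up parser reduces from the leaves, tracing a rightmost derivation in reverse. For this particular grammar the two sequences of sentential forms are exact reverses of each other:

```python
# Hypothetical grammar: S -> 'a' S 'b' | epsilon, input "aabb".

# Top-down: build the tree from the root, following a leftmost derivation.
top_down_steps = [
    'S',
    'aSb',    # expand S -> aSb
    'aaSbb',  # expand S -> aSb
    'aabb',   # expand S -> epsilon
]

# Bottom-up: build the tree from the leaves, performing reductions
# that trace a rightmost derivation in reverse.
bottom_up_steps = [
    'aabb',
    'aaSbb',  # reduce epsilon -> S (between the a's and the b's)
    'aSb',    # reduce aSb -> S
    'S',      # reduce aSb -> S
]

# For this derivation, one sequence is exactly the other in reverse:
assert top_down_steps == list(reversed(bottom_up_steps))
```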
 Is it correct, as the book indicates, that recursive descent is
the same as predictive parsing?
 As often happens, different authors do not agree on how to use the
terms, and when the concept of backtracking is involved as
well, it becomes even less clear. If one allows full backtracking,
then the algorithms turn into exhaustive search, and at that point it
does not make sense to call them anything else.
So, let us assume that there is no backtracking.
Then the recursive descent approach is just one way to implement a
predictive parser. It is the term for the approach where one writes
mutually recursive functions, one function for each nonterminal,
and a number of cases in each function equal to the number of different
rules with the given nonterminal as the left side of the rule.
Another way of implementing predictive parsing without using recursive descent
would be to use a parser table.
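As a sketch of what the recursive descent approach looks like in practice (the grammar, the token names, and the helper functions here are invented for the example, not taken from the book), consider the LL(1) grammar E -> T E', E' -> '+' T E' | ε, T -> 'int' | '(' E ')'. There is one mutually recursive function per nonterminal, and each function branches on the lookahead token, with one case per rule:

```python
# Recursive-descent sketch for a small, hypothetical LL(1) grammar:
#   E  -> T E'
#   E' -> '+' T E' | epsilon
#   T  -> 'int' | '(' E ')'

def parse(tokens):
    pos = 0

    def peek():
        # Lookahead token; '$' marks the end of input.
        return tokens[pos] if pos < len(tokens) else '$'

    def eat(expected):
        nonlocal pos
        if peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {peek()!r}")
        pos += 1

    def E():           # E -> T E'
        T()
        Eprime()

    def Eprime():      # E' -> '+' T E' | epsilon
        if peek() == '+':
            eat('+')
            T()
            Eprime()
        # otherwise: the epsilon rule, consume nothing

    def T():           # T -> 'int' | '(' E ')'
        if peek() == 'int':
            eat('int')
        elif peek() == '(':
            eat('(')
            E()
            eat(')')
        else:
            raise SyntaxError(f"unexpected token {peek()!r}")

    E()
    if peek() != '$':
        raise SyntaxError("trailing input")
    return True
```

Note how the lookahead alone (one token) decides which rule to use in each function; that single-token decision is exactly what makes the grammar LL(1).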
 Why do you sometimes get sequences like Y_{2}, ..., Y_{1} when you
run Algorithm 3.13 on an example?
 This is standard mathematical notation. If you consider a
sequence Y_{1}, ..., Y_{k} and look at some
example subsequences, then Y_{i}, ..., Y_{j} is

Y_{i}, Y_{i+1}, if j = i+1.

Y_{i}, if j = i.

the empty sequence, if j = i-1. This includes the case where k = 0.
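If it helps, Python's 0-based slices mimic this notation (the list Y below is of course just made up for illustration): Y_{i}, ..., Y_{j} corresponds to Y[i-1:j], and the slice becomes empty exactly when j = i-1.

```python
# The sequence Y_1, ..., Y_4 as a 0-based Python list;
# Y_i, ..., Y_j corresponds to the slice Y[i-1:j].
Y = ['Y1', 'Y2', 'Y3', 'Y4']

print(Y[1:3])  # Y_2, ..., Y_3  ->  ['Y2', 'Y3']
print(Y[1:2])  # Y_2, ..., Y_2  ->  ['Y2']
print(Y[1:1])  # Y_2, ..., Y_1  ->  []   (j = i-1: the empty sequence)
```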
 I think that the book's coverage of code generation is very
different from your coverage at the lectures!
 Yes, the approach is somewhat different.
For the oral exam, my recommendation is that you use
Supplementary Notes for DM546
as your starting point; see also my explanations in connection
with the curriculum.
 What can you talk about with respect to optimization?
 The purpose of optimization and the different forms of
optimization is a good starting point.
Peephole optimization ought to be covered, including
termination of the process (termination function); it could
be an advantage to demonstrate this termination argument
on an interesting example, i.e., an example which among other things
contains patterns that actually increase the size of the code;
see Supplementary Notes for DM546.
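To illustrate the termination argument (the opcodes, weights, and rules below are invented for the example and are not the ones from the notes): give each instruction a weight and let the termination function be the total weight of the code. Then even a rule that makes the code longer can strictly decrease the measure, which is what guarantees that the rewriting process stops:

```python
# Peephole sketch with an explicit termination function.
# Every rewrite strictly decreases the measure, so the process
# terminates, even though the mul3 rule makes the code LONGER
# (1 instruction becomes 3).

WEIGHT = {'mul3': 10, 'mov': 1, 'shl': 1, 'add': 1, 'nop': 1}

def measure(code):
    # Termination function: total weight of the instruction sequence.
    return sum(WEIGHT[instr[0]] for instr in code)

def rewrite_once(code):
    # Apply the first applicable rule at the first applicable position;
    # return None when no rule applies.
    for i, instr in enumerate(code):
        if instr[0] == 'nop':
            # rule 1: delete a nop (measure drops by 1)
            return code[:i] + code[i + 1:]
        if instr[0] == 'mul3':
            # rule 2: strength-reduce r := r*3 into mov/shl/add
            # (measure drops 10 -> 3, even though the code grows)
            r = instr[1]
            return code[:i] + [('mov', 'tmp', r),
                               ('shl', r, 1),
                               ('add', r, 'tmp')] + code[i + 1:]
    return None

def peephole(code):
    while True:
        new = rewrite_once(code)
        if new is None:
            return code
        assert measure(new) < measure(code)  # the termination argument
        code = new

prog = [('mul3', 'r1'), ('nop',)]
print(peephole(prog))
# -> [('mov', 'tmp', 'r1'), ('shl', 'r1', 1), ('add', 'r1', 'tmp')]
```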
It is also fine to give a brief account of the register allocation
problem at the end, even though this is a question in its own right.