Weekly Note 8 - Week 13

22 March 2019

Lecture - Tuesday, March 26th.

16-19 in U82

We will cover chapter 10 on virtual memory. This chapter will be the primary material for exam topic 7: "Virtual Memory".

We will start with a short review of the previous lecture, this time covering chapter 9, along with the usual Kahoot.

Tutorial session

Friday, March 22nd. 10-12 in U166

Preparation:

Make a list of 10-15 keywords for a 10 min. presentation on the topic: "Virtual Memory"

Prepare at home to discuss:

  1. Assume a program has just referenced an address in virtual memory. Describe a scenario in which each of the following can occur. (If a scenario cannot occur, explain why.)

    1. TLB miss with no page fault

    2. TLB miss and page fault

    3. TLB hit and no page fault

    4. TLB hit and page fault

  2. A simplified view of thread states is Ready, Running, and Blocked, where a thread is either ready and waiting to be scheduled, is running on the processor, or is blocked (for example, waiting for I/O). This is illustrated in the figure below. Assuming a thread is in the Running state, answer the following questions, and explain your answer:
    [Figure: thread-state diagram with Ready, Running, and Blocked states]

    1. Will the thread change state if it incurs a page fault? If so, to what new state?

    2. Will the thread change state if it generates a TLB miss that is resolved in the page table? If so, to what new state?

    3. Will the thread change state if an address reference is resolved in the page table? If so, to what new state?

  3. Consider a system that uses pure demand paging. When a process first starts execution, how would you characterize the page fault rate?

    1. Once the working set for a process is loaded into memory, how would you characterize the page fault rate?

    2. Assume a process changes its locality and the size of the new working set is too large to be stored in available free memory. Identify some options system designers could choose from to handle this situation.

  4. What is the copy-on-write feature, and under what circumstances is it beneficial? What hardware support is required to implement this feature?

  5. Consider the following page reference string: 7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1. Assuming demand paging with three frames, how many page faults would occur using

    1. LRU replacement

    2. FIFO replacement

    3. Optimal replacement

  6. Assume that you are monitoring the rate at which the pointer in the clock algorithm (which indicates the candidate page for replacement) moves. What can you say about the system if you notice the following behavior:

    1. The pointer is moving quickly

    2. The pointer is moving slowly

  7. Discuss situations in which the LFU page-replacement algorithm generates fewer page faults than the LRU page-replacement algorithm. Also discuss under what circumstances the opposite holds.

  8. Discuss situations in which the MFU page-replacement algorithm generates fewer page faults than the LRU page-replacement algorithm. Also discuss under what circumstances the opposite holds.

  9. Consider a demand-paging system with the following time-measured utilizations:

    • CPU utilization: 20%

    • Paging disk: 97.7%

    • Other I/O devices: 5%
      For each of the following, say whether it will (or is likely to) improve CPU utilization. Explain your answers.

      1. Install a faster CPU

      2. Install a bigger paging disk

      3. Increase the degree of multiprogramming.

      4. Decrease the degree of multiprogramming.

      5. Install more main memory.

      6. Install a faster hard disk or multiple controllers with multiple hard disks.

      7. Add prepaging to the page fetch algorithms.

      8. Increase the page size.

  10. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
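For exercise 5, you may find it useful to check your hand-traced fault counts against a small simulator. The sketch below (Python, not part of the course material; the function and variable names are our own) counts page faults for the given reference string under all three policies.

```python
from collections import deque

REFS = [7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1]
FRAMES = 3

def fifo(refs, frames):
    """Count page faults under FIFO replacement."""
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(order.popleft())  # evict oldest resident page
            mem.add(p)
            order.append(p)
    return faults

def lru(refs, frames):
    """Count page faults under LRU replacement (list kept in recency order)."""
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)          # hit: refresh recency below
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)         # evict least recently used page
        mem.append(p)              # most recently used at the end
    return faults

def optimal(refs, frames):
    """Count page faults under Belady's optimal (farthest-future-use) replacement."""
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            # evict the page whose next use lies farthest ahead (or never comes)
            future = refs[i + 1:]
            victim = max(mem, key=lambda q: future.index(q) if q in future else len(future))
            mem.remove(victim)
        mem.append(p)
    return faults

for name, policy in [("LRU", lru), ("FIFO", fifo), ("Optimal", optimal)]:
    print(name, policy(REFS, FRAMES))
```

Try tracing a few steps by hand first; the simulator is only there to confirm your result, not to replace the trace.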

In class:

Use the first 30 minutes to work on those exercises:

  1. Suppose that your replacement policy (in a paged system) is to examine each page regularly and to discard that page if it has not been used since the last examination. What would you gain and what would you lose by using this policy rather than LRU or second-chance replacement?

  2. Consider a demand-paging system with a paging disk that has an average access and transfer time of 20 milliseconds. Addresses are translated through a page table in main memory, with an access time of 1 microsecond per memory access. Thus, each memory reference through the page table takes two accesses. To improve this time, we have added an associative memory that reduces access time to one memory reference, if the page-table entry is in the associative memory. Assume that 80 percent of the accesses are in the associative memory and that, of those remaining, 10 percent (or 2 percent of the total) cause page faults. What is the effective memory access time?

  3. Assume we have a demand-paged memory. The page table is held in registers. It takes 8 milliseconds to service a page fault if an empty page is available or the replaced page is not modified, and 20 milliseconds if the replaced page is modified. Memory access time is 100 nanoseconds. Assume that the page to be replaced is modified 70 percent of the time. What is the maximum acceptable page-fault rate for an effective access time of no more than 200 nanoseconds?

  4. The slab allocation algorithm uses a separate cache for each different object type. Assuming there is one cache per object type, explain why this doesn’t scale well with multiple CPUs. What could be done to address this scalability issue?

  5. Consider a system that allocates pages of different sizes to its processes. What are the advantages of such a paging scheme? What modifications to the virtual memory system provide this functionality?
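Exercises 2 and 3 reduce to effective-access-time arithmetic. The sketch below (Python, for checking your own numbers) makes one assumption explicit: in exercise 2, the page-fault path is charged the 20 ms disk service time plus the two page-table memory accesses; other accounting conventions give slightly different totals.

```python
# Exercise 2: effective access time with an associative memory (TLB).
mem_access = 1.0            # microseconds per main-memory access
fault_service = 20_000.0    # 20 ms paging-disk access + transfer, in microseconds

# 80% resolved in associative memory (1 access), 18% via the page table
# (2 accesses), 2% page-fault (assumed: 20 ms service + the 2 accesses).
eat = (0.80 * mem_access
       + 0.18 * 2 * mem_access
       + 0.02 * (fault_service + 2 * mem_access))
print(f"effective access time: {eat:.2f} microseconds")   # ~401.2 microseconds

# Exercise 3: maximum page-fault rate p for an EAT of at most 200 ns.
access_ns = 100.0                      # memory access time in ns
service_ns = 0.3 * 8e6 + 0.7 * 20e6    # 8 ms clean / 20 ms dirty, weighted, in ns
# Solve (1 - p) * access + p * service <= 200 for p:
p_max = (200.0 - access_ns) / (service_ns - access_ns)
print(f"max page-fault rate: {p_max:.2e}")                # roughly 6.1e-06
```

Note how small the tolerable fault rate in exercise 3 is: because a fault costs millions of times more than a memory access, even one fault per few hundred thousand references blows the 200 ns budget.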

Then spend the remaining time discussing your list of keywords and the exercises prepared at home and in class.

Material (Slides, etc.)

  • Chapter 10 in Operating System Concepts, Enhanced eText, 10th Edition