
Computer Organization Chapter 5

Memory Hierarchy


This chapter mainly discusses how to build a memory system that appears to be both unlimited in capacity and fast.

Questions to ask

  1. What do I already know?
    • I know that memory is organized hierarchically, and that the hierarchy reduces the slowdown caused by the speed gap between the disk and the CPU.
  2. What don’t I know?
    • Why a hierarchy improves speed, and what the concrete levels of the memory hierarchy are.
  3. What can I learn?
    • How memory is organized into levels; why the layering improves speed; how memory performance is actually calculated.
  4. What can’t I learn?
    • How to design a memory system systematically; the concrete hardware implementation of memories such as DRAM.

Abstract

  • Temporal locality, spatial locality, block, line, hit rate, miss rate
  • SRAM, DRAM, address interleaving, EEPROM, disk tracks and sectors
  • Direct mapped, valid bit, cache-related calculations, handling cache misses, write-through, write buffer, write-back
  • Cache performance evaluation and calculation, set associative
  • Virtual memory, page table, TLB (translation-lookaside buffer)

Terms

  1. Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon
  2. Spatial locality (locality in space): if an item is referenced, items whose addresses are close by will tend to be referenced soon
  3. Block, line: the minimum unit of information that can be either present or not present in the two-level hierarchy is called a block or a line
  4. The hit rate, or hit ratio, is the fraction of memory accesses found in the upper level (a worked performance example follows this list)
  5. Direct mapped: A cache structure in which each memory location is mapped to exactly one location in the cache (a lookup sketch follows this list)
  6. write-through: A scheme in which writes always update both the cache and the next lower level of the memory hierarchy, ensuring that data is always consistent between the two.
  7. write buffer: A queue that holds data while the data is waiting to be written to memory.
  8. write-back: A scheme that handles writes by updating values only to the block in the cache, then writing the modified block to the lower level of the hierarchy when the block is replaced (a write-policy sketch follows this list).
  9. Handling Cache Misses:
    1. Send the original PC value (current PC – 4) to the memory.
    2. Instruct main memory to perform a read and wait for the memory to complete its access.
    3. Write the cache entry, putting the data from memory in the data portion of the entry, writing the upper bits of the address (from the ALU) into the tag field, and turning the valid bit on.
    4. Restart the instruction execution at the first step, which will refetch the instruction, this time finding it in the cache.
  10. fully associative cache: A cache structure in which a block can be placed in any location in the cache.
  11. set-associative cache: A cache that has a fixed number of locations (at least two) where each block can be placed.
  12. virtual memory: A technique that uses main memory as a “cache” for secondary storage.
  13. physical address An address in main memory.
  14. protection A set of mechanisms for ensuring that multiple processes sharing the processor, memory, or I/O devices cannot interfere, intentionally or unintentionally, with one another by reading or writing each other’s data. These mechanisms also isolate the operating system from a user process.
  15. page fault An event that occurs when an accessed page is not present in main memory.
  16. virtual address An address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed.
  17. address translation Also called address mapping. The process by which a virtual address is mapped to an address used to access memory.
  18. segmentation A variable-size address mapping scheme in which an address consists of two parts: a segment number, which is mapped to a physical address, and a segment offset.
  19. page table The table containing the virtual to physical address translations in a virtual memory system. The table, which is stored in memory, is typically indexed by the virtual page number; each entry in the table contains the physical page number for that virtual page if the page is currently in memory (a translation sketch follows this list).
  20. swap space The space on the disk reserved for the full virtual memory space of a process.
  21. reference bit Also called use bit. A field that is set whenever a page is accessed and that is used to implement LRU or other replacement schemes.
  22. translation-lookaside buffer (TLB) A cache that keeps track of recently used address mappings to try to avoid an access to the page table.
  23. virtually addressed cache A cache that is accessed with a virtual address rather than a physical address.
  24. aliasing A situation in which the same object is accessed by two addresses; can occur in virtual memory when there are two virtual addresses for the same physical page.
  25. physically addressed cache A cache that is addressed by a physical address.
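
The hit rate and miss rate in term 4 feed directly into the usual way of estimating the speed of a memory hierarchy, the average memory access time (AMAT). A worked example with illustrative numbers (the 1-cycle hit time, 5% miss rate, and 100-cycle miss penalty are assumptions, not values from the chapter):

$$
\text{AMAT} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty} = 1 + 0.05 \times 100 = 6 \text{ clock cycles}
$$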
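
The direct-mapped lookup of term 5, together with the valid bit and the miss handling in term 9, can be summarized in a short sketch. This is only an illustration assuming a 4 KiB cache with 64 sets of 64-byte blocks; all names and sizes here are made up for the example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed geometry: 64 sets x 64-byte blocks = 4 KiB direct-mapped cache. */
#define BLOCK_BITS 6                        /* 64-byte block -> 6 offset bits */
#define INDEX_BITS 6                        /* 64 sets       -> 6 index bits  */
#define NUM_SETS   (1u << INDEX_BITS)

typedef struct {
    bool     valid;                         /* valid bit */
    uint32_t tag;                           /* upper bits of the address */
    uint8_t  data[1u << BLOCK_BITS];        /* data portion of the entry */
} CacheLine;

static CacheLine cache[NUM_SETS];

/* Returns true on a hit: the selected line is valid and its tag matches. */
bool cache_lookup(uint32_t addr)
{
    uint32_t index = (addr >> BLOCK_BITS) & (NUM_SETS - 1);   /* picks exactly one line */
    uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);

    const CacheLine *line = &cache[index];
    return line->valid && line->tag == tag;
}
```

On a miss, the handler would fetch the block from memory, write it into `cache[index].data`, store `tag` in the tag field, and set the valid bit, mirroring steps 1–4 of term 9.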
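
Terms 6–8 contrast two write policies. A minimal sketch of the difference, using a hypothetical `memory_write` stand-in for the next lower level of the hierarchy:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     valid;
    bool     dirty;     /* only the write-back policy needs this */
    uint32_t tag;
    uint32_t data;
} Line;

/* Stand-in for writing to the next lower level (possibly through a write buffer). */
static void memory_write(uint32_t addr, uint32_t value) { (void)addr; (void)value; }

/* Write-through: every write updates both the cache and the lower level,
 * so the two are always consistent. */
static void write_through(Line *line, uint32_t addr, uint32_t value)
{
    line->data = value;
    memory_write(addr, value);
}

/* Write-back: a write updates only the cache line and marks it dirty... */
static void write_back(Line *line, uint32_t value)
{
    line->data  = value;
    line->dirty = true;
}

/* ...and the modified block reaches the lower level only when it is replaced. */
static void evict(Line *line, uint32_t addr)
{
    if (line->valid && line->dirty)
        memory_write(addr, line->data);
    line->valid = false;
}
```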
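
Terms 12–22 describe address translation. The sketch below walks a virtual address through a single-level page table and reports a page fault when the page is not resident; the page size, table size, and all names are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS 12                           /* assumed 4 KiB pages -> 12-bit page offset */
#define NUM_PAGES 1024                         /* assumed size of the virtual address space */

typedef struct {
    bool     valid;                            /* is the page currently in main memory? */
    bool     reference;                        /* reference (use) bit for LRU-style replacement */
    uint32_t ppn;                              /* physical page number */
} PTE;

static PTE page_table[NUM_PAGES];              /* indexed by virtual page number */

/* Translate a virtual address; returns false on a page fault
 * (the OS would then bring the page in from swap space). */
bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_BITS;              /* virtual page number */
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1); /* page offset passes through unchanged */

    /* A real processor probes the TLB (a small cache of recent vpn -> ppn
     * mappings) first and only consults the page table on a TLB miss. */
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return false;                                  /* page fault */

    page_table[vpn].reference = true;                  /* record the access */
    *paddr = (page_table[vpn].ppn << PAGE_BITS) | offset;
    return true;
}
```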