Most of our ebook files are in PDF format, so you can easily read them using software such as Foxit Reader or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, and .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.
Please read the tutorial at this link: https://ebookbell.com/faq
We offer FREE conversion to the popular format you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.
If you receive an unusual file format or a broken link, please do not open a dispute. Email us first, and we will assist you within 6 hours.
EbookBell Team
Rating: 4.4 (102 reviews)
ISBN 13: 9781032727196
Author: Douglas Comer
This easy-to-read textbook provides an introduction to computer architecture, focusing on the essential aspects of hardware that programmers need to know. Written from a programmer’s point of view, Essentials of Computer Architecture, Third Edition, covers the three key aspects of architecture: processors, physical and virtual memories, and input-output (I/O) systems. This third edition is updated in view of advances in the field.

Most students only have experience with high-level programming languages, and almost no experience tinkering with electronics and hardware. As such, this text is revised to follow a top-down approach, moving from discussions of how a compiler transforms a source program into binary code and data, to explanations of how a computer represents data and code in binary. Additional chapters cover parallelism and data pipelining, assessing the performance of computer systems, and the important topic of power and energy consumption. Exclusive to this third edition, a new chapter explains multicore processors and how coherence hardware provides a consistent view of the values in memory even though each core has its own cache.

Suitable for a one-semester undergraduate course, this clear, concise, and easy-to-read textbook offers an ideal introduction to computer architecture for students studying computer programming.
Chapter 1 Introduction And Overview
1.1 The Importance Of Architecture
1.2 Learning The Essentials
1.3 The Cycle Of New Hardware And New Software
1.4 What We Will Cover
1.5 What We Will Omit
1.6 What We Will Emphasize
1.7 Architecture, Design, And Implementation
1.8 Hardware Designs And Unexpected Constraints
1.9 Summary
Chapter 2 Program Interpretation And Transformation
2.1 Introduction
2.2 Specification Of Computation
2.3 Automated Program Interpretation
2.4 What Level Of Abstraction Should Be Used For Programs?
2.5 Transforming A Source Program To Machine Language
2.6 Using Memory Addresses Instead Of Variable Names
2.7 Global And Local Variables
2.8 Segments And Their Location In Memory
2.9 Relocation And Memory Addresses
2.10 The ELF Standard For Storing Object Programs
2.11 Translation Of An Example C Program
Chapter 3 Data And Program Representation
3.1 Introduction
3.2 Definitions Of Bit And Byte
3.3 Possible Values In A Byte
3.4 Binary Weighted Positional Representation
3.5 Bit Ordering
3.6 Hexadecimal Notation Used By Humans
3.7 Notation For Hexadecimal And Binary Constants
3.8 Character Sets
3.9 Unicode
3.10 Unsigned Integers And Endianness
3.11 Signed Binary Integers
3.12 Quirks Of Signed Representations
3.13 An Example Of Two's Complement Numbers
3.14 Sign Extension
3.15 Casting Integers
3.16 Floating Point
3.17 Range Of IEEE Floating Point Values
3.18 Biased Exponent Values
3.19 An Example Floating Point Number
3.20 Special Values And NaN
3.21 Binary Coded Decimal Representation
3.22 Signed, Fractional, And Packed BCD Representations
3.23 Data Aggregates
3.24 Instructions And Their Representation
3.25 Summary
Chapter 4 A High-Level Overview Of Processors
4.1 Introduction
4.2 The Two Basic Architectural Approaches
4.3 The Harvard And Von Neumann Architectures
4.4 Definition Of A Processor
4.5 The Range Of Processors
4.6 Hierarchical Structure And Computational Engines
4.7 Structure Of A Conventional Processor
4.8 Processor Categories And Roles
4.9 Processor Technologies
4.10 Stored Programs
4.11 The Fetch-Execute Cycle
4.12 Instructions In Memory
4.13 Variable-Length And Fixed-Length Instructions
4.14 Clock Rate And Instruction Rate
4.15 Control: Getting Started And Stopping
4.16 Starting The Fetch-Execute Cycle
4.17 Summary
Chapter 5 Instruction Sets And Operands
5.1 Introduction
5.2 Mathematics, Convenience, And Cost
5.3 Instruction Sets
5.4 Instruction Set Architecture
5.5 Opcodes, Operands, And Results
5.6 Typical Instruction Format
5.7 General-Purpose Registers
5.8 Floating Point Registers And Register Identification
5.9 Programming With Registers
5.10 Register Banks
5.11 Terminology: Complex And Reduced Instruction Sets
5.12 RISC Design And The Execution Pipeline
5.13 Pipelines And Instruction Stalls
5.14 Other Causes Of Pipeline Stalls
5.15 Consequences For Programmers
5.16 Programming, Stalls, And No-Op Instructions
5.17 Forwarding
5.18 Types Of Operations
5.19 Program Counter, Fetch-Execute, And Branching
5.20 Condition Codes And Conditional Branching
5.21 Subroutine And Function Calls
5.22 Argument Passing And Return Values
5.23 An Example Instruction Set
5.24 The Principle Of Minimalism
5.25 The Principles Of Elegance And Orthogonality
5.26 Summary
Chapter 6 Operand Addressing And Operand Types
6.1 Introduction
6.2 Zero-, One-, Two-, And Three-Address Designs
6.3 Zero Operands Per Instruction
6.4 One Operand Per Instruction
6.5 Two Operands Per Instruction
6.6 Three Operands Per Instruction
6.7 Operand Sources And Immediate Values
6.8 The Von Neumann Bottleneck
6.9 Implicit And Explicit Operand Encoding
6.10 Operands That Combine Multiple Values
6.11 Tradeoffs In The Choice Of Operands
6.12 Direct And Indirect Operands In Memory
6.13 Illustration Of Operand Addressing Modes
6.14 Summary
Chapter 7 Assembly Languages And Programming Paradigm
7.1 Introduction
7.2 The Reason To Learn Assembly Language
7.3 Characteristics Of High-Level And Low-Level Languages
7.4 Assembly Languages
7.5 Assembly Language Syntax And Opcodes
7.6 Operand Order
7.7 Register Names
7.8 Operand Syntax
7.9 Assembly Code For If-Then
7.10 Assembly Code For If-Then-Else
7.11 Assembly Code For Definite Iteration (For Loop)
7.12 Assembly Code For Indefinite Iteration (While Loop)
7.13 Assembly Code For A Function Call
7.14 Variable Declarations In Assembly Code
7.15 Example Assembly Language Code
7.16 How An Assembler Works: Two-Pass Translation
7.17 Assembly Language Macros
7.18 Summary
Chapter 8 Main Memory And Memory Addressing
8.1 Introduction
8.2 Characteristics Of Computer Memory
8.3 Static And Dynamic RAM Technologies
8.4 Memory Performance And Higher Data Rate Technologies
8.5 Memory Addresses For An Array Of Bytes
8.6 The Fetch-Store Paradigm, Pointers, And Dereferencing
8.7 Memory Dumps
8.8 Aligned Memory Access And Aggregates In Memory
8.9 Ordering Items To Optimize Space With Aligned Access
8.10 Memory Organization And Aligned Memory Access
8.11 Memory Hardware, Words, And Word Operations
8.12 Byte Addressing, Word Addressing, And Alignment
8.13 Calculating A Word Address
8.14 Calculating Word Addresses Using Powers Of Two
8.15 Multiple Memories With Separate Controllers
8.16 Memory Banks
8.17 Interleaving
8.18 Memory Sizes, Powers Of Two, And Prefixes
8.19 Content Addressable Memory
8.20 Ternary CAM
8.21 Summary
Chapter 9 Virtual Memory Technologies And Virtual Addressing
9.1 Introduction
9.2 Definition Of Virtual Memory
9.3 Memory Management Unit And A Virtual Address Space
9.4 Address Translation Using Powers Of Two
9.5 Virtual Address Spaces For Concurrent Applications
9.6 Mechanisms Used To Create Virtual Address Spaces
9.7 Base-Bound Registers
9.8 Virtual Memory Isolation And Protection
9.9 Loading Program Pieces And Demand Paging
9.10 Hardware And Software For Demand Paging
9.11 Page Replacement
9.12 Paging Terminology And Data Structures
9.13 Address Translation With A Demand Paging System
9.14 Using Powers Of Two For Address Translation
9.15 Presence, Use, And Modified Bits
9.16 Page Table Storage
9.17 Efficient Paging With A Translation Lookaside Buffer
9.18 Consequences For Programmers
9.19 Single-Level Page Tables And Page Table Size
9.20 The Advantage Of Multi-Level Page Tables
9.21 Multi-Level Page Tables For 64-Bit Architectures
9.22 Sparseness And Address Space Layout Randomization
9.23 Page Table Walks And The Importance Of A TLB
9.24 Summary
Chapter 10 Caches And Caching
10.1 Introduction
10.2 How Data Propagates Through A Storage Hierarchy
10.3 Definition Of Caching
10.4 Characteristics Of A Cache
10.5 Cache Terminology
10.6 Best-Case And Worst-Case Cache Performance
10.7 Cache Performance On A Typical Sequence
10.8 Cache Replacement Policy
10.9 LRU Replacement
10.10 Multilevel Cache Hierarchy
10.11 Preloading Caches
10.12 Caches Used With Memory
10.13 Main Memory Cache
10.14 Write Through And Write Back
10.15 Cache Coherence
10.16 L1, L2, And L3 Caches
10.17 Sizes Of L1, L2, And L3 Caches
10.18 Instruction And Data Caches
10.19 Modified Harvard Architecture
10.20 Implementation Of Memory Caching
10.21 Direct Mapped Memory Cache
10.22 Using Powers Of Two For Efficiency
10.23 Hardware Implementation Of A Direct Mapped Cache
10.24 Set Associative Memory Cache
10.25 Consequences For Programmers
10.26 The Relationship Between Virtual Memory And Caching
10.27 Virtual Memory Caching And Cache Flush
10.28 A Note About Cache Performance
10.29 Summary
Chapter 11 Storage: File Systems, Blocks, And SSDs
11.1 Introduction
11.2 Persistent Storage Mechanisms And Files
11.3 The History Of Persistent Storage
11.4 Solid-State Drives (SSDs)
11.5 The Block-Oriented Interface
11.6 DMA Hardware Used For Block Transfer
11.7 Storing Files In 512-Byte Blocks
11.8 Blocks Of Cells, Logical Blocks, And Erase-Before-Write
11.9 Flash Lifetime
11.10 The Internal Structure Of An SSD
11.11 Logical Block Storage And Update
11.12 Repeated Wear, Location Mapping, And Wear Leveling
11.13 Overprovisioning And Bad Block Management
11.14 Garbage Collection
11.15 Assessment Of SSD Technology
11.16 Summary
Chapter 12 A Programmerʼs View Of Devices, I/O, And Buffering
12.1 Introduction
12.2 Devices And Device Hardware
12.3 Device-Independent I/O And Encapsulation
12.4 Conceptual Parts Of A Device Driver
12.5 Two Main Categories Of Devices
12.6 Example Flow Through A Device Driver
12.7 An Input Queue In A Device Driver
12.8 An Output Queue In A Device Driver
12.9 Isolating I/O Devices From Applications
12.10 The Motivation For A Standard I/O Library
12.11 Reducing System Call Overhead
12.12 Standard I/O Functions
12.13 Buffered Input
12.14 Buffered Output
12.15 Flushing A Buffer
12.16 Using Buffered I/O With Devices
12.17 The Relationship Between Buffering And Caching
12.18 Summary
Chapter 13 Buses And Bus Architectures
13.1 Introduction
13.2 Definition Of A Bus
13.3 Processors, I/O Devices, And Buses
13.4 Physical Bus Connections And Sockets
13.5 Control, Address, And Data Lines In A Bus
13.6 The Fetch-Store Paradigm
13.7 Fetch-Store Operations On A Parallel Bus
13.8 Data Transfer Rate, Bus Width, And Serial Buses
13.9 Large Data Transfers And Direct Memory Access
13.10 Bus Transfer Size And The Size Of Data Items
13.11 Bus Address Space
13.12 Invalid Addresses And Bus Errors
13.13 Memory Addresses And Sockets
13.14 The Question Of Multiple Buses
13.15 Using Fetch-Store With Devices
13.16 Operation Of An Interface
13.17 Asymmetric Specifications And Bus Errors
13.18 An Example Bus Address Space And Address Map
13.19 Holes In A Bus Address Space And Utilization
13.20 The Program Interface To A Bus
13.21 Bridging Between Two Buses
13.22 An Example Bridge Mapping
13.23 Consequences For Device Driver Programmers
13.24 Switching Fabrics As An Alternative To Buses
13.25 Summary
Chapter 14 Programming Devices And Interrupt-Driven I/O
14.1 Introduction
14.2 The Two I/O Paradigms
14.3 Programmed I/O
14.4 Synchronization
14.5 Synchronization Using Polling
14.6 Code For Polling
14.7 Control And Status Registers
14.8 Using A Struct To Define CSRs
14.9 Processor Use And Polling
14.10 Interrupt-Driven I/O
14.11 An Interrupt Mechanism And Fetch-Execute
14.12 Handling An Interrupt
14.13 Interrupt Vectors
14.14 Interrupt Initialization And Disabled Interrupts
14.15 Interrupting An Interrupt Handler
14.16 Configuration Of Interrupts
14.17 Dynamic Bus Connections And Pluggable Devices
14.18 Interrupts, Performance, And Smart Devices
14.19 Smart Devices, DMA, And Offloading
14.20 Extending DMA With Buffer Chaining
14.21 Scatter Read And Gather Write Operations
14.22 Operation Chaining
14.23 Summary
Chapter 15 Data Paths And Instruction Execution
15.1 Introduction
15.2 Data Paths
15.3 An Example Instruction Set
15.4 Instructions In Memory
15.5 Moving To The Next Instruction
15.6 Fetching An Instruction
15.7 Decoding An Instruction
15.8 Connections To A Register Unit
15.9 Control And Coordination
15.10 Arithmetic Operations And Multiplexing
15.11 Operations Involving Data In Memory
15.12 Example Execution Sequences
15.13 Summary
Chapter 16 CPUs: Microcode, Protection, And Processor Modes
16.1 Introduction
16.2 A Central Processor
16.3 CPU Complexity
16.4 Modes Of Execution
16.5 Backward Compatibility
16.6 Changing Modes
16.7 Privilege And Protection
16.8 Multiple Levels Of Protection
16.9 Microcoded Instructions
16.10 Microcode Variations
16.11 The Advantage Of Microcode
16.12 FPGAs And Changes To An Instruction Set
16.13 Vertical Microcode
16.14 Horizontal Microcode
16.15 Example Horizontal Microcode
16.16 A Horizontal Microcode Example
16.17 Operations That Require Multiple Cycles
16.18 Horizontal Microcode And Parallel Execution
16.19 Look-Ahead And High-Performance Execution
16.20 Parallelism And Execution Order
16.21 Out-Of-Order Instruction Execution
16.22 Conditional Branches And Branch Prediction
16.23 Consequences For Programmers
16.24 Summary
Chapter 17 Parallelism
17.1 Introduction
17.2 Parallel And Pipelined Architectures
17.3 Characterizations Of Parallelism
17.4 Microscopic Vs. Macroscopic
17.5 Examples Of Microscopic Parallelism
17.6 Examples Of Macroscopic Parallelism
17.7 Symmetric Vs. Asymmetric
17.8 Fine-Grain Vs. Coarse-Grain Parallelism
17.9 Explicit Vs. Implicit Parallelism
17.10 Types Of Parallel Architectures (Flynn Classification)
17.11 Single Instruction Single Data (SISD)
17.12 Single Instruction Multiple Data (SIMD)
17.13 Multiple Instructions Multiple Data (MIMD)
17.14 Communication, Coordination, And Contention
17.15 Performance Of Multiprocessors
17.16 Consequences For Programmers
17.17 Redundant Parallel Architectures
17.18 Distributed And Cluster Computers
17.19 A Modern Supercomputer
17.20 Summary
Chapter 18 Data Pipelining
18.1 Introduction
18.2 The Concept Of Pipelining
18.3 Software Pipelining
18.4 Hardware Pipelining
18.5 How Hardware Pipelining Increases Performance
18.6 When Pipelining Can Be Used
18.7 The Conceptual Division Of Processing
18.8 Pipeline Architectures
18.9 Pipeline Setup, Stall, And Flush Times
18.10 Definition Of Superpipeline Architecture
18.11 Summary
Chapter 19 Assessing Performance
19.1 Introduction
19.2 Measuring Computational Power And Performance
19.3 Measures Of Computational Power
19.4 Application-Specific Instruction Counts
19.5 Instruction Mix
19.6 Standardized Benchmarks
19.7 I/O And Memory Bottlenecks
19.8 Moving The Boundary Between Hardware And Software
19.9 Choosing Items To Optimize, Amdahl's Law
19.10 Amdahl's Law And Parallel Systems
19.11 Summary
Chapter 20 Multicore Processors
20.1 Introduction
20.2 The Move To Multicore Processor Chips
20.3 The Multicore Concept: Parallelism And Shared Memory
20.4 Multicore Processor Vs. Multiprocessor
20.5 Asymmetry
20.6 Direct Communication Among Cores
20.7 Tight Coupling And Extremely Low Latency
20.8 Shared Access To All I/O Devices
20.9 The Ability To Associate Interrupts With Specific Cores
20.10 The Ability To Start And Stop Cores
20.11 Shared Memory And Multicore Caching
20.12 Cache Inconsistency
20.13 Preventing Inconsistency: Cache Coherence
20.14 Programming Multicore: Threads And Scheduling
20.15 Thread Scheduling And Affinity
20.16 Simultaneous Multi-Threading (SMT)
20.17 Inter-Core Communication Via Messages And Software Interrupts
20.18 Mutual Exclusion Among Cores
20.19 Locks And Busy Waiting
20.20 Using A Memory Location As A Lock (Test-And-Set)
20.21 An Example Of Test-And-Set
20.22 Atomic Update And Cache Coherence
20.23 Summary
Chapter 21 Power And Energy
21.1 Introduction
21.2 Definition Of Power
21.3 Definition Of Energy
21.4 Power Consumption By A Digital Circuit
21.5 Switching Power Consumed By A CMOS Digital Circuit
21.6 Cooling, Power Density, And The Power Wall
21.7 Energy Use
21.8 Power Management
21.9 Software Control Of Energy Use
21.10 Choosing When To Sleep And When To Awaken
21.11 Sleep Modes And Network Devices
21.12 Summary
Chapter 22 Building Blocks: Transistors, Gates, And Clocks
22.1 Introduction
22.2 The History Of Digital Technologies
22.3 Electrical Terminology: Voltage And Current
22.4 The Transistor
22.5 Logic Gates
22.6 Implementation Of A Nand Logic Gate Using Transistors
22.7 Symbols Used For Logic Gates
22.8 Example Interconnection Of Gates
22.9 A Digital Circuit For Binary Addition
22.10 Multiple Gates Per Integrated Circuit
22.11 The Need For More Than Combinatorial Circuits
22.12 Circuits That Maintain State
22.13 Feedback And Propagation Delay
22.14 Using Latches To Create A Memory
22.15 Flip-Flops And Transition Diagrams
22.16 Binary Counters
22.17 Clocks, Feedback, And Sequences
22.18 The Importance Of Feedback
22.19 Starting A Sequence
22.20 Iteration In Software Vs. Replication In Hardware
22.21 Gate And Chip Minimization
22.22 Using Spare Gates
22.23 Power Distribution And Heat Dissipation
22.24 Timing And Clock Zones
22.25 Clockless Logic
22.26 Circuit Size And Moore's Law
22.27 Circuit Boards And Layers
22.28 Levels Of Abstraction
22.29 Summary
Chapter 23 Hardware Modularity
23.1 Introduction
23.2 Motivations For Modularity
23.3 Software Modularity
23.4 Parameterization
23.5 Forms Of Hardware Modularity
23.6 Modular Chip Construction
23.7 Replication And Parallelism
23.8 Basic Block Replication
23.9 An Example Modular Design: A Rebooter
23.10 High-Level Rebooter Design
23.11 A Building Block To Accommodate A Range Of Sizes
23.12 Parallel Interconnection
23.13 Module Selection
23.14 Summary
Tags: Douglas Comer, Computer, Architecture