Computer Memory and Addressing Concepts

A computer's memory system is how it stores and retrieves data for processing.

  1. Computer Memory System Concept: Memory is organized in a hierarchy based on speed, cost, and size:
  • Registers: Smallest, fastest, inside CPU. Hold current instruction + data being processed. 
  • Cache L1/L2/L3: SRAM memory on or near CPU. Caches frequently used data to avoid slow RAM access. 
  • Main Memory / RAM: DRAM. Volatile. Holds OS, apps, data currently in use. Accessed by CPU via memory bus.
  • Secondary Storage: HDD, SSD, NVMe. Non-volatile, large, slow. Stores OS, files, programs long-term.
  • Tertiary Storage: Tape, cloud. For backup/archival.

The table below summarizes the hierarchy:

Think of memory like a library with different storage rooms:

| Level | Name | Speed | Size | Volatility | Purpose |
|-------|------|-------|------|------------|---------|
| 0 | CPU Registers | ~1 cycle | ~KB | Volatile | Hold data for the current instruction |
| 1 | L1/L2/L3 Cache | ~3-30 cycles | ~MB | Volatile | Hold hot data to avoid RAM trips |
| 2 | Main Memory (RAM) | ~100 cycles | ~GB | Volatile | Running programs, OS, stack, heap |
| 3 | SSD / HDD | ~100,000 cycles | ~TB | Non-volatile | Files, apps, OS on disk |
| 4 | Cloud / Tape | seconds+ | ~PB | Non-volatile | Backup, archive |

How data flows: the CPU asks for data → checks L1 cache → L2 → L3 → RAM → SSD. Each level is bigger and slower than the one before. Caching works because of "locality of reference": programs tend to reuse nearby data and code.

Memory Management by the OS:

  • Allocation: malloc / new gets you a chunk of virtual memory.
  • Protection: Each process gets its own virtual space. Process A can’t read Process B’s RAM.
  • Swapping: If RAM is full, OS moves inactive pages to disk = "page file" or "swap".

Key idea: "Memory wall" = CPU is much faster than RAM. So we use caching + hierarchy to hide latency.

  • Addressing Concepts: Addressing is how CPU refers to a specific memory location.
  1. Physical Address: Actual location in RAM chips. The hardware memory controller uses this.
  2. Logical / Virtual Address: Address generated by CPU/program. OS + MMU translates it to physical address. This enables memory protection + virtual memory.
  3. Address Space: Range of addresses a program can use. 32-bit = 2^32 = 4 GB. 64-bit = 2^64 ≈ 16 exabytes in principle (current x86-64 hardware translates only 48 of those bits).

Addressing modes - how instructions specify addresses:
- Immediate: MOV AX, 5  → value is part of instruction
- Direct: MOV AX, [1000] → use address 1000 directly  
- Indirect: MOV AX, [BX] → address is in register BX
- Indexed: MOV AX, [BX + SI] → base + offset
- Relative: JMP +10 → address relative to current PC

Addressing answers: "Where exactly is this byte?"

Physical vs Virtual Addressing:

  • Physical Address: The real wire-level address on the RAM chip. Only the OS and hardware see this.
  • Virtual Address: What your program sees. int x = 10; might be at virtual address 0x7ffd1234. 
  • MMU + Page Table: Memory Management Unit converts virtual → physical. If page isn’t in RAM, CPU triggers "page fault" and OS loads it from disk.

Why virtual? 3 big benefits:

  • Isolation: Two programs can both think they own address 0x0000 without conflict.
  • Flexibility: Program doesn’t need to be loaded in contiguous RAM.
  • Security: OS can mark pages as "read-only" or "no-execute" to stop buffer overflows.

Memory Mapping Techniques:

  • Segmentation: Divide memory into segments like code, data, stack. Each has base + limit.
  • Paging: Divide memory into fixed-size pages, usually 4KB. Page table maps virtual page → physical frame. Used in Windows, Linux, Android.
  • TLB: Translation Lookaside Buffer = cache for page table entries to speed up virtual → physical translation.

So in short: Memory hierarchy gives speed vs capacity tradeoff, and addressing gives us a way for programs to safely and efficiently access that memory without knowing physical layout.

Addressing Modes in CPU instructions: the CPU has to encode "where to get data" in the instruction itself. Common modes:

  • Register: ADD R1, R2 → use data in CPU registers. Fastest.
  • Immediate: ADD R1, #5 → constant 5 is in instruction. 
  • Direct: LOAD R1, [2000] → go to memory address 2000.
  • Indirect: LOAD R1, [R2] → R2 holds the address. Used for pointers.
  • Base + Offset: LOAD R1, [R2 + 8] → array access. R2 is array start, 8 is index*size.

Paging vs Segmentation
- Segmentation: Split memory into logical chunks: code segment, data segment, stack. Each has base address + length. Problem: "external fragmentation" - lots of small gaps.
- Paging: Split memory into fixed 4KB blocks called "pages". Virtual page 0 can map to physical frame 15, page 1 to frame 100, etc. Solves fragmentation, but needs page table.

Multi-level Page Tables

For 64-bit systems, 2^64 addresses is too big for 1 table. So OS uses 4-level page tables: 
Virtual address = 9 bits Level4 + 9 bits Level3 + 9 bits Level2 + 9 bits Level1 + 12 bits offset. 
This is why a 64-bit process can "have" 128 TB of address space without the OS paying for it up front: page-table levels are created lazily, only for the regions the program actually touches.

Real-world example: an Android application

Your Android app runs in its own virtual address space. When you read a file, the kernel maps the file's on-disk pages into RAM pages, then maps those RAM pages into your app's virtual address space. If you access unmapped memory, the CPU raises SIGSEGV and the app crashes.

So: Memory hierarchy = performance. Addressing = abstraction + protection.