How many memory addresses can we get with a 32-bit processor and 1 GB RAM?

How many memory addresses can we get with a 32-bit processor and 1 GB of RAM, and how many with a 64-bit processor?

I think that it's something like this:

1 GB of RAM divided by 32 bits, or divided by 4, to get the number of memory addresses?

But I'm not sure. That's why I'm asking.

I read on Wikipedia that one memory address is 32 bits wide, or 4 octets (1 octet = 8 bits), compared to a 64-bit processor, where one memory address (or one integer) is 64 bits wide, or 8 octets. But I don't know if I understood that correctly either.


6 Answers

Short answer: The number of available addresses is the smaller of these two:

  • Memory size in bytes
  • The number of distinct values of the CPU's machine word (2^N for an N-bit word)

Long answer and explanation of the above:

Memory consists of bytes (B). Each byte consists of 8 bits (b).

1 B = 8 b

1 GB of RAM is actually 1 GiB (gibibyte, not gigabyte). The difference is:

1 GB = 10^9 B = 1 000 000 000 B
1 GiB = 2^30 B = 1 073 741 824 B
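
The difference is easy to check with a quick Python snippet (just evaluating the two definitions):

```python
# Decimal gigabyte vs. binary gibibyte
GB = 10**9    # 1 GB  = 1 000 000 000 bytes
GiB = 2**30   # 1 GiB = 1 073 741 824 bytes

print(GB)        # 1000000000
print(GiB)       # 1073741824
print(GiB - GB)  # 73741824 bytes difference (about 7.4%)
```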

Every byte of memory has its own address, no matter how big the CPU machine word is. E.g. the Intel 8086 CPU was 16-bit and addressed memory by bytes, as do modern 32-bit and 64-bit CPUs. That's the cause of the first limit: you can't have more addresses than memory bytes.

A memory address is just the number of bytes the CPU has to skip from the beginning of the memory to get to the one it's looking for.

  • To access the first byte it has to skip 0 bytes, so first byte's address is 0.
  • To access the second byte it has to skip 1 byte, so its address is 1.
  • (and so forth...)
  • To access the last byte, CPU skips 1073741823 bytes, so its address is 1073741823.

Now you have to know what 32-bit actually means. As I mentioned before, it's the size of a machine word.

A machine word is the amount of memory the CPU uses to hold numbers (in RAM, cache or internal registers). A 32-bit CPU uses 32 bits (4 bytes) to hold numbers. Memory addresses are numbers too, so on a 32-bit CPU a memory address consists of 32 bits.

Now think about this: if you have one bit, you can store two values in it: 0 or 1. Add one more bit and you have four values: 0, 1, 2, 3. On three bits, you can store eight values: 0, 1, 2... 6, 7. This is actually the binary system, and it works like this:

Decimal  Binary
0        0000
1        0001
2        0010
3        0011
4        0100
5        0101
6        0110
7        0111
8        1000
9        1001
10       1010
11       1011
12       1100
13       1101
14       1110
15       1111

It works exactly like usual addition, but the maximum digit is 1, not 9. Decimal 0 is 0000, then you add 1 and get 0001, add one once again and you have 0010. What happened here is like having decimal 09 and adding one: you change 9 to 0 and increment the next digit.

From the example above you can see that there's always a maximum value you can keep in a number with a fixed number of bits, because when all bits are 1 and you try to increase the value by 1, all bits will become 0, thus breaking the number. It's called integer overflow and causes many unpleasant problems, both for users and developers.

   11111111 = 255
 +        1
 ----------
  100000000 = 0   (9 bits here, so the leading 1 is trimmed)

  • For 1 bit the greatest value is 1,
  • 2 bits - 3,
  • 3 bits - 7,
  • 4 bits - 15

The greatest possible number is always 2^N-1, where N is the number of bits. As I said before, a memory address is a number and it also has a maximum value. That's why machine word's size is also a limit for the number of available memory addresses - sometimes your CPU just can't process numbers big enough to address more memory.
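
The 2^N-1 formula and the overflow behaviour can both be demonstrated in Python. Note that Python's own integers never overflow, so masking the result to N bits is what simulates a fixed-width machine word here:

```python
def max_value(n_bits):
    """Greatest unsigned value that fits in n_bits: 2^N - 1."""
    return 2**n_bits - 1

print(max_value(8))   # 255
print(max_value(32))  # 4294967295

# Simulate 8-bit overflow: 255 + 1 wraps to 0,
# because the 9th bit is trimmed by the mask.
result = (255 + 1) & 0xFF
print(result)  # 0
```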

So on 32 bits you can keep numbers from 0 to 2^32-1, and that's 4 294 967 295. That's more than the greatest address in 1 GB of RAM, so in your specific case the amount of RAM is the limiting factor.
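
Putting both limits together, a quick sketch of the "smaller of the two" rule from the short answer:

```python
ram_bytes = 2**30    # 1 GiB of installed RAM
word_limit = 2**32   # values addressable with a 32-bit machine word

# The usable address count is the smaller of the two limits.
usable_addresses = min(ram_bytes, word_limit)
print(usable_addresses)  # 1073741824 -> here the RAM is the limiting factor
```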

The RAM limit for a 32-bit CPU is theoretically 4 GiB (2^32 bytes), and for a 64-bit CPU it's 16 EiB (exbibytes; 1 EiB = 2^30 GiB). In other words, a 64-bit CPU could address the entire Internet... 200 times ;) (estimated by WolframAlpha).

However, in real-life operating systems 32-bit CPUs can address about 3 GiB of RAM. That's because of operating system's internal architecture - some addresses are reserved for other purposes. You can read more about this so-called 3 GB barrier on Wikipedia. You can lift this limit with Physical Address Extension.


Speaking about memory addressing, there are few things I should mention: virtual memory, segmentation and paging.

Virtual memory

As @Daniel R Hicks pointed out in another answer, OSes use virtual memory. What it means is that applications actually don't operate on real memory addresses, but ones provided by OS.

This technique allows the operating system to move some data from RAM to a so-called Pagefile (Windows) or Swap (*NIX). An HDD is a few orders of magnitude slower than RAM, but that's not a serious problem for rarely accessed data, and it allows the OS to provide applications with more RAM than is actually installed.

Paging

What we were talking about so far is called flat addressing scheme.

Paging is an alternative addressing scheme that lets you address more memory than you normally could with one machine word in the flat model.

Imagine a book filled with 4-letter words, 1024 words on each page. To address a word, you have to know two things:

  • The number of the page on which that word is printed.
  • Which word on that page is the one you're looking for.

Now that's exactly how modern x86 CPUs handle memory. It's divided into 4 KiB pages (1024 machine words each), and those pages have numbers. (Actually, pages can also be 4 MiB big, or 2 MiB with PAE.) When you want to address a memory cell, you need the page number and the address within that page. Note that each memory cell is referenced by exactly one pair of numbers; that won't be the case for segmentation.
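
Splitting a flat address into a page number and an offset is just integer division and remainder. A small sketch, assuming the 4 KiB page size mentioned above:

```python
PAGE_SIZE = 4096  # 4 KiB pages

def split_address(addr):
    """Return (page_number, offset_within_page) for a flat byte address."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

# The byte at flat address 10000 lives on page 2, at offset 1808.
page, offset = split_address(10000)
print(page, offset)  # 2 1808

# Each flat address maps back to exactly one (page, offset) pair:
assert page * PAGE_SIZE + offset == 10000
```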

Segmentation

Well, this one is quite similar to paging. It was used in the Intel 8086, to name one example. Groups of addresses are now called memory segments, not pages. The difference is that segments can overlap, and they do overlap a lot. For example, on the 8086 most memory cells were reachable from 4096 different segments.


An example:

Let's say we have 8 bytes of memory, all holding zeros except for the 4th byte, which is equal to 255.

Illustration for flat memory model:

 _____
|  0  |
|  0  |
|  0  |
| 255 |
|  0  |
|  0  |
|  0  |
|  0  |
 -----

Illustration for paged memory with 4-byte pages:

 PAGE 0          PAGE 1
  _____           _____
 |  0  |         |  0  |
 |  0  |         |  0  |
 |  0  |         |  0  |
 | 255 |         |  0  |
  -----           -----

Illustration for segmented memory with 4-byte segments shifted by 1:

 SEG 0
  _____   SEG 1
 |  0  |  _____   SEG 2
 |  0  | |  0  |  _____   SEG 3
 |  0  | |  0  | |  0  |  _____   SEG 4
 | 255 | | 255 | | 255 | | 255 |  _____
  -----  |  0  | |  0  | |  0  | |  0  |
          -----  |  0  | |  0  | |  0  |
                  -----  |  0  | |  0  |
                          -----  |  0  |
                                  -----

(Segments 5 to 7 continue the pattern, each shifted one more byte towards the end of memory.)

As you can see, 4th byte can be addressed in four ways: (addressing from 0)

  • Segment 0, offset 3
  • Segment 1, offset 2
  • Segment 2, offset 1
  • Segment 3, offset 0

It's always the same memory cell.

In real-life implementations, segments are shifted by more than 1 byte (for the 8086 it was 16 bytes).
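
On the 8086, the physical address is computed as segment * 16 + offset, which is exactly why the same byte is reachable through many different segment:offset pairs. A small sketch:

```python
def physical_address(segment, offset):
    """8086 real-mode addressing: segment shifted left by 4 bits, plus offset."""
    return (segment << 4) + offset

# Both of these segment:offset pairs name the same physical byte:
print(physical_address(0x1000, 0x0010))  # 65552
print(physical_address(0x1001, 0x0000))  # 65552
```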

What's bad about segmentation is that it's complicated (but I think you already know that ;)). What's good is that you can use some clever techniques to create modular programs.

For example, you can load some module into a segment, then pretend the segment is smaller than it really is (just small enough to hold the module), then choose the first segment that doesn't overlap with that pseudo-smaller one, load the next module there, and so on. Basically, what you get this way is pages of variable size.


In addition to the above, note that virtual addressing is used, along with multiple address spaces. So, even though you only have 1 GB of RAM, a program could conceptually use up to 4 GB of virtual memory (though most operating systems will limit it to less than this). And you can conceptually have a (nearly) infinite number of such 4 GB address spaces.

RAM size doesn't constrain (so much) the maximum size of a program or the number of programs you can run, but rather constrains performance. When real memory becomes "over-committed" and the system begins to "thrash" as it "swaps" "pages" of memory back and forth between RAM and disk, performance plummets.

The 1 GByte of RAM occupies 1024*1024*1024 bytes, or 1,073,741,824 bytes.

A 32-bit processor always has 4*1024*1024*1024 bytes, or 4,294,967,296 bytes, of address space. The 1 GByte of RAM appears within this space. On Intel processors, some RAM needs to appear at address 0 for the interrupt vectors, so physical RAM starts at address 0 and goes up.

Other things appear in that address space, such as BIOS and option ROMs (in the upper 384Kbytes within the first 1Mbyte), I/O devices (like the APIC), and the video RAM. Some weird things also go on with system management mode "SMRAM" that I don't completely understand yet.

Note this is physical address space, from the kernel's point of view. The MMU can rearrange all of this in any manner to a userspace process.


A 32-bit processor can address at most 2^32 individual bytes of memory (about 4 GB), but having 1 GB of memory gives you 1*1024*1024*1024 addressable bytes of memory (though you would probably still have a 2^32 virtual address space). A 64-bit CPU could address 2^64 individual bytes, but most current systems use only 48 bits for memory addresses, making the upper bound 2^48 addressable bytes.
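
Assuming 48 usable address bits, the resulting limit is easy to compute directly:

```python
addressable = 2**48          # bytes addressable with 48 address bits
print(addressable)           # 281474976710656
print(addressable // 2**40)  # 256 -> i.e. 256 TiB
```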


The accepted answer gives a good explanation, but I don't think it is the complete answer: it doesn't say anything about the address bus, and the bus width is actually the main source of memory constraints. For example, the 8080 is an 8-bit processor (its data bus is 8 bits wide), but it has a 16-bit address bus. It can address 2^16 = (2^6)*(2^10) = 64*1024 bytes = 64 KB.

You can find more in the "Technical history" section of the Wikipedia article on 32-bit computing.


I believe the most basic information has been lost in this conversation, so here is my answer:

Saying "this is a 32-bit processor" means that the word size, i.e. the amount of data the CPU can understand and work with at one time, is 32 bits. Likewise with 64-bit processors: they can handle words of up to 64 bits.

Think of this like an old mechanical calculator: you only have so many digits, so you simply cannot input a longer number.

Now, an address the CPU uses also has to fit into that same space, so for a 32-bit processor an address can also only be 32 bits at most. From here we can simply calculate the maximum number of addresses (i.e., the maximum amount of RAM addressable by the CPU):

2^32 = 4294967296 (= 4 GB)

or

2^64 = 18446744073709551616 (= way more ;)

Or, as a fun example, my old Commodore 64 had an 8-bit CPU with a 16-bit address bus, so it was capable of managing a memory of:

2^16 = 65536 bytes (= 64 KB)

This is the basic logic, but, as stated before, there are ways around this limitation, like virtual address spaces, memory mapping etc.

