I am reading Operating System Concepts and I am on the 8th chapter! However, I could use some clarification, or reassurance that my understanding is correct.
Logical Addresses: Logical addresses are generated by the CPU, according to the book. What exactly does this mean (in a system where address binding happens at execution time)? I assume that when a program is compiled, it has no idea where its code will be loaded in memory. All the compiler does is set up a general sketch of how the program image should be laid out; it doesn't assign any real addresses to it. When the program is executed, the CPU takes this layout the compiler produced and generates addresses (logical ones) for the references in the code.
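For example, I assume that when I print a pointer in a C program, what I am seeing is one of these logical addresses handed out at execution time, not a physical RAM address (this is just my own illustration, not something from the book):

    #include <stdio.h>

    int global_counter = 0;          /* lives in the program's data segment */

    int main(void)
    {
        int local = 42;              /* lives on this process's stack */

        /* Both of these print logical (virtual) addresses: they are the
         * addresses the CPU generates while executing this code, and they
         * say nothing about where the bytes sit in physical RAM. */
        printf("address of global_counter: %p\n", (void *)&global_counter);
        printf("address of local:          %p\n", (void *)&local);
        return 0;
    }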
Physical Addresses: Physical addresses are not generated until after the CPU has produced a logical address (consisting of a base and an offset). The logical address goes through the MMU (or some similar device), and somewhere along the way it is mapped to a physical RAM address.
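To check that I am picturing this step correctly, here is roughly the translation I have in mind for a simple base-and-limit scheme (just a sketch of my mental model; the structure and register names are made up, and a real MMU does this in hardware):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical relocation/limit registers loaded by the OS on a
     * context switch. */
    struct mmu_regs {
        uint32_t base;   /* start of the process's partition in RAM */
        uint32_t limit;  /* size of the partition */
    };

    /* Translate a CPU-generated logical address into a physical one. */
    static bool translate(const struct mmu_regs *mmu,
                          uint32_t logical, uint32_t *physical)
    {
        if (logical >= mmu->limit)
            return false;            /* would trap to the OS (protection fault) */
        *physical = mmu->base + logical;
        return true;
    }

    int main(void)
    {
        struct mmu_regs mmu = { .base = 0x40000, .limit = 0x10000 };
        uint32_t phys;

        if (translate(&mmu, 50, &phys))
            printf("logical 50 -> physical 0x%x\n", phys);
        return 0;
    }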
What, then, is the actual difference? I can see one benefit: using logical addresses gives the applications more freedom. If physical addresses were hard-coded, a program's success would depend heavily on the particular machine, which physical RAM addresses happen to be available, and so on.
Doesn't converting logical addresses to physical addresses impose two steps instead of one, and therefore more overhead?
Where, then, do the logical addresses reside after generation? They may exist in a register on the CPU while the CPU is servicing a process, but where do they go before and after that? I understand this is implementation-dependent. I assume they may be stored in some special register space or buffer on the CPU, such as a TLB, correct? If not, then the translation table may exist in RAM itself, and the CPU only holds a pointer to the base address of that table in RAM, correct?
It seems that holding the addresses in RAM is counterproductive to the purpose of logical memory addresses. I can only assume my understanding is incorrect.
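In other words, the lookup I imagine looks something like this (again, purely my own sketch; the names and structures are invented, and the TLB/page-table walk really happens in hardware):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                    /* assume 4 KiB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define TLB_SLOTS  16

    struct tlb_entry { uint32_t vpn, pfn; bool valid; };

    /* Small on-CPU cache of recent translations. */
    static struct tlb_entry tlb[TLB_SLOTS];

    /* The full page table lives in RAM; the CPU only holds its base
     * pointer (something like CR3 on x86). */
    static uint32_t *page_table_base;

    static uint32_t translate(uint32_t logical)
    {
        uint32_t vpn    = logical >> PAGE_SHIFT;
        uint32_t offset = logical & (PAGE_SIZE - 1);

        /* 1. Fast path: check the TLB, which is inside the CPU. */
        for (int i = 0; i < TLB_SLOTS; i++)
            if (tlb[i].valid && tlb[i].vpn == vpn)
                return (tlb[i].pfn << PAGE_SHIFT) | offset;

        /* 2. Slow path (TLB miss): walk the page table in RAM. */
        uint32_t pfn = page_table_base[vpn];

        /* Cache the translation so the next access to this page is fast. */
        tlb[vpn % TLB_SLOTS] = (struct tlb_entry){ vpn, pfn, true };

        return (pfn << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        /* Tiny fake page table: virtual page N maps to physical frame N + 100. */
        static uint32_t fake_table[8];
        for (uint32_t vpn = 0; vpn < 8; vpn++)
            fake_table[vpn] = vpn + 100;
        page_table_base = fake_table;

        printf("logical 0x2345 -> physical 0x%x\n", translate(0x2345));
        return 0;
    }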
This answer is by no means exhaustive but it may explain it enough to make things click.
In virtual memory systems, there is a disconnect between logical and physical addresses.
An application can be given a virtual address space of (let's say) 4G. This is its usable memory and it's free to use it as it sees fit. It's a nice contiguous block of memory (from the point of view of the application).
However, it is not the only application running, and the OS has to mediate between them all. Underneath that nice contiguous model, there is a lot of mapping going on to convert logical to physical addresses.
With this mapping, the OS and hardware (I'll just call these the lower layers from here on in) are free to put the application's pages anywhere they want (either in physical memory or swapped out to secondary storage).
When the application tries to access memory at logical address 50, the lower layers can translate that to a physical address using translation tables. And if it tries to access logical memory that has been swapped out to disk, a page fault is raised and the lower layers bring the relevant data back into memory, at whatever physical address they choose.
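Very roughly, and leaving out all the hardware detail, that logic looks something like this (a toy sketch, not how any real kernel or MMU is written; the structures and the frame number picked by the fault handler are made up):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                        /* assume 4 KiB pages */

    struct pte {                                 /* one page-table entry */
        uint32_t frame;                          /* physical frame number */
        bool     present;                        /* is the page in RAM right now? */
    };

    /* Pretend fault handler: pick a free frame and read the page back from disk. */
    static void handle_page_fault(struct pte *entry)
    {
        entry->frame   = 7;                      /* whatever frame the OS chooses */
        entry->present = true;                   /* page is resident again */
    }

    static uint32_t access(struct pte *table, uint32_t logical)
    {
        uint32_t vpn    = logical >> PAGE_SHIFT;
        uint32_t offset = logical & ((1u << PAGE_SHIFT) - 1);

        if (!table[vpn].present)                 /* swapped out: trap to the OS */
            handle_page_fault(&table[vpn]);

        return (table[vpn].frame << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        struct pte table[4] = { { .frame = 3, .present = true },
                                { .present = false } };   /* page 1 is on disk */

        printf("logical 50     -> physical 0x%x\n", access(table, 50));
        printf("logical 0x1010 -> physical 0x%x\n", access(table, 0x1010));
        return 0;
    }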
In the bad old days when physical addresses were all you had, code had to be relocatable (or fixed up on load) since it could be loaded anywhere. With virtual memory, that code (and data) can sit at logical memory location 50 in a dozen different processes at the same time - its actual physical address will be different in each, however.
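You can see this for yourself on a POSIX system (Linux, macOS): fork a process and both copies report the same logical address for the same variable, even though after the child writes to it the two copies must live in different physical frames (a quick demonstration, assuming fork() and the usual POSIX headers are available):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 1;                       /* one variable, one logical address */

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {                  /* child: gets its own copy of 'value' */
            value = 2;
            printf("child : &value = %p, value = %d\n", (void *)&value, value);
        } else {                         /* parent: its copy is untouched */
            wait(NULL);
            printf("parent: &value = %p, value = %d\n", (void *)&value, value);
        }
        return 0;
    }

Both lines print the same logical address, yet the two processes see different contents, so the bytes must be sitting in different physical frames.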
A physical page can even be shared, so that one physical copy appears in the address spaces of many processes at once. This is the crux of shared code (so we don't use more physical memory than we need) and of shared memory (to allow easy inter-process communication).
It is, of course, less efficient than a pure physical-address environment, but CPU manufacturers try to make it as insanely efficient as possible, since it's used heavily. The advantages far outweigh the disadvantages.