A True Random Access Memory

Markus | Saturday, August 17th 2019, 18:38

-- What if random access memories were actually true to their name?

Figure 1. Why do they even call it random?

In this post I will analyze the feasibility of using a memory which randomly performs either a read or write operation instead of giving the user the ability to choose between the two. Spoiler: It sucks!

Introduction

Recently, I sat on an eleven-hour flight back from Japan, so naturally, I had some time to kill. And after exhausting the time one can reasonably spend pondering what kind of food the airline had just served me, my mind started to drift to a more technical topic…

What if random access memories were actually true to their name?

True to their name as in, for each access, we execute a read or write cycle, but we can’t control which one it is.

On the surface, it’s a stupid thought, because, why would you build something like this? And I suppose there really isn’t much value in the concept, but maybe building a system like this will allow us to better understand and mitigate possible failure modes in conventional memory architectures.

I did not do any research in advance of writing this “paper” (Geez, Internet on airplanes is waaaaayyy too expensive), so sorry if I’m neglecting or reiterating things that have been discussed a long time ago.

Definitions and Assumptions

The following terms are used throughout this document:

  • RAM: A random access memory in the conventional sense. Write is one instruction, read is the other.
  • TRAM: A true random access memory. There is only one access instruction, and we do not know whether it is interpreted as a read or a write instruction.
  • wreat: A single access to a TRAM. Can result in either a read or a write operation.

For all further analysis, the following assumptions are made:

  • The CPU has multiple registers which can be used as scratch registers. These registers have separate read / write instructions.
  • The system has a full-duplex memory bus.

TRAM Memory Implementations

Just as with conventional RAM, there are many ways to implement such a memory, and the feasibility of creating a working system largely depends on the implementation chosen. So let’s quickly go over some possible approaches and figure out which are worth analyzing in depth.

For each memory type, I’ll be asking the question “How could we algorithmically determine whether we just did a write or read access to the memory?”

Write Without Read

Behavior. When the access to the memory turns out to be

  • a read access, the value from the address is returned on the bus.
  • a write access, the value from the bus is written to the address. All zeroes (or some other fixed, arbitrary value) are returned on the bus.

Write Access. A write access is easy. A reliable write can be performed through the following steps:

  1. Wreat the value to be written to the memory.
  2. Check if the returned value equals the value we just tried to write. If no, go to 1, else continue.

Why check against our own value and not against the fixed write-return constant? Checking for the constant has a false positive: if the cell already happens to hold the constant while we want to store a different value, a read returns the constant, we leave the loop, and the wrong value stays in memory. Checking against the target value is safe in every case. If the wreat resulted in a read and returned our value, the cell already holds what we want, even without a write. If it resulted in a write, it returns the constant, so we loop once more, and the next read then confirms the freshly written value (strictly speaking, the loop only terminates once a read cycle comes along, but that happens eventually with probability 1). Either way, once we exit the loop, the cell is guaranteed to contain our value. So far so good.
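To convince myself the loop behaves, here is a minimal Python sketch of this memory type and the write loop. The class, the 50/50 read/write split, and the write-return constant of 0 are all made-up modeling assumptions. Note that the loop exits on seeing the value it just wrote rather than the write-return constant, which avoids exiting early when the cell coincidentally holds the constant:

```python
import random

WRITE_RETURN = 0  # assumed fixed value returned by a write cycle


class WriteWithoutReadTRAM:
    """Toy model: every access randomly acts as a read or a write."""

    def __init__(self, size):
        self.cells = [0] * size

    def wreat(self, addr, value):
        if random.random() < 0.5:      # read cycle: return the cell content
            return self.cells[addr]
        self.cells[addr] = value       # write cycle: store, return the constant
        return WRITE_RETURN


def tram_write(mem, addr, value):
    """Wreat until a returned value confirms the cell holds our value."""
    while mem.wreat(addr, value) != value:
        pass


mem = WriteWithoutReadTRAM(16)
tram_write(mem, 3, 42)
print(mem.cells[3])  # the cell is guaranteed to hold 42 once the loop exits
```

The loop can only exit after a wreat returns 42, which (since a write returns the constant 0) means a read cycle observed the cell containing 42.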

Read Access. Now that’s the tricky part. And the one that breaks the TRAM concept for this implementation.

  1. Wreat an arbitrary value to the memory location.
  2. Crash and burn!

If our wreat access results in a read access, we get the correct value and are happy. However, if we “accidentally” execute a write access here, we overwrite the data in the location, never to be found again. Even if we changed the write access to write the data value to multiple memory locations per access, this would not help. There is no guarantee that we don’t get 1000 successive writes here. Sure, one can argue that the probability eventually gets low enough, but I want a 100% reliable system here. If you were to go with the probability approach though, don’t forget to perform a write immediately after a successful read to restore potentially now overwritten memory locations.
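A quick simulation (same made-up toy model and 50/50 assumption as before) shows how often a single naive wreat-as-read destroys the stored value:

```python
import random


class WriteWithoutReadTRAM:
    """Toy model: every access randomly acts as a read or a write."""

    def __init__(self, size):
        self.cells = [0] * size

    def wreat(self, addr, value):
        if random.random() < 0.5:      # read cycle
            return self.cells[addr]
        self.cells[addr] = value       # write cycle
        return 0


corrupted = 0
trials = 1000
for _ in range(trials):
    mem = WriteWithoutReadTRAM(4)
    mem.cells[0] = 42                  # pretend this is precious data
    mem.wreat(0, 0)                    # naive "read" attempt
    if mem.cells[0] != 42:             # the access turned out to be a write
        corrupted += 1

print(f"{corrupted} of {trials} naive reads destroyed the data")
```

Roughly half of all naive read attempts clobber the data, which is exactly the "crash and burn" above.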

Unfortunately, this is probably the most common implementation type you would find, so the TRAM concept is off to a bad start.

Read-next on Write

Behavior. When the access to the memory turns out to be

  • a read access, the value from the address is returned on the bus.
  • a write access, the value from the bus is written to the address. The value on the bus is returned on the bus.

This implementation type shows essentially the same behavior as the “write without read” approach. The same analysis applies, only that now we need to check against the target / bus value instead of a constant value.
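The adjusted write loop, again as a hedged toy sketch (class name and 50/50 split are assumptions):

```python
import random


class ReadNextTRAM:
    """Toy model: a write stores the bus value and echoes it back;
    a read returns the cell content."""

    def __init__(self, size):
        self.cells = [0] * size

    def wreat(self, addr, value):
        if random.random() < 0.5:      # read cycle
            return self.cells[addr]
        self.cells[addr] = value       # write cycle echoes the bus value
        return value


def tram_write(mem, addr, value):
    """Exit once the return matches the bus value: either the write
    echoed it back, or a read confirmed the cell already holds it."""
    while mem.wreat(addr, value) != value:
        pass


mem = ReadNextTRAM(8)
tram_write(mem, 5, 99)
```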

So, sadly, no cake here. Moving on.

Read-previous on Write

Behavior. When the access to the memory turns out to be

  • a read access, the value from the address is returned on the bus.
  • a write access, the value from the bus is written to the address. The value contained in the memory cell before the write is returned on the bus.

Write Access. Checking for a successful write is pretty much the same again as in the previous cases, but the explanation why it works is quite a bit different:

  1. Wreat the value to be written to the memory.
  2. Check if the returned value is the expected value. If no, go to 1, else continue.

Let’s look at the following states:

  • Target value, write: The correct value is overwritten with the same value. Since the correct value was already there and is returned, we exit the loop.
  • Other value, write: The correct value is written to the cell, but we see the old one, so we repeat the loop. This brings us to one of the two “target value” cases.
  • Target value, read: We don’t write anything to the cell, but the correct value is already there and is returned. Since it matches the target value, we exit the loop.
  • Other value, read: The stored value is wrong and stays wrong, and so is the returned value. Therefore, we keep wreating until we hit the “other value, write” case.
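The four cases can be checked with another toy simulation (class name and 50/50 split are, as before, made-up assumptions):

```python
import random


class ReadPrevTRAM:
    """Toy model: every access returns the cell's previous content;
    a write additionally stores the bus value afterwards."""

    def __init__(self, size):
        self.cells = [0] * size

    def wreat(self, addr, value):
        old = self.cells[addr]
        if random.random() < 0.5:      # write cycle: store the new value
            self.cells[addr] = value
        return old                     # both cycles return the old content


def tram_write(mem, addr, value):
    """Loop until the returned (previous) content equals the target."""
    while mem.wreat(addr, value) != value:
        pass


mem = ReadPrevTRAM(8)
tram_write(mem, 2, 7)
```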

Read Access. Now we’re talking! Look at the following:

  1. Wreat an arbitrary value to the address.
  2. Perform the write algorithm above to write the value returned by step 1 back to the address.
  3. The value returned by step 1 is the correct value.

If the wreat results in a read operation, nothing changes in memory and we have our value (as each operation returns the previously stored value). If we “accidentally” perform a write here, we corrupt the memory at the location we just read. However, since the wreat also returned the value from before the write, we now know the correct value nonetheless. And knowing it, we can just use our write algorithm from above to write it back to the address. After writing it there again, we can continue with the program using the value from step 1. Also, step 2 does not need to be conditional. If step 1 results in a read, step 2 will just overwrite the correct value with the correct value. No harm done.
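Putting it together, the full read procedure under the same toy-model assumptions (made-up class, 50/50 read/write choice):

```python
import random


class ReadPrevTRAM:
    """Toy model: every access returns the cell's previous content;
    a write additionally stores the bus value afterwards."""

    def __init__(self, size):
        self.cells = [0] * size

    def wreat(self, addr, value):
        old = self.cells[addr]
        if random.random() < 0.5:      # write cycle
            self.cells[addr] = value
        return old                     # both cycles return the old content


def tram_write(mem, addr, value):
    """Wreat until the previous content matches the target."""
    while mem.wreat(addr, value) != value:
        pass


def tram_read(mem, addr):
    # Step 1: wreat anything; the return is the previous cell content
    # whether the access happened to be a read or a write.
    value = mem.wreat(addr, 0)
    # Step 2: unconditionally write the recovered value back, repairing
    # the cell in case step 1 was a write. Harmless if it was a read.
    tram_write(mem, addr, value)
    # Step 3: the value from step 1 is the correct one.
    return value


mem = ReadPrevTRAM(8)
tram_write(mem, 4, 123)
print(tram_read(mem, 4))  # → 123, and the cell still holds 123 afterwards
```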

Now, to be honest, it is debatable whether this type of memory can still be considered a true random access memory. Technically, it’s more of a dual access memory at this point, as each access effectively includes a read: a value is returned no matter which operation is randomly selected. It’s just that, sometimes, this memory also performs a write operation at the same time.

As a side note, this requires one additional scratch register in the write algorithm to store the value received from the bus. Also, our CPU’s memory opcode needs to be able to accept two registers: one with the value to be written, and another one to store the return value. Alternatively, the instruction could use a single supplied register for both reading and writing. In that case, the write algorithm above needs an extra step to restore the register value before each wreat instruction.

Any Advantages?

So, is there any reason to implement such a memory? The only positive thing I can think of is that we can save one pin by dropping the RD/WR connection. However, as we need a full-duplex (dual-port) RAM in the only feasible implementation, I would say that this one additional trace does not present an issue in the first place.

Summary and Outlook

So we managed to create an algorithm to theoretically make a sad excuse for a true random access memory work, and while it was a fun thought experiment, there isn’t really anything to be gained from it other than terrible headaches.

In the beginning, I thought it would be neat to demonstrate the concept with a little MCU design in VHDL based on an existing soft-core CPU, yet, none of the solutions above really strike me as a great discovery. The last one, while theoretically working, just doesn’t really match my image of a true random access memory.

Under these circumstances, I will invest my time into something more useful, like building an FPGA from 74-series logic chips… Stay tuned for that :)

Tags: architecture nonsense