
Here's a lot you might not know about emerging storage technologies.

2026-04-06

Emerging storage technologies are attracting growing attention from the industry. PCM, MRAM, ReRAM, and FRAM have lain dormant for decades, waiting for applications suited to their particular characteristics. Today, their opportunity seems to have truly arrived.

In fact, some of the storage types mentioned above are already in mass production, and these chips have generated considerable sales revenue. As advanced logic nodes drive complex processors and ASICs to adopt emerging persistent-memory technologies, the market is expected to undergo a significant transformation. Meanwhile, industry leader Intel has begun actively promoting its new 3D XPoint memory as non-volatile memory for advanced computing. SNIA (the Storage Networking Industry Association), JEDEC, and other standards organizations, as well as the Linux community and major software companies, are working to establish the standards and ecosystem needed to support these new persistent-memory technologies.

This article will examine emerging memory technologies from multiple perspectives and predict how these technologies will change the chip market.

The necessity of emerging storage technologies

There is a question in the industry: since silicon-based memory technology has always been the preferred solution, why research non-silicon-based memory? Is there a need for this change?

Actually, this question is not difficult to answer.

Silicon memory technology benefits from its use of nearly the same process technology as that used to produce CMOS logic chips, allowing it to leverage the advantages of co-developing memory and logic processes. In fact, until the mid-1980s, logic and memory processes were identical. Only then did the memory market become large enough (over $5 billion per year) to support the development of any additional process technology.

Even so, there has been no major difference between the manufacturing process of memory chips and logic chips, and this synergy between memory and logic chips can continue to reduce the development cost of manufacturing processes.

Almost all emerging memory technologies use novel materials not used in logic processes, so they do not benefit from this synergy. These novel materials are not as well understood as silicon, and this lack of understanding leads to yield problems.

So why have emerging storage technologies developed so rapidly in such a short period of time?

The reason NAND flash memory evolved to 3D is that planar floating gates (the basic bit cells of NAND and NOR flash memory) cannot be shrunk to below 15nm. This is the main reason why all NAND flash memory manufacturers converted planar technology to 3D.

Currently, the most advanced logic processes in the semiconductor industry have shrunk below 10nm, and TSMC has already begun mass production of 7nm chips. SoCs built on these processes would benefit from on-chip non-volatile memory holding firmware, but that memory has to be manufactured at a 15nm process. A chip combining 7nm logic with 15nm flash may perform little better than one built entirely on a 15nm process. From this perspective, new technologies seem inevitable if non-volatile memory is to keep evolving in step with logic chips.

If new memory technologies are developed for logic chips, the resulting process technology can be applied to discrete memory chips at a very reasonable cost. This suggests that the market for discrete emerging memory chips could also see significant growth. However, other factors are also at play in this scenario, which will be discussed later.

Significant effort has been invested in developing new memory technologies to replace NAND once it hit its 15nm scaling limit. However, NAND flash developers found their own way forward by moving to 3D, making it difficult for new technologies to penetrate the NAND market quickly. Emerging-memory companies have therefore shifted their focus to DRAM, betting that once its process shrinks below 10nm it will no longer scale, opening a significant market for their technologies. Whether that happens remains uncertain: DRAM developers say they still have many paths to further scaling without needing emerging memory technologies like MRAM.

From today's perspective, emerging memory technologies may first be mass-produced as embedded memory in logic SoCs, and then evolve into an important part of the discrete memory market.

Bit selectors

Before introducing and defining the emerging storage technologies themselves, we need to understand the concept of the bit selector. The selector determines how small a bit cell can be made, and cell size is a major component of a new storage technology's total cost. Cost is crucial because no system designer will use an overly expensive component.

You may have never heard of selectors before, but they are actually not complicated. Let me explain them in detail below.

Each bit cell in a memory chip requires a selector, which routes the bit cell's contents to a bus connected to the chip's pins so that it can be read or written. The bit-cell technology determines the type of selector: SRAM uses two transistors, DRAM uses one, and flash memory merges the two functions so that a single transistor both stores the bit and performs selection.

Emerging memory technologies use selectors that are much simpler than those required by today's mainstream memories. They can use either two-terminal or three-terminal selectors, as shown in the circuit diagrams below, and there is not much difference between the two. In both cases the selector cuts off current through the unselected bit cell, either by switching off a transistor or by blocking reverse current through a diode.

The basic principles of selectors will be explained below.

The first is a resistive RAM (ReRAM) array, shown in the simplified top view below. Each bit cell is represented by the intersection of a word line and a bit line (this diagram is simplified and the selector is not shown). The word line provides current to select which row of bits to read or write. The bit line reads the bit on that word line, or it allows current to be applied to the bit line to program that bit.

A bit cell can be in either a high-resistance or a low-resistance state; here, red indicates high resistance and green indicates low resistance. Depending on its state, the cell either blocks current or allows it to flow.

As shown in the diagram below, assume for now that there are no selectors. If a bit is in a low-resistance state and its word line (blue) is energized, current flows from the word line through the green cell and out onto its bit line. No other bit line receives current, because all of its bit cells are in the high-resistance state (red).

If any other word line is activated, no current flows into any bit line because all other cells are red, meaning they are in a high-resistance state.

However, problems arise when other bits are in a low-resistance state. See the diagram below for details.

Current flowing down the bit line can also flow back up through another low-resistance cell on the same bit line, as the striped arrows in the figure show. That low-resistance bit lets current flow backward onto another word line, and any low-resistance bit on that word line can then redirect this stray current onto its own bit line.

Imagine a 1,024×1,024 array of bits, randomly programmed into a 50/50 mix of low- and high-resistance states: no word line could be energized without every bit line outputting current!
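The sneak-path problem described above can be sketched in a few lines of code. This is a toy model invented for illustration (the function and variable names are not from any real tool): a selector-less crossbar is just a graph of word lines and bit lines joined by low-resistance cells, and current spreads through any such cell in either direction.

```python
# Toy model of a selector-less resistive crossbar. Word lines and bit lines
# are nodes; each low-resistance ("green") cell is an edge that conducts in
# both directions, which is exactly what creates sneak paths.

def energized_bit_lines(low_res_cells, driven_word_line):
    """Return the set of bit lines that carry current when one word line
    is driven, including current arriving via sneak paths.

    low_res_cells: set of (word_line, bit_line) pairs in the
    low-resistance state.
    """
    live_words = {driven_word_line}
    live_bits = set()
    changed = True
    while changed:                      # propagate until no new lines light up
        changed = False
        for (w, b) in low_res_cells:
            if w in live_words and b not in live_bits:
                live_bits.add(b)
                changed = True
            if b in live_bits and w not in live_words:
                live_words.add(w)       # current climbs back up the bit line
                changed = True
    return live_bits

# Driving word line 0 should only read bit line 0, but the low-resistance
# cells (1, 0) and (1, 2) form a sneak path that also energizes bit line 2.
cells = {(0, 0), (1, 0), (1, 2)}
print(sorted(energized_bit_lines(cells, 0)))   # -> [0, 2]
```

With a random 50/50 fill, as in the 1,024×1,024 example above, essentially every bit line ends up in the result set.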

The purpose of the selector is to ensure that the above situations do not occur. A diode can be connected in series with the bit cell to prevent reverse current from flowing to other word lines. In some emerging memory technologies, the diode can be placed directly below the bit cell, so that it does not occupy any space (in Crossbar's design, the selector is actually a function of the bit cell memory mechanism, which is explained in detail in the relevant white paper).
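The effect of a series diode can be sketched as a simple direction rule. This is an invented illustration, not a circuit simulation: the diode lets current pass from a word line into a bit line but never back up a bit line into another word line, so sneak paths are cut off.

```python
# Toy model of a crossbar whose cells each have a series diode. The diode's
# one-way rule means only cells directly on the driven word line conduct;
# current can never climb back up a bit line into another word line.

def readable_bit_lines(low_res_cells, driven_word_line):
    """Bit lines that see current when one word line is driven, assuming a
    series diode per cell blocks all reverse (bit line -> word line) flow.

    low_res_cells: set of (word_line, bit_line) pairs in the
    low-resistance state.
    """
    return {b for (w, b) in low_res_cells if w == driven_word_line}

# The same pattern that leaks without selectors: cells (1, 0) and (1, 2)
# would form a sneak path, but the diodes confine the read to bit line 0.
cells = {(0, 0), (1, 0), (1, 2)}
print(sorted(readable_bit_lines(cells, 0)))   # -> [0]
```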

However, most memories cannot use diodes as selectors because current must flow through the cell in both directions, as explained below. Much research is underway in the industry to develop good bidirectional selectors that block current at low voltages but conduct at higher voltages, in either direction. In most cases, though, it is far easier to use a transistor, as shown in the three-terminal selector schematic above.

However, transistor selectors require a lot of space because word lines and source lines must run across the array. The following diagram provides a rough understanding of how it works.

Because each word line is accompanied by a source line, only half as many word lines fit in a given area as in a two-terminal configuration. This makes the memory roughly twice as costly as one using two-terminal selectors.

Developers working on emerging memory technologies are very excited about technologies that allow selection via diodes, a way to reduce size and cost, as chip cost is proportional to its area. Unfortunately, most emerging memory technologies require forward current for writing and reverse current for erasing, so simple diodes won't work. The industry is developing bidirectional selectors, but another problem is hindering them. What is it? This will be discussed below.

PCM appears to have an advantage here. As will be explained below, PCM is programmed and erased with current in the same direction, so a simple diode suffices as the selector for a PCM cell. Intel Fellow Al Fazio, the godfather of 3D XPoint memory technology, promoted this idea two years before 3D XPoint's release. However, there is more to it than that.

Selectors are notoriously difficult to get exactly right. When 3D XPoint was first introduced in 2015, Micron's Scott DeBoer noted that while ReRAM bit cells can be made from almost any material, selectors are the tricky part.

So what is a bidirectional selector up against? If the ratio of the selector's "on" resistance to its "off" resistance is 100:1, and there are 100 sneak paths onto a bit line, then the combined sneak current equals the current through the legitimate path. In a large array this is almost always the case, so the selector must perform far better than that. In most cases, transistors offer a better on/off ratio while still allowing bidirectional current flow, making them a necessary evil.
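The arithmetic behind that claim is easy to verify. The values below are illustrative normalized units chosen to match the 100:1 ratio and 100 sneak paths in the text, not measurements of any real device.

```python
# Back-of-envelope check: with a 100:1 selector on/off ratio and 100 sneak
# paths in parallel, leakage current equals the legitimate read current.

V = 1.0            # read voltage (arbitrary units)
R_on = 1.0         # resistance of the selected cell with its "on" selector
R_off = 100.0      # each sneak path sees a selector in its "off" state
n_sneak = 100      # unwanted parallel paths in a large array

signal = V / R_on
leakage = n_sneak * (V / R_off)

print(signal, leakage)    # the sneak current matches the signal exactly
```

Doubling the array size doubles `n_sneak`, which is why large arrays demand on/off ratios far beyond 100:1.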

All of this explains why the selector has a significant impact on memory array area, and why array cost is proportional to area. A memory that can use two-terminal selectors therefore has a better chance of competing with today's mature memories than one that must use three-terminal selectors.

Which will dominate: MRAM, ReRAM, PCM, XPoint, or FRAM?

PCM: Also known as PRAM, phase change memory technology is based on materials that are amorphous or crystalline at normal ambient temperatures. Crystalline materials have low electrical resistance, while amorphous materials have high electrical resistance.

In chemistry and physics, materials whose atoms lack crystalline order are called amorphous. Solid, liquid, and gaseous states are also called "phases," and phase-change memory takes its name from the bit cell's switching between the crystalline and amorphous phases.

Research into PCM began in the 1960s, and the first products shipped in 2006. The technology is typically based on chalcogenide glasses; the Intel/Micron 3D XPoint memory is based on PCM. PCM's biggest advantage is that it can use a simple two-terminal diode as its selector instead of a bidirectional device, because current flows in the same direction whether a bit is being set, reset, or read.

MRAM: Magnetic RAM is based on the giant magnetoresistance (GMR) effect, which has been used in HDD recording heads since the early 1990s. When certain layers of a multilayer GMR stack are magnetized in the same direction, the stack exhibits low resistance; when they are magnetized in opposite directions, its resistance is high. The magnetization can be set by the field around a conductor (Toggle-Mode MRAM) or by passing a forward or reverse current through the bit cell (spin-transfer torque, or STT-MRAM). Both types of products are available today.

MRAM has received significant investment, resulting in numerous STT-MRAM variants, including perpendicular STT, precessional spin torque, spin-orbit torque (SOT), and others. While all devices to date have used three-terminal selectors, recent research suggests that two-terminal selectors may arrive within a few years.

ReRAM: Resistive RAM has many names, with ReRAM, RRAM, and Memristor being the most common. The broadest definition of ReRAM includes any memory that uses resistive storage elements; this includes PCM and MRAM. To distinguish them, ReRAM here refers to any non-PCM or MRAM-based resistive storage technology.

The bit set/reset mechanism in most ReRAMs involves the creation and elimination of filaments or oxygen vacancies: atoms actually move within the device. This naturally leads to wear, but researchers believe this wear can be significantly less than in NAND flash memory. The process uses forward and reverse currents, which sometimes makes three-terminal selectors easier to use than two-terminal selectors. However, some ReRAMs can be used with two-terminal selectors, and some variants can even perform selection within the bit cell. This makes them economical to use in a single layer and allows them to be constructed in multiple layers to further reduce costs.

While most ReRAMs use new materials, some companies have developed methods to manufacture them using mature materials already used in high-volume chip production. Currently, some ReRAMs are already being shipped in bulk.

FRAM: Ferroelectric RAM, or FeRAM, contains no iron despite its name. The technology is called "ferroelectric" because its electrical behavior closely resembles the hysteresis of iron being magnetized and demagnetized. An applied voltage in one direction moves atoms within the FRAM cell to one end of the crystal; a voltage in the opposite direction moves them to the other end.

Unlike the technologies above, FRAM is not a resistive memory. Today's FRAM uses a destructive read mechanism: a write voltage is applied to the cell. If current flows, the atoms moved from one end of the cell to the other, and the cell's contents have been erased; if no current flows, the atoms were already at that end. Whenever a read displaces the atoms, they must be restored to their original position afterward.
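The read-then-restore sequence can be captured in a small state machine. This is a toy model invented for illustration (the class and its conventions, such as "write 0 to read", are assumptions, not a description of any real FRAM controller).

```python
# Toy model of FRAM's destructive read: reading forces the cell toward a
# known state, current flow during that write reveals the old value, and
# a flipped bit must be written back to preserve the data.

class FramCell:
    def __init__(self, state=0):
        self.state = state          # 0 or 1: which end the atoms sit at

    def read(self):
        # Apply a "write 0": current flows only if the atoms must move,
        # i.e. only if the stored value was 1.
        current_flowed = (self.state == 1)
        self.state = 0              # the read has destroyed the stored value
        value = 1 if current_flowed else 0
        if value == 1:
            self.state = 1          # restore the original contents
        return value

cell = FramCell(1)
print(cell.read())   # -> 1
print(cell.read())   # -> 1 again: the restore step preserved the data
```

The restore step is what makes FRAM reads slower-acting than they first appear: every read of a "1" implies an extra internal write.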

Recent research has found that FRAM can be made using hafnium oxide, a material already widely used in semiconductor manufacturing. This is a decisive advantage that distinguishes FRAM from other emerging memory technologies. Current FRAM uses a three-terminal selector, however, which limits how far it can scale.

Other technologies: NRAM (made from carbon nanotubes), graphene memory, correlated-electron RAM (CeRAM), and variations of the technologies above, such as polymer ferroelectrics, ferroelectric tunnel junctions (FTJs), ferroelectric FETs (FeFETs), interface PCM (iPCM, also known as superlattice PCM or TRAM), magnetoelectric RAM (MeRAM), racetrack memory, and so on.

In conclusion, when DRAM and NAND flash memory can no longer reduce costs, all new technologies will compete for the next generation of storage market position, but many technical and application hurdles must be overcome before that can happen.

Research on new storage materials

The development of emerging memory technologies requires extensive experimentation to characterize these new technologies and materials accurately. Dedicating a batch of 300mm wafers to a single test, especially when no other tests can share those wafers, drives costs up significantly.

Another major challenge is that most memory manufacturers operate wafer fabs at very high efficiency and in large volumes, and interrupting the production process to inject a batch of test wafers would be dangerous and wasteful. Most fab managers are reluctant to change processes to accommodate experimentation.

So, what measures can be taken to improve this situation?

Intermolecular, Inc. (IMI) has a solution where they have built a small wafer fab that allows a single wafer to be processed with parameters varying across the entire wafer. In this way, a single wafer can perform 36 or more different experiments simultaneously. This is clearly more economical than performing experiments on 36 wafers.

The company describes itself as a research and development outsourcing company.

Some readers may have already guessed: this will require specialized tools. Standard wafer processing tools are designed to provide absolutely consistent processing results across the entire surface of a semiconductor wafer. IMI has modified industry-standard tools to allow researchers to change parameters in a controlled manner in different areas of the wafer. This not only significantly reduces experimental costs but also accelerates the process, reducing it to a fraction of the time required by standard semiconductor processing facilities. IMI claims this method can accelerate screening by 10 to 100 times.

As shown in the image above, the wafer is divided into several "spots". Based on a pre-experimental decision, the fabrication process for each spot is slightly different from any other spot on the wafer, allowing for the characterization of many variables on the same wafer. The image below shows a sample of this feature.
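The "many experiments per wafer" idea amounts to laying a grid of process-parameter combinations across the spots. The sketch below is hypothetical: the parameter names and values are invented for illustration and are not IMI's actual process variables.

```python
# Hypothetical 6x6 experiment grid: each wafer "spot" receives one
# combination of two process parameters, giving 36 experiments per wafer.
from itertools import product

anneal_temps_c = [300, 350, 400, 450, 500, 550]   # invented values
film_thick_nm = [2, 4, 6, 8, 10, 12]              # invented values

spots = [
    {"spot": i, "anneal_c": temp, "thickness_nm": thick}
    for i, (temp, thick) in enumerate(product(anneal_temps_c, film_thick_nm))
]

print(len(spots))    # -> 36 experiments on a single wafer
print(spots[0])      # the recipe assigned to the first spot
```

Adding a third varied parameter multiplies the experiment count the same way, which is where the claimed 10x to 100x screening speedup comes from.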

In emerging memory technologies, most bit cells are built between two metal interconnect layers, which are formed much later in the manufacturing process. This means that customers can use their own tools to process all the conventional CMOS logic layers on the wafer and then send it to IMI to deposit the final metal layers and the bit layers between them.

IMI states that among the materials it is characterizing are MRAM, FRAM, capacitor-less DRAM, ferroelectric tunnel junctions (FTJs), phase-change memory (PCM), chalcogenides, TRAM (topological-switching RAM), interface phase-change memory (iPCM), correlated-electron RAM (CeRAM), and ReRAM.

IMI is also working on other non-memory projects, including quantum computing devices, standard HKMG logic (high-k metal gate), photovoltaics and LEDs, and even window glass coatings.

Company officials said IMI has conducted more than 1,800 experiments and classified 225 new materials.

IMI, located in San Jose, is one of the very few wafer fabs remaining in Silicon Valley, along with Applied Materials' Maydan Technology Center, Thinfilm's fab, and Lam Research's training facility. Apple acquired a fully-tooled fab from Maxim Integrated in 2015, but it is uncertain whether it is still in use.

Conclusion

This article introduces numerous emerging memory technologies and their value to applications and markets, focusing particularly on their bit selectors. It also provides examples of new memory materials and their mass production. However, this is not all. Subsequent articles will explore the requirements and challenges these emerging memory technologies place on processes and production equipment, provide detailed introductions to emerging memory companies, and offer market forecasts. Stay tuned.
