A recent study in Nature Electronics describes an analog microchip capable of solving matrix equations with precision comparable to that of digital processors, but with extraordinary efficiency and speed. Although the idea of analog computing is not new, this advance breaks several paradigms: it demonstrates that analog technology can not only coexist with modern digital chips, but compete with them.
The study addresses the classic problem of inverting a matrix (solving Ax = b) using analog techniques combined with resistive memory (RRAM). What does this mean? A matrix is the grid of numbers that computers use to predict the weather, train an AI, or process an image. For a digital processor, solving one is like assembling thousands of puzzle pieces: it works piece by piece, following a rigid, precise sequence. Analog chips, on the other hand, attack all the pieces at the same time. This new chip combines both strategies.
First, it uses low-precision analog operations, something like a quick sketch of the result: it traces the contours without dwelling on the details. It then applies high-precision multiplications, which refine the drawing and correct the edges. That combination of analog speed and digital precision is what makes it so effective.
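This sketch-then-refine loop is the classic idea of mixed-precision iterative refinement. The snippet below is a purely numerical sketch, not the chip's circuitry: the "analog sketch" stage is imitated by quantizing an approximate inverse to float16, and the "digital" stage computes exact residuals to correct it.

```python
import numpy as np

def refine_solve(A, b, iters=3):
    # "Analog sketch": a low-precision approximate inverse of A
    # (float16 quantization stands in for imprecise analog hardware).
    M = np.linalg.inv(A).astype(np.float16).astype(np.float64)
    x = M @ b                      # rough first solution
    for _ in range(iters):
        r = b - A @ x              # high-precision residual ("correct the edges")
        x = x + M @ r              # reuse the cheap approximate inverse
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
x = refine_solve(A, b)
```

Each pass shrinks the error by a large factor, which is why only a handful of iterations are needed to reach full digital precision.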
To store and manipulate information, the chip does not use traditional “ones and zeros,” but rather a special memory called resistive memory (RRAM). Each of its tiny cells can have multiple levels of conductivity, like a series of faucets that do not just open or close, but let more or less water through depending on the intensity of the calculation. This allows mathematical operations to be mapped directly onto the material, as if the equation were solved inside the hardware itself.
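The multi-level-cell idea can be illustrated in a few lines. This sketch uses hypothetical parameters (8 conductance levels, i.e. a 3-bit cell; not the real device's characteristics): matrix entries are quantized onto discrete conductance values, and a matrix-vector product happens "in one shot," which is what a resistor crossbar does physically through Ohm's and Kirchhoff's laws.

```python
import numpy as np

LEVELS = 8  # hypothetical 3-bit RRAM cell: 8 conductance states

def to_conductances(A, levels=LEVELS):
    # Quantize each entry onto one of `levels` evenly spaced values,
    # mimicking the discrete conductance states of an RRAM cell.
    lo, hi = A.min(), A.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((A - lo) / step) * step

def crossbar_matvec(G, v):
    # In a real crossbar, applying voltages v across conductances G
    # produces output currents I = G @ v in a single physical step.
    return G @ v

A = np.array([[0.9, 0.1, 0.4],
              [0.2, 0.8, 0.3],
              [0.5, 0.6, 0.7]])
G = to_conductances(A)
y = crossbar_matvec(G, np.array([1.0, -1.0, 0.5]))
```

The quantization error is bounded by half a conductance step, which is exactly the low-precision "sketch" the digital refinement stage then cleans up.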
Then an iterative algorithm called BlockAMC comes into play, repeating the process several times until it reaches very high precision, equivalent to the 32 bits of the most powerful digital systems, but using far less time and energy. In practical tests (such as MIMO communication systems, where thousands of signals cross simultaneously), the chip achieved results comparable to those of a digital processor in just two or three passes.
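Block methods like BlockAMC handle matrices larger than a single analog array by partitioning them into blocks. As a conceptual sketch only (the published algorithm differs in its details), here is the standard way to build a large inverse from inverses of smaller blocks: recursive 2×2 block inversion via the Schur complement.

```python
import numpy as np

def block_inverse(A):
    # Invert a matrix by splitting it into four blocks and combining the
    # inverses of the smaller pieces (Schur complement). This mirrors the
    # divide-and-conquer idea behind block methods; it is not the paper's
    # exact BlockAMC procedure.
    n = A.shape[0]
    if n == 1:
        return 1.0 / A
    k = n // 2
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    A11_inv = block_inverse(A11)
    S = A22 - A21 @ A11_inv @ A12          # Schur complement
    S_inv = block_inverse(S)
    top_left = A11_inv + A11_inv @ A12 @ S_inv @ A21 @ A11_inv
    return np.block([[top_left,              -A11_inv @ A12 @ S_inv],
                     [-S_inv @ A21 @ A11_inv, S_inv]])

# A small, well-conditioned example matrix
A = np.array([[5.0, 1.0, 0.0, 0.0],
              [1.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 1.0],
              [0.0, 0.0, 1.0, 5.0]])
A_inv = block_inverse(A)
```

The appeal for analog hardware is that each small block inversion can be delegated to a physical array, with only the block-combination bookkeeping left for digital logic.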
The result is striking: up to a thousand times faster and a hundred times more energy-efficient than conventional digital processors. This is not just an academic curiosity, but proof that analog processing can challenge digital dominance in essential linear-algebra tasks.
The relevance of this progress increases if we frame it in the context of the technological restrictions facing China. For years, the United States has imposed sanctions and vetoes that limit Chinese access to certain advanced chip manufacturing processes, extreme ultraviolet (EUV) lithography equipment, and cutting-edge semiconductor technologies.
These restrictions have pushed China to create its own alternative route, one that allows it to compete with the most advanced digital architectures (for example, those built with EUV lithography). The microchip also has another advantage: its energy efficiency. In high-performance systems such as data centers, networks, or wireless communications, energy efficiency is key. If an analog chip can produce comparable results while consuming a fraction of the power, it becomes an attractive option, especially in scenarios where access to advanced lithography is limited.
No technological advance is without challenges. For this analog microchip, one of the main problems is scalability. The demonstrated results are for relatively small matrices (16×16, or 256 numbers in total, whereas conventional workloads often involve matrices of around 512×512). Scaling to much larger sizes will require overcoming physical limitations and noise in analog circuits.
The study does not claim that digital chips no longer make sense; rather, it highlights that hybrid architectures (analog + digital) can open strategic paths. It is a reminder that innovation comes not only from classic digital miniaturization, but also from rethinking how we compute.