Steps forward toward universal memory through joint research between Russia, the USA and Switzerland

A collaborative research project between the Moscow Institute of Physics and Technology, the University of Nebraska and the University of Lausanne has produced a ferroelectric ultrathin film on silicon, paving the way to a material suitable for building a universal non-volatile memory and for memristors that could be used to construct the neuromorphic cognitive systems of the future.

The search for a universal memory to replace DRAM, SRAM, flash memory and hard disks is a mission that involves many researchers around the world. This particular project differs from many others in that the ferroelectric film can be made with conventional tools already commonly used in the production of electronic components.

"The difference between our approach and other attempts to grow ultra-thin ferroelectric films, in particular on silicon, is that we can grow films of an alloy of hafnium and zirconium oxidizes polycrystalline (instead of epitaxial) that keeps the ferroelectric properties up to a thickness of less the three nanometers, "explained Andrei Zenkevich, head of Functional Material and Devices laboratory the MIPT.

The compatibility of this ferroelectric material with silicon substrates means that well-established CMOS production tools can be used to build tunneling junctions from it. "We use the atomic layer deposition technique, alternating cycles of hafnium and zirconium precursors combined with water, to grow an amorphous hafnium-zirconium oxide with a predefined composition," continued Zenkevich.
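
As a rough illustration of how such an alternating-cycle deposition can be planned, here is a small Python sketch (not from the article) that maps a target Hf fraction onto a repeating "supercycle" of HfO2 and ZrO2 ALD sub-cycles. The growth-per-cycle value, cycle limits and function names are illustrative assumptions, not process parameters from the researchers.

```python
# Toy sketch: plan an ALD supercycle (HfO2 + ZrO2 sub-cycles, each a precursor
# pulse followed by a water pulse) that approximates a target Hf:Zr composition.
# All numeric values below are assumptions for illustration only.
from math import gcd

def ald_supercycle(hf_fraction: float, max_cycles: int = 10):
    """Return (HfO2 cycles, ZrO2 cycles) per supercycle closest to hf_fraction."""
    best, best_err = (1, 1), 1.0
    for hf in range(max_cycles + 1):
        for zr in range(max_cycles + 1):
            if hf + zr == 0:
                continue
            err = abs(hf / (hf + zr) - hf_fraction)
            if err < best_err:
                best_err, best = err, (hf, zr)
    hf, zr = best
    d = gcd(hf, zr) or 1
    return hf // d, zr // d

def recipe(hf_fraction: float, target_thickness_nm: float,
           growth_per_cycle_nm: float = 0.1):
    """Print an alternating-cycle plan; growth_per_cycle_nm is an assumed rate."""
    hf, zr = ald_supercycle(hf_fraction)
    cycles_needed = round(target_thickness_nm / growth_per_cycle_nm)
    supercycles = max(1, round(cycles_needed / (hf + zr)))
    print(f"Supercycle: {hf}x HfO2 + {zr}x ZrO2 (precursor pulse + H2O pulse each)")
    print(f"Repeat {supercycles} supercycles for ~{target_thickness_nm} nm")

recipe(hf_fraction=0.5, target_thickness_nm=2.5)  # e.g. Hf0.5Zr0.5O2, sub-3 nm film
```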

Until now, the researchers have only demonstrated that the material can be manufactured and characterized: the next step involves building prototypes to show that the tunnel effect, already proven in theory, can be exploited in an actual memory chip. The information bits are stored by reversing the polarization of the hafnium-zirconium layer, which happens when a current is passed through the layer in the right direction.
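
To make the write/read idea concrete, the following toy Python model (my sketch, not the researchers' device physics) treats a ferroelectric tunnel junction bit as a polarization state that flips when a pulse exceeds an assumed coercive voltage, and that is read back non-destructively through two assumed tunneling-resistance levels. The class name, coercive voltage and resistance values are all hypothetical.

```python
# Minimal toy model of a ferroelectric tunnel junction (FTJ) memory bit.
# Writing reverses the polarization when the pulse exceeds an assumed coercive
# voltage; reading senses the tunneling current at a small, non-switching bias.
class FerroelectricTunnelJunction:
    def __init__(self, coercive_voltage=1.0, r_low=1e4, r_high=1e6):
        self.coercive_voltage = coercive_voltage  # assumed switching threshold (V)
        self.r_low = r_low      # tunneling resistance, polarization "up" (ohms)
        self.r_high = r_high    # tunneling resistance, polarization "down" (ohms)
        self.polarization = +1  # +1 or -1; retained with no power (non-volatile)

    def write(self, voltage: float) -> None:
        """Flip the stored bit only if the pulse exceeds the coercive voltage."""
        if voltage >= self.coercive_voltage:
            self.polarization = +1
        elif voltage <= -self.coercive_voltage:
            self.polarization = -1
        # sub-coercive pulses leave the stored bit unchanged

    def read(self, read_voltage: float = 0.1) -> int:
        """Decode the bit from the tunneling current at a small bias."""
        resistance = self.r_low if self.polarization == +1 else self.r_high
        current = read_voltage / resistance
        threshold = read_voltage / (self.r_low * self.r_high) ** 0.5
        return 1 if current > threshold else 0

cell = FerroelectricTunnelJunction()
cell.write(-1.5)    # program a "0" with a negative pulse
print(cell.read())  # -> 0
cell.write(+1.5)    # program a "1" with a positive pulse
print(cell.read())  # -> 1
```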

The reason tunneling junctions based on a ferroelectric material could lead to a kind of universal memory is that they are very small and can retain their value without consuming energy, together with the advantage of being manufacturable with conventional CMOS tools and potentially as scalable as other CMOS components.

It will still take several years to confirm these hypotheses, and by then we may already have entered the era of cognitive computing, in which hafnium-zirconium oxide could be the core element of the synapses in neuromorphic memories.
