MemCon and the Path Forward for Memory in a Post-Moore's Law World

As we wrap the first quarter of 2023, we face an environment of soft demand for data center infrastructure and open questions about DRAM innovation. Traditional DRAM is pushing up against the limits of cell shrink as Moore's Law fades, and cost recovery on new process generations is stretching beyond the historical one-generation cadence. DRAM price curves have also taken a hit from soft demand, looking like double-diamond ski slopes for the last couple of quarters. In fact, as the industry discussed the state of memory yesterday in Mountain View, Micron announced the worst loss in the history of the company due to inventory write-downs, but also signaled that the epic fall may be behind us, providing a glimmer of hope for demand stabilization. Given this subdued state of affairs, I don't think there's a better time to look to the future, where disruptive paths of innovation will help propel this entire sector forward and bring the data capacity and performance that AI-era applications demand.

In today's MemCon keynote, Microsoft's Zaid Khan laid out the challenge clearly. AI workloads require massive memory footprints, and while HBM holds promise for the performance AI requires, issues with error susceptibility and capacity may limit its application. New alternatives are required. Zaid called on the industry to create radical scaling alternatives that not only address performance and speed/bandwidth requirements but also break through the geometry and TCO challenges facing today's traditional innovation curve with new memory hierarchies.

All Industry Innovation Leads to CXL

While there was talk of new memory tiering with HBM and accelerated memory in the MemCon presentations, the behemoth in the room at the conference was CXL. The new industry standard, available on both Intel and AMD platforms today and gaining traction as solutions ship, promises a new tier of memory for applications that don't require the low latency of DDR5 or even HBM configurations (the sketch below shows how such a tier typically surfaces to software). CXL will also open the door to disaggregated system design, functioning as a switch connecting compute pools with coherent memory pools and enabling workload composability with a resource precision we've not historically seen in data centers. The good news about CXL? The large cloud providers are squarely behind the technology, serving on the consortium's board and placing their weight behind integrating the standard into their infrastructure. This matters…a lot…when you consider the volume of infrastructure the large players represent to the market.
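To make the tiering idea concrete, here is a minimal sketch in C, assuming a Linux host where a CXL Type 3 memory expander is enumerated as a CPU-less NUMA node (node 1 is an assumption here; `numactl -H` shows the real topology on a given box). It uses standard libnuma calls to keep hot data in local DDR5 and push colder, capacity-bound data to the CXL tier.

```c
/* Minimal sketch: placing colder data on a CXL memory expander.
 * Assumes a Linux system where the CXL Type 3 device is enumerated
 * as a CPU-less NUMA node (node 1 here is an assumption; check
 * `numactl -H` on your system). Build with: gcc tier_demo.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define HOT_BYTES  (64UL << 20)   /* 64 MiB of latency-sensitive data */
#define COLD_BYTES (512UL << 20)  /* 512 MiB of capacity-tier data    */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available on this system\n");
        return 1;
    }

    int cxl_node = 1; /* assumption: expander shows up as node 1 */

    /* Hot data stays on local DDR5 attached to the CPU (node 0). */
    void *hot = numa_alloc_onnode(HOT_BYTES, 0);

    /* Cold, capacity-bound data lands on the CXL tier, trading
     * latency for footprint: the tiering idea discussed above. */
    void *cold = numa_alloc_onnode(COLD_BYTES, cxl_node);

    if (!hot || !cold) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(hot, 0, HOT_BYTES);   /* touch pages so placement is real */
    memset(cold, 0, COLD_BYTES);

    printf("hot tier on node 0, cold tier on node %d\n", cxl_node);

    numa_free(hot, HOT_BYTES);
    numa_free(cold, COLD_BYTES);
    return 0;
}
```

Nothing here is CXL-specific in the API sense, and that is the point: to first order, a CXL expander shows up to software as just another, slower NUMA node, which is what makes the new tier approachable without rewriting applications.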

But questions still remain. Will CXL's reliance on PCIe Gen 5 and Gen 6 provide the data rates tiered memory solutions require, or will the latency penalty gate use by many applications? (The back-of-envelope below frames the bandwidth side of that question.) What happens to the enterprise? Do enterprises have sophisticated enough workload oversight to run this new capability on-prem at scale, and will CXL's introduction drive an even greater wedge between cloud provider and enterprise data center environments? And can CPU vendors deliver on schedule to ensure a swift transition to the “now it really gets interesting” flavors of the specification, namely CXL 2.0 and 3.0 in servers? Intel and AMD are both on record committing CXL 2.0 support to next-generation servers, and the collective assembled in Mountain View certainly hopes they both deliver on those promises, since broad market adoption depends on it.
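As a rough frame for that bandwidth question, here is a back-of-envelope sketch in C (my own illustration, not from any MemCon presentation). It compares the raw per-direction throughput of the PCIe Gen5 and Gen6 x16 links that CXL rides on against a single DDR5-4800 channel; real CXL.mem throughput will come in lower once protocol overheads are counted.

```c
/* Back-of-envelope: raw link rates behind the CXL bandwidth question.
 * Figures are per direction and ignore protocol overheads beyond
 * line encoding (Gen6 FLIT/CRC overhead trims a few percent more).
 */
#include <stdio.h>

int main(void)
{
    int lanes = 16;

    /* PCIe Gen5: 32 GT/s per lane with 128b/130b encoding. */
    double gen5_gbs = 32.0 * (128.0 / 130.0) / 8.0 * lanes;  /* GB/s */

    /* PCIe Gen6: 64 GT/s per lane (PAM4), shown before FLIT overhead. */
    double gen6_gbs = 64.0 / 8.0 * lanes;                    /* GB/s */

    /* One DDR5-4800 channel: 4800 MT/s x 8 bytes per transfer. */
    double ddr5_gbs = 4800.0 * 8.0 / 1000.0;                 /* GB/s */

    printf("PCIe Gen5 x16: ~%.0f GB/s per direction\n", gen5_gbs);
    printf("PCIe Gen6 x16: ~%.0f GB/s per direction (pre-FLIT)\n", gen6_gbs);
    printf("DDR5-4800 channel: %.1f GB/s\n", ddr5_gbs);

    /* Takeaway: a Gen5 x16 link lands near two DDR5 channels of
     * bandwidth, so capacity tiering is plausible on bandwidth
     * alone; the open question above is the added load-to-use
     * latency, which no wider link can buy back. */
    return 0;
}
```

As always, thanks for reading - AK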
