At the 2025 Storage Developer Conference (SDC), the Storage Networking Industry Association (SNIA) announced a new initiative called Storage.AI. This project aims to create open standards to manage the increasing data demands of artificial intelligence systems more efficiently.
Alongside SNIA's announcement, partners from the Open Compute Project (OCP) detailed progress on storage hardware for large-scale data centers. Experts also highlighted how new semiconductor designs could lower the significant energy consumption of AI workloads.
Key Takeaways
- SNIA has launched Storage.AI, a vendor-neutral initiative to create open standards for optimizing AI data services and workflows.
- The Open Compute Project (OCP) is advancing data center storage with new NVMe SSD specifications for improved telemetry, latency monitoring, and management at scale.
- Emerging technologies like chiplets and non-volatile memory are being explored as solutions to reduce the substantial energy usage of AI data centers.
- A broad coalition of industry groups, including UEC, NVM Express, and OCP, is collaborating on the Storage.AI initiative.
SNIA Launches Storage.AI Initiative
J. Metz, Chair of the SNIA Board of Directors, outlined the vision for Storage.AI during a keynote address at the conference. The primary goal is to establish a set of open, vendor-neutral standards to address critical challenges in AI data management.
These challenges include memory tiering, reducing latency, efficient data movement, and improving storage efficiency. According to Metz, the initiative is designed to handle the entire AI workload lifecycle, from data initiation to consumption.
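Memory tiering, the first of the challenges listed above, is the idea of keeping frequently accessed data in a small, fast memory tier while colder data sits in larger, slower storage. The sketch below illustrates the concept with a toy two-tier store; the class name, LRU policy, and capacities are invented for illustration and are not part of any SNIA specification.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small, fast 'hot' tier backed by a larger
    'cold' tier. Reads promote data to the hot tier; the least-recently-
    used entry is demoted when the hot tier overflows. Illustrative only."""

    def __init__(self, hot_capacity):
        self.hot = OrderedDict()          # fast tier (think DRAM/HBM)
        self.cold = {}                    # slow tier (think SSD)
        self.hot_capacity = hot_capacity
        self.hot_hits = 0
        self.cold_hits = 0

    def put(self, key, value):
        self.cold[key] = value            # all data lives in the cold tier

    def get(self, key):
        if key in self.hot:               # hot hit: cheap access
            self.hot.move_to_end(key)
            self.hot_hits += 1
            return self.hot[key]
        self.cold_hits += 1               # cold miss path: promote the value
        value = self.cold[key]
        self.hot[key] = value
        if len(self.hot) > self.hot_capacity:
            self.hot.popitem(last=False)  # demote least-recently-used entry
        return value

store = TieredStore(hot_capacity=2)
for k in ["a", "b", "c"]:
    store.put(k, k.upper())
for k in ["a", "a", "b", "a"]:
    store.get(k)
print(store.hot_hits, store.cold_hits)    # repeated reads of "a" hit the hot tier
```

The point of the sketch is the access pattern: once "a" is promoted, repeated reads never touch the slow tier, which is the same effect tiering standards aim to deliver for accelerator workloads.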
What is SNIA?
The Storage Networking Industry Association (SNIA) is a non-profit global organization dedicated to developing and promoting standards and technologies for information management. It brings together member companies from across the IT industry to collaborate on storage solutions.
Core Goals of Storage.AI
The project focuses on several key objectives to streamline how AI systems interact with data. SNIA has outlined the following primary goals:
- Reduce I/O Data Amplification: Minimize unnecessary reads and writes so accelerators like GPUs consume only the data they actually need.
- Efficient and Secure Data Movement: Ensure data is moved reliably and securely throughout the AI process.
- New Accelerator Models: Develop new models for how accelerators initiate and consume data.
- Standardized Interfaces: Create uniform hardware and software interface definitions.
- Open Programming Models: Promote open and accessible programming frameworks.
To achieve these goals, Storage.AI will leverage several existing SNIA technical working groups. These include the Smart Data Accelerator Interface (SDXI), the Computational Storage API, and the NVM Programming Model, which together help standardize data access and near-data computation.
A Collaborative Effort
Storage.AI is not a standalone project. It involves a broad ecosystem of industry partners, including the Ultra Ethernet Consortium (UEC), NVM Express (NVMe), the Open Compute Project (OCP), the Distributed Management Task Force (DMTF), PCI-SIG, and The Green Grid. The UALink promoter group is also expected to join soon.
OCP Advances Data Center Storage
In a separate keynote, representatives from Meta and Microsoft discussed ongoing storage projects within the Open Compute Project (OCP). Ross Stenfort of Meta and Lee Prewitt of Microsoft highlighted OCP's contributions to specifications for NVMe Solid State Drives (SSDs) used in hyperscale data centers.
These specifications are designed to improve the management and deployment of digital storage at a massive scale. The joint effort by OCP and SNIA demonstrates a shared industry focus on creating more robust and manageable storage infrastructure.
Key Features for Large-Scale Deployments
The OCP's work has introduced several key features for data center SSDs:
- OCP Health Information Extended Log: Provides detailed telemetry metrics based on real-world, large-scale deployments to monitor drive health.
- OCP Latency Monitoring Feature: Helps isolate, monitor, and debug latency spikes that can impact performance.
- OCP Formatted Telemetry: Delivers useful, human-readable telemetry logs with enhanced security.
- Open-Source OCP NVMe Command Line Interface: Offers open-source tools for managing and interacting with NVMe drives.
Additional improvements include a Hardware Component Log for providing manufacturing data to customers and enhanced Device Self-Test capabilities with universal codes for failing segments. These features collectively give data center operators greater insight and control over their storage hardware.
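The latency-monitoring feature described above is fundamentally about isolating outliers across large fleets of drives. A minimal sketch of that idea follows; the function name, threshold, and sample data are invented for illustration and do not reflect the actual OCP log format.

```python
import statistics

def find_latency_spikes(samples_us, threshold_factor=4.0):
    """Flag samples exceeding threshold_factor times the median latency.
    A toy illustration of the outlier isolation that per-drive latency
    telemetry enables; the threshold here is arbitrary."""
    median = statistics.median(samples_us)
    return [(i, v) for i, v in enumerate(samples_us)
            if v > threshold_factor * median]

# Mostly steady ~100 us I/O latencies with one injected spike.
samples = [98, 102, 101, 99, 100, 2500, 103, 97]
print(find_latency_spikes(samples))   # -> [(5, 2500)]
```

In production, the value of a standardized log is that the same spike-hunting logic works across drives from different vendors, rather than requiring per-vendor parsing.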
Addressing AI's Growing Energy Consumption
A significant theme at the conference was the rising energy demand from AI infrastructure. Volatile memories like DRAM are a major contributor to power consumption in data centers. Analyst Jim Handy presented on how new semiconductor technologies could offer a solution.
The presentation, titled "The Processor Chip of the Future," focused on the role of chiplets and advanced stacked die packaging. These approaches allow for the integration of different types of memory directly into processor packages.
By using new non-volatile memories in these advanced packages, it's possible to reduce the reliance on power-hungry DRAM, which could lead to significant energy savings across data centers.
Pathways to Lower Power Use
The discussion highlighted several methods to decrease the energy footprint of AI systems. The primary strategy involves replacing volatile memory with non-volatile alternatives where possible. This shift is enabled by new packaging technologies that make it practical to combine different chip types in a single component.
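The scale of the potential savings can be seen with a back-of-envelope calculation. All figures below are assumed placeholders chosen to make the arithmetic concrete, not measurements from the presentation.

```python
# Back-of-envelope comparison of idle (standby + refresh) power for DRAM
# versus a non-volatile alternative. All numbers are illustrative
# placeholders, not measured data.
DRAM_STANDBY_W_PER_GB = 0.4    # assumed DRAM standby + refresh power
NVM_STANDBY_W_PER_GB = 0.02    # assumed near-zero idle power for NVM
CAPACITY_GB = 1024             # 1 TB of memory per server (assumed)
SERVERS = 10_000               # fleet size (assumed)

dram_kw = DRAM_STANDBY_W_PER_GB * CAPACITY_GB * SERVERS / 1000
nvm_kw = NVM_STANDBY_W_PER_GB * CAPACITY_GB * SERVERS / 1000
print(f"idle power: DRAM {dram_kw:.0f} kW vs NVM {nvm_kw:.0f} kW")
```

Even with generous error bars on the assumed per-gigabyte figures, the key property driving the savings is that non-volatile memory retains data without refresh power, so its idle draw scales far more gently with capacity.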
As AI models become larger and more complex, the amount of data moved between storage, memory, and processors increases dramatically. Initiatives like Storage.AI and the hardware standards from OCP, combined with more energy-efficient chip designs, represent a multi-faceted industry effort to build a sustainable foundation for future AI development.