Largest SSD with 100TB Storage Space
In March 2018, Nimbus Data shocked the storage world by delivering the ExaDrive DC100, a 3.5-inch SATA SSD packing 100 terabytes of flash into a single device. At launch, the DC100 was trumpeted as the "largest SSD in the world", a breakthrough that brought extreme flash density out of hyperscaler back rooms and into plain view. This article explains what the 100TB ExaDrive actually was, the engineering behind it, who would buy it, and how its 100TB crown eventually came under challenge (while its place in history remains secure).
What was the 100TB ExaDrive actually?
The ExaDrive DC100 was essentially a plug-and-play device for servers and appliances: a full 3.5-inch form factor (the same physical size as many desktop HDDs) with a SATA interface. Nimbus relied on extremely dense eTLC NAND (engineered TLC) and aggressive board-level packing to reach 100TB raw. Performance targets were hundreds of MB/s of sequential throughput and roughly 100k IOPS for random workloads: not peak NVMe-class performance, but an unprecedented amount of capacity per slot. Nimbus also made a point of endurance and energy efficiency: the DC100 was sold with power-per-TB figures it claimed were the best in the industry, plus a five-year warranty with what Nimbus termed "unlimited endurance".
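To make the power-per-TB claim concrete, here is a minimal sketch that normalizes a drive's active power draw by its raw capacity. The wattage figures are illustrative assumptions for comparison, not published specifications of any particular drive.

```python
# Rough power-efficiency comparison: high-capacity flash vs. nearline HDD.
# All wattage and capacity figures below are illustrative assumptions.

def watts_per_tb(capacity_tb: float, active_watts: float) -> float:
    """Active power draw normalized per terabyte of raw capacity."""
    return active_watts / capacity_tb

# Assumed figures: a 100TB SSD drawing ~14W active, and a 14TB
# nearline HDD drawing ~7W active.
ssd = watts_per_tb(capacity_tb=100, active_watts=14)
hdd = watts_per_tb(capacity_tb=14, active_watts=7)

print(f"SSD: {ssd:.2f} W/TB")  # ~0.14 W/TB
print(f"HDD: {hdd:.2f} W/TB")  # ~0.50 W/TB
```

Under these assumptions the 100TB drive delivers several times the capacity per watt, which is the core of the efficiency pitch.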
Why a 100TB SSD mattered
The ExaDrive DC100 drew the spotlight for three main reasons.
- Rack-unit density: At 100TB per 3.5-inch drive, data-center designers could pack far more storage capacity into fewer drive bays, power connections, and cooling resources than with smaller SSDs or HDDs. This simplifies data-center design and reduces operational costs for some workloads (see the back-of-the-envelope sketch after this list).
- HDD replacement for nearline/archive: Nimbus positioned the DC100 as a flash alternative to HDDs for nearline and archive storage: slower than top NVMe SSDs, but far denser and more power-efficient per TB than any prior enterprise flash of similar endurance.
- NAND scale-out proof of concept: Putting 100TB in a single purchasable device showed how NAND die stacking and controller design could turn density challenges into enablers of extreme-capacity devices, an important step as vendors followed up with QLC and ever-denser packages.
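As a back-of-the-envelope illustration of the density argument, the sketch below compares the raw capacity of a rack filled with 100TB drives against the same bay count filled with 14TB nearline HDDs. The chassis and bay counts are assumptions for illustration, not measurements of any specific product.

```python
# Back-of-the-envelope rack density comparison.
# Assumed chassis: a 4U JBOD holding 60 x 3.5-inch drives,
# 10 such chassis per 42U rack (all figures illustrative).

BAYS_PER_CHASSIS = 60
CHASSIS_PER_RACK = 10
bays = BAYS_PER_CHASSIS * CHASSIS_PER_RACK  # 600 drive bays per rack

ssd_rack_tb = bays * 100  # 100TB ExaDrive-class SSDs
hdd_rack_tb = bays * 14   # 14TB nearline HDDs (2018-era)

print(f"SSD rack: {ssd_rack_tb / 1000:.0f} PB raw")   # 60 PB
print(f"HDD rack: {hdd_rack_tb / 1000:.1f} PB raw")   # 8.4 PB
```

Same floor space, same bay count, roughly seven times the raw capacity: that is the arithmetic behind the density pitch.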
Price and availability
As expected for top-of-the-line enterprise flash, the ExaDrive DC100 launched at a steep price. Pricing reported by industry outlets put the drive in the tens of thousands of dollars (figures in the $40k–$46k range depending on vendor and configuration), underlining that it was an enterprise and institutional device, not a consumer one.
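At those list prices the cost per terabyte is easy to work out; the snippet below uses the reported $40k–$46k range purely as inputs.

```python
# Cost per terabyte at the reported price range (inputs only; the
# HDD comparison figure below is an assumption for context).
for price_usd in (40_000, 46_000):
    print(f"${price_usd:,} / 100TB = ${price_usd / 100:,.0f} per TB")
# -> $400-$460 per TB, versus very roughly $20-$30 per TB for
#    nearline HDDs in the same era.
```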
What’s changed since 2018 — and is 100TB still the max?
Nimbus’s 100TB drive grabbed headlines in 2018, but storage kept moving. In recent years, several of the larger flash makers have broken through the 100TB barrier with drives aimed at data-center density and cost efficiency. Devices in the ~122TB and even 200–245TB class were announced or sampled by companies such as Solidigm, Phison, and Kioxia in 2024–2025, most built on NVMe/PCIe 4.0/5.0 controllers and paired with denser QLC stacks or advanced packaging. Nimbus’s DC100 thus held the "world’s largest SSD" title for several years before being surpassed, and it retains the honor of being the first 100TB SSD you could actually buy, as well as a significant engineering milestone.
Practical takeaway
For those wondering which SSD first reached 100TB, the canonical answer is Nimbus Data’s ExaDrive DC100 (announced March 19, 2018), the first 100TB enterprise SSD to reach the market. However, if you are asking what the largest SSD available today is, the industry has moved on: vendors are sampling and shipping SSDs well beyond 100TB for hyperscale and AI workloads, making 100TB a milestone rather than the end game.