SAN refresh cycles were already painful—now they’re becoming unpredictable and prohibitively expensive. With memory and storage prices spiking faster than most IT budgets can adapt, many organizations are being forced to rethink how and where their data lives. If you manage on‑premises SAN infrastructure—or are facing an upcoming refresh—this shift in storage economics directly affects your roadmap.
Background
Hardware prices have risen sharply over the past year as memory and storage costs spike to unprecedented levels, driven by AI demand and severe DRAM/NAND shortages. Major OEMs—including Dell, Lenovo, HP, and HPE—are implementing 15%+ server price increases as memory makers shift production toward high‑bandwidth AI components, leaving commodity DRAM and SSDs in short supply.
This same pressure is hitting storage infrastructure: SAN hardware costs are rising as HDDs, SSDs, controllers, and networking components all inherit the same supply‑chain inflation, with vendors warning that cost increases are “more dramatic than any player can mitigate.”
Note: I specialize in NetApp solutions on Azure, so the examples below are Azure‑focused.
It’s time to leverage the Cloud!
Shifting storage to Azure means you’re no longer stuck buying big SAN refreshes or guessing how much capacity you’ll need years from now. Instead, you scale up or down on demand, pay only for what you use, and get built‑in security, backup, and high availability without adding more tools or hardware. With on-premises storage getting pricier and harder to maintain, Azure gives you a cleaner, more flexible foundation that grows with your organization.
There are several ways to leverage the cloud – from low effort to high effort. Below are some key options to get you thinking.
Cloud Tiering
Cloud tiering keeps hot data on your existing arrays while automatically moving cold data to low‑cost cloud object storage. That makes it compelling when SAN hardware prices are rising and budgets are tight—it lets you extend the life of your existing investment while shifting growth to a more flexible, cost‑efficient platform.
When cloud tiering makes the biggest impact:
- SANs nearing capacity or approaching a refresh cycle
- Workloads with large amounts of cold or archival data
- Organizations facing rising SSD/HDD and controller costs
- Environments where data growth is unpredictable
- Teams trying to stretch existing infrastructure
For many teams, this approach delays a SAN refresh by years while giving them immediate breathing room for growth.
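If your array supports tiering to object storage (NetApp ONTAP's FabricPool, for example, can tier cold blocks to Azure Blob), the Azure side of the setup is little more than a storage account and a container. Below is a minimal sketch using the azure-storage-blob SDK; the account URL and container name are hypothetical placeholders, and the array-side configuration still happens in your vendor's own tooling.

```python
# Minimal sketch: create the Azure Blob container that an on-prem array
# (e.g., ONTAP FabricPool) would use as its cold-data tiering target.
# Assumes the storage account already exists and DefaultAzureCredential
# can authenticate (az login, managed identity, etc.). Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://mytieringaccount.blob.core.windows.net"  # placeholder
CONTAINER = "san-cold-tier"                                     # placeholder

blob_service = BlobServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())

# Create the container the tiering engine will write cold blocks into.
container = blob_service.get_container_client(CONTAINER)
if not container.exists():
    container.create_container()
    print(f"Created tiering container: {CONTAINER}")
else:
    print(f"Tiering container already exists: {CONTAINER}")
```

In practice the tiering engine typically authenticates with the storage account's access key or a SAS rather than Azure AD, so check your array vendor's documentation for the exact requirements.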
Moving DR Storage into the Cloud
Moving DR data into the cloud helps you sidestep SAN refreshes because you’re no longer trying to squeeze years of backup copies, replicas, and retention policies onto hardware that was never designed to grow at cloud scale. Instead of buying a second SAN—or expanding the one you already have—your DR footprint shifts to a platform where capacity, durability, and geographic redundancy are already built in. And by moving DR into the cloud, your existing DR hardware can be repurposed for production data.
Why cloud‑based DR takes pressure off your SAN
- No more duplicate hardware — Traditional DR means buying a second SAN just to hold copies of data you hope you never need. Cloud DR replaces that with managed multi‑copy storage across zones or regions.
- Capacity growth stops driving hardware purchases — As production data grows, DR copies grow too. Cloud storage absorbs that growth instantly, so you’re not adding shelves, controllers, or SSDs just to keep up.
- Refresh cycles shift to the cloud provider — SAN refreshes are expensive and unavoidable on-premises. In the cloud, the provider handles hardware lifecycle behind the scenes, so your DR environment is always on modern infrastructure without you buying anything.
- Built‑in durability and geographic protection — Cloud redundancy tiers (like zone‑redundant or geo‑redundant storage) give you protection that would require major infrastructure investment if you tried to build it yourself.
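To make the redundancy point concrete, here is a rough sketch of provisioning a geo-zone-redundant (GZRS) storage account as a landing zone for DR copies, using the azure-mgmt-storage SDK. The subscription ID, resource group, account name, and region are placeholders; swap the SKU (ZRS, GRS, RA-GZRS, and so on) for the level of protection you actually need.

```python
# Sketch: provision a geo-zone-redundant storage account for DR copies.
# Subscription, resource group, account name, and region are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-dr-storage"        # placeholder
ACCOUNT_NAME = "contosodrcopies"        # placeholder, must be globally unique

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.storage_accounts.begin_create(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    {
        "location": "eastus2",
        "kind": "StorageV2",
        # GZRS keeps three copies across availability zones in the primary
        # region plus an asynchronous copy in the paired region.
        "sku": {"name": "Standard_GZRS"},
    },
)
account = poller.result()
print(f"DR storage account ready: {account.name} ({account.sku.name})")
```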
Store Backups in the Cloud
Most enterprise backup solutions have an option to store backups in the cloud. If you’re not already leveraging this feature, it’s a great way to reduce your on-premises footprint.
Moving backups into a cloud tier takes a lot of pressure off your on-premises storage because you’re no longer forcing your SAN to hold years’ worth of data that rarely gets touched. Most restores come from the newest backups, so keep only that “hot” layer on local hardware and let the cloud absorb everything older. That gives you room to breathe, stretches the life of your existing arrays, and breaks the cycle of buying more shelves or controllers just to keep up with retention policies.
Why cloud‑tiered backups feel lighter to manage
- You free up expensive SAN space — Older backups move to low‑cost cloud storage, so your SAN isn’t clogged with data you almost never restore.
- You avoid big hardware purchases — Instead of expanding your array every time retention grows, the cloud simply scales with you.
- You shift from capex to predictable opex — Cloud tiers turn “surprise” storage purchases into steady, usage‑based costs.
- You get built‑in durability — Cloud storage automatically keeps multiple redundant copies, giving you off‑site protection without extra infrastructure.
- You simplify lifecycle management — Policies can automatically move backups as they age, so you’re not manually juggling storage tiers.
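That last point is where most of the day-to-day savings come from: in Azure, the aging rules can be expressed as a blob lifecycle management policy on the storage account that holds your backups. Here is a rough sketch using azure-mgmt-storage; the names, the "backups/" prefix, and the day thresholds are placeholders you would align with your own retention policy.

```python
# Sketch: lifecycle policy that tiers aging backup blobs to Cool, then
# Archive, then deletes them after retention expires. Names, prefix, and
# day thresholds are placeholders; align them with your retention policy.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-backup-storage"    # placeholder
ACCOUNT_NAME = "contosobackups"         # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

policy = {
    "policy": {
        "rules": [
            {
                "enabled": True,
                "name": "age-out-backups",
                "type": "Lifecycle",
                "definition": {
                    "filters": {
                        "blob_types": ["blockBlob"],
                        "prefix_match": ["backups/"],
                    },
                    "actions": {
                        "base_blob": {
                            "tier_to_cool": {"days_after_modification_greater_than": 30},
                            "tier_to_archive": {"days_after_modification_greater_than": 180},
                            "delete": {"days_after_modification_greater_than": 2555},  # ~7 years
                        }
                    },
                },
            }
        ]
    }
}

# "default" is the only management policy name Azure allows per account.
client.management_policies.create_or_update(RESOURCE_GROUP, ACCOUNT_NAME, "default", policy)
print("Lifecycle policy applied.")
```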
Archival, cold, or low‑change datasets
Data that isn’t frequently accessed—archives, backups, compliance records, and historical logs—is often the simplest to migrate because it doesn’t require tight latency or real‑time synchronization. These datasets benefit immediately from cloud durability and low‑cost storage tiers, and they avoid the complexity of moving active, constantly changing workloads.
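For data like this you can often skip the warm tiers entirely and land it straight in the Archive tier. A minimal sketch with azure-storage-blob follows; the account, container, and file names are placeholders, and remember that Archive blobs must be rehydrated (a matter of hours, not milliseconds) before they can be read again.

```python
# Sketch: upload a compliance/archive file directly to the Archive tier.
# Account URL, container, and file name are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, StandardBlobTier

ACCOUNT_URL = "https://contosoarchive.blob.core.windows.net"  # placeholder
CONTAINER = "compliance-records"                              # placeholder
LOCAL_FILE = "audit-logs-2019.tar.gz"                         # placeholder

blob_service = BlobServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
blob = blob_service.get_blob_client(CONTAINER, LOCAL_FILE)

with open(LOCAL_FILE, "rb") as data:
    # standard_blob_tier=ARCHIVE lands the blob in the cheapest tier
    # immediately; reading it later requires rehydration to Hot or Cool.
    blob.upload_blob(data, overwrite=True, standard_blob_tier=StandardBlobTier.ARCHIVE)

print(f"Archived {LOCAL_FILE} to {CONTAINER}")
```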
What to consider next
The biggest differentiator isn’t the data itself but how tightly it’s tied to on‑premises applications. Start with data that isn’t coupled to an on‑premises application, then consider moving applications with large datasets to get the biggest storage savings for your effort.
Once organizations decide to shift some storage responsibility to the cloud, the next question becomes how to do it without disrupting existing workflows or retraining teams.
NetApp Azure Options
Cloud Volumes ONTAP
Cloud Volumes ONTAP on Azure is essentially a way to bring the ONTAP experience you already know into the cloud, so your data behaves the same whether it’s on-premises or in Azure. Instead of refactoring apps or juggling different storage tools, you get a familiar set of features—NFS, SMB, iSCSI, snapshots, replication, and efficiency—running as a software‑defined storage layer on Azure. It gives you the flexibility of cloud infrastructure with the comfort and control of ONTAP’s data services.
What it actually gives you in Azure
- A consistent storage experience — Your apps can use the same protocols and workflows they use on-premises, which makes migrations and hybrid setups feel much smoother.
- Built‑in efficiency — Thin provisioning, dedupe, compression, and automated tiering help keep cloud storage costs in check without you having to constantly tune things.
- Strong data protection — Snapshots, replication, and ransomware‑resilience features come along for the ride, so you don’t lose the safety net you rely on in your datacenter.
- Hybrid mobility — SnapMirror lets you move data back and forth between on-premises ONTAP and Azure, which is great for DR, cloud bursting, or testing workloads without committing to a full migration.
- High availability for real workloads — Databases, business apps, DevOps pipelines, and Kubernetes clusters can all run on CVO with the performance and reliability they expect.
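That hybrid mobility point is usually where teams start: replicate an on-premises volume into CVO with SnapMirror and you have a DR copy (or a migration seed) in Azure without touching the application. Setup is normally done through BlueXP or System Manager, but as a rough illustration of what happens underneath, here is the ONTAP REST API call that creates the relationship. The hostname, credentials, SVM, and volume names are hypothetical, the clusters are assumed to be peered already, and the exact fields are worth verifying against your ONTAP release.

```python
# Rough sketch: create a SnapMirror relationship from an on-prem ONTAP
# volume to a Cloud Volumes ONTAP volume via the ONTAP REST API (issued
# against the destination cluster). Hostname, credentials, SVM and volume
# names are placeholders; cluster and SVM peering are assumed to exist.
import requests

DEST_CLUSTER = "https://cvo-azure.example.com"  # CVO cluster mgmt address (placeholder)
AUTH = ("admin", "<password>")                  # placeholder credentials

relationship = {
    "source": {"path": "onprem_svm:app_data"},       # on-prem SVM:volume (placeholder)
    "destination": {"path": "cvo_svm:app_data_dr"},  # CVO SVM:volume (placeholder)
}

resp = requests.post(
    f"{DEST_CLUSTER}/api/snapmirror/relationships",
    json=relationship,
    auth=AUTH,
    verify=False,  # lab-only; use proper certificates in production
)
resp.raise_for_status()
print("SnapMirror relationship created:", resp.json())
```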
Azure NetApp Files
Azure NetApp Files is an Azure‑native, high‑performance file service that gives you the feel of on-premises enterprise storage without the hardware, making it easy to run demanding workloads in the cloud using the same NFS and SMB protocols you already rely on. It delivers all‑flash performance, sub‑millisecond latency, and multiple performance tiers you can switch between on the fly, so you can match cost and performance as your needs change. It’s designed for everything from home directories and shared file services to databases and HPC, and it supports both Linux and Windows workloads without refactoring.
What makes it easy to work with
- It behaves like the storage you already know — You can lift‑and‑shift apps into Azure without changing how they access data, thanks to full NFS, SMB, and dual‑protocol support.
- Performance is built in — ANF runs on bare‑metal flash inside Azure, giving you on-premises‑level speed for latency‑sensitive workloads.
- You can scale without planning hardware — Volumes scale from 100 GiB to 100 TiB with no downtime, and you can adjust performance tiers instantly.
- Data protection comes with the service — Snapshots, availability zones, and integrated security features help keep data safe without extra tools.
- Price protection with reserved capacity — Capacity can be reserved for one‑ or three‑year terms to lock in prices and protect against potential increases; reservations are available in 100 TiB and 1 PiB increments.
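To show how little provisioning is involved, here is a rough sketch of creating an NFS volume with the azure-mgmt-netapp SDK. It assumes a NetApp account, capacity pool, and delegated subnet already exist; the names, size, and region are placeholders, and resizing or changing service levels later is just another update to the same volume.

```python
# Sketch: create a 4 TiB NFSv3 Azure NetApp Files volume in an existing
# capacity pool. Subscription, resource group, account/pool names, and
# the delegated subnet ID are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-anf"               # placeholder
ACCOUNT = "contoso-anf"                 # placeholder NetApp account
POOL = "premium-pool"                   # placeholder capacity pool
VOLUME = "app-data"                     # placeholder volume name
SUBNET_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-anf"
    "/providers/Microsoft.Network/virtualNetworks/vnet-anf"
    "/subnets/anf-delegated"            # must be delegated to Microsoft.NetApp/volumes
)

client = NetAppManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.volumes.begin_create_or_update(
    RESOURCE_GROUP,
    ACCOUNT,
    POOL,
    VOLUME,
    {
        "location": "eastus2",
        "creation_token": VOLUME,            # becomes the export path
        "service_level": "Premium",          # Standard | Premium | Ultra
        "usage_threshold": 4 * 1024**4,      # volume quota in bytes (4 TiB)
        "subnet_id": SUBNET_ID,
        "protocol_types": ["NFSv3"],
    },
)
volume = poller.result()
print(f"ANF volume ready, mount target(s): {volume.mount_targets}")
```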
Next Steps
As SAN hardware costs continue to rise, the question isn’t whether storage strategies need to change—it’s how quickly organizations can adapt without increasing risk or complexity.
If you’re worried about increasing SAN hardware prices, it’s time to start planning NOW. Reach out to your NetApp Azure seller and/or your Microsoft Azure contact to review the options and find the best fit for your organization.