One additional note about the updates to TR-3633: It includes a section on free space recommendations. How much free space do you need on a storage system?
We recommend keeping 10% free space on an all-Flash system used with databases. This is mostly about the risk of filling the system up entirely. From a performance point of view, you could probably run at 99% full without problems. The problem is that sooner or later someone will create a new datafile or add a LUN to an ASM diskgroup and *POW* everything crashes because there is no more room for writes.
We set 10% free space as a conservative threshold. Even 95% full would probably be fine for 99% of database footprints, but somewhere out there is a system with really high IO, lots of CIFS and NFS activity, complex LDAP work, and lots of SnapVault and other activity that consumes CPU. Add the small extra overhead of managing a system that is 95% full, and maybe possibly perhaps that leads to a performance complaint. Maybe. Hold the line at 90% and there should be no issues.
With spinning disk, we set the threshold at 85% based on practical experience. Virtually no workload runs into utilization-related issues below 85%. On rare occasions we've seen evidence in the ONTAP statistics that utilization approaching 90% led to problems. It depends on the workload, of course. I used to work for Oracle, and we had systems that were 98% full with no complaints because the workload was close to 100% random reads. That type of IO isn't really affected by utilization.
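If you wanted to turn these thresholds into an alert, the logic is trivial. Here's a minimal sketch; this is not an ONTAP API, and the function name and inputs are my own invention for illustration:

```python
# Hypothetical free-space check based on the thresholds discussed above.
# You'd feed it capacity numbers from whatever monitoring source you use.

FLASH_MAX_USED = 0.90  # all-Flash: keep 10% free
DISK_MAX_USED = 0.85   # spinning disk: keep 15% free


def needs_attention(used_bytes: int, total_bytes: int, is_flash: bool) -> bool:
    """Return True if utilization has crossed the recommended threshold."""
    limit = FLASH_MAX_USED if is_flash else DISK_MAX_USED
    return used_bytes / total_bytes > limit
```

For example, an all-Flash aggregate at 91% used would trip the check, while a spinning-disk aggregate at 84% used would not.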