Eventually, as more files are changed, more and more blocks will be assigned from the snapshot reserve. The df -k command will show the amount of space being used within the active file system and in the .snapshot directory.
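On the filer console that looks something like this. The volume name and all of the sizes here are hypothetical; the point is that the .snapshot line reports the snapshot reserve as its own pool, separate from the active file system:

```
filer> df -k /vol/vol3/
Filesystem              kbytes       used      avail  capacity  Mounted on
/vol/vol3/            16777216    4194304   12582912       25%  /vol/vol3/
/vol/vol3/.snapshot    4194304     838860    3355444       20%  /vol/vol3/.snapshot
```

As blocks are assigned from the reserve, the used column of the .snapshot line grows while the active file system's numbers are unaffected.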
There is nothing magical about the 20% number. Sometimes we may want to decrease the percentage allocated to the snap reserve. Sometimes we may want to increase it. It all depends on the rate at which data changes in the volume and how long we want to keep our snapshots available.
Suppose I want to change the snapshot reserve in vol3 to 30% instead of 20%.
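That is a single console command; running snap reserve with no percentage afterwards confirms the change (the k-bytes figure shown assumes a hypothetical 20 GB volume):

```
filer> snap reserve vol3 30
filer> snap reserve vol3
Volume vol3: current snapshot reserve is 30% or 6291456 k-bytes.
```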
Notice the space allocation on vol3 has now changed.
As you can see, it is possible that more than 20% of the blocks within the volume may actually be assigned to the snapshot reserve. This reduces the amount of space available to the active file system. In this case we increased the space allocated to the snapshot reserve. We could just as easily reduce the size of the snapshot reserve.
If these are NAS volumes (used for CIFS or NFS), then the creation and deletion of snapshots is probably being controlled by Data ONTAP’s snapshot scheduler. This is configured either with the snap sched command or from FilerView. FilerView’s screen to modify the parameters for vol3 looks like this:
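From the console, the same schedule can be inspected and changed with snap sched. The three fields are the number of weekly, nightly, and hourly snapshots to keep, with the hourly count optionally followed by @ and the hours at which they are taken. The values below are just an example, not a recommendation:

```
filer> snap sched vol3
Volume vol3: 0 2 6@8,12,16,20
filer> snap sched vol3 6 6 8@8,12,16,20
```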
Notice we can change the percentage of space preallocated for snapshot blocks here as well as with the snap reserve command.
What I’m interested in here are the Number of Scheduled Snapshots to Keep options. Logically, the longer I keep snapshots around, the greater the difference will be between the blocks in the active file system and those in the oldest snapshot. Remember that blocks pointed to by any snapshot cannot be updated (they are read-only) as long as the snapshot exists.
The snapshot scheduler controls when snapshots are taken and how long they are kept. For example, weekly snapshots occur at midnight on Sunday nights. If the scheduler is set to retain the last 6 weekly snapshots, it will delete the oldest weekly snapshot once a seventh is created: the most recent 6 weekly snapshots are always maintained, and older ones are deleted automatically.
The same is true for nightly snapshots (which are taken at midnight every day except Sunday) and hourly snapshots (which are taken at the hours specified). The number scheduled to be kept limits how far back we can go to recover files.
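The retained snapshots appear in snap list under rotating names: weekly.0 is the most recent weekly snapshot, weekly.1 the one before it, and so on for nightly.* and hourly.*. A trimmed, hypothetical listing (the -n flag skips the space-usage calculation):

```
filer> snap list -n vol3
Volume vol3
working...

date          name
------------  --------
Mar 14 00:00  nightly.0
Mar 13 00:00  nightly.1
Mar 12 00:00  weekly.0
Mar 05 00:00  weekly.1
```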
Ideally an equilibrium is reached, with the percentage of blocks allocated to the snapshot reserve adequate for the length of time we want the snapshots to remain available. As snapshots are automatically deleted, blocks that were pointed to only by the deleted snapshot are returned to the free space pool in the active file system.
If more blocks are needed than have been set aside by the snapshot reserve, then they will be taken from the active file system. (This is the default behavior and can be changed.) A user may then see a situation where deleting files does not actually free any space. That is because the deleted files’ blocks are still pointed to by one or more snapshots; a block is returned for reuse only when the last snapshot pointing to it is deleted.
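Because blocks can be shared among several snapshots, deleting a given snapshot may free less space than it appears to hold. Data ONTAP can estimate the actual savings with snap reclaimable; the output wording below is approximate, and the snapshot names and figure are hypothetical:

```
filer> snap reclaimable vol3 weekly.5 weekly.4
Processing (Press Ctrl-C to exit) ...
snap reclaimable: Approximately 409600 Kbytes would be freed.
```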
In extreme situations, it is possible that the active file system may actually run out of blocks and it will be impossible to change files in the volume or add new files. The storage system administrator will have to intervene, either by growing the volume or by deleting old snapshots so that blocks associated with them can be reused.
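Assuming vol3 is a flexible volume, either remedy is a single command; the growth amount and snapshot name here are just examples:

```
filer> vol size vol3 +5g
filer> snap delete vol3 weekly.5
```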
For volumes used to support NAS applications this behavior is usually adequate, and the situation just described should be extremely rare. After the system administrator deletes some snapshots or grows the volume, users can proceed normally. For volumes that contain LUNs the situation is more complex, and we’ll be looking at some additional options that might be useful for those volumes next time.