ZFS is a combined file system and logical volume manager developed by Sun Microsystems. It offers features such as high storage capacity, data protection, data compression, volume management, copy-on-write clones, data integrity checking, and automatic repair.
Features
- ZFS Pooled Storage Model
This file system uses a storage pool model: the pool defines the storage characteristics and acts as a common data store for every file system created from it. All ZFS file systems in a pool share its disk space, so users do not have to decide on file system sizes in advance; each file system grows within the space allotted to the pool. When new storage is added, every file system in the pool can use the additional disk space.
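As a quick illustration, the following commands sketch the pooled storage workflow; the pool name "tank" and the disk names are hypothetical, and exact device names depend on the platform:

    zpool create tank mirror sda sdb    # create a mirrored storage pool named "tank"
    zfs create tank/home                # create a file system; no size has to be specified
    zfs create tank/projects            # both file systems draw from the pool's shared free space
    zpool add tank mirror sdc sdd       # grow the pool; every file system can use the new space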
- Data Integrity Model
It protects on-disk data from damage caused by current spikes, data degradation, phantom writes, bugs in disk firmware, DMA parity errors between the array and server memory or from the driver, driver errors, and so on.
- Simplified Administration
ZFS provides a simple administration model. It simplifies the creation and handling of file systems through property inheritance, a hierarchical file system layout, NFS share semantics, and automatic management of mount points. These features allow administrators to manage file systems without running multiple commands or editing configuration files.
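For example, a property set on a parent dataset is inherited by its children, and mount points and NFS shares are handled by ZFS itself; the dataset names below are hypothetical:

    zfs set compression=lz4 tank/home          # child datasets inherit the compression property automatically
    zfs set mountpoint=/export/home tank/home  # ZFS mounts the file system itself; no /etc/fstab entry is needed
    zfs set sharenfs=on tank/home              # share over NFS without editing export configuration files
    zfs get -r compression tank/home           # show the inherited property across all child datasets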
- Copy-on-Write Transactional Model
The file system implements a copy-on-write transactional object model. All block pointers in the file system contain a 256-bit checksum or 256-bit hash of the target block. ZFS never overwrites blocks containing active data; instead, it allocates a new block and writes the modified data to it, then reads, reallocates, and rewrites any metadata blocks referencing it in the same manner. To reduce the overhead of this process, the file system gathers several updates into transaction groups and uses the ZIL (ZFS intent log) as a write cache when synchronous write semantics are required.
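When synchronous write performance matters, the intent log can be moved to a dedicated fast device; the pool, dataset, and device names below are hypothetical:

    zpool add tank log nvme0n1    # add a separate intent-log (SLOG) device to absorb synchronous writes
    zfs set sync=always tank/db   # force synchronous write semantics on a dataset, e.g. for a database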
- End-to-End Checksums and Self-Healing Data
ZFS verifies all data and metadata using a user-selectable checksum algorithm. Traditional file systems checksum each block in isolation, alongside the block itself, so certain errors such as phantom or misdirected writes can go undetected. ZFS performs checksums at the file system layer, making them transparent to applications and minimizing these shortcomings.
ZFS provides self-healing data and supports storage pools with different levels of data redundancy. When ZFS detects a bad data block, it fetches the correct data from a redundant copy, repairs the bad block, and replaces it.
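The checksum algorithm is a per-dataset property, and a scrub walks the entire pool verifying checksums and repairing damaged blocks from redundant copies; the pool and dataset names are hypothetical:

    zfs set checksum=sha256 tank/home   # select a stronger checksum algorithm for a dataset
    zpool scrub tank                    # verify every block in the pool and self-heal from redundancy
    zpool status -v tank                # report scrub progress and any repaired or unrecoverable errors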
- Unparalleled Scalability
ZFS is a 128-bit file system, which allows up to 256 quadrillion zettabytes of storage. It allocates all metadata dynamically, so it neither requires pre-allocation of inodes nor limits the scalability of the file system at creation time. Directories can contain up to 256 trillion entries, and there is no limit on the number of files in a file system.
- Snapshots and Clones
ZFS's copy-on-write capability allows it to take space-efficient snapshots that consume little additional storage and are quick to capture: the file system tracks only the changes made to the data after the snapshot is taken.
It can also create writable snapshots, or clones, which result in two independent file systems that share a set of blocks. As changes are made to either clone file system, new data blocks are created to reflect those changes.
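A brief sketch of snapshot and clone usage; the dataset and snapshot names are hypothetical:

    zfs snapshot tank/home@monday              # take a space-efficient, read-only snapshot
    zfs rollback tank/home@monday              # roll the file system back to the snapshot
    zfs clone tank/home@monday tank/home-test  # create a writable clone that shares blocks with the snapshot
    zfs promote tank/home-test                 # promote the clone so it no longer depends on its origin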
- Deduplication, Solid-State Storage, and Compression
ZFS supports data deduplication, which requires a large amount of RAM, roughly 1 to 5 GB for every TB of storage, to perform well. Insufficient physical memory or ZFS cache leads to virtual memory thrashing during deduplication, which can lower performance or result in complete memory starvation. Solid-state drives (SSDs) can be used to cache the deduplication tables and speed up deduplication.
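A minimal sketch of enabling deduplication and adding an SSD cache device; the pool, dataset, and device names are hypothetical:

    zfs set dedup=on tank/backups   # enable deduplication on a dataset; the deduplication tables are kept in memory
    zpool add tank cache nvme0n1    # add an SSD as an L2ARC cache device to help hold deduplication tables
    zfs set compression=lz4 tank    # compression is a related space-saving property, also set per dataset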
- Encryption
ZFS embeds encryption into the I/O pipeline, with the encryption policy set at the dataset level. The file system can compress, encrypt, checksum, and deduplicate a block during writes. It also allows users to change the encryption keys at any time without taking the file system offline.
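A minimal sketch of dataset-level encryption as exposed by OpenZFS (native encryption requires OpenZFS 0.8 or later; the dataset name is hypothetical):

    zfs create -o encryption=on -o keyformat=passphrase tank/secure  # create an encrypted dataset, prompting for a passphrase
    zfs change-key tank/secure                                       # change the wrapping key without taking the dataset offline
    zfs get encryption,keystatus tank/secure                         # inspect the encryption policy and key status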