ThinProvisioning

Summary

Provide the thin provisioning Device Mapper (DM) target and supporting userspace utilities. This DM target allows a single pool of storage to be the backing store of multiple thinly provisioned volumes. Numerous snapshots (and snapshots of snapshots) may be taken of the thinly provisioned volumes.

Owner

  • Name: Joe Thornber and Mike Snitzer
  • Email:
    • thornber AT redhat DOT com
    • snitzer AT redhat DOT com

Current status

  • Targeted release: Fedora 17
  • Last updated: 2011-12-08
  • Percentage of completion:
    • kernel: 80%
    • device-mapper-persistent-data tools: 50%
    • LVM2 thinp support: 30%

Detailed Description

The main highlight of this implementation, compared to the previous implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume. This simplifies administration and allows the sharing of data between volumes, thus reducing disk usage.
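
For illustration, a minimal dmsetup sketch of the low-level kernel interface follows; the device names, sizes, data block size and low-water mark are placeholder values, not a recommended configuration. It creates one pool (backed by a small metadata device and a large data device) and then two independent thin devices that both allocate from that pool:

    # create the pool: 0 <length> thin-pool <metadata dev> <data dev>
    #                  <data block size (sectors)> <low water mark (blocks)>
    dmsetup create pool \
        --table "0 20971520 thin-pool /dev/sdb1 /dev/sdb2 128 32768"
    # reserve two thin device ids (0 and 1) inside the active pool
    dmsetup message /dev/mapper/pool 0 "create_thin 0"
    dmsetup message /dev/mapper/pool 0 "create_thin 1"
    # activate 1 GiB (2097152 sector) views of both devices; blocks are
    # allocated from the shared pool only as they are written
    dmsetup create thin0 --table "0 2097152 thin /dev/mapper/pool 0"
    dmsetup create thin1 --table "0 2097152 thin /dev/mapper/pool 1"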

Another significant feature is support for an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots ...). The previous implementation of snapshots did this by chaining together lookup tables, and so performance was O(depth). This new implementation uses a single data structure to avoid this degradation with depth. Fragmentation may still be an issue, however, in some scenarios.
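
As a sketch of how recursive snapshots look at the dmsetup level (continuing the hypothetical pool/thin0 devices above; the device ids and sizes are arbitrary):

    # quiesce the origin so the snapshot is taken from a consistent state
    dmsetup suspend /dev/mapper/thin0
    dmsetup message /dev/mapper/pool 0 "create_snap 2 0"   # snapshot of thin device 0
    dmsetup resume /dev/mapper/thin0
    # a snapshot of the snapshot; added depth does not chain lookup tables
    dmsetup message /dev/mapper/pool 0 "create_snap 3 2"
    dmsetup create snap0  --table "0 2097152 thin /dev/mapper/pool 2"
    dmsetup create snap00 --table "0 2097152 thin /dev/mapper/pool 3"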

Metadata is stored on a separate device from data, giving the administrator some freedom, for example to:

  • Improve metadata resilience by storing metadata on a mirrored volume but data on a non-mirrored one.
  • Improve performance by storing the metadata on SSD.
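
Because the thin-pool constructor takes the metadata and data devices as separate arguments (the first two positional parameters in the table line), either placement choice above is simply a matter of which block devices are passed, for example (hypothetical device names):

    # metadata on a small mirrored (or SSD-backed) device,
    # data on a large rotational device
    dmsetup create pool \
        --table "0 209715200 thin-pool /dev/md/poolmeta /dev/sdc1 128 32768"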

Benefit to Fedora

Scalable snapshots of thinly provisioned volumes may be used as the foundation of compelling virtualization and/or cloud services. Fedora would be positioned to be the first distribution to provide this unique advance in Linux block storage.

Scope

The bulk of the change is in the kernel (localized to the DM layer), but userspace tools for dumping, restoring, and repairing the metadata are also under development. These tools will be provided in a new 'device-mapper-persistent-data' package. In addition, the lvm2 package will be updated to ease configuration and management of thinly provisioned volumes and their associated snapshots.
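
As an illustration of the intended userspace tooling, a hedged sketch follows; the package is still in development, so the exact tool names and options used here (thin_check, thin_dump, thin_restore) and the metadata device path are assumptions:

    # check the pool's metadata device for consistency (pool must be inactive)
    thin_check /dev/sdb1
    # dump the metadata to XML, and restore it back from that XML
    thin_dump /dev/sdb1 > pool-metadata.xml
    thin_restore -i pool-metadata.xml -o /dev/sdb1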

How To Test

A comprehensive test suite has been developed to verify that the kernel code works as expected (the suite depends on ruby, dt, and dmsetup).

Any additional IO workloads (or benchmarks that model real workloads) that the community has an interest in would be welcome additions to the testing effort. Data integrity is of utmost importance, so all tests that increase confidence in the feature are encouraged.
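
For example, a very small manual integrity check (using the hypothetical thin0 device sketched above) could be:

    # write a known 64 MiB pattern to the thin volume, bypassing the page cache
    dd if=/dev/urandom of=pattern.bin bs=1M count=64
    dd if=pattern.bin of=/dev/mapper/thin0 bs=1M oflag=direct
    # read it back and compare; any difference indicates a data integrity bug
    dd if=/dev/mapper/thin0 bs=1M count=64 iflag=direct | cmp - pattern.bin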

See the Documentation section for pointers to "how to" style usage/test guidance.

User Experience

Users will create a shared pool of storage that will host all thinly provisioned volumes and their associated snapshots. In contrast to the old dm-snapshot implementation, the user will not need to manage or monitor the free space of N separate snapshot volumes; storage for thin and snapshot volumes is allocated on demand from the shared backing pool.
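
In practice that means only the pool device needs watching, for example (pool name as in the sketches above):

    # the status line reports <used>/<total> metadata blocks and
    # <used>/<total> data blocks for the whole pool
    dmsetup status pool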

Dependencies

No other packages depend on this feature (and vice versa). If the kernel support is not ready, the associated lvm2 thinp code, if included in lvm2, will error out accordingly.

Contingency Plan

None necessary; no other packages or capabilities will depend on this feature.

Documentation

See documentation that will be in the kernel tree:

  • persistent-data -- some details on the kernel library that enables storing metadata for DM targets (block and transaction manager, data structures, etc).

Release Notes

Comments and Discussion