Device Mapper is a kernel-based framework that underpins many advanced volume management technologies on Linux. Docker's `devicemapper` storage driver leverages the thin provisioning and snapshotting capabilities of this framework for image and container management. This article refers to the Device Mapper storage driver as `devicemapper`, and to the kernel framework as Device Mapper.

`devicemapper` support is included in the Linux kernel. However, specific configuration is required to use it with Docker.

The `devicemapper` driver uses block devices dedicated to Docker and operates at the block level, rather than the file level. These devices can be extended by adding physical storage to your Docker host, and they perform better than using a filesystem at the operating system (OS) level.

## Prerequisites

- The `devicemapper` storage driver is a supported storage driver for Docker EE on many OS distributions. See the Product compatibility matrix for details.
- `devicemapper` is also supported on Docker Engine - Community running on CentOS, Fedora, Ubuntu, or Debian.
- `devicemapper` requires the `lvm2` and `device-mapper-persistent-data` packages to be installed.
- Changing the storage driver makes any containers you have already created inaccessible on the local system. Use `docker save` to save containers, and push existing images to Docker Hub or a private repository, so you do not need to recreate them later.
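For example, the prerequisite packages can be installed as follows. This is a sketch assuming a yum-based or apt-based distribution; on Debian-based systems the thin-provisioning tools ship in the `thin-provisioning-tools` package:

```console
# RHEL / CentOS / Fedora
$ sudo yum install -y device-mapper-persistent-data lvm2

# Ubuntu / Debian
$ sudo apt-get install -y thin-provisioning-tools lvm2
```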
## Configure Docker with the `devicemapper` storage driver

### Configure `loop-lvm` mode for testing

This configuration is only appropriate for testing. `loop-lvm` mode makes use of a "loopback" mechanism that allows files on the local disk to be read from and written to as if they were an actual physical disk or block device. However, the addition of the loopback mechanism, and its interaction with the OS filesystem layer, means that IO operations can be slow and resource-intensive. Use of loopback devices can also introduce race conditions. That said, setting up `loop-lvm` mode can help identify basic issues (such as missing user space packages, kernel drivers, and so on) ahead of attempting the more complex setup required to enable `direct-lvm` mode. `loop-lvm` mode should therefore only be used to perform rudimentary testing prior to configuring `direct-lvm`.
Stop Docker, then edit `/etc/docker/daemon.json`. If it does not yet exist, create it. Assuming that the file was empty, add the following contents, then start Docker again. Docker does not start if the `daemon.json` file contains badly-formed JSON.
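Assuming the file was empty, the complete `daemon.json` for this step is just:

```json
{
  "storage-driver": "devicemapper"
}
```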
Verify that the daemon is using the `devicemapper` storage driver. Use the `docker info` command and look for `Storage Driver`.

A host configured this way is running in `loop-lvm` mode, which is not supported on production systems. This is indicated by the fact that the `Data loop file` and `Metadata loop file` are files under `/var/lib/docker/devicemapper`. These are loopback-mounted sparse files. For production systems, see Configure direct-lvm mode for production.
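These sparse files have a large apparent size but consume almost no disk space until data is written. The following sketch illustrates the mechanism on a scratch file (the temp file here is only for illustration; requires GNU coreutils `truncate`, `stat`, and `du`):

```shell
# Create a scratch file and make it sparse, as loop-lvm does for its
# data and metadata files.
f=$(mktemp)
truncate -s 1G "$f"        # apparent size: 1 GiB
stat -c %s "$f"            # prints 1073741824 (apparent size in bytes)
du -k "$f" | cut -f1       # prints 0 (or near 0): no blocks allocated yet
rm "$f"
```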
### Configure `direct-lvm` mode for production

Production hosts using the `devicemapper` storage driver must use `direct-lvm` mode. This mode uses block devices to create the thin pool. This is faster than using loopback devices, uses system resources more efficiently, and block devices can grow as needed. However, more setup is required than in `loop-lvm` mode.

Follow the steps below to configure Docker to use the `devicemapper` storage driver in `direct-lvm` mode.

> **Warning**: Changing the storage driver makes any containers you have already created inaccessible on the local system. Use `docker save` to save containers, and push existing images to Docker Hub or a private repository, so you do not need to recreate them later.

#### Allow Docker to configure direct-lvm mode

With Docker 17.06 and higher, Docker can manage the block device for you, simplifying configuration of `direct-lvm` mode. This is appropriate for fresh Docker setups only. You can only use a single block device. If you need to use multiple block devices, configure direct-lvm mode manually instead. The following new configuration options have been added:

| Option | Description | Required? | Default | Example |
|:---|:---|:---|:---|:---|
| `dm.directlvm_device` | The path to the block device to configure for `direct-lvm`. | Yes | | `dm.directlvm_device='/dev/xvdf'` |
| `dm.thinp_percent` | The percentage of space to use for storage from the passed-in block device. | No | 95 | `dm.thinp_percent=95` |
| `dm.thinp_metapercent` | The percentage of space to use for metadata storage from the passed-in block device. | No | 1 | `dm.thinp_metapercent=1` |
| `dm.thinp_autoextend_threshold` | The threshold for when `lvm` should automatically extend the thin pool, as a percentage of the total storage space. | No | 80 | `dm.thinp_autoextend_threshold=80` |
| `dm.thinp_autoextend_percent` | The percentage to increase the thin pool by when an autoextend is triggered. | No | 20 | `dm.thinp_autoextend_percent=20` |
| `dm.directlvm_device_force` | Whether to format the block device even if a filesystem already exists on it. If set to `false` and a filesystem is present, an error is logged and the filesystem is left intact. | No | false | `dm.directlvm_device_force=true` |
Edit the `daemon.json` file and set the appropriate options, then restart Docker for the changes to take effect. The following `daemon.json` configuration sets all of the options in the table above.
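A sketch of such a `daemon.json`; the values are the defaults from the table, and `dm.directlvm_device` is the example path, which you should adjust for your system:

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
```

Restart Docker for the changes to take effect; Docker invokes the commands to configure the block device for you.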
#### Configure direct-lvm mode manually

The procedure below creates a logical volume configured as a thin pool to use as backing for the storage pool. It assumes that you have a spare block device at `/dev/xvdf` with enough free space to complete the task. The device identifier and volume sizes may be different in your environment and you should substitute your own values throughout the procedure. The procedure also assumes that the Docker daemon is in the `stopped` state.

1. Identify the block device you want to use. The device is located under `/dev/` (such as `/dev/xvdf`) and needs enough free space to store the images and container layers for the workloads that the host runs. A solid state drive is ideal.

2. Stop Docker.

3. Install the following packages:

   - RHEL / CentOS: `device-mapper-persistent-data`, `lvm2`, and all dependencies
   - Ubuntu / Debian: `thin-provisioning-tools`, `lvm2`, and all dependencies

4. Create a physical volume on your block device from step 1, using the `pvcreate` command. Substitute your device name for `/dev/xvdf`.

   > **Warning**: The next few steps are destructive, so be sure that you have specified the correct device!
5. Create a `docker` volume group on the same device, using the `vgcreate` command.

6. Create two logical volumes named `thinpool` and `thinpoolmeta` using the `lvcreate` command. The last parameter specifies the amount of free space to allow for automatic expanding of the data or metadata if space runs low, as a temporary stop-gap. These are the recommended values.

7. Convert the volumes to a thin pool and a storage location for metadata for the thin pool, using the `lvconvert` command.

8. Configure autoextension of thin pools via an `lvm` profile.

9. Specify `thin_pool_autoextend_threshold` and `thin_pool_autoextend_percent` values in the profile. `thin_pool_autoextend_threshold` is the percentage of space used before `lvm` attempts to autoextend the available space (100 = disabled, not recommended). `thin_pool_autoextend_percent` is the amount of space to add to the device when automatically extending (0 = disabled).

10. Apply the LVM profile, using the `lvchange` command.

11. Ensure monitoring of the logical volume is enabled, using the `sudo lvs -o+seg_monitor` command. If the `Monitor` column reports that the volume is `not monitored`, then monitoring needs to be explicitly enabled. Without this step, automatic extension of the logical volume does not occur, regardless of any settings in the applied profile. Double check that monitoring is now enabled by running the `sudo lvs -o+seg_monitor` command a second time. The `Monitor` column should now report that the logical volume is being `monitored`.
12. If you have ever run Docker on this host before, or if `/var/lib/docker/` exists, move it out of the way so that Docker can use the new LVM pool to store the contents of images and containers (for example: `sudo mkdir /var/lib/docker.bk && sudo mv /var/lib/docker/* /var/lib/docker.bk`). If any of the following steps fail and you need to restore, you can remove `/var/lib/docker` and replace it with `/var/lib/docker.bk`.

13. Edit `/etc/docker/daemon.json` and configure the options needed for the `devicemapper` storage driver. If the file was previously empty, it should now contain the following contents:
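    A sketch of those contents, assuming the pool created in the steps above; the two deferred removal/deletion options are optional but commonly enabled with this setup:

    ```json
    {
      "storage-driver": "devicemapper",
      "storage-opts": [
        "dm.thinpooldev=/dev/mapper/docker-thinpool",
        "dm.use_deferred_removal=true",
        "dm.use_deferred_deletion=true"
      ]
    }
    ```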
14. Start Docker.

15. Verify that Docker is using the new configuration, using `docker info`. If Docker is configured correctly, the `Data file` and `Metadata file` fields are blank, and the pool name is `docker-thinpool`.

16. After you have verified that the configuration is correct, you can remove the `/var/lib/docker.bk` directory, which contains the previous configuration.
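Collected in one place, the LVM commands for steps 4 through 11 can be sketched as follows, using the example device `/dev/xvdf` and the recommended values (substitute your own device and sizes):

```console
$ sudo pvcreate /dev/xvdf
$ sudo vgcreate docker /dev/xvdf
$ sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
$ sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
$ sudo lvconvert -y --zero n -c 512K \
    --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
$ sudo lvchange --metadataprofile docker-thinpool docker/thinpool
$ sudo lvchange --monitor y docker/thinpool
```

The profile from steps 8 and 9 lives at `/etc/lvm/profile/docker-thinpool.profile` and contains an `activation` section with `thin_pool_autoextend_threshold=80` and `thin_pool_autoextend_percent=20`.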
### Monitor the thin pool

Do not rely on LVM auto-extension alone. The volume group automatically extends, but the volume can still fill up. You can monitor free space on the volume using `lvs` or `lvs -a`. Consider using a monitoring tool at the OS level, such as Nagios.

To view the LVM logs, you can use `journalctl`, for example `sudo journalctl -fu dm-event.service`.

If you run into repeated problems with the thin pool, you can set the storage option `dm.min_free_space` to a value (representing a percentage) in `/etc/docker/daemon.json`. For instance, setting it to `10` ensures that operations fail with a warning when the free space is at or near 10%. See the storage driver options in the Engine daemon reference.
## Increase capacity on a running device

### Resize a loop-lvm thin pool

The easiest way to resize a `loop-lvm` thin pool is to use the `device_tool` utility, but you can use operating system utilities instead.

#### Use the device_tool utility

A community-contributed script called `device_tool.go` is available in the moby/moby GitHub repository. You can use this tool to resize a `loop-lvm` thin pool, avoiding the long process above. This tool is not guaranteed to work, but you should only be using `loop-lvm` on non-production systems.

If you do not want to use `device_tool`, you can resize the thin pool manually instead.

To use the tool, clone the GitHub repository, change to `contrib/docker-device-tool`, and follow the instructions in the `README.md` to compile the tool. Then run it, for example `./device_tool resize 200GB`.
#### Use operating system utilities

If you do not want to use the `device_tool` utility, you can resize a `loop-lvm` thin pool manually using the following procedure.

In `loop-lvm` mode, a loopback device is used to store the data, and another to store the metadata. `loop-lvm` mode is only supported for testing, because it has significant performance and stability drawbacks.

If you are using `loop-lvm` mode, the output of `docker info` shows file paths for `Data loop file` and `Metadata loop file`. Follow these steps to increase the size of the thin pool:

1. Increase the size of the `data` file to 200 G using the `truncate` command, which is used to increase or decrease the size of a file. Note that decreasing the size is a destructive operation.

2. Reload the thin pool, using a sequence of `dmsetup` commands.
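For example, step 1 with the default loop file location; on your system, use the `Data loop file` path shown by `docker info`:

```console
$ sudo truncate -s 200G /var/lib/docker/devicemapper/devicemapper/data
```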
### Resize a direct-lvm thin pool

To extend a `direct-lvm` thin pool, you need to first attach a new block device to the Docker host, and make note of the name assigned to it by the kernel. In this example, the new block device is `/dev/xvdg`.

Follow this procedure to extend a `direct-lvm` thin pool, substituting your block device and other parameters to suit your situation.

1. Gather information about your volume group. Use the `pvdisplay` command to find the physical block devices currently in use by your thin pool, and the volume group's name.

2. Extend the volume group, using the `vgextend` command with the `VG Name` from the previous step and the name of your new block device.

3. Extend the `docker/thinpool` logical volume. This command uses 100% of the volume right away, without auto-extend. To extend the metadata thinpool instead, use `docker/thinpool_tmeta`.

4. Verify the new thin pool size, using the `Data Space Available` field in the output of `docker info`. If you extended the `docker/thinpool_tmeta` logical volume instead, look for `Metadata Space Available`.
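The commands for the steps above can be sketched as follows (example device `/dev/xvdg`; substitute your own):

```console
$ sudo pvdisplay | grep 'VG Name'
$ sudo vgextend docker /dev/xvdg
$ sudo lvextend -l+100%FREE -n docker/thinpool
```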
### Activate the `devicemapper` after reboot

If you reboot the host and find that the `docker` service failed to start, look for the error "Non existing device". You need to re-activate the logical volumes with `sudo lvchange --activate ay docker/thinpool`.
## How the `devicemapper` storage driver works

> **Warning**: Do not directly manipulate any files or directories within `/var/lib/docker/`. These files and directories are managed by Docker.

Use the `lsblk` command to see the devices and their pools from the operating system's point of view, and use the `mount` command to see the mount point Docker is using.

When you use `devicemapper`, Docker stores image and layer contents in the thinpool, and exposes them to containers by mounting them under subdirectories of `/var/lib/docker/devicemapper/`.
### Image and container layers on-disk

The `/var/lib/docker/devicemapper/metadata/` directory contains metadata about the Devicemapper configuration itself and about each image and container layer that exists. The `devicemapper` storage driver uses snapshots, and this metadata includes information about those snapshots. These files are in JSON format.

The `/var/lib/docker/devicemapper/mnt/` directory contains a mount point for each image and container layer that exists. Image layer mount points are empty, but a container's mount point shows the container's filesystem as it appears from within the container.
### Image layering and sharing

The `devicemapper` storage driver uses dedicated block devices rather than formatted filesystems, and operates on files at the block level for maximum performance during copy-on-write (CoW) operations.

Another feature of `devicemapper` is its use of snapshots (also sometimes called thin devices or virtual devices), which store the differences introduced in each layer as very small, lightweight thin pools. Snapshots provide many benefits:

- Layers that are shared between containers are only stored on disk once, unless they are writable. For instance, if you have 10 different images that are all based on `alpine`, the `alpine` image and all its parent images are only stored once each on disk.
- Snapshots are an implementation of a copy-on-write (CoW) strategy: a given file or directory is only copied to the container's writable layer when it is modified or deleted by that container.
- Because `devicemapper` operates at the block level, multiple blocks in a writable layer can be modified simultaneously.
- Snapshots can be backed up using standard OS-level backup utilities: just make a copy of `/var/lib/docker/devicemapper/`.
When you start Docker with the `devicemapper` storage driver, all objects related to image and container layers are stored in `/var/lib/docker/devicemapper/`, which is backed by one or more block-level devices, either loopback devices (testing only) or physical disks.

- The base device is the lowest-level object: the thin pool itself. You can examine it using `docker info`. It contains a filesystem. This base device is the starting point for every image and container layer. The base device is a Device Mapper implementation detail, rather than a Docker layer.
- Metadata about the base device and each image or container layer is stored in `/var/lib/docker/devicemapper/metadata/` in JSON format. These layers are copy-on-write snapshots, which means that they are empty until they diverge from their parent layers.
- Each container's writable layer is mounted on a mountpoint in `/var/lib/docker/devicemapper/mnt/`. An empty directory exists for each read-only image layer and each stopped container.
Each image layer is a snapshot of the layer below it. The lowest layer of each image is a snapshot of the base device that exists in the pool. When you run a container, it is a snapshot of the image the container is based on. The following example shows a Docker host with two running containers: the first is a `ubuntu` container and the second is a `busybox` container.

## How container reads and writes work with `devicemapper`

### Reading files

With `devicemapper`, reads happen at the block level. Consider the high-level process for reading a single block (`0x44f`) in an example container.

An application makes a read request for block `0x44f` in the container. Because the container is a thin snapshot of an image, it doesn't have the block, but it has a pointer to the block on the nearest parent image where it does exist, and it reads the block from there. The block now exists in the container's memory.
### Writing files

- **Writing new files**: With the `devicemapper` driver, writing new data to a container is accomplished by an allocate-on-demand operation. Each block of the new file is allocated in the container's writable layer and the block is written there.

- **Updating existing files**: The relevant block of the file is read from the nearest layer where it exists. When the container writes the file, only the modified blocks are written to the container's writable layer.

- **Deleting files or directories**: When you delete a file or directory in a container's writable layer, or when an image layer deletes a file that exists in its parent layer, the `devicemapper` storage driver intercepts further read attempts on that file or directory and responds that the file or directory does not exist.

- **Writing and then deleting a file**: If a container writes to a file and later deletes it, all of those operations happen in the container's writable layer. In that case, if you are using `direct-lvm`, the blocks are freed. If you use `loop-lvm`, the blocks may not be freed. This is another reason not to use `loop-lvm` in production.
## Device Mapper and Docker performance

- **Allocate-on-demand performance impact**: The `devicemapper` storage driver uses an allocate-on-demand operation to allocate new blocks from the thin pool into a container's writable layer. Each block is 64KB, so this is the minimum amount of space that is used for a write.

- **Copy-on-write performance impact**: The first time a container modifies a specific block, that block is written to the container's writable layer. Because these writes happen at the level of the block rather than the file, the performance impact is minimized. However, writing a large number of blocks can still negatively impact performance, and the `devicemapper` storage driver may actually perform worse than other storage drivers in this scenario. For write-heavy workloads, you should use data volumes, which bypass the storage driver completely.
### Performance best practices

Keep these things in mind to maximize performance when using the `devicemapper` storage driver:

- **Use `direct-lvm`**: The `loop-lvm` mode is not performant and should never be used in production.
- **Use fast storage**: Solid-state drives (SSDs) provide faster reads and writes than spinning disks.
- **Memory usage**: `devicemapper` uses more memory than some other storage drivers. Each launched container loads one or more copies of its files into memory, depending on how many blocks of the same file are being modified at the same time. Due to the memory pressure, the `devicemapper` storage driver may not be the right choice for certain workloads in high-density use cases.

### Use volumes for write-heavy workloads

Volumes provide the best and most predictable performance for write-heavy workloads, because they bypass the storage driver entirely.

> **Note**: When using `devicemapper` and the `json-file` log driver, the log files generated by a container are still stored in Docker's data root directory, by default `/var/lib/docker`. If your containers generate lots of log messages, this may lead to increased disk usage or the inability to manage your system due to a full disk. You can configure a log driver to store your container logs externally.