In my previous post, I talked about BTRFS, a modern and exciting filesystem for Linux. In this post, I'm going to give you a quick walkthrough of what you can do with it.
Create a new BTRFS filesystem
In my lab, I’ve created a CentOS 7 virtual machine with 4 disks:
[root@linux-repo ~]# lsscsi -s
[2:0:0:0]    disk    VMware   Virtual disk    1.0   /dev/sda   17.1GB
[2:0:1:0]    disk    VMware   Virtual disk    1.0   /dev/sdb   75.1GB
[2:0:2:0]    disk    VMware   Virtual disk    1.0   /dev/sdc   75.1GB
[2:0:3:0]    disk    VMware   Virtual disk    1.0   /dev/sdd   75.1GB
/dev/sda hosts the operating system, while the three 75 GB disks will be dedicated to BTRFS. Creating a filesystem that spans all of them takes a single command:

mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd
[root@linux-repo ~]# mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd
Btrfs v3.16.2
See http://btrfs.wiki.kernel.org for more information.

Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
adding device /dev/sdc id 2
adding device /dev/sdd id 3
fs created label (null) on /dev/sdb
        nodesize 16384 leafsize 16384 sectorsize 4096 size 210.00GiB
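Before mounting, the mountpoint has to exist. A small preparation sketch (the mkdir and the sanity check are not from the original run, they just match the paths used in this post):

mkdir -p /mnt/btrfs              # create the mountpoint if it is not there yet
btrfs filesystem show /dev/sdb   # optional: list the devices that joined the new filesystem

The filesystem can then be mounted through any of its member devices: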
mount /dev/sdb /mnt/btrfs/
[root@linux-repo ~]# df -h /dev/sdb
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        210G   18M  207G   1% /mnt/btrfs
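If the filesystem should come back automatically after a reboot, an /etc/fstab entry can be added. A minimal sketch, assuming you mount by filesystem UUID; the placeholder below is not a real value, use the UUID reported by blkid /dev/sdb or btrfs fi show:

# Hypothetical persistence step: replace <filesystem-uuid> with your own UUID
echo 'UUID=<filesystem-uuid>  /mnt/btrfs  btrfs  defaults  0 0' >> /etc/fstab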
To see how the space is actually allocated across the devices, we can use btrfs fi df. By default, with multiple devices, data is striped across them (RAID0) while metadata is mirrored (RAID1):

# btrfs fi df /mnt/btrfs/
Data, RAID0: total=3.00GiB, used=1.00MiB
Data, single: total=8.00MiB, used=0.00
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=1.00GiB, used=112.00KiB
Metadata, single: total=8.00MiB, used=0.00
GlobalReserve, single: total=16.00MiB, used=0.00
To protect the data as well, we can convert it to RAID1 with a balance and the convert filter:

[root@linux-repo ~]# btrfs fi balance start -dconvert=raid1 /mnt/btrfs
Done, had to relocate 2 out of 6 chunks
[root@linux-repo ~]# btrfs fi df /mnt/btrfs/
Data, RAID1: total=2.00GiB, used=512.00KiB
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=1.00GiB, used=112.00KiB
Metadata, single: total=8.00MiB, used=0.00
GlobalReserve, single: total=16.00MiB, used=0.00
Now Data is also configured as RAID1. If we check the mountpoint again, the reported size and free space have shrunk accordingly, since every block is now stored twice:
[root@linux-repo ~]# df -h /dev/sdb
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        105G   18M   70G   1% /mnt/btrfs
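For completeness, the same convert filter mechanism also works on metadata. Metadata was already RAID1 by default here, so the following is only a sketch of the general form, not a step from this walkthrough:

# -dconvert changes the data profile, -mconvert the metadata profile
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs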
Add another device
Now, if at some point we run out of space, BTRFS can also expand the filesystem dynamically. By the way, there's a nice way to fill a filesystem for testing: I've found this page with a handy bash script that generates multiple files of random size, driven by a few parameters at the top of the script. I'm copying the script here in case the page disappears in the future. Be careful with the MAXSIZE parameter: it is expressed in bytes, so you want to use a large value, otherwise it will take ages to fill the disk. In my modified version, each file is up to 100 MB.
#!/bin/bash
# Created by Ben Okopnik on Wed Jul 16 18:04:33 EDT 2008

######## User settings ############
MAXDIRS=5
MAXDEPTH=2
MAXFILES=100000
MAXSIZE=100000000
######## End of user settings ############

# How deep in the file system are we now?
TOP=`pwd|tr -cd '/'|wc -c`

populate() {
    cd $1
    curdir=$PWD

    files=$(($RANDOM*$MAXFILES/32767))
    for n in `seq $files`
    do
        f=`mktemp XXXXXX`
        size=$(($RANDOM*$MAXSIZE/32767))
        head -c $size /dev/urandom > $f
    done

    depth=`pwd|tr -cd '/'|wc -c`
    if [ $(($depth-$TOP)) -ge $MAXDEPTH ]
    then
        return
    fi

    unset dirlist
    dirs=$(($RANDOM*$MAXDIRS/32767))
    for n in `seq $dirs`
    do
        d=`mktemp -d XXXXXX`
        dirlist="$dirlist${dirlist:+ }$PWD/$d"
    done

    for dir in $dirlist
    do
        populate "$dir"
    done
}

populate $PWD
We just need to save this script on the BTRFS partition and run it repeatedly until the filesystem is completely filled.
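A minimal wrapper to automate the repeated runs, assuming the script above was saved as fill.sh on the mountpoint (the file name and the free-space threshold are my own choices, not from the original page):

# Keep running the generator until df reports (almost) no space left
cd /mnt/btrfs
while [ "$(df -P . | awk 'NR==2 {print $4}')" -gt 1024 ]; do
    bash fill.sh
done

Once the filesystem runs out of space, head starts reporting write errors: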
head: write error: No space left on device
head: write error
At this point df confirms that the partition is completely full:
[root@linux-repo btrfs]# df -h /dev/sdb
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        105G  105G   64K 100% /mnt/btrfs
Now, to fix the problem and regain some free space, the first step is to add a new virtual disk to the virtual machine.
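On a VMware guest, the new disk can usually be detected without a reboot by rescanning the SCSI hosts; a common CentOS 7 approach (which host the disk lands on may differ on your system):

# Ask every SCSI host adapter to scan for newly attached devices
for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$scan"
done

Once the guest sees the new disk, it shows up as /dev/sde: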
[root@linux-repo ~]# lsscsi -s
[2:0:0:0]    disk    VMware   Virtual disk    1.0   /dev/sda   17.1GB
[2:0:1:0]    disk    VMware   Virtual disk    1.0   /dev/sdb   75.1GB
[2:0:2:0]    disk    VMware   Virtual disk    1.0   /dev/sdc   75.1GB
[2:0:3:0]    disk    VMware   Virtual disk    1.0   /dev/sdd   75.1GB
[2:0:4:0]    disk    VMware   Virtual disk    1.0   /dev/sde   75.1GB
Then, we add /dev/sde to the btrfs filesystem:
[root@linux-repo ~]# btrfs device add /dev/sde /mnt/btrfs/
The disk is now part of the btrfs tree:
[root@linux-repo btrfs]# btrfs fi show
Label: none  uuid: 421171a1-bb75-40a4-9e48-527be91dc143
        Total devices 4 FS bytes used 104.14GiB
        devid    1 size 70.00GiB used 70.00GiB path /dev/sdb
        devid    2 size 70.00GiB used 70.00GiB path /dev/sdc
        devid    3 size 70.00GiB used 70.00GiB path /dev/sdd
        devid    4 size 70.00GiB used 6.00MiB path /dev/sde

Btrfs v3.16.2
The new disk makes the filesystem bigger, but df still reports it as 100% used: the existing chunks all live on the original three devices.

[root@linux-repo btrfs]# df -h /dev/sdb
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        140G  105G  5.9M 100% /mnt/btrfs
A balance redistributes the chunks across all the devices. To free some space quickly, we can start with a filtered balance that only relocates data chunks which are at most 5% full:

btrfs fi balance start -dusage=5 /mnt/btrfs/
A running balance can be monitored from another shell:

[root@linux-repo ~]# while :; do btrfs balance status -v /mnt/btrfs; sleep 60; done
Balance on '/mnt/btrfs' is running
10 out of about 77 chunks balanced (16 considered), 87% left
Dumping filters: flags 0x1, state 0x1, force is off
  DATA (flags 0x2): balancing, usage=100
After the quick pass, a full balance of the data chunks (usage up to 100%, i.e. all of them) redistributes everything across the four disks:

[root@linux-repo btrfs]# btrfs balance start -v -dusage=100 /mnt/btrfs
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=100
Done, had to relocate 77 out of 109 chunks
When the balance completes, the allocation is spread almost evenly across the four devices:

[root@linux-repo btrfs]# btrfs fi show
Label: none  uuid: 421171a1-bb75-40a4-9e48-527be91dc143
        Total devices 4 FS bytes used 98.44GiB
        devid    1 size 70.00GiB used 50.99GiB path /dev/sdb
        devid    2 size 70.00GiB used 49.00GiB path /dev/sdc
        devid    3 size 70.00GiB used 49.01GiB path /dev/sdd
        devid    4 size 70.00GiB used 50.99GiB path /dev/sde

Btrfs v3.16.2
Final notes
These are just a few examples of what you can do with BTRFS, but the filesystem and its tools allow for many more operations.
There are plenty of tutorials all over the Internet, and the best way to learn more about BTRFS is to build a small virtual machine like I did and play with it.