Rich0's Gentoo Blog

A Random Btrfs Experience


After all the buzz about btrfs and Ubuntu I figured I’d try experimenting with it a little. Aside: I think the hype over Ubuntu is a bit overblown – more power to them if they can pull it off but I suspect that it won’t be more than an option in the end.

I’m really looking forward to btrfs maturing enough for everyday use. I’ve been battling with md and lvm for ages and btrfs has a lot of promise. Some of the things I’m looking forward to are:

  • The resizability and reshapability of Linux md.
  • The ability to create subvolumes, as in lvm.
  • The potential performance boost of copy-on-write (COW).
  • The snapshot capability and performance as compared to lvm.
  • Full barrier support down to the metal. Right now I can't do that, as md doesn't support barriers.
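Most of the items above map onto btrfs commands that already exist. As a sketch (using today's btrfs-progs syntax, which differs from the btrfs-vol/btrfsctl tools of this era; the mount point /mnt/pool is just a placeholder):

```shell
# Create a subvolume -- roughly analogous to an lvm logical volume,
# but without having to pre-allocate space for it.
btrfs subvolume create /mnt/pool/home

# Take a writable snapshot of it; COW makes this near-instant.
btrfs subvolume snapshot /mnt/pool/home /mnt/pool/home-snap

# Grow (or shrink) the filesystem online, with no reshape window.
btrfs filesystem resize +10g /mnt/pool
```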

So, how did it go?

I figured I’d give it a test in a use case similar to how I would first experience it. Bear with me as it sounds messy, but I have a lot of existing data on ext3 on raid5 that I’d want to migrate.

To test out my migration strategy I created two 250MB image files and one 320MB image file (dd if=/dev/zero of=image bs=1M count=250 or count=320). Then I mapped each to a loop device and created a raid5 across them. Then I put an ext3 on that (with 4096-byte blocks – very important), and put some data on it, filling up 90% of the drive. This is what I'm starting with, at 1/1000th scale.
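In command form the rig looks something like this (the loop device numbers and the md device name are illustrative, and losetup/mdadm need root):

```shell
# Three image files standing in for real disks: two 250MB, one 320MB.
dd if=/dev/zero of=disk0.img bs=1M count=250
dd if=/dev/zero of=disk1.img bs=1M count=250
dd if=/dev/zero of=disk2.img bs=1M count=320

# Map each image file to a loop device (needs root).
losetup /dev/loop0 disk0.img
losetup /dev/loop1 disk1.img
losetup /dev/loop2 disk2.img

# RAID5 across the three loops, then ext3 with 4096-byte blocks --
# btrfs-convert only handles filesystems whose block size matches
# the page size, which is why the -b 4096 matters.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/loop0 /dev/loop1 /dev/loop2
mkfs.ext3 -b 4096 /dev/md0
```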

My plan was to convert that to btrfs, then create a 1GB additional drive and add it to the btrfs in multiple device mode, and then start moving drives out of the old array and into the btrfs “array” until I’m free of md and just running btrfs.
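Spelled out as commands, the plan would run roughly as follows (current btrfs-progs syntax; /dev/sdX1 is a placeholder for the new drive, and the whole sequence needs root):

```shell
# 1. Convert the unmounted ext3-on-md filesystem in place.
btrfs-convert /dev/md0

# 2. Mount it and add the new 1GB drive to the btrfs pool.
mount /dev/md0 /mnt/data
btrfs device add /dev/sdX1 /mnt/data

# 3. Spread the existing data across both devices.
btrfs filesystem balance /mnt/data

# 4. Once enough capacity exists outside md, retire the md device
#    from the pool and stop the old array.
btrfs device delete /dev/md0 /mnt/data
mdadm --stop /dev/md0
```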

So, step 1 was to run btrfs-convert on the ext3. That didn't go so well: on my first attempt the conversion aborted with a failed assertion, and I was left with a corrupted image that I couldn't do anything with.

Then I tried again, and this time the conversion went fine and was almost instant. I then mounted that image and it looked great – all my files were there. Then I tried to diff one of them just to see how the conversion went, and that’s where things started going bad.
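The post-conversion check amounts to a couple of commands (the ext2_saved name is what current btrfs-progs uses for the rollback image left by btrfs-convert; the file paths are placeholders):

```shell
mount /dev/md0 /mnt/test

# btrfs-convert leaves the original ext3 metadata behind as a
# read-only image, so the conversion can be rolled back.
ls /mnt/test/ext2_saved/

# Spot-check that file contents survived the conversion.
diff /mnt/test/path/to/file /backup/path/to/file
```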

The diff seemed to be taking a while, so I looked at atop and saw that it wasn't doing anything (no disk I/O, no nothing). So, I tried to kill it, and it wouldn't die. Then I tried deleting the ext3 recovery image that btrfs-convert leaves behind, and that process hung unkillably as well. In the end I had to manually shut down the system (stopping all processes, unmounting everything I could, etc.) and hard-reboot with a few dirty filesystems. No data loss (no surprise – I got just about everything but the hung processes turned off and gave the system plenty of time to sync), but not a good showing for btrfs.

I’ll be honest – I’m not on the latest kernel, and for something experimental like btrfs that makes a big difference. My use case is hardly a simple scenario – I tried just making a plain multiple-device filesystem with it and it worked fine. However, if this had been a real conversion I’d be out half of my data right now – twice.

I still look forward to the promise of btrfs. I’m impressed with how far it has come, and it holds great promise. However, I just can’t see this being production-ready quite yet. At least not without heavy backups (which I can’t afford right now – at least not doing it right).

However, rest assured I’ll be trying this again over the coming months. I really can’t wait to migrate as my current setup is going to be limiting soon.

Written by rich0

May 16, 2010 at 9:03 pm

Posted in linux

8 Responses


  1. Hi.

    Barrier support for raid level 1 has been present in md for quite some time now and w/ 2.6.33 it was also added for level 5. AFAICT lvm has had barrier support also recently completed.

    I’m running a level 5 md raid w/ ext4 and barriers. Works fine… at least better than w/o barriers and write cache off. 😉

    So long
    matthias.

    Matthias Dahl

    May 17, 2010 at 4:49 am

    • That is great to hear! Looking forward to 2.6.33 going stable!

      rich0

      May 17, 2010 at 5:47 pm

  2. FWIW, I’ve been using btrfs on my dev system since 2.6.32. I’m not doing anything flashy with volumes or snapshots, but it’s working great for casual use.

    Donnie Berkholz

    May 17, 2010 at 9:01 am

    • How snappy does btrfs feel compared to ext3 or reiserfs? How about Portage ops — searches, syncs, cvs/git checkouts, etc. Any palpable difference?

      nightmorph

      May 17, 2010 at 10:46 pm

  3. […] A Random Btrfs Experience I still look forward to the promise of btrfs. I’m impressed with how far it has come, and it holds great promise. However, I just can’t see this being production-ready quite yet. At least not without heavy backups (which I can’t afford right now – at least not doing it right). […]

  4. I don’t think that btrfs was designed for such preposterously small devices. Your test case is completely unrealistic, and it’s no wonder that it failed miserably; you can’t do laps in a blow-up baby pool.

    Btrfs is designed for modern and future storage device sizes.

    NobodyYou Know

    June 24, 2011 at 12:16 am

    • If btrfs has a minimum size, then it should fail gracefully – not cause a kernel panic. I could have done the same thing with mdadm, lvm, and ext3 and not had any issues, even though RAID is not typically used with small drives. In any case, test cases involving the unusual are usually the best way to sort out whether something is ready for prime-time. Would you put your valuable data on a filesystem that fails when you do something out of the mainstream with it?

      I could certainly see issues with trying to put filesystems on 800KB worth of devices (not that a panic should result), but 800MB?

      In any case, I’d be curious to repeat the test now that some time has passed. Btrfs clearly is a very new technology, and no doubt will have evolved. Perhaps one day it will even have fsck. 🙂

      rich0

      June 24, 2011 at 5:15 am

