Rich0's Gentoo Blog

An Appeal to Devs – Please Use News

with 8 comments

Well, I spent half of today rebuilding my system, and upgrading mysql.

I figured I’d use the spare time that running revdep-rebuild afforded me to put out a general plea for developers to make use of the news feature in portage.

Upgrading to mysql 5.1 requires doing a full dump of your databases, some manual cleanup, an upgrade, and then some manual restore steps. Oh, and that dump has to be done BEFORE the upgrade or you end up having to revert back to 5.0 (which I ended up doing). Usually mysql upgrades are relatively painless, but jumps between major versions (0.1 level) are often not.
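A sketch of the dump-first sequence, assuming a running 5.0 server and root database credentials (filenames are illustrative):

```shell
# BEFORE emerging mysql 5.1: dump everything from the running 5.0 server
mysqldump --all-databases --opt -u root -p > /root/all-databases.sql

# ...then emerge the upgrade, start the new server, and restore:
mysql -u root -p < /root/all-databases.sql
mysql_upgrade -u root -p   # repairs/updates the system tables for 5.1
```

The key point is the ordering: once 5.1 has touched the data directory, a clean dump from 5.0 is no longer possible without reverting.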

The upgrade also breaks anything that links to libmysql, which is quite a bit on a system that runs any number of services (mail, mythtv, ulog, etc).
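Cleaning up the aftermath is what revdep-rebuild is for; a sketch, where the exact soname version is illustrative (check what your binaries actually link against):

```shell
# find and rebuild every package still linked against the old library
revdep-rebuild --library 'libmysqlclient.so.15'
```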

It might have been nice if a news item were published a day or two before stabilizing mysql 5.1 so that users might have some advance warning and could plan accordingly.

Now, this upgrade didn’t rise to the level of some of the past breakages that actually were very painful to recover from and could result in unbootable systems/etc. Still, it never hurts to give users notice. The beauty of news items is that they only pester users who will actually be impacted by them. I don’t think anybody running mysql would mind a reminder that an upcoming upgrade requires careful planning – this is far more relevant to users than half the stuff we put in elogs/etc.

On the other hand, I do appreciate the mysql upgrade guide on the gentoo website (might not hurt to update it a tiny bit), and Peter Davies’s blog entry from 1.5 years ago was very helpful. If these had been pointed out before stabilizing the build, the stable experience would have been a little smoother.

Written by rich0

September 4, 2010 at 12:58 pm

Posted in gentoo

Control Over Application Distribution

leave a comment »

I was giving some thought to something that flameeyes wrote regarding quality control and application distribution, and rather than a condensed comment I thought I’d elaborate a little on my thoughts.

Before reading on, I’d encourage you to read what he wrote, as I think he gets a lot of things right.

However, where I’d like to add something is the distinction between providing a complete platform vs. providing a particular user experience built on a platform. What is the difference? Well, let’s take Android as an easy example.

Android is a platform, which is open source, although developed arguably in a less than open manner. The Google branded phones are a particular user experience built on the Android platform. There is a certain tendency for users to confuse the two, which is what leads to shouts of “foul” when Google does something to their Market.

The Google Market is not part of Android, so in a sense their control over the Market is part of improving the user experience, and doesn’t reflect a lack of openness on the platform.

The problem with this is that if you look at the platform as ONLY being Android, then the platform turns out to be fairly lacking. Android actually has no package distribution and management system at all. That means that absent the Market all you have is some odd 3rd party clones of the market or the ability to do a one-time install of apks from a website/etc, none of which are really filling the need for a package manager.

How do other platforms handle this? Well, let’s look at Ubuntu, which delivers both a platform, and a default user experience built upon it. In their case, the user experience is really nothing more than a particular default configuration of the platform. In Ubuntu (and most popular linux distros, certainly including Gentoo) the package manager is part of the platform – not the experience. The package manager is open source, and while Ubuntu controls access to their repository, they do not control the package manager. If users want to use another repository (or create their own) they need only add a URL to their package manager, and the new repository gets seamlessly merged into the package database – perhaps with even greater priority than Ubuntu’s official repository if so configured.
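As a sketch of how low that barrier is on the apt side (the repository URL here is hypothetical):

```shell
# add a third-party repository and refresh the package database
# (ppa.example.org is a made-up URL for illustration)
echo 'deb http://ppa.example.org/ubuntu lucid main' \
    >> /etc/apt/sources.list.d/example.list
apt-get update
# packages from the new repository now resolve through the same
# package manager, and pinning in /etc/apt/preferences can even
# give them priority over the official repository
```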

If Google’s Market operated in the same manner, then it would be part of the platform, and the experience is the quality assurance they provide on top of it. I think we’d see fewer complaints in this case. The problem is that Google does not allow users to configure their Market to include apps published from alternate sources, which means that, since Android doesn’t provide a package manager, users effectively have no way to address this capability gap.

Then if you look at the fact that many phone distributors disable parts of the platform, such as the ability to install apps via sources other than the market, you compound the issue.

I see the situation with mozilla in the same way. As long as a mozilla product user can install an extension from any number of sources and receive automatic updates of this extension, then I have no issue with mozilla providing a default experience that has a level of QA. If, on the other hand, mozilla designs their products so that you can only install extensions from their site, or only extensions from their site receive automatic updates/etc, then they’re essentially limiting their platform to intentionally constrain users to have a particular experience.

This is a debate that has also raged on the Gentoo mailing lists. Different people have different attitudes towards QA, and as a result we have a plethora of overlays in Gentoo that provide levels of QA that are different from the official policy. This has the downside of fragmenting development work, and the upside of taking advantage of the flexibility of the platform.
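In Gentoo that flexibility looks something like this – a sketch using layman (the overlay name is just an example of a community overlay):

```shell
emerge app-portage/layman    # the overlay manager
layman -L                    # list the available community overlays
layman -a sunrise            # add one; its ebuilds join the tree view
```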

What do you think? What is the best way to provide the best of both worlds? How can a platform provide a “just right” level of QA filtering appropriate for every end user?

Written by rich0

July 20, 2010 at 10:57 am

Posted in linux

EC2 Custom Kernels

with 28 comments

One minor issue with EC2 is that they supply the kernel, and that already caused difficulties with my first EC2 tutorial – the image I created doesn’t let you create a new snapshot from a running image since the EC2 kernel lacks loopback support, and I didn’t supply a matching kernel module.

Amazon has a nice guide on how to do it – here is a gentoo-specific one.
Read the rest of this entry »

Written by rich0

July 18, 2010 at 7:57 am

Posted in gentoo

A Google Rant

leave a comment »

I love what Google has been doing, and they’ve made huge contributions to FOSS. However, I have to join the chorus of those who are concerned with their lack of distro-friendliness.

The start of my saga was Gentoo bug 320407. Apparently Google re-bundled swt in their android SDK, and the version they re-bundled breaks sometimes.

So, the solution is to not install swt, and patch their android script so that it uses the system library. I still have to figure out which version of swt they re-bundled so that I can try to match it. Maybe that won’t be much work.
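The patch itself is the easy part; here’s a hypothetical sketch of the kind of substitution involved, using a stand-in for the SDK’s android launcher script (the paths, the variable name, and the system swt location are all made up for illustration):

```shell
# create a stand-in for the SDK's 'android' launcher script
mkdir -p /tmp/sdk/tools
cat > /tmp/sdk/tools/android <<'EOF'
swt_path="$SDK_ROOT/tools/lib/x86_64"
EOF

# point it at the system-wide swt install instead of the bundled copy
sed -i 's|\$SDK_ROOT/tools/lib/x86_64|/usr/lib64/swt-3.5/lib|' /tmp/sdk/tools/android

grep 'swt_path=' /tmp/sdk/tools/android
# -> swt_path="/usr/lib64/swt-3.5/lib"
```

The hard part is everything around it: matching the bundled version, and re-checking the match on every SDK release.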

Then I need to look at all those other libraries and see which of those can go. I’ll need to patch in their paths, and I’ll need to figure out which upstream versions they re-bundled so that I can set the correct dependencies. Maybe each of those won’t be much work. Maybe some poor user will get burned when it turns out that they modified one of them and I miss it in testing.

Oh, and every time they do a new release they’re not going to tell me if they upgraded one of those bundled libs to a newer API/etc, so maybe if I’m lucky I’ll spot problems during testing and not burn users. Maybe that isn’t too much work either.

Maybe each of these things won’t be much work, but this is already sounding like a royal pain to me. It is also a recipe for end-user problems.

Let me pick my next favorite Google package – chromium. I have a chromium upgrade pending that I’ve been postponing. Building and installing chromium takes hours on my system (an old Athlon 64 3200+). Actually, building chromium probably isn’t the problem – it is building the other half-gigabyte of re-bundled dependencies that get rebuilt every time I upgrade chromium, even though I already have most of them on my system (and if I didn’t the package manager would take care of that for me – ONCE).

My hat is off to the chromium maintainers because they’ve done a good job managing it, and I understand that they’re trying to strip out the embedded libs. However, the project facing them makes my android headaches seem like a trifle.

Google – just link against system libraries and list them as dependencies! If you want to have an alternate all-in-one package for those without package managers, feel free – other projects do it. However, if Mozilla can play nice with distros, you can do it too.

All that said, I have no objections to embedding contributed libraries in the sdk itself – the part used to build and test android apps. In this case app developers need to build and test their apps against the libraries that will be on target devices and not their development workstation. Since no code will run natively (except perhaps on an emulator) there aren’t really the usual compatibility and security issues associated with this.

Written by rich0

May 26, 2010 at 9:53 am

Posted in gentoo

A Random Btrfs Experience

with 8 comments

After all the buzz about btrfs and Ubuntu I figured I’d try experimenting with it a little. Aside: I think the hype over Ubuntu is a bit overblown – more power to them if they can pull it off but I suspect that it won’t be more than an option in the end.

I’m really looking forward to btrfs maturing enough for everyday use. I’ve been battling with md and lvm for ages and btrfs has a lot of promise. Some of the things I’m looking forward to are:

  • The resizeability and reshapeability of linux md.
  • The ability to create subvolumes as in lvm.
  • The potential performance boost of COW.
  • The snapshot ability and performance as compared to lvm.
  • Full barrier support down to the metal. Right now I can’t do that as md doesn’t support barriers.
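For contrast, most of the items above each need their own tool today; in btrfs they’re all a one-liner (a sketch with illustrative device names, using btrfs-progs syntax):

```shell
mkfs.btrfs /dev/sdb /dev/sdc               # multi-device fs, no md layer
mount /dev/sdb /mnt/btr
btrfs subvolume create /mnt/btr/home       # lvm-style subvolumes
btrfs subvolume snapshot /mnt/btr/home /mnt/btr/home-snap
btrfs filesystem resize +10g /mnt/btr      # online grow
```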

So, how did it go?

Read the rest of this entry »

Written by rich0

May 16, 2010 at 9:03 pm

Posted in linux

Gentoo on EC2 From Scratch

with 30 comments

I’ve been beginning to tinker with Amazon EC2 and I figured that many might benefit from a Gentoo-from-scratch recipe. There aren’t too many gotchas. Credit is due to this blog for providing a good non-Gentoo-specific overview.

Before trying any of this, you need to be at least a little familiar with EC2. They have a great getting started walkthrough here. Once you know how to start instances and connect to them you’re halfway there. You’ll also need to set up an S3 account and have an access key and secret key to store your images. On your gentoo box install ec2-ami-tools and ec2-api-tools.

The biggest issue we’re going to have to deal with is that you can’t supply your own kernel. The two issues this creates are udev compatibility and kernel modules. The former is dealt with by setting package.mask, and the latter by hunting down module tarballs online. The modules are fairly optional unless you want to bundle a running image (this requires loop support, which isn’t built in the EC2 kernel).

You might be tempted to start from an existing EC2 AMI. If you find a good one, that isn’t a bad idea, but most of them are VERY out of date. Updating a gentoo install with a pre-EAPI portage and glibc/gcc versions that aren’t even in the tree any longer will be painful. Creating one from scratch isn’t actually that hard.

So, without further ado, here is the recipe (the steps generally follow the Gentoo handbook, so be sure to follow along in that):

  1. Get yourself a stage3 and portage snapshot per the handbook. x86 or amd64 is fine, but be sure to check the Amazon EC2 product page to see which of their offerings are 32/64-bit. This guide is written for 64-bit.
  2. Create yourself a disk image with dd if=/dev/zero of=image.fs bs=1M count=5000 – feel free to tailor the size to your needs but mind the EC2 root filesystem limitations unless you want to run it on elastic storage. The uploaded image will be compressed so you won’t be paying for any unused space in your image.
  3. Point a loopback at your image, format it with a supported filesystem of your choice, and mount it.
  4. Extract the stage3 into your mount. Extract your portage snapshot as well.
  5. Tailor your make.conf as desired – set mirrors/etc. Copy your resolv.conf so that you can chroot.
  6. Mount /proc and /dev per the handbook. If you’re going to install screen bind mount /dev/pts as well.
  7. Chroot into your new environment. Run env-update and source /etc/profile. Do an emerge --sync.
  8. Select a profile. Honestly, for most EC2 applications you’re going to want the default 10.0 profile, but it is up to you.
  9. Edit your locales and run locale-gen – sooner or later you’re going to have to update glibc so no sense taking all day to do it.
  10. Set your timezone – I believe EC2 uses GMT but let me know if I’m wrong on that.
  11. Set up your fstab (this is for 64-bit – see the EC2 docs for more details):
    /dev/sda1 / ext3 defaults 1 1
    /dev/sdb /mnt ext3 defaults 0 0
    none /dev/pts devpts gid=5,mode=620 0 0
    none /dev/shm tmpfs defaults 0 0
    none /proc proc defaults 0 0
    none /sys sysfs defaults 0 0
  12. Set net.eth0 to start in the default runlevel.
  13. Set a root password. Create and properly set permissions on /root/.ssh/authorized_keys and put anything appropriate in there.
  14. Emerge and set to run syslog-ng.
  15. If you need cron install the cron daemon of your choice. Keep in mind that EC2 instances are volatile so cron is probably less useful than it normally would be.
  16. Add >=sys-fs/udev-125 to your package.mask file.
  17. Populate your world file. A dhcp client like dhcpcd is essential; everything else is optional.
  18. Do an emerge -au world to install your software; it will downgrade udev, and this is a good thing.
  19. Update your config files (dispatch-conf or etc-update).
  20. Remove any unnecessary tarballs, configure user accounts if needed, and do any application setup based upon the purpose of your system.
  21. Add the following to your local.start (credits to whoever created the ami I ripped this out of – I’ll be happy to post them if you identify yourself – as far as I can tell ami owners can’t be identified):

    [ ! -e /root ] && cp -r /etc/skel /root
    if [ ! -d /root/.ssh ] ; then
        mkdir -p /root/.ssh
        chmod 700 /root/.ssh
        # fetch this instance's public key from the EC2 metadata service
        curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/my-key
        if [ $? -eq 0 ] ; then
            cat /tmp/my-key >> /root/.ssh/authorized_keys
            chmod 600 /root/.ssh/authorized_keys
            rm /tmp/my-key
        fi
    fi
    killall nash-hotplug
  22. Configure sshd to run at startup, and edit your sshd config to allow root to login.
  23. Exit your chroot, umount anything mounted inside of it.
  24. Clean up tmp, var/tmp, and usr/portage/distfiles, and any other messes you have made. I suspect that to compress the image fully you probably need to zero the free space (dd if=/dev/zero of=file-inside-filesystem bs=1M count=5000 ; rm file-inside-filesystem).
  25. Umount your image and delete any loops you created. Congratulations, you now have a raw image file suitable for EC2. Now we just need to bundle, upload, and register it.
  26. Bundle your image with:
    mkdir out
    ec2-bundle-image --image path-to-image --prefix give-it-a-name --cert path-to-cert-file --privatekey path-to-pk-file --user youramazonaccountnumber --destination out/ --arch x86_64
  27. Upload your image with:
    ec2-upload-bundle --manifest out/prefix.manifest.xml --bucket S3-bucket-name --access-key S3accesskeyhere --secret-key S3secretkeyhere
  28. Register your image with:
    ec2-register S3-bucket-name/prefix.manifest.xml
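
Steps 2 through 7 above can be sketched as follows (run as root; filenames, sizes, and mountpoints are illustrative):

```shell
dd if=/dev/zero of=image.fs bs=1M count=5000
mkfs.ext3 -F image.fs                  # format the image file directly
mkdir -p /mnt/gentoo
mount -o loop image.fs /mnt/gentoo     # -o loop sets up the loop device
tar xjpf stage3-amd64-*.tar.bz2 -C /mnt/gentoo
tar xjf portage-latest.tar.bz2 -C /mnt/gentoo/usr
cp /etc/resolv.conf /mnt/gentoo/etc/
mount -t proc none /mnt/gentoo/proc
mount --rbind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash
```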

Congratulations, you now have what should be a working ami. If you ever need to update it you can just chroot into your image, adjust it, and then re-bundle, upload, and register. If you need to delete an ami there is a command that will do it, but I usually just use s3cmd.
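A sketch of the deletion route, with illustrative IDs and the same placeholder names as above: ec2-deregister drops the registration, and ec2-delete-bundle removes the parts from S3.

```shell
ec2-deregister ami-12345678
ec2-delete-bundle --bucket S3-bucket-name --prefix give-it-a-name \
    --access-key S3accesskeyhere --secret-key S3secretkeyhere
# or just remove the objects directly:
s3cmd del s3://S3-bucket-name/give-it-a-name.*
```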

You might want to see my updated guide on building with a custom kernel.

Written by rich0

April 2, 2010 at 10:41 am

Posted in gentoo