Gentoo on EC2 From Scratch
I’ve recently begun tinkering with Amazon EC2, and I figured that many might benefit from a Gentoo-from-scratch recipe. There aren’t too many gotchas. Credit is due to this blog for providing a good non-Gentoo-specific overview.
Before trying any of this, you need to be at least a little familiar with EC2. They have a great getting started walkthrough here. Once you know how to start instances and connect to them you’re halfway there. You’ll also need to set up an S3 account and have an access key and secret key to store your images. On your Gentoo box, install ec2-ami-tools and ec2-api-tools.
The biggest issue we’re going to have to deal with is that you can’t supply your own kernel. The two issues this creates are udev compatibility and kernel modules. The former is dealt with by a package.mask entry, and the latter by hunting down module tarballs online. The modules are fairly optional unless you want to bundle a running image (that requires loop support, which isn’t built into the EC2 kernel).
You might be tempted to start from an existing EC2 AMI. If you find a good one, that isn’t a bad idea, but most of them are VERY out of date. Updating a Gentoo install with a pre-EAPI portage and glibc/gcc versions that aren’t even in the tree any longer will be painful. Creating one from scratch isn’t actually that hard.
So, without further ado, here is the recipe (the steps generally follow the Gentoo handbook, so be sure to follow along in that):
- Get yourself a stage3 and portage snapshot per the handbook. x86 or amd64 is fine, but be sure to check the Amazon EC2 product page to see which of their offerings are 32/64-bit. This guide is written for 64-bit.
- Create yourself a disk image with
dd if=/dev/zero of=image.fs bs=1M count=5000
– feel free to tailor the size to your needs, but mind the EC2 root filesystem limitations unless you want to run it on elastic storage. The uploaded image will be compressed, so you won’t be paying for any unused space in your image.
- Point a loopback at your image, format it with a supported filesystem of your choice, and mount it.
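A minimal sketch of that step, assuming ext3, a free loop device at /dev/loop0, and /mnt/gentoo as the working mount point (all of those are just example choices):
losetup /dev/loop0 image.fs      # attach the image file to a loop device
mkfs.ext3 /dev/loop0             # format it with ext3
mkdir -p /mnt/gentoo
mount /dev/loop0 /mnt/gentoo     # mount it for the rest of the install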
- Extract the stage3 into your mount. Extract your portage snapshot as well.
- Tailor your make.conf as desired – set mirrors/etc. Copy your resolv.conf into the image so that DNS resolution works once you chroot.
- Mount /proc and /dev per the handbook. If you’re going to install screen, bind mount /dev/pts as well.
- Chroot into your new environment. Run env-update and source /etc/profile. Do an emerge --sync.
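Assuming the /mnt/gentoo mount point from above, those two steps look roughly like this (straight out of the handbook, so follow its lead if it differs):
mount -t proc none /mnt/gentoo/proc
mount -o bind /dev /mnt/gentoo/dev
mount -o bind /dev/pts /mnt/gentoo/dev/pts
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile
emerge --sync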
- Select a profile. Honestly, for most EC2 applications you’re going to want the default 10.0 profile, but it is up to you.
- Edit your locales and run locale-gen – sooner or later you’re going to have to update glibc, so there’s no sense in taking all day to do it.
- Set your timezone – I believe EC2 uses GMT but let me know if I’m wrong on that.
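For reference, both of those steps boil down to something like the following (en_US.UTF-8 is just an example locale, and GMT is my assumption about what EC2 expects):
echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
locale-gen
cp /usr/share/zoneinfo/GMT /etc/localtime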
- Set up your fstab (this is for 64-bit – see the EC2 docs for more details):
/dev/sda1 / ext3 defaults 1 1
/dev/sdb /mnt ext3 defaults 0 0
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
- Set net.eth0 to start in the default runlevel.
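If /etc/init.d/net.eth0 doesn’t already exist in your stage3, symlink it to net.lo first; something like:
cd /etc/init.d
ln -s net.lo net.eth0
rc-update add net.eth0 default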
- Set a root password. Create and properly set permissions on /root/.ssh/authorized_keys and put anything appropriate in there.
- Emerge syslog-ng and set it to run at boot.
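That is, roughly:
emerge app-admin/syslog-ng
rc-update add syslog-ng default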
- If you need cron install the cron daemon of your choice. Keep in mind that EC2 instances are volatile so cron is probably less useful than it normally would be.
- Add
>=sys-fs/udev-125
to your package.mask file (there’s a quick shell recap of this after the package list below).
- Populate your world file. A dhcp client like dhcpcd is essential; everything else here is optional:
app-admin/ec2-ami-tools
app-admin/ec2-api-tools
app-admin/syslog-ng
app-editors/vim
app-misc/screen
app-portage/cfg-update
net-misc/dhcpcd
sys-process/atop
- Do an emerge -au world to install your software; it will downgrade udev, and this is a good thing.
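As promised, the mask and the update boil down to roughly this (the world file itself lives at /var/lib/portage/world if you’d rather edit it directly):
echo ">=sys-fs/udev-125" >> /etc/portage/package.mask   # keep udev at a version the EC2 kernel can handle
emerge -au world                                        # this is where udev gets downgraded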
- Update your config files with cfg-update or etc-update.
- Remove any unnecessary tarballs, configure user accounts if needed, and do any application setup based upon the purpose of your system.
- Add the following to your local.start (credits to whoever created the AMI I ripped this out of – I’ll be happy to post them if you identify yourself – as far as I can tell AMI owners can’t be identified):
[ ! -e /root ] && cp -r /etc/skel /root
if [ ! -d /root/.ssh ] ; then
mkdir -p /root/.ssh
chmod 700 /root/.ssh
fi
curl http://169.254.169.254/2008-02-01//meta-data/public-keys/0/openssh-key > /tmp/my-key
if [ $? -eq 0 ] ; then
cat /tmp/my-key >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
rm /tmp/my-key
fi
killall nash-hotplug
- Configure sshd to run at startup, and edit your sshd config to allow root to log in.
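Roughly:
nano -w /etc/ssh/sshd_config     # set PermitRootLogin yes
rc-update add sshd default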
- Exit your chroot and unmount anything mounted inside of it.
- Clean up tmp, var/tmp, usr/portage/distfiles, and any other messes you have made. I suspect that to compress the image fully you probably need to zero the free space (
dd if=/dev/zero of=file-inside-filesystem bs=1M count=5000 ; rm file-inside-filesystem
).
- Umount your image and delete any loops you created. Congratulations, you now have a raw image file suitable for EC2. Now we just need to bundle, upload, and register it.
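With the example names from earlier, that’s just:
umount /mnt/gentoo
losetup -d /dev/loop0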
- Bundle your image with:
mkdir out
ec2-bundle-image --image path-to-image --prefix give-it-a-name --cert path-to-cert-file --privatekey path-to-pk-file --user youramazonaccountnumber --destination out/ --arch x86_64
- Upload your image with:
ec2-upload-bundle --manifest out/prefix.manifest.xml --bucket S3-bucket-name --access-key S3accesskeyhere --secret-key S3secretkeyhere
- Register your image with:
ec2-register S3-bucket-name/prefix.manifest.xml
Congratulations, you now have what should be a working AMI. If you ever need to update it you can just chroot into your image, adjust it, and then re-bundle, upload, and register. If you need to delete an AMI there is a command that will do it, but I usually just use s3cmd.
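For completeness, the official route looks roughly like this (the AMI ID is a placeholder; the bucket, prefix, and keys match the ones used above):
ec2-deregister ami-xxxxxxxx
ec2-delete-bundle --bucket S3-bucket-name --access-key S3accesskeyhere --secret-key S3secretkeyhere --prefix give-it-a-name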
You might want to see my updated guide on building with a custom kernel.