I chose Ubuntu for one important reason: they publish nightly builds of their OS for cloud machine deployment. These images contain a full base OS install and only require a bit of tweaking. Once deployed in QEMU they work just like any other Ubuntu installation.
When researching this I used a git checkout of QEMU HEAD at commit 35858955e6c6f9ef41c199d15457c13426ac6434. I also used Ubuntu 14.04 LTS on an x86_64 system to run QEMU and to prepare the Ubuntu guest OS image.
First up we need to fetch an appropriate image. I like using LTS releases, as Ubuntu is a little more careful about package updates/changes to their long-term releases. Grabbing the latest nightly image means fewer package updates once we're done customizing the image. With that in mind I selected this 300MB image. It's important to note that the "-disk1.img" files are qcow2 images, QEMU's native disk format.
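For reference, I fetch these images from cloud-images.ubuntu.com. A quick sketch of building the download URL; the exact path layout below is an assumption based on Ubuntu's cloud-image naming scheme, so double-check it against the site before relying on it:

```shell
# Build the download URL for a nightly cloud image. The
# release/current/... path layout is an assumption based on how
# cloud-images.ubuntu.com is organized.
release=trusty
arch=arm64
url="https://cloud-images.ubuntu.com/${release}/current/${release}-server-cloudimg-${arch}-disk1.img"
echo "$url"
# wget "$url"    # uncomment to actually download (~300MB)
```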
Once the image is downloaded we need to resize the filesystem. By default Ubuntu only leaves about 400MB of free space, which becomes pretty cramped for any real work. We have to perform three steps to resize the image. First, resize the qcow2 disk to whatever size you want. I used 8GB:
    $ $HOME/git/qemu/qemu-img resize trusty-server-cloudimg-arm64-disk1.img 8G
    Image resized.
    $ $HOME/git/qemu/qemu-img info trusty-server-cloudimg-arm64-disk1.img | grep size:
    virtual size: 8.0G (8589934592 bytes)
    disk size: 301M
    cluster_size: 65536

Next, resize the disk partition layout. We use qemu-nbd to expose the image as a block device. Ubuntu ships nbd support as a kernel module, so we just need to load it.
    $ sudo modprobe nbd max_part=8
    $ sudo $HOME/git/qemu/qemu-nbd --connect=/dev/nbd0 trusty-server-cloudimg-arm64-disk1.img
    $ sudo fdisk /dev/nbd0

    Command (m for help): p

    Disk /dev/nbd0: 8589 MB, 8589934592 bytes
    4 heads, 32 sectors/track, 131072 cylinders, total 16777216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0008bae4

        Device Boot      Start         End      Blocks   Id  System
    /dev/nbd0p1   *        2048     2889727     1443840  83  Linux

In the output above we need to save off the starting block of partition /dev/nbd0p1: the number 2048 in the last line. I've only ever seen 2048, but it could change in the future, so be sure to check what it is with fdisk. Now we delete (and re-create) the partition entry:
    Command (m for help): d
    Selected partition 1

    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-16777215, default 2048): 2048
    Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215):
    Using default value 16777215

    Command (m for help): w
    The partition table has been altered!

    Calling ioctl() to re-read partition table.
    Syncing disks.

Note the use of 2048 as the first sector, as mentioned previously. The last step is to resize the actual filesystem contained within the first partition:
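If you find yourself repeating this on many images, the interactive fdisk session can be scripted by feeding the same keystrokes on stdin. A hedged sketch; it assumes the partition starts at sector 2048, so check with fdisk first as noted above:

```shell
# Generate the same answers given in the interactive fdisk session:
# d = delete partition 1, n/p/1 = new primary partition 1,
# $START = first sector, "" = accept default last sector, w = write.
# START must match the partition's original first sector.
START=2048
fdisk_input=$(printf 'd\nn\np\n1\n%s\n\nw\n' "$START")
echo "$fdisk_input"
# echo "$fdisk_input" | sudo fdisk /dev/nbd0   # uncomment to run for real
```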
    $ sudo e2fsck -f /dev/nbd0p1
    ...
    $ sudo resize2fs /dev/nbd0p1

While we have the filesystem available through nbd, we also need to grab the Linux kernel and initrd.img files:
    $ sudo mkdir -p /mnt/virt && sudo mount /dev/nbd0p1 /mnt/virt
    $ cd /some/path
    $ sudo cp /mnt/virt/boot/*-generic .
    $ sudo chown $USER:$USER *-generic
    $ ln -s *disk1.img disk1.img
    $ ln -s vmlinuz-* vmlinuz
    $ ln -s initrd.img-* initrd.img

As far as I can tell, QEMU doesn't like having to search the virtual hard drive for the bootable images, which necessitates this step. The symlinks just make the QEMU invocations a bit easier and shorter. Now we can unmount the nbd filesystem:
    $ sudo umount /mnt/virt && sudo rmdir /mnt/virt
    $ sudo $HOME/git/qemu/qemu-nbd -d /dev/nbd0

We're now ready for our first boot of QEMU. Here's the command line I used:
    $ $HOME/git/qemu/aarch64-softmmu/qemu-system-aarch64 \
        -machine type=virt -cpu cortex-a57 -smp 1 -m 2048 -nographic \
        -rtc driftfix=slew -kernel vmlinuz \
        -append "console=ttyAMA0 root=LABEL=cloudimg-rootfs rw init=/bin/sh" \
        -initrd initrd.img -device virtio-scsi-device,id=scsi \
        -device scsi-hd,drive=hd -drive if=none,id=hd,file=disk1.img

The -append line is important here: the LABEL= is necessary for Ubuntu to mount the root filesystem, and init=/bin/sh short-circuits the boot process into an emergency shell. If we didn't boot into a shell, the image would try to come up as a cloud slave node and phone home. Not very useful for our needs. If all goes well you should see a prompt like the following:
    Begin: Running /scripts/init-bottom ... done.
    #

At this point there are some customization steps to perform before we can boot this image normally. First we have to mount the soon-to-be root filesystem and then chroot into it:
    # mount /dev/sda1 /mnt
    # chroot /mnt /bin/bash
    bash: cannot set terminal process group (-1): Inappropriate ioctl for device
    bash: no job control in this shell
    root@(none):/#

Set root's password; Ubuntu images don't ship with one:
    root@(none):/# passwd root
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully

Remove the cloud packages from the Ubuntu image. This speeds up boot time and also gives you a console login prompt:
    root@(none):/# dpkg -P cloud-guest-utils cloud-init pollinate landscape-client landscape-common apparmor apport apport-symptoms ufw
    ...
    root@(none):/# rm -f /etc/init/pollinate.conf /etc/default/pollinate
    root@(none):/# rm -rf /etc/cloud

Disable the VESA framebuffer. It doesn't work with QEMU and just produces boot-up errors.
    root@(none):/# perl -p -i -e 's/modprobe -q -b vesafb/true/g;' /etc/init/udev-fallback-graphics.conf

Allow all users to SSH into the OS with passwords and keys:
    root@(none):/# perl -p -i -e 's/^PermitRootLogin.+$/PermitRootLogin yes/g;' /etc/ssh/sshd_config
    root@(none):/# perl -p -i -e 's/^PasswordAuthentication.+$/PasswordAuthentication yes/g;' /etc/ssh/sshd_config

Generate new, unique SSH server keys for each guest OS:
    root@(none):/# rm -f /etc/ssh/ssh_host_*_key*
    root@(none):/# dpkg-reconfigure openssh-server

Change your timezone with the ncurses menu:
    root@(none):/# dpkg-reconfigure tzdata

I also recommend setting up NTP on the guest OS, especially if you're sharing files between the guest and host OS. The change below tells Ubuntu to synchronize time against the servers in the "NTPSERVERS=" variable in the same file, which defaults to ntp.ubuntu.com:
    root@(none):/# perl -p -i -e 's/^NTPDATE_USE_NTP_CONF=.+$/NTPDATE_USE_NTP_CONF=no/g;' /etc/default/ntpdate

(optional) I like the next two commands to make root logins much more responsive. Ubuntu gathers OS information on every login and displays it to you unless you disable it. These commands remove that feature; don't run them if you like seeing Ubuntu's login information:
    root@(none):/# >/root/.hushlogin
    root@(none):/# rm -f /etc/update-motd.d/*

(optional) Set up a shared storage location to copy files between the guest and host OS. QEMU has a nice feature for this: a 9p filesystem passthrough over virtio:
    root@(none):/# mkdir /shared
    root@(none):/# vi /etc/rc.local

    # add this line somewhere before the 'exit 0' line:
    /bin/mount -t 9p -o trans=virtio lvdisk0 /shared

Don't add this entry to /etc/fstab. There seem to be module loading conflicts during boot which prevent it from succeeding.
(optional) Create a non-root admin user:
    root@(none):/# adduser ubuntu
    ...
    Is the information correct? [Y/n] y
    root@(none):/# usermod -aG adm ubuntu
    root@(none):/# usermod -aG sudo ubuntu

(optional) Statically set the hostname. You can also choose to pass in a hostname at boot (see below):
    root@(none):/# echo "newhost" >/etc/hostname

When you're done, exit the chroot and halt the system so you can perform the first normal boot:
    root@(none):/# exit
    # sync; sync; sync; halt

I've had the halt command fail on me, which is why I use the old-fashioned three syncs before executing it.
You can then exit QEMU by typing "ctrl-a c" to reach the QEMU monitor, then typing "quit". Finally you can try to boot the image normally. Here's an example command line I use:
    $ BASE=$PWD; HOST=ubuntu; mac=52:54:00:00:00:00; sshport=22000
    $ $HOME/git/qemu/aarch64-softmmu/qemu-system-aarch64 \
        -machine type=virt -cpu cortex-a57 -smp 1 -m 2048 -nographic \
        -rtc driftfix=slew -kernel "$BASE"/vmlinuz \
        -append "console=ttyAMA0 root=LABEL=cloudimg-rootfs rw" \
        -initrd "$BASE"/initrd.img \
        -device virtio-scsi-device,id=scsi -device scsi-hd,drive=hd \
        -drive if=none,id=hd,file="$BASE"/disk1.img \
        -fsdev local,id=vdisk0,path="$BASE"/shared,security_model=none \
        -device virtio-9p-device,fsdev=vdisk0,mount_tag=lvdisk0 \
        -netdev user,hostfwd=tcp::${sshport}-:22,hostname=$HOST,id=net0 \
        -device virtio-net-device,mac=$mac,netdev=net0 \
        -monitor telnet:0.0.0.0:24000,server,nowait \
        -serial telnet:0.0.0.0:23000,server,nowait

There are a few differences from the previous QEMU invocation worth noting. The "-fsdev" line sets up the virtio-9p shared directory at $BASE/shared. The "-netdev" line port-forwards host OS port $sshport to guest OS port 22, which allows both local and remote SSH connections to the running guest, and its hostname=$HOST option sets the guest OS hostname. The "-device virtio-net-device" line sets the network device's MAC address.
The "-monitor" and "-serial" lines give you telnet-like interfaces to the QEMU monitor and the Linux console, respectively. It's a great way to expose those interfaces as long as your network is secure, such as in a lab environment. Those ports act like telnet sessions without any authentication, so don't expose them if you don't trust others on your network.
At this point you should be able to SSH into $sshport on the QEMU host and login as either root or your admin user. You can then apt-get and natively compile to your heart's content.
If you want to run multiple instances of QEMU on the same host, give each instance unique values for $sshport, $BASE, the MAC address, and the "-monitor"/"-serial" telnet ports, and give each running instance its own copies of the kernel, initrd.img, and disk1.img files. I've run 5 instances on the same host OS with no problems using this technique, and I'm sure I could run more as needed.
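The per-instance bookkeeping can be handled with a small helper. A sketch, assuming instances are numbered 0, 1, 2, ...; the numbering scheme is my own convention, and the port bases and MAC prefix are taken from the command line above:

```shell
# Derive unique ports, MAC address, and working directory for one
# instance. Bases match the single-instance command line above.
idx=1                                  # instance number: 0, 1, 2, ...
sshport=$((22000 + idx))               # host port forwarded to guest :22
serialport=$((23000 + idx))            # -serial telnet port
monitorport=$((24000 + idx))           # -monitor telnet port
mac=$(printf '52:54:00:00:00:%02x' "$idx")
BASE="$PWD/instance-$idx"
echo "$sshport $serialport $monitorport $mac"
# mkdir -p "$BASE" && cp vmlinuz initrd.img disk1.img "$BASE"/   # per-instance copies
```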
Feedback and comments are appreciated. I know it's a bit of a brain dump, but I figured it'd be useful for others to have this information. Given the scarce availability of aarch64 hardware, this is a great stop-gap until cheap SoCs start hitting the market.
Have fun!