Tuesday, December 16, 2008

An Encrypted File-backed File System on FreeBSD

The following is a compilation of information, largely based on the FreeBSD Handbook, Section 18.13 and Section 18.16. This post describes how a file-backed, encrypted file system can be built and used on FreeBSD.

Prerequisites

In order to follow the steps below, the following prerequisites must be met:
  • md(4) in the Kernel
  • gbde(4) in the Kernel, i.e. kldload geom_bde
  • The /etc/gbde directory must exist

First time installation

Once those requirements are fulfilled, the first step is to create a file that will serve as the backing store for the file system. There is no support for growing images, so all space must be allocated now. This command creates a 128 MByte file filled with zeros:
$ dd if=/dev/zero of=encrypted.img bs=4k count=32768
Next, create a memory disk which is based on the the image file created above. As root, do:
# mdconfig -a -t vnode -f encrypted.img -u <unit>
In the example above, the -u <unit> parameter is optional and specifies the number of the md(4) device to create. For example, if you use 4, then md4 will be created.

Now create a partition table, e.g. one with an automatic layout:
# bsdlabel -w md<unit> auto
At this point, you have the equivalent of a hard disk with one or more FreeBSD partitions on it. Note that there is no file system yet. In order to create an encrypted file system, an initialization step must be performed:
# gbde init /dev/md0c -i -L /etc/gbde/encrypted.lock
The initialization step opens an editor where the user is asked to enter a few parameters. Most notably, it is probably sensible to change the sector_size to 4096, i.e. the page size on i386. When the editor is closed, the gbde(8) program asks for a password. This password will be used to encrypt the disk, so do not lose it. Note that the /dev/md0c parameter corresponds to the md(4) device which was previously created. The lock file can be given an arbitrary name as long as it ends in .lock. Also note that the lock file must be backed up, as the file system cannot easily be accessed without it.
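The parameter template presented by the editor consists of commented key = value pairs; after the change suggested above, the relevant line would read roughly like this (exact comments and defaults may vary):
sector_size     =       4096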

Now the encrypted device can be attached by running
# gbde attach /dev/md0c -l /etc/gbde/encrypted.lock
You'll be prompted for the password set in the previous step. If the password is accepted, you'll end up with a new disk device at /dev/md0c.bde which you can operate on just like a regular disk. That means you'll need to create a file system first.
# newfs -U -O2 /dev/md0c.bde
Make sure you use the .bde device node and not the raw memory disk, as the latter would leave you without encryption. When you're done, it's time to mount the file system:
# mkdir /encrypted
# mount /dev/md0c.bde /encrypted

Unmounting the encrypted file system

Unmounting the file system is easy, but the gbde(4) device needs to be detached before the md(4) device can be destroyed.
# umount /encrypted
# gbde detach /dev/md0c
# mdconfig -d -u 0

Re-mounting an encrypted file system

Re-mounting is simple, but note that the FreeBSD handbook suggests that the file system be checked for errors before mounting:
# mdconfig -a -t vnode -f encrypted.img
md0
# gbde attach /dev/md0c -l /etc/gbde/encrypted.lock
# fsck -p -t ffs /dev/md0c.bde
# mount /dev/md0c.bde /encrypted

Saturday, November 22, 2008

Generating random passwords

Here are a couple of ways of generating random passwords without using a "password generator". First, generate a random string like this:
$ dd if=/dev/urandom count=500 bs=1 | tr "\n" " " | sed 's/[^a-zA-Z0-9]//g'
or like this
$ dd if=/dev/urandom count=500 bs=1 | md5
Then adjust the length by piping the output through cut(1):
... | cut -c-8
While the first option requires more typing, it generates both lower- and upper-case letters as well as digits. The second option is easier to type but only yields lower-case hexadecimal characters.
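For example, combining the second variant with cut(1) gives an 8-character password in one go:
$ dd if=/dev/urandom count=500 bs=1 | md5 | cut -c-8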

Update (Dec 12th, 2008): Fixed error. cut(1) must be used, not cat(1).

Thursday, November 13, 2008

Big R Radio 90's Alternative

Just to save the link somewhere... This command tunes in on the Big R Radio 90's Alternative station.
mplayer http://livestream2.bigrradio.com/90salt

Friday, November 7, 2008

The new GenFw Tool

I've re-written the GenFw tool, part of the TianoCore BaseTools project. The source code can be found here. In order to use the tool, the file Source/C/GenFw/GenFw.c must be replaced with the re-written one. Then, the base tools must be re-built. After that, the EDK2 build process can be started. It will automatically pick up the new tool, which will brand an ELF file with a UEFI file type.

Currently, the re-written tool will not compile on Linux. The reason is that Linux lacks the err(3), errx(3), warn(3), etc. library functions which the BSDs provide. It should be easy to add some compatibility macros using a combination of fprintf(3), strerror(3) and exit(3). I might add those should the need arise.
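A minimal sketch of what such compatibility macros could look like (an illustration only, not necessarily the code mentioned in the update below):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* warn()/warnx() print a message to stderr; warn() appends strerror(errno). */
#define warnx(...) do { fprintf(stderr, __VA_ARGS__); \
                        fprintf(stderr, "\n"); } while (0)
#define warn(...)  do { fprintf(stderr, __VA_ARGS__); \
                        fprintf(stderr, ": %s\n", strerror(errno)); } while (0)
/* err()/errx() additionally exit with the given status. */
#define errx(status, ...) do { warnx(__VA_ARGS__); exit(status); } while (0)
#define err(status, ...)  do { warn(__VA_ARGS__); exit(status); } while (0)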

Update (Dec 3rd, 2008): I've added the compatibility macros for Linux. An updated version of the source code can be downloaded here.

Friday, October 31, 2008

More on TianoCore for coreboot

It's been a while since I last worked on combining TianoCore and coreboot. Tonight I had some spare time and tried to pursue the project.

The previously mentioned build failure does indeed stem from the fact that the build tools cannot cope with ELF binaries. Especially problematic is the GenFw tool, which is supposed to convert the binary file into a UEFI firmware volume file. In order to do that, it parses the header of the input executable and encodes the type of file (in UEFI terms) in a spare header field. The tool expects to work on PE32 files, but the TianoCore developers have added code which internally converts an ELF image into a PE32 image if the tool is pointed at an ELF file. However, this facility is only compiled in if defined(Linux) evaluates to true. Of course, that won't work on FreeBSD, but changing the relevant pre-processor condition allowed me to produce a UEFI firmware volume without any further changes to the code.

However, this shortcut will only work on x86 and only if the target platform is x86, too. The real solution is to avoid the conversion and instead encode the UEFI file type directly into the ELF header. I've already done this for my thesis project (*) and back then it seemed that re-writing the GenFw tool was easier than fixing the existing implementation. Well, here's the next item on the ToDo list...

(*) I used the Java-based tools for the thesis project which means that a different tool with essentially the same functionality was the culprit.

Friday, October 24, 2008

"Parallels" for Linux

Ben has an interesting post on how to boot Windows XP using KVM on Fedora Core 9. The interesting part is that Windows XP is installed on the host's hard disk. His instructions almost work verbatim, but there's one exception. Since I'm using KVM-73, the QEMU command is:
$ qemu-system-x86_64 -hda /dev/sda -net nic -net user -m 1024 \
    -cdrom fixntldr.iso -boot d -std-vga 
This will also give the guest system access to the network.

Thursday, October 23, 2008

Encrypted Devices/Filesystems on Linux

Yesterday I tried to encrypt a complete USB Stick under Linux. I followed this tutorial and it worked quite well.

Mounting the encrypted device isn't as obvious as it could be, so here it goes:
$ cryptsetup create <symbolic name> <device name>
$ mount /dev/mapper/<symbolic name> <mountpoint>
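To unmount and close the device again, the reverse steps would presumably be:
$ umount <mountpoint>
$ cryptsetup remove <symbolic name>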

Wednesday, September 24, 2008

The beginnings of coreboot and TianoCore

In order to create a UEFI payload for coreboot, I've started a coreboot platform as part of the TianoCore EDK II. The sources for the platform can be obtained here. Note that the CorebootPkg directory must be placed in the TianoCore $WORKSPACE directory.

To build the package on FreeBSD, a GNU toolchain from vendor sources must be used. This is because the TianoCore tools use some compiler/linker flags unknown to the toolchain included in the FreeBSD base system. The path as well as the names of the toolchain binaries must be adjusted in Conf/tools_def.txt. Because I built the toolchain according to these instructions, the preprocessor will not look in /usr/include for headers which causes errors in the ProcessorBind.h header when it attempts to include stdint.h. This patch can be applied to fix this.

Note that the build process still cannot complete as the tools producing the final Firmware Volume (FV) cannot cope with the ELF binaries produced by the GNU toolchain.

Tuesday, September 23, 2008

TianoCore and the Python-based Build Process, Part 3

This is part III of my attempts to build the TianoCore EDK II with the Python-based tools. In order to circumvent the error that stopped me in part II, the build process needs to be taught to use GNU make, i.e. gmake, on FreeBSD instead of make, which is BSD make. This can be done by editing the *_ELFGCC_*_MAKE_PATH variable in Conf/tools_def.txt.
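For example, the adjusted line in Conf/tools_def.txt might look something like this (the exact path depends on where gmake is installed; the ports version lives in /usr/local/bin):
*_ELFGCC_*_MAKE_PATH = /usr/local/bin/gmake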

The tools_def.txt file is automatically copied from a template that is part of the BaseTools sources. This patch fixes the template so the changes described above do not have to be applied manually.

At this point, the build process starts and does actually build some modules. However, the UnixPkg cannot be built completely on FreeBSD. This is because the code makes some assumptions only true on Linux, e.g. the presence of the sys/vfs.h header.

Saturday, September 20, 2008

TianoCore and the Python-based Build Process, Part 2

So here's the "sequel" to Part One. This time I'm trying to actually build a Firmware Volume with the Python-based tools.

Prerequisites for build.py

The core of the tools is build.py, a Python script which invokes the tools in order to build a Firmware Volume (FV). On FreeBSD, build.py cannot be run until the following requirements are met:
  • SQLite3 for Python, which can be installed through the databases/py-sqlite3 port.
  • The Python module for ANTLR, a parser generator.
  • Installing the module mentioned above requires EasyInstall, or rather: I don't know how it can be done otherwise.
Because I could not find a port for EasyInstall, I did this to install the script on FreeBSD:
$ fetch http://peak.telecommunity.com/dist/ez_setup.py
$ chmod +x ez_setup.py
$ ./ez_setup.py
Note that this isn't the whole truth: the interpreter path in the script, i.e. the first line aka the "shebang", must be adjusted to /usr/local/bin/python before the script can be executed.
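In other words, the first line of ez_setup.py should then read:
#!/usr/local/bin/python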
After that, the easy_install command is available and the ANTLR module can be installed by running this:
$ easy_install \
  http://www.antlr.org/download/Python/antlr_python_runtime-3.0.1-py2.5.egg

Running build.py

In theory, running build.py and thus building a Firmware Volume should be as easy as this:
$ cd path/to/edk2
$ export PYTHONPATH=/path/to/basetools/Source/Python
$ . edksetup.sh BaseTools
$ python $PYTHONPATH/build/build.py
Unfortunately, the last step initially aborted with this error:
build...
 : error 5000: Please execute /home/phs/sandbox/basetools/Bin/FreeBSD-i386:/sbin: \
/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin: \
/home/phs/bin/edksetup.bat to set /home/phs/sandbox/basetools/Bin/Freebsd7 in \
environment variable: PATH!



- Failed -
After some trial and error, I think that the above error was caused by user error: I had previously copied the compiled C programs from Source/C/bin to Bin/FreeBSD-i386 (paths relative to /path/to/basetools). After removing Bin/FreeBSD-i386, I created a link to BinWrappers/PosixLike at the same location:
$ cd /path/to/basetools
$ ln -s BinWrappers/PosixLike Bin/FreeBSD-i386
I then re-ran build.py (see above) and it produced some output that didn't look like errors:
00:44:09, Sep.21 2008 [FreeBSD-7.1-PRERELEASE-i386-32bit-ELF]

WORKSPACE                = /usr/home/phs/sandbox/edk2
EDK_SOURCE               = /usr/home/phs/sandbox/edk2/EdkCompatibilityPkg
EFI_SOURCE               = /usr/home/phs/sandbox/edk2/EdkCompatibilityPkg
EDK_TOOLS_PATH           = /home/phs/sandbox/basetools

TARGET_ARCH              = IA32
TARGET                   = DEBUG
TOOL_CHAIN_TAG           = ELFGCC

Active Platform          = UnixPkg/UnixPkg.dsc
Flash Image Definition   = UnixPkg/UnixPkg.fdf

Processing meta-data . . . . . . .
Unfortunately, though, right after the dots, an error occurred:
build...
UnixPkg/UnixPkg.dsc(...): error 4000: Instance of library class [NetLib] is not found
        in [MdeModulePkg/Universal/Network/ArpDxe/ArpDxe.inf] [IA32]
        consumed by module [MdeModulePkg/Universal/Network/ArpDxe/ArpDxe.inf]
 

- Failed -
00:44:17, Sep.21 2008 [00:08]

Fixing the UnixPkg

The UnixPkg part of the EDK II seems to be broken, as the error above indicates a dependency problem between modules caused by an incorrect platform definition file (*.dsc). Applying this patch fixes the problem.

The patch ensures that all dependencies are met, but the build process still fails with this error:
Processing meta-data . . . . . . . . done!
make: don't know how to make pbuild. Stop


build...
 : error 7000: Failed to execute command
        make pbuild [/usr/home/phs/sandbox/edk2/Build/Unix/DEBUG_ELFGCC/IA32/MdePkg/Library/BaseTimerLibNullTemplate/BaseTimerLibNullTemplate]

Waiting for all build threads exit...
make: don't know how to make pbuild. Stop


build...
 : error 7000: Failed to execute command
        make pbuild [/usr/home/phs/sandbox/edk2/Build/Unix/DEBUG_ELFGCC/IA32/MdePkg/Library/BaseLib/BaseLib]


build...
 : error F002: Failed to build module
        MdePkg/Library/BaseTimerLibNullTemplate/BaseTimerLibNullTemplate.inf [IA32, ELFGCC, DEBUG]

- Failed -
01:01:43, Sep.21 2008 [00:09]
Oh, well, to be continued...

Sunday, September 14, 2008

TianoCore and the Python-based Build Process

Now that I can use coreboot and libpayload on FreeBSD, it's time to try the new Python-based build process for the TianoCore EDK II on FreeBSD.

There are a few prerequisites, described in the sections below. Note that Subversion access requires a user account at the TianoCore project.

Installing the e2fs-libuuid port

This is trivially easy, just do:
$ cd /usr/ports/misc/e2fs-libuuid
$ sudo make install
That's all. The headers and libraries are installed under /usr/local.

Building the Base Tools

Compiling the Base Tools, i.e. the Python-based TianoCore build tools, isn't complicated but doesn't work out of the box, either. First, these two patches (patch 1, patch 2) must be applied:
$ cd /path/to/basetools
$ patch -p0 < basetools_include.diff
$ patch -p0 < basetools_make.diff
The first patch adjusts some include paths so that /usr/local/include is searched, too, which is required in order to find the uuid/uuid.h header. The second patch replaces invocations of make with the $(MAKE) variable, which holds the name of the invoked make binary. This is required because on FreeBSD (and other BSDs) make is not GNU make, but the latter is required to build the Base Tools. Consequently, when building the project, make sure that gmake is used:
$ gmake

Compiling the EDK II

To be continued...

Friday, September 12, 2008

Hacking coreboot and libpayload

After some quiet time, I picked up a project I started a while ago: Hacking coreboot and libpayload on FreeBSD.

Building coreboot

On FreeBSD, building coreboot requires a toolchain built from the GNU sources. If the stock toolchain is used, the build process dies with this error:
CC      build/stage0.init
/usr/home/phs/sandbox/coreboot-v3/build/arch/x86/stage0_i586.o(.text+0xf): In function `_stage0':
: relocation truncated to fit: R_386_16 gdtptr
gmake: *** [/usr/home/phs/sandbox/coreboot-v3/build/stage0.init] Error 1
The solution is to build a compiler as described here and here and then set the $CROSS environment variable accordingly, e.g. like this:
$ export CROSS=/opt/bin/i386-unknown-linux-gnu-
Note that the above requires that this patch is applied. After that, build the bios.bin binary using GNU make.
$ gmake menuconfig
$ gmake

Building libpayload

Compiling libpayload is trickier than building coreboot, as the build files assume that the world is Linux. First, sh is not always bash. Second, the header and library search paths are screwed up as they don't include the /usr/local subdirectories. Third, the gettext library is installed as libgettextlib.so on FreeBSD and the program must be linked against it explicitly. And finally, the install(1) tool has different parameters than on Linux. Oh, and there are no stdarg.h and stddef.h headers.

I've hacked around those issues, the Mercurial repository is available at https://phs.subgra.de/hg/libpayload.

Monday, August 25, 2008

Hacking SLOF Clients

SLOF includes a few clients. The net-snk client in particular is interesting, as it allows one to add applications which can be started by specifying their name on the command line when booting the kernel.

In order to hack SLOF client applications, the sec-client target must be built in the clients/net-snk subdirectory of the SLOF source tree. Before this can succeed, the CPUARCH environment variable must be set, like this:
$ export CPUARCH=ppc970
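With the variable set, building the client should boil down to something like the following (an assumption on my part; paths are relative to the SLOF source tree):
$ cd clients/net-snk
$ make sec-client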

Sunday, August 10, 2008

Latest Version of KOMAscript

The TeX Live distribution for Mac OS X is getting more outdated by the day. Maybe there's a way to update it, but I didn't bother to check.

For a reasonably good layout of letters without a footer I found that a recent version of KOMAscript is required. I'm using Version 2.98 obtained from http://dante.ctan.org/.

Installing the new version proved to be quite easy:
$ cd /usr/local/texlive/2007/texmf-dist
$ sudo unzip komascript.tds.zip
$ sudo texconfig rehash

Thursday, July 10, 2008

Cross compiling the FreeBSD Kernel

Wow, two posts in one day already ;-)

There are two things I'd like to note. First, I noticed that cross-compiling seems to be a major issue for me. I don't know why that is.

Second, I need to remember Warner Losh's post on cross compiling FreeBSD. Essentially, the procedure is:
$ export TARGET=powerpc
$ export TARGET_ARCH=powerpc
$ make kernel-toolchain
$ make buildkernel
Addendum: Unfortunately, this procedure works only on architectures already supported by FreeBSD and its build system. Therefore, it doesn't work for me. So here's the full story on how I got FreeBSD to at least compile.

Building the Toolchain

Building the toolchain is pretty straightforward. I've already written about how to build a cross compiler. On FreeBSD, however, four things are different.
  • The target is powerpc64-unknown-freebsd. I don't know if powerpc64-unknown-elf would have worked, too.
  • The target is new, so a patch to the binutils sources is required.
  • The GNU MP Bignum Library (GMP) is required. I used GMP version 4.2.1 and installed it in $PREFIX.
  • Finally, the MPFR library needs to be built. I used MPFR version 2.3.0 and installed it in $PREFIX.
Note that those steps have to be performed before the compiler is built. Since I didn't install the libraries in the standard locations, the LD_LIBRARY_PATH variable needs to be set before the compiler can be used:
$ export LD_LIBRARY_PATH=$PREFIX/lib

Building the Kernel

The basic procedure of building a kernel is outlined in the FreeBSD Developer's Handbook. Provided that the cross compiler has been installed in $PREFIX, these steps are required:
$ export MACHINE=powerpc64
$ export MACHINE_ARCH=powerpc64
$ export CC=${PREFIX}/${TARGET_PREFIX}-gcc
$ export NM=${PREFIX}/${TARGET_PREFIX}-nm
$ export LD=${PREFIX}/${TARGET_PREFIX}-ld
$ export SYSTEM_LD=${LD}
$ export OBJCOPY=${PREFIX}/${TARGET_PREFIX}-objcopy
$ cd /usr/src/sys/powerpc64/conf
$ config KERNCONF
$ cd ../compile/KERNCONF
$ make cleandepend && make depend
$ make kernel
Oh, of course this doesn't work with the stock FreeBSD sources. Instead, my FreeBSD 64-Bit PowerPC Mercurial repository needs to be used. Note that for convenience reasons, that tree includes a crossenv script which, when sourced, sets up the required environment variables.

Linux KVM (kvm-70) on IBM Open Client 2.2

The Linux kernel-based virtual machine (KVM) is a great way to do virtualization on computers running Linux. It requires virtualization support in the host processor (most modern x86 CPUs have this) and a kernel module. The kernel module can be built from the KVM sources.

Unfortunately, compiling the module on the IBM Open Client 2.2 distribution doesn't work out of the box. Instead, a patch is required. The patch is an extended version of this commit to the KVM repository and applies against the KVM-70 release tar ball.

Networking

The KVM networking documentation lists brctl(8) and tunctl(8) as requirements for a bridge between the host and the guest. On the Open Client distribution, the brctl utility is part of the bridge-utils package, whereas the tunctl tool is part of uml-utils - on other distributions, that is; Open Client doesn't ship it. However, there is a Fedora Core 9 package available which seems to work.
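For reference, and assuming the example device names tap0 and br0, manually creating a tap device and attaching it to a bridge with those tools typically looks like this:
# tunctl -u <user> -t tap0
# brctl addbr br0
# brctl addif br0 tap0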

Before starting the KVM guest, make sure that the tun kernel module is loaded. These are the steps I use to start the guest:
$ sudo modprobe tun
$ MACADDR=`genmac`
$ sudo qemu-system-x86_64 -hda freebsd-7.0.img \
   -net nic,macaddr=$MACADDR -net tap,script=qemu-ifup
Note that the genmac and qemu-ifup scripts are the examples from the KVM documentation.
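For illustration, a genmac-style helper could be as simple as the following sketch, which prints a MAC address with the locally administered 52:54:00 prefix commonly used for QEMU/KVM guests and a random suffix (the actual script from the KVM documentation may differ):
#!/bin/bash
printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))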

NAT on the bridge

Because I cannot put the KVM guest on the host network, I need to do NAT on the host. I've found this quick tutorial on NAT with iptables. The four steps are:
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# iptables -A FORWARD -i eth0 -o tap0 -m state \
     --state RELATED,ESTABLISHED -j ACCEPT
# iptables -A FORWARD -i tap0 -o eth0 -j ACCEPT
Also, make sure the tap0 interface has an IP address:
$ sudo ifconfig tap0 192.168.0.1/24

Wednesday, July 2, 2008

PowerPC Device Tree Compiler

The Device Tree Compiler (DTC) project is hosted at OzLabs. The website seems to be unavailable at the moment, but the git repository at git://ozlabs.org/srv/projects/dtc/dtc.git seems to work.

Cross-building the tools works fine. This is what I did:
$ export CC=ppu-gcc
$ make
This will create the dtc and ftdump tools which can then be copied to the target machine.

Tuesday, July 1, 2008

Cross-compiling the Linux kernel

I need to cross-compile a PowerPC Linux kernel on an x86 laptop. I've found instructions on how to compile (not cross-compile) the Linux kernel at this website. Further, there is a post to a mailing list here which shows how to cross-compile the kernel. The mailing list post mentions a ccontrol file, but I have no clue what that is. Luckily I've found this blog post, which seems to be more accurate.

Thursday, June 26, 2008

Building a PowerPC Cross Compiler

I need to build my own cross compiler which will run on i386 and produce 64-Bit PowerPC Binaries. I've found a pretty neat introduction to building a cross compiler on IBM's developerWorks site (registration required). The tutorial isn't a step-by-step guide, but it helped me a lot.

The basic procedure for building a cross-compiler is:
  • Obtain headers specific to the operating system
  • Build binutils for the target platform
  • Build a bootstrap compiler
  • Build a standard C library for the target
  • Build the final cross-compiler, using the C library just built
The developerWorks tutorial doesn't mention this, but the first three steps can easily be run in parallel. Anyways, before starting, I've set these environment variables:
$ export TARGET=powerpc64-unknown-linux-gnu
$ export PREFIX=/opt/crosscompiler
$ export TARGET_PREFIX=$PREFIX/$TARGET
$ export PATH=$PATH:$PREFIX/bin

Obtaining Linux-specific Headers

I followed the developerWorks tutorial on this one: downloaded and extracted the Linux kernel sources, then copied the relevant files. Here are the commands I ran:
$ wget http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.25.9.tar.bz2
$ tar xvjf linux-2.6.25.9.tar.bz2
$ cd linux-2.6.25.9
$ make ARCH=powerpc CROSS_COMPILE=powerpc64-linux- menuconfig
(configure options, but tweaking isn't necessary)
$ mkdir -p $TARGET_PREFIX/include
$ cp -r include/linux $TARGET_PREFIX/include
$ cp -r include/asm-powerpc $TARGET_PREFIX/include/asm
$ cp -r include/asm-generic $TARGET_PREFIX/include
If you read to the end of this post, you'll realize that this step wouldn't have been required (for now).

Building GNU binutils

I'm using GNU binutils 2.18, available from the GNU website. These are the steps required to build binutils.
$ wget 
$ tar xjvf
$ ./configure --prefix=$PREFIX --target=$TARGET --disable-nls -v
$ make all
$ make install
While building binutils did take a while, it wasn't as long as the tutorial makes you believe. On an IBM ThinkPad T60p built around a Centrino Duo CPU running at 2.16 GHz, it took less than 10 minutes. Also note the last command ("make install"), which is missing from the developerWorks tutorial.

Building a bootstrap compiler

For my project I need GCC 4.x; the latest version at the time of writing is 4.3.1, which is available from a GNU mirror near you. Downloading and extracting is easy:
$ wget ftp://ftp.gwdg.de/pub/misc/gcc/releases/gcc-4.3.1/gcc-4.3.1.tar.bz2
$ tar xjvf gcc-4.3.1.tar.bz2
$ cd gcc-4.3.1
Here are the steps required to build a bootstrap compiler.
$ ./configure --prefix=$PREFIX --target=$TARGET --without-headers \
  --with-newlib -v
$ make all-gcc
$ make install-gcc
This took longer than building binutils; however, it still took less than 30 minutes (as opposed to the hours the tutorial talks about).

Building the GNU C Library (glibc)

After downloading and extracting the glibc sources, I ran the configure script like this:
$ CC=${TARGET}-gcc ./configure --target=$TARGET --prefix=$PREFIX \
  --with-headers=${TARGET_PREFIX}/include
Unfortunately, this command failed with the following error:
(...)
checking whether __attribute__((visibility())) is supported... no
configure: error: compiler support for visibility attribute is required
However, this isn't important as I won't need a standard C library for now - I'm building with -ffreestanding and -nostdlib anyways. Therefore I've decided that I won't pursue this further but may come back to it later.

UEFI Adoption

As a note to myself: There is a press release of the UEFI forum available which shows which vendors have already adopted UEFI or will in the near future.

Sunday, June 22, 2008

Qemu, FreeBSD and coreboot

Since my attempts at getting Qemu running on Mac OS X were unsuccessful, I've decided to go a different route. I'm now trying to build it on FreeBSD again.

Some time ago, I took some notes on how to get Qemu running on FreeBSD and added them to the coreboot wiki. Some time later, I tried to build Qemu per those instructions, but discovered that the port had been updated to a newer version of Qemu and no longer worked for this purpose.

So I've decided to maintain my own copy of the Qemu sources. The goal is to have a working version of Qemu which can be built on FreeBSD and can run coreboot. The repository is at svn+ssh://phs.subgra.de/usr/svnroot/qemu, a web frontend is available at https://phs.subgra.de/svnweb/index.cgi/qemu. Since the repository is not (yet?) public, here is a tar-ball of the latest version.

Building Qemu for FreeBSD from "my" sources is pretty straightforward. However, it's not as straightforward as building on Linux or from a FreeBSD port, so here are the full instructions ;-)
$ export BSD_MAKE=`which make`
$ ./configure (your options here)
$ gmake
$ gmake install
Have fun.

Sunday, June 15, 2008

GCC 3.3 on Mac OS X

I just tried to build Qemu on an Intel Mac. Qemu needs GCC 3.x and won't compile with GCC 4.x. The configure script for Qemu automatically detects that GCC 3.x is installed as /usr/bin/gcc-3.3 and tries to use it. The problem is that this compiler is actually a cross-compiler for PowerPC and cannot produce x86 binaries. Per this post, I found out that passing the -arch ppc flag allows the compiler to be used on an Intel machine. Obviously the resulting binary will be a PowerPC binary, but it should run under Rosetta on an Intel machine.
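For example, compiling a hypothetical test.c would then look like this:
$ gcc-3.3 -arch ppc -o test test.c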

Note that I still don't have a solution for building Qemu on Mac OS X. This post is just a note about one aspect of the whole problem.

Wednesday, June 11, 2008

Leaky Abstractions

Today I came across an interesting article that sums up why abstractions still require one to know the lower levels. In essence, the author claims that abstractions are good and help us build complex systems, but are a great burden if something fails in the lower levels. I couldn't agree more, especially when I think of the TianoCore code and the UEFI software model.

Friday, June 6, 2008

Profiling Tools for Mac OS X

I came across this page today, where Amit Singh talks about some profiling tools for Mac OS X. Particularly interesting are the Computer Hardware Understanding Development (CHUD) Tools. I haven't used them yet, but will check them out soon.

Thursday, May 29, 2008

The infamous memory hole

Ok, so I've always suspected it, i.e. had a theory, but the CPC945 manual (section 7.2) confirms it.

If a machine has 4 GBytes of memory installed, and say, 1 GByte of I/O Memory is mapped at 0x80000000 (2 GBytes) upwards, then the physical memory will still be fully accessible. It will respond to read requests in the region 0x0 thru 0x7fffffff (i.e. 0 thru 2 GBytes - 1) and to the region 0xC0000000 thru 0x140000000 (i.e. 3 GBytes thru 5 GBytes).

This will of course only work if the CPU can make requests in that range, i.e. has a large enough address bus. Hence there is an actual hole that shadows physical memory when installing 4 GBytes in an x86-based system.

Dual Monitor on T60 (Internal + DVI)

I think I've found a way to make my T60p use the internal display and also drive an external monitor via the DVI port (on Linux). For some reason, this does not work automatically upon reboot, but has to be done from the command line.
aticonfig --dtop=horizontal --screen-layout=left --enable-monitor=lvds,tmds1
Now restart the X server and you should see video output on both monitors. I have the external monitor left of the laptop, so I need to run this command as well:
aticonfig --swap-monitor
Then, both monitors work. Unfortunately, I seem to have broken suspend/resume somewhere along the way. It seems that a combination of the things listed below makes suspend/resume work again. I don't know if both are required or if either alone helps.
  • Update the fglrx driver. I'm using the kernel-module-ATI-upstream-fglrx (carries version numbers 8.452.1 and 2.6.18_53.1) as well as the ATI-upstream-fglrx package (version number 8.452.1-3.oc2) from the repository the IBM Open Client uses by default.
  • Disable the AIGLX extension (see the xorg.conf snippet below)
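AIGLX can be turned off in xorg.conf; a minimal snippet (assuming no ServerFlags section exists yet) would look like this:
Section "ServerFlags"
        Option "AIGLX" "off"
EndSection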

Friday, May 23, 2008

On some revision control systems

So I've long wondered about the advantages of those shiny, modern (distributed) revision control systems that have seemingly become quite fashionable these days. I started with CVS and liked it, but once I moved to Subversion I started to feel a dissatisfaction with my version control system, the kind that would not go away if I went back to CVS. It's like driving a car: once you drive a faster car, you realize the faster car still is not fast enough. Obviously, going back to the original car is a step in the wrong direction.

CVS

The first revision control system I used was CVS. I liked it when I got used to it. It let me keep track of my files, was easy to back up and was fast enough for my little projects. There were good workarounds for CVS' limitations such as "renaming" files by performing an explicit add and then remove operation or by doing a "repo-copy" (i.e. copying the files in the repository itself). Empty directories could be pruned on updates. What else would one want?

Subversion

Well, I have long wondered why I should be using Subversion instead of CVS. After all, CVS has worked well for me in the past, and simply "because it can rename files" hasn't been too convincing. In fact, I have heard that argument so often and from so many people that it started to become a reason not to switch to Subversion. Well, then I gave Subversion a shot and I have to say, I like it - with some limitations.

But let me first say what I think is good about Subversion. I like Subversion's speed advantage over CVS. The reason I started using Subversion was that I wanted a way to track my own changes to a rather large source tree that is kept in CVS. I wanted to make periodic imports of snapshots and merge my local changes - similar to how vendor branches work in CVS. When trying to accomplish this with CVS, it became apparent that it would be very time consuming: an import and merge session could take several hours. Doing the same thing with Subversion accelerated the process quite a bit - an update session would still take about an hour, but mostly because I had to download the updated source tree from the net.

Ok, that's about it when it comes to Subversion's advantages. What I don't like is Subversion's poor handling of branches. I don't think a branch is just another directory; I think a branch is a branch. The same holds true for a tag. Also, merging branches is a major pain - while simple at first, it gets to the point where keeping track of what has been merged and what still needs merging is a complex task. Granted, CVS isn't a whole lot better at that.

To set things straight: I'm not saying Subversion is bad. All I'm saying is that it isn't a lot better than CVS for my purposes.

Mercurial

So now on to the reason I started this post. I realize it has become a lot longer than anticipated, but who cares?! Some time ago I read about the advantages that distributed revision control systems offer. Having the full history available with every working copy is one of them. The ability to keep track of merges between branches is another. It's the latter that got me interested in Mercurial. While I realize that upcoming versions of Subversion will support merge tracking as well, the version(s) installed on my various computers don't support it - and I don't want to compile some development branch of a (to me) critical piece of software.

So I looked at other choices, e.g. git and Mercurial. To be honest, I haven't looked at git because I heard it is supposed to be hard to use with more than 100 commands. So I started to look at Mercurial and played around with it. I like it, so I don't think I'll look at git anytime soon.

Mercurial has (almost) everything I need: it's fast, it's easy to use and it handles merges between branches well. I'm sure the "almost" can be erased as soon as I dig further. What's still missing is an easy setup for a permanent, central "master" repository (short of the "hg serve" command which starts a minimal HTTP server). I'm also missing a web frontend similar to CVSWeb and SVNWeb - I'm sure such a thing exists, but I haven't found it yet. The third thing I haven't figured out yet is how to do backups.

I'd like to write a bit about the differences between Mercurial and the other systems I've used. First, a revision of a file has up to two parents. Actually, it always has two parents, but one may be the null parent, and that doesn't count. You start out with a file that has two null parents. Once you commit a change to the file, the new revision's parent becomes the previous revision, and so on. If a revision has only one parent, and no other revision uses it as its parent, then that revision is called a head.

The second parent comes into play when you have multiple branches. Creating a branch is also different from other systems. You first switch a working copy to a new branch by issuing hg branch <name>. The default branch, i.e. what's called "HEAD" in CVS and "trunk" in Subversion, is called "default" in Mercurial. The default branch always exists. Switching a working copy to a new branch does not create the branch yet; only committing a first change in that working copy does. Note that the first revision's parent will be the branchpoint revision.

So what happens when you merge between branches? When you merge and commit, the current branch will contain all changes of the original branch since the last merge (or the branchpoint). You don't need to remember which changes need merging into the current branch - Mercurial does that automatically for you. This is possible because when merging, a file's second parent is filled in with the head revision of the originating branch. That also means that when you merge changes from branch A into branch B, the head revision of branch A is no longer a head. Don't worry, though: once you commit something on branch A again, it will have a head again.

Now on to the distributed part. The concept of branches is taken further in a distributed revision control system. Multiple instances of the repository can have dependencies between each other. A new copy of a repository is created by running the "hg clone" command. Then, changes to that repository copy can be made as if the original never existed. But, if you want to, you can also pull any changes the original repository has incorporated. Do this by running "hg pull" - it's really just merging all changes from your master repository into your copy. It also works the other way around: You can push your changes upstream by running "hg push" (if the upstream repository allows it).

All in all, I find Mercurial very easy to use once the basic concepts have been understood. I'm not sure yet whether I'll convert any of my Subversion repositories to Mercurial or if I'll use it seriously at all. But for future reference, here's a link to a free book on Mercurial.

Tuesday, May 20, 2008

Why Open Firmware is pretty neat

I've just been impressed by the power of Open Firmware again. I'm currently tinkering with the decrementer and time base registers on a PowerPC processor and I need to find out if some of my assumptions are correct.

One way to do that is to compile my C code, load and start it over the network and see if things work the way I think they should work. While this works, it's somewhat time consuming.

Another way of doing this is to use the Open Firmware user interface - a full featured Forth system. As such, it offers very powerful features during development. In fact, everything entered at the client interface could also be compiled into a forth word, which could even be included in the firmware's dictionary.

So let's take a look at the base conversion features Forth offers.
0> decimal
OK
0> 63 19 - dup .
44 OK
1> hex .
2c OK
The code above switches the default number base to decimal. Then the (decimal) numbers 63 and 19 are placed on the stack and a subtraction (63 - 19) is performed. What ends up on the stack is the result of the math operation. We duplicate the item (basically saving a copy for later use) and then pop and display the top value. The result is 44, i.e. the result of the subtraction when calculating with decimal numbers.
Now we switch the number base back to hexadecimal and display the stack's topmost value (we saved a copy of the calculation result earlier). The result is 2c, i.e. 44 displayed as a hexadecimal number.

Next up, logical operations. A left shift is defined as
lshift (value count -- value)
meaning you place value on the stack, place the number of bits you want it shifted by (count) on the stack, and when the lshift word returns, the shifted value will be on the stack. So take a look at this:
0> decimal 63 19 - hex
OK
1> 1 swap lshift
OK
1> dup .
100000000000  OK
The first line is the subtraction explained above. Then we push a 1 on the stack and swap the two topmost items. The stack now looks like ( 1 2c ), which we feed to the lshift operator. We duplicate the result and display one copy. And there's the bitmask, with the 44th bit set.

Moving on to the more firmware-specific parts. The Open Firmware implementation I'm using right now offers a word that lets me read the boot processor's HID0 register. The word is hid0@; it takes no input and places the register's value on the stack. Similarly, there's a word that lets me write the register: hid0!. It takes one argument from the stack and doesn't place anything on the stack.
So take the next sequence. I'm assuming it's executed right after the previously quoted sequence, so the calculated bitmask should be on the stack.
1>hid0@
OK
2> dup .
100080000000 OK
2> or dup .
100080000000 OK
1>hid0!
OK
0>
First, we read the HID0 register's value and display it in a non-destructive manner. Then we or the bitmask and the register value together and display the result. Note that the result is the same, meaning the 44th bit was already set. Finally, we write the value back to the register.

This is just an example of the power of Open Firmware. I'm going to play some other tricks right now, but I wanted this to be written down first so I can look it up again.

Monday, May 19, 2008

The TianoCore Contributor's Agreement

So, I finally found some time to crawl through the TianoCore project's Contributor's Agreement. Here's what I think it means.

  • Preamble: So Intel has decided to release some code under what it calls the "BSD license". Personally, I think the BSD license is something else or maybe even something like this. I don't think a link to an incomplete license stub is enough, though. But enough of the ranting.
    Just to be clear here, I think it is safe to assume that Intel released their code under the following license (note that it's just the stub they provide a link to, filled in with some meaningful values):
    Copyright (c) 2006-2008, Intel Corp.
    All rights reserved.
    
    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:
    
    * Redistributions of source code must retain the above copyright notice, this
      list of conditions and the following disclaimer.
    
    * Redistributions in binary form must reproduce the above copyright notice,
      this list of conditions and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
    
    * Neither the name of Intel Corp. nor the names of its contributors may be
      used to endorse or promote products derived from this software without
      specific prior written permission.
    
    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
    AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
    IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
    ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
    LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
    CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
    SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
    INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
    CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGE.
    
    In addition to their own code which they release under the "BSD license", there is some code in the TianoCore tree that is released under other licenses. Specifically the FAT32 code, which is apparently covered by some patents. If other licenses apply, and that's the key point here, the license is either in the source files themselves or packaged with the source files.
  • Definitions: I'm a "contributor" if I hold the copyright on some code that I contribute to the TianoCore project. If I represent a company, all my employees are considered part of my company and not separate contributors. A "contribution" is anything I sent to the TianoCore project, e.g. via mail, snail mail, telephone, etc. as long as I don't explicitly mark it as "not a contribution".
  • License for Contributions: So when I decide to contribute something to the TianoCore project, I automatically agree to provide it under a license, which can be found in the contributor's agreement. The bullet points a) and b) are pretty clear: the permission to use and redistribute my contributions, provided that the three conditions laid out in the "BSD license" quoted above are met.
    The next bullet point, c), is somewhat harder to understand. I interpret it as: if I hold a patent on something and my contribution infringes that patent, I automatically grant a patent license. I grant it to everybody who wants to exercise the rights I granted them with the copyright license mentioned above. However, here's the catch: that patent license applies only to my unmodified contribution.
    I'm not sure what to think about that. I think Intel is trying to protect their own patents. So if they release some code to the TianoCore project which is covered by a patent they own, they only grant a minimal patent license. What remains unclear is whether the patent license is still valid if I modify their code as permitted by the copyright license they granted.
    The last bullet point, d), is an easy one again. It's simply the "provided 'as is'" part in the copyright license cited above.
  • Representations: By signing the agreement, I promise that I created the code myself. If my employer does have any rights, I promise that it explicitly permitted me to contribute my code.
  • Third Party Contributions: If I chose to contribute third party code, I need to explicitly mark it as such. It also must be separate from my own contributions.
  • Miscellaneous: The agreement is in English and translations are not authoritative. If I want to take the whole thing to court, I need to do it in Delaware (US).
So what's the conclusion here? I think Intel is pretty open about releasing their code. However, they are not so open about creating an open source project around their code. What I mean is that there are quite a few legal hurdles one has to clear when contributing code to the TianoCore project. In effect, they force the BSD license on any code I contribute, and I think that's OK. On the other hand, however, they prevent me from forking the project by introducing that stupid patent clause, since I have no easy way of checking whether a specific piece of code infringes one of their patents.
I really wonder if they only want to protect themselves from getting sued over code contributed to the project by non-Intel employees. Or are they really trying to create the impression of an Open Source project when it's really not?

Saturday, May 17, 2008

What to do when Parallels brings the System to a halt

I don't know when this started, but recently, whenever I try to start Parallels (which admittedly doesn't happen very often), my whole system grinds to a halt. Well, not completely - the system is still running, but it won't even let me switch between applications in a responsive manner. Even mouse movement isn't smooth anymore.

Note that this is with Parallels 2 on Mac OS X 10.5. The system is a first generation Mac Pro with two 2.66 GHz Core 2 CPUs and 3 GBytes of RAM. So the system could use a little more RAM, but apart from that it shouldn't have any issues running Parallels. And in fact, things used to work just fine.

Anyways, here's a workaround I've discovered today:
$ sudo /Library/StartupItems/Parallels/Parallels stop
$ sudo /Library/StartupItems/Parallels/Parallels start

Tuesday, May 13, 2008

Porting TianoCore to a new platform, part 2

So it took me exactly one day to start the TianoCore DXE Core on a new platform. Of course, this doesn't count the roughly 10 weeks it took me to understand how the TianoCore Codebase works ;-) Also, it took me a fair amount of work to fix one thing or the other.

Anyways, I wanted to note that the generic memory test module included in the TianoCore codebase is nowhere near "quick" - despite the fact that it has a mode called QuickMode - when you throw it at an 8 GByte memory range.

Reminder about porting TianoCore to a new Platform

Just a quick note to remind myself that when porting the TianoCore stack to a new platform, the PEI needs an implementation of the PEI_LOAD_FILE_PPI before it can load any PEIMs from the FV.

For the platforms I have worked with so far, the PPI was implemented in the SEC.

Sunday, May 11, 2008

The beauty of security extensions

I just spent a good day debugging a problem that eventually turned out to be (most likely) caused by some Linux security extensions deployed on the machine I test my code on.

The code loads an ELF image at runtime and then transfers control to it. Previously, I worked with 32-Bit PowerPC executables that I ran on a 64-Bit PowerPC host. I recently changed this so that my code (as well as the ELF images it loads) would be 64-Bit PowerPC executables.

In order to obtain memory into which the ELF image could be loaded, I previously used malloc(3). I didn't want to use mmap(2) since I was going to port the code to an environment where mmap(2) would not be available. That worked fine in the 32-Bit case.

Anyways, it turns out that, in the 64-Bit case, trying to execute code in a malloc(3)-ed buffer instantly results in a segmentation fault. Using a mmap(2)-ed buffer (with the PROT_EXEC flag) fixes the issue.
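For reference, here is a minimal sketch of how such an executable buffer could be obtained with mmap(2); the image and image_size parameters are placeholders for the loaded ELF segment data:
#include <string.h>
#include <sys/mman.h>

/* Copy a loaded ELF segment into a freshly mapped, executable buffer. */
static void *alloc_exec_buffer(const void *image, size_t image_size)
{
        void *buf = mmap(NULL, image_size, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
                return NULL;
        memcpy(buf, image, image_size);
        return buf;
}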

I would still like to know why there is a difference between the 32-Bit and the 64-Bit case.

Thursday, May 8, 2008

Building e2fsprogs shared libraries

I found myself needing to build the e2fsprogs package (again) and found out that it doesn't build shared libraries by default. However, I need shared libraries, so this is what it takes to build the package:
$ ./configure --prefix=/opt --enable-elf-shlibs
$ make
$ make install
$ make install-libraries

Wednesday, May 7, 2008

Open Source SLOF

I just tried to download and build the SLOF open source release. The SLOF release itself can be downloaded here (note that an IBM ID, aka registration, is required). Let me just say that it has been a very pleasant experience - everything worked out of the box!

The release tar ball contains a file INSTALL with pretty good instructions on what needs to be done to compile the code. What's missing is a link to the required JS20/JS21/Bimini System-Interface Code. It's on the SLOF download page but it took me a moment to realize it's there.

Once the system interface code has been downloaded and extracted, run the install-oco.sh script that's included in the tar ball. It takes one parameter, the path to the extracted SLOF source code.

Also, the x86emu code needs to be pulled from the coreboot subversion repository. Execute the x86emu_download.sh script in other-licence/x86emu/ and it will do that for you.

Finally, export the CROSS environment variable. In my case I had to set it to "ppu-" by running
export CROSS=ppu-
Then, just run this command to compile the code:
make js2x
Almost all of the information above is included in the INSTALL file with the exception of the missing link. Again, this is a very pleasant surprise to me. There are other vendors where Open Source does not work so flawlessly. Hey there, Intel ;-).

Sunday, April 20, 2008

Fedora Firewall Management 101

On Fedora Core, the iptables rules are stored in /etc/sysconfig/iptables. To change the rules, the iptables(8) utility is used. When done, store the updated rules in /etc/sysconfig/iptables by running the following as root:
# /sbin/iptables-save > /etc/sysconfig/iptables
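Conversely, the saved rules can be loaded back from that file with iptables-restore(8):
# /sbin/iptables-restore < /etc/sysconfig/iptables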
I'm sure I'll need this again, and I'm sure I'll have forgotten all of it by then.

Thursday, April 17, 2008

D'oh! Stepping over functions in gdb

I feel really dumb at the moment. While debugging, I frequently find myself in need of a "step over function call" functionality. It has always bothered me that the gdb command "step" steps into functions, while I wasn't interested at all in what was happening inside the function. It turns out that gdb has the command I've always wanted, and it is:
(gdb) next

Friday, April 11, 2008

The GNU Debugger (gdb) and Files loaded at Runtime

I'm working on a project where one program loads an ELF file at runtime and then transfers control to that dynamically loaded file. The file will be loaded at an address obtained by malloc(3), so the address will be arbitrary (from a user perspective).

Normally, if a crash occurs while a program is being run under a debugger, the debugger automatically shows you where in the source code the crash occurred. This works only if the debugged program includes debug information. That debug information is stored in the executable file itself. It is ignored by the operating system, but the debugger can use it to resolve addresses to symbols, i.e. to variable and function names.

In my case, if a crash is caused by the dynamically loaded file, then all I see is the register contents and maybe a plain stack trace. That's because the debugger does not know where to find the debug information. Seeing the addresses is useful, as they can be used to look up the problematic instruction with objdump. However, debugging would be a lot easier if the addresses could be automatically resolved to symbols by the debugger.

The GNU Debugger (gdb), my debugger of choice, has an add-symbol-file command that instructs the debugger to load symbol information from a file on disk. This command can be issued before the program to be debugged is started, while it's running or even after it has crashed. The command requires two parameters. The first one tells it where to load the debug information from, i.e. the path to the ELF file on disk. The other one tells it at what address the .text section of the ELF file has been loaded. So the syntax is:
(gdb) add-symbol-file file.elf <address>
For my project, I'm expecting frequent crashes in the dynamically loaded program. So in order to make debugging less time consuming, I created a little .gdbinit file in my home directory that does these things:
  • Tell gdb which program is being debugged. This is the program that will load another one at runtime.
  • Set a breakpoint in that first stage program. The breakpoint needs to be somewhere where the address of the .text section of the dynamically loaded file can be determined.
  • Then start the program.
  • When the breakpoint is hit, print the address of the .text section of the dynamically loaded ELF file.
My .gdbinit file that does the above looks like this:
file primary_stage
break jump_to_second_stage
run
p/x entry_point
Now anytime the debugger is started, it will automatically load the primary stage ELF file, set a breakpoint and start the program. As soon as the breakpoint is hit, the address of the .text section will be printed. Since that's the first print instruction in the debug session, it can now be accessed through the use of a placeholder: $1.

Unfortunately, automatically loading the second stage ELF program from the .gdbinit file does not work. So I then need to manually enter this command:
(gdb) add-symbol-file second_stage $1
Well, it turns out that you can add the above command to the .gdbinit and it will work just fine. I have no idea why that did not work the last time I tried it. Now, a crash in the second_stage program will be reported with full symbol information available to the debugger.

Friday, April 4, 2008

DHCP, DNS and dynamic updates

Today I updated my router, a Soekris net4801 (I think), to OpenBSD 4.2. I know it's dumb to upgrade to OpenBSD 4.2 about three weeks before OpenBSD 4.3 is officially released, but today I actually had time and the box was in desperate need of an update. Also, I have recently moved and the network changed. I used to have a dedicated server running FreeBSD that also handled DNS and DHCP. When I moved, I shut the server down, and so for the last few weeks I had only very basic DHCP services running and no local DNS at all. Anyways, to make a long story short, I needed a DNS server that would resolve local names as well as a DHCP server that does dynamic DNS updates.

First, I installed OpenBSD 4.2 by netbooting the Soekris box from another OpenBSD "box" running inside Parallels. The instructions for that can be found in the OpenBSD FAQ. There is one thing to remember, though: The Soekris doesn't have a VGA console but only serial, so the PXE-booted kernel needs to be told that it should only use the serial console for output. So the boot.conf file the FAQ mentions needs to look like this:
set tty com0
set stty 9600
boot bsd.rd
Now on to the DHCP and DNS installation. The DHCP server included with OpenBSD will not do dynamic DNS updates, so the ISC DHCP Server is needed. It can be installed by running (as root):
# ftp ftp://ftp.de.openbsd.org/pub/OpenBSD/4.2/packages/i386/isc-dhcp-server-3.0.4p0.tgz
# pkg_add -v isc-dhcp-server-3.0.4p0.tgz
The dhcpd binary will be installed in /usr/local/sbin. Be aware that the base dhcpd included in OpenBSD lives in /usr/sbin, so simply typing "dhcpd" at the command line will most certainly start the OpenBSD DHCP Server! In order to automatically start the DHCP server on boot, these lines need to be added to /etc/rc.local:
if [ -x /usr/local/sbin/dhcpd ] ; then
        echo -n ' dhcpd' ; /usr/local/sbin/dhcpd
fi
Next, before the server can be fully configured, a key that will be shared between the DHCP and DNS servers needs to be created:
$ dnssec-keygen -a HMAC-MD5 -b 128 -n USER <name>
This command generates two files in the current working directory. The file with the extension *.private contains the key that is required later: wherever the configuration files include a "secret" statement, that value needs to be inserted. The parameter <name> determines the name of the key. It will be used later, although I don't know if the name actually has to be reused. For the rest of this posting, key_name represents the generated key's name.
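If you're wondering where the actual secret comes from: it's the base64 string on the Key: line of the generated .private file, something like the following (the file name is shown with the <name> placeholder; the 157 is the algorithm number dnssec-keygen uses for HMAC-MD5, and the key ID part of the name varies):
$ grep Key: K<name>.+157+*.private
Key: <base64-encoded secret>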

So now on to the DHCP Server configuration. My /etc/dhcpd.conf now looks like this:
option  domain-name "local.deadc0.de";

ddns-update-style ad-hoc;

key key_name {
        algorithm       hmac-md5;
        secret          "...";
}

zone local.deadc0.de. {
        primary 127.0.0.1;
        key key_name;
}

zone 1.168.192.in-addr.arpa. {
        primary 127.0.0.1;
        key key_name;
}

zone 2.168.192.in-addr.arpa. {
        primary 127.0.0.1;
        key key_name;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.1 192.168.1.127;
        
        option domain-name-servers 192.168.1.254;
        option routers 192.168.1.254;
}
        
subnet 192.168.2.0 netmask 255.255.255.0 {
        range 192.168.2.1 192.168.2.127;

        option domain-name-servers 192.168.2.254;
        option routers 192.168.2.254;
}
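Before relying on rc.local to start it at boot, the configuration can be checked for syntax errors; ISC dhcpd has a test flag for that (just a sanity check, -cf points it at the config file):
# /usr/local/sbin/dhcpd -t -cf /etc/dhcpd.conf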
Now that the DHCP server is configured, the DNS server needs to be configured as well. On OpenBSD, the DNS server is BIND, but it's started in a chroot environment, so its configuration files live under /var/named. The server configuration file is /var/named/etc/named.conf and looks like this on my system:
include "etc/rndc.key";

controls {
        inet 127.0.0.1 allow {
                localhost;
        } keys {
                key_name;
        };
};

acl clients {
        localnets;
        ::1;
};

options {
        listen-on    { any; };
        listen-on-v6 { any; };

        allow-recursion { clients; };
};

// Standard zones omitted.

zone "local.deadc0.de" {
        type master;
        file "master/local.deadc0.de";
        
        allow-update {
                key     "key_name";
        };
};

zone "1.168.192.in-addr.arpa" {
        type master;
        file "master/1.168.192.in-addr.arpa";

        allow-update {
                key     "key_name";
        };
};

zone "2.168.192.in-addr.arpa" {
        type master;
        file "master/2.168.192.in-addr.arpa";

        allow-update {
                key     "key_name";
        };
};
The actual zone files are not posted here; they are just standard zone declarations, nothing special. However, notice the include statement at the top of the file. It pulls in the key declaration file /var/named/etc/rndc.key, which looks like this:
key key_name {
        algorithm hmac-md5;
        secret "...";
};
In order to suppress some warnings when the DNS server starts, the file /var/named/etc/rndc.conf needs to be created. It should look like this:
options {
        default-server  localhost;
        default-key     "key_name";
};

server localhost {
        key     "key_name";
};

include "rndc.key";
Finally, everything under /var/named/etc and /var/named/master needs to be owned by the user "named", so as root run this:
# chown -R named:named /var/named/etc
# chown -R named:named /var/named/master
Now make sure that the DNS server is enabled by including this line in /etc/rc.conf.local:
named_flags=""
Then reboot the box and that should be it.
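Once a client has picked up a lease, the dynamic updates can be verified by querying the local name server directly; the host name and address below are obviously made up:
$ host client1.local.deadc0.de 192.168.1.254
$ host 192.168.1.42 192.168.1.254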

Wednesday, March 12, 2008

Embedding binary BLOBs into an ELF file

I needed this yesterday, found a link describing it - and forgot it by today :-(

For a project I'm working on, I need to embed a file into an ELF executable. The executable then needs to do things with the embedded file, i.e. it has to know where in memory the file resides and how large it is.

So here it goes, largely copied from the link mentioned above.
  • Create an object file from the binary blob:
    $ ld -r -b binary -o example.o example.bin
  • In the sources, declare the symbols:
    extern char _binary_example_bin_start[];
    extern char _binary_example_bin_end[];
    
  • Make sure the object file is linked with the other sources:
    $ gcc -o example example.c example.o
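For completeness, here is a minimal sketch of how the symbols can be used from C, assuming the object file was generated from example.bin as above; the size of the blob is simply the difference between the two addresses:
#include <stddef.h>
#include <stdio.h>

/* Symbols created by "ld -r -b binary" from example.bin */
extern char _binary_example_bin_start[];
extern char _binary_example_bin_end[];

int main(void)
{
        /* Size of the embedded file in bytes. */
        size_t size = _binary_example_bin_end - _binary_example_bin_start;

        printf("blob at %p, %zu bytes\n",
            (void *)_binary_example_bin_start, size);
        return 0;
}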
    

Tuesday, March 11, 2008

TianoCore on QEMU

There is a bios.bin binary file for use with QEMU available at http://fabrice.bellard.free.fr/qemu/efi-bios.tar.bz2. It is meant to be used as a BIOS replacement for QEMU and provides an EFI interface. It is compiled from the TianoCore sources, at least that's what the QEMU homepage suggests.

The problem with this file is that it can only be used with very few versions of QEMU, that's why I'm writing this.

I've had success with version 0.9.0 when the patches linked from the coreboot wiki were applied. I've also had success with a CVS snapshot from July 3rd, 2007. Neither version 0.9.1 nor a stock 0.9.0 works.

QEMU and kqemu on IBM's OpenClient

Today I wanted to try QEMU on IBM's OpenClient Linux distribution. Unfortunately, I was unable to install a binary package through yum because the default package repositories don't provide one. So I ended up installing QEMU from the sources.

Getting the sources is easy. To fetch the latest sources (i.e. CVS HEAD), just run:
$ cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/qemu \
 co qemu
In my case, I wanted version QEMU version 0.9.1, so I did this:
$ cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/qemu \
 co -rrelease_0_9_1 qemu
Building the sources is trivial as well. The usual three step process (configure, make, make install) works like a charm. If PREFIX isn't set, QEMU installs in /usr/local, but I want it in /opt. So here's what I did:
$ ./configure --prefix=/opt
$ make
$ sudo make install
Now I had a bunch of QEMU executables in /opt/bin, one for each target architecture. But I also wanted kqemu, the kernel accelerator for QEMU. Through the QEMU home page, I found this site, which provides kqemu RPMs for RHEL and Fedora.

For the IBM OpenClient distribution, I had to do this:
$ wget http://dl.atrpms.net/all/kqemu-1.3.0-2.el5.i386.rpm
$ wget http://dl.atrpms.net/all/kqemu-kmdl-2.6.18-53.1.13.el5-1.3.0-2.el5.i686.rpm
$ sudo rpm -iv kqemu-1.3.0-2.el5.i386.rpm kqemu-kmdl-2.6.18-53.1.13.el5-1.3.0-2.el5.i686.rpm
In case the links to the RPMs are truncated, there is a kqemu RPM and a kqemu-kmdl RPM.

Finally, in order to actually load the kernel module, I did this:
$ sudo modprobe kqemu
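To verify that the module is actually loaded and that QEMU will be able to use it (QEMU should pick kqemu up automatically when /dev/kqemu is accessible), a quick check looks like this:
$ lsmod | grep kqemu
$ ls -l /dev/kqemu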
Everything described here is pretty straightforward, but I wanted to make sure I document the installation of the kqemu module somewhere, hence this post.

Friday, March 7, 2008

Cell SDK 3.0 on IBM's OpenClient Distribution

Today I was finally able to build a simple HelloWorld application with and within the TianoCore build environment. This is good news, as it leads me to the next task: building the same HelloWorld application, but this time for Linux on 64-Bit PowerPC.

Since I do not yet have access to a 64-Bit PowerPC machine running Linux, I'm going to use the Cell SDK 3.0 for now. It can be used from an i386 machine and includes a toolchain as well as the full system simulator. The toolchain includes a cross-compiler that is capable of producing binaries for the Cell BE's PPU, which is essentially a 64-Bit PowerPC processor. The system simulator simulates a Linux installation running on Cell.

I'm still on IBM's OpenClient Linux distribution, which is apparently based on RHEL 5.1, at least according to /etc/redhat-release
Red Hat Enterprise Linux Client release 5.1 (Tikanga)
This is good on one hand, but made things slightly more complicated on the other hand. But first things first. Here's what I did to prepare the Cell SDK installation:
  • I went to the developerWorks download page for the Cell SDK 3.0 and downloaded the RHEL 5.1 packages.
  • I had to download the "basic libraries and headers for cross-compiling to Cell Broadband Engine's PPU", both the 32-Bit version and the 64-Bit version, from the Barcelona Supercomputer Center (BSC). Note that I could have built those RPMs myself, but only if I had a few other required RPMs like e.g. a glibc for PowerPC. Apparently those required RPMs are provided on the RHEL installation CDs, however, I'm on IBM's OpenClient and thus do not have access to the installation CDs. The good thing is, the Fedora RPMs provided by the BSC turned out to work just fine.
  • For the full system simulator, I had to download the sysroot image from the BSC website.
So that's it for the preparation part, now to actually installing the SDK.
  • I installed the installer RPM like this:
    # rpm -ivh cell-install-3.0.0-1.0.noarch.rpm
    This installs the installer to /opt/cell.
  • Now I needed to install the cross-compilation libraries and headers:
    # rpm -ivh ppu-sysroot-f7-2.noarch.rpm
    # rpm -ivh ppu-sysroot64-f7-2.noarch.rpm
  • Next I ran the installer as instructed by the installation manual:
    # cd /opt/cell
    # ./cellsdk --iso /home/phs/Downloads install
After successfully running the installer, I found a functioning cross-compiler in /opt/cell/toolchain/bin.
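A quick smoke test of the cross-compiler could look like this (the ppu-gcc binary name is an assumption; check /opt/cell/toolchain/bin for the exact names the SDK installs):
$ echo 'int main(void) { return 0; }' > hello.c
$ /opt/cell/toolchain/bin/ppu-gcc -o hello hello.c
$ file hello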

For the system simulator, I had to install the sysroot imae RPM like this:
# rpm -ivh sysroot_image-3.0-7.noarch.rpm
Unfortunately, I wasn't able to make the system simulator work because of a missing dependency on a simulation library.

By the way, there's also official documentation available here.

Thursday, March 6, 2008

Setting up a Subversion Server

OK, there are probably a million blog posts, tutorials and HowTos on how to do this already. Yet I still find it hard to find short instructions on how to set up a Subversion server quickly. I just had to do it again, and it took me longer than expected, so here it goes.

My requirements are pretty basic:
  • WebDAV so I can access the repository over HTTP.
  • I don't need too much security, so SSL won't be needed.
  • I want SVN::Web, a web front end to the repositories.
Here's what I did for the Subversion+WebDAV part. Note that I'm currently on IBM's OpenClient Linux Distribution which is based on RedHat.
  • I made sure that the httpd package is installed. I didn't have to install it, so I guess it's installed by default.
  • I had to install the mod_dav_svn package with yum:
    $ sudo yum install mod_dav_svn
  • The package installs a config file at /etc/httpd/conf.d/subversion.conf. By default, the <Location> tag is commented out. I just copied it, removed the comment signs and adjusted the values to my needs. This is what it now looks like:
    LoadModule dav_svn_module     modules/mod_dav_svn.so
    LoadModule authz_svn_module   modules/mod_authz_svn.so
    
    <Location /repos>
       DAV svn
       SVNParentPath /var/www/svn
    
       <LimitExcept GET PROPFIND OPTIONS REPORT>
          AuthType Basic
          AuthName "Authorization Realm"
          AuthUserFile /var/www/passwd
          Require valid-user
       </LimitExcept>
    </Location>
    
  • Now the Apache HTTP Server will serve the contents of /var/www/svn via WebDAV (I think), but it will query for a valid user entry. The entry must be created as follows:
    # cd /var/www
    # htpasswd -c passwd <username>
    
  • Next, I created the directories for the repositories. As root, I ran these commands:
    # cd /var/www
    # mkdir svn
    # cd svn
    # svnadmin create <name of repository>
    # chown -R apache:apache <name of repository>
    
  • I then activated the HTTP service so the Apache Web Server would be started at boot time. I did this using a GUI tool called system-config-services; I had to check the box next to the httpd entry. I didn't want to reboot right away, so I clicked Start in the toolbar. The tool told me the service was now running, and I could verify this by going to http://localhost/. Testing the Subversion part was easy, too: navigating to http://localhost/repos/example did the trick. Note that the actual repository name must be used instead of "example".
  • Oh, and I want at least minimal security. That is, I want the HTTP server to serve pages only to the local machine. Therefore, I changed the Listen directive in the server configuration file to this:
    Listen 127.0.0.1:80
    
The second part was installing the SVN::Web CGI script. Here's how I did it.
  • First, I had to install the subversion-perl package with yum. As root, I ran
    # yum install subversion-perl
  • Second, I installed the actual SVN::Web script through CPAN. Again as root, I did
    # cpan install SVN::Web
  • I then created a directory that would hold all the SVN::Web files:
    # cd /var/www
    # mkdir svnweb
    
  • In that directory, I let the Perl script set itself up using default values:
    # cd /var/www/svnweb
    # svnweb-install
    
  • The last step created a file called config.yaml. It must be edited so the CGI script finds the repositories. Near the end, I edited the reposparent value:
    reposparent: '/var/www/svn'
  • Now, as the final step, the script needs to be introduced to the Apache Server. I created a file svnweb.conf in /etc/httpd/conf.d with the following contents:
    Alias /svnweb /var/www/svnweb
    
    AddHandler cgi-script .cgi
    
    <Directory /var/www/svnweb>
            Options All ExecCGI
            DirectoryIndex index.cgi
    </Directory>
    
After restarting the Apache HTTP Server, I could access http://localhost/svnweb and see the repositories.
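As a final end-to-end test, a checkout and commit over HTTP should now work (repository name and file are placeholders; the commit will ask for the htpasswd credentials created earlier):
$ svn co http://localhost/repos/example example-wc
$ cd example-wc
$ echo "hello" > README.txt
$ svn add README.txt
$ svn commit -m "initial import"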

Tuesday, March 4, 2008

TianoCore on IBM's OpenClient

I started work at IBM yesterday. Today I found a room and got all my user accounts and passwords set up, so I can actually start to work.

I'm trying to build the TianoCore EDKII (SVN Revision 4792) on IBM's own Linux distribution, called OpenClient. The distribution sucks but it's the only kind of Linux we're allowed to use around here. It's based on Red Hat (I think), but it feels "different".

To bootstrap the TianoCore Build Environment, I sort of followed my FreeBSD and Fedora Core notes. Here's what I did:

  • Installed Java JDK 6 Update 4 (JDK 1.6.0_04) for Linux through the self-extracting binary file (not the RPM) that Sun provides. I placed the JDK in /opt.
  • Installed the binary distribution of Apache Ant 1.7.0, Saxon 8.1.1, XMLBeans 2.1.0 as well as ant-contrib 1.0b3 and placed all of them in /opt.
  • Created the symlinks:
    $ cd /opt/apache-ant-1.7.0/lib
    $ sudo ln -s /opt/ant-contrib/ant-contrib-1.0b3.jar ant-contrib.jar
    $ sudo ln -sf /opt/saxonb8.1.1/saxon8.jar /opt/xmlbeans-2.1.0/lib/saxon8.jar
    
I then created a small script that automates the build tool bootstrapping process:
export JAVA_HOME=/opt/jdk1.6.0_04
export XMLBEANS_HOME=/opt/xmlbeans-2.1.0
export ANT_HOME=/opt/apache-ant-1.7.0
export WORKSPACE=/home/phs/Sources/edk2
export PATH=$PATH:$XMLBEANS_HOME/bin:$ANT_HOME/bin
. edksetup.sh ForceRebuild
Sourcing the script successfully builds the build tools. Then, in the file $WORKSPACE/Tools/Conf/target.txt, two settings need to be adjusted:
ACTIVE_PLATFORM=EdkUnixPkg/Unix.fpd
TOOL_CHAIN_TAG=ELFGCC
Of course, the previously mentioned patch needs to be applied. After that, the EDKII Unix Emulation Package can be built and run as described in the tutorial:
$ cd $WORKSPACE/
$ build
$ cd Build/Unix
$ . run.cmd
I found that the IBM OpenClient distribution already includes the e2fsprogs-devel package as well as the relevant X11 development packages. Please also note that it is not necessary to build a PE32+ cross compiler on Linux.

Thursday, February 7, 2008

TianoCore on Fedora Core 8

My attempts at building anything of the TianoCore EDK2 codebase on FreeBSD/amd64 have been extremely frustrating so far. I guess trying to build on an unsupported operating system and on an unsupported architecture may have been too much of a hurdle for the beginning. So I figured I'd try building the EDK Unix Simulator (Trunk Revision 4679) on Fedora Core 8. Here's what I've done.

I pretty much followed this tutorial. It's for Gentoo, so a little tweaking was needed.

First, I downloaded and installed the JDK 6 from SUN via the RPMs they provide. The JDK ends up in /usr/java/jdk1.6.0_04, so the JAVA_HOME environment variable needs to be set to that.

Second, I didn't bother installing the needed (Java) tools through the Fedora Package Manager but instead downloaded the files manually and placed them under /opt. This is similar to what I did for my FreeBSD attempts. I needed two symlinks like the tutorial says:
$ cd /opt/apache-ant-1.7.0/lib
$ sudo ln -s /opt/ant-contrib/ant-contrib-1.0b3.jar ant-contrib.jar
$ sudo ln -sf /opt/saxonb8.1.1/saxon8.jar /opt/xmlbeans-2.1.0/lib/saxon8.jar
Third, I needed to install the e2fsprogs-devel package. The e2fsprogs package (without the "-devel" suffix) isn't enough. Also, I had to install the X development packages. I don't know what exact package was needed, but the Fedora Core 8 package manager has this option that lets you install some pre-selected packages related to X development.

Fourth, I had to apply the following patch (the tutorial mentions this):
Index: EdkModulePkg/Bus/Pci/PciBus/Dxe/PciHotPlugSupport.c
===================================================================
--- EdkModulePkg/Bus/Pci/PciBus/Dxe/PciHotPlugSupport.c (Revision 4679)
+++ EdkModulePkg/Bus/Pci/PciBus/Dxe/PciHotPlugSupport.c (Arbeitskopie)
@@ -21,7 +21,7 @@

 --*/

-#include "Pcibus.h"
+#include "pcibus.h"
 #include "PciHotPlugSupport.h"

 EFI_PCI_HOT_PLUG_INIT_PROTOCOL  *gPciHotPlugInit;
Finally, contrary to what the tutorial says, I used the following script to set up the environment. Note that I didn't include the TOOL_CHAIN line and that I didn't build the PE/COFF capable GCC.
export JAVA_HOME=/usr/java/jdk1.6.0_04
export XMLBEANS_HOME=/opt/xmlbeans-2.1.0 
export ANT_HOME=/opt/apache-ant-1.7.0
export WORKSPACE=/home/phs/edk2/edk2
export PATH="$PATH:$XMLBEANS_HOME/bin:$ANT_HOME/bin" 
The build went smoothly after that and I was able to use the EDK Unix environment. Note that the thing should not be called "Unix" Package, since it heavily assumes that it runs on Linux in some areas.

Wednesday, February 6, 2008

TianoCore on FreeBSD/amd64, Take 2

Now that I have built a cross compiler, I'm trying to build the TianoCore EDK again.

In an earlier post, I listed a few environment variables that need to be set in order to build the EDK. Because the build process needs to use the cross compiler, another environment variable is needed:
$ export CC=/opt/i386-tiano-pe/bin/gcc
Since I'm lazy and all, I put everything in a little script called env.sh:
export WORKSPACE=/home/phs/edk2
export JAVA_HOME=/usr/local/diablo-jdk1.5.0
export ANT_HOME=/opt/apache-ant-1.6.5
export XMLBEANS_HOME=/opt/xmlbeans-2.1.0
export PATH=$PATH:$ANT_HOME/bin:$XMLBEANS_HOME/bin
export CC=/opt/i386-tiano-pe/bin/gcc
Also, the EDK build notes mention that the default build target is the Windows NT Emulation environment. However, that target cannot be built using GCC, so I needed to edit the file Tools/Conf/target.txt and change the ACTIVE_PLATFORM to:
ACTIVE_PLATFORM       = EdkUnixPkg/Unix.fpd
Now, I can kick off the build process as follows:
$ cd ~/edk2
$ . env.sh
$ . edksetup.sh newbuild
Note that the build script must be "sourced", not executed. Unfortunately, the build process still fails, but now with a different error:
       [cc] 1 total files to be compiled.
       [cc] In file included from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.h:3,
       [cc]                  from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:17:
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:27:20: error: jni_md.h: No such file or directory
       [cc] In file included from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.h:3,
       [cc]                  from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:17:
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:45: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jsize'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:104: error: expected specifier-qualifier-list before 'jbyte'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:193: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1834: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1842: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1851: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1888: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1927: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1930: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1933: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1937: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1940: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
       [cc] In file included from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:17:
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.h:16: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jbyteArray'
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:29: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jbyteArray'

BUILD FAILED
/home/phs/edk2/Tools/build.xml:22: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/build.xml:254: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/CompressDll/build.xml:45: gcc failed with return code 1
The root cause of this is that the compiler can't find the jni_md.h header which is located in $JAVA_HOME/include/freebsd (at least on my system). I worked around this problem by editing $JAVA_HOME/include/jni.h as follows:
--- jni.h.orig  2008-02-06 11:50:05.000000000 +0100
+++ jni.h       2008-02-06 11:50:16.000000000 +0100
@@ -24,7 +24,7 @@
 /* jni_md.h contains the machine-dependent typedefs for jbyte, jint
    and jlong */
 
-#include "jni_md.h"
+#include "freebsd/jni_md.h"
 
 #ifdef __cplusplus
 extern "C" {
Now I'm stuck with yet another compilation error:
init:
     [echo] Building the EDK Tool Library: CompressDll

Lib:
       [cc] 1 total files to be compiled.
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c: In function 'Java_org_tianocore_framework_tasks_Compress_CallCompress':
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:57: warning: overflow in implicit constant conversion
       [cc] Starting link
       [cc] /usr/bin/ld: /home/phs/edk2/Tools/CCode/Source/Library/libCommonTools.a(EfiCompress.o): relocation R_X86_64_32S can not be used when making a shared object; recompile with -fPIC
       [cc] /home/phs/edk2/Tools/CCode/Source/Library/libCommonTools.a: could not read symbols: Bad value

BUILD FAILED
/home/phs/edk2/Tools/build.xml:22: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/build.xml:254: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/CompressDll/build.xml:45: gcc failed with return code 1
Well, I guess we'll see where this ends...

Monday, February 4, 2008

Building a Cross Compiler on FreeBSD

I'm currently trying to build a cross compiler (and other required tools) on FreeBSD. The compiler will run on FreeBSD/amd64 and should produce i386 binaries. This wouldn't be too hard since that task can easily be accomplished by using the FreeBSD source tree. However, I need the toolchain to produce binaries in the PE/COFF format instead of the default ELF format.

Building the toolchain is somewhat tricky, at least I found it to be poorly documented. But maybe I just looked in the wrong places. Building the toolchain requires:
  • Building binutils.
  • Locating some header files.
  • Building the compiler.
  • Building a C Library.
  • Building the compiler again so it uses the C library that was just built.


For my needs, I found that the last two steps weren't needed. I wrote a script that downloads the sources, extracts the archives and builds the toolchain. Here's the script in full (I really wish there was a way to upload files to this thing):
#!/bin/sh

#
# Copyright (c) 2008 Philip Schulz 
#

#
# This script downloads, builds and installs GCC and GNU Binutils that can
# produce x86 binaries in PE/COFF Format. The cross toolchain needs some headers
# that aren't usually present on the host system. However, those headers can be
# obtained from the cygwin sources, that's why a snapshot of the cygwin sources
# is downloaded.
#
# After the script finishes, the tools will be located at
# ${PREFIX}/${TARGET_ARCH}/bin. Some other binaries will be installed in
# ${PREFIX}/bin with their own prefix of ${TARGET_ARCH}, but I don't know what
# they are for.
#

# Prefix where the Cross-Tools will live
PREFIX="${PREFIX:-/opt}"

# Target architecture.
TARGET_CPU="${TARGET_CPU:-i386}"
TARGET_ARCH=${TARGET_CPU}-tiano-pe

# Program that can fetch the files.
FETCH_COMMAND="/usr/bin/ftp -V"

# GNU Make
GNU_MAKE=`which gmake`

################################################################################
#
# GCC settings.
#
################################################################################
# What version of GCC will be fetched, built and installed 
GCC_VERSION=gcc-4.2.3
# What mirror to use.
GCC_MIRROR=ftp://ftp-stud.fht-esslingen.de/pub/Mirrors/ftp.gnu.org
# File name of the GCC sources. Should probably not be changed.
GCC_ARCHIVE=$GCC_VERSION.tar.bz2
# Where the GCC Sources can be fetched from. Should probably not be changed.
GCC_URL=$GCC_MIRROR/gcc/$GCC_VERSION/$GCC_ARCHIVE
# Arguments for the GCC configure script. Should probably not be changed.
GCC_CONFIGURE_ARGS="--prefix=${PREFIX} --target=${TARGET_ARCH} "
GCC_CONFIGURE_ARGS="${GCC_CONFIGURE_ARGS}--with-gnu-as --with-gnu-ld --with-newlib "
GCC_CONFIGURE_ARGS="${GCC_CONFIGURE_ARGS}--disable-libssp --disable-nls --enable-languages=c "
GCC_CONFIGURE_ARGS="${GCC_CONFIGURE_ARGS}--program-prefix=${TARGET_ARCH}- "
GCC_CONFIGURE_ARGS="${GCC_CONFIGURE_ARGS}--program-suffix=-4.2.3 "


################################################################################
#
# Binutils settings.
#
################################################################################
# What version of the GNU binutils will be fetched, build and installed
BINUTILS_VERSION=binutils-2.18
# What mirror to use.
BINUTILS_MIRROR=ftp://ftp-stud.fht-esslingen.de/pub/Mirrors/ftp.gnu.org
# File name of the binutils sources. Should probably not be changed.
BINUTILS_ARCHIVE=$BINUTILS_VERSION.tar.gz
# Where the GCC Sources can be fetched from. Should probably not be changed.
BINUTILS_URL=$BINUTILS_MIRROR/binutils/$BINUTILS_ARCHIVE
# Arguments for the GCC configure script. Should probably not be changed.
BINUTILS_CONFIGURE_ARGS="--prefix=${PREFIX} --target=${TARGET_ARCH} "
BINUTILS_CONFIGURE_ARGS="${BINUTILS_CONFIGURE_ARGS}--disable-nls "

################################################################################
#
# Cygwin settings.
#
################################################################################
CYGWIN_SNAPSHOT=20080129
CYGWIN_ARCHIVE=cygwin-src-${CYGWIN_SNAPSHOT}.tar.bz2
CYGWIN_MIRROR=http://cygwin.com/
CYGWIN_URL=${CYGWIN_MIRROR}snapshots/${CYGWIN_ARCHIVE}
CYGWIN_DIR=cygwin-snapshot-${CYGWIN_SNAPSHOT}-1

################################################################################
#
# Batch code.
#
################################################################################
#
# Fetches the files.
#
do_fetch() {
        if [ \! \( -f $GCC_ARCHIVE \) ] ; then
                echo "Fetching ${GCC_URL}"
                ${FETCH_COMMAND} ${GCC_URL}
        else
                echo $GCC_ARCHIVE already locally present.
        fi

        if [ \! \( -f $CYGWIN_ARCHIVE \) ] ; then
                echo "Fetching ${CYGWIN_URL}"
                ${FETCH_COMMAND} ${CYGWIN_URL}
        else
                echo $CYGWIN_ARCHIVE already locally present.
        fi

        if [ \! \( -f $BINUTILS_ARCHIVE \) ] ; then
                echo "Fetching ${BINUTILS_URL}"
                ${FETCH_COMMAND} ${BINUTILS_URL}
        else
                echo $BINUTILS_ARCHIVE already locally present.
        fi
}

#
# Extracts the archives.
#
do_extract() {
        # Remove already extracted files first.
        rm -rf ${GCC_VERSION}
        rm -rf ${CYGWIN_DIR}
        rm -rf ${BINUTILS_VERSION}

        # Extract the archives
        if [ -f $GCC_ARCHIVE ] ; then
                echo "Extracting ${GCC_ARCHIVE}"
                tar -jxf ${GCC_ARCHIVE}
        fi

        if [ -f $CYGWIN_ARCHIVE ] ; then
                echo "Extracting ${CYGWIN_ARCHIVE}"
                tar -jxf ${CYGWIN_ARCHIVE}
        fi

        if [ -f $BINUTILS_ARCHIVE ] ; then
                echo "Extracting ${BINUTILS_ARCHIVE}"
                tar -xzf ${BINUTILS_ARCHIVE}
        fi
}


BUILD_DIR_PREFIX=build-

#
# Builds Binutils.
#
do_binutils_build() {
        BUILD_DIR_BINUTILS=${BUILD_DIR_PREFIX}binutils-${TARGET_ARCH}

        # Remove dir if it exists.
        if [ -d $BUILD_DIR_BINUTILS ] ; then
                rm -rf $BUILD_DIR_BINUTILS
        fi

        echo "Building binutils..."

        # Changing directory, so use a sub-shell (?)
        (
                # Create the build directory.
                mkdir ${BUILD_DIR_BINUTILS} && cd ${BUILD_DIR_BINUTILS};
                # Configure, build and install binutils
                ../${BINUTILS_VERSION}/configure ${BINUTILS_CONFIGURE_ARGS} &&
                ${GNU_MAKE} -j 12 -w all && ${GNU_MAKE} -w install
        )

        # Remove build dir
        rm -rf $BUILD_DIR_BINUTILS

        echo "Binutils Build done."
}

#
# "Builds" cygwin. Actually, it only copies some headers around.
#
do_cygwin_build() {
        HEADERS=${PREFIX}/${TARGET_ARCH}/sys-include

        mkdir -p $HEADERS  &&
        cp -rf ${CYGWIN_DIR}/newlib/libc/include/* $HEADERS &&
        cp -rf ${CYGWIN_DIR}/winsup/cygwin/include/* $HEADERS
}

#
# Builds GCC
#
do_gcc_build() {
        BUILD_DIR_GCC=${BUILD_DIR_PREFIX}gcc-${TARGET_ARCH}

        # Remove dir if it exists.
        if [ -d $BUILD_DIR_GCC ] ; then
                rm -rf $BUILD_DIR_GCC
        fi

        echo "Building GCC..."

        # Changing directory, so use a sub-shell (?)
        (
                # Create the build directory.
                mkdir ${BUILD_DIR_GCC} && cd ${BUILD_DIR_GCC};
                # Configure, build and install GCC.
                ../${GCC_VERSION}/configure $GCC_CONFIGURE_ARGS &&
                ${GNU_MAKE} -j 12 -w all && ${GNU_MAKE} -w install
        )
        # Remove build dir
        rm -rf $BUILD_DIR_GCC

        echo "GCC Build done."
}

do_fetch
do_extract
do_binutils_build
do_cygwin_build
do_gcc_build
Unfortunately, the gcc binary built by the script, located in /opt/i386-tiano-pe/bin, can't link executables. Invoking the compiler on a source file (a "Hello, World!" program) dies with:
$ /opt/i386-tiano-pe/bin/gcc main.c -o main
/opt/lib/gcc/i386-tiano-pe/4.2.3/../../../../i386-tiano-pe/bin/ld: crt0.o: No such file: No such file or directory
collect2: ld returned 1 exit status
I assume this is because I skipped the last two steps in the list at the beginning of this post. However, using the compiler to generate an assembly file (parameter -S) and then running the assembler on that file does indeed produce a PE/COFF object file.
$ cd /opt/i386-tiano-pe/bin
$ ./gcc -S ~/main.c
$ ./as -o main.o main.s
$ file ./main.o
./main.o: MS Windows COFF Intel 80386 object file

TianoCore on FreeBSD/amd64

I'm attempting to build the TianoCore code base on FreeBSD/amd64. Here's what I did so far.
  • In order to be able to check out the EDK2 sources, I installed the devel/subversion port. To check out the source tree, I did this in my home directory:
    $ svn co https://edk2.tianocore.org/svn/edk2/trunk/edk2 edk2
    

  • I installed the java/diablo-jdk15 port.

  • Downloaded Apache Ant 1.6.5. I didn't install it through the port but instead downloaded the binary distribution and extracted the archive under /opt since the EDK2 build framework requires very specific versions of the tools.

  • Did the same thing with Ant-Contrib 1.0b3, XMLBeans 2.1.0 and Saxon 8.1.1.

  • I created a symbolic link at /opt/xmlbeans-2.1.0/lib pointing to /opt/saxon-8.1.1/saxon8.jar. The build notes for the EDK said a copy was needed, but a symbolic link works just as well. I guess they were running Windows or didn't know about links. Whatever.

  • Then I set up the environment for the build process as described in the build notes. This is what I did (note that my shell is bash):
    $ export WORKSPACE=/home/phs/edk2
    $ export JAVA_HOME=/usr/local/diablo-jdk1.5.0
    $ export ANT_HOME=/opt/apache-ant-1.6.5
    $ export XMLBEANS_HOME=/opt/xmlbeans-2.1.0
    $ export PATH=$PATH:$ANT_HOME/bin:$XMLBEANS_HOME/bin
    

  • I kicked off the build process with this command:
    $ bash edksetup.sh newbuild
    
    Unfortunately, the build fails with an error:
    BUILD FAILED
    /usr/home/phs/edk2/Tools/build.xml:22: The following error occurred while executing this line:
    /usr/home/phs/edk2/Tools/CCode/Source/build.xml:247: The following error occurred while executing this line:
    /usr/home/phs/edk2/Tools/CCode/Source/PeCoffLoader/build.xml:68: ar failed with return code 139
    
    This error can be solved by using a GCC that produces PE/COFF binaries instead of the default ELF images.