Tuesday, July 3, 2012

Ubuntu 11.10 on Lenovo W520

At work, I got a new laptop -- a Lenovo W520. It came with Ubuntu 11.10 ("Oneiric Ocelot") pre-installed by the support team. My first impression was that it worked pretty well, but I quickly discovered that I couldn't change the brightness of the display through the Fn+Home/End keys.


The W520 uses NVIDIA's "Optimus" technology: an integrated on-board graphics card plus a discrete NVIDIA card, where both cards can be switched on the fly -- on Windows. The default installation used the high-performance card, and I suspected that the video card driver was keeping me from adjusting the brightness. As it turns out, after switching to the on-board Intel card, things worked fine. Here's what I needed to do:


First, I needed to disable the NVIDIA card in "the BIOS" and switch to the on-board card. There's an option somewhere under "Setup", then "Display", if I recall correctly.


Next, I changed the video card section in my /etc/X11/xorg.conf to read this:


Section "Device"
  Identifier "Device0"
  Driver "intel"
  Option "Shadow" "True"
  Option "DRI" "True"
EndSection

In fact, I added the two Option lines later on and only changed the Driver line at first. I then discovered that most of the little tray icons in the upper right corner of the Gnome desktop wouldn't show anymore. A look at /var/log/Xorg.0.log turned up some errors, and running glxinfo yielded lines like these:


Xlib:  extension "GLX" missing on display ":0.0".

Luckily, someone else ran into the same issues over at http://theiszm.wordpress.com/2010/06/27/glx-missing-on-display/. As indicated there, I also ran these commands:


$ sudo apt-get purge nvidia*
$ sudo apt-get install --reinstall xserver-xorg-video-intel \
    libgl1-mesa-glx libgl1-mesa-dri xserver-xorg-core
$ sudo dpkg-reconfigure xserver-xorg
$ sudo update-alternatives --remove gl_conf /usr/lib/nvidia-current/ld.so.conf

The first command hinted that some "ubuntu-desktop" package would also be removed. I don't know what that is, but I don't miss it, yet. Anyways, after a final reboot, the brightness adjustment now works and all my tray icons are back in place.

Wednesday, November 2, 2011

CMake and C++ "Compile Time" Polymorphism

For a recent project of mine, I wanted to use what some people call "Compile Time Polymorphism" in C++. Here's how I implemented it.

Polymorphism, in the context of programming, usually refers to the ability to treat objects of different data types through the same interface. In C++, this is often implemented through class inheritance and the use of virtual functions. The textbook example of this concept is two classes, Cat and Dog, that inherit from a common superclass Animal. Animal has a method makeSound() that is implemented by each subclass accordingly. In real software projects, polymorphism is used to hide multiple implementations behind a uniform interface. Here's an example of how this concept is usually expressed in C++:
class Animal {
public:
 virtual void makeSound(void) = 0;
};

class Cat : public Animal {
public:
 void makeSound(void);
};

class Dog : public Animal {
public:
 void makeSound(void);
};
The issue with this code is that it requires the use of virtual functions, which means you need a vtable for the concrete subclasses. Usually, as a programmer, you don't need to worry about vtables as the compiler takes care of that for you. But let's take a look at how this works anyways.

A vtable is basically a table of function pointers. For each of the concrete classes shown above, the vtable contains a pointer to the respective makeSound method. Also, each object carries a pointer to the vtable. At runtime, when a virtual method of an object is called, the pointer to the vtable is resolved to the actual vtable. From there, the address of the method is loaded and the call to it is made indirectly. So the use of virtual methods not only increases the size of your code, but also the size of your objects. In addition to that, it forces the compiler to use indirect function calls through pointers, which are usually slower than direct function calls. Again, the compiler takes care of all of that, so this is purely informational.
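As a rough illustration of the object size cost (the exact numbers depend on compiler, ABI and architecture), here's a small sketch comparing a class with and without a virtual method:

#include <iostream>

// Object layout without virtual methods: just the data members.
struct PlainCat {
 int age;
 void makeSound() {}
};

// The same data members, but with a virtual method: every object now also
// carries a pointer to the class's vtable.
struct VirtualCat {
 int age;
 virtual void makeSound() {}
};

int main() {
 // On a typical 64-bit system this prints something like "4 16":
 // PlainCat holds only the int, while VirtualCat additionally holds the
 // vtable pointer (plus padding).
 std::cout << sizeof(PlainCat) << " " << sizeof(VirtualCat) << std::endl;
 return 0;
}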

All of the above is okay and in fact required if you don't know the concrete type of an object until the software actually runs. Also, in most software projects, the drawbacks don't matter and aren't even noticeable. But there are situations where you may not want to pay the price of virtual methods, e.g. in a resource limited embedded system.

Also, there are situations where the concrete implementation of an interface is already known at compile time. This is true, for example, when you have an interface that abstracts functionality specific to a certain operating system: when you compile the software, you already know what the target operating system will be, so you can simply use and link the right implementation of the interface instead of postponing the decision to runtime.

So how would you use polymorphism in C++ without the use of virtual methods?

Here's how you could do it:
typedef enum OperatingSystemImpl {
 Darwin,
 FreeBSD,
 Linux
} OperatingSystemImpl_t;

template <OperatingSystemImpl_t> struct OperatingSystemChoice;

class DarwinImpl;
class FreeBSDImpl;
class LinuxImpl;

template<> struct OperatingSystemChoice<Darwin> {
 typedef DarwinImpl m_type;
};

template<> struct OperatingSystemChoice<FreeBSD> {
 typedef FreeBSDImpl m_type;
};

template<> struct OperatingSystemChoice<Linux> {
 typedef LinuxImpl m_type;
};

struct OperatingSystemService {
 typedef OperatingSystemChoice< ... >::m_type m_type;
};
Of course, the ellipsis must be expanded, but more on that later. What's important is how software would use this construct:
OperatingSystemService::m_type OsServiceObj;
The snippet above creates an object of the correct type, depending on what the ellipsis expands to. The neat thing is that the compiler ensures that the ellipsis is expanded to a valid "type" as defined in enum OperatingSystemImpl. It also makes sure that the actual, underlying class is declared, e.g. class DarwinImpl.
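The implementation classes themselves aren't shown in this post, but here's a minimal sketch of what one of them, and code using it, might look like (the method name getPageSize is made up for illustration):

// Hypothetical sketch: every implementation class provides the same set of
// methods purely by convention; there is no base class that enforces it.
class LinuxImpl {
public:
 unsigned long getPageSize(); // would wrap e.g. sysconf(_SC_PAGESIZE) on Linux
};

// Client code never names LinuxImpl directly; it only uses the typedef:
OperatingSystemService::m_type OsServiceObj;
unsigned long pageSize = OsServiceObj.getPageSize(); // direct call, no vtable lookup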

In other words: if you tried to compile the software with the ellipsis expanded to Windows, you'd get a compilation error. If you had implemented this using classic polymorphism, you'd probably have some code that dynamically creates the right object depending on the given input. That means you'd have to test your compiled code by feeding it an invalid type, which means you must run your software. I'm convinced that finding problems earlier is better, so finding an issue when code is compiled beats finding it when the code is run.
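For example, if the ellipsis expanded to Windows, which is neither an enumerator of OperatingSystemImpl nor backed by a declared implementation class, the typedef simply would not compile:

struct OperatingSystemService {
 typedef OperatingSystemChoice<Windows>::m_type m_type; // fails to compile: 'Windows' is not declared
};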

So back to how the ellipsis is expanded. Here's where CMake, a build system, comes into play. CMake uses input files that describe how the software needs to be compiled. Those input files, as with other build systems, can define compiler flags. CMake also defines a variable that contains the operating system's name; I suspect it's the output of uname. So here's what I added to my top-level CMakeLists.txt file:
add_definitions("-DOPERATING_SYSTEM=${CMAKE_SYSTEM_NAME}")
This makes the OPERATING_SYSTEM macro known to the preprocessor, so the code with the ellipsis can be rewritten like this:
struct OperatingSystemService {
 typedef OperatingSystemChoice<OPERATING_SYSTEM>::m_type m_type;
};
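On a Linux build host, for example, CMAKE_SYSTEM_NAME is "Linux", so after preprocessing the typedef effectively reads:

typedef OperatingSystemChoice<Linux>::m_type m_type; // i.e. m_type is LinuxImpl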
Et voilà, the right type is picked when the code is compiled.

Here are the nice things about this: there is no need for virtual methods, which eliminates the need for vtables. Also, invalid or unsupported operating system types are caught at compile time instead of at runtime, while the code for all supported operating systems can still always be compiled (just not used).

One apparent downside is that you no longer have an enforced interface like you get with pure virtual classes, i.e. the compiler may not tell you that you forgot to implement a newly added method in one of your implementation classes. However, this is more of a minor issue: you will still get a compilation error in that case, but only when compiling for the target system whose implementation is missing the newly added method.
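To stick with the made-up names from the sketch above: suppose a method getProcessId() is added to LinuxImpl but forgotten in DarwinImpl. All Linux builds still compile; the mistake only surfaces once somebody builds for Darwin:

OperatingSystemService::m_type OsServiceObj;
OsServiceObj.getProcessId(); // compiles on Linux (m_type is LinuxImpl), but fails on
                             // Darwin because DarwinImpl lacks this hypothetical member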

Saturday, March 5, 2011

Article about Interrupt Routing on x86

Here's a good article about how PCI interrupts for x86 machines are implemented under FreeBSD. While the article is targeted at FreeBSD, it also looks into the various interrupt routing mechanisms on x86, which apply to all systems software such as firmware and operating systems.

Friday, December 31, 2010

TianoCore and coreboot, again

It has been over two years since I last worked on TianoCore for coreboot. Recently, I had some free time to spend, and I used it to continue the project. I updated my code to the latest versions of coreboot and TianoCore and got it to work again in QEMU.

Here's a "screenshot" and a few words on what it's all about.

Tuesday, October 19, 2010

ImageMagick, libjpeg, etc. on Mac OS X

Here's how I got ImageMagick with JPEG support to compile and run on Mac OS X 10.6 (Intel).

First, I got the ImageMagick Source Code via Subversion, per the instructions from http://www.imagemagick.org/script/subversion.php. Short version:
$ svn co \
  https://www.imagemagick.org/subversion/ImageMagick/branches/ImageMagick-6.6.5 \
  ImageMagick-6.6.5
Then, I pulled libjpeg from the Independent JPEG Group. I had to extract the source code to a subdirectory of the ImageMagick directory called jpeg, i.e. /path/to/ImageMagick-6.6.5/jpeg.

Before I could compile any of the source code, I had to set three environment variables per this thread on the ImageMagick forums:
$ export CFLAGS="-isysroot /Developer/SDKs/MacOSX10.6.sdk \
  -arch ppc -arch i386"
$ export CXXFLAGS="-isysroot /Developer/SDKs/MacOSX10.6.sdk \
  -arch ppc -arch i386"
$ export LDFLAGS="-Wl,-syslibroot,/Developer/SDKs/MacOSX10.6.sdk \
  -arch ppc -arch i386"
Then, I compiled libjpeg via the standard ./configure and make dance. I used these commands:
$ cd jpeg
$ ./configure --prefix=/opt --disable-shared \
  --disable-dependency-tracking
$ make -j 16
Now, I was able to configure ImageMagick. Be aware that the LDFLAGS path is different from the include path! If everything went well, you can go on to build the ImageMagick suite:

$ ./configure --prefix=/opt --without-x --without-perl --with-jpeg \
   --disable-shared --disable-dependency-tracking \
   --enable-delegate-build --enable-osx-universal-binary
$ make -j 16
This gave me statically linked binaries of the ImageMagick tools that I was able to run on my Mac. I also tried to build dynamically linked binaries but failed. Because I don't need the dynamically linked version, I gave up after a while.