Category Archives: tech

Cocoa and Objective C

I am currently writing a graphical user interface (GUI) for my Wavelet Image Compression Library that will allow for selecting and inspecting compression settings. As my “main computer” for the past year has pretty much been an Apple Mac mini (with an x86 laptop for Linux (work) and Windows (games)), I decided to write it in Cocoa, learning the Objective-C language and the Cocoa class library / ApplicationKit at the same time.

I quite like the language (ignoring the message-send syntax in square [] brackets), as you’re always aware whether you’re doing OO work like sending messages or just writing plain C code. It is very dynamic, as essentially all binding happens at runtime (which you of course pay a speed penalty for, but that shouldn’t matter much for high-level GUI code).
The Objective-C class library (“Foundation”) is very clean, always making a distinction between mutable and immutable versions of objects, and seems very complete with all the conveniences of modern OO.
Using Interface Builder to create not only the actual interface, but also to instantiate classes and bind actions / outlets, is a very nice touch. You can usually get a semi-functional mock-up of your application without writing any code at all, especially when using things like Cocoa Bindings, which allow for introspection via Key-Value-Coding (KVC) conformant objects. In essence, the AppKit “knows” how to access your member variables and sets them according to the rules you defined in Interface Builder.

There are also some things which aren’t so nice. Documentation in some areas is lacking (e.g. using NSTreeController without CoreData or a dataSource), and some things that look simple on the surface definitely aren’t under the hood (Key-Value-Observation replacing some of my objects with proxies, but at least they remembered to properly override isEqual: ;)). Memory management is also something my head took some time to wrap itself around properly: every object is reference counted, but the crux is NSAutoreleasePool, which is a bit like poor man’s garbage collection. My problem with that wasn’t what it does, but when it does it. Oh, and namespace support would be nice…
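The “when, not what” of autorelease can be modelled in a few lines of plain C (a toy sketch, nothing to do with the real Cocoa runtime): retain and release adjust a reference count immediately, while autorelease merely schedules a release for whenever the pool is drained.

```c
#include <stdlib.h>

/* Toy model of Cocoa-style reference counting with an autorelease pool.
   The point: autorelease frees nothing by itself -- it only defers a
   release until the pool is drained, which is the timing that is easy
   to get wrong. */
typedef struct { int refcount; } Obj;

#define POOL_MAX 64
static Obj *pool[POOL_MAX];
static int pool_len = 0;

Obj *obj_new(void)           { Obj *o = malloc(sizeof *o); o->refcount = 1; return o; }
void obj_retain(Obj *o)      { o->refcount++; }
void obj_release(Obj *o)     { if (--o->refcount == 0) free(o); }
Obj *obj_autorelease(Obj *o) { pool[pool_len++] = o; return o; }  /* deferred release */
void pool_drain(void)        { while (pool_len > 0) obj_release(pool[--pool_len]); }
```

An object you only autoreleased stays valid until the next drain; if you want to keep it beyond that point, you have to retain it explicitly.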
Today’s challenge was to extend the NSOpenPanel of an NSDocument-based application with an accessoryView. I got it working in the end, but it wasn’t trivial (subclassing NSDocumentController, changing things around in Interface Builder to make sure the subclass is instantiated first, creating a separate .nib file for the accessory view, and praying that the view gets released properly).

Most of the time it’s a joy to work with, and a welcome change of pace from the dreadful writing of Windows GUIs in MFC. I only wish a decent Objective-C compiler with the Foundation class library were available there as well (GNUstep doesn’t count, unfortunately). It might be an interesting idea to try to get Objective-C working on Microsoft’s .NET platform…

Wavelet 3.1

Another week, another release of my wavelet image compression library. I figured out how to do complete embedding, which justifies another release. This means you can compress an image once and then get different rates by simply truncating the file! The resulting decompressed image will always have the highest possible quality for its size. This also sped the code up dramatically, as it no longer needs to search for quantizers; it simply writes in the predetermined (“scheduled”) order until the bit budget is used up.
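The principle behind embedding can be shown with a toy bit-plane coder (a sketch of the general idea only, not my library’s actual format): coefficients are written most-significant bit plane first, so any prefix of the stream decodes to the best approximation that many bits can buy.

```c
#define N 4        /* toy: four coefficients */
#define PLANES 8   /* 8-bit magnitudes */

/* Write all coefficients bit-plane by bit-plane, most significant plane
   first. out must hold N * PLANES entries (one 0/1 per entry); returns
   the number of entries written. */
unsigned embed_encode(const unsigned coeff[N], unsigned char *out)
{
    unsigned k = 0;
    for (int p = PLANES - 1; p >= 0; --p)
        for (int i = 0; i < N; ++i)
            out[k++] = (coeff[i] >> p) & 1u;
    return k;
}

/* Decode only the first nbits of the stream; bits that were truncated
   away are treated as zero, giving a coarser approximation. */
void embed_decode(const unsigned char *in, unsigned nbits, unsigned coeff[N])
{
    unsigned k = 0;
    for (int i = 0; i < N; ++i)
        coeff[i] = 0;
    for (int p = PLANES - 1; p >= 0; --p)
        for (int i = 0; i < N; ++i, ++k)
            if (k < nbits)
                coeff[i] |= (unsigned)in[k] << p;
}
```

Truncating the stream simply chops off the least significant planes, so every prefix is a valid, best-effort encoding; a real embedded coder schedules bits by significance rather than strictly by plane, but the truncation property is the same.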
I’ve also added the link to the sidebar. Go there for the documentation / changelog.

Wavelet 3.0

I’ve been steadily working on my wavelet image compression for the past few weeks, and in the process have improved it in many ways. These are largely not technical improvements, but rather a huge code refactoring, the creation of decent documentation, reduced memory usage and so on.
You can read the freshly pressed documentation or simply download the source.
It is a fairly simple and thus compact (an executable with compression and decompression is 30 KB uncompressed) and relatively speedy image compression library that provides up to 16 channels per file and combines lossless and lossy compression in a single algorithm, which can even be changed from channel to channel. As it’s based on the wavelet transform, it allows for progressive decoding (meaning that if you only have the beginning of the file, you get a lower-quality version of the whole image) and can also extract smaller “thumbnails” of the original image.
For encoding it also supports various modes: one is to specify a mean-square error for each channel (similar to the JPEG quality setting), another is to fit the best-quality image into a given number of bits.
Unfortunately, there is a catch with the new version, too (and this is the reason why the sidebar link still refers to the old version). As my primary development platform has moved from Windows to Mac OS (and Linux), I have not updated the Windows GUI (written in Delphi) nor the Web-Browser plugin. I plan to offer new GUIs eventually; the current plan is to write one in C# for Windows and Linux, and do a native Cocoa one for Mac OS.
Finally, I’ve changed the license from the GPL to the zlib-license, which should allow use in closed source applications. If you decide to use it, or even decide not to use it, feedback and suggestions would be much appreciated.

Hilbert Curve

I’ve been reading a bit about spacefilling curves for my wavelet image compression (take a look here and here).
There is a very nice way to convert from a Hilbert-derived key to multi-dimensional coordinates, described by John J. Bartholdi, III and Paul Goldsman in “Vertex-Labeling Algorithms for the Hilbert Spacefilling Curve”.
They describe a recursive procedure, but in the particular case mentioned above, this can be easily unrolled. It also works very well with fixed point arithmetic. The following source code can be further optimized by storing each point’s x and y coordinate in a single unsigned int, as everything except for the final averaging is easily replicated across the vector by applying the operation to the combined integer.
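Unrolled for the 2-D case, the vertex-labeling idea might look like the following C sketch (my reconstruction, not the original listing; the corner-update table assumes an orientation that enters the square at v0 = (0,0) and exits at v3 on the x-axis). Coordinates are kept in fixed point by scaling with 2^order, so every midpoint stays an exact integer:

```c
/* Map a Hilbert-curve index d in [0, 4^order) to a cell (x, y) in
   [0, 2^order)^2 using vertex labeling: carry the four corner labels of
   the current square and refine them one base-4 digit of the key at a
   time by taking midpoints. All arithmetic is integer (fixed point
   scaled by 2^order), so no rounding errors occur. */
typedef struct { unsigned x, y; } pt;

static pt mid(pt a, pt b)
{
    pt m = { (a.x + b.x) / 2, (a.y + b.y) / 2 };
    return m;
}

void hilbert_d2xy(unsigned order, unsigned d, unsigned *x, unsigned *y)
{
    unsigned side = 1u << order;  /* cells per axis */
    pt v0 = { 0, 0 }, v1 = { 0, side }, v2 = { side, side }, v3 = { side, 0 };

    for (int i = (int)order - 1; i >= 0; --i) {
        unsigned digit = (d >> (2 * i)) & 3u;  /* base-4 digits, MSB first */
        pt u0, u1, u2, u3;
        switch (digit) {  /* corner labels of the chosen subsquare */
        case 0:  u0 = v0;          u1 = mid(v0, v3); u2 = mid(v0, v2); u3 = mid(v0, v1); break;
        case 1:  u0 = mid(v0, v1); u1 = v1;          u2 = mid(v1, v2); u3 = mid(v0, v2); break;
        case 2:  u0 = mid(v0, v2); u1 = mid(v1, v2); u2 = v2;          u3 = mid(v2, v3); break;
        default: u0 = mid(v2, v3); u1 = mid(v0, v2); u2 = mid(v0, v3); u3 = v3;          break;
        }
        v0 = u0; v1 = u1; v2 = u2; v3 = u3;
    }
    /* After `order` refinements the square has shrunk to a single cell;
       v0 and v2 are diagonally opposite corners, so their component-wise
       minimum is the cell's lower-left coordinate. */
    *x = v0.x < v2.x ? v0.x : v2.x;
    *y = v0.y < v2.y ? v0.y : v2.y;
}
```

Because every refinement is just additions and shifts applied identically to x and y, packing both coordinates into a single unsigned int (as suggested above) vectorizes the midpoint step for free.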

Revision Control Systems

I’ve been looking for something (other than regular backups) to keep a history of my personal development projects.
Once upon a time, I used Perforce (the free two-client license), but I’ve never really gotten on with it that well, probably using about 2% of its features. It also uses rather different terminology from what I am used to.
Looking for alternatives, I ended up trying out darcs, and it is easy to use and does what I want from it. An additional bonus is that you can work disconnected (as each workspace is also a repository) and that it does not reinvent the wheel with its own server / network protocol. You simply get access to the files any way you can (e.g. http, ftp, ssh, …).
This serves me quite well when developing the same project on more than one machine, although darcs most decidedly does not like keyboard-interactive ssh authentication. It simply fails to get the patches (using sftp), although manual sftp / scp and ssh work fine (using Ubuntu Linux to get a repository from Mac OS 10.4). Non-interactive, key-based authentication works without problems, however.
It is written in Haskell, which has some unwieldy dependencies unless linked statically; I have to admit I’d feel slightly better were darcs written in something else (e.g. Python or Ruby). But, as long as it works… 😉
I also took a look at Codeville but that doesn’t even support binary files, so it’s a no-go.
Nevertheless, I’d quite like to see a comparison of recent, maintained distributed revision control systems (e.g. darcs, arch (?), Codeville, monotone, git (?), …).

Automated Downloading

On Patrick‘s insistent encouragement, I’ve finished my “downloading system”. I use it to easily and quickly queue files on a remote machine for downloading (or to start (Bit-)torrent downloads). It consists of two scripts. The first is written in PHP and runs on a web server, presenting a very basic (login-protected) interface: a text area where links to files or torrents are pasted, and a “send” button. The script then creates a cryptographically signed email message, which is sent via the web server to a POP3 account.
The second script runs on my actual server at home and is written in Ruby (it should work on Windows as well as Unix). It polls the POP3 box for mails, verifies their signatures to check that they are authentic, and then processes the URLs contained in each message: “.torrent” files are downloaded and passed to Azureus (which can in theory work from just the URLs, but then requires manual intervention: pressing OK in the “Open URL” window), and everything else is downloaded with wget.
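The dispatch rule in the Ruby script boils down to a suffix check on each URL; a toy version in C (the real script is Ruby, this just illustrates the classification):

```c
#include <string.h>

/* Toy illustration of the dispatch rule: ".torrent" URLs go to the
   BitTorrent client, everything else to a plain downloader like wget. */
typedef enum { DL_TORRENT, DL_WGET } dl_kind;

dl_kind classify_url(const char *url)
{
    const char *suffix = ".torrent";
    size_t n = strlen(url), m = strlen(suffix);
    return (n >= m && strcmp(url + n - m, suffix) == 0) ? DL_TORRENT : DL_WGET;
}
```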