Monthly Archives: June 2005

Alastair Reynolds – Chasm City

After reading Revelation Space I immediately started looking for further books by the same author. It turned out I had misspelled his name while searching Amazon.de, so the search just turned up the one book I had already read.
So, I have now been reading Alastair’s next book, Chasm City, which is just as good. It is set in the same universe (and Reynolds cleverly hands out a bit of fan-service for people who remember his previous novel) and details a character’s road to revenge: simple in theory, beautifully complex in practice, and again raising plenty of interesting questions and concepts along the way.
With this novel I have to revise my opinion of Reynolds: I am now officially looking forward to his future books as much as I am excited about Vernor Vinge’s work!

The argument

I just saw a good argument for high frame-rates / instant feedback on Charles Bloom’s Rants Page. I think this is one of the reasons why Gran Turismo 4 (to me) feels much better than Forza Motorsport:

[T]he response to every action is instantaneous. This is not a minor detail; Joe Ybarra used to love to talk about this with frame rate – the difference between 40 fps and 60 fps, and more exactly, the latency, is not some small numeric difference, it’s not like getting 40 chocolate chips vs. 60 chocolate chips – there’s something dramatic that happens when your interface is smooth and responsive and perceptually instantaneous. Suddenly the device is like an extension of your body & mind – it’s not some external apparatus that you’re fighting with and compensating for, it’s your tool and it’s doing your wishes and it suddenly tickles some loving part of your brain.

There is a lot of truth to that very unscientific statement (and I sincerely hope that the next-gen consoles will not push detail levels to a point where we are again stuck at <= 30 frames / second).
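For concreteness, the back-of-the-envelope numbers behind that feeling (my own figures; the three frames of input-to-display pipeline are an assumed rule of thumb, not a measurement of either game):

    /* Rough frame-time arithmetic for the frame rates mentioned above. */
    #include <stdio.h>

    int main(void)
    {
        const double fps[] = { 30.0, 40.0, 60.0 };
        for (int i = 0; i < 3; ++i) {
            double frame_ms = 1000.0 / fps[i];
            /* Assume roughly three frames between sampling the controller and
               the result reaching the screen (simulation, render, scan-out). */
            printf("%4.0f fps: %5.1f ms per frame, ~%5.1f ms input-to-photon\n",
                   fps[i], frame_ms, 3.0 * frame_ms);
        }
        return 0;
    }

At 60 fps that pipeline adds up to about 50 ms, at 40 fps to 75 ms, and at 30 fps to 100 ms, so the gap is quite a bit larger than the bare 16.7 ms vs. 25 ms per-frame numbers suggest.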

Apple’s Move to x86 vs. Security

I was fairly surprised at Apple’s announcement of their transition from PowerPC processors to Intel chips, and I am still fairly sceptical about the whole affair. After having watched the keynote, I am seeing things in a slightly more positive light (mostly due to Steve Jobs being an excellent salesman).
Having written assembly for x86 and having read a lot of documentation about the PowerPC, it very much feels like moving to an inferior architecture with a superior implementation. Suddenly people have to pass arguments on the stack again, and (according to the Universal Binary guide) they need to check for MMX, SSE, SSE2 and SSE3 and optimise accordingly, instead of dealing with a nice, orthogonal AltiVec unit or no vectorisation at all. Welcome back to a stack-based FPU with eight registers, as few as the integer core has, and no more Open Firmware. I at least hope that Apple is going to ensure a certain standard in the Intel CPUs they sell (e.g. x86-64 + SSE3 guaranteed), to avoid even more ugly #ifdefs and code paths than are already necessary to make a single code-base build on big- and little-endian machines with different ABIs and capabilities.
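For illustration, a run-time check of this sort on the Intel side looks roughly like the sketch below. This is a generic CPUID probe, not code from Apple’s Universal Binary guide; the bit positions are the documented ones for CPUID leaf 1, and a production version would also verify that CPUID itself is available rather than relying on raw inline assembly like this.

    /* Minimal sketch of an x86 feature probe (GCC inline assembly).
       Checks the CPUID leaf-1 feature bits for MMX, SSE, SSE2 and SSE3. */
    #include <stdio.h>

    static void cpuid(unsigned leaf, unsigned *eax, unsigned *ebx,
                      unsigned *ecx, unsigned *edx)
    {
        __asm__ __volatile__("cpuid"
                             : "=a"(*eax), "=b"(*ebx), "=c"(*ecx), "=d"(*edx)
                             : "a"(leaf));
    }

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;
        cpuid(1, &eax, &ebx, &ecx, &edx);

        int has_mmx  = (edx >> 23) & 1;   /* EDX bit 23 */
        int has_sse  = (edx >> 25) & 1;   /* EDX bit 25 */
        int has_sse2 = (edx >> 26) & 1;   /* EDX bit 26 */
        int has_sse3 = (ecx >>  0) & 1;   /* ECX bit 0  */

        printf("MMX %d, SSE %d, SSE2 %d, SSE3 %d\n",
               has_mmx, has_sse, has_sse2, has_sse3);
        return 0;
    }

Every one of those flags potentially means another code path to write, test and maintain, which is exactly the kind of combinatorial mess that AltiVec’s all-or-nothing presence avoided.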
As far as I am aware, the PowerPC keeps the return address in a register (the link register), at least until a non-leaf function has to spill it, which makes it somewhat harder to exploit via buffer overflows. x86 stores its return addresses on the stack, which makes it more vulnerable to this type of attack. Microsoft has recently made that a bit harder by storing guard cookies on the stack and checking them before returning, shipped with SP2 for Windows XP and SP1 for Windows Server 2003, but that is something of a work-around that costs you performance as well as stack space.
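To make the mechanism concrete, here is a toy sketch (mine, not Microsoft’s code) of the kind of function those cookies protect. The cookie handling is written out by hand purely to show the idea; in reality the compiler (Microsoft’s /GS switch) inserts it and places the cookie between the local buffers and the saved return address, which hand-written C cannot guarantee.

    /* Toy illustration only - not an actual protection scheme. */
    #include <string.h>
    #include <stdlib.h>

    /* A real implementation randomises this value at process start-up. */
    static unsigned long stack_cookie = 0xDEADBEEFUL;

    void copy_name(const char *src)
    {
        volatile unsigned long cookie = stack_cookie;  /* guard value on the stack */
        char buf[16];

        /* The classic unchecked copy: on x86 the saved return address lives on
           the stack above buf, so an over-long src can run past buf and
           overwrite it. */
        strcpy(buf, src);

        /* If the overflow ran past buf it has most likely trampled the cookie
           too, so we abort before the corrupted return address gets used. */
        if (cookie != stack_cookie)
            abort();
    }

The run-time cost mentioned above comes from exactly this kind of extra store and compare in every protected function, plus the additional slot of stack space for the cookie itself.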
Apple seem to offer the tools to make this transition less grating, but it is work with no immediately obvious pay-off in sight. Certainly they are going into this with much more information than any of us have, so we’ll have to wait and see how things play out. I am well aware that the CPU does not make a Mac, and I will hardly leave Mac OS X behind for any of the alternatives just because Intel now gets a share of my money instead of Freescale / IBM.