Intention vs. Utilization

Filed under: Rants, Uncategorized — halbyrd @ 01:42

I recently had a conversation with a friend of a friend at the movie theater, and as conversations among geeks will, the topic circulated around to gaming.  I mentioned how laughable Steve Jobs’ claim was that the iPod Touch was a “gaming device”, in his recent explanation for why it doesn’t have a camera.  In response, the friend of a friend insisted that the iPod Touch was indeed a gaming platform, and worthy of respect.  This bothered me in a rather fundamental way, but at the time I couldn’t pin it down any further than to say that it didn’t really sit right with me.

In true l’esprit de l’escalier fashion, I finally came up with the answer I was trying to formulate several hours later, as I was idly surfing the web.  The fact is, there is a real, measurable difference between a device that can play games, and a device designed for playing games–a difference of intent, as reflected in design.  The iPod Touch is a PDA running a general-purpose OS.  Like any general-purpose computer, it can be used to play games, and there’s ample evidence to support the notion that making games for this popular platform is a profitable enterprise.  Despite all this, however, the iPod Touch is not a gaming device.  It is a device that can play games, among a plethora of other tasks.

The distinction might seem overly fine at first, but there’s a point to be made here.  When we call something a “gaming device”, we are asserting that this is a thing that is first and foremost designed for the playing of games.  Whatever other functions it may perform are to be considered secondary, however well it may perform them.  The Nintendo DS and the Sony PSP are gaming devices, and they make no bones about it.  Sony’s brief misadventure with UMD movies aside, neither of these devices are marketed as anything else, despite the fact that both can be made to do quite a lot besides just playing games.  The form factor, the interface, the inputs; everything about these units is designed around gaming, and both are very good at what they do.

The iPod Touch, on the other hand, is a very confused little device.  It’s named after a music player, but hardly anyone seems to care about that functionality, except when it doesn’t work for some reason.  It’s built like a smartphone, except it lacks the cellular radio and GPS that give the iPhone most of its usefulness as a networked mobile device.  It seems to fall into the much-neglected niche of PDA, but no-one in Cupertino dares call it that.  And now, after some prompting by the tech press, Word of Jobs says it’s a gaming device.

Alright, Steve, I’ll bite.  Let’s pretend that the iPod Touch is a gaming device, and evaluate it accordingly.  First up is graphics.  For a gaming device to succeed at what it does, it needs a decent-or-better screen, and enough horsepower to fill that screen with good looking visuals.  The latest generation of iPod Touch succeeds on this front, mating a 3.5″ HVGA screen to an OpenGL ES 2.0-capable graphics chip.  The PSP beats it with room to spare, and the DS probably out-powers it as well, but Nintendo’s already proven with the Wii that you don’t always need hyper-turbocharged hardware to succeed.  You also want some sound to go with those graphics, and the iPod Touch is certainly no slouch there.

The third thing you need, and one of the most important, is good controls.  Nintendo set the bar with the original Game Boy, and has since raised it with successive refinements to the controls.  Sony’s a relative newcomer to the portable gaming arena, but their experience with the PlayStation and PS2 has served them well–the PSP has a set of solid, responsive controls.  Sadly, however, this is where the iPod Touch falls hardest.  It gives you a fingers-only touchscreen, some accelerometers…and that’s it.  PopCap-style puzzle games work well enough, but most others are forced to make use of on-screen buttons, and they suffer for it.  Three and a half inches diagonal measure does not make for a large screen, and forcing people to put their thumbs over top of it only makes matters worse.  Accelerometer tilt controls help to alleviate this some, but forcing me to hold my iPod Touch at precisely the right angle in order to steer is just asking for long-term neck strain.  Put all this together, and you still end up two or three buttons short of what most games need to give you proper control of your avatar in-game.

In short, while the iPod Touch is quite a capable PDA and PMP, it is not a gaming device.  Its UI, design and controls are almost completely at odds with how a gaming device needs to behave.  I have no doubt that Apple could produce a proper gaming device, if they really tried, but this simply isn’t it.  Sorry, Steve.


What I Hate About You: some pet peeves about gaming

Filed under: Rants — halbyrd @ 21:33

I love gaming.  It is one of the defining passions of my life, and the source of a lot of the better stories in my life.  Someday I’ll tell you about some of those, but today I’m going to talk about how my favorite avocation drives me crazy.  So with no further ado, here’s what I hate about you, gaming.

Half-assed PC ports:

Why is it that PC gamers, a group I would say are probably some of the most dedicated to the love of gaming, are so mistreated of late?  Games that worked perfectly fine on XboxStation360 come out on PC months late, missing features, sometimes completely non-functional–Gears of War, I am looking at you!–and laden with screw-the-customer DRM.

The piracy argument is a non-starter; people who pirate games probably weren’t going to put down cash for yours anyway.  The “it’s hard” argument doesn’t hold water either: porting from PS3 to PC is no harder than the other way around, and anybody who’s made their game for the 360 has had Microsoft do half the work for them already!

Bottom line: if you’re going to do a PC port of your game, take the few extra weeks of time and effort to make sure it works properly.  Gamers are used to slipped release dates; we don’t even remember them most of the time.  Broken games don’t get forgotten, though.  Broken games get you blacklisted by a lot of gamers in a hurry, and that’s a blow that’s years in the mending.

On Game Price Gouging:

Why is it deemed desirable to price every game coming out at the same price-point as AAA-list blockbusters?  There are quite a few games out there that are genuinely enjoyable, but have been harshly panned by critics and gamers alike because they fail to deliver the premium experience we expect from a premium-priced title.  Games like Shadow Complex are a wonderful counter-example to this trend, but they are mostly relegated to the slums of console download services, which many are still leery of.

If they had tossed that game on a disc and sold it for $20, I’d bet we would now be talking about the surprise millions-seller of the year.  This is not because the game is inherently brilliant, though it is.  This is because it is not $60 or more.  I know every game is some dev team’s baby, but not every game is going to be the next Half-Life.  Setting more reasonable prices on middle-of-the-road titles would go a long way towards making this whole game publishing business more successful.

On Console Download Services:

The ability to pay for and acquire games over the Internet is a marvelous invention, and one that I partake of on a regular basis via Steam and Direct2Drive.  I will not, however, touch XBLA or PSN with a 10m pole.  Why?

It all boils down to a difference of philosophy.  Steam, and to a lesser extent Direct2Drive, thrive by offering you conveniences and extras that buying the game on a disc does not.  Not only can I pull down the game off the Internet in a half-hour or so, but I can do so on as many computers as I please (provided I only play the game on one at a time).  Should I so desire, I can generate compact backups of all of my games, in CD- or DVD-burnable chunks or as one megalithic file for storage on an external hard drive.  I don’t really need to do this though, because Steam even keeps a master list with product keys.  Once a game is on my account, I never have to worry about backups, patching, product keys, activation, and all the rest.  Deleting a game to save space becomes fairly painless, since I can always bring it back with a few clicks.  Combine this with a social network/IM/VoIP solution that succeeds where Xfire and others have failed, and losing the physical disc starts to look like a significant upgrade.

On the other side of the fence, we have Xbox Live Arcade and the PlayStation Network.  These services have a thin veneer of the appeal that Steam has, but differ in several significant details.  Not only can I not download my game to more than one console, in the case of XBLA I can’t even back up my games to an external disk for safe keeping.  The PS3/PSN situation is somewhat better in this regard, as it has support for both external backups and redownloading of games.  Voice chat on these services is middling-fair: both support in-game chat, but neither supports game-independent multi-user chat rooms or cross-game chat, omissions that severely hurt the social aspect of the service.  XBL also gets demerits for charging me $50 a year for basically the same matchmaking and voice-chat services that Steam and PSN give me for free.

Valve also understands the pricing game a lot better than Sony or Microsoft: price drops on older games and frequent weekend promotional discounts have kept Steam’s sales thriving.  Also, Steam is in the business of selling full games, not overpriced mini-games.  PSN and XBLA don’t have much that’s worthwhile, and what they do have tends to be overpriced.  Gems like WipEout HD and Shadow Complex are wonderful to be sure, but beyond those and games you could just as easily pick up at GameStop for $20, I have yet to see anything on these services worth buying.  DLC expansions are fine and good, but unless you’re a nutter for Rock Band/Guitar Hero, there’s not much there to sustain you.

On Games For Windows Live

This one is addressed straight to the folks at Microsoft Game Studios.  Ladies and gentlemen, why have you not yet gotten your house in order?  This service is two years old already, and still I hear frequent complaints about how your software breaks otherwise functional games.  You don’t even have the excuse of inexperience: you’re Microsoft!  You own the operating system that this platform runs on!  You are known around the world for hiring some of the best and brightest minds in the world!  Why is this not fixed?  I don’t hear complaints about Steam breaking games anywhere near as often, and many of the ones I do hear are from clueless users who have fouled their systems up and don’t want to admit it.  GFWL, on the other hand, is brittle.  Horror stories of games put onto fresh installs of Windows utterly failing to run are still far too common.  Get this fixed, or you will find yourself destroying the very Games For Windows brand you have so carefully tried to establish.

In Conclusion

I know it sounds like I’m filled with naught but bile and poison when it comes to gaming.  Therefore, my next few posts are going to be about what is good and right in gaming.  Meanwhile, sound off in the comments if there’s something about gaming that really ticks you off.

Weapons Balance and Scarcity in a Post-Scarcity Economy

Filed under: Rants, Reviews — Chrome Dragon @ 06:30

“For the first time, society is producing enough that none need to go hungry.”

Yeah, none save my lovely pulse rifle in Half-Life 2.

The shotgun is… marginally okay.  If there are zombies about, you have a 50% chance of having a pocketful of shells, if not a full bandolier, and shotgun shells are big.

And I’m sorry they didn’t give you Annabelle instead of just the crossbow.  It had too much arc and travel time for the sniping it was nominally intended for.  Combat optics on it would be much more useful.  To quote a friend, “[the] Crossbow was just overall useless except for lulz.”  He was mostly right, but it was unintentionally useful – if you could reliably hit snap-shots with it, it was how you took the first grunt out of the fight to stack the odds.  But that’s basically it.  Anything “good” was nerfed into the ground.  You use the SMG, or the pulse rifle/shotgun, whichever the game drops ammo for at the moment.  If it’s neither, you use the SMG until it’s out, then the USP – which somehow morphed into your main long-ranged weapon. (?!)

Everything else was in such short and unreliable supply that you either hoarded it or wasted it.

Sometimes, both at once.

Also in bad form: you couldn’t carry any superweapons – the Gauss gun was welded to the dune buggy, and the Pulse Cannon to the airboat – and you never got enough ammunition for your one heavy weapon, the rocket launcher.  You only carry a few rounds for a modern ATGM, but a modern ATGM can turn a hundred-ton assault tank into modern art in one incredibly awesome moment.  Never mind the helicopter gunship that takes three or more rockets to kill on Easy – these are not Javelins we’re talking about here.  A closer analogue would be the M202 FLASH (FLame Assault SHoulder weapon) rocket launcher, which carried four rockets in the tube and let them be fired semi-automatically in series, or launched as a salvo to really mess up someone’s day.


Spyro on Game Design

Bah, if your hero is a flying purple dragon, there should be no “So deep you fall and die” traps in the game.

And if your hero is a melee fighter, it’s poor form to put someone with a freeze-mortar attack in level 1 guarding a narrow, un-railed bridge which gives you no side-to-side space to dodge…

…When said enemies usually run in packs of 2-3, and have to be beaten to death three cunt gargling times, and are immune to damage while twitching on the floor on their backs to the point of introducing clipping bugs to shield them from your razor-toothed affections…

…It’s in even worse form for the unstoppable superweapon to only count as one beatdown of three, or to ignore helpless, prone enemies when the weapon consists of 360° homing fireballs the size of a Volkswagen Beetle…

…Which don’t even go far enough to take out the fire-support bastard lurking on the ledge you need to jump to…

(Oh, don’t forget that your basic melee attack winds up over about 0.2 dangerously vulnerable seconds, and is fairly frequently hit-cancelled.  And even bullet-time isn’t reliably fast enough to stop hit-cancels.)

Also, it’s in poor form to put big pieces of terrain that look like platforms in a disappearing-platform jumping puzzle.

Because jumping puzzles are only ever more fun while taking fire.

With net-rays.

And ice bombs.

That have splash damage.

And sometimes inexplicably airburst.

Over spikes.


From WoW to meh.

Filed under: Rants — halbyrd @ 03:51


I’ve been playing World of Warcraft since around patch 2.2, or August 2007 for those keeping score at home.  That makes it just a month shy of two years.  I’m not going to be sticking around for the anniversary, though.

It’s nothing to do with the usual complaints–that it’s a time sink, that much of the combat is repetitive, et cetera.  Grind is a central part of what makes an MMORPG what it is, and I have no particular problem with that.  Not to put too fine a point on it, but most of life is about doing the same things again and again.  WoW at least has the decency to reward me for my perseverance.

No, my problem with WoW centers around player skill.  People have held that in this game, life really begins when you hit the level cap.  From this perspective, the process of going from lv. 1 chicken chaser to lv. 60, then 70, then 80 badass is, in essence, an extended and extremely forgiving tutorial.  You have time to mess about, learn the mechanics, and see some interesting scenery along the way.  This is fine and good–in fact, I think more games outside of the MMO scene could stand to take a lesson or two from this model.

Once you’ve climbed that mountain, though, what’s there to greet you?  If life begins at 80, what does this life entail?  The answer, in WoW’s case is: not much.  You can go the hardcore PvP route, ganking noobs for fun, sharpening your skills in battlegrounds, and competing “for realz” in the arenas.  This tends to fall flat, for the simple reason that WoW was not designed around this kind of competitive play.  PvP has been shoehorned in after the fact to appease the griefer contingent, but it’s ultimately a distraction from WoW’s true focus: Raiding.

Before Arenas, before Tournament realms, before moneyhat-driven dreams of eSports fame, and even before Battlegrounds, WoW was all about Raiding.  Getting a bunch of people together, finding some godsforsaken castle or cavern, and running from one end of it to the other, with nothing but the entire population of Murder City between you and glory.  With potent magic, huge phallic swords, and ridiculously proportioned shoulderpads, it’s all designed to feed our inner Viking.

Scratch the surface a bit, though, and you begin to see why the Viking lifestyle doesn’t hold up long-term.  Coordination SNAFUs turn your engine of destruction into a tangled scrap-heap faster than you can yell “LEEEEEROY JENKINNNNNS!”.  Underperforming damage-dealers turn even routine pulls into a molasses-filled quagmire.  Inattentive healers let the raid crumble around them while they admire the scenery.  Clueless tanks soldier on, bashing away ineffectually at the boss while his minions tear through the squishies behind them like a chainsaw through butter.

To a certain extent, this is expected.  Dungeon running is about teamwork, right?  Yes, but there comes a point at which it all becomes too much.  Sometimes, the game just throws too much at you at once, too hard and too fast for any but the most Borg-like raids to cope with.  Nowhere was this more apparent than in the Sunwell Plateau. This was WoW at its most brutal.  Wiping on the first trash pull was commonplace, even after everybody knew what they were doing. The vast majority of raiders never made it to Kalecgos, never mind all the way to Kil’jaeden. Guilds that made it through everything the game had thrown at them to date shattered on this dungeon.

Was it because of poor teamwork?  Insufficient preparation?  Simple inattentiveness?

No. It was because the game mechanics themselves made it all but impossible to proceed.  The tension between PvE and PvP game mechanics has been a problem in WoW ever since battlegrounds got added in 1.4.  It wasn’t until the addition of Arena combat in 2.0 that this became a real problem, however.  From that point onwards, the game designers have been pulled in two conflicting directions: the desire to avoid overpowered talents/abilities/gear for PvP balance, and the desire to boost threat/damage/healing for PvE viability.

This resulted in player classes that simply couldn’t participate in Sunwell raids, because they were carrying the PvP millstone around their necks in a dungeon that consisted of Olympic-level sprints.  Your best raid healers are Druids?  Too bad, only Shamans are allowed, because Chain Heal is required to keep up with the punishing damage auras and area-effect spells.  Want to bring some Mages or elemental-spec Shamans for damage-dealing?  Too bad, you won’t finish the DPS race alive unless you stack Shadow Priests and Warlocks, due to ridiculously short enrage timers.  Want to bring a Paladin who isn’t a tank?  Too bad, you’re SOL for damage-dealing and healing.

Blizzard has wisely backed off on this for normal raid progression in the latest expansion, but the damage has been done.  The game now has a permanent case of Dissociative Identity Disorder.  Raids routinely fall apart because half the class/spec combinations don’t function properly in their intended roles, and the people who can fill those roles properly frequently contract a nasty case of Real Life Problems.

The practical upshot of this is that you can routinely find yourself failing and having to start over because the game itself is getting in the way of playing it.  I ran into this problem about three months after I first started playing, when I first started doing end-game raiding, and it has never gone away.  I’ve stuck around for quite a while hoping it would, because Blizzard has put together an extremely compelling world in this game.  Compelling or no, though, this game is fundamentally broken, and Blizzard has no real intention of fixing it.

One common definition of insanity is repeating the same actions, under the same circumstances, while expecting different results.  I think it’s time I stopped paying Blizzard a subscription fee for crazy pills.


Tux the Homewrecker

Filed under: Rants — halbyrd @ 01:35

or: Why Linux is not Ready for the Desktop

Linux is one of the most high-profile successes of the Free & Open Source Software (FOSS) movement to date. Starting as the hobby project of Linus Torvalds while he was studying at the University of Helsinki in 1991, Linux has evolved into a massive, world-wide collaborative effort, and the most widely used UNIX-style operating system in the world. It scales from hand-held PDAs and smartphones all the way up to clustered supercomputers, and as of Q1 2007 it held 12.7% of the overall server market worldwide. Yet for all its power and flexibility, it still hasn’t managed to make a serious mark in the world of consumer desktop machines.

There are several reasons for this, some controllable, some not. Inertia is one of the uncontrollables: people are, if not comfortable, at least familiar with Windows. It’s weird, wonky, and sometimes unreliable, but it’s what comes pre-installed on most machines, and it does most of what the average user needs an OS to do. This by itself is not enough to impede a switchover, as Apple’s consistently upward sales trends can attest. It is a factor to consider however, and serves to amplify the other issues.

Apple’s success is particularly significant, as adopting OSX requires not just adjustment to a new OS, but a new computer to go with it. This would seem to be an even worse situation than the one Linux is in, but OSX offers enough advantages over Windows that many are willing to make the switch. Linux offers many of the same advantages, including greater stability, better performance on most day-to-day tasks, and a more securely designed architecture that avoids much of Windows’s vulnerability to malware. Linux also has a significant cost advantage over OSX: not only do you not have to buy a new computer, you don’t even have to pay for the OS itself, just for the CD or DVD to burn the install image onto.

The fault, then, must lie in large part with Linux itself.  Simply put, there are several things that the Linux community—and the desktop distros in particular—are doing wrong.

Before I continue, note that this is about what Linux is doing wrong for the desktop.  What a desktop OS needs to do is different from what a server OS needs to do, which is different from what a device-embedded OS needs to do, and so on.  Linux can fill all of these roles and more, but I’m not talking about those other roles today.


The first problem that springs to mind is getting the system up and running. While getting the OS itself is simple enough, as is installing anything in the repositories of your distribution of choice, there is still no way to install arbitrary third-party programs in a simple, consistent way. OSX is the clear winner in this department, with a pre-defined .DMG file format for delivering a program’s components over the internet, and drag-and-drop installation of the program itself. Windows has had the .MSI format since Windows 2000 was current, and it performs much the same function, but most vendors still use third-party installer programs such as InstallShield. Linux is the clear loser here, with no real agreement on which installer package format to use (.deb? .rpm?), whether the files should just be tossed into an archive (.tar.bz2? .tar.gz? .rar?), or whether they should be pre-compiled at all.

Install procedures can range from the very simple (apt-get install <package name goes here>) to the incredibly obtuse (unpack to source directory, find dependencies, find dependencies’ dependencies, compile, link, find out you missed a needed command-line argument, curse, rant, rave, repeat). Each distribution has been attempting to simplify the install process through online program repositories and package-management systems, but there is no consistent standard for what programs should be included, how frequently the repositories should be updated, or what optional features and supplementary packages should be included.

There also isn’t a consistent location in the file system structure to put the programs when they are installed—some go in /bin, some go in /usr/bin, some go in /opt, and some go in other random locations like /usr/share/applications. Windows has consistently provided C:\Program Files as an agreed-upon spot for years now, and MacOS has had an Applications folder for even longer. This inconsistency in Linux makes even a simple operation like putting a program shortcut on the desktop a laborious chore.
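The hunt this inconsistency forces on you can be sketched in a few lines of portable shell. The directory list below is illustrative rather than exhaustive, and `find_prog` is a name I made up for the example:

```shell
#!/bin/sh
# find_prog NAME: check the usual (and not-so-usual) places a Linux package
# might have dropped an executable, and print the first match found.
find_prog() {
    name="$1"
    for dir in /bin /usr/bin /usr/local/bin /sbin /usr/sbin /opt; do
        if [ -x "$dir/$name" ] && [ ! -d "$dir/$name" ]; then
            printf '%s\n' "$dir/$name"
            return 0
        fi
    done
    return 1   # not found in any of the candidate directories
}

find_prog sh   # prints the first hit, if any
```

Note that even this sketch can miss programs installed somewhere truly creative, which is exactly the problem.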

Another area of shortfall is program removal. Windows provides the Add/Remove Programs interface, which is pretty much Exactly What It Says on the Tin.  OSX doesn’t have anything similar, but seems to get by without one due to the self-enclosed nature of most OSX-based applications. Programs installed in Linux often spread out across multiple directories, however, and no consistent interface whatsoever is provided for programs not contained in the distribution’s package repository. For programs within this repository, removal is as simple as installation (apt-get remove <insert package name here>), but for anything else, it’s back to the bad old days of hunt-and-delete.  Oh, and you’d better be sure what you’re getting rid of isn’t being used elsewhere, or you’re screwed.


Another major issue is the handling of peripherals. USB HID devices like mice and keyboards are handled gracefully enough in their most basic forms, but many devices are handled in a manner that ranges from inconsistent (USB flash drives) to schizoid (multimedia keyboards and other non-standard devices) to outright hostile (graphics cards, displays, sound devices). I have a fairly mundane setup as peripherals go, with a keyboard, multi-button mouse, a jog-wheel for volume control, speakers, microphone, and a pair of monitors. Yet getting these configured in usable fashion is a struggle every step of the way, from getting Linux to recognize the Back and Forward buttons on the mouse, to setting up a dual-display desktop and getting 3D acceleration turned on for games—or for the much-vaunted Compiz that so many love to praise as Linux’s killer feature.

Setting up 3D acceleration and dual-display is particularly gruesome, requiring a descent into the arcana of xorg.conf and related configuration text files with manpages and Google at the ready. Dynamic display detection, long since considered standard on both Windows and OSX for laptop users, seems to be beyond the ability of Linux’s graphics engine. 3D acceleration, which is automatically turned on when needed in Windows and on all the time in OSX, is a further hassle, compounded by the slow driver release schedule from both Nvidia and ATI. Even something as basic as changing screen resolution can often require more prodding of the xorg.conf file, as X.org still commonly misinterprets or ignores the EDID information provided by modern displays.
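For the curious, the arcana in question looks something like the following: a minimal sketch of a dual-display xorg.conf fragment, assuming the proprietary NVIDIA driver of the era. The identifiers and resolutions are made up for illustration.

```
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"              # assumes NVIDIA's proprietary driver
    Option     "TwinView"  "true"    # NVIDIA's dual-display mode
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
    Option     "MetaModes" "1280x1024,1280x1024"  # one mode per display
EndSection
```

Get one option name or mode string wrong, and the reward is often a blank screen and a console login.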

Another hardware-related source of grief is the difficulty of getting most WiFi chipsets to function. This is largely due to the unwillingness of Broadcom and other chipset manufacturers to release either working drivers, or enough of an API for the Linux driver team to write their own. Some of the blame, however, can be laid at the feet of the Linux network stack, which in most cases still requires poking around in config files even for setting up basic things like static IP addresses, network SSIDs and WPA2 keys.
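As a taste of the poking involved, here is roughly what a static-IP, WPA2-protected connection looked like on a Debian-style system. This is a sketch: the interface name, addresses, SSID, and passphrase are all placeholders.

```
# /etc/wpa_supplicant.conf (placeholder SSID and passphrase)
network={
    ssid="HomeNetwork"
    psk="not-my-real-passphrase"
    key_mgmt=WPA-PSK
}

# Debian-style /etc/network/interfaces stanza for a static address
iface wlan0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    wpa-conf /etc/wpa_supplicant.conf
```

Compare this to ticking boxes in a Windows network dialog, and the gap in desktop-readiness is obvious.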

USB flash drives and external hard drives are another source of grief in Linux, as there is no agreed-upon default location in the filesystem for these drives to appear once mounted. (The /mnt directory is suggested, but each device must be given its own empty subdirectory, which must be made beforehand.) Windows’s habit of assigning drive letters to each partition makes this a non-issue, as does OSX’s habit of making drive icons appear directly on the desktop for interaction. Linux, in its default state, forces the user to set these mount points manually, for each drive, and requires user intervention to even begin the mount process. Recent iterations of the Gnome and KDE desktop environments have attempted to automate this process, but neither one works consistently and predictably.
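By way of illustration, the manual ritual amounts to creating the mount point yourself and then teaching /etc/fstab about the drive. The device node and filesystem type below are assumptions; real hardware will vary.

```
# /etc/fstab entry for a USB stick (assumed to appear as /dev/sdb1, vfat)
# The mount point must be created first: mkdir -p /mnt/usbstick
/dev/sdb1  /mnt/usbstick  vfat  noauto,user,rw  0  0
```

Even with that in place, plugging in the stick does nothing by itself; someone still has to run mount /mnt/usbstick, which is exactly the intervention the desktop environments are trying, inconsistently, to automate.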


Linux is quite possibly the most versatile OS to date. Its scalability, reliability, and unique anyone-can-contribute development environment, combined with its already vast popularity in the server world, give it a unique position among operating systems. It has the potential to eventually displace Windows as the OS of choice for the desktop, but for now, the issues outlined above act as major stumbling blocks to its success. Hopefully they will be addressed in future development efforts. Once that happens, we may be able to truly declare that Linux is ready for prime-time in the home.
