In which Changing The World is Its Own Reward

Filed under: Reflections — halbyrd @ 10:15

I’ve been seriously thinking over the question of spirituality and morality; namely, does having the latter require the former? David Malki, over at Wondermark, has put up a series of rather talky web comics over the past few days that has helped to crystallize my thoughts on the matter. You can have a look over here, or just look at the reproduction of the thread below:

Now personally, I don’t agree with this position. I’m an agnostic atheist; i.e. I don’t believe there is a God–mainly due to lack of conclusive, verifiable evidence–and I’m not sure that we can even meaningfully talk about a being that exists outside of space-time altogether. That said, there are the seeds of some good ideas in here:

“…there is no Heaven, no Judgement. Your time on Earth is all you get. You better make the most of it.”

This is one of the things I believe, and I believe that this attitude, coupled with a functioning sense of personal responsibility, has done more to motivate works of great good in this world than any promises of a reward in the hereafter. After all, if there’s one thing people have a problem with, it’s delayed gratification. How can anyone expect a promise of a reward in the next life to motivate people when you can’t motivate the average shareholder to look past the next quarter’s earnings?

If we’re going to motivate people to make the world a better place, we need to be emphasizing the personal and collective benefits of those altruistic actions. You don’t fight unemployment and underemployment because it gives you warm fuzzy feelings, you fight it because making everyone a productive member of society reduces the tax burden on us all; you do it because giving people legitimate ways to succeed cuts out the huge portion of crime motivated by sheer desperation. The reward for making the world a better place to live in is that the world is a better place to live in. Promising spiritually-flavored warm fuzzies is superfluous at best, and as some behaviorists are starting to realize, may actually be counterproductive.

“…’Doing God’s will’ is no longer an excuse for hurting others.”

I don’t think I need to explain why this is a good idea, but I have some related thoughts. It’s my position that the people who curse and hate and kill ‘in the name of God’ are just as irresponsible as the nihilists who say ‘if there is no God, no punishment waiting for me after death, then screw the rules, I’m gonna get mine and screw everybody else!’ For some, ‘it will be as God wills’ or ‘it’s all going according to His plan’ are ways of accepting that the world is ultimately out of their control; these people are fine. What’s problematic is that for far too many, those same sentiments are used as justification for selfishness, callousness, cowardice, and any number of other manifestations of the abdication of responsibility both great and petty.

For a child, the threat of punishment is often enough to keep them from doing wrong. It works because the child makes a basic risk-reward comparison, and decides that avoiding the threatened punishment is more important to them than the perceived reward of the bad thing they were going to do. When you become an adult, however, you move from a pain-avoidance motivated sense of morality to one that is guided by your own internal sense of right and wrong. I don’t avoid rape, theft and murder because I’m afraid of the consequences if I get caught and prosecuted; I avoid them because my own internal moral sense tells me that these things are wrong and repugnant. I didn’t develop this moral sense out of any fear of divine punishment, I developed it from an innate sense of empathy. I don’t want people violating me sexually, taking my things, or killing me; and it’s not fair to expect others to refrain from those actions if I don’t afford them the same courtesy.

Do unto others as you would have them do unto you; likewise do not do unto others that which you would not have them do to you. This is the core of my morality, and it is also the core of my altruism. It is simple, robust, and functions with or without any external system of reward and punishment. It is this basic fusion of empathy and reason that forms the basis of all moral codes, whether religious or secular in flavor. This is how adults think; if we’re going to progress as a species, we need to focus less on adolescent games of I’m-right-and-you’re-not, and focus more on getting people to grow up.



Respecting the Solitary Life

So, StarCraft II. Much ink has been spilled over various aspects of the game, from the rebalanced multiplayer to the lack of LAN play to the curiously elevated price tag—$60 for this versus $50 for most PC games. There’s an aspect to this that many haven’t considered, however: the singleplayer game.

Most RTS games treat the singleplayer game as an extended tutorial for the multiplayer. There’s a thin veneer of plot slathered on, but the real purpose is to get you familiar with the units and tactics of each faction, so you can go forth to the online matches and, hopefully, not suck.

This is not the right way to go about it. For one thing, the AI opponents never behave like real people do in a match. This does a disservice to those looking to get ready for the MP, because it sets up a bunch of wrong expectations that have to be unlearned once you actually wade into the fray.

The multiplayer craze has been huge in the past few years, driven in no small part by games like Team Fortress 2 and Halo that manage to stay relevant years after their release, a success driven by the depth and satisfaction of their multiplayer modes. RTS games are no strangers to this, certainly. The whole genre, more or less, has been serving up the same MP-oriented gameplay for years now, providing further and further refinements of essentially the same formula.

The problem with this, I feel, is how this affects singleplayer enthusiasts. The people who play games not to connect with others, but to get away from them for a little while. Sure, playing with others can be fun, engaging, uplifting and so forth. But sometimes, you don’t want to put up with that. Sometimes, you want to just fire up a game and play, without worrying about coordinating with other people’s schedules, without worrying about dropped connections and server hiccups and oh hang on guys, I have to take out the trash.

Singleplayer is an aspect of gaming that has been denigrated in recent years. The contest of man vs. machine is one that can be mishandled in so many ways: inconsistent difficulty, cheating AI opponents, Insane Troll Logic puzzles, narmful cutscenes or dialog that serve to break immersion, et cetera, et cetera, ad nauseam. Yet when it’s done right—and we have plenty of examples of how it can be—it is glorious. Games that pay attention to pacing, to challenge, to fun; these are the ones we remember for a lifetime.

This brings me back to StarCraft II. (Bet you thought I’d forgotten!) Blizzard has certainly spared no expense on the multiplayer; they understand their fans too well to neglect that. One aspect that hasn’t gotten as much press—though there certainly has been some—is the singleplayer game. Yes, it focuses solely on the Terrans, but this is to the game’s benefit.

Instead of having to cram everything into ten missions, they have the space to let the player breathe—to absorb the game’s essence and atmosphere at a more natural pace. New elements are introduced gradually, and the player is given some agency in the progression via the research trees.

There’s certainly nothing here that hasn’t been seen in RTS games for years, but the sheer amount of care and craft that has gone into this game is phenomenal. I normally loathe RTS games; I only played through the first StarCraft with cheats on to get the story. In this one, I find myself playing the missions for their own sake.

The cutscenes can veer into cornball territory at times, but they never outstay their welcome. The shipboard scenes that serve as the mission hub are bursting at the seams with little touches waiting to be discovered. One of these, an arcade cabinet sitting off in the corner of the Cantina, is the front end to a top-down scrolling shooter game: The Lost Viking. This bears emphasizing: Blizzard hid a whole other game inside StarCraft II, just because they could. It’s mainly meant to show off the capability of the SC2 engine, but it’s got enough depth to be enjoyable in its own right.

Blizzard didn’t have to do any of this. They could have slapped together a quick SP campaign, shipped the game with all the MP enhancements, and they still would have made a mint. They didn’t, though. They chose to put just as much time and care into the singleplayer experience as they did for the multiplayer game. The end result is a polished, refined game that’s a joy to play.

Sure, Blizzard has more money to throw at any one project than most companies have, period. But that’s not what makes their games great; Call of Duty: Modern Warfare 2 had just as much money thrown at it, and nobody’s going to remember it in a year save the die-hard grognards who spend their lives in MP matches. StarCraft II is going to be with us for years to come, because Blizzard put care and thought into making it fun as a game, rather than an interactive special-effects reel. More companies could stand to learn from that attitude.


On Reformation

Filed under: Reflections — halbyrd @ 11:53

A while back, I made a rather lengthy post about why I was fed up with World of Warcraft. Since then, Blizzard has announced the next expansion, Cataclysm. Rather than being more of the same, however, Blizzard is using this expansion to address many of the problems I had with WoW in a way they’ve never tried: by giving the game a complete overhaul.

The Burning Crusade and Wrath of the Lich King, while they added significant chunks of content–new zones, new races, new dungeons, and even a new class–were basically patches on the same old game. Changes both major and minor happened to class mechanics on a regular basis, but the core nature of the game stayed much the same. The new content zones are where much of the realm’s population resides, and everywhere else is a howling wasteland, devoid of any human presence. Home faction capitals host a small population, but it is a fleeting one, staying only long enough to finish whatever business it has in the Auction House.

Cataclysm is set to change that paradigm, however. Certain things that are expected of an expansion will still be there: the level cap will be raised, new zones will be introduced, new raid dungeons opened up, and everybody will have a new set of loot to chase after. As with The Burning Crusade, two new races will be introduced, one each for Horde and Alliance players, with their respective starting zones. Every class will get a few new abilities, and the number of available talent points will go up. The graphics engine will receive another update, adding a new layer of spit & polish to the proceedings. So far, no major surprises.

Here’s where things start getting screwy, though: the new content isn’t going to be limited to the new zones. This time around, the two major continents of the original game, Kalimdor and the Eastern Kingdoms, are getting remade top to bottom. The events of the eponymous cataclysm will serve to literally reshape the land, with old zones taking on a new complexion. New quest lines are being written for every zone, and the type of questing is shifting increasingly toward story-focused tasks. Largely gone are the old standbys of FedEx (take x to y) and bounty (kill x number of y monsters for lazy sod z), except in early levels where their simplicity allows new players to get used to their class mechanics. With two new continents’ worth of stuff to explore and do, getting around is going to be a lengthy proposition. Towards that end, Blizzard is finally implementing a long-asked-for feature: support for flying mounts in “old world” Azeroth.

Perhaps more significant even than the mass of new content are the changes to core gameplay mechanics. Aside from the usual slew of balance tweaks, the way abilities are learned and used is being changed. Previously, each ability was improved by purchasing successively more powerful ranks from the class trainer. Now, the abilities will scale with level and associated core stats, with new abilities being added where needed to further flesh out each class’ repertoire. On the subject of stats, many secondary stats on gear are being merged or eliminated, with their function being rolled into core stats where appropriate. Spell-slinging damage dealers will care about Intellect for more than their raw mana pool; healers will care about Spirit for mana regeneration–yes, even Healadins. Hunters won’t care about Intellect anymore, as they’re being moved to an energy-based system that better fits their class mechanics–and incidentally allows Blizzard to put some sanity checks on their damage scaling, so they won’t be ping-ponging between impotence and infinite godlike power.

Another significant pair of changes involves talent specialization and the very process of leveling up. The idea of glyphs to “customize” your spec was well received in Wrath, and is being expanded into the Path of the Titans. The idea here is that you will progress through the final five levels through a gated series of quests, rather than through a simple XP grind, and get a chance to pick out glyphs that complement your intended build along the way. Many of the “boring-but-essential” talents, such as those that boost damage or critical strike chance, are either getting moved here or getting rolled into specialization mastery bonuses. In effect, such bonuses become perks handed to you for investing a certain number of points in a given talent tree. The Mastery stat, which is replacing a lot of secondary stats on gear, is designed to complement this, further boosting the potency of your spec bonuses.

They’re being rather more tight-lipped about how things are going to work over on the PvP end of things, but one presumes that area of the game is getting similar attention. My not-so-secret hope is that they will finally implement the dual-system ability mechanics the game has needed for so long. As my previous post complained, it’s nearly impossible to get the one set of ability mechanics balanced so that changes to the PvE end won’t unbalance PvP, and vice-versa. Better by far to formally acknowledge the split, and set things up so that abilities have different behavior when in world PvP zones, Battlegrounds or Arenas. This leaves random in-world PvP a bit hard up, but that stopped being interesting to anybody but gankers and griefers years ago.

Time will tell if all this pans out, naturally, but the effort is laudable. They’ve managed to rekindle genuine interest in a game I’d been following only diffidently, and an aging one at that. Here’s hoping Blizzard’s grand experiment succeeds.


Intention vs. Utilization

Filed under: Rants, Uncategorized — halbyrd @ 01:42

I recently had a conversation with a friend of a friend at the movie theater, and as conversations among geeks will, the topic circled around to gaming.  I mentioned how laughable Steve Jobs’ claim was that the iPod Touch was a “gaming device”, in his recent explanation for why it doesn’t have a camera.  In response, the friend of a friend insisted that the iPod Touch was indeed a gaming platform, and worthy of respect.  This bothered me in a rather fundamental way, but at the time I couldn’t pin it down any further than to say that it didn’t really sit right with me.

In true esprit de l’escalier fashion, I finally came up with the answer I was trying to formulate several hours later, as I was idly surfing the web.  The fact is, there is a real, measurable difference between a device that can play games, and a device designed for playing games–a difference of intent, as reflected in design.  The iPod Touch is a PDA running a general-purpose OS.  Like any general-purpose computer, it can be used to play games, and there’s ample evidence to support the notion that making games for this popular platform is a profitable enterprise.  Despite all this, however, the iPod Touch is not a gaming device.  It is a device that can play games, among a plethora of other tasks.

The distinction might seem overly fine at first, but there’s a point to be made here.  When we call something a “gaming device”, we are asserting that this is a thing that is first and foremost designed for the playing of games.  Whatever other functions it may perform are to be considered secondary, however well it may perform them.  The Nintendo DS and the Sony PSP are gaming devices, and they make no bones about it.  Sony’s brief misadventure with UMD movies aside, neither of these devices is marketed as anything else, despite the fact that both can be made to do quite a lot besides just playing games.  The form factor, the interface, the inputs; everything about these units is designed around gaming, and both are very good at what they do.

The iPod Touch, on the other hand, is a very confused little device.  It’s named after a music player, but hardly anyone seems to care about that functionality, except when it doesn’t work for some reason.  It’s built like a smartphone, except it lacks the cellular radio and GPS that give the iPhone most of its usefulness as a networked mobile device.  It seems to fall into the much-neglected niche of PDA, but no-one in Cupertino dares call it that.  And now, after some prompting by the tech press, Word of Jobs says it’s a gaming device.

Alright, Steve, I’ll bite.  Let’s pretend that the iPod Touch is a gaming device, and evaluate it accordingly.  First up is graphics.  For a gaming device to succeed at what it does, it needs a decent-or-better screen, and enough horsepower to fill that screen with good-looking visuals.  The latest generation of iPod Touch succeeds on this front, mating a 3.5″ HVGA screen to an OpenGL ES 2.0-capable graphics chip.  The PSP beats it with room to spare, and the DS probably out-powers it as well, but Nintendo’s already proven with the Wii that you don’t always need hyper-turbocharged hardware to succeed.  You also want some sound to go with those graphics, and the iPod Touch is certainly no slouch there.

The third thing you need, and one of the most important, is good controls.  Nintendo set the bar with the original Game Boy, and has since raised it with successive refinements to the controls.  Sony’s a relative newcomer to the portable gaming arena, but its experience with the PlayStation and PS2 has served it well–the PSP has a set of solid, responsive controls.  Sadly, however, this is where the iPod Touch falls hardest.  It gives you a fingers-only touchscreen, some accelerometers…and that’s it.  PopCap-style puzzle games work well enough, but most others are forced to make use of on-screen buttons, and they suffer for it.  Three and a half inches diagonal measure does not make for a large screen, and forcing people to put their thumbs over top of it only makes matters worse.  Accelerometer tilt controls help to alleviate this some, but forcing me to hold my iPod Touch at precisely the right angle in order to steer is just asking for long-term neck strain.  Put all this together, and you still end up two or three buttons short of what most games need to give you proper control of your avatar in-game.

In short, while the iPod Touch is quite a capable PDA and PMP, it is not a gaming device.  Its UI, design and controls are almost completely at odds with how a gaming device needs to behave.  I have no doubt that Apple could produce a proper gaming device, if they really tried, but this simply isn’t it.  Sorry, Steve.


What I Hate About You: some pet peeves about gaming

Filed under: Rants — halbyrd @ 21:33

I love gaming.  It is one of the defining passions of my life, and the source of many of my better stories.  Someday I’ll tell you about some of those, but today I’m going to talk about how my favorite avocation drives me crazy.  So with no further ado, here’s what I hate about you, gaming.

Half-assed PC ports:

Why is it that PC gamers, a group I would say are probably some of the most dedicated to the love of gaming, are so mistreated of late?  Games that worked perfectly fine on XboxStation360 come out on PC months late, missing features, sometimes completely non-functional–Gears of War, I am looking at you!–and laden with screw-the-customer DRM.

The piracy argument is a non-starter; the people pirating your game probably weren’t going to put down cash for it anyway.  The “it’s hard” argument doesn’t hold water either: porting from PS3 to PC is no harder than the other way around, and anybody who’s made their game for 360 has had Microsoft do half the work for them already!

Bottom line: if you’re going to do a PC port of your game, take the few extra weeks of time and effort to make sure it works properly.  Gamers are used to slipped release dates; we don’t even remember them most of the time.  Broken games don’t get forgotten, though.  Broken games get you blacklisted by a lot of gamers in a hurry, and that’s a blow that’s years in the mending.

On Game Price Gouging:

Why is it deemed desirable to price every game coming out at the same price-point as AAA-list blockbusters?  There are quite a few games out there that are quite enjoyable, but have been harshly panned by critics and gamers alike because they fail to deliver the premium experience we expect from a premium-priced title.  Games like Shadow Complex are a wonderful counter-example to this trend, but they are mostly relegated to the slums of console download services, which many are still leery of.

If they had tossed that game on a disc and sold it for $20, I’d bet we would now be talking about the surprise millions-seller of the year.  This is not because the game is inherently brilliant, though it is.  This is because it is not $60 or more.  I know every game is some dev team’s baby, but not every game is going to be the next Half-Life.  Setting more reasonable prices on middle-of-the-road titles would go a long way towards making this whole game publishing business more successful.

On Console Download Services:

The ability to pay for and acquire games over the Internet is a marvelous invention, and one that I partake of on a regular basis via Steam and Direct2Drive.  I will not, however, touch XBLA or PSN with a 10m pole.  Why?

It all boils down to a difference of philosophy.  Steam, and to a lesser extent Direct2Drive, thrive by offering you conveniences and extras that buying the game on a disc does not.  Not only can I pull down the game off the Internet in a half-hour or so, but I can do so on as many computers as I please (provided I only play the game on one at a time).  Should I so desire, I can generate compact backups of all of my games, in CD- or DVD-burnable chunks or as one megalithic file for storage on an external hard drive.  I don’t really need to do this, though, because Steam even keeps a master list with product keys.  Once a game is on my account, I never have to worry about backups, patching, product keys, activation, and all the rest.  Deleting a game to save space becomes fairly painless, since I can always bring it back with a few clicks.  Combine this with a social network/IM/VoIP solution that succeeds where Xfire and others have failed, and losing the physical disc starts to look like a significant upgrade.

On the other side of the fence, we have Xbox Live Arcade and the PlayStation Network.  These services have a thin veneer of the appeal that Steam has, but differ in several significant details.  Not only can I not download my game to more than one console, in the case of XBLA I can’t even back up my games to an external disk for safe keeping.  The PS3/PSN situation is somewhat better in this regard, as it supports both external backups and redownloading of games.  Voice chat on these services is middling-fair: both support in-game chat, but neither supports game-independent multi-user chat rooms or cross-game chat, and both omissions severely hurt the social aspect of the service.  XBL also gets demerits for charging me $50 a year for basically the same matchmaking and voice-chat services that Steam and PSN give me for free.

Valve also understands the pricing game a lot better than Sony or Microsoft: price drops on older games and frequent weekend promotional discounts have kept Steam’s sales thriving.  Also, Steam is in the business of selling full games, not overpriced mini-games.  PSN and XBLA don’t have much that’s worthwhile, and what they do have tends to be overpriced.  Gems like WipEout HD and Shadow Complex are wonderful to be sure, but aside from games you could just as easily pick up at GameStop for $20, I have yet to see anything on these services worth buying.  DLC expansions are fine and good, but unless you’re a nutter for Rock Band/Guitar Hero, there’s not much there to sustain you.

On Games For Windows Live:

This one is addressed straight to the folks at Microsoft Game Studios.  Ladies and gentlemen, why have you not yet gotten your house in order?  This service is two years old already, and still I hear frequent complaints about how your software breaks otherwise functional games.  You don’t even have the excuse of inexperience: you’re Microsoft!  You own the operating system that this platform runs on!  You are known around the world for hiring some of the best and brightest minds in the world!  Why is this not fixed?  I don’t hear complaints about Steam breaking games anywhere near as often, and many of the complaints I do hear come from clueless users who have fouled up their systems and don’t want to admit it.  GFWL, on the other hand, is brittle.  Horror stories of games put onto fresh installs of Windows utterly failing to run are still far too common.  Get this fixed, or you will find yourself destroying the very Games For Windows brand you have so carefully tried to establish.

In Conclusion

I know it sounds like I’m filled with naught but bile and poison when it comes to gaming.  Therefore, my next few posts are going to be about what is good and right in gaming.  Meanwhile, sound off in the comments if there’s something about gaming that really ticks you off.


From WoW to meh.

Filed under: Rants — halbyrd @ 03:51


I’ve been playing World of Warcraft since around patch 2.2, or August 2007 for those keeping score at home.  That makes it just a month shy of two years.  I’m not going to be sticking around for the anniversary, though.

It’s nothing to do with the usual complaints–that it’s a time sink, that much of the combat is repetitive, et cetera.  Grind is a central part of what makes an MMORPG what it is, and I have no particular problem with that.  Not to put too fine a point on it, but most of life is about doing the same things again and again.  WoW at least has the decency to reward me for my perseverance.

No, my problem with WoW centers around player skill.  People have held that in this game, life really begins when you hit the level cap.  From this perspective, the process of going from lv. 1 chicken chaser to lv. 60/70/80 badass is, in essence, an extended and extremely forgiving tutorial.  You have time to mess about, learn the mechanics, and see some interesting scenery along the way.  This is fine and good–in fact, I think more games outside of the MMO scene could stand to take a lesson or two from this model.

Once you’ve climbed that mountain, though, what’s there to greet you?  If life begins at 80, what does this life entail?  The answer, in WoW’s case is: not much.  You can go the hardcore PvP route, ganking noobs for fun, sharpening your skills in battlegrounds, and competing “for realz” in the arenas.  This tends to fall flat, for the simple reason that WoW was not designed around this kind of competitive play.  PvP has been shoehorned in after the fact to appease the griefer contingent, but it’s ultimately a distraction from WoW’s true focus: Raiding.

Before Arenas, before Tournament realms, before moneyhat-driven dreams of eSports fame, and even before Battlegrounds, WoW was all about Raiding.  Getting a bunch of people together, finding some godsforsaken castle or cavern, and running from one end of it to the other, with nothing but the entire population of Murder City between you and glory.  With potent magic, huge phallic swords, and ridiculously proportioned shoulderpads, it’s all designed to feed our inner Viking.

Scratch the surface a bit, though, and you begin to see why the Viking lifestyle doesn’t hold up long-term.  Coordination SNAFUs turn your engine of destruction into a tangled scrap-heap faster than you can yell “LEEEEEROY JENKINNNNNS!”.  Underperforming damage-dealers turn even routine pulls into a molasses-filled quagmire.  Inattentive healers let the raid crumble around them while they admire the scenery.  Clueless tanks soldier on, bashing away ineffectually at the boss while his minions tear through the squishies behind them like a chainsaw through butter.

To a certain extent, this is expected.  Dungeon running is about teamwork, right?  Yes, but there comes a point at which it all becomes too much.  Sometimes, the game just throws too much at you at once, too hard and too fast for any but the most Borg-like raids to cope with.  Nowhere was this more apparent than in the Sunwell Plateau. This was WoW at its most brutal.  Wiping on the first trash pull was commonplace, even after everybody knew what they were doing. The vast majority of raiders never made it to Kalecgos, never mind all the way to Kil’jaeden. Guilds that made it through everything the game had thrown at them to date shattered on this dungeon.

Was it because of poor teamwork?  Insufficient preparation?  Simple inattentiveness?

No. It was because the game mechanics themselves made it all but impossible to proceed.  The tension between PvE and PvP game mechanics has been a problem in WoW ever since battlegrounds got added in 1.4.  It wasn’t until the addition of Arena combat in 2.0 that this became a real problem, however.  From that point onwards, the game designers have been pulled in two conflicting directions: the desire to avoid overpowered talents/abilities/gear for PvP balance, and the desire to boost threat/damage/healing for PvE viability.

This resulted in player classes that simply couldn’t participate in Sunwell raids, because they were carrying the PvP millstone around their necks in a dungeon that consisted of Olympic-level sprints.  Your best raid healers are Druids?  Too bad, only Shamans are allowed, because Chain Heal is required to keep up with the punishing damage auras and area-effect spells.  Want to bring some Mages or elemental-spec Shamans for damage-dealing?  Too bad, you won’t finish the DPS race alive unless you stack Shadow Priests and Warlocks, due to ridiculously short enrage timers.  Want to bring a Paladin who isn’t a tank?  Too bad, you’re SOL for damage-dealing and healing.

Blizzard has wisely backed off on this for normal raid progression in the latest expansion, but the damage has been done.  The game now has a permanent case of Dissociative Identity Disorder.  Raids routinely fall apart because half the class/spec formulations don’t function properly in their intended roles, and the people who can fill the roles properly frequently contract a nasty case of Real Life Problems.

The practical upshot of this is that you can routinely find yourself failing and having  to start over because the game itself is getting in the way of playing it.  I ran into this problem about 3 months after I first started playing, when I first started doing end-game raiding, and it has never gone away.  I’ve stuck around for quite a while hoping it would, because Blizzard has put together an extremely compelling world in this game. Compelling or no, though, this game is fundamentally broken, and Blizzard has no real intention of fixing it.

One common definition of insanity is repeating the same actions, in the same circumstances, while expecting different results. I think it’s time I stopped paying Blizzard my subscription fees for crazy pills.


Enermax Aurora Micro keyboard

Filed under: Reviews — Tags: , , — halbyrd @ 10:42

We ought to have more things wrought out of solid chunks of black space-metal… Enermax Aurora Micro keyboard

Razer Goliathus mouse pad

Filed under: Reviews — Tags: , , — halbyrd @ 10:25

Another of my reviews up @ TWL – Razer Goliathus gaming mousepad


Razer Salmosa Gaming Mouse

Filed under: Reviews — Tags: , , — halbyrd @ 14:46

Razer Salmosa Gaming Mouse



Tux the Homewrecker

Filed under: Rants — Tags: , , — halbyrd @ 01:35

or: Why Linux is not Ready for the Desktop

Linux is one of the most high-profile successes of the Free & Open Source Software (FOSS) movement to date. Starting as the hobby project of Linus Torvalds while he was studying at the University of Helsinki in 1991, Linux has evolved into a massive, world-wide collaborative effort, and the most widely used UNIX-style operating system in the world. It scales from hand-held PDAs and smartphones all the way up to clustered supercomputers, and had captured 12.7% of the overall server market worldwide as of Q1 2007[*]. Yet for all its power and flexibility, it still hasn’t managed to make a serious mark in the world of consumer desktop machines.

There are several reasons for this, some controllable, some not. Inertia is one of the uncontrollables: people are, if not comfortable, at least familiar with Windows. It’s weird, wonky, and sometimes unreliable, but it’s what comes pre-installed in most machines, and it does most of what the average user needs an OS to do. This by itself is not enough to impede a switchover, as Apple’s consistently upward sales trends can attest. It is a factor to consider however, and serves to amplify the other issues.

Apple’s success is particularly significant, as adopting OSX requires not just adjustment to a new OS, but a new computer to go with it. This would seem to be an even worse situation than the one Linux is in, but OSX offers enough advantages over Windows that many are willing to make the switch. Linux offers many of the same advantages, including a more stable system, better performance on most day-to-day tasks, and a more securely-designed system architecture that avoids much of Windows’s vulnerability to malware. Linux also has a significant cost advantage over OSX: not only do you not have to buy a new computer, you don’t even have to pay for the OS itself, just for the CD or DVD to burn the install image onto.

The fault, then, must lie in large part with Linux itself.  Simply put, there are several things that the Linux community—and the desktop distros in particular—are doing wrong.

Before I continue, note that this is about what Linux is doing wrong for the desktop. What a desktop OS needs to do is different from what a server OS needs to do, which is different from what a device-embedded OS needs to do, and so on. Linux can fill all of these roles and more, but I’m not talking about those other roles today.


The first problem that springs to mind is getting the system up and running. While getting the OS itself is simple enough, as is installing anything in the repositories of your distribution of choice, there is still no way to install arbitrary third-party programs in a simple, consistent way. OSX is the clear winner in this department, with a pre-defined .DMG file format for delivering a program’s components over the internet, and installation reduced to a simple drag-and-drop operation. Windows has had the .MSI format since Windows 2000 was current, and it performs much the same function, but most vendors still use third-party installers such as InstallShield. Linux is the clear loser here, with no real agreement on which installer package format to use (.deb? .rpm?), whether the files should just be tossed into an archive (.tar.gz? .tar.bz2? .rar?), or whether they should be pre-compiled.

Install procedures can range from the very simple (apt-get install <package name goes here>) to the incredibly obtuse (unpack to source directory, find dependencies, find dependencies’ dependencies, compile, link, find out you missed a needed command-line argument, curse, rant, rave, repeat). Each distribution has been attempting to simplify the install process through online program repositories and package-management systems, but there is no consistent standard for what programs should be included, how frequently the repositories should be updated, or what optional features and supplementary packages should be included.
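The fragmentation can be felt even at the level of a trivial script: anything that wants to install software portably has to first probe for whichever package manager happens to be present. A minimal sketch of that dance (the `pkg_install` wrapper and the `frobnicator` package name are hypothetical; `apt-get` and `yum` are the real tools, and the script only echoes the command it would run, since actually installing requires root):

```shell
#!/bin/sh
# Sketch: probe for whichever package manager this distribution ships.
# Real cross-distro installers have to do some variant of this, because
# there is no single, standard install command across distributions.
pkg_install() {
  if command -v apt-get >/dev/null 2>&1; then
    echo "apt-get install $1"        # Debian, Ubuntu (.deb)
  elif command -v yum >/dev/null 2>&1; then
    echo "yum install $1"            # Red Hat, Fedora (.rpm)
  else
    echo "build $1 from source"      # everyone else: configure, make, pray
  fi
}

pkg_install frobnicator
```

Contrast this with Windows or OSX, where a single installer file works on every machine running that OS.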

There also isn’t a consistent location in the file system structure to put programs once they are installed—some go in /bin, some go in /usr/bin, some go in /opt, and some go in other random locations like /usr/share/applications. Windows has consistently provided C:\Program Files as an agreed-upon spot for years now, and OSX has had an Applications folder for even longer. This inconsistency in Linux makes even a simple operation like putting a program shortcut on the desktop a laborious chore.

Another area of shortfall is program removal. Windows provides the Add/Remove Programs interface, which is pretty much Exactly What It Says On the Tin. OSX doesn’t have anything similar, but seems to get by without one thanks to the self-contained nature of most OSX applications. Programs installed in Linux often spread out across multiple directories, however, and no consistent removal interface whatsoever is provided for programs outside the distribution’s package repository. For programs within the repository, removal is as simple as installation (apt-get remove <insert package name here>), but for anything else, it’s back to the bad old days of hunt-and-delete. Oh, and you’d better be sure what you’re deleting isn’t being used elsewhere, or you’re screwed.


Another major issue is the handling of peripherals. USB HID devices like mice and keyboards are handled gracefully enough in their most basic forms, but many devices are handled in a manner that ranges from inconsistent (USB flash drives) to schizoid (multimedia keyboards and other non-standard devices) to outright hostile (graphics cards, displays, sound devices). I have a fairly mundane setup as peripherals go: a keyboard, a multi-button mouse, a jog-wheel for volume control, speakers, a microphone, and a pair of monitors. Yet getting these configured in usable fashion is a struggle every step of the way, from getting Linux to recognize the Back and Forward buttons on the mouse, to setting up a dual-display desktop and getting 3D acceleration turned on for games—or for the much-vaunted Compiz that so many love to praise as Linux’s killer feature.

Setting up 3D acceleration and dual displays is particularly gruesome, requiring a descent into the arcana of xorg.conf and related configuration files with man pages and Google at the ready. Dynamic display detection, long since standard on both Windows and OSX for laptop users, seems to be beyond the ability of Linux’s graphics stack. 3D acceleration, which is turned on automatically when needed in Windows and on all the time in OSX, is a further hassle, compounded by the slow driver release schedules of both Nvidia and ATI. Even something as basic as changing screen resolution can require more prodding of the xorg.conf file, as X.org still commonly misinterprets or ignores the EDID information provided by modern displays.
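To give a flavor of the arcana involved, here is a sketch of the kind of xorg.conf fragment a dual-display setup can demand, assuming an Nvidia card with the proprietary driver's TwinView option (the identifiers, resolutions, and options are illustrative, and the exact incantation varies by driver and version):

```
Section "Device"
    Identifier  "Card0"
    Driver      "nvidia"              # proprietary driver; ATI users need "fglrx" instead
    Option      "TwinView"  "true"    # Nvidia-specific dual-head mode
    Option      "MetaModes" "1680x1050,1280x1024"
EndSection

Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    DefaultDepth 24                   # 24-bit color, needed for 3D acceleration
EndSection
```

Get one line wrong and X may refuse to start at all, dumping you at a text console—hardly a desktop-friendly failure mode.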

Another hardware-related source of grief is the difficulty of getting most WiFi chipsets to function. This is largely due to the unwillingness of Broadcom and other chipset manufacturers to release either working drivers, or enough of an API for the Linux driver team to write their own. Some of the blame, however, can be laid at the feet of the Linux network stack, which in most cases still requires poking around in config files even to set up basic things like static IP addresses, network SSIDs, and WPA2 keys.
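The "poking around in config files" looks something like this on a Debian-style system—a sketch only, with the interface name, addresses, SSID, and passphrase as placeholders (the wpa-ssid/wpa-psk hooks are Debian's ifupdown glue for wpa_supplicant; other distributions use different files entirely):

```
# /etc/network/interfaces — static IP plus WPA2 on a wireless interface
auto wlan0
iface wlan0 inet static
    address  192.168.1.50
    netmask  255.255.255.0
    gateway  192.168.1.1
    wpa-ssid "HomeNetwork"
    wpa-psk  "passphrase-goes-here"
```

On Windows or OSX the same settings are a couple of clicks in a dialog box.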

USB flash drives and external hard drives are another source of grief in Linux, as there is no agreed-upon default location in the filesystem for these drives to appear once mounted. (The /mnt directory is suggested, but each device must be given its own empty subdirectory, which must be made beforehand.) Windows’s habit of assigning drive letters to each partition makes this a non-issue, as does OSX’s habit of making drive icons appear directly on the desktop for interaction. Linux, in its default state, forces the user to set these mount points manually, for each drive, and requires user intervention to even begin the mount process. Recent iterations of the Gnome and KDE desktop environments have attempted to automate this process, but neither one works consistently and predictably.
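For the curious, the manual ritual looks roughly like this (device node and mount point are examples—the real node depends on detection order, which is part of the problem):

```
# One-off manual mount, run as root:
#   mkdir /mnt/usbstick
#   mount /dev/sdb1 /mnt/usbstick

# Or a permanent /etc/fstab entry, so at least the mount point is stable
# and ordinary users can mount it themselves:
/dev/sdb1   /mnt/usbstick   vfat   user,noauto   0   0
```

Compare that with plugging a drive into a Windows or OSX machine and simply waiting a second or two for it to appear.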


Linux is quite possibly the most versatile OS to date. Its scalability, reliability, and unique anyone-can-contribute development model, combined with its already vast popularity in the server world, give it a unique position among operating systems. It has the potential to eventually displace Windows as the OS of choice for the desktop, but for now, the issues outlined above act as major stumbling blocks to its success. Hopefully they will be addressed in future development efforts. Once that happens, we may be able to truly declare that Linux is ready for prime-time in the home.
