Browsing the archives for the Technology category

Legolas Skynyrd – Freebird

No Comments
Games, Technology

For quite a while I've been hoping that, one day, a game would come along that let players write their own music. After all, modern computers have sound cards and can generally run complex music-synthesizing software, and sending a string of MIDI events across the internet is no more outlandish than sending real-time physics data for fifty or so players and potentially hundreds of projectiles (see: almost any multiplayer first-person shooter).
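Just to put rough numbers on that claim, here's a back-of-the-envelope sketch in Python. The three-byte MIDI note-on format is standard; the layout of the per-player physics snapshot is purely an illustrative assumption of mine, not taken from any particular engine or from LOTRO.

```python
import struct

# A raw MIDI note-on message is three bytes: a status byte (note-on + channel),
# the note number, and the velocity.
def midi_note_on(channel: int, note: int, velocity: int) -> bytes:
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# A deliberately crude, made-up per-player physics snapshot: a 4-byte entity id
# plus position, velocity, and orientation as three floats each.
def player_snapshot(entity_id, pos, vel, orient) -> bytes:
    return struct.pack("<I9f", entity_id, *pos, *vel, *orient)

note = midi_note_on(0, 60, 100)   # middle C at medium velocity
snap = player_snapshot(7, (1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))

print(len(note))   # 3 bytes per note event
print(len(snap))   # 40 bytes for one toy player state, sent many times a second
```

Even if the real game state is far richer than this toy snapshot, a stream of note events is tiny by comparison.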

So recently I came across this video from Lord of the Rings Online:

Very cool! Although it does bring up an obvious flaw I'd never thought of: what happens when people use this music-synthesizing utility to break in-game character? From what I can tell, this sort of thing is purely a roleplay / fluff feature (no tie-in to mechanics), so the fact that something intended primarily to improve immersion is seemingly used almost exclusively to break it is problematic.

Windows into Mac, part 2

No Comments
Technology

Continued from part 1.

There are plenty of reasons why window management is easier on Windows than on Mac OS, but I want to home in on one aspect in particular: the Windows Taskbar and the Mac OS X Dock.

I remember hearing, when the Dock debuted in OS X, that it was a human interface disaster. A few years later I started using it myself. The Dock is one of those features that looks really great on display, but when you sit down to actually use the thing you quickly realize its limitations.

When I started using OS X many years ago, Mac computers still shipped with standard-size monitors. Lately, however, all Macs come with widescreen displays. I'm still not quite sure what the benefit of a widescreen monitor is supposed to be over a standard screen. Is having less vertical space but more horizontal space really all that useful? In my experience vertical space is a lot more valuable than horizontal space, particularly if you do a lot of reading. But I digress. So every new Mac comes with a widescreen monitor, with a menubar across the top of the screen and a Dock across the bottom, immediately eating up a sizeable amount of vertical real estate on the display.

Here’s a picture of the Dock in a standard install configuration. You can pretty much envision the usable screen area as consisting of 75% of the area between the menubar and the dock. The sides also need some wiggle room since you need to be able to stack windows and use the scrollbars.

[Image: Starting OSX Desktop]

Yes, the Dock can be moved to the side. Yes, the Dock can be shrunk in size. Even at the smallest size and shoved off to the side your Dock will still take up ~5% of your screen. If you shove the Dock over on the right side of your screen you’ll get the added benefit of having it constantly overlapping your window scrollbars, so the left side is pretty much the only realistic option.

So what exactly is the Dock useful for, anyway? Well, it does two main things: it stores icons for commonly used applications, and it stores windows that you've minimized.

The first of these, storing icons for commonly used applications, is typically what ends up taking up 80-90% of the Dock space. Your default Mac configuration will have the Finder, iTunes, iPhoto, Mail, iMovie, and so on and so forth. Probably a good 15 items to start with. Some of them you can't ever really get rid of — the icon for the Finder is pretty much always going to be in the Dock, since the Dock shows running applications and the Finder is (almost) always running. If my own experience is any guide, a user's Dock gets more and more cluttered as he accumulates more programs he needs to use.

The problem with this is that the Dock can only realistically accommodate so many icons. Probably about 50 along the vertical at the smallest icon size. Fifty icons would be pretty acceptable if the only icons shown were those in use. But the Dock isn't designed to show only running programs; it's meant as a quick-access tool for launching programs, and it's really easy to accumulate another 20-25 programs you use and turn the Dock from a snazzy-looking thing into a muddle of indistinguishable program icons.

Windows also has a tool for displaying programs for quick and easy access. It's called the Start Menu, it takes up about a tenth of the space the Dock does, and it can display hundreds of items instead of fifty or so before becoming too cluttered to use. The Start Menu utilizes this magical invention called a "menu" that pulls out when in use and retracts when not. It's pretty cool. At one point, if you were really in the mood, you could do something similar with the Dock by making a folder and filling it with program aliases, but Apple has apparently decided that menus are verboten on a computer, so now if you do this you get, in essence, a much larger Dock popping up over your whole screen. Wonderful.

The second thing the Dock is useful for is storing “hidden” windows and tracking what programs are open. The latter is a bit of a corollary to the first, since generally speaking, a program that’s open is also going to have a window open.

It's been something like a decade since I used an OS 9 machine, but I do recall that the Applications menu handled the tracking of open programs in a much more elegant and utilitarian way. While the Dock insists on displaying the icon of every open program in a string along its length, OS 9 had a single icon representing the program that currently had window focus, which also served as a drop-down menu for switching to other open applications. It's too bad this feature seems to be gone, as it was space-conscious and faster than the Dock.

A new problem has also been introduced in the Leopard version of OSX…

[Image: Leopard Stars]

Previous versions of OS X indicated open applications via a black arrow next to the application icon. Leopard has switched to a new indicator which is best described as a "glowing orb of indistinguishableness." As small a thing as it may be, it's quite irritating that Apple has [yet again] chosen style over substance. Not only is the new "active application" indicator (the circle) harder to see than the black arrows, it's also easily confused with a background element, particularly when using the default desktop image of a nebula and stars.

As for Windows, it really isn't that much better in this area. Windows does a fair job of presenting most open applications to you up-front, but some applications, for no discernible reason, end up in different parts of the Taskbar, and it seems quite random which go where. The inconsistency is the main problem here; there are no real visibility issues or excessive focus on visual flair over usability.

The feature that most distinguishes the Dock from the Taskbar is the handling of hidden (or otherwise) windows. You see, the Dock only stores hidden windows; the Taskbar stores windows, hidden (i.e. minimized) or not. This seems like a subtle distinction, but in practice it has major implications.

Let's assume one has two programs open — for example, Photoshop and Firefox. Both of these programs typically run in a full-size window or full-screen, so in a single-monitor setup you have to decide which program will be at the forefront at any one time. On a Mac, if you are working in Photoshop and want to switch to Firefox, you must make the Photoshop window smaller, move it out of the way, find the Firefox window, then click the Firefox window to bring it to the forefront. In Windows, if you are working in Photoshop and want to switch to Firefox, you click the Firefox button in the Taskbar.

The difference is that because your Firefox window was "open" and not "hidden," the Dock did absolutely nothing to help you manage it. In other words, unless you're in the habit of minimizing every window you ever shift focus from, the Dock isn't going to assist you. In OS X you must take charge of managing the size and placement of open windows yourself. That isn't generally a huge hassle, but it is rather inconvenient to have to do work the interface ought to handle for me.

Now, again, presumably the Exposé feature added in a later version of OS X lets you work around this, but a keystroke is definitely not as intuitive as building the behavior into the graphical user interface. The GUI is what most Apple fans rave about, anyway, and here it is, doing something that's critical to day-to-day usage less well than Windows.

I can always cross my fingers that Apple will consider trying to match Windows' functionality in this area, but I'm skeptical that's even possible using the Dock model. After all, the Dock's ideal place is on the left side of the screen, which makes it a less-than-ideal place to display text. And even assuming Apple made the commitment to improving their GUI by having the Dock hold all windows, hidden or not, you'd need identifiers (i.e. text) to distinguish between open windows quickly and easily. The Dock is really poor at displaying text, and since it's such a space hog its icons, especially for windows, are frequently so tiny they're indistinguishable.

Windows into Mac, part 1

2 Comments
Technology

Lately at work I’ve been forced to juggle both Macs and PCs. I have several years’ experience using Macs, but I’ve been working exclusively on XP-based PCs for the past couple of years, so it was a little bit of an adjustment going back. What’s been interesting to me is getting a fresh perspective about what works and what doesn’t work about the Mac or PC. Also of note is that the last version of OSX that I used was 10.3 and some of the newer machines have 10.5 installed, and there are some differences there.

Fair warning: I'm an "advanced" user, so if you only use your computer for checking email and browsing the web, chances are you won't notice or encounter the same issues I do.

What's interesting to me about going back to Mac OS after an extended period of using Windows is that Windows ought to be called something like "Microsoft OS," and Mac OS really ought to be called "Windows." Because, while Windows has the eponymous windows, windows are actually far more integral to the Mac OS than vice versa.

What do I mean by that? Well, if Microsoft pushed out a universal patch tomorrow that affected every Windows computer in the world and caused every window to run full-screen, then as long as the Taskbar remained intact I don't think it would hugely upset things. On the other hand, if Steve Jobs decided tomorrow that windows were passé and moved Mac OS to an all-full-screen mode, with nothing left but the menu bar and the Dock, your typical Mac user would be in deep, deep trouble.

So, to me, it's pretty obvious that the Mac is far more invested in this user interface philosophy of "windows." And since I've been using Macs regularly again, I've noticed that this makes the Mac kind of troublesome to use, because as a graphical user interface (GUI) it's frankly just inferior to Windows at managing windows. And I do enjoy using Mac OS, since I can open up a shell and do everything I need pretty easily. But it's ironic, because everyone always talks about the Mac OS GUI, and that's the weakest link in my opinion.

Why? Well, here's the big one: window management. This is something I've come to notice myself doing quite a bit of in OS X now, and something I remember doing quite a bit of in OS X before. After some observation, it's also something I've noticed myself not doing in Windows XP. Even allowing that my XP machines are "worn in," so to speak, while the Mac machines I'm using may not be, I've noticed there are some reasons why window management is always part of the stock Mac OS experience and not part of the Windows XP one. I'm going to try to explore this in a bit more detail later.

Manga Multimedia

1 Comment
Anime, Culture, Technology

It's been a while since I posted anything, so I'm going to pick up from an earlier post I made on Bleach — I didn't want to totally dismiss the series out of hand having only watched the anime. After all, anime adaptations are usually pretty debased compared to their manga versions. It's entirely possible that the Bleach manga has fantastic artwork compared to the anime, as some of the comments on the YouTube version intimated. So I did a little searching and actually managed to find some of the Bleach manga online.

One bad trend I noticed is that people usually set these manga chapters to music. I turned off my volume, as I really don't care about hearing YouTubeKid99's favorite song of the moment. But at some point after watching a couple of these, I turned my sound back on and was surprised: the chapters I was looking at actually had some thought put into the music choices, the timing, and the panning of scenes and pages. Pretty cool stuff, even if it is for a derisible power-up manga.

TonyCHRYSA in particular seems to post some awesome, well-composed videos. He’s the source of the video at the top of this post, and most of the others that I watched in order to get a feel for Bleach as a manga as opposed to an anime.

These videos didn’t change my mind about Bleach — The manga has the same poor art quality and general power-up fantasy substitute for a story as the anime. But I do find it interesting how people on YouTube can invest the time and effort into these black and white comics to set them to music, pace the video, do special effects, and so on and so forth to turn what you’d normally expect to be a simple visual experience into an almost-interactive one.

Computer Myths

No Comments
Games, Technology

While following Shamus' recent posts on piracy in computer games, I was linked to this post by Brad Wardell of Stardock talking about piracy and the games industry in general. Since Stardock is primarily a vendor of software other than games, Brad talks about how his experience there taught him to target the largest possible userbase and ignore the plethora of pirates. He then goes on to relate this to the games Stardock has been involved in developing recently, and how little attention they've received in the game news media.

Reading this, I can't help but think of my experience with Galactic Civilizations 2. I bought GalCiv2, played it intensely the first weekend, played a bit through the first week, and by the time the second weekend rolled around I'd pretty much stopped playing (granted, I did get involved on the forums agitating for game update patches). I liked GalCiv2 well enough, but the reason my interest was so short-lived is that it was, as Brad Wardell says, targeted at a broad consumer base. The strategy involved in the game was too simple to scratch the itch of someone like me who wanted a true sequel to Master of Orion 2. (Obvious example: attackers attack first, so if you build a fleet of ships with all weapons and no defensive capability, you'll win as long as you've got enough firepower to annihilate the enemy in the first turn.)
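To see why that degenerate build works, here's a toy model in Python of a first-strike fight. To be clear, this is a sketch of the general idea and not GalCiv2's actual combat formula; the point budget and the one-volley resolution are my own simplifications.

```python
# Toy first-strike combat model (a simplification, not GalCiv2's real mechanics):
# a fleet splits a fixed point budget between weapons (damage per volley) and
# defense (damage it can absorb), and the attacking fleet always fires first.
def first_strike(attacker, defender):
    atk_weapons, atk_defense = attacker
    def_weapons, def_defense = defender
    if atk_weapons > def_defense:      # the alpha strike destroys the defender
        return "attacker wins unscathed"
    if def_weapons > atk_defense:      # the defender survives and fires back
        return "defender wins"
    return "both fleets survive the exchange"

BUDGET = 20
all_guns = (BUDGET, 0)                 # every point in weapons
balanced = (BUDGET // 2, BUDGET // 2)  # points split between weapons and defense

print(first_strike(all_guns, balanced))   # attacker wins unscathed
print(first_strike(balanced, balanced))   # both fleets survive the exchange
```

In this toy model the all-weapons attacker kills before the enemy ever fires, while a balanced attacker just trades blows; since the attacker chooses when to fight, there's no incentive to build anything but guns.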

Anyway, while Brad’s thoughts were interesting in and of themselves, what I actually wanted to write about is in the comments. For starters,

Speaking as a person with a relatively new $15K rig on my right side and an older (3 years) $10K rig on my left, I think I easily qualify as the definitive hardcore gamer and I must say I have not felt like games have been targeting me. This is of course for a number of reasons. The first and foremost is that I have one utterly frustrating time getting all the stuff to work as a system without anything going other then the OS and drivers. Then getting support for the hardware and OS is a major bitch to say the least.

A $15,000 computer? Let's assume for a moment that this isn't a mistake or a lie. How exactly do you make a computer that expensive?

I paid a visit to the Alienware website to see what the most expensive machine I could possibly build would be. The specs:

Acoustic Dampening Case
1000 Watt PSU
Triple SLI 768MB 8800 GT
Intel C2E 3.0 GHz (w/ OCed FSB)
4GB 800MHz RAM
nForce 680i Motherboard
Vista Ultimate
2 64GB SATA Solid State Drives
2 1TB 7200rpm HDs
4x Dual Layer Blu Ray Drive
30″ Monitor and a 20″ Monitor
Surround Speaker System

So what does this run us? Just barely squeaking in over $11,000. Granted, that's an obscene amount of money to pay for a computer, and you're already paying a hefty premium on every part plus the dubious honor of owning an Alienware machine, but even that doesn't hit $15k. What exactly could one add that would tack another four thousand dollars onto the price tag? A couple more monitors? A gold-plated case?

Aside from being completely preposterous and more likely than not untrue, what I find objectionable about this is that it reinforces the false notion that computer gaming requires these huge expenses. Follow the conversation spurred by Brad's post and you'll see several people who take for granted that running modern games requires a multi-thousand-dollar computer. No game on the market today requires that. Crysis is the one game that might not run at its highest settings on a thousand-dollar computer, and even that is questionable.

As much as I may be, at times, the kind of person who complains about the excessive pursuit of graphics in games (at the expense of things like story and gameplay), it's time to stop complaining about hardware costs. It's simply not valid to claim you need a $3000 computer to run the latest PC games, or a $400 graphics card that you upgrade every six months. These assumptions have taken on such a mythical status that they're taken for granted, and it's time to put a stop to that. Today's triple-A computer games are probably the least demanding of high-end hardware of any I can remember. Maybe if there weren't so much misinformation flying around, people would be able to make more informed purchasing decisions.

Gaming Costs

No Comments
Games, Technology

Unfortunately I've been a bit neglectful of the blog of late. This isn't due to a lack of desire or ideas, just a complete lack of time in the face of a hundred other tasks demanding my attention.

But when I read a comment over at Shamus' blog the other day, I got irritated enough that I had to vent.

What people don’t consider is playability (over restrictive DRM measures inhibit this) and graphics (yes, you’re game looks great and all, but unfortunately I won’t buy it because I can’t run it!) I love the PC as a platform, it allows for so many possibilities, but when I could shell out $400 for a console and some games, or $1500 for a decent mid-range computer that might play the games I want, I think I’ll go for the console.

A good gaming machine that will run all current games (and the generation after), not to mention most past-generation games ("backwards compatibility"), will run you about $1000 these days. Considering that the comment was probably written and posted to Shamus' blog from a computer which, let's be generous, probably cost a few hundred dollars itself — where exactly are these big savings with consoles coming in?

For me this is pretty much a no-brainer comparison: I can buy a high-quality machine for work and personal use and then pile on the additional expense of a console, the restrictions of a console, and ridiculous recurring costs like Xbox Live, or I can buy a marginally higher-quality machine and also play games on it when so inclined.
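Using the figures already mentioned in these two posts, plus an assumed online-service fee (my own rough number, not anything from the comment), the comparison looks something like this:

```python
# Rough, illustrative three-year cost comparison using the figures from this
# post; the online-service fee is an assumption, not a quoted price.
YEARS = 3

console_route = {
    "console": 400,
    "basic computer you need anyway": 500,       # "a few hundred dollars"
    "online service, assumed ~$50/yr": 50 * YEARS,
}

pc_route = {
    "gaming-capable PC for work and play": 1000,
}

print("console route:", sum(console_route.values()))   # 1050
print("pc route:     ", sum(pc_route.values()))        # 1000
```

The exact totals depend on what you assume, but the point stands: once you count the computer you were going to buy anyway, the console's head-line price stops looking like a bargain.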

I suppose for Joe Consumer who uses his Dell or other brand-name computer for nothing more than checking email and looking at porn, it might be a good deal to buy a $400 gaming machine. For those of us who actually use our computers for work, which I assume includes most of Shamus' readership, having a computer that doubles as a gaming platform is almost certainly the cheaper option. Particularly for people who don't own televisions: a niche group, surely, but a growing one, especially among reasonably intelligent people.

The only caveat here is that you can't simply walk into Best Buy and say, "Give me your best $1000 gaming computer." The retailers selling gaming computers all do so at 100% or more markup, so you have to be willing to build it yourself. I know I personally hate dealing with hardware, as I prefer my computers to just work, but when you're talking about a superior computing experience in general — yeah, I'm willing to put in a little bit of work for that. Obviously most people are reluctant to do this because they have no knowledge of hardware and are technophobic, but we all have to start somewhere.

Assisted Suicide for [PC] Gaming

No Comments
Games, Technology

I ran across this article a couple of days ago, which is essentially a set of musings on the state of PC gaming, casting a skeptical eye on the claims that PC gaming is dying off in the face of consoles.

This is a topic of relevance to me, since I do pretty much all my gaming on the computer, and I don't have a heck of a lot of interest in consoles. In particular, I'm a bit concerned because one of the games I'm big on at the moment, Unreal Tournament 3, hasn't been doing so well on the PC side of things. Naturally, people like Mark Rein or the bean-counters at Epic can say that PC gaming is dying, that it's piracy, and this and that, but look at Unreal Tournament 3: the menus in that game are clearly designed to be used on a console. Literally thousands of complaints have been made about the game's menus, and I can't say I'm particularly surprised. You only have one chance to make a first impression, and when that first impression is horrible, you're going to turn people off.

Even though I think the game itself is excellent, and overall the best Unreal Tournament to date, there's absolutely no excuse for shipping a computer game with a user interface that bad. In many ways, this experience confirms the skepticism of the article's author — he suggests that game developers are pushing for the death of gaming on the PC, because gaming on the PC means freedom compared to the locked-down, force-fed, consumeristic model that modern gaming consoles have become. Yes, you'll probably have hardware problems at some point, but at least you can usually do something about them.

But it’s not PCs that will have died, and it’s not consoles that will have won. Consoles are just the tool most convenient for the purpose – locked down systems that can prevent outside innovation without significant initial investment. It’s gaming that will have died, and a single corporate monolith that won. The same rehashed game sold eight different ways – that will be consumer “choice.”

Now that is a scary vision, and it's one we can glimpse on the horizon if things don't change. With ever-increasing budgets to support ever-crazier graphics putting A-list titles beyond the reach of indie developers, a relative lack of improvement in development tools, and the movement toward these locked-in console systems or ridiculous garbage like Microsoft Live, it's hard to see where else things can go except toward more centralization, which in turn means more ability to pressure people into giving up their gaming freedoms, starting a snowball effect straight to hell.

Anonymous vs. Scientology

1 Comment
Culture, Politics, Technology

Unless you've been living under a rock for the past few days, you've probably heard about the war that the internet entity "Anonymous" has declared on Scientology.

This was the first video to hit the interwaves:

A new one has recently come out, though similar in content. There have been some threads on social news sites following the release of these videos. One comment in particular in this Reddit thread struck me as quite interesting. User notany says: “Anonymous might be the first real Stand Alone Complex.”

Spoiler-ish description of Stand Alone Complex behind the cut.


Working out the Kinks…

No Comments
Games, Technology, Unreal Tournament

I’ve been suffering some persistent problems with my mouse lately. As you might expect, this has been kind of a damper on the enjoyment I get out of something like Unreal Tournament 3, as it depends a tad on your ability to aim accurately with a mouse. I’ve also been generally dissatisfied with the level of performance my machine has been giving me lately, so I figured it was time to start looking into upgrades.

I figure I'll need at least a new mouse, some more RAM, and a new graphics card. That's for starters, though I may need more than that. I ordered the RAM and installed it recently, and I'm happy to say my performance across the board has improved drastically. And oddly enough, my mouse problems cleared up. I guess I won't be needing that new mouse after all? Either way, I'm really happy, and my Unreal Tournament 3 experience just got better by leaps and bounds. Having a partially functional mouse is extremely frustrating, but I suffered through it because I like the game so much — now that I actually have decent control? Excellent. I went and won a few DeathMatch matches today on public servers, and actually got my heart pumping with the action. It's been a while.

I guess while I’m talking about it, I should mention that I actually ran across a modified demo server already — Unfortunately I didn’t catch the IP address or the name, except I know it’s a clan server with tags {*X*}, with * being a single-character wildcard. I don’t know exactly what they did with the settings, but it felt like it was running at 150% gamespeed with low gravity, and was running Heat Ray in a loop. That was great fun, and anyone who misses the Unreal Tournament 2004 mobility should love it.