Tuesday, November 30, 2010


Is it just me, or are Android base fonts just ugly compared to the iPhone base fonts?

Monday, August 23, 2010


Went to iOSDevCamp 2010 this past weekend. Must say that it's always a blast going to these things. Nothing like spending an entire weekend coding an app that could end up being something brilliant (or equally terrible), with people you don't even know. Thankfully, with John Ellenich (from BickBot.com) as my partner in crime for the weekend, we managed to churn out an amazing app in just two days.

Enter DIODE. It's a simple yet addictive game. The best kind. All you have to do is trace your finger along a path that's lit up, but the time you're given gets progressively shorter and shorter. You can see me demoing it here. I failed hard on the second level, but it was on purpose! I swear!

I did all the coding, while John did all the artwork, music, and robotic voiceovers. Good designers are key for great apps.

One last thing I should mention is the multiplayer I managed to squeak in at the last second. It carries the remnants of the ideals I tried to put into Whorl'd Champions: many people, many hands, one screen. The majority of games on the iPad are still just single-player or online multiplayer, and I don't get why. Even one of the other teams, the makers of xTanks, had a similar idea. There are a lot of big game companies out there making apps for iOS devices that could be doing this. Get cracking!

As for DIODE itself, it'll be in the app store as soon as it's polished up a bit more. Hope you'll buy it and show single-screen multiplayer games some love!

Sunday, April 18, 2010

iPadDevCamp Hackathon

So I'm at iPadDevCamp, doing their hackathon. I entered this camp kinda thinking that I'd be doing a little coding, mostly listening to lectures.

Instead, I may have stumbled across the most brilliant idea for gaming on the iPad. Here's my little spiel on my revelation:

So what makes the iPad unique? Most simply, it has a large touch screen. The iPhone/iPod touch screen is small, making it primarily a solo experience, maybe accommodating one other player. The iPad is huge in comparison, allowing many people to view it at one time, and can handle up to ten touches. With just fingers, the iPad provides an excellent platform for gaming. There are plenty of games coming out that use iPhones as controllers. Although it's definitely cool, I think that might be a mistake. Not everyone has an iPhone/iPod on them (as much as we'd like to dream), but everyone has fingers. The iPad has a touch screen, people. It is a controller in and of itself.

So what resulted from two days of intense coding was Whorl'd Champions (many more whorl'd puns incoming). It currently supports two game modes, Twisted Whorl'd and Whorl'd in Motion. Twisted Whorl'd is a variant of Twister for the iPad (players have to keep their fingers on circles of a given color). Whorl'd in Motion has players trying to keep a finger on a single moving circle, all the while bumping and colliding with the other players.
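To make the modes concrete, here's a minimal sketch of the core Twisted Whorl'd check. The names, coordinates, and data layout are my own illustration, not the actual game's code: a touch counts as covering a circle if it lands within the circle's radius, and each player's circle has to stay covered.

```python
import math

def touch_on_circle(touch, circle):
    """True if the touch point (x, y) lies inside the circle."""
    dx = touch[0] - circle["x"]
    dy = touch[1] - circle["y"]
    return math.hypot(dx, dy) <= circle["r"]

def players_still_in(touches, circles):
    """For each player's circle, check whether any touch is covering it."""
    return [any(touch_on_circle(t, c) for t in touches) for c in circles]

# Two players, two circles; player 1 is on target, player 2 has slipped off.
circles = [{"x": 100, "y": 100, "r": 40}, {"x": 300, "y": 200, "r": 40}]
touches = [(110, 90), (500, 500)]
```

With up to ten simultaneous touches reported, the same per-circle check scales to however many players you can fit around the screen.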

Code for it can be found at:
However, the code is extremely shoddy and should probably be reworked from the ground up. What you SHOULD take away from this app is the core idea behind it: multiple people, multiple fingers, one screen. I think this is a complete shift in design that many of the iPad game companies are completely missing, and I hope everyone will take it to heart as iPad development continues.

Edit: It won Honorable Mention, behind Tank Or Die. A little sad, but I kinda expected it. A lot of the other projects had far more polish behind them, especially Tank Or Die, a top-down tank shooting game with iPhone controllers, very nice graphics, and some chiptune music that I recognize and adore (though the title escapes me). Ours was really dinky. One look and anyone would choose Tank Or Die.

I'm just glad that our dinky little hack got some recognition. I'm hoping that more game companies will catch on. Either that or I start creating some more polished hacks and get some extra side cash from the App Store.

Edit 2:
Here's me, doddering through my presentation:

Tuesday, December 1, 2009

Linguine with White Clam Sauce

Linguine with White Clam Sauce.
Wine: Girard Sauvignon Blanc 2008
Recipe: Epicurious: Linguine with White Clam Sauce.

Costco was selling cockles for around $1.25/lb, so I randomly picked up a bag of them. I've never worked with clams before, so for $6, I figured it couldn't hurt. Working with 5 lbs of clams in one big pot is pretty troublesome and weird.

The cockles are still alive, so you can spend time looking at them in awe as they open up slowly, then rush closed when you give them a tap. Some of them leave something of a tongue hanging out, which they attempt to slurp back in before closing. Some of them (3 in my batch) are dead, hanging open like nobody's business. I cleaned them off by tossing them in salted water and letting them sit for an hour; they expel a bunch of nasty stuff that might otherwise have ended up in your mouth. Then I gave them a quick rinse and tossed them into the recipe.

The recipe turned out pretty good. I always tend to go a little heavy-handed with the salt, and I accidentally did it again, but the sauce still turned out well, just a tiny bit on the salty side. Tossed it with linguine, sat down in front of some Dexter and Heroes, and ate it up.

Sunday, August 23, 2009


So one of the things I've been doing this summer is translating manga (comics) from Japanese to English.

Mostly, I've been responsible for the translation of the manga Akaboshi Suikoden. It's based on an old Chinese novel, "Shui Hu Zhuan" or "Water Margin". I figured I should make these summers, where I have a bunch of free time, worth something, so the moment I had the opportunity, I joined a translation team at IEatManga. I had randomly started looking into the JET program in the prior weeks, and it renewed my interest in learning Japanese.

The most interesting (and also unfortunate) part of manga translation is how much it tests your skills in the target language rather than the source language. Ultimately, the target language is what the readers are going to be seeing, and as long as you can get the gist of what the source is saying (whether through your own skills or heavy reliance on a dictionary), you can make do with what you've got. There's a major problem in Japanese (that doesn't occur as often in English) where words are altered drastically and sentence endings carry meanings that dictionaries don't cover, but a basic knowledge of the language can pull you through.

The target language, however, is a whole different level. A dictionary can't help you use "voice". Limited experience in the target language won't let you speak to the reader in their own language. At best, you can only talk in a stilted, formal voice. Although it gets the idea across (since really, your average manga reader glosses over the bubbles), it's still awkward, and you just won't have the experience to choose the right wording.

Good thing I'm an English speaker first and a Japanese learner second, I guess.

Tuesday, June 23, 2009

VR Game Gun

A couple months ago, I participated in a VR experiment with some CS grad students, who had a head-mounted display as well as a motion-tracking device for the hand. All in all, the experience was pretty exciting, but for some reason, the hand tracking was laggy (though the head tracking wasn't; who knows why?).

More recently, some guy has created a hackish VR controller, called the PC VR Game Gun:

I've always been a big fan of VR, and this thing is no exception. It uses a gyroscopic mouse to track pitch and yaw, and I think it uses a keyboard/gamepad hooked up to the gun's innards for movement. All told, these features don't seem all that new, and considering the accelerometers and buttons in a Wii controller, using a Wii mote + Zapper might have actually been a better idea.

What DID catch my eye was the fact that it places a screen right along the scope of the barrel. I'm actually a bit puzzled as to how exactly it's anchored to the gun, though I suppose that's a problem a bit of welding and some screws could fix. What I like about this is that the screen follows the hands rather than staying in a fixed place like a regular monitor (e.g. the Wii).

Granted, a head-mounted display would do much the same, but the problem there is that it requires more expensive location tracking, which means fixed cameras (i.e. not exactly the most portable setup). Gyroscopes measure angular velocity, so they're fine for turning and rotation, but they can't tell where your hands are in relation to your face. It just seems awkward to be holding a trigger that essentially does nothing for where you're aiming.
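To see why inertial sensors alone can't recover hand position, here's a minimal sketch (the bias and sample-rate numbers are my own illustrative assumptions, not from any real device): even a tiny constant error in measured acceleration, once double-integrated into position, grows quadratically with time, so the position estimate drifts away within seconds.

```python
def integrate_position(accel_bias, dt, steps):
    """Double-integrate a constant acceleration error into position drift."""
    velocity = 0.0
    position = 0.0
    for _ in range(steps):
        velocity += accel_bias * dt   # first integration: velocity error grows linearly
        position += velocity * dt     # second integration: position error grows quadratically
    return position

# A modest 0.05 m/s^2 accelerometer bias, sampled at 100 Hz:
drift_1s = integrate_position(0.05, 0.01, 100)    # ~2.5 cm of drift after 1 s
drift_10s = integrate_position(0.05, 0.01, 1000)  # ~2.5 m of drift after 10 s
```

Ten times the elapsed time gives roughly a hundred times the error, which is why absolute position tracking falls back on external references like fixed cameras.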

In this iteration, if you imagine the screen as simply being an enlarged scope on top of a gun (à la CornerShot), then you get a similar amount of immersion at a much lower cost. You won't have as wide a field of view as with an HMD, but the PC VR Game Gun seems to be undergoing a second revision: adding micro projectors on top of the gun to project against the walls.

I'm not quite sure if micro projectors are even powerful enough to do the job; last I checked, most of them put out very few lumens and just aren't very bright at large distances (which you'll probably want when you're waving around a toy gun, shooting virtual mobs). However, the idea itself is rather brilliant; in theory, it'd be no different from running around with a gun/flashlight combo, great for playing a dark game like FEAR. I can't wait for this next iteration to come around.

Sunday, June 21, 2009

LikeHate: Multiple Monitors

I am a big fan of big workspaces, where you can spread out to the point where everything is everywhere. My office (read: my room) is a prime example of this. Everything I need is spread out within a 270-degree arc around me; I have desks in front, to the side, and behind. Just a few small gaps let me out of this interactive jail cell.

So it's no wonder that I use a plethora of monitors to accentuate this workspace. At the time of this writing, I have four monitors hooked up to my main computer, which is a boon for debugging; there are about a million different things that can give me vital feedback, and I need them visible at all times. In addition to these main four, I still have my laptop, and although I haven't yet hooked up another monitor to it (not enough space right now), that brings me to a whopping potential of six screens to give feedback and/or to work off of.

It's a feedback overload made for an informational junkie like me.

So what's there not to love?

Well, unlike the hand, the mouse is not a full extension of the human body. The only feedback of the location of the mouse is a tiny arrow floating around the screen. If you lose visual track of your mouse, you're really left in the dust for a couple seconds, shaking and waving the mouse around in hopes of finding that little arrow again.

I'm not sure if it's even possible to lose track of your own hand. You can directly point at any object from any position with little forethought. Pointing at something with your hand is usually faster than pointing with your mouse. Unfortunately, most consumer computers don't exactly have point-to-screen technology yet, so this is something people have to live with.

A bigger follow-up problem is one of intention; the mouse is your eyes/hands for the computer. It determines what the computer thinks you're currently interacting with, and when there's a disconnect between what you intend to do and what the computer thinks you intend to do, disaster ensues. Try typing random gibberish into IDA Pro, and see if you can recover from the havoc you just blasted onto your assembly code.

The source of the problem is that the visual cues for these context switches are not especially jarring, making it easy to mistake whether you've actually registered your intention with the computer first. In a single-monitor setup, where the user usually has only one window visible at a time, switching to another window is represented by the entire screen changing to the target window. The user gets undeniable feedback that the computer has registered the action. In a multiple-monitor setup, however, windows are much more likely to be fully visible rather than hidden under others. If I switch to one of the other open windows, the only major feedback is the title bar becoming highlighted. That feedback is minimal and easily overlooked or forgotten.

A solution to this isn't necessarily simple; just darkening the whole contents of non-active windows defeats the purpose of having multiple monitors. Adding extra audio/visual cues on context switch could be annoying, and I don't know how long it takes for a cue to become ingrained in the user to the point where the absence of the cue is a cue in itself.