Tuesday, December 27, 2011

Protecting WebGL content (and why you probably shouldn't)

I received an email this morning from a developer that is in a predicament that I think many WebGL developers will find themselves in pretty soon:
Do you have any experience with protecting assets in WebGL? I wrote a quick solution to protect textures during transfer - base64 encoding + concat server side and splitting + creating imgs client side. Still, using WebGL Inspector I can get all textures transferred to the graphics card. I wonder if you have any ideas on how to prevent that. I don't mind open sourcing my apps, but my clients apparently do :)
Ah, that last line is really the crux of the matter, isn't it? Most developers I know don't mind sharing their creations with the world at large, as we tend to understand that there's rarely anything sacred and secret in the lines of code we crank out, but that's not always an easy thing to convince management of. So: What's an enterprising WebGL developer to do when the boss decrees from on high "Protect our content?"

There are a couple of ways to approach such a request, in my opinion. First: Do whatever you can to convince said management that you don't need to protect said content. Second: Convince them harder. Third: If all else fails, obscure the content as much as possible. I'll be talking about these approaches in today's post.
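If you do end up at option three, the flavor of "protection" on offer is roughly this: scramble the asset bytes on the server and unscramble them client-side just before upload. A trivial repeating-key XOR pass is a representative sketch (the key handling here is mine, and note this only buys obscurity: tools like WebGL Inspector still see the final texture):

```javascript
// Undo a simple repeating-key XOR scramble applied server-side.
// Illustrative only - anyone with the page source can recover the key.
function descramble(bytes, key) {
    var out = new Uint8Array(bytes.length);
    for (var i = 0; i < bytes.length; ++i) {
        out[i] = bytes[i] ^ key[i % key.length];
    }
    return out;
}
```

Since XOR is its own inverse, the same function scrambles and descrambles. The asset still has to exist in plain form in memory before it hits the GPU, which is exactly why this can't stop a determined inspector.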

Friday, December 9, 2011

Compressed Textures in WebGL

[UPDATE: The compressed textures spec has been changing, and so the original code posted with this entry stopped running. I've since fixed the code and updated the information below. Be aware, though, that the spec may STILL be tweaked at some point!]

I gave a presentation this last Friday at WebGL Camp 4, the slides of which are online now. I had a great time, met some awesome developers, and saw a lot of things that got me really excited about the future of WebGL. I highly encourage anyone that is interested in WebGL to try and make it to WebGL Camp 6!

During my talk I was able to show what I think may be the first public demo of compressed textures in WebGL! The demo isn't terribly impressive, it simply displays a DXT5 texture loaded from a DDS file, but it shows off the required code effectively enough.


(Warning: That demo will only work on machines that support DXT5 compression. That should be most desktops, but the majority of mobile or tablet devices will be out of luck! You'll also need to be running a fairly new Chrome dev channel build)

Yay! I got a textured cube on screen! Surely I'm the first person ever to do this!

Okay, yeah... it's not all that impressive. The key here is the potential that it provides. Compressed textures have been such an integral part of 3D games and many other 3D applications on desktop, console, and mobile platforms that they've become something of an invisible, pervasive optimization that everyone takes for granted. Up until now, however, they've been left out of WebGL (not without reason; they're tricky to get right). The fact that we're gaining the ability to use them now is, in my view, something of a benchmark of the maturity of the standard.

So what exactly do we mean by compressed texture? If you're not already familiar with the concept from a prior life as a game developer it can be a bit confusing to really grok what we're referring to and why it matters. After all, JPEGs and PNGs are compressed images, right? What's different here?
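For the curious, here's roughly what using one looks like in code today. This is a sketch against the current draft extension (names and enums may still be tweaked), with a small helper for the DXT5 size math:

```javascript
// DXT5 stores each 4x4 pixel block in 16 bytes.
function dxt5ByteLength(width, height) {
    return ((width + 3) >> 2) * ((height + 3) >> 2) * 16;
}

// Upload pre-compressed DXT5 data (e.g. parsed out of a DDS file).
function uploadDXT5(gl, width, height, data) {
    var ext = gl.getExtension("WEBGL_compressed_texture_s3tc");
    if (!ext) { throw new Error("S3TC compressed textures not supported"); }

    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.compressedTexImage2D(gl.TEXTURE_2D, 0,
        ext.COMPRESSED_RGBA_S3TC_DXT5_EXT,
        width, height, 0, data);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    return tex;
}
```

Note that the data arrives already compressed; the browser never decodes it, it just hands the blocks straight to the GPU.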

Sunday, November 27, 2011

Building the Game: Part 5 - Static Level Geometry

See the code for this post, or all posts in this series.
See the live demo.

(WARNING! The live demo for this post will probably take a while to load and will most likely not run at 60 FPS! Optimizations will come in later posts.)

Sorry about the long gap in posts, but this has proven to be one of the more challenging things I've attempted so far. That's primarily because I wasn't too familiar with how Unity handled things behind the scenes until I started working on this particular post. I've found some surprising (and occasionally disappointing) things about how Unity handles its levels, and it's forced me to rethink a couple of aspects of this project, but I think I've got a decent handle on it now, and at this point the order of the day is progress in small increments.

To that end, we're going to start talking about exporting, displaying, and interacting with levels from Unity, but we're going to do so one step at a time. Today's step is simply going to be getting the level geometry and lighting information exported and rendering brute force. We're not going to be worrying about collision detection, visibility culling, or anything else right now than just getting those triangles out of Unity and into our browser! 

Sunday, November 13, 2011

Thoughts on the iPhone after switching from Android

In the last few months, for various different reasons, I've been looking to upgrade my beloved Droid X. There was a lot of back and forth on my part about which phone to get, as I considered Android heavyweights like the Bionic and Galaxy Nexus, but at the end of the day Android was having a hard time really impressing me with its latest and greatest. I've been pretty vocal in the past about my love of Android phones, and at one point even wrote a blog post about why the thought of me getting an iPhone was pretty absurd. And yet... today I've got an iPhone in my pocket, and I couldn't be happier with the decision!

I don't want to spend a lot of time talking about why I made the switch (I've already ranted on Google+ about that.) But I did want to mention a few pros and cons about the switch, to help out anyone else who's wondering which way they want to go.

iPhone Pros:

  • UI Fluidity: I've got the latest and greatest iPhone now (4S), but even going back and playing with a 3GS there's simply no question that Apple's UI is snappier than Android's in just about every way. Android has gotten better, but even the much anticipated Ice Cream Sandwich doesn't quite match the iPhone's effortless interactions and transitions. (And yes, I've played with an ICS device.)
  • Better Browser: On a professional level, as a web developer, this was a very compelling reason to switch. Thing is, I use Chrome exclusively on the desktop so I have a lot of faith that Google can make a great browser. They just haven't on the phone. Once again, ICS makes strides towards this, but it's still not at the "Chrome on your Phone" point that we really want. Apple, on the other hand, has delivered "Safari on your phone" quite nicely. It's a very fast, robust browser that's pretty close to desktop level in terms of capabilities. If you're developing for the mobile web, Safari is what you target before putting hacks in place to support everyone else.
  • More Stable: It was rare that I could go for more than a few weeks without having to battery pull my Droid X. Even when I didn't have to reboot, though, I had to manually kill off rogue apps more than I wanted to. On iOS I've never had to kill a misbehaving app (although my wife often has issues with Facebook, but I'm happy to blame that one on a shoddy app) and I've rebooted exactly once, to install a new OS update. Oh, and speaking of which...
  • OS Updates, when they happen: On Android, even with the best of manufacturers, you were looking at many months between the time Google said "Here's Android 2.x! Have fun!" and the time that you could actually get it on your phone (in skinned, unstable, bloatware infested form). With iOS, I was able to upgrade to the latest version the day it came out. That's pretty spectacular, from a developer's point of view.
  • Keep your grubby hands off my phone! I have exactly 0 Verizon apps on my phone, which is the same number that came pre-installed on it. I don't have a Verizon logo anywhere on my phone. I don't have some special, unremovable skin, and I don't have random social networking bloatware that can't be turned off. Obviously this is possible, but apparently Apple is the only manufacturer that's willing to make an argument for the consumer when it comes to carrier installed crud. I appreciate that more than I can possibly express.
  • Accessories: Everyone and their dog makes iPhone accessories. Motorola makes Droid accessories. Guess which one yields the better selection? (Oh, and I love being able to buy headphones that actually include a working mic and inline volume!)
  • The Screen: This one is a bit love and hate, but first with the love. With every Android phone on the planet (even the almighty Galaxy Nexus!) shrugging its shoulders and saying "PenTile is good enough," Apple's retina display really shines. People, this is what a phone screen should look like! No dithered colors, no jagged edges, no funky refresh rates. Just beautiful, high DPI goodness.
  • The Camera: I don't even want to try comparing my Droid X's camera to this one. It's cruel.
  • The Apps: This, right here, is the big daddy of switching reasons. Look, I love Android. I really do. But let's face it guys: the Apps kinda suck. For one, Apple's store has a larger selection of apps, and far more platform exclusives (especially if you care about games!) But even for those apps that do have an Android counterpart, the Android version is usually the neglected stepchild compared to the iOS app (Pandora, this still means you!) or you may not be able to run the app at all, due to hardware/software incompatibilities (Netflix, anyone?) And then there's the fact that even assuming that you get a great app that works well it's probably already been out on iOS for months before they bothered to port it, and will likely always be behind in getting updates. The fact is: Apple's App selection and quality makes Android's Market feel kinda laughable. Sorry, but it's true.
iPhone Cons:
  • The information is in there... somewhere: To be honest, I was never big into slathering my Droid X with widgets. I did make use of most any widget, however, that notified me when something within the app was newly available. Google Reader and TweetDeck were my favorites in this regard. This had the very nice, practical effect that I could pull out my phone, glance at it, and stuff it back in my pocket knowing there was nothing worth looking at. With iOS, EVERYTHING requires you to go digging to see what's new. The most absurd instance of this is Reeder (which in all other respects is a great Google Reader replacement). It gives you the option to put an unread badge on the app icon (yay!) but then only checks for new items when you open the app (whatisthisidonteven...) This is also true of the notification bar, which I loved on Android but almost never use on iOS because, well, there's no way of knowing that it has anything in it till you pull it down. And, of course, there's no notification light. I always thought the Windows Phone 7 commercials were clever. Now I understand exactly what they're taunting.
  • 4G: I knew that I would be giving up 4G going with the latest iPhone and it wasn't a big deal to me. It's still a bit of a disappointment, though, especially since I don't feel the tradeoff bought me much in terms of battery life (I get about the same battery life on my iPhone as I did on my Droid X)
  • The Screen: I could gush all day about the quality of the retina display, but then I hold my iPhone next to my Droid X and sigh wistfully. It's not a huge problem, and the high DPI certainly helps, but 3.5" is really just a bit too cramped. I don't expect (or even want, really) a massive 4.6" beast or whatever is all the rage these days, but is 4" too much to ask?
  • Mail: I was quite put out to realize that in its infinite wisdom Apple has crippled its Gmail interface. No push notifications. No contacts. You have to go through the clunky Exchange service just to get it working halfway decently, and then there's still weirdness such as "Delete" actually meaning "Archive". On top of that, I really hate how Apple handles conversation threads. Google has just recently come out with an actual Gmail app for iOS, which gives me some hope, but it's still a pretty young app and is lacking some critical features, such as multiple profile support, which means I can't use it. Sigh.
  • Maps: There are many things that the iPhone does better than Android. Maps is not one of them. Android's built-in maps offering completely blows away anything the iPhone offers, either built in or on the App Store. Frankly, it's embarrassing to see Apple touting things like "alternate route selection" in their new OS when Google's had it forever. And that route selection doesn't help me one bit because I have to stare at the screen while I'm driving just to use this sad excuse for "navigation." Voice navigation was one of the first things I missed when making the switch, because Google does it so very well. iOS does have a free MapQuest app that will do voice navigation passably, but otherwise it's a pretty horrid map application, and it makes me sad to have to keep two separate map apps on my phone.
  • Stupid app restrictions: I really don't care to use iBooks when I have a reasonably large pre-existing Kindle library, and it pisses me off that I have to go digging around in the browser to buy new books for it. On the same token, I loved Amazon's CloudPlayer on Android and it makes me sad to know that I'll never be able to use it on this device. These are things I was fully aware of going in, and not enough to sway my decision, but that doesn't stop it from being annoying.
  • Syncing: I made the rather "stupid" mistake of plugging my phone into my work laptop to pull a couple of songs off right after I got it. Little did I know that this would form a bond between machines that was eternal and unyielding, to be broken only by the death and rebirth of my phone, like a Phoenix, through a process that shall henceforth be known as "nuking it from orbit."
    All jokes aside, it's patently absurd that you can only sync your phone with one machine. As long as I log in to iTunes with my Apple ID, can you give me one good reason why I shouldn't be able to use that instance of iTunes to manage my phone content? No. You cannot. It's funny, I've heard a lot of people complaining that Android devices show up as just a dumb USB drive when plugged into your PC. In my mind that's vastly preferable to this insanity.
Now that's a decent list of complaints, but honestly when you look at the whole package iOS comes out on top by a pretty wide margin. You have to be Richard Stallman-style fanatical about avoiding Apple's walled garden to actively let it deter you from the otherwise incredibly solid phone that they've created. I'm still going to be keeping a close eye on Android, and I think that eventually it CAN win out over Apple, but in the meantime I've been liking my iPhone very much, and have yet to regret the switch.

Wednesday, November 2, 2011

Building the Game: Part 4 - Static Model Instancing

See the code for this post, or all posts in this series.
See the live demo.

Today's post is going to be far less involved than the last one, but it's an important subject that we need to nail down the basics of before we move on too much further.

Thus far we've been doing a decent job of showing one thing on the screen at a time, which is great if the game you're building is "Crate in empty space" or "Look at this thing!", but that's not the game we want to build! We want to build games like "Holy crap! That's a lot of stuff on screen!" and "Look at all these things!"

...or something like that.

Sunday, October 30, 2011

Building the Game: Part 3 - Skinning & Animation

See the code for this post, or all posts in this series.
See the live demo.

In the previous BtG we got the basics of a model format in place, but it only accounts for static meshes. Now, static geometry is very important and will make up the majority of any scene in our eventual game. But we all know that the most interesting things you see on screen are the ones that are moving, whether it be the player sprinting across the screen or the rocket careening towards your head.

There are many different techniques for creating motion in a game. It can be sliding a static mesh back and forth for a door, running a rigid body through a physics simulation, generating particle effects for smoke and sparks, transforming texture coordinates to fake flowing water, or deforming a mesh to look like waving cloth. We'll end up talking about some of those methods as we work our way through this series, but today I want to talk about the big daddy of animation: Skeletal Animation and Mesh Skinning!
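The core idea is simple enough to sketch on the CPU: run each vertex through the matrices of the bones that influence it, then blend the results by weight. (The helper names below are mine, and in practice this work happens in a vertex shader, but the math is the same.)

```javascript
// Transform a point by a flat 16-element column-major matrix (glMatrix-style).
function transformPoint(m, p) {
    return [
        m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
        m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
        m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14]
    ];
}

// Toy linear-blend skinning for one vertex position.
function skinVertex(pos, boneMatrices, boneIndices, weights) {
    var out = [0, 0, 0];
    for (var i = 0; i < boneIndices.length; ++i) {
        var p = transformPoint(boneMatrices[boneIndices[i]], pos);
        out[0] += p[0] * weights[i];
        out[1] += p[1] * weights[i];
        out[2] += p[2] * weights[i];
    }
    return out;
}
```

A vertex halfway between two bones ends up halfway between where each bone alone would put it, which is exactly what makes joints like elbows and knees bend smoothly.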

Wednesday, October 26, 2011

Building the Game: Part 2 - Model Format

See the code for this post, or all posts in this series.
See the live demo.

So, after the previous BtG post we had a bare-bones pile of boilerplate code and nothing terribly interesting to show with it. Obviously it's impractical to hand-code all of our meshes into our game (although that would make it fast!), so the next order of business should be creating a way to get meshes out of the various different modeling tools and into our renderer in a format that's efficient and flexible.

In layman's terms, we need a model format!
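Just to set expectations for what "a model format" means here: a small JSON-ish header describing the layout, paired with a binary buffer of vertex data. The sketch below is purely illustrative; the field names are hypothetical, not the format this series will actually define:

```javascript
// Hypothetical model header - a real format would pair this with a
// binary buffer holding the interleaved vertex and index data.
var modelHeader = {
    name: "crate",
    vertexFormat: ["position", "texcoord", "normal"],
    vertexStride: 32, // bytes per vertex: (3 + 2 + 3) floats * 4 bytes
    vertexCount: 24,
    meshes: [
        { material: "crate_diffuse", indexOffset: 0, indexCount: 36 }
    ]
};
```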

Tuesday, October 25, 2011

Building the Game: Part 1 - The Setup

If you haven't already, read Building the Game: Part 0 - The Foreword to find out what this is all about.

See the code for this post, or all posts in this series
See the live demo.

So, you want to make a game, huh? 

Most of the time when you hit that point the best thing you can do is pick up an existing set of tools or code base and run with it. While you'll probably never find a pre-built toolkit that will fit your vision exactly (and if you DO, why are you building your game? It's already been made!) there is a lot to be gained by picking up Unity or UDK or even something a little less recent like the Quake 3 Source and hitting the ground running. You'll inherit a tested, robust tools pipeline to handle things like level building and model importing. You'll benefit from the experience of veteran developers in the code base. You'll have a large community of people playing with the same tools that you can turn to with questions. In short, you'll skip over all the boring, tedious parts of painstakingly tweaking the technology to the point that it's game-ready and focus instead on doing something that makes your game unique.

Essentially: You're going to be very hard pressed to come up with a good reason for NOT building on someone else's work. Save yourself a lot of heartache and try an existing solution first, because you'd have to be insane to want to start your own game from scratch...

Monday, October 24, 2011

Building the Game: Part 0 - The Foreword

See the code for all posts in this series

While at onGameStart this September I had a chance to talk to a lot of great developers, many of which were either building or marketing their own web games. A lot of them knew about the demos that I was doing, and I got a bunch of compliments, but I also got one question over and over again that I had honestly never really thought much about:

"Where are you going with all this?"

I was truly surprised at how many times the question came up, and I realized very quickly how silly it was that I couldn't really point to any sort of end goal to it all beyond "Make a cool proof of concept."

The truth is, I've been doing the WebGL demos primarily because, well, they're a hell of a lot of fun! Also, I've enjoyed the opportunity to contribute to the dev community in some small way through the code I've made available. And those are great reasons, to be sure, but given the demos that I've already done and the time that I've sunk into it I could be halfway to a working, playable WebGL game by now, not just some static scenes. So... I figured I would give it a go!

I've got some reasonably modest goals for the game itself, but ones that will push the capabilities of a Web-based game. Obviously it will be WebGL based (I mean, it would be a bit of a letdown if I decided to do a 2D game in flash, right?) and right now I'm thinking it will be a very simple, multiplayer only, 3rd-person shooter. This automatically dictates a couple of the development focuses going forward:
  • As a multiplayer game, I don't have to worry about AI at all. (At least not at first)
  • Being multiplayer also means that I need to develop a decent server and set up the basics of realtime communication. (Hello Websockets!)
  • Third person games tend to play well with both mouse and joystick based controls, so while I am dependent on the browser manufacturers to get one or the other implemented it's probably a safe bet that one of them will be available by the time I reach that point in development.
  • We'll need to have systems and formats in place for Models, Maps, Materials, Animations, etc.
  • We'll need to establish a tools pipeline for getting content into the appropriate formats.
  • And since this content is being delivered through your standard web connection, it would also be great to have some asset management on the client side.
None of these are what you would label a simple problem, but as long as we keep the scope nice and tight and don't try for anything crazy it's all within reach. Being just one programmer, "not trying anything crazy" means making use of a lot of freely available tools, libraries, and art, with a healthy dose of the ever dreaded programmer art thrown in for good measure. What I'm getting at is that this will probably not be a stunning looking game, but I'll be working to ensure that the engine is at least capable of supporting nicer art should an actual artist want to use the code that I create.
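To make the WebSockets bullet a little more concrete, the realtime messages could be as simple as tagged JSON blobs. The shapes below are hypothetical, not a finalized protocol:

```javascript
// Encode/decode the tiny envelope every game message would travel in.
function encodeMessage(type, payload) {
    return JSON.stringify({ t: type, d: payload });
}

function decodeMessage(raw) {
    var msg = JSON.parse(raw);
    return { type: msg.t, payload: msg.d };
}

// Client side, this would ride over a WebSocket, e.g.:
// socket.send(encodeMessage("move", { x: 10, y: 0, z: 4 }));
```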

Which brings me to a very important point: I'm still very interested in giving back to the community, so everything I do for this project will be open source. The code will be done with an eye towards encouraging others to take it and run with it, either as a whole or in parts. I'm not interested in making a "framework"; the code will be designed specifically for the game I'm building. But that doesn't mean that it can't be built to be "mod-friendly". I would love for other devs to be able to build on the base game that comes out of this.

I'll also be writing blog posts and putting up demos as I hit various milestones. The posts won't be tutorials, nor will they be simple progress updates, but probably something in-between. I'm more interested in talking about the high level decisions that go into the process and how my experience with the previous demos that I've done influences the choices I make for my own game.

Now, I'm not a professional game developer. I can't promise that everything I do will be a "best practice" or completely optimal. I'll probably re-invent a few wheels along the way, and it's very likely that from time to time I'll have to admit that one choice or another didn't work out so well and code will be scrapped and redone. It's basically a grand experiment on my part, but that's what makes it fun.

The first couple of posts worth of code are already done, and I'll be putting them up over the next few weeks. Wish me luck, and let's hope that this doesn't fall flat on its face.

Next post in the series: The Setup.

Friday, October 7, 2011

glMatrix 1.0 Released

glMatrix has just received a few relatively minor updates, but in order to make my life easier I've decided to relocate the repository to GitHub so that I can manage all my projects in one place. I've decided to mark the occasion by making glMatrix officially Version 1.0!


There is certainly still room for improvement here, but hopefully GitHub will make it easier to work with the community on features and bugs than Google Code did.

One thing to note, since it's a fairly significant behavioral change: I was notified that quat4.toMat3/4 was producing transposed matrices in older versions of glMatrix. I have fixed that in this version, but if you were relying on that behavior in older versions you may have to update your code to compensate.
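For anyone wanting to sanity-check their own results, here's a self-contained restatement of the quaternion-to-3x3-matrix math as it should behave after the fix (column-major, glMatrix-style; this is my own sketch, not the library source):

```javascript
// Convert a quaternion [x, y, z, w] to a flat 9-element column-major
// rotation matrix, matching the corrected (non-transposed) orientation.
function quatToMat3(q) {
    var x = q[0], y = q[1], z = q[2], w = q[3];
    var x2 = x + x, y2 = y + y, z2 = z + z;
    var xx = x * x2, xy = x * y2, xz = x * z2;
    var yy = y * y2, yz = y * z2, zz = z * z2;
    var wx = w * x2, wy = w * y2, wz = w * z2;
    return [
        1 - (yy + zz), xy + wz,       xz - wy,
        xy - wz,       1 - (xx + zz), yz + wx,
        xz + wy,       yz - wx,       1 - (xx + yy)
    ];
}
```

A quick sanity test: the identity quaternion should yield the identity matrix, and a 90 degree rotation about Z should map the X axis onto the Y axis.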

Wednesday, October 5, 2011

Source Engine Levels in WebGL: Video + Source

Took the time to get the source and video demo for the Source Engine demo online tonight. Have a look!

GitHub Source

And of course don't forget to check out the Tech Talk post if you want to know more of the nitty gritty details!

I stressed for a little while about fixing some of the remaining issues before posting anything (like getting water rendering) but in the end I decided that the demo serves its purpose in that it shows that we can push a lot of geometry for a complex scene and still run at the speeds that realtime games require. Beyond that, however, I won't be able to make use of the format for any other projects, and at this point I want to start working on something that is actually playable, not just looks pretty. As such, spending any further time on the format isn't practical.

I have started on my next WebGL project, and hopefully I'll be at a point that I can start posting about it soon. See you then!

Thursday, September 29, 2011

The state of the Javascript Fullscreen API

I gave the JavaScript Fullscreen API a shot tonight, something I've been meaning to do for a while. To test, I went and added a simple fullscreen button to my Quake 3 demo. I've posted the current results online (Webkit only for the moment, sorry!) so that you can play with it if you want, but don't go expecting too much just yet.

The good: It does indeed switch your browser to fullscreen mode and isolate the given element.

The bad: Pretty much nothing else works yet.

[UPDATE! nornagon has addressed pretty much all of these issues in the comments! I've also updated the Q3 demo to reflect his advice. See below.]
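In the meantime, since the entry point is still vendor-prefixed, the first step is just sniffing out whichever method the browser actually exposes. A small sketch (the prefix list reflects the browsers of the moment; the helper name is mine):

```javascript
// Return the name of the first fullscreen-request method the element
// supports, or null if the browser doesn't expose any of them.
function getFullscreenMethod(element) {
    var names = ["requestFullScreen",
                 "webkitRequestFullScreen",
                 "mozRequestFullScreen"];
    for (var i = 0; i < names.length; ++i) {
        if (typeof element[names[i]] === "function") { return names[i]; }
    }
    return null;
}

// Usage: var method = getFullscreenMethod(canvas);
// if (method) { canvas[method](); }
```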

Tuesday, September 27, 2011

Source Engine Levels in WebGL: Tech Talk

So I've just gotten back from onGameStart and have been very pleased with how well my "Surprise Project" was received! For anyone not at the conference or following me on Twitter, at the end of my presentation I demonstrated a Source Engine level (2Fort, from Team Fortress 2) running in WebGL at an absolutely stable 60fps! Now, to be perfectly fair there are a lot of bits of the rendering that don't work properly yet. Off the top of my head, it's still missing: Normal mapping on brush surfaces, displacement surfaces, water rendering, 3D skybox, any shaders that use cubemaps, accurate lighting on props.. you get the idea. Over the next few weeks I'm going to try and fix some of the more egregious omissions after which I'll put the code up on github for any enterprising developers, and I'll also post a Youtube walkthrough of the level, but don't expect a live version any time soon.

[Update: Video and Source Code are online now!]

Tuesday, August 23, 2011

jsStruct: C-style struct reading in Javascript

Protip: I ramble a lot before getting to the link in question. You probably just want to jump straight to the project.

So in case you haven't picked up on it yet I tend to work with a lot of binary files in Javascript. This is, to put it kindly, an absolute mess. (I would put it unkindly, but this is a family friendly blog!)

Now, to the credit of the browser makers, binary parsing has certainly gotten a lot better in a very short period of time. When I was doing my Quake 2 and Quake 3 demos, the only way to parse binary was to request you file as a raw string from the server and use String.charCodeAt() to grab the bytes one by one and reconstruct them into the appropriate data types. This meant that parsing a float looked like this:
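It went something along these lines (a reconstruction from memory rather than the exact original, but faithful to the technique): pull four character codes out of the string and reassemble the IEEE 754 sign, exponent, and mantissa fields by hand.

```javascript
// Parse a little-endian 32-bit float from a raw binary string.
// This is the painful pre-TypedArray way of doing things.
function readFloat32(str, offset) {
    var b0 = str.charCodeAt(offset) & 0xff,
        b1 = str.charCodeAt(offset + 1) & 0xff,
        b2 = str.charCodeAt(offset + 2) & 0xff,
        b3 = str.charCodeAt(offset + 3) & 0xff;

    var sign = (b3 >> 7) ? -1 : 1;
    var exponent = ((b3 & 0x7f) << 1) | (b2 >> 7);
    var mantissa = ((b2 & 0x7f) << 16) | (b1 << 8) | b0;

    if (exponent === 0) {
        return sign * mantissa * Math.pow(2, -149); // denormal
    }
    if (exponent === 255) {
        return mantissa ? NaN : sign * Infinity;
    }
    return sign * (1 + mantissa * Math.pow(2, -23)) * Math.pow(2, exponent - 127);
}
```

Multiply that by every int, short, and byte in a multi-megabyte BSP file and you can see why I call it a mess.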

Saturday, August 20, 2011

Another Teaser Time!

So, I figured I'd post another teaser image. I'm not 100% sure that this is going to work out, but it's certainly turning into a fun project! Hopefully I can have it ready for onGameStart in September, as I think it would make quite an impression!

[UPDATE: See the exciting conclusion!]

Wednesday, August 10, 2011

Hey, Chrome! Fix your texture uploads!

I've been poking and prodding at my RAGE demo as onGameStart draws nearer, trying to clean up the code and squeeze a bit more performance out of it. During this testing I've made a depressing observation:

Chrome's performance, in reference to speed of texture upload, sucks. And by "sucks", I mean that it is atrociously, painfully, unforgivably slow.

The most technically demanding aspect of the RAGE demo is the constant texture swapping that happens every few steps. We pre-allocate an array of 30-40 some 1024x1024 textures (exact number depends on the map) and as we progress along the path we identify upcoming textures that will be needed, download them, and push them into an unused texture, hopefully well before that texture is actually needed. The Webkit and Firefox nightlies can handle this fine (though we do drop a frame or two here or there) but Chrome 14 on my Mac basically breaks down and cries when asked to do this. You can see for yourself in my simple jsperf benchmark.

So, to put this in perspective: Chrome is able to squeak out a measly 22 texture uploads per second at 1024x1024. That's ~50ms per upload that your browser is blocked on that call. If you are shooting for a 60hz game (~16ms per frame) this means that uploading one texture to graphics memory just caused you to drop 3-4 frames. One texture, 3-4 frames lost. Ouch! For a medium that will be highly dependent on streaming, that hurts!
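The arithmetic in code form, for anyone who wants to plug in their own numbers:

```javascript
// Back-of-the-envelope: how many frames does a blocking call cost,
// given a fixed per-frame budget?
function framesDropped(blockMs, frameBudgetMs) {
    return Math.ceil(blockMs / frameBudgetMs);
}

// ~50ms per 1024x1024 gl.texImage2D call on Chrome 14, against the
// ~16ms budget of a 60hz game:
var cost = framesDropped(50, 16); // 4 frames lost to a single upload
```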

By comparison, Safari 5 gives me 62 uploads per second (ie: you may drop a frame here and there, but performance will stay pretty solid.) and Firefox 7 blasts out a whopping 188 uploads per second! That's ~8ms per upload, leaving lots of breathing room for rendering!

It's a real shame too, because in most other ways Chrome seems to be the most solid performer with WebGL. If I chop all the textures in my RAGE demo down to 512x512 I can run at a rock solid 60hz with no tearing or stuttering. (Though the texture upload is still painfully slow compared to the other browsers.)

Maybe the benchmarks look a lot better on Windows, but I don't have a machine to test that with right now. Regardless, this is something that the Chrome team really needs to smooth out. Pretty please?

(All timing is taken from my iMac)

[UPDATE: I wrote up a bug report on the issue, let's see if it goes anywhere. Similar reports have been added in the past, as seen in the comments on this post]

Monday, August 8, 2011

WebGL Frameworks are awesome, here's why I don't use them.

Tonight I was posed a very interesting question by @HunterLoftis on twitter:

What's your opinion on three.js, glge, etc? I haven't seen anything in that camp half as performant as your quake 3 fullscreen demo

I answered him with my < 140 char assessment of the situation:

I think they're great frameworks, but I question if a high-performance WebGL app can afford the overhead.

I feel like this is a subject that could use a bit more explanation, though, because it's not at all black and white.

Saturday, August 6, 2011

Getting cozy with GitHub

When I posted my code for the WebGL Rage demo, I was quite surprised at the number of comments that I got requesting that I put the code up on GitHub instead of Google Code. At the time I mentioned that I would do so, but then promptly got caught up in the chaos that accompanies relocating to a new job, and never got around to moving the code. (Sorry about that!)

A nice side effect of the delay, however, is that I've gotten quite comfortable with git in the meantime, since that's what I use at my new job! I'm still probably more of a fan of Mercurial due to the simplicity of its interface, but I've found myself using git even for hobby projects lately just for consistency. So, yeah, git doesn't seem nearly as scary to me now as it did when you all were first pestering me about it. :)

In any case, I've finally got the Rage code up on GitHub. And as an added bonus I've put the Quake 3 code up there too for easy access! I'll most likely put any future projects I do up on GitHub as well, just for the benefit of having them all in one place.


Monday, August 1, 2011

The somewhat depressing state of Object.create performance

I have recently been introduced to the niceties of the new ECMAScript 5 object model, which revolves around Object.create. The syntax is a bit wonky to those of us that have been javascripting for some time now, but once you get used to it there are some really great features at work here, not the least of which are actual properties, a much better inheritance model, tighter access control, and more! I'm not crazy about losing the ability to "new" my objects, and the funny little hacks that you need to put in place to simulate a constructor are turn-offs for me, but past that there's an awful lot to love here...

... except the performance.

I was curious about how the new model compared with the tried and true methods in terms of speed, so I whipped up a simple jsperf benchmark to gauge how different aspects of the two object methodologies performed. (And I'm not the first either) The results, frankly, were rather depressing.

On my iMac with Chrome 14 (dev channel) new Obj() is currently outperforming Object.create() by a factor of 10! Seriously! 10 times slower, and we've lost constructor functions along the way! Fortunately member access and function calls are virtually indistinguishable performance-wise once the objects are created, which is good (if expected). Sadly, however, utilizing Properties (one of the big bonuses of the new model) is painfully slow. My tests showed a Property to be 200 times slower than a good old setFoo/getFoo pair. The numbers are about the same on Safari, though Firefox showed some interesting variations. There wasn't a single platform where the new model could be called a clear performance winner though.

Of course, the feature is fairly new and hasn't undergone the rigorous optimization that some of the older methods have, so I would fully expect to see these numbers improve moving forward, but for now if you're performance conscious you'd do well to steer clear of Object.create.
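
To make the comparison concrete, here's a minimal sketch of the two object patterns the benchmark pits against each other. The Monster type and its members are purely illustrative, not taken from my benchmark:

```javascript
// Classic prototype pattern: a constructor function handles initialization.
function MonsterClassic(health) {
  this.health = health;
}
MonsterClassic.prototype.damage = function(amount) {
  this.health -= amount;
};

// ECMAScript 5 pattern: Object.create with a property descriptor map.
// There's no constructor, so a factory function fills that role.
var monsterProto = {
  damage: function(amount) {
    this.health -= amount;
  }
};
function createMonster(health) {
  return Object.create(monsterProto, {
    health: { value: health, writable: true }
  });
}

// Both behave identically once created; only the creation cost differs.
var a = new MonsterClassic(100);
var b = createMonster(100);
a.damage(25); // a.health and b.health both end up at 75
b.damage(25);
```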

(Oh, and despite drastically redesigning the Javascript object model we STILL couldn't be bothered to add operator overloading? Really?!?)

Wednesday, July 27, 2011

Dirty Full-Frame WebGL Performance Hack

So since WebGL first started appearing in browsers it's been people's natural instinct to create a 3D canvas that fills the entire browser window. Obviously this is because we like our 3D games to run full-screen (or as close to it as we can get). But you'll notice that I usually have my demos run in a window (usually 854x480). The reason for this has traditionally been because when WebGL was still gaining steam there was a severe performance penalty that was directly related to the size of your canvas. (See this early thread for a good idea of what I'm talking about)

Of course, things have improved on the browser side, and computers are always getting faster so this problem isn't as noticeable any more, but that doesn't mean it has disappeared. Netbooks/Chromebooks/etc are still popular, and don't have a lot of muscle. WebGL-capable mobile devices and tablets probably aren't too far off either. (N900 anyone?) For these environments, it would be great to preserve that fullscreen feel (especially on mobiles!) but still maintain a reasonable framerate (hopefully ~30fps or more.)

Since the dawn of 3D games we've had the ability to render at a lower res than your monitor is capable of and still have it fill the screen. Gamers are often willing to deal with some jagged edges to get smoother gameplay (but not many want to play in a window the size of a postage stamp.) So, is this an effect that we can emulate on the web? As it turns out, yes! I was playing with just such a situation a couple of days ago and stumbled on a great little hack.

The idea is simple: Create the WebGL canvas at a lower res (say, half width and height), and use CSS3 transforms to scale it to the full browser size. The code snippet is pretty simple:

// Create a WebGL canvas at half the window size
// (document.width is non-standard; window.innerWidth is the safer measure)
var canvas = document.getElementById("glCanvas");
canvas.width = window.innerWidth / 2;
canvas.height = window.innerHeight / 2;

And apply the following CSS style to the canvas element:

#glCanvas {
/* Anchor to the upper left */
position: absolute;
top: 0;
left: 0;

/* Scale out 2X from the corner */
-webkit-transform: scale3d(2.0, 2.0, 1.0);
-webkit-transform-origin: 0 0 0;
}

And done! Everything else works just like your standard WebGL app! In my experience, there is a small performance hit for the upscale (and yes, it interpolates), but it's nowhere near the performance hit of rendering everything at twice the resolution. On one slower machine I tried the fullscreen render was running at 6 fps, the half-sized render was going at 20 fps, and the half-size upscaled was getting about 16 fps. Not bad numbers overall!

As a proof of concept, I retrofitted the technique onto my Quake3 demo, which has a new variant here:

Full Screen Quake 3 (Touch enabled)

I've also taken the time to add some basic touch controls to the demo, since this technique will probably benefit mobile devices most as they gain WebGL capabilities.

  • One finger drag: Look around

  • Two Finger drag: Move/strafe

  • Three Finger tap: Jump

A small caveat for this demo is that the canvas will not scale to fill the window dynamically as you resize, but that wouldn't be too hard to add. Still, it's really cool to put your browser in fullscreen mode and see corner-to-corner WebGL running at a decent speed on most any device!
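
For what it's worth, the missing resize handling boils down to recomputing the canvas backing-store size whenever the window changes. A minimal sketch, with a hypothetical helper name and the browser-only wiring shown as comments:

```javascript
// Compute the backing-store size for a canvas rendered at 1/scale
// resolution but stretched by CSS to fill a window of the given size.
function scaledCanvasSize(windowWidth, windowHeight, scale) {
  return {
    width: Math.ceil(windowWidth / scale),
    height: Math.ceil(windowHeight / scale)
  };
}

// In the page, wire it to the resize event (requires a DOM and GL context):
// window.addEventListener("resize", function() {
//   var size = scaledCanvasSize(window.innerWidth, window.innerHeight, 2);
//   canvas.width = size.width;
//   canvas.height = size.height;
//   gl.viewport(0, 0, size.width, size.height);
//   // Don't forget to update the projection matrix's aspect ratio too.
// }, false);
```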

So now the fun part: What's the coolest device you can get this sucker to run on?

Saturday, May 21, 2011

WebGL RAGE source is up


(Edit: Code now resides on github, by popular demand. I won't be maintaining the Google Code version in the future.)

Okay, this is a bit of a different release for me, because I'm really not happy with the state of the code as is, but I'm putting it up anyway. Basically what it boils down to is that I promised to have the code up this week, and I don't want to break that promise. Beyond that, however, I have a new job at Motorola that I'm starting on Monday. (I haven't mentioned that on the blog yet, have I? I forget that not everyone on this site reads my tweets.) I leave for California tomorrow, and since there's no way in hell that I'm checking my iMac as luggage I'll be without a good personal development machine for something on the order of three weeks. So when it came down to a choice between releasing some ugly code now or releasing some prettier code a month from now, I figured you guys would forgive me if I just pushed what I had!

A few notes about the release:

You'll need to compile PVRtoJPG.cpp into an executable that can be called by the tex_parse python script. (I've included an OSX build.) The python script breaks an .iosTex file down into individual PVR files, and then translates those into the JPGs that you need for the walkthrough.

For the YouTube demo I was running off of a server on my local machine. This skirts around the problem of how long it takes to download the texture files. If you run this from a remote server currently you'll likely end up with the wrong texture most of the time. This could be solved by buffering image downloads much like a streaming video, where we start downloading before the initial render, time how long it takes for X number of images, then figure out how far ahead we need to have downloaded before we start moving.
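
As a sketch of that buffering math (the function name is mine, and a real implementation would keep re-sampling as network conditions change):

```javascript
// Given the measured download times (in ms) for the first few textures and
// the playback rate, estimate how many frames ahead the downloads need to
// stay before it's safe to start moving.
function estimateLeadFrames(sampleDownloadMs, playbackMsPerFrame) {
  var total = 0;
  for (var i = 0; i < sampleDownloadMs.length; i++) {
    total += sampleDownloadMs[i];
  }
  var avgDownload = total / sampleDownloadMs.length;
  // If a texture downloads slower than it's consumed, buffer enough
  // frames up front to cover the difference; always keep at least one.
  return Math.max(1, Math.ceil(avgDownload / playbackMsPerFrame));
}
```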

The number one slowdown in the demo is unquestionably the process of actually pushing textures into video memory (gl.texImage2D calls). There's very little that can be done to mitigate that (use smaller textures, maybe?) but things could be helped if I restricted it to at most one gl.texImage2D call per frame. Right now they have the potential to come in big batches.
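
Here's a rough sketch of the kind of throttle I mean. TextureUploadQueue is a hypothetical name, and uploadFn would wrap the actual gl.texImage2D call:

```javascript
// Queue incoming textures and push at most one into video memory per
// rendered frame, so a big batch can't stall any single frame.
function TextureUploadQueue(uploadFn) {
  this.pending = [];
  this.uploadFn = uploadFn; // e.g. a closure around gl.texImage2D
}
TextureUploadQueue.prototype.enqueue = function(texture) {
  this.pending.push(texture);
};
// Call once per frame, before drawing.
TextureUploadQueue.prototype.drain = function() {
  if (this.pending.length > 0) {
    this.uploadFn(this.pending.shift());
  }
};
```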

As I said in my tech talk notes, it took me a while to figure out exactly what the texture structures were. As a result, the code sometimes refers to them as "offsets" and sometimes as "textures". Sorry for the confusion, I'll clean that up next time I get a chance.

If anyone has questions about the code I'll be happy to answer! And expect me to push out some improvements to the project once my personal life settles down a bit.

Monday, May 16, 2011

RAGE WebGL Tech Talk


I'm really blown away by the response to the video of my RAGE WebGL demo! It's been getting a lot of word of mouth on Twitter and it's been really fun to follow what people have been saying about it on sites like reddit. And most of what I've been hearing is positive too! Which is awesome... mostly...

..except I think people don't really understand what they're seeing. I'm getting a lot of credit for doing an awesome WebGL demo, and certainly I'm proud of it, but the fact is that the real work done on my part was reverse engineering the format. Once that was done the rendering was pretty trivial. And if the video looks awesome then ALL of the credit for that goes to id Software's incredible art and tools teams! The fact is, outside of some careful management of the textures this project pales in comparison to the complexity of, say, my Quake 3 demo. Of course, to a certain degree that was the point of this whole exercise...

Sunday, May 15, 2011

iOS RAGE rendered with WebGL

As promised, here's the new WebGL demo that I've been working on! I won't be posting a live version this time (watch the video to find out why), but I will be demoing it on stage at OnGameStart this September as part of my presentation.

I have several follow-up posts to this one that I'll be making soon to post the source code for this demo, describe the file format (at least the bits that I was able to figure out), and talk about some of the rendering techniques used here and why I feel this is a great format for WebGL.

UPDATE: I've now got the file format notes that I kept while building this demo available as a Google Doc.

UPDATE AGAIN!: Tech talk for this demo is up now.

Friday, May 6, 2011

Interleaved array basics

I got a question from Jon in the comments on the WebGL sandbox project yesterday:

Hi Brandon, Using your demo as a starting point, I try to display a pyramid, but this far I've only been able to see one of its four faces. If I use gl.LINES instead of gl.TRIANGLES, then I see only one triangle (i.e. on face). I'm also a bit confused by the way you mix texture coordinates into the vertArray. Can you explain how these coordinates get sorted out in the shader?

Honestly I don't know if I'm the best guy in the world to be explaining this, but I'll give it a try, since there seems to be remarkably little WebGL-specific information about this. Most tutorials prefer to keep the arrays separate for simplicity, but that's not optimal for performance. The concepts all work pretty much the same as OpenGL, but it's nice to see them put to use in the environment you are using. It requires some basic understanding of subjects that don't normally apply to Javascript, like counting bytes, but it's not too hard once you get the hang of it.
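
To set the stage, here's a minimal sketch of an interleaved position-plus-texture-coordinate layout. The data and attribute names are illustrative, and the gl calls are shown as comments since they need a live context:

```javascript
// Interleaved layout: [x, y, z, u, v] per vertex, all in one buffer.
// Each float is 4 bytes, so the stride is 5 * 4 = 20 bytes; positions
// start at byte offset 0 and texture coordinates at byte offset 12.
var vertArray = new Float32Array([
  //  x,    y,   z,    u,   v
  -1.0, -1.0, 0.0,  0.0, 0.0,
   1.0, -1.0, 0.0,  1.0, 0.0,
   0.0,  1.0, 0.0,  0.5, 1.0
]);

// At draw time the stride and offsets tell each shader attribute how to
// pull its values out of the shared buffer:
// gl.bindBuffer(gl.ARRAY_BUFFER, vertBuffer);
// gl.vertexAttribPointer(positionAttr, 3, gl.FLOAT, false, 20, 0);
// gl.vertexAttribPointer(texCoordAttr, 2, gl.FLOAT, false, 20, 12);
```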

Saturday, April 30, 2011

Announcement: onGameStart

I've been sitting on this for a little bit, but now my face is on the website so I guess there's no sense in waiting any longer. I've been asked to speak on my Quake 3 demo at onGameStart, an HTML5 game development conference in Warsaw, Poland, this September!

I'm still working on the exact subject matter of my talk, but it will most likely involve discussing the optimization challenges I faced while building the Quake 3 demo and things to consider when building a map or model format for use on the web. I'm also working on another surprise project that I want to be able to show off at the conference to contrast with the Quake Demo.

I'm extremely excited by this opportunity to mingle with the HTML5 game development community and share whatever I can about working in this exciting new environment! Hope to see you there!

Sunday, April 17, 2011

Teaser time

So I'm taking a bit of a break from my super-secret game project to do a super-secret WebGL project! :) I don't know if I'll be able to get this to a functioning point any time soon, but the following screenshot has me SUPER excited!

Hopefully I'll have more to show in another week's time or so. Wish me luck!

Update: There's now something a lot more interesting than dots to look at!

Wednesday, April 13, 2011

WebGL Starter Package

[Edit: Now with 100% less jQuery dependency!]

I've had some sudden urges to jump back into WebGL land again lately, and while gearing up for another project it struck me how much time I was wasting trying to copy one of my older projects, strip out all the project-specific stuff, and get down to a really basic starting point.

Realistically, that starting point is one of the hardest hurdles to jump for any graphically-based program, doubly so for 3D programs. There's just so very many silly little things that might go wrong!

  • Is the context and viewport set up properly?
  • Is your shader compiling?
  • Are your vertices right?
  • Are your indices right?
  • Are your matrices right?
  • Did you give the right strides and sizes to your vertex layout? 
  • Is your geometry rendering in front of the "camera"?
  • Is it rendering in a color other than the background color?
  • Are you certifiably insane yet?
I decided to save myself some grief and put together a quick and dirty WebGL page that I can use as a jumping-off point for future projects. The goals here were to start with something that was putting geometry on the screen and allowed me to move around the scene, nothing more. This was the result:

Like I said, very simple. Just enough geometry on screen to know that you're rendering properly and to give you a sense of space. This is certainly not meant to be the foundation of a complex demo, but it sure beats starting out with a blank page! Of course, while I built this for my own use I certainly hope that some other aspiring WebGL developer out there finds it useful, and to that end I've packaged it up in a convenient downloadable bundle:

A couple of quick notes, for those that end up using this: Most of the formats I work with are designed with Z as the "up" axis, so Y is the one that actually points "out" of the screen. I'm using requestAnimationFrame (with fallbacks to setTimeout) for the core animation loop, so hopefully that doesn't cause any issues. I'm also not setting anything like blend modes or geometry culling states, you're on your own there. The page relies on my glMatrix library (included) and also makes use of webgl-debug.js, though you can easily turn that one off.

If anyone finds this useful or uses it as the basis for their own projects I would love to hear about it! Also, if you have any suggestions on how to improve it send them my way! Happy coding!

Tuesday, April 12, 2011


On the suggestion of @mrdoob today I reworked the animation loops for my Quake 3 and Doom 3 demos to use requestAnimationFrame (if it's available). This won't really produce a visible difference for most people, but it should utilize the browser event loop more efficiently. Paul Irish gives a good explanation of it at his blog.

A side effect of this that may be of interest to other developers is the simple little jQuery plugin I wrote to support this functionality in a cross-platform manner while also providing a few perks to the user. You use it like so:

$('#canvas').requestAnimation(function(event) {
    // Draw frame here...
});

The "event" passed into the callback function contains the following values:

  • timestamp: Current timestamp, equivalent to new Date().getTime()

  • elapsed: Milliseconds elapsed since the animation started

  • frameTime: Milliseconds elapsed since the callback was last called

  • framesPerSecond: Rough count of the number of times the callback has been called over the last second. Only updates once per second.

If you wish to stop animating, return false from your callback.

I recognize that this may not meet everyone's needs, and probably is a little buggy at the moment, but it should provide a quick and easy way to do a basic animation loop in a way that plays well with your browser. If you have any suggestions for improving it let me know!
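
For anyone curious, the event bookkeeping behind those values can be sketched as a small stand-alone object (AnimationStats is a hypothetical name; the actual plugin wires this into requestAnimationFrame):

```javascript
// Tracks the per-frame values described above; timestamps are in ms.
function AnimationStats(startTime) {
  this.startTime = startTime;
  this.lastTime = startTime;
  this.frameCount = 0;
  this.framesPerSecond = 0;
  this.lastFpsUpdate = startTime;
}
// Call once per animation callback; returns the event object.
AnimationStats.prototype.frame = function(timestamp) {
  var event = {
    timestamp: timestamp,
    elapsed: timestamp - this.startTime,
    frameTime: timestamp - this.lastTime,
    framesPerSecond: this.framesPerSecond
  };
  this.lastTime = timestamp;
  this.frameCount++;
  // Only refresh the FPS figure once per second.
  if (timestamp - this.lastFpsUpdate >= 1000) {
    this.framesPerSecond = this.frameCount;
    this.frameCount = 0;
    this.lastFpsUpdate = timestamp;
  }
  return event;
};
```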

Hellknight demo works again

I've had something come up this last weekend that encouraged me to go back and clean up a couple of my Demos. Hopefully I'll have something more tangible to talk about in that regard soon, but the practical effect of it is that I got the Hellknight demo working again!

The problem was exceedingly stupid, and the only reason I hadn't solved it earlier is because frankly I had never bothered to look. When doing the update to move everything over to the new array models I apparently started pushing the mesh indices in as a Uint8Array, then later called drawElements with UNSIGNED_SHORT (which, of course, is 16 bit). Obviously these two don't get along very well. I changed over to a Uint16Array and everything worked again! No one to blame but myself on that one. :)
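
For reference, the rule is simply that the typed array holding your indices and the type passed to drawElements must describe the same element size (buffer setup shown as comments, since it needs a live context):

```javascript
// Uint8Array  pairs with gl.UNSIGNED_BYTE  (1 byte per index)
// Uint16Array pairs with gl.UNSIGNED_SHORT (2 bytes per index)
var indices = new Uint16Array([0, 1, 2, 2, 1, 3]);

// gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
// gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
// gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
```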

Oh, and I had to un-invert the texture coordinate V component that I had previously been inverting. No idea why that happened. *shrug*

Monday, March 28, 2011

First impressions of Gingerbread for the Droid X

So I posted this review on the Motorola support forums (here), but apparently, even though I didn't give out any details about where to get the leak, Motorola still felt that me talking about Gingerbread (even if it was mostly positive) was unacceptable, and they locked down my post and removed the review. Fortunately, I saved a copy, and I'm reposting it here. Also, since this is my site now and not theirs, I have no qualms about linking to the leak pages:

Get your Gingerbread goodness over at My Droid World! Huge thanks to P3Droid and the crew for pulling this together for us!

I've spent the last half hour or so browsing through my newly Gingerbread-ed X, and I wanted to let the community here know my initial impressions. It's still early, so I probably won't cover everything but I'll update this thread as I find new things. For the sake of reference I was using Liberty 1.5 just before I updated, so I'm going from Blurless to full on Blur. (Quite the switch!)

Sunday, February 27, 2011

glMatrix 0.9.5 released

Quick post to say that I've just posted a new version of glMatrix up on the Google Code repository. This is primarily a bug fix release, but I've also squeaked in a few new functions at the suggestion of some users, including mat3.transpose, vec3.lerp, and quat4.slerp.

I know it's been a while since I've updated the library, but frankly I haven't had much motivation to do so lately. I haven't had a chance to do much with WebGL lately (much to my dismay!) and it didn't seem like anyone else was really using the library. A few days back, though, I received word that the awesome tutorials at LearningWebGL.com have been updated to use glMatrix! Needless to say, this provides a bit more motivation for me to keep the libraries up to date!

I'll be paying a bit more attention to items on the library issue page for the next little while, so please direct any bugs you find or feature requests you have over there. Thanks!

Tuesday, February 8, 2011

Why I love my Android, but bought my wife an iPhone anyway

Yes, I'm one of those crazy blokes that was up at 3AM (well, 1AM in my timezone) to pre-order an iPhone as soon as it came out on Verizon. But not for me, for my wife. Her old feature phone was dying and she had made certain wistful comments about how nice a smartphone would be, so when given the chance I jumped at it. It arrived yesterday, and so far she seems to be in love. Yay!

There was a funny moment when I gave it to her though, one that caught me off guard. After an initial moment of shock and excitement (it was a surprise) she turned to me and said: "But wait, why didn't you get one for you?"

I laughed in her face.

It was a totally involuntary reaction, and I felt bad for it, but in all honesty the idea of getting an iPhone for myself seemed a little absurd. I absolutely love my Droid X, and wouldn't trade it for anything Apple has to offer. At the same time, I would never buy my wife an Android phone (or at the very least not any of the ones Verizon offers at this point). I feel it's worth examining the reasons why:

  • I don't want to give my wife a phone crammed with bloatware. I don't want to try and explain why that stupid Verizon bookmark will never go away, or why VZ Navigator is stuck there, even though she'll never use it.
  • I don't want to ever be concerned about whether or not she's going to get the latest software and OS updates. I don't want to tell her that the cool new feature that they just announced may not be coming to her phone at all because the manufacturer is too lazy to update it.
  • I don't want her to deal with a buggy, bloated skin. I don't want to have to explain why my phone looks and acts different than her phone, which looks and acts different than her parents' phone, even though they're all on the same OS.
  • I don't want to EVER tell her that she needs to pull her battery to get her phone to respond again. I've needed to do that weekly in the past with my Droid X (before I started using custom ROMS).
  • I don't want her to worry about whether or not an app in the store will actually work on her phone. I don't want her to pay for something only to have it crash and burn when she tries to run it, because it was developed on Phone X and she has Phone Y.
  • Basically: I don't want to give her a phone that needs maintenance, by me or anyone else. It's a freaking PHONE! If she has to keep running to her geeky husband just to keep it running, it has failed in the most fundamental way possible.
Say what you want about walled gardens and draconian policy, you have to admit that Apple puts the rest of the mobile world to shame when it comes to making a smartphone that just plain works. There's a lot to be said for that, and I honestly believe that that is the core reason why they still sell like mad.

Of course, on the flipside there's MY phone, which comes with an entirely different set of qualifications:
  • I don't want to ask permission (much less pay) for the "privilege" of running a program that I built on my phone.
  • I don't want to be told that I can't run something just because I didn't get it through their "official channels." If I find a cool project online, who are you to tell me I can or can't try it?
  • I don't want to ship my phone off for a week because my battery died.
  • I don't want to pay through the nose just to get more storage.
  • I don't want to have to use some proprietary cable when I have several perfectly good micro USB cables lying around.
  • I don't want to be forced to use a particular music, email, browser, or messaging app just because the phone maker doesn't like competition.
  • I don't want to lose my widgets! Holy crap, how do you people live without them?
  • Basically: I view my phone as a small computer, and I want to treat it as such! I don't mind a bit of tweaking and fiddling in order to have more control over my device.
Of course, these two viewpoints (Absolute stability vs. absolute control) are somewhat opposing ideals, but it is certainly nice that there's enough choices out there to satisfy both parties. Granted, I think both sides could certainly be improved by trying to meet somewhere in the middle: There's no good reason why Apple can't free up their platform a little more, and there's no excuse for me ever needing to pop my battery because my OS locked up! Until we hit that point, though, it's a bit sad to say that the iPhone really is the only sane option for users that want a reliable device without dealing with a lot of crap from the carrier and manufacturer.

Android still has a long way to go in that regard.

A few other random notes before I go, after observing my wife with her phone:
  • No matter how much I try to delude myself into believing that my phone has a snappy UI, that illusion disappears the moment I interact with any iOS device. It's so much more responsive that it makes me want to cry. This is important, Google! Fix it!!!
  • It's unfortunate that many of the Android apps out there are mere shadows of their iOS counterparts. (Pandora comes to mind immediately.) Many Android apps look like amateur knockoffs in comparison, even when developed by the same company! And that's not even considering the multitude of apps that have no Android equivalent! (Netflix! I'm looking at you!)
  • It's completely baffling that Apple would do something as clumsy as sticking an actual temperature (73 deg) on the weather app icon and not make it update! The first big question my wife had about the phone was "Why isn't the temperature right?" It took me several minutes of googling to discover that the value shown is static. Same goes for the clock. Seriously?!?
  • I didn't realize how much difference two features really make in how I use my phone: The notification bar, and the app drawer. The fact that I can get notifications about anything on my phone in a spot that I can easily see (AND easily ignore) from pretty much anywhere is something I totally took for granted. Likewise, the ability to keep infrequently used apps hidden away in the app drawer while reserving my homescreens for the things I use all the time is absolutely invaluable.

Sunday, January 9, 2011

Cutting the Cable (TV)

My wife surprised me a little while ago by approaching ME with an idea that I had been toying with for a while but never suggested to her because I thought she'd hate it:

"Could we save money if we canceled our cable subscription and just did Netflix instead?"

Well, yes! Yes we could. With our special "we want to keep you as a customer so we'll lower your cost for 12 months" rate having just expired, a decent internet connection and standard cable package were now running us about $120 a month. Ouch. The plan was then to sign up for Netflix and Hulu Plus, get a Roku player, and see how it worked out. This was a bit of a leap of faith for us, since we'd never used Netflix before, and I'd only tried Hulu online. If we stuck with it, though, we'd be keeping an additional $45 or so each month, so it was well worth it!

Thursday, January 6, 2011

My Mac experience: Awesome, and occasionally awesomely frustrating.

I mentioned in my last post that I replaced my old (extra crispy) PC with a Mac. I've been using it pretty heavily for more than a week now, and wanted to post some of my initial impressions. Just FYI: I'm fairly new to this whole Apple thing, so some of my comments may be the result of ignorance on my part. If they are, feel free to correct me. (Note: "You just don't get it, it's better that way!" is NOT a valid form of correction.)

The great:

  • Out of box experience is FANTASTIC. Plug it in, turn it on. Done. Windows is physically incapable of providing an experience like this (though that's not always a bad thing. More on that later.)
  • NO CRAPWARE! The things that come pre-installed (iMovie, Garage Band, etc) are actually full, useable programs that I want to keep! What a concept! 
  • I love the program installation model. 90% of apps are "Drag into folder to install, drag to trash to uninstall." Beautiful! And it's only possible because...
  • No registry! I know that this can have its downsides (the Windows registry does address some issues) but most of the time it's just a badly maintained disaster waiting to happen.
  • Everything is... shiny! It's silly, but all those smooth little transitions make more of an impact than you know. Apple seems to be the only software company that really, truly appreciates how much a slick UI matters.
  • It's silent! At first I was upset by how loud the Magic Mouse was dragging across my desk, but when I switched back to my G500 I realized that it was just as loud. The problem was, apparently, that this was the first time I could actually hear my mouse movements over the cooling fans. Wow. The loudest part of my computer now is the external hard drive.
  • Drivers, or lack thereof. I plugged in my printer and it just worked (didn't even tell me it was "Installing new hardware."), same goes for pretty much everything else. The most surprising so far: NO additional drivers needed to sync, develop for, and debug on my Android phone! There was one notable exception, which I'll mention in a bit.
  • I can actually let my computer go to sleep now and not have to worry that it may not wake up again. That's really REALLY nice.
The quirky (things that are weird to a newbie, but not necessarily "bad"):
  • It kinda weirds me out how programs don't actually "close" when I hit that little X in the corner. Well, most of the time, anyway. Some programs do. The inconsistency takes a bit to get used to.
  • All those little shortcut symbols used in the menus are Greek to a new user. Command is easy to figure out (the same symbol is on the keyboard), and Shift can be guessed with a bit of thought, but that Option symbol is terrible. I only figured it out through careful experimentation. Same goes for Escape and Control.
  • The mouse ballistics feel all wrong to me. I simply can't get them to a comfortable point. Not to mention that the highest mouse movement speed is way too slow using the Magic Mouse. I'm glad I can crank it up with my G500.
  • Dual screen use is odd. It works okay, but to have the menu for all windows stuck on the main screen is inconvenient at best.
  • My nice 5.1 speaker system is useless now, since it was the "3 analog jack" type. (Yes, you can get converters. For the price you may as well get new speakers, though.) Not really Apple's problem, since they provide an optical out. Makes me sad though.
  • Had to buy a DisplayPort to VGA adapter after finding out that the DVI port on my old monitor apparently wasn't the right kind of DVI, rendering my DisplayPort to DVI converter useless. More ViewSonic's fault than Apple's, but annoying nonetheless.
  • The Magic Mouse served to very quickly remind me of how much I depend on the middle mouse button (wheel click). It's annoying that there's no way to simulate that with some gesture. (Three finger click? I don't know.) I want to love the Magic Mouse, but I can't use it without going crazy due to this.
The maddening:
  • The hardware is limited and expensive. As a graphics developer and a gamer I was forced to buy the second tier of iMac hardware JUST to get a decent graphics card. I love the computer, don't get me wrong, but there's no question that it cost more than an equivalent PC (sorry, it does. You can't convince me otherwise) and that the choices are very slim. I would LOVE the ability to get a 3Ghz machine with the 5670 GPU, but I can't. I know this is part of what allows Apple to make such a stable OS and machine, but it doesn't stop it from being a major frustration. As much as Windows gets dinged for compatibility issues, the fact that they don't have such an iron grip on the hardware they are compatible with means that ANYONE can find a Windows machine that suits their needs. The same cannot be said about Apple hardware. (Nope, not even if you consider the Mac Mini.) 
  • Finder is painfully primitive compared to Windows Explorer. There's some nice touches (like playing music files directly from the thumbnail) but way too many omissions: There's no obvious way to go up to your parent folder (had to look up a keyboard shortcut). Opening an image in Preview doesn't let you navigate between any other images in the same folder. No built in way to open a terminal window from your current folder. No way to manually enter a file path. No "New file here" option. Oh, and to rename a file you press "Enter" but to open it you press "Command+O"? Really? I'm sorry, but that's just stupid.
  • What, exactly, is that stupid little green "+" in the corner supposed to do? SOMETIMES, if I'm lucky, it maximizes a window like I wanted. Other times (like in Google Chrome! Augh!) it just makes the window "a little bigger" (for seemingly random values of bigger). Is it really that weird to want a consistent method for making my window fill my screen? Oh, and on the subject of resizing...
  • Sorry guys, but the lower right corner just ain't cutting it for me. There's really no reason why I shouldn't be able to grab ANY side or corner of the window and stretch it out. Yes, Windows did it first, but that doesn't mean it was a bad idea. Suck it up, admit they did it right, and make all of our lives a little easier.
  • With all the peripherals that just work, the fact that I had to go pull a random driver built by a hobby developer off of Google Code to get my Xbox 360 gamepad to work at all was a massive disappointment. I know it's Microsoft's product, and I know Apple isn't a big fan of games, but after everything else worked flawlessly this was a huge letdown, and there's really not much excuse for it.
  • The fanboys. There's a reason why people dub it "the cult of Mac." You guys are worse than the Linux zealots out there sometimes! I'm sure that someone out there is reading this post right now and preparing to explain to me in great detail why all of the flaws I listed above are defects in ME, not OSX. You know what: I just don't care. If I'm fighting with the machine to get it to do what I need it to, it has failed. Windows failed a lot. OSX fails less, but it still fails, and it's all the more annoying because of how much they managed to get right.
Despite all my gripes, however (and I didn't even list them all here), the fact remains that the thought of booting into my Windows partition now makes me cringe. I still work with Windows at my job every day, and I don't have a problem with that, but when I'm at home I stay in OSX as much as possible. Even with some real head-scratchers in the design department, it blows away anything Microsoft has to offer. I love my Mac, and I'm glad I made the switch.

Now can someone give me a working "Maximize" button?

Saturday, January 1, 2011

Back up your data! Now!

So, wow... this has been a crazy last week for me. I managed to luck into having the entire week off work, so I was looking forward to a nice break that I could spend working on my game project (which I mentioned a couple of posts back), trying to get it ready for the Mozilla Game On competition.

[Quick side note here: Apparently Mozilla decided to drop WebSocket support in Firefox 4 due to security concerns with untrustworthy proxies, something I utterly fail to understand as a WebSocket-specific concern. If your proxy is lying to you, aren't you pretty much screwed anyway? Whatever the reason, though, it put a serious cramp in my entry, because I relied heavily on WebSockets to get performance that didn't, you know, suck. I was rather annoyed about the whole thing, but that's not much of a concern for me now...]

Before I jumped into my little coding spree, however, I did a little post-Christmas shopping with some gift cards I received. I used one of these to purchase a new hard drive, since I was running a bit low on space on one of mine. Took the new drive home that night, plugged it in (SATA, so the plugging part was ridiculously simple) and flipped the computer on...

...and all hell broke loose.