2012 Comes to a Close

I’ve had a lot of false starts with writing posts lately. Writer’s block seems to have gotten the best of me, but I figured I’d do a post to reflect on the past 12 months.

The hand of a fellow runner
Photo by fejsez

This year I turned 30, which makes me feel a little strange. I’m no longer the young guy. Up until two years ago I was always the youngest guy on my team at work; now I’m the oldest (though in fairness, my current team is only 3 people).

I also became a dad this year (2012-12-09), which is pretty cool. I won’t bore you with any mushy revelations or talk about how it’s changed me – I honestly still feel like the same person. However, it is amazing to look over at the little guy and know that he’s got half my DNA. It’s also fun to wonder what kind of person he’ll be. Hopefully I can steer him in the right direction and help him become the best person that he can be.

PHP

I learned a lot about web development this year, both on my own – by experimenting with new HTML5 APIs and browser tools – and at work. To speak in general terms, at work I’m a developer on two web applications: one built on Java’s Spring framework and one built on PHP. Working with the two side by side, I’ve slowly grown to hate Java web development – it’s slow for iterating on changes, lends itself to gigantic class hierarchies, and seems to make trivial tasks harder than they should be. Even though it has its flaws, PHP is actually pretty fun to develop in. It also has great documentation, and it seems like there’s a blog post or forum question on anything you’d possibly want to do with it.

I still prefer the front-end though, and I’m still not sure I want to rely on PHP every time I do something on the back-end. One of my goals for next year is to take a serious look at Node.js, Python, and Ruby, and to do a for-fun project in each. I’ve actually started this already, but got a little sidetracked when the baby showed up.

Internet Archive Fund Raiser

The Internet Archive is doing a donation drive with a 3-to-1 match. The archive was of great help to this site a few years ago when I underwent the one-two punch of my hard drive crashing and then my old web host deleting my site. Thanks to their Wayback Machine, I was able to recover a lot of files (in fact, it’s the reason the VB sections of this site are still up). I threw a couple of bucks their way out of appreciation, and I figured I’d pass on the link to anyone else who was interested in helping them out. They’re almost at their goal of raising $150k.

Book Review: “JavaScript: The Definitive Guide”

I felt a little nerdy asking for this for Christmas, but it was a worthwhile read

The web apps I write for this site are written in JavaScript, and after landing a web developer job two years ago, I’ve focused more on getting better at everything web related – through reading blogs, writing apps, and reading books.

As far as books were concerned, there had been one which had consistently caught my eye, but which I’d kept resisting due to its size. I have a short attention span and I was worried I wouldn’t finish it. I also tried to foolishly convince myself that I probably already knew most of what it covered – after finishing JavaScript: The Good Parts and a couple other short books, I felt I had a pretty good handle on the language. What else could there really be to know? But temptation got the best of me, and I’m glad it did, because it’s a great book and I learned a ton.

JavaScript: The Definitive Guide, by David Flanagan, is 1078 pages* of densely packed information on the JavaScript programming language. It’s not filled with fluff and it covers an amazing amount of ground. In truth, it’s really 3 books in one: a book on the core JavaScript language, a book on client-side JavaScript development, and a reference book for client-side and core development. It’s written for people familiar with programming who want to gain an in-depth understanding of everything they can do with JavaScript.

An experiment in retaining information

I didn’t want to read this book and then 6 months later not remember anything I’d read. I had a friend who’d read it and not gotten much out of it, but I believed that may have been because of information overload. Leisurely reading technical books can be fun, but the information isn’t going to stick unless you use it or discuss it. So I decided to try an experiment – after each chapter, I was going to write up a set of notes on what I found interesting in that chapter. That would force me to go back over the information and help me document what I may want to go back to later on.

I did this on the WordPress blog Reading the Rhino JS Book. It’s really just a collection of notes, but it’s a great way for me to go back and go “oh yeah, this is what I found interesting in this chapter”. In the beginning I was really excited and felt it was a great way to read a technical book – if you’re going to invest the time in reading a large book, you might as well invest the time to try and retain the information. However, I’d be lying if I said I didn’t get tired of writing up notes on each chapter. So my feelings are mixed. I do believe it helped in organizing what I learned and found interesting, but it was also a bit of a pain towards the end. I haven’t yet decided if I’ll take notes on each chapter of the next programming book I read, but I can say it was useful to do so in this case.

Who should read this book?

I would not recommend this book for people who are new to JavaScript. It does contain almost everything you need to know, but it’s not really written for the newbie. When you’re new you want to get up and running quickly, and you want a brief introduction to the tool set you have at hand. For that, JavaScript: The Good Parts is probably the better choice.

If you do front-end web development professionally, or you just really like writing web apps, this book is worth picking up. It’s written to be readable and it thoroughly covers the current set of web technologies you have at your fingertips with JavaScript. Even if you feel like you have a good handle on things, this book does a good job of filling in the gaps. As an example, I knew JavaScript did automatic semicolon insertion if you forgot to include semicolons**, but I wasn’t sure how this worked. It turns out that the ECMAScript spec has a clearly defined algorithm for this, and knowing how it works gives some insight into using the language.
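
To give a quick taste of why the details matter (this is a standard example of the rule, not one taken from the book): a line break after return ends the statement, so the function below returns undefined instead of the object.

function getStatus() {
    return    // the line break ends the statement here, as if you'd typed "return;"
    {
        status: true
    };
}

console.log(getStatus()); // undefined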

Final Thoughts

This is probably now my favorite book on JavaScript. A couple weeks ago I was openly pondering where I wanted to go web-development-wise, and I think, for now, I’m going to focus on client-side development. This doesn’t mean I’m going to ignore back-end stuff – I do a lot of PHP at work, and there are other back-end technologies like Ruby, Python, and Node which look interesting – but the client side looks like it has the most utility for app developers. It’s nice to be able to quickly write a single page app, upload it, and have anyone be able to use it.

* 716 pages if you don’t include the reference sections.
** Technically the interpreter doesn’t insert semicolons, it just treats a line break as the end of a statement in certain situations. Thus it’s sort of simulating semicolon insertion.

Trying out Chrome’s App Store for the Web after falling down the W3C Widget Rabbit Hole

In my last post I discussed offline storage for web apps and mentioned that I wasn’t sure how users were supposed to know that certain apps worked offline. Google Chrome has come up with a neat solution for this that allows users to “install” web apps via their Chrome Web Store. I’d played around with its previous incarnation, when it was the Chrome Extension Gallery, but I hadn’t really been back since it opened up to web apps and changed its name.

Installed App

The Chrome Web Store is interesting because you can submit both packaged apps and hosted apps. A hosted app is simply a zip file containing a metadata file and some icons. The metadata file gives general information, like the app’s URL and whether or not it works offline.
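
For illustration, the metadata file (manifest.json) for a hosted app looks roughly like this – the field names follow Chrome’s app manifest format as of this writing, but the values here are made up:

{
  "name": "My Web App",
  "version": "1.0",
  "description": "An app that also works offline.",
  "icons": { "128": "icon_128.png" },
  "app": {
    "urls": [ "http://example.com/myapp/" ],
    "launch": { "web_url": "http://example.com/myapp/" }
  },
  "offline_enabled": true
}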

Installing a hosted app is sort of a fancy way of bookmarking it, but the fact that Chrome tells you whether it works offline or not is a big plus. This solves the issue of letting users know that an app’s URL will work even when they don’t have an internet connection.

This is exactly the kind of thing I was looking for! To try it out, I decided to create hosted apps of my Text to ASCII Art Generator and JavaScript Snake Game:

Both work offline, and I plan to create hosted apps of some of my other programs too, but for now I’m just testing the waters. Firefox will also soon launch a web app store called the Mozilla Marketplace. Its setup will be similar to Chrome’s, though it’ll be slightly more feature rich, allowing you to run web apps like desktop programs. This feature is similar to the idea behind W3C Widgets, though W3C Widgets seem to have fallen out of favor.

As an aside, before I ended up at Chrome’s Web Store, I took a detour through the world of W3C Widgets. I hadn’t heard of them before, and unless the tides change, I may not hear much about them again. However, I figured I’d write up what I learned, since they seem to be one of the least popular pieces in the HTML5 puzzle.

W3C Widgets?

While researching offline web storage, I was bothered by the fact that normal users would have no idea that certain URLs still worked when they were offline. With this concern, I ended up emailing the W3C’s fixing-appcache mailing list, because I could only find a small bit of discussion on the topic, and that discussion didn’t come to a consensus about what to do. The response I got was pretty interesting, and made me realize there were areas of the HTML5 world that I hadn’t seen:

This is essentially what the Widgets spec is supposed to achieve.  See
here for an example of a widget configuration:

http://www.w3.org/TR/widgets/#example-configuration-document

It makes sense to split packaging a website from providing offline
storage/caching technologies, since the two are different solutions to
different problems.  It's true that there is some crossover,
especially if a widget includes content in its package.  But the
widgets spec has its own problems, not least of which is a terrible
name.

Letting users know that your site will work when offline doesn't seem
like a terribly difficult problem to solve at the application level,
to be honest. I think as a developer I'd rather browser vendors spend
their time on other stuff :-).  But it's certainly a fair point, and
does need to be considered as part of the UX of your app.

After thinking about it, he was right about separating out the packaging of an app from its offline storage capability. However, what was this widgets spec he mentioned? I’d never heard of W3C Widgets, so I decided to do some research.

W3C Widgets are essentially packaged web apps. In fact, at one point, ppk proposed that they be put under the buzzword “HTML5 apps”. The W3C Widget spec describes a format similar to Chrome extensions. You have a zip file of a directory structure containing HTML, CSS and JavaScript files, with a config.xml file that explains the basic metadata of the widget. Widgets can be embedded within a webpage or run from a user’s desktop. Not exactly what I was looking for, but it sounds kind of neat. Why had I never heard of this before? I did some more googling, and the vast majority of the blogs and news regarding W3C Widgets seemed to be from 2009 or 2010. I could find nothing on Chrome, Firefox or IE implementing the spec – even though as of September 27, 2011, it became an official W3C recommendation.
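
For reference, a minimal config.xml looks something like this (modeled on the example configuration document in the W3C spec; the values are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!-- modeled on the W3C spec's example; values are placeholders -->
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/exampleWidget"
        version="1.0">
  <name>Example Widget</name>
  <description>A sample widget.</description>
  <icon src="icon.png"/>
  <content src="index.html"/>
</widget>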

I did, however, find a handful of projects that used the spec. While there wasn’t a lot of buzz around W3C Widgets, there appeared to be enough going on with them that they weren’t in danger of falling into the abyss, at least not yet. Below is a list of the projects I found, with some notes I made on them.

  • Opera’s Web Browser – Opera appears to have fully embraced W3C Widgets and is using the spec for Opera extensions. They’ve also set the browser up so that it will run widgets that you launch from your desktop. So it’s basically allowing you to create native-ish HTML5 apps. I find it peculiar that no other vendor has embraced W3C Widgets like Opera has.
  • Apache Wookie – This appears to be a widget environment, and it reminds me of the OZone Widget Framework – though Apache Wookie runs W3C Widgets while OZone runs OZone widgets.
  • Phonegap – This framework allows developers to create HTML5 apps that can be distributed in mobile phone app stores. Its app configuration is based on the W3C Widget spec.
  • Wholesale Applications Community (WAC) – “An allegiance of telecommunications firms and others working together to create a common mobile platform” [ref]. W3C Widgets are a part of what they’re doing. However, according to this article, they ended up failing.

So each of these projects is kind of cool, but none of them addressed the original problem I had – how the hell are users supposed to know they have web apps installed on their computers that will work when they’re offline? And if W3C Widgets are the answer, why aren’t the major vendors adopting them? Eventually some nice people directed me towards the Chrome Web Store, and it was also nice to find out that Firefox will soon have its own web store too. Hopefully IE10 will do something similar when it comes out.

Even though I didn’t find them useful, it was interesting to learn about the existence of W3C Widgets. I’m not sure why they seem to have failed, as they’re essentially a way of packaging HTML5 apps. Though then again, I do think Chrome’s manifest is a lot nicer than the config.xml that the W3C Widget spec detailed.

Notes on Offline Web Storage for TAAG

After you’ve visited it at least once, the Text to ASCII Art Generator (TAAG) will now work even when you don’t have an internet connection. I had a user request this, so I figured I’d add it in for them and anyone else out there who may want to use the app offline.

Offline web applications are one of the new capabilities being introduced with HTML5. An offline web application has its files downloaded and permanently installed in a web browser’s application cache. This allows users to go to an app’s URL and have it work even when they aren’t online. It’s a smart idea, but it has some interesting pros and cons. In this entry I’ll detail what I did to set TAAG up to work offline, go over some issues I ran into, and provide some notes for anyone looking to create an offline app.

The first step in creating an offline web application is to create a manifest file which tells the browser which files it should store in its application cache, which files it should try to get from the network, and, if offline, which files it should use in place of certain standard files. Any HTML file that references this manifest will be implicitly added to the CACHE section of the manifest, and after a user’s first visit, all cached data will come from the application cache, even if the user F5’s the page. The application cache is only ever updated when the manifest file is updated, and even then, it will take 2 visits for a user to see any updates. This is because when a user visits the app’s URL, the application will be read from the cache, the browser will then see that the manifest has been updated, and it will then update the application cache for the user’s next visit. These idiosyncrasies are a bit to take in, but once you know what’s going on it’s not that bad. Here’s what a sample manifest file would look like:

CACHE MANIFEST
# This is a comment.
# The cache is only ever refreshed when changed.
# Last updated: 2012.08.18

# The CACHE section indicates what we want cached.
# The HTML file linking to this manifest is auto-added to this list.
CACHE:
one.jpg
two.jpg
jquery.js?12345

# The NETWORK section indicates which non-cached files should be 
# obtained from the network. This will almost always be "*".
NETWORK:
*

# The fallback section allows us to setup "fallback" pages to use 
# when offline.
FALLBACK:
page2.htm page2-is-offline.htm

An HTML file linking to this manifest might look like this:

<html manifest="my.appcache">
<head><title>Test</title><meta charset="utf-8"></head>
<body>
<div>Some text</div>
<img src="one.jpg"/>
<img src="two.jpg"/>
<script src="jquery.js?12345"></script>
</body>
</html>

And one last gotcha – to keep the manifest file from being cached by the browser, and to ensure the manifest is served with the correct MIME type (an incorrect MIME type will keep the manifest from being recognized), we need to update the web server’s configuration. For Apache, we’d add the following rules to our .htaccess file:

AddType text/cache-manifest .appcache

<Files *.appcache>
    ExpiresActive On
    ExpiresDefault "access"
</Files>

That should do it! With this technique your users will now be able to use your apps even when they don’t have an internet connection. I find this cool because it makes web apps more useful and puts them a step closer to supplanting Desktop apps. However, there are unfortunately a number of issues that have held offline web apps back, and they’re worth mentioning.

Issues with application caching

A lot of people aren’t happy with how application caching currently works. The W3C has set up a working group that’s discussing possible improvements, though hopefully the main components of the current spec will continue to work. That said, I do feel like some changes are needed. Below I’ll list my biggest concerns.

  • How would users know that certain URLs still work when they’re offline?

    I don’t understand how this isn’t the most important issue for offline applications. Without someone manually telling you that you can use a particular application while offline, I don’t see how someone would think they could browse to example.com/webapp and expect it to work when they didn’t have an internet connection. In their current form, I only see offline applications as useful to techies or people in the know.

    It’d be nice if offline applications could provide an icon, name, and description, and then browsers could show users which offline applications they had installed. It could be argued that this is a browser feature, but bookmarks are a browser feature too, and all browsers allow webpages to provide a favicon for bookmarking purposes. Whatever the case, there should be a way to let users know certain web pages/apps work offline.

    I’ve only seen this discussed once, and the people involved didn’t come to any conclusions, so I’m not too hopeful on this one. However, I think it’s probably the most important point, since no users = no usage.

    2012.09.01 Edit: Someone alerted me that there is a Widgets spec currently in the works that’s supposed to solve this issue. I haven’t taken a good look through it yet, but it’s good to know there are attempts to solve this problem.

  • Fresh updates aren’t served immediately / F5 doesn’t break the cache / the page that links to the manifest has to be cached.

    This seems to be the most popular feature request, though after reading spec author Hixie’s take on the issue, I sort of agree that many of the use cases for using the application cache in this way are very similar to straight-up HTTP caching. Though it would be nice if an app used the HTTP cache while online and that cache synced with the application cache, which would then be used when a user was offline.

    Around two months ago, a new feature was added to the spec in an attempt to address this. It takes the form of a new setting that gets added to the manifest:

    SETTINGS:
    prefer-online
    

    It’s explained here, though there’s some debate about whether it’s the right solution, and I was unable to get it to work in Chrome or the nightly Firefox build, so I’m not sure how it works or if it actually solves the problem.

Other Notes

  • Chrome was easier to work with than Firefox

    Chrome will print to the console as it’s loading the application into the cache, so you know instantly if there’s a problem or not. Firefox didn’t do this, which made it a bit more annoying to work with.

  • Listing installed applications

    • For Chrome, browse to the URL “chrome://appcache-internals/”
    • For Firefox, in the menu, go to: “Tools>Options>Advanced>Network” and see the section on “Offline Web Content and User Data”.

  • In the manifest, URLs need to be URL-encoded and query strings need to be included

    Kind of obvious, but worth mentioning.

  • The popular iframe trick doesn’t work in Firefox

    There is a popular trick that aims to allow an HTML page to use the application cache without being cached itself. It attempts this by having the page include a hidden iframe whose document links to the manifest. From my own tests, this trick works in Chrome but not in Firefox.

    2012.09.10 Update: After updating to Firefox 15, it seems to work now.
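
    For reference, the trick itself looks roughly like this (my own minimal sketch of it, reusing the my.appcache manifest from above):

    <!-- index.html: no manifest attribute, so this page isn't cached itself -->
    <html>
    <head><title>App</title></head>
    <body>
    <div>Some text</div>
    <!-- the iframe's document declares the manifest, pulling the listed
         resources into the application cache -->
    <iframe src="cache.html" style="display:none"></iframe>
    </body>
    </html>

    <!-- cache.html: an empty page whose only job is to reference the manifest -->
    <html manifest="my.appcache">
    <head><title>cache</title></head>
    <body></body>
    </html>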

  • JavaScript API

    I didn’t go into it here, but there’s also a JavaScript API for the application cache. See the “Additional Resources” section below for more information.
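
    As a quick taste, here’s a minimal sketch of it (this uses the standard window.applicationCache interface; the handler details are my own):

    var appCache = window.applicationCache;

    // Fires once a new version of the cache has finished downloading.
    appCache.addEventListener('updateready', function() {
        if (appCache.status === appCache.UPDATEREADY) {
            appCache.swapCache();      // swap in the newly downloaded cache
            window.location.reload();  // reload so the user sees the update now
        }
    }, false);

    // You can also manually ask the browser to re-check the manifest:
    // appCache.update();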

  • What would happen if every website wanted to work offline?

    The space on a user’s computer is finite. I’m not sure how offline storage will work if offline apps end up becoming very popular in the future. Maybe the solution is to focus on installable apps which have the option of working offline – and users can pick and choose what they want installed?

Additional Resources

Tone Playing Experiment with HTML5’s Web Audio API

tl;dr: Tone Player (works in Google Chrome and Safari 6+ only)

One of my favorite parts of the QBasic class I had in high school was discovering how to play sounds. It introduced a whole new layer to experiment with.

Until recently, adding custom sounds to a web application has not been a simple task. Thankfully, the W3C is working on a new high-level audio specification called the Web Audio API that allows you to easily create and play sounds. WebKit-based browsers have pushed out an implementation of this spec (Chrome and Safari 6+), and hopefully other browsers will follow suit soon. Previously, Mozilla had been working on a lower-level audio API called the Audio Data API, but it is now deprecated.

After seeing a few fancy demos of the Web Audio API, I decided to do some digging to see what all it could do. Sadly, there aren’t a lot of great up-to-date tutorials on how to use it. A couple of exceptions I found were the getting started tutorial at creativejs.com and a handful of nice blog posts by Stuart Memo. However, I was also surprised to find that the Web Audio W3C specification is actually very readable.

So with this fancy new API, how hard is it to create and play a simple tone? It’s pretty easy actually. The below code gives one example.

var context = new webkitAudioContext(), // WebKit browsers only
    oscillator = context.createOscillator();

oscillator.type = 0; // sine wave
oscillator.frequency.value = 2000;
oscillator.connect(context.destination);
oscillator.noteOn && oscillator.noteOn(0);

You can also play more than one frequency at once. However, I found that playing too many tones together can crash the browser, so this is something you have to be careful about, and there may be a better way to do this.

var context = new webkitAudioContext(),
    oscillators = [], num = 5, ii;

for (ii = 0; ii < num; ii++) {
    oscillators[ii] = context.createOscillator();
    oscillators[ii].type = 0;
    oscillators[ii].frequency.value = 2000 + ii * 200;
}

var connectIt = function(ii) {
    oscillators[ii].connect(context.destination);
    oscillators[ii].noteOn && oscillators[ii].noteOn(0);
    ii++;
    if (ii < num) {
        setTimeout(function() {connectIt(ii);}, 1000);
    } else {
        setTimeout(function() {
            for (var jj = 0; jj < num; jj++) {
                oscillators[jj].disconnect();
            }
        }, 1000);
    }
};

connectIt(0);

Oh, and you'll notice I check for the noteOn property before trying to use it. The spec says you need to use it, while Chrome tells me it doesn't exist. This appears to be a possible bug in Chrome's implementation.

With the basics down, I decided to create a simple Tone Player. It doesn't add much to what I've already showed you, but it has a few extra features. And in all honesty, it was made more to serve as a hearing test for myself. A couple years ago I had a hearing test done which indicated I had hearing issues, especially in my left ear. Since then I've been trying to take care of my hearing as best as possible. This tone generator is fascinating for me because I found that I can't hear certain frequencies (near 12000Hz) unless I really jack up the volume, but I can hear others fine all the way up into the 20k's. Hrm, probably time to schedule another appointment.

Anyway, enough about my hearing issues. Even though the API currently only works in a couple of browsers, it's a lot of fun, and I've hardly even dared to scratch the surface with this entry. I recommend checking it out if you have a few moments. If you're looking for a less experimental, more cross-browser way to play sound, I'd recommend checking out one of the many JavaScript libraries that have popped up to play audio. I'll link a few below for those who are interested.

WordPress Hacking

A couple weeks ago I took an afternoon to read up on how to create a WordPress theme, and was surprised to learn how much there was to it. Re-doing this site’s theme has been on my TODO list for about 3 years though, and I felt it was important to give the site a fresh coat of paint.

From a user’s perspective, I’ve always loved WordPress. It’s intuitive, has a great interface, and has all of the blogging features I could want. However, under the hood, I’d heard it was a mess. After poking around a bit, I didn’t really find anything that was discouraging, but I did find myself spending way too much time researching how to make minor adjustments. So rather than toil endlessly, I decided to take a different approach: I took the popular Twenty Ten theme and made a bunch of modifications to it (most notably mixing in some elements from the Responsive theme). This was actually pretty painless, and I’ll probably continue to make more modifications. If you’re thinking about creating a theme, it’s worth reading up on how to do it, but using an existing theme as a launching pad will make your life a lot easier. Anyway, I hope the new design is easier on the eyes – please let me know if you have any issues!

Thoughts on PHP

WordPress is powered by PHP, and as of late, I’ve noticed a handful of articles deriding the language, and another handful vigorously defending it. I don’t like to consider myself a language-specific programmer, but I do a fair amount of PHP development at work, and also find myself reaching for it when I do non-work related projects.

As of late I’ve been wondering if I should dive deeper into PHP, or instead try to look into getting good at Python or Ruby. The inelegance and quirks of PHP are a big turn off, but the fact that it’s used for so much (WordPress, Wikimedia, and lots of other popular software), and is so convenient to write and deploy, makes quitting it hard to do. I’ve sort of been in this weird stalemate about where I want to go. So while it’s not my favorite language, I don’t dislike it enough to throw the baby out with the bath water – for now at least.

Showdown Reboot

Comparing only Facebook-related data wasn’t that much fun, so I decided to soup up the Social Media Showdown!* app to take into account sharing information from Google+, Pinterest, Twitter, StumbleUpon, Delicious, Digg, and LinkedIn. It’s now a lot more entertaining to see which side gets which trophies:

Thankfully all of the services mentioned have public APIs which make getting the share data a piece of cake. This handy Share Counts page details how to do it for all of the major APIs.
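
As an example, here’s roughly how you can grab the tweet count for a URL with jQuery (this uses the public JSONP endpoint Twitter exposes at the moment – it’s unofficial, so it could change; the target URL here is just a placeholder):

$.ajax({
    url: 'http://urls.api.twitter.com/1/urls/count.json',
    data: { url: 'http://example.com/' }, // the page you want the count for
    dataType: 'jsonp', // jQuery adds the callback parameter for us
    success: function(data) {
        // the response looks like: {"count": 123, "url": "http://example.com/"}
        console.log('Tweet count: ' + data.count);
    }
});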

The only service I didn’t use the API for was Google+, and that’s because I sadly didn’t find that API page until I was about half done with the reboot. Until recently, there was no way to read the number of +1’s a site had gotten, so clever people had found ways to parse that information out of the source code. I took this approach, but will go back and do it the more correct way later on.

Other than pitting sites against each other, this kind of data can be useful for creating your own share buttons or tracking the popularity of a website. It could also probably be used to answer some interesting questions – Are G+ users really more technical, or do they like cat pictures just as much as Facebook users? Are the different services used to share different kinds of information, or are they strict competitors? Does popularity on a particular service correlate with anything? Though for now I don’t see myself doing much else with these APIs.

The only other major change to the app was that I removed my custom built JavaScript library and replaced it with jQuery. For personal projects, I’ve tried to only use JS libraries for GUI components, since I was worried I’d be missing something if I relied too much on a library. However, I now feel pretty comfortable with the guts of JavaScript and the DOM, so writing my own utility functions seems a bit silly.

Over time I think jQuery has won out as my favorite all-purpose library. Initially YUI blew me away, and I still think it’s an awesome library, but I’ve come to dislike the verboseness of its syntax. Anyway, I hope you find the updates to the app entertaining. Let me know if you have any suggestions/feature requests/etc!

* Formerly known as “Website Showdown!”. I figured “Social Media Showdown!” made more sense.

TAAG Reboot

The HTML frameset tag is officially obsolete in HTML5. Since their inception, frames have drawn the ire of many web designers, though I’ve always had a soft spot for them. When I first learned how to use the frameset tag back in ’98, I thought it was the coolest thing ever. I was able to do all sorts of trickery with it, like creating an online MIDI player and storing data between page loads. When I heard people go on about how frames sucked, I ultimately knew they were right, but I felt like they were throwing the baby out with the bathwater.

Times have changed though, and as HTML has evolved, the niche framesets filled has been taken over by better technologies. Knowing their fate was sealed, I decided to overhaul the one app I had here that was still using them – the Text to ASCII Art Generator (TAAG)*. The new version was written from scratch, but with the aim of keeping the same look-and-feel. This time around I also wanted to implement the full FIGfont spec, which surprisingly has a lot to it.

The main part of the spec that I didn’t implement last time was the vertical layout rules. These rules allow font authors to “smush” letters together vertically. Below you can see an example done in the “Standard” font.

  _____ _   _ _   _ 
 |  ___| | | | \ | |
 | |_  | | | |  \| |
 |  _| | |_| | |\  |
 |_/ _|_\___/|_|_\_|
  | |_| | | | '_ \  
  |  _| |_| | | | | 
  |_|  \__,_|_| |_| 

This can be fun to play around with, but it’s not always what you want. Therefore, I decided to also allow users to control which horizontal and vertical layouts they wanted a font to use. Exploring what layout looks best is actually kind of fun.

The other main part of the spec I left out last time was the loading of non-standard letters. Most fonts don’t implement characters outside the normal ASCII range, but the FIGlet spec allows authors to define whatever Unicode characters they want, and some fonts implement quite a few extra characters. For fun, I went ahead and added Unicode character #3232 to a few of the fonts, so that people could make a FIGlet look of disapproval:

   _____)      _____)
  /_ ___/     /_ ___/
  / _ \       / _ \  
 | (_) |     | (_) | 
  \___/ _____ \___/  
       |_____|       

I’m probably too easily amused. Anyway, there are quite a few other updates, but if you’re interested you can read about them here.

I’ve also decided to open source the JavaScript FIGlet driver I wrote for this app. There were no other open source full-spec implementations of FIGlet that I could find, so I figured it could be useful. If you’re interested, it can be found here or here.
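
If you’re curious, using the driver looks something like this (consider it a sketch – the exact option names may shift as the library evolves):

figlet.text('FUN', {
    font: 'Standard',            // which FIGfont to render with
    horizontalLayout: 'default', // horizontal smushing rules
    verticalLayout: 'default'    // vertical smushing rules
}, function(err, text) {
    if (err) {
        console.log('something went wrong...');
        return;
    }
    console.log(text); // the rendered ASCII art
});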

If you have any suggestions for the app please let me know. Also, I haven’t forgotten about the Keyboard Layout Analyzer, so don’t think this reboot means I won’t be adding any more to that.

Oh, and as a final side note, it was interesting to learn that frames can be a drag on the loading of a page. The additional HTTP requests should make this obvious, but that combined with whatever else the browser has to do to accommodate them makes for a non-optimal load time. And interestingly enough, even though the new app has a heavier page weight (190k vs 114k), it loads much, much quicker – though I should say that frames are definitely not the only factor influencing this.

* Its former name was “Text Ascii Art Generator”. I noticed some people calling it “Text to ASCII Art Generator” and thought that was a better name.

Cracking MaGuS’s Fate Zero Encryption

I’m getting ready to upgrade my computer, and while going through some old files I stumbled across Fate Zero, the last released version of the infamous Fate-X application. The tool was popular way back in the late 90’s since it added a lot of extra functionality to AOL – some of which AOL was ok with, and some of which it wasn’t very fond of. It was created by two mysterious individuals known as MaGuS and FunGii. After Da Chronic (known for AOHell), MaGuS was probably the most widely known AOL hacker. Even though Fate-X 2.5 and 3.0 had a much bigger impact, Fate Zero was the most extensive in regards to features.

To maintain its status at the top of the heap, Fate Zero had to protect its external data, and this meant encrypting it so that other developers couldn’t snatch it up for their own progs. The prog scene of that time, however, is now long dead. Seeing these files today, I got curious. MaGuS was only 16 when he wrote Fate Zero. When I was 16, I knew almost nothing about encryption. It wasn’t until I was in college that I got a good exposure to the field of cryptography. Even though MaGuS seemed like a pretty smart guy, at that point in time he probably also didn’t know much about encryption. This made me think that the files might be easy to crack. It seemed like a fun way to spend a few hours, so I decided to see if I could decode them.

Interestingly (or not interestingly, depending on how you feel about it), the biggest source of external data for Fate Zero was AOL ASCII art (ASCII art done in 10pt Arial). This was typically used for scrolling into chat rooms. Fate Zero had over 500 files dedicated to this. You can see an example piece of art and its corresponding file encoding below.

                         .--··´¯¯¯¨˜`·-.,
            .---··· ´¨¨¨                      `·.
       .·´                                        ',
    ,'                                               ',
   ¦             /|        |        /                  |
    ',     (     \\:\  |   /|      /''\     .|          |
      '·.  \|\ \.,'.|::\|\/ |¸,.-·´¨¨`·/.·´  |           |
         ` ·-\\|'/|¨`,     `|˜¨|¨˜`·„¸      |   |´¯`,    |
           ,'/||', \:'| ,     |_\::':/      |    |,  ,'     |
         ,'//|  ',¯¯·',                    |    | ¯        |
        ,'/  |  | ` ·.  --·´               |     |           |
        |´  |   |   _ ` ·.__ .·´        |      |/_        |
        |   |   |¨¯  ¯¯///,··\     ,.--·|      |  ',¯¨¨˜˜``'
        |.·´|   |--,··´¯//\ \ \    //   Aeka  _¸'·-By KioNe

File data for the above picture:

MDR恔…f”…f”…f”…f”…f”…f¡’ý)õý¦¡“rn~f”…f”…f”“sŽ¡ý”î
…f”…f”…f”…f”…f”…fÁ,“Sk…f”…f,f”…f”…f”…f”…f”…f”…f”…f”…f”…f”…mo恔…rˆ”…f”…f”…f”…f”…f”…f”…f”…f”…f”…f”…f”…fˆ rP”…쁔…f”…f”…f£áf”…f”…”…f”…f”…f”…f”…f”…f”…Ân~f”Œr”…fœ…f”…¢½®Áfð…f£áf”…f£Œm½”…f”“”…f”…f”áSk…f”…m¢…f½ðÁf½¢‘mðŸ€½ðÁuðr¡ú
Åý¢ú”áf”…f”…f”áSk…f”…f”Åf¡Á¢Ý›”Â
Ô‘f”…fÁðýîÝý¦øf”…fð…fðõÁ …f”áSk…f”…f”…f›”ÂÝ›‘f½®ŒÂ …f”…ÂÀП€ˆ®”f”…fð…f”ár”‘m”…fðrP”…f”…f Œuð…fˆ õ›‘f”…f”…f”…f”…f”…”…fÝ”f”…f”…Ân~f”…f”‘m”…”áfÁ”t”’s)…f”…f”…f”…fð…f”…”…f”…f”…Ân~f”…f”áú”áf”áf”ÄfÁ”tÀÓ…t)…f”…f”áf”…fð”¥”…f”…f݁o恔…f”…”…”…Â
$…f$”u ý½”…f”‘tŽ¡”…f”áf›‘õ
ýÞÁÔŒSk…f”…fð“ýð…fð’s,ú£”¢Ð…¢”…f£…fµÊ±Â”…¥›s£í…‘Ê㳫n~

So right away it’s clear he’s not using a simple substitution cipher, yet due to the repeated use of white space in the source data, a pattern does seem to emerge in the encoded data. I compared the file sizes and found MaGuS’s encoded *.mdr files to be 5 bytes larger than their decoded counterparts. I chalked this up to the “MDR” that prefixed all the files, and the carriage return and line feed that ended each file.

That meant there was probably a one-for-one character encoding going on. After trying a few things out, I realized every 4th character seemed to use the same encoding. My guess was that he was combining 4 simple substitution ciphers, using a different cipher depending on the index of the character. I created a quick script that read in an input/output combination and then tried to use that information to decode an encrypted file. To my delight, the script (mostly) worked! This was great; however, without knowing the full map of each cipher, I would only be able to get partial results.

I looked further and found each cipher was simply doing a character offset, meaning each cipher was a Caesar cipher. The offsets were 70, 97, 116 and 101, respectively. If you look up the corresponding ASCII codes for those numbers, you get the word “Fate”. I tried out this new decoding strategy and was able to successfully decode a directory of MaGuS’s files. I had broken the code! MaGuS was using what is known as the Vigenère cipher, and for that particular directory, “Fate” was the passphrase.
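
Decoding is then just a matter of subtracting the repeating key’s character codes back out. Here’s a minimal sketch of the idea (the byte-level wrap-around is an assumption on my part):

// Decode a Vigenere-enciphered string where each character was shifted
// by the character codes of a repeating key ('Fate' = 70, 97, 116, 101).
function vigenereDecode(encoded, key) {
    var out = '', code, ii;
    for (ii = 0; ii < encoded.length; ii++) {
        code = encoded.charCodeAt(ii) - key.charCodeAt(ii % key.length);
        if (code < 0) {
            code += 256; // assume byte-level wrap-around
        }
        out += String.fromCharCode(code);
    }
    return out;
}

// strip the 'MDR' prefix and the trailing CR/LF before decoding:
// vigenereDecode(fileData.slice(3, -2), 'Fate');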

In another interesting twist, I noticed certain types of files used different Vigenère keywords. For his *.mdf data files, the keyword “12151981” was used. My guess was that this was his birthday, since this date would have made him 16 when the prog was released, and he mentions that he was 16 in the app’s about section. In this same about section he also mentions that he’s Asian and what high school he went to. This narrows down who he is almost to a T.

This got me thinking: “I wonder if I can track down who MaGuS was?” With the aid of some crafty googling, email addresses taken from webpages mentioned inside of Fate (if you dig through the machine code, you’ll find a dozen or so URLs), Rapportive (which can be used to look up social profiles based on email addresses), the Internet Archive, and leads taken from Fate Zero itself, I was able to pinpoint an individual who fit all of the criteria and was friends with people who got shout outs in Fate. I plugged their name and the “12-15-1981” birthday into dobsearch.com, and only one result came back – and it was from the state and city MaGuS said he lived in. I was stunned. I had found MaGuS.

I feel like it’d be wrong to out him, but at the same time I know it’d be a cop-out to not say anything. So I’ll just say that according to his LinkedIn and Facebook, he works for a consulting firm in the Washington DC area and specializes in web-related work. The rumors of him working for a security firm or of being this guy are false. He also seems to be somewhat of a world traveler, and has a side hobby of being a photographer.

Part of me wondered for a second if I should contact him. He was a big inspiration to me back in the day, and Fate-X and its ilk are what led me to learn how to program. However, after talking with my wife, we thought that’d be too creepy. He made some cool progs a long, long time ago, no need to freak him out with some elaborate story that involves breaking some encryption he wrote over a decade ago.

Anyway, after I’d finished my little side quest, I realized I still had 500+ decrypted AOL ASCII art files, many of which hadn’t seen the light of day in over a decade. Since some of that stuff is kind of cool, I decided to create a gallery for it. If you have a few moments, check it out. Also, feel free to grab and host any art there that you like – just be sure to leave in any artist signatures. It’s kind of strange to think that era is so far away, but also kind of neat to find remnants of it every so often.

2013.04.28 Update: A bit more has happened since I made the original post. MaGuS actually emailed me to congratulate me on the finding and to confirm his identity (though I’ll continue to respect his anonymity). He also mentioned that at the time he wrote Fate he had no training or knowledge of programming, and that he came up with his own encryption method as he went along. I don’t fault him for this, as Fate is still really impressive and I think most of us were in the same boat back then. He seems like a pretty cool guy, and I was glad to hear he enjoyed the post.

2016.03.10 Update: To reconnect with fellow former AOL developers:

Finding and fixing JavaScript errors that no one tells you about

Earlier this month I saw an interesting article on logging client-side JavaScript errors, and then another one that showed how to do it using Google Analytics. The idea was clever and easy to implement, so I decided to try out the Google Analytics version when I posted up my new Keyboard Layout Analyzer last week. When an error occurred, I’d log the file it happened in, the line number it happened on, the browser’s user agent string, and the error message itself. Below you can see a screenshot of the error log after one week.

Google Analytics Panel
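
The setup is just a window.onerror handler that fires off an Analytics event. Mine looked roughly like the sketch below (this assumes the ga.js _gaq API; the category and label formatting are arbitrary choices of my own):

window.onerror = function(message, file, line) {
    _gaq.push(['_trackEvent',
        'JavaScript Error',                               // category (arbitrary)
        message,                                          // action: the error message
        file + ': ' + line + ' | ' + navigator.userAgent, // label: file, line, browser
        0,                                                // value (unused here)
        true  // non-interaction event, so bounce rate isn't affected
    ]);
};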

I’m unsure what errors 2-5 are. There’s no associated file, and the error message is simply “Script error”. The Analyzer gets an average of 90 visitors a day, so these account for a small percentage of users, though it is a little peculiar. Error 1 is definitely a real error though. I had been checking the stats every day, so when it popped up I actually got a little excited.

Usually when I discover an error, it stings. Users don’t often tell you about errors, unless they really like the app or are feeling unusually nice. Oftentimes they may not even know something’s wrong. If the app won’t work correctly, they just move on to another page without another thought.

This time, I caught it right after it happened, so I was actually a little happy. The error was pretty easy to pinpoint. Luckily it only occurred if AltGr keys were used, which doesn’t affect users unless they decide to add customizations for foreign layouts – which is why it didn’t show up until later in the week. I should have caught this while testing, but oversights happen sometimes.

So far I’m happy with this little experiment, and am thinking about possibly adding client-side error logging to my other apps. For now the only drawbacks I see are that this method of discovering errors can be a hassle if your scripts are minified and concatenated, and that users could abuse this if they knew you were doing it (though then again, they could abuse any kind of data-saving request you were doing).