All posts by patorjk

The Last Days of Summer

I decided to take a 5-day weekend to celebrate the end of summer. I needed some time to just sit back, relax, and clear my head. For my break I thought I’d read a book on PHP, check out a John Swartzwelder novel, and do some serious coding.

It’s strange how one’s plans can just go right out the window. I didn’t get any of the above done, though I did have a lot of fun. It’s weird that I planned such an anti-social weekend and then went out and did a bunch of stuff. Looking back, I had a lot of fun this summer. I didn’t really take any time to soak it in or appreciate it until just now though. I hope everyone out there reading this had a good weekend.

Today is officially the last day of my big weekend. Hopefully I’ll get some of my original goals done so I can have some more stuff for this site. In the meantime, if you’re bored, I highly recommend viewing this short film:

It was shown to me by a friend a few years ago and it’s absolutely beautiful. I’ve actually googled for it again a couple of times just so I could watch it again.

Well, it’s almost 5am; I should be getting to bed. I don’t want to totally mess up my sleeping schedule for Wednesday.

Stats: August

Average Number of Visitors a Day: 156.45
Total Number of Visitors: 4850
Total Amount of Bandwidth Used: 2.93 GB

Links from an Internet Search Engine
13 different referring search engines (columns below: Pages, Percent, Hits, Percent)
Google 1188 90.6 % 1188 90.2 %
Yahoo! 59 4.5 % 59 4.4 %
MSN Search 14 1 % 15 1.1 %
Ask 11 0.8 % 11 0.8 %
Unknown search engines 10 0.7 % 10 0.7 %
AOL 10 0.7 % 10 0.7 %
Windows Live 9 0.6 % 9 0.6 %
AltaVista 3 0.2 % 3 0.2 %
Google (Images) 3 0.2 % 3 0.2 %
Google (cache) 1 0 % 5 0.3 %
Dogpile 1 0 % 1 0 %
del.icio.us (Social Bookmark) 1 0 % 1 0 %
Blingo 1 0 % 1 0 %

Thanks again to all of you who check in every so often. I’ll do a real update later this weekend… I promise…

Know the name of a Federal Employee? You can look their salary up online!

I actually can’t believe this is real: an online database containing the salaries of most federal employees (those who deal with matters of national security are exempt):

http://php.app.com/feds06/search.php

This sort of makes sense, since we the taxpayers are paying these people; however, it also seems a little wrong. Aren’t these people entitled to some amount of privacy? I’d be annoyed if someone could look up my salary, though then again, maybe if everyone knew what everyone else made it wouldn’t be so bad. You wouldn’t have people trying to cut backroom deals to get a higher salary.

Unfortunately I don’t know many Federal Employees. I tried most of the people on my facebook list and came up with only one hit (and it was the guy who told me about the site). This would be a much juicier find if Maryland State employees were listed, sort of like how New Jersey State employees are listed.

Anyway, I found this to be a shockingly interesting site. If you know any federal employees you might want to point them to it, just to see what they think :).

New Old Stuff

I decided to resurrect the “Most Asked For VB Code” section of my old site. I figured there were a lot of helpful bits of code in there, and that it was a waste to simply discard it. You can view it here:

VB 6.0 Code Bank

I went through and checked all of the code to ensure that it still works, and I tidied up some of the messier looking segments. It’s cool to look back at some of the code I wrote when I was 16/17 and think “hey, that’s pretty clever!” But then some of it also made me go “holy crap, I was an idiot!” I was especially embarrassed by how many times I used variables named “pat” or “patorjk”. I believe early on in my programming days I saw some hacker posting code on a message board and he would use his handle as a variable name. I thought that was pretty cool so I picked it up. Sadly, no one ever pointed out to me that this was a bad programming practice and that it made code hard to decipher.

I suppose that’s one of the disadvantages to teaching yourself. You can end up unaware of certain bad practices until they come back to haunt you. When I was a Teaching Assistant back in grad school I worked with a beginner class where students could lose up to 60% of their project points on style alone. I remember having irate students come to me and ask how they could have received a “D”, or in some cases an “F”, when their project worked perfectly.

I was kind of upset with the harsh grading guidelines too, but there wasn’t anything I could do. Looking back, I still think it’s too harsh to put so much weight on style. All that will do is discourage smart people who have picked up some bad habits. This reminds me of an essay I read recently:

Hackers and Fighters

I like how the author romanticizes the idea of the “street programmer” (the self-taught programmer), and I like how he points out the weak points in a university education. Though a university does do a good job of weeding out complete idiots, I’ve seen a number of people who can barely program (ex: “I can’t program in Java, I only know C++”) who go on to make 80k+ a year. Then there are people I know who are self-taught who could program circles around those people and they make less than 50k a year. It boggles my mind. A degree and good social/networking skills seem to be the most important things when getting a good job, which saddens me a little, but oh well.

Opera and Other Things

I downloaded the Opera web browser recently, just to see what this page looked like. I noticed that the TAAG program looked like crap so I’ve redone some of the CSS for it. Later this week I hope to do another update to it to add in some new features.

A while back I was working on a new web app, but I haven’t touched it in over two weeks. My plan is to start working on it again later this week and try to release it asap.

“Google! Teacher, mother… [lustful voice] secret lover.”

I love google. Lately it’s been bringing me lots of visitors. It brought me 55 yesterday, which is pretty damn cool (and pretty good for a small site like this one). However, this past weekend when I messed up my .htaccess file, it brought me no one. In case you weren’t here: I screwed up my 301 redirect links and no one was able to access anything in the /software, /downloads, or /programming directories of this site (you were just redirected to one of my blog pages). I will now do a total site checkout after touching that file. I can’t believe I was that stupid.
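For reference, a permanent redirect in .htaccess is only a one-liner. The paths below are made up for illustration; they’re not my actual rules:

# Hypothetical example: permanently redirect an old directory to its new location
Redirect 301 /old-software http://www.patorjk.com/software/

When a line like that is wrong, or points somewhere it shouldn’t, every request to the old path quietly ends up in the wrong place, which is basically what happened to me.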

Like an idiot I let this error sit around for a few days before I realized something was terribly wrong. That something wrong was that google was no longer bringing me traffic; my page rank had fallen. I quickly fixed the error and my page rank mostly returned. However, my ranking for the term Patrick Gillespie seems to have disappeared. I was actually kind of hoping to grab the top spot from that stupid news article on the pervert named Patrick Gillespie. Though then again, it was my Tiburon entry that was climbing the top ten of that page, and I’m not so sure that article would look good next to one titled “Patrick Gillespie arrested for failing to register as a sex offender.”

Back to the topic at hand though. Luckily google forgave my foolish .htaccess blunder. I was reading an article on some other blog earlier this week where the site owner wasn’t as lucky:

Google Penalty Nightmare

Google penalizes sites for certain offenses. Usually this is because a site is trying to do something artificial to raise its page rank. But since google doesn’t tell these sites the reason they are penalized, no one can be sure why a page suddenly drops in its page rank – or in some cases, is removed from the search results. Sometimes you mess up and things go back to normal (like me), and sometimes you mess up and things are sucky for a decent amount of time (like the girl in the blog entry above).

I find the whole idea of a secret penalty system quite interesting. It makes sense that they wouldn’t release all of the details, because then people would know exactly what to try and get around. Not knowing what you’re up against means there are more mines you could potentially step on, and when those mines are things like being removed from google’s search results, you’re less likely to do something to try and cheat the system. At least that’s what I think their reasoning is.

Not surprisingly, webmasters have tried to understand this system and some have even put together lists of possible google penalty filters. Below is one such list:

Google Filters

Jargon you’ll need to know to understand that article:
SERPS = Search engine results pages
SEO = Search engine optimization

If you’re a webmaster, that article is worth reading. It’s even caused me to think twice about the naming of my “links” section – though it’s late now so I’ll rename it later.

Oh yeah, in case you’re wondering, the title of this entry is a reference to a famous Homer Simpson quote. I figured it would probably be a good idea to mention that :).

Cool New Image Resizing Technique

One of the things I studied in grad school was how to find the least visible seam in an image. This would be useful, say, if you overlapped two images and wanted to find the best way to cut them together, or if you wanted to create your own waldo image. I thought that the seam finding algorithms were really cool, but that there wasn’t a whole lot of use for them outside of doctoring photos and generating texture. Holy crap was I wrong. Check out this video:

This is the first time I’ve ever watched a technical video and thought to myself “I understand everything that is going on here… why didn’t I think of this!!” That’s a truly awesome idea. It makes me want to break out my seam finding code and make an image resizer (I’ve written code that finds the low energy image seams you see in the video – it’s what I used to make the waldo image, which is the same image randomly pasted over and over again with a seam between overlapping areas).

The only problem with that idea is that finding seams isn’t as fast as they show you, at least not for large images (400+ by 400+). They might be using a different algorithm than what I’m thinking of, or maybe they were running the app on a super fast computer, but in my experience finding a seam can take some time if the image is of a decent size (by “some time” I mean 1 or 2 seconds). Though I might be wrong and this might be a super fast algorithm. It’ll be interesting to see how people apply this.
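For the curious, the seam finding I’m talking about is basically just dynamic programming over an energy map. Below is a minimal sketch of the idea in Python/NumPy; it’s a toy version of the kind of thing my old code does, not anything taken from the video:

import numpy as np

def find_vertical_seam(gray):
    # Return, for each row, the column index of the lowest-energy vertical seam.
    # `gray` is a 2D grayscale image; energy here is a simple gradient magnitude.
    gray = np.asarray(gray, dtype=np.float64)
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))
    H, W = energy.shape

    # cost[y, x] = minimum total energy of any seam from the top row down to (y, x)
    cost = energy.copy()
    for y in range(1, H):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        up = cost[y - 1, :]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)

    # Backtrack from the cheapest bottom pixel to recover the seam
    seam = np.zeros(H, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

Remove the pixel along that seam from each row and you’ve shrunk the image by one column; repeat until you hit the width you want.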

Blocking Firefox

There’s been a decent amount of hysteria on some of the social news web sites about a new campaign to block Firefox users. The reasoning behind this is that Firefox has a plug-in that allows you to block ads. Proponents of this campaign argue that this robs website owners of the opportunity to make money from their sites. You can see the campaign’s website here:

http://whyfirefoxisblocked.com

The majority of people reacting to this seem to be upset about it. However, after doing a bunch of googling, I was only able to find one site that is participating in this campaign (and it sucks).

Anyway, I figured I’d give my opinion on this issue since I’m sure I’m in the minority. Frankly, I don’t see why so many people are getting upset. If a website is going to block you, just don’t visit that website. There are millions of places to go on the internet; if a handful of sites want their ad revenue and you don’t want to look at ads, just don’t go to those websites. For every site that blocks Firefox, I’m sure an alternative will spring up somewhere.

I kind of like the ad revenue based system though, since it allows a lot of stuff to be free. Most sites I visit have a tasteful display of ads. If I go to a website and they ambush me with lots of crap, I just never go there again. I’d hate to see this model replaced with one where ads are injected into the actual content (movies and TV shows sometimes do this). I wouldn’t put it above sites to do this either. They’re going to make their money one way or another (or disappear).

As a side note, according to my web stats, 37% of the people who visit this website use Firefox, 52% use Internet Explorer, and the rest use a variety of other browsers. I guess that’s indicative of a more web-savvy audience.

Some Minor Updates

VB Arrays Tutorial

I noticed from my stats page that chicanerous’ VB Arrays tutorial was getting between 8 and 10 views a day. This made me realize a decent number of people were reading it, and I felt sort of bad that I had it displayed with such a crappy layout. The grey background and white text on a standalone page made the tutorial look very low quality, so I decided to give it a makeover and add a section on the Split and Join functions:

http://www.patorjk.com/programming/tutorials/vbarrays.htm

Also, there will be more updates to the programming section soon. It’s been pretty sparse for a while.

Links

I’ve created a formal links section. On my old website, I had a link exchange program. I will no longer do this, mainly because I had a lot of crappy links submitted to me, and at the time I figured it was better to be nice and do the link exchange than to reject someone who probably visited my page regularly. However, I actually ended up having a few people email me asking why I had a couple of really crappy sites linked (I won’t name names). I’m sure this caused them (and others) not to trust the link suggestions I gave. There was also the problem of people unlinking me after I linked them, which was annoying (especially if I didn’t like their site to begin with).

So now I’m just going to limit it to sites that I like, think are interesting, have useful/relevant content, and feel are worth checking out. As time goes on I hope to add a lot more than what’s there now. I haven’t decided if I want to focus on smaller sites or just interesting sites in general. My gut is to go with smaller sites, but we shall see.

Sleep

I haven’t been getting much sleep lately. I guess there’s no point to me saying that here, but I’m pretty tired right now and felt like sharing :P. There was actually going to be more added today, I just didn’t get around to it, maybe this weekend though.

Alternative Photomosaic Algorithms

One thing that has always bothered me is software patents. They just seem wrong. How can someone own a way of doing something? Or own a technique that others would come up with when trying to solve the same problem? They seem like unnatural restraints, like patenting the solution to a math problem. I can understand wanting to protect your ideas, but I honestly don’t think most software ideas are novel enough to warrant a patent, especially at the rate that the US government seems to be giving them out. I mean hell, who can be expected to know about all the stupid things people have patented? There is no fucking way that there were 40,000 patent-worthy software ideas that came out last year. That’s completely absurd.

One example that always comes to mind when thinking about why software patents are bad is the story of what happened to id Software while they were developing Doom 3. Essentially John Carmack, the technical director at id Software, came up with a neat way of doing real time shadowing, which he called Carmack’s Reverse. After discussing the technique on his blog, it was discovered that two researchers had already patented the idea. In order to be able to use the technique, Carmack had to come to an agreement with them. Someone else, independently of these two parties, had also discovered the technique and presented it at a conference before the patent was filed. This person offered to let id Software use it for free, but id Software decided to play it safe and strike a deal with the patent owners [1].

It’s things like that that bother me. Someone is toiling away writing a piece of code, they come up with a great way of solving a problem, implement it, and then later learn some researcher in a lab somewhere has patented the idea – probably just so they can say they have X number of patents – and now they have to pay to use a technique that they came up with on their own.

But I digress. I could go on for 10 pages about why I don’t like software patents. My focus here is that I’ve learned that a patent has been granted for the creation of photomosaics, those neat images that are made up of smaller images. The abstract reads as follows [2]:

A mosaic image is formed from a database of source images. More particularly, the source images are analyzed, selected and organized to produce the mosaic image. A target image is divided into tile regions, each of which is compared with individual source image portions to determine the best available matching source image by computing red, green and blue channel root-mean square error. The mosaic image is formed by positioning the respective best-matching source images at the respective tile regions.

I honestly don’t blame the person who filed this patent, since they were one of the first people to create mosaics with photos [3], and they wanted to make sure people didn’t profit at their expense. However, I still really dislike the idea of patenting a way of doing something, especially when the method seems rather obvious. I was able to write a photomosaic generation program when I was in 11th grade, and I didn’t have to look up an algorithm; I just made one up (yes, Mosaicer was written when I was in high school).

The only part of that abstract that seems outside the normal algorithm for creating your basic mosaic is the part that mentions the “root-mean square error” in relation to the color difference. That part is significant because it does a much better matching job than taking a simple color difference (what Mosaicer does). However, spend any time reading up on color differences and you’ll learn that that’s the standard way to compare colors in RGB color space. So I’m not sure why this abstract as a whole was considered so novel.
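To make that concrete, the matching step described in the abstract boils down to something like the following sketch (the function names and array layout are my own, not anything from the patent):

import numpy as np

def rms_error(tile, candidate):
    # Root-mean-square error between a target tile and a candidate source image,
    # both RGB arrays of the same shape (height, width, 3)
    diff = tile.astype(np.float64) - candidate.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def best_match(tile, candidates):
    # Index of the candidate source image with the lowest RMS error
    return int(np.argmin([rms_error(tile, c) for c in candidates]))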

Furthermore, the RGB color space is flawed in that it’s a non-uniform color space. Instead of using the RGB color space, one could use the L*a*b* color space, which is a uniform color space that has some of the most precise color difference formulas ever developed.
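If you wanted to go that route, the conversion is mechanical. Here’s a rough sketch of sRGB to L*a*b* (D65 white point) along with the simple 1976 Euclidean color difference; keep in mind this is the most basic of the Lab difference formulas, and the more precise ones (like CIEDE2000) are built on top of this same space:

import numpy as np

def srgb_to_lab(rgb):
    # Convert sRGB values in [0, 255] (last axis = R, G, B) to CIE L*a*b* (D65)
    c = np.asarray(rgb, dtype=np.float64) / 255.0
    # Undo the sRGB gamma curve
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (standard sRGB matrix, D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ M.T
    # Normalize by the D65 reference white, then apply the Lab transfer function
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(lab1, lab2):
    # CIE76 color difference: plain Euclidean distance in L*a*b* space
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))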

There’s also another way of creating photomosaics, outside of simply changing the color space and color difference methods. 90% of the people who read what follows will have no clue what I’m talking about, but try to follow along because this is a completely different way of doing patch matching than what is described in the patent abstract. Instead of dividing the target image into blocks and seeing which input images best fit the blocks, one can use Fast Fourier Transform patch matching [4, 5]. This technique wasn’t discovered until 2002, and I haven’t seen anyone discuss it in relation to creating photomosaics.

When using this method you transform your data into frequency space, perform some calculations, and then transform your data back into the normal space. Because of how frequency space works, the expensive convolution step becomes simple element-wise multiplication, so your computation time is greatly reduced when calculating squared color differences. Kwatra et al. [6] wrote about the speed up this matching technique provided in their paper on graph cut texture synthesis. With the simple RGB method, generation time for a particular video was 10 minutes; with the FFT method, generation time on the same video was 5 seconds. This is actually how I came across FFT patch matching. I had to implement Kwatra’s paper for a graphics project I was assigned.

After reading the FFT paper, I immediately saw the possible application to photomosaics – however, after implementing the FFT patch matching method and seeing its nuts and bolts I wasn’t sure how much of a speed up it would actually provide, since it performs more calculations than needed for photomosaic patch matching (it tells you how well the image matched at every offset instead of just the subset of offsets needed for tile placement). In fact, I wondered if it could end up slowing things down. But there are ways of optimizing it for photomosaics, so it was (and is) unclear to me how much of a speed up or slow down effect it would have.
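To give a flavor of what this looks like, here’s a bare-bones sketch of FFT-based squared-difference matching for a single grayscale channel. This is my own toy version of the idea from the papers above, not their code, and it computes the match quality at every offset (which is exactly the extra work I mentioned, since photomosaics only need the tile grid offsets):

import numpy as np

def ssd_map_fft(image, tile):
    # Sum of squared differences between `tile` and every same-sized window of
    # `image`, using the FFT for the cross-correlation term:
    #   SSD(y, x) = sum(window**2) - 2 * corr(window, tile) + sum(tile**2)
    image = image.astype(np.float64)
    tile = tile.astype(np.float64)
    H, W = image.shape
    h, w = tile.shape

    # Cross-correlation of image with tile, computed in frequency space
    corr = np.fft.irfft2(np.fft.rfft2(image) * np.conj(np.fft.rfft2(tile, s=(H, W))), s=(H, W))
    corr = corr[:H - h + 1, :W - w + 1]   # keep only fully-inside offsets

    # Windowed sums of image**2 via an integral image (cumulative sums)
    integral = np.pad(image ** 2, ((1, 0), (1, 0)), mode="constant").cumsum(0).cumsum(1)
    window_sq = (integral[h:, w:] - integral[:-h, w:]
                 - integral[h:, :-w] + integral[:-h, :-w])

    return window_sq - 2.0 * corr + np.sum(tile ** 2)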

As you can probably tell, I never got around to trying out the above method in relation to photomosaics, mostly because I’m not that interested in photomosaics anymore. But I did put it on my list of possible things to do in the future. I will never patent any of the ideas discussed here, so they’re free for anyone to use. However, I have no clue if someone has already thought of them and patented them, so I can’t make that guarantee. Though I figured I’d make a post noting all of this in case no patent for this exists, just so there are patent-free methods for photomosaic creation out there.

References:
[1] http://techreport.com/onearticle.x/7113
[2] http://v3.espacenet.com/textdoc?DB=EPODOC&IDX=US6137498
[3] http://en.wikipedia.org/wiki/Photomosaic#History
[4] http://www.cs.sfu.ca/~torsten/Publications/Papers/icip02.pdf
[5] http://www.cs.sfu.ca/~mark/ftp/Icip02/images/icip_2002.jpg
[6] http://www.cc.gatech.edu/~turk/my_papers/graph_cuts.pdf

TAAG Update

One thing that annoyed me about my TAAG program was that every time you changed fonts, the top frame had to be reloaded. This was because the program needed to talk to the server to get the information about the new font. However, technically, nothing on the page needed to be redrawn, so refreshing the whole frame seemed like a bit much, and when you change fonts a lot, it gets annoying. Anyway, this week I was reading up on AJAX, which is a way of talking to the server without reloading the webpage. Since this was just what I was looking for, I decided to incorporate the technique into TAAG:

http://www.patorjk.com/software/taag/

The top frame will still reload if you change the “Font Type” or if you select a font from the preview page. However, it should not reload if you change fonts via the drop down font list. Also, there’s a bug in Firefox where the “onchange” event isn’t triggered for keypresses on listboxes; I’ve set things up so you should now be able to change fonts with your keyboard in Firefox. A few other things were updated as well, but it was all small stuff.

If you’re thinking about developing web applications or interactive webpages, AJAX is worth reading up on. I wish I’d known about it sooner. Later this week I think I’ll start on my next program; it’ll be another web program and I’m unsure of how long it’ll take to make.