Ideas of March

Around two weeks ago, Chris wrote a blog post that I responded to, and I was reminded of some of the great conversations that helped build our community. Many of these took place on the blogs of the aughts.

Like Chris, I think we've lost a bit of that. I've seen what feels like hundreds of conversations fly by on Twitter, 140 characters at a time: incomplete thoughts crammed into a package that's simply too small for detailed and deep expression. Don't get me wrong—a stream like Twitter (or maybe not Twitter itself) is valuable for quick thoughts and light conversation, but we often need more than that.

Thus, like others, I am pledging to do more blogging this year than last, starting now.

I recently spoke at ConFoo, and I intend to turn my Fifty Things talk into a series of short blog posts. I've also been mulling over a post on how and why we ported Gimme Bar from CouchDB to MongoDB. Those will hopefully pave the way and form a habit and personal culture of blogging. Please feel free to hold me to this intent, and if you have a blog, I hope you'll join this effort of creating a blogging revival (and if you don't yet have a blog, check out Habari).

See you soon.

Gimme Bar: One Year Old

Exactly one year ago, today, Gimme Bar was born.

Gimme Bar has been the focus of my work for that entire time, and I haven't blogged about it (much).

Ironically (sort of), I've been far too busy working on Gimme Bar to do much writing about Gimme Bar, but I thought it fitting to take a couple minutes to write a few words about it today.

The elevator pitch for the project goes something like this: Gimme Bar is a personal utility to help you capture and collect interesting things you find in your day-to-day use of the Web.

I'm admittedly not very good at the pitch, but my colleague (and Gimme Bar's backer) Cameron is, and we released a demo video yesterday.

We're in the middle of some huge changes on the technical side that I intend to blog about, and once those are released, I hope to add a lot more active users. If the video makes it sound like something you might be interested in, be sure to sign up for an invitation. We've got some really great stuff coming in the pipeline, too, for existing users.

I'll post more, soon, I hope.

Post-Advent 2010

As I write this on Christmas Eve, Chris is putting the finishing touches on PHP Advent 2010.

A brief search of my site indicates that I haven't actually blogged about PHP Advent since 2007, when I was lucky enough to write the first article. That first year, Chris put the advent articles up on his blog (and we do intend to copy them over to phpadvent.org, eventually). Sensing that Chris had entirely too much curating work to do, and since we were working together by the time the season came around in 2008, I offered to help with editing and curation—I did, after all, know the pain/joy of putting together a magazine.

Chris took me up on my offer, and he enlisted Jon and Jon to design and build a proper site. We commissioned authors a little too late, but they came through and PHP Advent 2008 was a success.

By the time 2009 came around, Chris was already deep into preparing to launch Analog, and I'd already announced (internally) that I was moving on to other things. As a result, 2009's Advent was hard. Really hard. We commissioned authors too late, didn't set solid deadlines (as much as we hate deadlines, this sort of date-sensitive project requires them), neglected to dedicate enough time to author herding and editing, and to top it all off, I was headed to Costa Rica for a much-needed vacation, leaving Chris holding the bag for the last five days of 2009's season. Things were so bad at one point, last year, that I took it upon myself to write an article just so that we didn't miss a day. Luckily, we made it through (and by we, I mean Chris, because by the time my flight to San Jose on Dec. 19th came around, I'd had quite enough of Advent for the year).

If we learned anything from PHP Advent 2009, it was sadly not from the great articles, but instead from our own failures. If we were going to do this again in 2010, we needed to get on it early, and we needed to attack with full force. I set my calendar to start bugging me in August, but even though I was hassled by its weekly reminders, we found ourselves at the start of November, wrecked from just having organized a conference, and in the middle of two product launches. Despite feeling like we didn't want to have the trouble of Advent again in 2010, neither of us dared say it to the other…at least not in so many words.

Due only to the abilities and professionalism of our most excellent authors, PHP Advent 2010 was—at least in my opinion—the best year yet. They wrote wonderful, substantial, punchy articles that informed our readers and generated significantly more traffic than we've seen in previous years: over 70,000 views from more than 25,000 unique visitors, so far. Data from past years tells us that these numbers drop slightly starting on the 25th, as we cease to post new content, but remain strong into January, with constant, lower traffic and occasional blips throughout the year. The most popular article this year had more than 10,000 views!

As we post the last article of 2010, I'm encouraged by all of this, and—contrary to how I felt in 2009—am actually looking forward to making PHP Advent even better in 2011.

Thank you, Chris, and thank you, authors. Have a wonderful new year.

Remote pbcopy

I use the command line a lot. I'm sure many of you do, too.

I find myself often piping things between processes:

$ cat seancoates.com-access_log \
> | awk '{print $1}' \
> | sort \
> | uniq \
> | wc -l
627
$ # unique IPs

One particularly useful tool on my Mac is the pbcopy utility, which takes standard input and puts it on the pasteboard (known as the "clipboard" on some other systems). Its sister application, pbpaste, is also useful (it outputs your pasteboard to standard output when your pasteboard contains data that can be represented in some sort of text form—if you have image data copied, for example, pbpaste yields no output).

$ cat seancoates.com-access_log \
> | awk '{print $1}' \
> | sort \
> | uniq \
> | pbcopy
$ # the list of unique IPs is now on my pasteboard

I find this particularly useful for getting information from the command line into a GUI application.

Wouldn't it be even more useful if we could pbcopy from a remote SSH session? Indeed it would be. Here's how.

The first thing you need is a listener on your local machine. Luckily, Apple has provided us with launchd and its administration utility, launchctl. This is basically [x]inetd for your Mac (plus a bunch of other potentially great stuff that I simply don't understand). Put the following in ~/Library/LaunchAgents/pbcopy.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
     <key>Label</key>
     <string>localhost.pbcopy</string>
     <key>ProgramArguments</key>
     <array>
         <string>/usr/bin/pbcopy</string>
     </array>
     <key>inetdCompatibility</key>
     <dict>
          <key>Wait</key>
          <false/>
     </dict>
     <key>Sockets</key>
     <dict>
          <key>Listeners</key>
               <dict>
                    <key>SockServiceName</key>
                    <string>2224</string>
                    <key>SockNodeName</key>
                    <string>127.0.0.1</string>
               </dict>
     </dict>
</dict>
</plist>

…then, run: launchctl load ~/Library/LaunchAgents/pbcopy.plist

This sets up a listener on localhost (127.0.0.1) port 2224, and sends any data received on this socket to /usr/bin/pbcopy. You can try it with telnet:

$ telnet 127.0.0.1 2224
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
hello
^]
telnet> ^D
Connection closed.

…then try pasting. You should have hello (followed by a newline) on your pasteboard.
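
If telnet feels clunky, anything that can open a TCP connection to the listener will do. Here's a minimal Python sketch (the function name is mine, not part of any tool mentioned here) that pushes text at the same port:

```python
import socket

def pb_send(text, host='127.0.0.1', port=2224):
    # connect to the launchd listener and ship the bytes;
    # closing the socket signals EOF, which completes the pbcopy run
    with socket.create_connection((host, port)) as s:
        s.sendall(text.encode('utf-8'))
```

Calling pb_send('hello\n') is equivalent to the telnet session above, without the escape-character dance.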

The next step is tying this into SSH. Add RemoteForward 2224 127.0.0.1:2224 to ~/.ssh/config. This will tell your SSH connections to automatically forward the remote machine's local port 2224 to your local machine, on the same port, over your encrypted SSH tunnel. It's essentially the same thing as adding -R2224:localhost:2224 to your SSH connection command.

Now you have a listener on your local machine, and a secure tunnel from remote servers to this listener. We need one more piece to tie everything together. Put the following in a file (preferably in your path) on the remote machine(s) where you'd like a pipe-friendly pasteboard:

#!/bin/sh
cat | nc -q1 localhost 2224

…I like to put this in ~/bin/pbcopy or /usr/local/bin/pbcopy on servers where I have root. You'll also need to chmod +x this file to make it executable. You'll need the nc executable, which is often available in a package called netcat. This invocation of nc takes standard input and pushes it to localhost on port 2224.

Now you should have a useful pbcopy on your remote server(s). Be aware, though, that there is no additional security on this port connection. If someone on the remote machine can connect to localhost:2224, they can inject something into your pasteboard. This is usually safe, but you should definitely keep it in mind. Also, if you have multiple users using this technique on the same server, you'll probably want to change the port numbers for each user.

I use this technique all the time. Now you can too. Hope it's helpful.

Brooklyn Beta

Last week, many of the Web's most influential developers and designers converged on a seemingly unremarkable art space (née factory for novelty invisible dog leashes) in Brooklyn for the first of what I hope will become a long-standing conference tradition: Brooklyn Beta.

I've personally helped organize several other conferences in the past, but Brooklyn Beta has easily earned a spot at the top of my list of favourite events in my career.

I've been involved with planning this event nearly since its inception, but mostly in an advisory role. My friends and colleagues Cameron and Chris did the heavy lifting and deserve all of the credit, though they'd be the first to object to this statement by identifying the many people who came together to volunteer and without whom the conference simply would not have happened.

The goal of BB was to get a group of developers, designers, and (a few) savvy business-type people — the makers of the Web — in one room to meet, converse, show & tell, and hopefully to inspire them to collaborate and make something. Even though only a few days have passed, I know this effort was successful, and I can't wait to see the applications, sites, art, and teams that arise and attend next year's conference.

In addition to the impeccable list of speakers, what really made BB stand out was the group of attendees who had the pleasure of spending the day(s) together. Despite my daze (see below), I finally put a face to many of the people whose blogs and Twitter streams have occupied large amounts of my career.

Much of the time leading up to Brooklyn Beta is a blur — we've been frantically trying to finish our app in time to demo (more below) at BB, in addition to handling last-minute details, and we quite obviously bit off more than we could chew. A tip: organize a conference OR finish a large application; don't do both in the same week.

To keep this from turning into rambling and to let me get back to putting some polish on the aforementioned app, here are a few things I feel worth highlighting, in point form:

  • I am blown away by the overwhelming positivity associated with Brooklyn Beta. I've been following the associated Twitter stream, and with the exception of one misinformed whiner (who didn't even attend BB), I've seen nothing but glowing reviews. Further reading: Fred Wilson (one of our speakers), Josh Smith (Plaid), and Charlie O'Donnell; I also put my photos of Brooklyn Beta on Flickr.
  • The first talk of the day, by Shelley Bernstein, far exceeded my expectations. It's not that I had low expectations, it's that the talk was absolutely full of wisdom and good practices. If you have the opportunity to see Shelley give this talk, I suggest you take it.
  • Marco Arment's talk on giving up his day job at Tumblr to focus his efforts on Instapaper was very inspiring. If I wasn't already hip-deep in a startup, I'm pretty sure I wouldn't be able to resist the urge to build something of my own after hearing Marco speak.
  • Fred Wilson, who — from what I can gather — has been key in funding at least $80M of our peers' projects this year, spoke on Golden Principles for Successful Web Apps. The talk as a whole was very good, but he immediately captured my attention when he opened with a statement that seems obvious to me, but I feel is under-represented in the industry: Speed Matters. This point wasn't buried in the middle of a discussion; it was at the forefront of his talk. Remember this point; Fred obviously knows his stuff.
  • Gimme Bar! As I hinted above, we're on the cusp of launching a project that I've been working on full time since leaving my day job at the end of 2009. We demoed Gimme Bar at Brooklyn Beta and received universally positive and excited comments. This is extremely encouraging. You will hear more about this before next week.
  • Similarly, my friends at Analog demoed their project, Mapalong, which was also positively received. I'm excited for them to be launching as well.

If you missed Brooklyn Beta this year, hopefully you won't let it pass you by again in 2011.

There's so much more I could say… but I've got a project to launch. (-:

Arbitrary Incrementer in PHP

On several recent occasions, I've needed an incrementer that uses an arbitrary character set, and I thought I'd share my code with you.

I've used this code in the GPL Virus that I wrote to poke fun at the WordPress/Thesis/GPL debacle, as well as in some cleanup I'm doing for the extremely useful JS Bin project.

The most important application, however, was in creating a URL shortening system for the as-yet-unannounced startup project that I'm working on.

I wanted the URL shortener to make the shortest possible URLs. To keep the number of characters in a URL short, I had to increase the set of characters that could comprise a key.

To illustrate this, consider a hexadecimal number versus its decimal equivalent:

$num = 32323232321;
echo $num . "\n";
echo dechex($num) . "\n";

This outputs:

32323232321
7869d6241

As you can see, the second number is two characters shorter than the first number. The reason for this is that every digit of a decimal number is represented by one of 0123456789 (10 unique characters), while every digit of the hexadecimal number is represented by one of 0123456789abcdef (16 unique characters). This means that we can pack more information into each digit, making the overall length of the key shorter.

PHP has a base_convert() function that allows any sequential base up to 36 (the number of letters in the alphabet (26) plus the 10 numeric digits). We can further compress the above example by increasing the base from 16 (hexadecimal) to 36:

$num = 32323232321;
echo $num . "\n";
echo base_convert($num, 10, 16) . "\n";
echo base_convert($num, 10, 36) . "\n";

Using the full spectrum saves us 4 characters:

32323232321
7869d6241
eukf1oh

Unfortunately, base_convert() does not take the base beyond 36. I wanted to increase the information density (and thus decrease the length of the tokens) even further. URLs are case-sensitive, so why not use both uppercase and lowercase letters? We might as well throw in a few extra characters (- and _).
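
To make the digit-packing concrete, here's a quick sketch of encoding a number with that 64-character set (written in Python for brevity; the character set matches the one below, but the helper function is mine):

```python
SET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_'

def encode(num, charset=SET):
    # standard base conversion: repeatedly divide by the base,
    # collecting remainders as digits (least significant first)
    if num == 0:
        return charset[0]
    base = len(charset)  # 64 here
    digits = []
    while num > 0:
        num, rem = divmod(num, base)
        digits.append(charset[rem])
    return ''.join(reversed(digits))
```

encode(32323232321) comes out at six characters, one shorter than the base-36 version above.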

Additionally, I wanted to be able to increment the sequence, based on the current maximum value. PHP offers no facility as simple as base_convert for this (and the $a = "zzz"; echo ++$a; trick doesn't quite do what I need).

After a bit of code wrangling, I came up with the following algorithm that allows an arbitrary character set, and increments over it, recursively.

function inc($n, $pos=0)
{
    static $set = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_';
    static $setmax = 63; // index of the last character in $set

    if (strlen($n) == 0) {
        // no string
        return $set[0];
    }

    $nindex = strlen($n) - 1 - $pos;
    if ($nindex < 0) {
        // add a new digit to the front of the number
        return $set[0] . $n;
    }

    $char = $n[$nindex];
    $setindex = strpos($set, $char);

    if ($setindex == $setmax) {
        $n[$nindex] = $set[0];
        return inc($n, $pos+1);
    } else {
        $n[$nindex] = $set[$setindex + 1];
        return $n;
    }
}

To change the set, simply alter the $set variable, and adjust the $setmax accordingly. I hope you find this as useful as I have.
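
For anyone who wants to poke at the algorithm outside of PHP, here's a rough Python translation (a sketch for experimentation, not a drop-in replacement):

```python
SET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_'

def inc(n, pos=0):
    if len(n) == 0:
        # no string yet; start at the first character of the set
        return SET[0]
    nindex = len(n) - 1 - pos
    if nindex < 0:
        # every digit rolled over; add a new digit to the front
        return SET[0] + n
    setindex = SET.index(n[nindex])
    if setindex == len(SET) - 1:
        # this digit wraps around; reset it and carry into the next position
        return inc(n[:nindex] + SET[0] + n[nindex + 1:], pos + 1)
    return n[:nindex] + SET[setindex + 1] + n[nindex + 1:]
```

The carry behaves just like decimal arithmetic: once the last character of the set rolls over, the increment propagates leftward, and a new digit appears when everything wraps.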

After writing this piece, but before publishing it, I stumbled upon some similar code that they use at Flickr to do arbitrary base conversion, so take a peek over there to see how they handle this.

Beer Alchemy Integration

Note from future Sean: this is certainly dead code 10 years later; leaving here for reference in case it's somehow useful.

As I mentioned in my previous post, my beer recipes are now online.

I've had several people ask me how this is done, so I think a post is in order.

While it's entirely possible to brew beer at home without any fancy gadgets, there are several tools I use (such as my refractometer) that make the process easier, more controlled, or both. Brewing software is one of the few instruments that I'm not sure I'd want to brew without. I use a Mac, primarily, so Beer Alchemy (BA) is the obvious choice for recipe formulation, calculation, and logging.

BA has its own HTML export mechanism for recipes, and I used this for quite a long time, but I was never really satisfied with the results. The markup was hard to style, contained a lot of clutter (occasionally useful, but often redundant information such as style parameters), and simply didn't fit well with the rest of my site.

You can also export from BA in PDF (not suitable for web publishing), ProMash's binary recipe format (a pain to convert, although there do seem to be some tools to help with this), BeerXML (normally the most accessible, but in my opinion, a poorly-designed XML format), or in BA's native .bar ("Beer Alchemy Recipe") format, which is what I chose.

The bar format contains a property list, similar to those found throughout Apple systems. Property lists are either binary or XML (but the XML is very difficult to work with using traditional tools because of the way it employs element peering instead of a hierarchy to manage relationships). Luckily, I found a project called CFPropertyList that allows for easy plist handling in PHP. (I even contributed a minor change to this project, a while ago.)

Once you've run the .bar file's contents through CFPropertyList, layout is very simple. Here's most of the code I use to generate my recipes:

<?php
$beerPath = __DIR__ . '/../resources/beer/';

$recipes = apc_fetch('seancoates_recipes');
$fromCache = true;
if ($recipes === false) {
	$fromCache = false;
	foreach (new DirectoryIterator($beerPath) as $f) {
		if ($f->isDot()) {
			continue;
		}
		if (substr($f->getFilename(), -4) != '.bar') {
			continue;
		}
		$cfpl = new CFPropertyList($beerPath . '/' . $f->getFilename());
		$recipe = $cfpl->toArray();
		$title = $recipe['RecipeTitle'];
		$recipes[self::slugify($title)] = array(
			'title' => $title,
			'content' => $recipe,
		);
	}
	asort($recipes);
	if ($recipes) {
		apc_store('seancoates_recipes', $recipes, 3600); // 1h
	}
}

In addition to displaying the recipe's data, I also wanted to show the approximate (calculated) beer colour. Normally, beer recipes declare their colour in "SRM" (Standard Reference Method). There's no obvious, simple, and direct way to get from SRM (which is a number from 0 to 40—and higher, but above the mid 30s is basically "black") to an HTML colour.

I found a few tables online, but I wasn't terribly happy with any of them, and keeping a dictionary for lookups was big and ugly. I like the way Beer Alchemy previews its colours, and since it has HTML output, I emailed the author to see if he'd be willing to share his algorithm. Steve from Kent Place Software graciously sent me an excerpt from his Objective-C code, and I translated it to PHP. This might be useful for someone, and since Steve also granted me permission to publish my version of the algorithm, here it is:

<?php
/**
 * Calculate HTML colour from SRM
 * Thanks to Steve from Kent Place Software (Beer Alchemy)
 *
 * @param float $srm the SRM value to turn into HTML
 * @return string HTML colour (without leading #)
 */
function srm2html($srm)
{
	if ($srm <= 0.1) { // It's water
		$r = 197;
		$g = 232;
		$b = 248;
	} elseif ($srm <= 2) {
		$r = 250;
		$g = 250;
		$b = 60;
	} elseif ($srm <= 12) {
		$r = (250 - (6 * ($srm - 2)));
		$g = (250 - (13.5 * ($srm - 2)));
		$b = (60 - (0.3 * ($srm - 2)));
	} elseif ($srm <= 22) {
		$r = (192 - (12 * ($srm - 12)));
		$g = (114 - (7.5 * ($srm - 12)));
		$b = (57 - (1.8 * ($srm - 12)));
	} else { // $srm > 22
		$r = (70 - (5.6 * ($srm - 22)));
		$g = (40 - (3.1 * ($srm - 22)));
		$b = (40 - (3.2 * ($srm - 22)));
	}
	$r = max($r, 0);
	$g = max($g, 0);
	$b = max($b, 0);
	return sprintf("%02X%02X%02X", $r, $g, $b);
}
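
For the curious, the algorithm translates directly to other languages. Here's a Python version of the same excerpt (truncating toward zero to mirror PHP's float-to-int conversion in sprintf):

```python
def srm2html(srm):
    """Approximate an HTML colour from a beer's SRM value."""
    if srm <= 0.1:  # it's water
        r, g, b = 197, 232, 248
    elif srm <= 2:
        r, g, b = 250, 250, 60
    elif srm <= 12:
        r = 250 - 6 * (srm - 2)
        g = 250 - 13.5 * (srm - 2)
        b = 60 - 0.3 * (srm - 2)
    elif srm <= 22:
        r = 192 - 12 * (srm - 12)
        g = 114 - 7.5 * (srm - 12)
        b = 57 - 1.8 * (srm - 12)
    else:  # srm > 22
        r = 70 - 5.6 * (srm - 22)
        g = 40 - 3.1 * (srm - 22)
        b = 40 - 3.2 * (srm - 22)
    # clamp to zero and truncate, as the PHP version's max() and sprintf() do
    r, g, b = (max(int(v), 0) for v in (r, g, b))
    return '%02X%02X%02X' % (r, g, b)
```

Note how the channels bottom out at zero well before SRM 40, which is why everything in the mid-30s and beyond renders as black.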

A new seancoates.com

Over the past few weeks, my business partner Cameron and I have spent evenings, late nights, and weekends (at least partially) working on a much-improved seancoates.com.

If you’re reading this via my feed, or through a syndication outlet, you probably hadn’t noticed.

The primary goal of this change was to reduce (hopefully even remove) the ugliness of my main presence on the Web, and I’m very happy with the results.

In addition to making things look nicer, we also wanted to improve the actual functionality of the site. Formerly, seancoates.com was a blog, with a couple haphazard pages thrown in. The new version serves to highlight my blog (which I fully intend to pick up with more frequency), but also contains a little bit of info about me, a place to highlight my code and speaking/writing contributions, and a good place for me to keep my beer recipes.

Cameron came up with the simple visual design and great interaction design, so a public “Thank You” is in order for his many hours of thought and contribution. Clearly, the ugliness reduction was his doing (due to my poorly-functioning right brain).

I’m very happy with how the site turned out as a whole, and thought I’d outline a few of my favourite bits (that might otherwise be missed at first glance).

URL Sentences

The technique of turning URLs into sentences was pioneered by my friend and colleague Chris Shiflett. Cameron (who shares studio space (and significant amounts of beer) with Chris) and I both like this technique, so we decided to implement it for my site.

The main sections of the site are verbs, so this was pretty easy (once we decided on proper nomenclature). Here are a few examples:

  • seancoates.com/blogs – Sean Coates blogs…
  • seancoates.com/blogs/about-php – Sean Coates blogs about PHP (my “PHP” blog tag)
  • seancoates.com/brews – an index of my published recipes
  • seancoates.com/brews/coatesmeal-stout – the recipe page for Coatesmeal Stout

To complement the URLs, the page title spells out the page you’re viewing in plain language, and the visual site header indicates where you are (while hopefully enticing you to click through to the other sections).

Moving my blog from the root “directory” on seancoates.com to /blogs caused my URLs to break, so I had to whip up yet another bit of transition code to keep old links functioning. Even links on my original blog (which was hosted on blog.phpdoc.info) should still work. If you find broken links, please let me know.

Vertical Content Integration

My “/is” page contains feeds from Twitter and Flickr.

The Twitter integration was pretty simple; I use the JSON version of my user feed, but I didn’t want to include @replies, so they’ve been filtered out by my code. If the fetch was successful, the filtered data is cached in APC for a short period of time so that I’m not constantly hammering Twitter’s API.

Flickr’s integration was also very simple. After a run-in with some malformed JSON in their API, I decided to integrate through their Serialized PHP Response Format. The resulting data is also cached in APC, but for a longer period of time, as my beer tasting log changes much less frequently.
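
The pattern in both cases is plain cache-aside: check the cache, fall back to the API, filter, store. A rough sketch (in Python, with a tiny in-memory stand-in for APC; the tweet shape and names are assumptions, not Twitter's actual response format):

```python
import time

_cache = {}  # key -> (value, expiry timestamp); stands in for APC here

def cache_fetch(key):
    entry = _cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]
    return None

def cache_store(key, value, ttl):
    _cache[key] = (value, time.time() + ttl)

def filtered_tweets(fetch, ttl=300):
    # use the cached copy while it's fresh, so the API isn't hammered
    cached = cache_fetch('tweets')
    if cached is not None:
        return cached
    tweets = fetch()  # e.g. an HTTP request for the JSON user timeline
    tweets = [t for t in tweets if not t['text'].startswith('@')]  # drop @replies
    cache_store('tweets', tweets, ttl)
    return tweets
```

Only a successful fetch is cached, so a failed API call gets retried on the next page view rather than poisoning the cache.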

Code Listings

Displaying code listings on a blog isn’t quite as easy as it sounds. I recently had a discussion with a friend about redesigning his site, and he was considering using Gist, GitHub’s pastebin-like service. Doing so would have given him easy highlighting, but one thing he hadn’t considered was that his blog’s feed would be missing the embedded listings (they come from a third party and wouldn’t actually appear in his feed’s data stream).

Another problem we faced was one of space. While I often try to keep code to a maximum of 80 (or slightly fewer) characters wide, this isn’t always possible. Injecting a line break into the middle of a line of code is risky, especially for things like SSH keys and URLs. This problem is usually solved by setting the content’s CSS to overflow: scroll, but that littered Cameron’s beautiful design with ugly platform-specific scroll bars. “Clever” designers and developers sometimes overcome this by implementing “prettier” scroll bars, but I’m strongly against this behaviour, so I wouldn’t have it on my site.

I’m quite happy with our eventual solution to this problem. Now, when a blog post contains code that extends beyond the normal width of the blog’s text, the right-most part of the text fades to white, and the listing is clickable. Clicking expands all listings on the page to the minimum width that will accommodate all embedded code.

Here's some example code that stretches much wider than this column would normally allow.
Injecting line breaks is dangerous. Here's why: http://example.com/obviously/not/a/sentence/url
Breaking that in the middle is far from ideal.

jQuery saved me hours of development work here, and I couldn’t recommend it more highly. Highlighting is provided by a plugin that I wrote a couple years ago. It uses GeSHi to highlight many languages. I’ve never been very happy with GeSHi’s output, but it’s Good Enough™ until I can find time to implement a better solution that uses the Tokenizer for PHP.

Software

In addition to PHP, this site integrates a custom version of Habari, with our own theme and plugins. One of those plugins allows me to keep my blog posts in HTML files in my Git repository, to make for much easier editing, grepping, etc.

Everything except /blogs was built within the Lithium framework. It handles all of the tedious stuff like controllers, routing, and templates, so I didn’t have to write that code myself (which I find incredibly boring these days).

Hashgrid was invaluable in ensuring that the site aligns with a visual grid (again, thanks to Cameron’s meticulous expertise). Pressing your g key will show the grid he used. I even made a few improvements to how Hashgrid works, which I hope to eventually see in the master branch.

Goodbye, OmniTI

Today is my last day at OmniTI.

From an email I just sent out to my soon-to-be-past colleagues:

“I sincerely wish you continued success as a company, and also as individuals who truly make up a significant portion of the best people in this industry. There are many things that OmniTI does very well, and I won't hesitate to refer business your way when the situation arises.

This past year and a half (or so) has been a bumpy road, but I'm absolutely sure I will look back on my time with OmniTI as a net-positive. Thank you all for supporting me and my team with our sometimes-(absurd | stupid | obvious | amateur | tough) questions and requests.”

The road ahead for OmniTI doesn't look nearly as bumpy, but after a very long period of thought, I finally decided to pursue other options around 6 weeks ago, and will now join the ranks of the funemployed.

Thanks for the opportunities, experience, insight, and tough problems, OmniTI.

2010 will be a great year. I'm already excited about some of the prospects that are in my future.

Bonus points if the title of this post seems familiar. (-:

Horrible Support

I fully acknowledge that this is a rant. If you're not into that sort of thing, scroll on by—nothing to see here. I do have a point at the end of the (long) narrative(s), if you do manage to read the whole thing, though.

Technical customer service sucks—at least for people who have the slightest clue about the technology they're calling about.

Videotron

Today, I spent all afternoon (and this evening, and this will carry into tomorrow, maybe Friday) without Internet service. After lunch, I was sitting at my dining room table with the laptop, where I have a clear view of the telephone/hydro/cable pole across the street. This pole services my house. I glanced up and noticed a Videotron (my cable provider) truck, and its driver up in the bucket, doing something to the wires.

I jokingly wrote this in the work IRC channel:

[14:08] sean: expecting internet to drop any second
[14:09] sean: there's a cable guy on the pole across the street

I was disconnected within seconds. I looked up again, and the truck was pulling away.

Thinking it might just be a one-off desynchronization, I reset my cable modem, and it didn't reconnect.

I hate calling technical support. It's always an absolute last resort. If I get to the point of desperation that I actually need to call tech support, you can be absolutely sure that I've fully exhausted every possible solution on my side. This was obviously related to the careless technician formerly of the pole across my street, and clearly not a problem on my side.

I waded through the phone menus, and got to speak to a support agent. After confirming my super-secret birth date with him to verify my identity, he asked me to tell him what was wrong.

"One of your technicians was on the pole across the street from my house. My connection was working perfectly before he arrived. When he left, my modem wouldn't sync. He obviously broke something, could you please send him back?"

I was asked to reset my modem. I explained that I had already tried this, and it still wouldn't sync. I was then asked to reboot my computer. I explained, calmly, that nothing had changed on my side, and the modem simply wouldn't connect. Nothing on my side of the modem mattered if the modem wouldn't connect. He offered to check the files to see if anyone else in my neighbourhood was complaining of an outage, noticed that there was a technician in the area, and said that he was fixing a problem that had been reported earlier that morning.

I chuckled and told my support agent that he had probably fixed my neighbour's problem, but in the process managed to seemingly knock the pole side of my connection out. The agent told me that since there was a reported outage in the area already, he couldn't send a technician, but there was someone working on it. I didn't believe him that someone was still working on it, since the truck had pulled away. I was right.

It was obvious to me that the agent wouldn't understand Occam's razor, so I didn't bother.

Three hours later (5:30pm), I called my ISP again. I swam through the sea of menus, and spoke with a technician. I had to explain the whole situation again. He asked me to reset my modem and reboot my computer, and wanted to know if my router's configuration had changed. I, again, calmly explained that my router configuration is moot if the modem won't sync. After checking what was presumably a connection diagnostic on his side, and once again verifying that my neighbours weren't having trouble, he had a eureka moment and informed me that there had been a technician in my neighbourhood repairing a problem earlier that day. I reminded him that I had just told him the same thing (slightly less calmly—but not rudely—this time), and he admitted defeat and agreed to send a technician to my place. "He'll be there by 8pm."

I fully expected to have to call Videotron back at 8:05pm, and I was right. So, I tried resetting my modem one last time, then called support. The menus were getting easier. "Please choose from the 7 follo..." *keypad 2* I got a technician, confirmed my birthdate before he even asked, and started explaining that the technician hadn't shown up. He was confused. I explained the whole problem, and concluded by repeating that the technician hadn't shown up. "Oh, I see that a technician is supposed to visit you." Sighing, I said, "Yes, that's why I'm calling you. He was supposed to be here by 8. It's after 8 now."

The agent said he'd have to call the technician's dispatcher to see what was up. Hold music. (Aside: the hold music gets interrupted by messages that explain how to fix your own problems by resetting your modem, but they warn not to do this if you're using Videotron's VOIP service (-: ) The agent came back and started grilling me: I had missed the technician because I wasn't home when he arrived. I was home the whole time. In fact, I spent those 2.5 hours checking the yard every time I heard a car drive by. The agent insisted that the technician had been here and I hadn't. I insisted that I had been here, and that the technician simply hadn't wanted to come out after 8pm on a cold, wet autumn night, so he claimed I wasn't home. Stalemate.

The agent had an idea. Technicians are supposed to leave cards when they visit and the customer isn't home to receive them. I checked both doors. No card. Nothing. This somehow convinced the agent that the technician hadn't been here. He offered to schedule a new appointment for tomorrow. I reminded him that this wasn't my fault in any way, and that they should just come fix it now. That wasn't possible. I could schedule one of three blocks the next day: 7am-12pm, 12pm-5pm, or 5pm-8pm. None of these work for me. I have to take my daughter to school in the morning; I have tomorrow off, with errands that need to be run, so the five-hour afternoon block is out; and I'll be out in the evening. Friday is the same situation. They require me to be home for FIVE hours just so they can fix a problem that I didn't cause.

He said it's impossible for them to fix the problem if I'm not going to be home. I know this isn't true, but I'm sure it's a policy on their side, so I didn't fight with the poor agent too much. "He was perfectly capable of breaking my service without coming into my house."

So now I'm in Internet limbo. I don't know when (or if) it will be fixed. I'm basically screwed until I can find a five-hour window when I'll be home and when I don't need to be online. Normally, I'd just tell them to fix it or cancel my account, but these guys are the least-worst choice for broadband in Montreal. The only other option is DSL from Bell. (Not quite true: there are other options, like 3G access from Rogers (another evil), satellite (impossible latency), and resellers that use Bell's and Videotron's infrastructure; none of these are actually viable.)

Bell

Before getting Videotron at the house, I had DSL from Bell. I canceled them due to their incompetence.

One day, after a few months of good service, I started getting >50% packet loss. I checked everything on my side. It was fine. This was a problem with my DSL connection itself. So, I gave in and called tech support.

The usual annoying questions ensued. You'd think that if I said "I'm measuring 53% packet loss" it would automatically qualify me for escalation beyond the "is your computer on?" type of questions. Not so.
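For what it's worth, the loss figure itself is trivial to get: a plain `ping -c N host` run prints it in its summary line. A minimal sketch of pulling the number out (the sample summary line below is illustrative, not a real capture from my connection):

```python
import re

# Unix `ping -c 100 host` ends with a summary line like this one.
# This sample is a made-up illustration, not captured output.
summary = "100 packets transmitted, 47 received, 53% packet loss, time 99123ms"

# Extract the loss percentage from the summary.
match = re.search(r"([\d.]+)% packet loss", summary)
loss = float(match.group(1))
print(loss)  # → 53.0
```

Anything consistently above a few percent on a wired connection points at the line, not the computer.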

I rebooted. I bypassed the router. I installed their stupid PPPoE software (which was not necessary, but I obliged anyway). Magically, this didn't fix my packet loss problem. The agent acknowledged that they weren't getting a very good signal to my DSL modem. Then he asked me a stupid question: "How long is the telephone cable that connects your modem to the wall?" I replied truthfully: "I don't know, offhand. I guess eight feet." Little did I know that this was a trick question. The correct answer to cable-length queries was about to be revealed: six feet. "What?" "It needs to be six feet long." "Uh. No. It doesn't." "Yes, sir, with a longer cable, you will introduce noise, and you'll get packet loss." This was humorous but also frustrating. I asked the agent if the electrons magically changed into some sort of noise-proof signal upon entering the wall, as it was the same type of cable on both sides of the socket. He wasn't amused.

"Hold on a sec. *pause* OK. Now it's six feet." He was still unamused. "Sir, you can't just tell me it's six feet." "Oh, no. I wouldn't do that. It's six feet now." If you make up lies about things like this, it's fair for me to play your game.

He finally gave in and agreed to send a technician to fix the problem. The first appointment they had was four days later. Yes. Four days. I insisted that there must be an earlier appointment. They disagreed. I pressed anyway: "Do you know that I can sign up for service with Videotron faster than you can get a technician out here to solve your DSL problem?" They held their ground, so I signed up with Videotron and canceled. Videotron had worked well up until today.

I hate having to manipulate tech support to solve a real problem, though. This reminds me of how I've had to deal with Dell in the past.

Dell

Around five years ago, I had a Dell laptop. After a few months of use, the power connector on the motherboard came loose, and it would only charge sporadically. We had purchased the super-mega-extended-warranty that Dell offers, so when I called tech support (obviously, this was not a problem I could solve on my own, or I would have), and convinced them that the hardware needed to be fixed, they sent a technician to my office the next morning (super warranty to the rescue).

The technician replaced the motherboard on my laptop. When he put everything back together, I gave it a quick test and was satisfied, so I thanked him and signed off on his work. Within an hour or so, I noticed that my computer was underperforming. Everything was slow.

To turn my subjective observation into objective evidence, I found some online benchmarks for my laptop's model and ran the same benchmarks locally. I was right: it was underperforming by around 50%.
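The check itself is nothing fancy: time a fixed, CPU-bound workload and compare it against a published figure for the same hardware. A rough sketch of the idea; the reference time here is a hypothetical stand-in, not a real benchmark result:

```python
import timeit

# Hypothetical time for this workload on a healthy machine of the same
# model; in practice this would come from a published benchmark.
REFERENCE_SECONDS = 0.5

def workload():
    # Any deterministic, CPU-bound task works as a benchmark body.
    return sum(i * i for i in range(200_000))

# Run the workload ten times and compare the total against the reference.
elapsed = timeit.timeit(workload, number=10)
print(f"local: {elapsed:.2f}s vs reference: {REFERENCE_SECONDS:.2f}s "
      f"({elapsed / REFERENCE_SECONDS:.1f}x)")
```

A healthy machine should land near 1.0x; mine was coming in at roughly double the reference times.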

I called tech support again, and explained the whole situation: a technician had replaced my motherboard that day, and afterwards, my computer was performing much worse than before. Obviously something was wrong with the new motherboard. "Obviously" has a much different meaning to me than to Dell's technical support. I was forced to go through a procedure of rebooting multiple times, re-seating the RAM, resetting my BIOS, and explaining that I couldn't boot Windows into safe mode because I wasn't running Windows (this further confused the agent, and almost jeopardized my ability to get support at all). I was around 45 minutes into the call at this point, and I had no way of convincing the agent that the motherboard replacement was the obvious culprit. Her flow chart didn't include an actual solution to my problem, and every branch of her problem-solving script ended in fruitless frustration.

Finally, she asked me to run the full Dell diagnostic tests. These came on a CD with the laptop, and I'd run them once before, just to see what they did. It took several hours to run the full suite. She was ready to be through with me, rescued by an impossibly long procedure, but I wasn't ready to give up that easily. So, I dug out the disc and asked her to "please hold." At this point, I was quite bored and had to amuse myself, so I'd pick up the phone every ten minutes or so and ask her entertaining, yet covertly mean, questions about her job. "Out of curiosity, is your performance judged by your average call duration?" "Will this 90-minute call negatively affect you?"

Around two hours in, I decided to give up. It was obvious that she didn't have a script that would allow her to turn my problems into a new visit from their technicians, no matter how many times I insisted that the motherboard was to blame. I had places to be, so I thanked her for her help and hung up without any sort of solution.

The next morning, I desperately called the technician directly. I had his number because Dell outsources on-site work to third parties, and I had to call him to schedule the first meeting. I explained the whole situation, from the slowness to my useless call with Dell tech support. He was sympathetic, but insisted that there was no way he could help without a work order. I understood, but asked him how he might suggest I actually solve this problem. "Well, if your computer has no power at all, then they'd have to replace the motherboard again." A lightbulb turned on. "I understand! Thank you!"

So, I called Dell tech support again, and played dumb. When asked to describe the problem, I said "my computer won't turn on." "It says in your file that your computer is running slowly..." "Yes, that was yesterday. Today, it just doesn't work." After a few minor exercises involving removal and replacement (or so they thought) of the battery, they broke the bad news: "I'm sorry sir, but we're going to have to replace your motherboard." I feigned sadness, got a new work order number, and was told a technician would call.

The technician replaced my motherboard that afternoon, and everything returned to normal. I even had a working power connector after the ordeal.

Apple

The only time I can remember actually having good technical support from any company with more than 100 employees is from Apple.

This might read like a fanboy remark, but it's true. The few times I've had to visit the Apple Geniuses at their stores, they've actually listened to my problem, acknowledged that I'd probably already tried the obvious solutions, and treated me like an actual person, not just someone they wanted out of their queue as quickly as possible to improve their call-time averages.

I've been genuinely impressed with them.

I wish more companies could be like Apple in this regard. It would have been trivial for Videotron and Dell to acknowledge that—in all likelihood—the problem was caused by obvious circumstances. It's truly nice to not be asked to "reboot, and call back" when talking with technicians who actually make an effort to understand and solve the problem.

In the meantime, I'll keep "borrowing" my neighbour's open wifi. Thanks, "default"!