To Git or not to Git

You might notice from my past few posts that I’m basically going through my entire stack of tools and re-evaluating everything. This time it’s version control.

A little history

I’ve mostly used SVN, and before that, CVS. I’ve tinkered with some of the more heavyweight ones like ClearCase, TrueChange, and Visual SourceSafe as part of consulting gigs, but only enough to know that they were skills unto themselves, and ones I didn’t especially want to let into my brain. My personal repo has been in SVN since I finally switched over from CVS a few years ago.

Why SVN?

The short answer is, because it’s easy. The longer answer is that it’s easy to set up, it’s fairly hard to break, and it has a decent Eclipse plugin. You might notice that I didn’t mention anything about branches, or rollbacks, or speed, or centralized vs. distributed. Those things don’t really matter to me if the first three requirements aren’t satisfied.

Branches are the devil

I don’t hate branches because they were a legendary nightmare in CVS. I don’t hate branches because svn merge rarely works. I hate branches because of the mental cost they inflict on a team.

Having a team work in multiple branches is, as far as I’ve ever seen, a sign that your team is too big, your project is too monolithic, or your management and oversight are lacking.

There are cases where branches don’t impose such a cost, however. If there are no plans to ever merge a branch back to trunk, it’s simply an experimental offshoot from which a few snippets might be pulled into trunk/master, and that’s fine.

If everyone is switching to a new branch, that’s also fine. At one point we had to roll back to code from over a month prior, and weren’t sure if we were ever going to get back to trunk. Everyone switched to the new branch, and luckily everyone was able to switch back to trunk a few months later. It took days to merge back, and that wasn’t SVN’s fault; it was just the human time required to pull two very different versions together safely and not leave landmines all over the place.

Not on my resume

I don’t put any VCS software on my resume, because, while it’s absolutely important to use one, I don’t think it’s that important which one I know or use (it is a good interview question, though). They aren’t really that hard to learn, and since everyone uses them slightly differently there’s no avoiding some ramp-up time. If an employer ever has me and another candidate so close that our VCS experience is the deciding factor, please, take the other person.

Dirty little secret

I don’t actually have the command-line version of svn or cvs installed on any of my workstations. Nor do I have standalone GUIs or shell integration like TortoiseSVN. I know the command line, and use it on servers, but I do all my actual development with Eclipse’s integrated client. I’ve even used Eclipse to manage svn projects that were Flash or C. I just find the command line restricting and linear for what is really a very non-linear task. The Eclipse Subversion plugin took years to go from bad to passable, and it’s still not as good as the CVS version, which is one reason I never grew too attached to SVN.

Why? I need to see everything that’s changed, everything that’s dirty. I diff every file individually before I commit. I often find changes I didn’t really need to make, and nuke them. Sometimes I find that I didn’t account for a situation the old code did. Many times, just looking at my code this way shifts my thinking enough that I come up with a better way of doing it. I simply don’t have this visibility amidst the >>> and <<< of a command-line diff, because it’s not my native development environment.

Enter git

So, even though I’d be happy to continue using SVN, I need to see what all this git hubbub is about. It’s been around for over 5 years now, so clearly it’s not a fad. It’s also been vouched for by enough voices I respect that it deserves a shot. There are also Mercurial and Bazaar, but I haven’t seen nearly the same level of buy-in from trusted people for those.

My sideline view is that git is favored by the Python people and the “ruby taliban”. Mercurial seems to be favored by the Microsoft and enterprise crowd, and Bazaar is somewhere out back playing with package maintainers. Java is still mostly in SVN land, probably because it’s more mature, more corporate, and slower moving. 5 years isn’t a long time in Java years these days, so I’d bet that a high percentage of the projects people are still working on date from when git was just Linus stomping his feet. The Spring/JBoss people seem to have gone the Mercurial route, while Eclipse is going git.

Git also has GitHub, which is used by some people I know, while I don’t know anyone personally who is using Mercurial’s equivalent, Bitbucket (or even using Mercurial, for that matter). So I ultimately went with what Eclipse and my friends were using over the other options, and started with git. From what I understand the differences are slight in the early stages anyway; this was really more a matter of trying distributed vs. centralized version control.

First steps

I started off with GitHub’s helpful handholding, which included installing msysgit. I imported my projects and used it in earnest for a few days. Once I was confident in my ability to actually get stuff up there, I dug a little deeper.

I read the Pro Git book, which I need to call special attention to because it’s really, really good. It’s short and concise, has diagrams where you need diagrams, and ranks high in terms of how computer books should be written. If you don’t know anything about git, spend a night reading through it, and you will know plenty.

What about the secret?

So yeah, I was using the command line. Lean in closer so I can tell you why…BECAUSE THE GUIS ARE TERRIBLE. They’re not just ugly, they’re interface poisoning at a master level; imagine what an even more complicated Bugzilla would look like. And yes, I say “they”, because there is more than one, and they’re all basically someone jamming command-line output into various GUI toolkits. Part of this is git’s design, but I know that someone will figure this out.

Git’s design allows for a huge number of workflow permutations, which means there are a number of extra steps compared to something like Subversion. On the command line, this doesn’t seem to hurt that much (in comparison). But GUIs don’t deal with situations like this very well. They can either be helpful and guide you down a path, or play dumb and wait for you to hold their hand. All of the git GUIs I’ve seen so far do the latter.

Am I being a stick in the mud and saying that something as marvelous as git should be constrained to the simplicity of dumb old svn? Actually, yes. I should be able to edit some files, see a list of those files, diff them against the “real” version of the file (as in the one everyone else sees), and commit, with a message. Then go home. I don’t care about SHA-1 hashes, because I don’t remember them; I only need them when you need to tell me that two things are different. I don’t care about branches other than knowing which one I’m in (we’ll get to this next). I don’t want to be bothered with any of this fancy information if nothing is broken (or going to break if I continue).

This isn’t actually a problem of git. This is a problem of people being indecisive when it comes to UIs. If you do everything, you fail. If you do nothing, you fail. If you do any subset of everything, you fail for some people. That’s OK, don’t worry, you can optimize as you go. Your first priority should not be exposing the power of git, it should be letting me put my code on the server so I can go home. Let me drop into all this fancy stuff with local branches and pushing tags and rebasing and such on a case-by-case basis, when I need it, and when I’m good at it.

What about the branches?

Git does branches right. I could go into more detail, but Pro Git does it so well I won’t bother, so just go read that. What git does not do very well, and I’m not sure if anything can do as long as humans are involved, is mitigate the mental cost of a distributed branch. It does make merging much more feasible, and allows for a larger number of cases where branches are not going to cost much, but they still need to be used with discipline and restraint.

The new idea that git adds is the local branch. I haven’t had a chance to use this much yet, but it’s the feature that may ultimately win me over. I can look back and ask “when have I ever needed a local branch?” and the answer would be “a few times, but not often”. But if I ask “when would I have benefited from a local branch?” the answer would be “hmm, I don’t know, but probably more often than I needed one”.

The example of the hotfix scenario (where you need to fix/test something from last week’s release and trunk/master isn’t ready) isn’t very compelling to me as an SVN user. It’s easy to make an SVN branch for something like this: I svn copy and switch to it if my local copy is clean. If not, I can check out the project again, or if it’s a big one, I copy it over and svn switch it. Not as easy as git, but then again, I don’t generally have to make a lot of hotfixes either.

The issue scenario (work on one issue per branch, merge when/if complete) is more compelling. I’d like to say that it isn’t and that I try to start and finish one issue at a time, but obviously that doesn’t happen often enough. I like the fact that it’s so cheap to make a branch that I might as well just do it all the time. If I didn’t end up needing it, no harm done. If I did end up needing it, because some other issue suddenly got more important and the one I’m working on needs to chill, then I’m glad it’s there.

The real win here is that nobody has to know about my branch, which means they never have to wonder what’s in it, or if it’s up to date. This means there is no cost to my team because I have a branch that is 2 weeks out of date. There is cost to me, but no more than having multiple versions of the project checked out, or a set of patch files sitting there waiting for me to get back to them.

One more thing

The fact that every developer has a full copy of the repository on their computer, basically for free? That’s really nice. Sure, you do backups and the chances of your computer being the last one on Earth with a copy of the repo is slim, but the peace of mind is undeniable.

The Verdict

The only reason this is really a decision at all is that git is harder to use for normal stuff. The command line can be scripted, so if someone got the GUI to the point where you start with an SVN-style workflow and deviate as needed, there really would be no argument for using SVN, from what I can see.

My decision is that git is the winner here, despite that massive failing, because I have faith that a combination of two things will happen: I will learn the GUIs better, because it’s worth my time to do so, and someone will eventually figure out how to make them smarter and simpler.

I could not fault someone for sticking with SVN, even for a new project, because you can always import it later, but I will be starting new projects in git.

Why don’t websites have credits?

Engineers of any discipline are largely an anonymous bunch. You don’t know who designed the fuel pump in your car; I’d even wager it would be extremely difficult for you to find out if you wanted to. You don’t know who wrote the code for the OS X Dock, or the Windows Start bar, or the Like button on Facebook. These people made decisions that affect you deeply every day, and you have no idea who they are.

The most interesting part of this is that those people are OK with it. If you ask them (myself included) they will tell you that it doesn’t matter, that what really matters is the quality of the work and the enjoyment you had doing it. Unfortunately, I think we’re wrong.

Should they?

I can’t seem to come up with a good framework for figuring out who wants credit, never mind who deserves it. If you so much as make a photocopy during the production of a movie, you’re probably in the credits with some highfalutin title like “First deputy assistant duplication specialist”. Music credits are tied to royalties and managed very closely. Most authors wouldn’t think of publishing something anonymously, nor would painters or sculptors. Artists always sign their work.

This is not even strictly a software issue. Video games list credits, often in the box and at the end of the game, and they even have an IMDb-like site. Nor is it an “arts & entertainment” issue; any credible scientific paper will cite other works and acknowledge contributions. Patents have names on them, even when assigned to a company.

A few software packages have listed credits. If I remember correctly, Microsoft did it on old versions of Word and Excel, and Adobe had it on old versions of Photoshop and Illustrator. I’m curious why those were removed, or at least hidden. “The Social Network” had something about Saverin being removed and re-added to the “masthead” of Facebook (although I don’t know what or where that is).

So it would seem that we might be in the minority here, perhaps due to convention rather than any specific reason. And if there’s one thing that bugs an engineer, it’s deviating from standards with no good reason.

So let’s do it.

Why do it?

  • Pride in your work – Sure, there is some pride in doing a good job anonymously, but wouldn’t you be just a little more motivated or happy with your name on it?
  • Being a stakeholder – We’ve all done projects we didn’t believe in, and consoled ourselves with the fact that “it’s not my project”. Well, now it is.
  • Reputation – We’ve got our resumes, but credits will verify them.
  • Honesty/Transparency – There is no good reason to withhold this information, so it should be out there.
  • All that money they spent on school – Show your parents your name on a website and watch them smile.

So who gets listed?

I think the short answer here is, everyone. Movies do it, why not websites? It could be just a big list of names, or something more detailed with contributions, dates, whatever makes sense. Let’s just start throwing some names up there, and let the de facto standards evolve on their own.

If you know of any major sites that do this well, put it in the comments. Similarly, if you can think of a good reason why this shouldn’t happen, I’d love to hear about it.

Linux for Desktop, finally?

I love Linux for servers, and I like the idea of using an open source desktop, but it’s never worked out between us. Once a year or so, I grab the friendliest desktop distro and play with it until it breaks or I find out that some key piece of software is missing or too many versions behind.

I have an aggressive but reasonable time limit for tinkering before I have to give up. If I cannot get up and running in 4 hours or so, it’s back to Windows. I just don’t have the patience to hack undocumented config files to do stuff that “just works” in a commercial OS.

I’ve tried various combinations of Red Hat, SUSE, and Debian with GNOME, KDE, even plain X back in the day. They all failed, usually miserably, often long before the 4-hour time limit.

I should state that this is not because it is bad software; the people writing it are doing good work. It’s just been a little too hot-rod/DIY for my taste.

This year, the attempt had a bit of a wrinkle, in favor of the candidate. As I’ve recently gone freelance, I’m trying to use a virtual machine per client. This has a number of benefits that I will get into in a future blog post when I’ve had more time to use it. This means that I’m not looking at a Linux desktop as a full-on OS replacement, but as a guest OS for my development work.

So I don’t have to complain about how bad GIMP is, or even bother setting up IM or email clients, playing music, or connecting my phone. I will run all of those in the host OS, which in this case is Win 7 Pro.

So last night, I set up Ubuntu 10.10 in VMware Player. The “easy install” was, in fact, easy. It just booted up, at the right resolution, without any warnings. JDK 6 was already installed. I found Eclipse (3.5, not 3.6, but doable) through the Ubuntu Software Center, as well as MySQL Query Browser and Chromium. apt-get install mysql-server and … everything still works. Install Subclipse and m2eclipse and we’re basically done.

So I’ve got a complete dev environment up and running, and I haven’t had to edit a single config file*, or even reboot the VM. Kudos to the Ubuntu team!

Of course, in true open source fashion, now that all the major bugs have apparently been ironed out, they’re dropping GNOME as the default desktop in favor of shiny new Unity, so who knows what the future holds…

*I did have to edit a VMware config file to enable the back/forward buttons on my mouse, but I don’t think this has anything to do with Ubuntu. Seriously, it’s 2011 and this isn’t the default, or even a checkbox in the settings screen? For those who need it, put:

mouse.vusb.enable = "TRUE"

into your .vmx file and bounce the VM.

Logging Like it’s 2002

I’ve been going through my old code, looking for stuff that might be worth sharing. At the same time, I’ve been maven-izing my builds, and decided I should revisit each dependency, as some of this code is so old the dependencies are very out of date and/or included in the JDK now. Which brings me to log4j.

I’ve literally used log4j on everything I can remember doing in Java, but not anymore. Don’t get me wrong, I have no problem with it, and may continue to use it in my applications (if I don’t like logback), but I won’t be including it in any libraries anymore. After 8 years, I’ve finally adopted JUL. Here are the options and why I chose JUL:

JUL (java.util.logging)

Pros

I’ll start with the victor, because the reason is the simplest. No dependency or version issues, one less thing to download, guaranteed to be there. There is also plenty of code out there to use one of the other frameworks to do your actual logging, so the config isn’t really a burden on the developers using the library.
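
To make this concrete, here’s a minimal sketch of what library code looks like when it logs through JUL (the class and names are hypothetical); the point is that the import list is the whole dependency story:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    // A hypothetical library class: the only logging "dependency" is the JDK.
    public class CsvSplitter {

        private static final Logger log =
                Logger.getLogger(CsvSplitter.class.getName());

        public String[] split(String line) {
            // Guard message construction behind a level check so that
            // disabled logging costs almost nothing.
            if (log.isLoggable(Level.FINE)) {
                log.fine("splitting line of length " + line.length());
            }
            return line.split(",");
        }
    }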

Cons

Not sure yet. It’s not widely used, but I think that’s because there are far more Java frameworks/applications out there than libraries. There is also a performance issue with SLF4J when you have JUL logging set to a low level, but you shouldn’t need to run (presumably stable) libraries in debug or trace when performance is an issue (e.g. in production), only when you’re trying to debug something. JUL’s actual output isn’t really relevant here, as I think most applications will just be routing it through their own logging framework.
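
That routing is typically a line or two at application startup. A minimal sketch using the jul-to-slf4j bridge (this is my reading of its docs, not code from a real app of mine):

    import org.slf4j.bridge.SLF4JBridgeHandler;

    public class Main {
        public static void main(String[] args) {
            // Attach a handler to JUL's root logger that forwards every
            // java.util.logging record to SLF4J, so library output lands in
            // whatever backend the application actually uses. Each record
            // still pays JUL's dispatch cost, which is the performance issue
            // mentioned above when JUL levels are set low.
            SLF4JBridgeHandler.install();
            // ... start the application ...
        }
    }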

log4j

Pros

Works great, very stable, hasn’t changed in long enough that you only have version issues when you’re dealing with REALLY old code.

Cons

Fewer people are using it, and more projects are moving to SLF4J/Logback. The newer frameworks do add some nice features, and log4j is basically abandoned, so I think it’s time to stop doing anything new with it.

commons logging

Pros

None?

Cons

I’ve always been against commons logging, because 99% of the time it was just used to wrap log4j. The logic was that you could plug custom logging into it, but you can do that with log4j already, so it’s basically an abstraction of an extensible framework, with zero added value. Actually, you have less value, because you lose things like MDC. At this point it’s like a virus that just won’t go away, and always seems to end up in the classpath somehow. As far as I’m concerned, it’s a completely superseded library.

logback

Pros

From what I can gather, logback really is (as claimed on their website) the continuation of log4j. Not having used it, I can only assume this is a good thing, and that it just adds new features like parameterization. I’m going to try logback in my next application, and since logback includes slf4j, I will access my library logging that way.
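
Parameterization is worth a quick illustration. Through the slf4j API (which logback sits behind), the message is only assembled if the level is enabled. A hypothetical class, not code from their docs:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderService {

        private static final Logger log = LoggerFactory.getLogger(OrderService.class);

        public void place(String orderId, int itemCount) {
            // The {} placeholders are only rendered when DEBUG is enabled,
            // so there's no string concatenation cost on the normal path.
            log.debug("placing order {} with {} items", orderId, itemCount);
        }
    }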

Cons

It’s not really in wide usage yet, which means that a library requiring it is going to add an extra dependency.

slf4j

Pros

If logback is the modern version of log4j, slf4j is the supposedly useful version of commons logging, and the supposedly improved version of JUL. It’s not a logger per se, it’s just an API/facade. It has the ability to combine multiple logging APIs and legacy frameworks into one stream, which is why it seems to be getting traction in complex applications.

Cons

I’ve had some serious versioning issues with slf4j, due to methods being removed or changed: older code throws errors when you use a newer version, which forces you onto the old version and introduces the chance of strange errors in code expecting the newer one. For this reason, I don’t feel comfortable specifying any version of slf4j, and I will leave it to the user to add it if they wish.

Conclusion

So JUL seems like the best choice for a stable, single-purpose library, as it’s the least imposing on whatever uses it. It should be noted that I haven’t actually used JUL in a real app yet, so perhaps I will find out that there’s an actual reason for its lack of popularity. If that’s the case, I will likely use slf4j, and try to find out which methods cause issues so I can avoid them, and not be the person someone else curses for requiring it.

Scripting Language

On a mailing list I’m on, a few very smart, very experienced programmers were discussing the term “scripting language”. I had nothing of non-semantic value to add to the conversation, but I’ve heard this debate enough times that I figured I’d put my stock response here.

To the question “is X a scripting language?” the answer is “yes”. If the person is unhappy with this answer, the answer is “no”. At this point I ask “What will the correct answer to this question get you?”, and things unravel from there.

All rules someone can come up with to determine if something is a “scripting language” will be violated by at least one language they consider to be one. I assume there’s some fancy logician term for this, I’ll call it a paradoxical assignment until someone corrects me.

The term is vague, and the assignment of the term is typically in place of a more meaningful assessment such as “it’s not compiled”, or “it’s short”, so when someone asks this question, just dig a little deeper, and if someone says “just use a scripting language”, use Perl.

The 3 Ingredients Necessary to Make a Really Good Developer

I’ve been doing development long enough that I can now look back and have some perspective on the art/craft/profession. I’ve been asked many times “how do you become a developer?” and I now have a decent partial answer. The ingredients are not being “super smart” or “good with computers” or things like that; I think those are artifacts of the attributes below. I’d also guess that these apply to engineering in general, but I’ll limit myself to my own turf.

Curiosity

Good developers have a compulsion to understand how things work. They open files with text editors to see if it’s just XML, or a zip file with a different extension. They run benchmarks against things that don’t seem to matter. They add query parameters like debug=true to websites. They try to break stuff. And not just software: they probably know how an internal combustion engine or an air conditioner works. They can probably tell you a bit about how the minimum wage affects inflation. My grandmother used to give me old radios and gadgets strictly so I could disassemble them.

This attribute is probably the one that separates the wheat from the chaff the most. There are lots of people who can code, or manage a system, but the ones that excel will need to understand how things work, and know that every juicy answer yields even more delicious questions.

Focus

The ability/requirement to focus is the subject of many other blog posts, but I view focus in a slightly different way. Focus is not eliminating distractions or even maintaining “flow”; focus is the ability to keep a problem in your head until you’ve solved it. Distractions can hamper this, as can multitasking or other external factors, but good developers can work on something, go to lunch, or go home for the evening, and pick up right where they left off.

Hard Work/Genuine Interest

I think there is a certain amount of innate aptitude, but I don’t think development is an exception to the 10,000-hour rule Malcolm Gladwell popularized. Luckily, it’s a trade where we can log those thousands of hours at an early age and make it look like we’re goofing off. I started with LOGO in the second or third grade, moved on to translating my piano music into BASIC, learning that a coda is just like a GOTO. Then on to writing databases to track fantasy baseball stats in MS Works, and so on. Sure, we spent many college nights taking over IRC channels with bots, and crashing MUDs with scripts and floods, and that may have looked like simple nerd mayhem, but those experiences have tremendous value in the “real world”.

You can know all the buzzwords and put on a good show, but you can’t really fake the level of interest that side projects demonstrate. There are plenty of good-enough developers out there who punch in and out, and there are even jobs where you do cool enough stuff that you don’t feel compelled to break out of the rut (we try to be this way), but if you still need to hack on your own, you’ve got potential.

Contrary Opinion: Code coverage by checked exceptions

This is the first in what will hopefully be a series of statements that I may not entirely believe, but am putting out there to see if they are viable…

Conventional Wisdom: Unit tests are great, you should have lots of them. More code coverage is better.

Conventional Sentiment: Checked exceptions are a hassle, more trouble than they’re worth, promote sloppy coding, etc.

Contrary Opinion: Checked exceptions are a better way to address code coverage than unit tests.

On a scale of 1-10 of using and advocating for checked exceptions, I’m probably an 11. I completely disagree with pretty much all of the conventional complaints against them. They do not promote spaghetti code; they actually clean up your normal logic and neatly compartmentalize your error handling. They do not promote sloppy behaviors like exception swallowing; that’s entirely the programmer’s fault. Adding or removing exceptions breaks client code? Yes, it does. Why is this a problem? You added additional error conditions, meaning you changed your contract with the clients, and they should be revisited. If you didn’t have this obvious way to signal a change, chances are the client would handle things incorrectly. Even some of the major voices in computer science have fallen out of love with them, but usually the reasoning is based on people using them wrong (or them being too hard to use right).

On a scale of 1-10 of using and advocating for unit tests, I’m probably a 2. They absolutely have their uses. A straight-up algorithm, especially things like math and parsing, should be unit tested with various input values to assure it returns the proper results. However, in most modern business/consumer software, these represent a very small portion of your code. There are far more lines of code dealing with things like authentication, user inputs, file loading, database and network operations, etc. These are complex activities. Even simple CRUD applications can end up invoking hundreds of functions across dozens of libraries for every operation.
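
For that narrow slice of code, a test is cheap and genuinely useful. A sketch of the kind of thing I mean (JUnit 4, with a hypothetical function):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class SlugifyTest {

        // A pure input/output function: the kind of straight-up algorithm
        // that actually benefits from unit tests. Hypothetical example.
        static String slugify(String title) {
            return title.trim().toLowerCase()
                        .replaceAll("[^a-z0-9]+", "-")
                        .replaceAll("(^-|-$)", "");
        }

        @Test
        public void collapsesPunctuationAndWhitespace() {
            assertEquals("hello-world", slugify("  Hello, World! "));
        }

        @Test
        public void leavesCleanInputAlone() {
            assertEquals("git-vs-svn", slugify("git-vs-svn"));
        }
    }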

The crux of my argument is that if you use exceptions properly, you don’t need to test whether an operation completed properly. If the operation completes, it did so properly; all other conditions would fail to complete. Since you want to be using exceptions properly anyway for cleaner code, unit testing this code is redundant, and a waste of time and energy to write and maintain. Not only that, but exceptions give you code coverage at compile time AND error handling at run time, which unit tests cannot do.
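
To illustrate what I mean by the contract, here’s a minimal sketch (a hypothetical loader, not from any real project). If load() returns, the config was read and validated; every failure is a typed path the compiler forces callers to acknowledge, which is exactly the coverage a unit test would otherwise have to simulate:

    import java.io.FileReader;
    import java.io.IOException;
    import java.util.Properties;

    public class ConfigLoader {

        /** Thrown when the file is readable but isn't a valid config. */
        public static class ConfigFormatException extends Exception {
            public ConfigFormatException(String message) {
                super(message);
            }
        }

        // The throws clause is the contract: callers cannot even compile
        // without deciding what to do about these two failure modes.
        public Properties load(String path) throws IOException, ConfigFormatException {
            Properties props = new Properties();
            FileReader in = new FileReader(path);
            try {
                props.load(in);
            } finally {
                in.close();
            }
            if (!props.containsKey("version")) {
                throw new ConfigFormatException("missing required 'version' key");
            }
            return props;
        }
    }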

If you disagree, please tell me why!

Fire the user experience designer

This post makes a case for having a specialized “user experience designer”. The author argues that usability and interaction design is too complicated to be handled by someone responsible for other tasks. This is false.

If you are on a team responsible for a website or something similar, EVERYONE on your team should understand usability and interaction design. It’s not a special skill, it’s a core competency, like communication skills and ethics. The real experts out there are rare, and I mean “you’ll probably never even meet one” rare. Most people who specialize in it are just washed-up designers or coders.

You need your designers thinking about how people will interact with your program, or you’re going to end up with brochureware. You need your programmers thinking about it, or you’re going to end up with a clumsy UI. You need your QA people thinking about it, or you’re going to end up with spotty test plans. You need your managers thinking about it to understand what’s important. You need your salespeople thinking about it to compare against your hapless competition.

Having one person responsible for it is a bad idea because not only are they probably going to suck at it, it’s going to make everyone else lazy.

Palm Pre

I was impressed with the iPhone when it came out, but not enough to warrant the expense of the device, the overpriced service plan, and dealing with switching carriers (especially to AT&T). I had the Sprint PPC-6700 at the time, which was decent and got me hooked on having a mobile calendar and contact database without carrying a PDA. When Palm announced the Pre, I decided I would wait for it, and if it wasn’t up to snuff, I’d cave and buy an iPhone.

Luckily, the Pre is a fantastic device. I haven’t spent enough time with an iPhone to compare them honestly, but it seems like a much more polished experience. It automatically syncs my Google calendar, my girlfriend’s calendar, my work calendar, and the Red Sox schedule. It easily unifies contacts from Facebook, Google, AIM, etc. I can send someone an IM, they can respond by SMS, and it all goes into a common stream. It comes with both Sprint GPS and Google Maps.

Hardware-wise, it’s a little thicker, and a little shorter than an iPhone, more along the lines of a conventional phone. The screen is fantastic, it works in the sun and is crystal clear. The touch screen is well designed even for my round fingers, it seems to intuitively know what I meant to click on. The keyboard is small, but effective.

Minor issues so far: a bug in the instant messaging client that chews through an entire battery in 6-8 hours. I disabled it, and the phone now goes 2-3 days with light internet usage. Supposedly this will be fixed via software update. Charging uses a tiny USB connector with a cover that doesn’t feel very durable once opened (but very durable when closed). I didn’t get a Touchstone (the new induction charger) yet, but I probably will, which should remedy that. The fact that it doesn’t do video is not really an issue for me; my other phone did it, and I think I used it once in 2.5 years.

My major problem right now is that, for some reason, Palm has not released the SDK to the public yet, and has not accepted my application. This means they’re missing really useful apps like an RSS reader, along with the hundreds of other standard apps out there. There are only 30 applications to download right now; the iPhone has 50,000. Even if 99% of those are total crap, that’s still a lot more than Palm is offering. They really need to open this up soon, while they’ve still got some shininess.

Neil Poulton is a Jackass

I recently bought a 1TB LaCie hard drive that was “designed” by Neil Poulton. I imagine the design session went something like this:

LaCie: “Neil, we need you to design a hard drive for us”
Neil: “Excellent! I’ve got a brilliant idea!”
LaCie: “Tell us!”
Neil: “Well, I will tell you, but you have to put my name all over the box, and not put anyone else’s name on it. Especially the people who are actually going to work late nights and weekends to do the engineering required to make this piece of commodity hardware fit into my stunningly brilliant design”
LaCie: “Sold!”
Neil: “OK, here it is, picture in your mind a shiny black rectangle”
LaCie: “I love where this is going!”
Neil: “Now picture in your mind a blue light”
LaCie: “Ooooh, sexy”
Neil: “Excellent, I’ll send over the invoice”
LaCie: “…”