affinity.txt

SEO sucks. It’s a fun little game to play for a while, but at the end of the day almost everyone loses. The searchers lose because they can’t find the best stuff any more. The search engines lose because their searchers see worse results and are less happy. Legitimate creators and businesses lose because they have traffic siphoned off by spammers and scrapers. They’re forced to waste brainpower and money on this ridiculous game that, from where I’m standing, they’re losing.

I’m not pretending that there was some golden age without spammers; they’ve always been there, and they always will be. Originally we had straight-up content matches, then keywords. Search quality was approaching unusability before Google came on the scene. They did a good job: PageRank put a trust network in the mix, and it worked great for a while. Eventually that well was poisoned too. They’ve done many things since, and I bet many of them helped. The new site blacklist is a good step, but certainly a tricky one to use. The rise of Bing is actually a good thing; I think Google is in a much better position to take risks and change things when they’re at 70 or 80% of the market as opposed to 95%.

So let’s stop talking about SEO as a black art. Let’s stop making our sites worse to prolong a losing battle. Let’s take that energy and put it into something else. Put your content out there in the best format you can. Forget code-to-content ratios and maximizing internal link structures that don’t benefit your users.

Instead, let’s think of ways that we can explicitly help shape results. Here’s one: think robots.txt meets social graph meets PageRank. site.com lists other sites in its /affinity.txt file and defines some coarse relationship. Something like this:

widgetfactory.com/affinity.txt

www2.widgetfactory.com self
*.widgetwiki.org follow
*.widgetassociation.net follow

So we’ve got the “self” tag that basically says “this is another version, or a related version, of me.” If www2.widgetfactory.com/affinity.txt has a “widgetfactory.com self” entry, you’ve got yourself a verified relationship. We’ve also got something like a follow tag that says “we like these sites and think they’re valuable, you should go there (and follow links from us).” It’s basically a vote. Unidirectional votes are useful for quality, and mutual votes are a big clue about transferring trust.
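
To make the idea concrete, here’s a minimal sketch, in Python, of how a crawler might read these files and check for that verified relationship. The URL layout, the two-column format, and the function names are assumptions taken from the example above, not a spec.

import urllib.request

def fetch_affinity(domain):
    # Fetch http://<domain>/affinity.txt and parse it into (pattern, relation) pairs.
    url = "http://%s/affinity.txt" % domain
    entries = []
    with urllib.request.urlopen(url, timeout=10) as resp:
        for line in resp.read().decode("utf-8", "replace").splitlines():
            parts = line.split()
            if len(parts) != 2 or line.lstrip().startswith("#"):
                continue
            entries.append((parts[0], parts[1]))
    return entries

def verified_self(domain_a, domain_b):
    # A verified relationship: each domain lists the other with the "self" tag.
    a_says = any(p == domain_b and r == "self" for p, r in fetch_affinity(domain_a))
    b_says = any(p == domain_a and r == "self" for p, r in fetch_affinity(domain_b))
    return a_says and b_says

# e.g. verified_self("widgetfactory.com", "www2.widgetfactory.com")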

How many votes do you get? No idea; I don’t see any reason to limit it. I think those things will work themselves out. I don’t see regular sites managing thousands of links in there; I think they’d just link to enough to make it meaningful. Or maybe there is some hard limit, so there’s less of a guessing game on how to pick who goes there. 100 per site? 1000?

Now you might think “this is just PageRank,” but I think it actually is different. First, it’s much easier to spot foul play. There are far fewer domains than pages, so the graph is much smaller and easier to traverse. Junk sites are going to stick out like a sore thumb. A spammer can’t really use this channel by making artificial networks of trust, because it would be so easy to kill them en masse. It’s also difficult to mask or overwhelm the way link farms do with PageRank.

Does this hurt the little guy? Not any more than spammers, I think. I think even though the expression of the data is simple, the interpretation of it can be very complex. If you’ve got a little blog that just blathers on about computer stuff and doesn’t even have any ads on it, you may not need much trust to rise up on specific content searches. If you’ve got a site with 3 million pages that look an awful lot like wikipedia pages, and your only votes are from other similar sites, and your whole cell has no links from the larger graph, well, maybe, despite your flawless, compact markup and impeccable word variances, you’re not really adding much value, are you?
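
As a rough illustration of how such a cell would stick out, here’s a sketch that clusters domains by their affinity votes and flags any cluster with no connection to a handful of independently trusted domains. The sample data and the trusted-seed check are my own illustrative assumptions, not part of the proposal.

from collections import defaultdict

votes = {  # domain -> domains it votes for with "follow"
    "widgetfactory.com": {"widgetwiki.org"},
    "widgetwiki.org": {"widgetfactory.com"},
    "spam-a.com": {"spam-b.com"},
    "spam-b.com": {"spam-c.com"},
    "spam-c.com": {"spam-a.com"},
}
trusted_seeds = {"widgetwiki.org"}  # hand-picked, independently vetted domains

def cells(votes):
    # Group domains into connected clusters, ignoring vote direction.
    neighbors = defaultdict(set)
    for src, dsts in votes.items():
        for dst in dsts:
            neighbors[src].add(dst)
            neighbors[dst].add(src)
    seen = set()
    for start in neighbors:
        if start in seen:
            continue
        stack, cell = [start], set()
        while stack:
            d = stack.pop()
            if d not in cell:
                cell.add(d)
                stack.extend(neighbors[d] - cell)
        seen |= cell
        yield cell

for cell in cells(votes):
    if not cell & trusted_seeds:
        print("suspicious cell:", sorted(cell))  # prints the spam-* cluster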

I’m not saying this particular idea solves all of our problems. Heck, I’m sure there are some problems with it as described. But I think approaches like this will not only affect the quality of our search results, they will ultimately affect the quality of the web overall.

When will Google buy VMWare?

Google’s Chromebooks are starting to go mass-market. For those who don’t know, these are essentially laptops that only have a web browser on them. No Windows, no OS X or Linux. To many people, this seems ludicrous. You need apps, right? You need data?

The truth is, the majority of people already only use browser “apps”, which we used to call “websites”. Google has been leading the charge on this by pushing the envelope on in-browser apps with Google Docs and the Chrome App Store. There are other players too: Apple is training people to buy apps and not worry about having to reinstall them when their hardware fails. DropBox is training people to sync everything. Amazon is training people to have virtual CD shelves. Steam is training people to have virtual game libraries. Citrix and LogMeIn are training people to work on remote desktops, and so on.

These are all coming together to get people to the point where we basically go back to dumb terminals. Your computer is nothing more than a local node on the network. That, however, is not the interesting part; people have been saying that for years.

The interesting part, to me, is that it’s not actually going to be the typical early adopters going there first. My girlfriend’s computer literally has nothing on it. She only uses browser apps and iTunes, which is connected to our NAS where her photos are also stored. She uses GMail, Facebook, Google Docs, etc. With the exception of syncing her iPod, which I have to assume someone will figure out how to do in the ChromeOS ecosystem, I’m not sure she would really notice any difference.

Now my computer(s) are a long way from there. I’ve got development environments, SQL servers, mail servers, all sorts of infrastructure set up. I could certainly move to a remote desktop or a remote terminal on a server, but the change would be much more disruptive and not without some costs.

Along a different path, we’ve seen a long progression of advances in virtualization. I actually do most of my work in VMs now, for a number of reasons, but one of which is that I’m not dependent on a particular piece of hardware. If my laptop is destroyed or stolen, I’m back up to speed very quickly. The only thing I’d need to do is install VMWare, plug in my drive, and I’m good to go.

I think these two paths are going to meet up soon. I think ChromeOS is a way to get the low-demand computer users on board. If Google buys VMWare, they can come at it from the other end as well. I think VMs will get leaner while browsers get more robust, and we’ll end up with a hybrid of the two. A lightweight OS that is heavily network/app/web based? I wonder where Google would get one of those? Oh, right, they already did.

To switch or not to switch, Part 1

This entry is part 1 of 3 in the series To switch or not to switch

I was listening to a talk the other day and the speaker derisively mentioned “those people who are happy writing Java for the rest of their lives”, and I thought “Am I one of those?” and then I thought “is that a bad thing?”. As part of my “question everything” journey, I decided that it was time, after 10+ years, to have Java report for inspection and force it to defend its title.

I should make it clear that I am not a language geek, or collector. I generally disagree with “use the right language for the right problem”; I prefer “use the right language for most of your problems”. So far, Java has been that for me. Some things I do in Java are more easily done in other languages, but not so much so that it overtakes the headaches of heterogeneous codebases. If something is really difficult, or impossible, in your main language, bring something else in, but keep it simple. I also think it’s fine to have more than one main language; a client of mine is currently transitioning off C#, keeping Java, and adding Python. What they don’t have is random parts of their infrastructure done in Erlang or Perl or Tcl because that’s what someone wanted to use that day.

I could make this task easier and just look at the “marketable” skills out there, which is a small subset. While I think it’s unlikely that there is some forgotten language just waiting for its moment, it’s certainly possible I could find a neat one that’s fun to play with. Languages like Ruby and Python were around for years before people could find jobs doing them. So I’m going to look at literally every single language I can find, and put them through a series of tests. If you find a language I haven’t mentioned, let me know and it will be given the same chance as the rest.

Round 1:

The point of this round is to identify languages that have any potential for being useful to me.

Qualifying Criteria

Rule 1. It must be “active”.
This is admittedly a subjective term, but we’ll see how it goes. Simula is clearly not active, while Processing clearly is, with a release only weeks ago.
Rule 2. It must compile and run on modern consumer hardware and operating systems.
This means, at minimum, it works on at least one modern flavor of Linux, because I will want this to run on a server somewhere, and I don’t want a Windows or OS X server, or worse, something obscure. For bonus points, it will also work on Windows 7 and/or OS X.

So, that’s it for now. There are no requirements for web frameworks or lambdas or preference for static versus dynamic typing, I think those elements will play out in later rounds.

  • Ada
  • Agena
  • ALGOL 68
  • ATS
  • BASIC
  • BETA
  • Boo
  • C
  • C#
  • C++
  • Clean
  • Clojure
  • COBOL
  • Cobra
  • Common Lisp
  • D
  • Diesel
  • Dylan
  • E
  • Eiffel
  • Erlang
  • F#
  • Factor
  • Falcon
  • Fantom
  • FORTH
  • Fortran
  • GameMonkey Script
  • Go
  • Groovy
  • Haskell
  • Icon
  • Io
  • Ioke
  • Java
  • JavaScript
  • Logo
  • Lua
  • Maple
  • MiniD
  • Mirah
  • Miranda
  • Modula-3
  • MUMPS
  • Nu
  • Objective Caml
  • Objective-C
  • Pascal
  • Perl
  • PHP
  • Pike
  • Processing
  • Pure
  • Python
  • Reia
  • Ruby
  • Sather
  • Scala
  • Scheme
  • Scratch
  • Self
  • SPARK
  • SQL
  • Squeak
  • Squirrel
  • Tcl
  • Tea
  • Timber
  • Unicon
  • Vala
  • Visual Basic .NET

This list is actually a LOT longer than I expected, and yes, there actually is a modern version of ALGOL 68. Stay tuned for part 2.

The Ultimate Music App

There’s an ever-growing number of online music services out there, but none of them have really nailed it for me. Here’s my list of demands:

  • Instant Purchase – Simple one or two click purchase, which adds it to my portfolio. Downloading from one place and uploading to another is dumb.
  • Standard format/no DRM – This is why subscription-based services won’t work.
  • Automatic Download/Sync – As seamless as DropBox, maybe even with a few rules (per playlist, etc).
  • Smart Playlists – The only reason I use iTunes is that I can set up playlists with dynamic criteria, like “stuff I like that I haven’t heard in 2 weeks”. This entails tracking what I listen to and being able to rate stuff (see the sketch after this list).
  • Upload My Own – No reason for me to have to buy things again. I’m fine with paying a small extra fee for this, but I should also be able to work that off by buying new stuff. Amazon hosts stuff I’ve bought from them for free, but charges me for uploads, so in the long run they could actually end up costing me more. They should give me a 50MB bonus per album to upload other files.
  • Mobile – My phone is my music player now, I should be able to stream/sync/download from it as well as my computer.
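
For the smart-playlist item above, here’s a minimal sketch, in Python, of the kind of rule I mean. The track fields (rating, last_played) and the thresholds are assumptions; a real service would feed them from purchase data and listening history.

from datetime import datetime, timedelta

def smart_playlist(tracks, min_rating=4, not_heard_for=timedelta(weeks=2)):
    # "Stuff I like that I haven't heard in 2 weeks."
    cutoff = datetime.now() - not_heard_for
    return [t for t in tracks
            if t["rating"] >= min_rating
            and (t["last_played"] is None or t["last_played"] < cutoff)]

library = [
    {"title": "Song A", "rating": 5, "last_played": datetime(2011, 5, 1)},
    {"title": "Song B", "rating": 2, "last_played": None},
]
print([t["title"] for t in smart_playlist(library)])  # ['Song A']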

Bonus Features

  • API – Let me have another program talk to your service to do things like recommendations and missing tracks.
  • Podcasts – This doesn’t necessarily have to be done in-service; if the API allowed uploads, someone else could do it. But it seems pretty trivial to add on if all of the above things are in place.

Don’t Really Care

  • Sharing – Nice to have but I’d be fine with a service I can’t share. I’d prefer the option to sign into more than one account at a time.

Software that isn’t afraid to ask questions

An area where user-focused software has gotten better in the past 10 years or so is being aware of, and protective of, the context in which users are operating. Things like autocomplete and instant validation are expected behaviors now. Another area where software is really picking up steam is analytics: understanding behaviors. You see lightweight versions of this creeping into consumer software with things like Mint.com and the graphs in Thunderbird, but most of the cool stuff is happening on a large scale in Hadoop clusters and hedge funds, because that’s where the money is right now.

But where software has not been making advancements is in being proactively helpful, using that context awareness as well as those analytics. If that phrase puts you in a Clippy-induced rage, my apologies, but I think this is an area where software needs to go. I think Clippy failed because it was interfering with creative input. We’ve since learned that when a user wants to tell you something, you want to expedite that, not interfere. Google’s famed homepage doesn’t tell you what, or how, to search. They’ve adapted to work with what people want to tell it.

I’m talking about software that gets involved in things computers are good at, like managing information, and gets involved in the process the way that a helpful person would. We’ve done some of this in simple, mechanical ways. Mint.com will tell me when I’ve blown my beef jerky budget; Thunderbird will remind me to attach a file if I have the word “attached” in my email. I think this is a teeny-tiny preview of where things will go.

Let’s say you get a strange new job helping people manage their schedule. You get assigned a client. What’s the first thing you do, after introducing yourself? You don’t sit there and watch them, or ask them to fill out a calendar and promise to remind them when things are due. No, you ask questions. And not questions a computer would currently ask, but a question like “what’s the most important thing you do every day?”. Once you’ve gotten a few answers, you start making specific suggestions like “Do you think you could do this task on the weekends instead of before work?”.

Now, we’re a long way from software fooling people into thinking it cares about them, or understands their quirks, but we’re also not even trying to do the simple stuff. When I enter an appointment on Google Calendar, it has some fields I can put data in, but it makes no attempt to understand what I’m doing. It doesn’t try to notice that it’s a doctor’s appointment in Boston at 9am and that I’m coming from an hour away during rush hour, and maybe that 15 minute reminder isn’t really going to do much. It would be more helpful if it asked a question like “are you having blood drawn?”, because if I am, it can then remind me the night before that I shouldn’t eat dinner. It can look at traffic that morning and tell me that maybe I should leave even earlier because there’s an accident. It can put something on my todo list for two weeks from now to see if the results are in. All from asking one easy question.
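
To put that in concrete terms, here’s a sketch of how even a dumb keyword rule plus one easy question could produce useful reminders. Everything here (the trigger word, the field names, the offsets, the ask callback) is hypothetical.

from datetime import timedelta

def suggest_followups(event, ask):
    # ask(question) is a callback that returns True/False from the user.
    suggestions = []
    if "doctor" in event["title"].lower():  # the "lame, but a start" rule
        if ask("Are you having blood drawn?"):
            suggestions.append((timedelta(hours=-12), "No dinner tonight: fasting before the appointment."))
        suggestions.append((timedelta(hours=-2), "Check traffic to " + event["location"] + "; leave earlier if needed."))
        suggestions.append((timedelta(weeks=2), "Check whether the results are in."))
    return suggestions

event = {"title": "Doctor's appointment", "location": "Boston"}
for offset, text in suggest_followups(event, ask=lambda q: True):
    print(offset, text)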

Now, a programmer who got a spec with a feature like this would probably be speechless. The complexity and heuristics involved are enormous. It would probably get pared down to “put doctor icon on appointment if the word doctor appears in title”. Lame, but that’s a start, right? I think this behavior is going to be attacked on many fronts, from “dumb” rules like that, to fancy techniques that haven’t even been invented yet.

I’ve started experimenting with this technique to manage the list of ideas/tasks I have. In order to see how it might work, I’ve actually forbidden myself from even using a GUI. It’s all command line prompts, because I basically want it to ask me questions rather than accept my commands. There’s not much to it right now: it basically picks an item off the list and says, “Do you want to do this?”, and I have to answer it (or skip it, which is valid data too). I can say it’s already done, or that I can’t do it because something else needs to happen first, or that I just don’t want to do it today.

If it’s having trouble deciding what option to show me, it will show two of them and say “Which of these is more important?”. Again, I’m not re-ordering a list or assigning priorities, I’m answering simple questions. More importantly, I’m only answering questions that have a direct impact on how the program helps me. None of this is artificial intelligence or fancy math or data structures, the code is actually pretty tedious so far, but even after a few hours, it actually feels helpful, almost personable.
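
Here’s a stripped-down sketch of that loop, to show how little machinery it needs. The item list, the answer options, and the random picker are placeholders for whatever storage and selection logic the real version grows.

import random

items = ["Research X", "Test Y", "Add feature Z to library W"]
answers = []  # every response, including skips, is data the picker could learn from

def ask(prompt):
    return input(prompt + " ").strip().lower()

while items:
    if len(items) >= 2 and random.random() < 0.3:
        a, b = random.sample(items, 2)
        choice = ask("Which of these is more important? 1) %s  2) %s [1/2]" % (a, b))
        answers.append(("compare", a if choice == "1" else b))
        continue
    item = random.choice(items)
    reply = ask("Do you want to do this? %s [yes/done/blocked/skip/quit]" % item)
    answers.append((item, reply))
    if reply == "quit":
        break
    if reply == "done":
        items.remove(item)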

If you know of any examples of software that actually tries to help in meaningful ways, even if it fails at it, let me know!

Ideas are like rabbits

I’m not an advocate or follower of any particular productivity framework, or even of the idea that you should have one, but I’ve recently rediscovered a fragment of one that has been very significant. The short version is that writing down ideas makes room for more; the long version follows.

As hinted at in my previous post, I’ve been experiencing a bit of overwhelming intellectual stimulation lately. I credit this to my new freelancing ways. I can spend an hour, or a day, following a thought, where before I had to run it alongside my work responsibilities. That’s not to say I have no responsibilities to my clients, but I think I’ve set expectations to the point where I can balance things better.

Last week I was feeling a bit overwhelmed with tasks, and went back to a GTD technique, which is to essentially write down everything you need to do. There is a cathartic aspect to this, as well as the feeling of “oh, well that’s not so bad” once you see the list. In the past, my list was 20, 30, maybe as many as 50 items. This time, I rattled off over 200 in a row, and many more in the following couple of days. This did NOT instill a feeling of “oh, well that’s not so bad”.

Once my brain was sufficiently debriefed, I looked at the list. I noticed that most of the things were my own projects. Research this, test that, learn that, add feature X to library Y, and so on. It turns out that many of my todos were more idea-related than work-related. And almost every time I looked at one, others would pop up. I guess this makes it more like mitosis than rabbits, but you get the idea, and rabbits are cuter anyway. Regardless, it was, and is, out of control, and it’s great.

This is not a new invention, it’s basically brainstorming, and there are other people doing it too. I think the part of it that’s new for me is that I’m not drawing any distinction between ideas and tasks, and I’m not seeing any value to doing so. I’m not getting overwhelmed by tasks I haven’t started because in fact, I have started them, by logging ideas related to them. For software at least, expressing ideas is not far removed from doing the actual work, so I’m almost tricking myself into being productive.

There are two problems, however. The first is that my list is scaling poorly, and there are no tools that work for me. I’ve tried mind-mapping in the past, and might try it here, but am not especially hopeful. The second problem is that I simply don’t have enough time to do it all, and I’m afraid no program will ever help there.

Freelancing vs. Contracting vs. Employment

I’ve been freelancing for a little over two months now, and already it has been an exciting, rewarding experience. I figured I would share a few observations.

What do I mean by “freelancing” as opposed to contracting? I admit there is no official differentiation of the two; this is strictly my own, and I apply it narrowly to software/knowledge work.

I see freelancing as the extreme end of transient employment. Freelancers may be hired on a per-project basis, or a per-day basis. There isn’t any implicit availability; they’re booked (and paid) or they’re not. They don’t go to meetings that aren’t very relevant, or get unwillingly reassigned. If there are no projects where they can offer expertise, they’re done. If a company tightens its belt, freelancers are the very first to go, with no notice and no severance.

I see contractors as provisional employees. They aren’t vested in the company, but they essentially perform like employees. They probably go to company-wide meetings, fit into the normal reporting structure, etc. After the freelancers go, the contractors are next in line. Usually contractors are given time to “wrap things up” or have a planned end date.

Employees are, well, employees. They work for the company, and only for the company. Hopefully, if the company makes a lot of money, the employees benefit, whereas the previous types don’t. Many people think employment offers job security, but short of official agreements like tenure or unions, I disagree. I define job security as the ability to get a job, not to keep one. I’m not being cynical about fat-cat CEOs laying people off for fun; I’m just being realistic about what employment actually provides.

The reason I choose freelancing is that I want to be intellectually promiscuous, at least for now, and I want that arrangement to be very clear to clients. I have a self-imposed 20-hour-per-client-per-week cap. This may sound silly to you, and it certainly hasn’t been popular with clients. It’s a tough sell, and it has cost me some otherwise good opportunities. I admit I haven’t really mastered it yet, but I think I’m getting better at it, and am grateful that my clients have been accommodating. On the other hand, without it I don’t think I’d be nearly as stimulated as I am now, but that’s a topic for another post.

Ubuntu: See you in 2012

As a follow-up to my previous post, I’ve just finished moving off my Ubuntu VMs. I don’t necessarily blame Ubuntu for this, but it’s just a little too laggy in a VM. I bet it’s only a few ms most of the time, but it’s noticeable and it’s frustrating when you’re in a good flow. Perhaps next year, if there have been improvements on both the Linux and VMWare side.

I did try VirtualBox, which seemed slightly more responsive but was very flaky; it would randomly lock up in strange ways. I also tried Virtual PC, which isn’t really an option since it lacks real multi-monitor support, but it didn’t offer any improvements anyway.

Another issue, which may seem minor but any programmer will tell you is definitely not, is that my code font, Lucida Console, doesn’t work the same on Linux. I’m not familiar enough with font mechanics to say how, but I tried everything from fractional font sizes to going through probably 50 other fonts, and it just doesn’t have the right density. Fonts on Linux are getting better though; I did enjoy the Ubuntu font in the OS UI.

Alternatively, running Windows 7 in VMWare has so far been almost completely transparent. I haven’t done it enough to say it’s a total success, but I’ve already gotten confused when I’m in full-screen, which is a good sign. Of course the downside is that licensing is tricky and expensive, but since these are for paying projects, I can recoup the $200 fee. I may try an OEM license for the next one; it appears to be legal and about half the cost.

To Git or not to Git

You might notice from my past few posts that I’m basically going through my entire stack of tools and re-evaluating everything. This time it’s version control.

A little history

I’ve mostly used SVN, and before that, CVS. I’ve tinkered with some of the more heavyweight ones like ClearCase, TrueChange, and Visual SourceSafe as part of consulting gigs, but only enough to know that they were skills unto themselves, and ones I didn’t especially want to let into my brain. My personal repo up till now has been SVN after finally switching over from CVS a few years ago.

Why SVN?

The short answer is, because it’s easy. The longer answer is that it’s easy to set up, it’s fairly hard to break, and it has a decent Eclipse plugin. You might notice that I didn’t mention anything about branches, or rollbacks, or speed, or centralized vs. distributed. Those things don’t really matter to me if the first three requirements aren’t satisfied.

Branches are the devil

I don’t hate branches because they were a legendary nightmare in CVS. I don’t hate branches because svn merge rarely works. I hate branches because of the mental cost they inflict on a team.

Having a team work in multiple branches is, as far as I’ve ever seen, a sign that your team is too big, your project is too monolithic, or your effective management and oversight capabilities are lacking.

There are cases where branches don’t impose such a cost, however. If there are no plans to ever merge a branch back to trunk, and it’s simply an experimental offshoot where a few snippets might be pulled into trunk/master, that’s fine.

If everyone is switching to a new branch, that’s also fine. At one point we had to roll back to code from over a month prior, and weren’t sure if we were going to get back to trunk or not. Everyone switched to the new branch, and luckily everyone was able to switch back to trunk a few months later. It took days to merge back, and that wasn’t SVN’s fault; that was just a lot of human time that was required to pull two very different versions together safely and not leave landmines all over the place.

Not on my resume

I don’t put any VCS software on my resume, because, while it’s absolutely important to use one, I don’t think it’s that important which one I know or use (it is a good interview question, though). They aren’t really that hard to learn, and since everyone uses them slightly differently there’s no avoiding some ramp-up time. If an employer ever has me and another candidate so close that our VCS experience is the deciding factor, please, take the other person.

Dirty little secret

I don’t actually have the command line version of svn or cvs installed on any of my workstations. Nor do I have standalone GUIs or shell integration like Tortoise. I know the command line, and use it on servers, but I do all my actual development with Eclipse’s integrated client. I’ve actually even used Eclipse to manage svn projects that were Flash or C. I just find the command line so restricting and linear for what really is a very non-linear task. The Eclipse Subversion plugin took years to go from bad to passable, and it’s still not as good as the CVS version, which is one reason I never grew too attached to SVN.

Why? I need to see everything that’s changed, everything that’s dirty. I diff every file individually before I commit. I often find changes I didn’t really need to make, and nuke them. Sometimes I find that I actually didn’t account for a situation the old code did. Many times just looking at my code in this way makes me think just differently enough that I come up with a better way of doing it. I simply don’t have this visibility amidst the >>> and <<< of a command-line diff, because it’s not my native development environment.

Enter git

So, even though I’d be happy to continue using SVN, I need to see what all this git hubbub is about. It’s been around for over 5 years now, so clearly it’s not a fad. It’s also been vouched for by enough voices I respect that it deserved a shot. There is also Mercurial and Bazaar, but I haven’t seen nearly the same level of buy-in from trusted people for those.

My sideline view is that git is favored by the Python people and the “Ruby taliban”. Mercurial seems to be favored by the Microsoft and enterprise crowd, and Bazaar is somewhere out back playing with package maintainers. Java is still mostly in SVN land, probably because it’s more mature, more corporate, and slower moving. 5 years isn’t a long time in Java years these days, so I’d bet that a high percentage of projects people are still working on are from when git was just Linus stomping his feet. The Spring/JBoss people seem to have gone the Mercurial route, while Eclipse is going git.

Git also has GitHub, which is used by some people I know, while I don’t know anyone personally who is using Mercurial’s equivalent, Bitbucket (or even using Mercurial, for that matter). So I ultimately went with what Eclipse and my friends were using over the other interests, and started with git. From what I understand the differences are slight in the early stages anyway; this was really more a matter of trying distributed vs. centralized version control.

First steps

I started off with Github’s helpful handholding, which included installing msysgit. I imported my projects and used it in earnest for a few days. Once I was confident in my ability to actually get stuff up there, I dug a little deeper.

I read the Pro Git book, which I need to call special attention to because it’s really, really good. It’s short, concise, has diagrams where you need diagrams, and ranks high in terms of how computer books should be written. If you don’t know anything about git, spend a night reading through this, and you will know plenty.

What about the secret?

So yeah, I was using the command line. Lean in closer so I can tell you why… BECAUSE THE GUIS ARE TERRIBLE. They’re not just ugly, they’re interface poisoning at a master level; think what an even more complicated Bugzilla would look like. And yes, I say “they”, because there is more than one, and they’re all basically someone jamming the command line output into various GUI toolkits. Part of this is git’s design, but I know that someone will figure this out.

Git’s design allows for a huge number of permutations of workflow, which means there are a number of extra steps compared to something like Subversion. On the command line, this doesn’t seem to hurt that much (in comparison). But GUIs don’t deal with situations like this very well. They can either be helpful and guide you down a path, or play dumb and wait for you to hold their hand. All of the git GUIs I’ve seen so far do the latter.

Am I being a stick in the mud and saying that something as marvelous as git should be constrained to the simplicity of dumb old svn? Actually, yes. I should be able to edit some files, see a list of those files, diff them against the “real” version of the file (as in the one everyone else sees), and commit, with a message. Then go home. I don’t care about SHA-1 hashes because I don’t remember them; I only need them when you need to tell me that two things are different. I don’t care about branches other than knowing which I’m in (we’ll get to this next). I don’t want to be bothered with any of this fancy information if nothing is broken (or going to break if I continue).

This isn’t actually a problem of git. This is a problem of people being indecisive when it comes to UIs. If you do everything, you fail. If you do nothing, you fail. If you do any subset of everything, you fail for some people. That’s OK, don’t worry, you can optimize as you go. Your first priority should not be exposing the power of git, it should be letting me put my code on the server so I can go home. Let me drop into all this fancy stuff with local branches and pushing tags and rebasing and such on a case-by-case basis, when I need it, and when I’m good at it.

What about the branches?

Git does branches right. I could go into more detail, but Pro Git does it so well I won’t bother, so just go read that. What git does not do very well, and I’m not sure if anything can do as long as humans are involved, is mitigate the mental cost of a distributed branch. It does make merging much more feasible, and allows for a larger number of cases where branches are not going to cost much, but they still need to be used with discipline and restraint.

The new idea that git adds is the local branch. I haven’t had a chance to use this much yet, but this is the feature that may ultimately win me over. I can look back and ask “when have I ever needed a local branch?” and the answer would be “a few times, but not often”. But if I look back and ask “when would I have benefited from a local branch?” the answer would be “hmm, I don’t know, but probably more often than I needed one”.

The example of the hotfix scenario (where you need to fix/test something from last week’s release and trunk/master isn’t ready) isn’t very compelling to me as an SVN user. It’s easy to make an SVN branch for something like this. I svn copy and switch it if my local copy is clean. If not, I can check out the project again, or if it’s a big one, I copy it over and svn switch it. Not as easy as git, but then again, I don’t generally have to make a lot of hotfixes either.

The issue scenario (work on one issue per branch, merge when/if complete) is more compelling. I’d like to say that it isn’t and that I try to start and finish one issue at a time, but obviously that doesn’t happen enough. I like the fact that it’s so cheap to make a branch that I might as well just do it all the time. If I didn’t end up needing it, no harm done. If I did end up needing it, because some other issue suddenly got more important and the one I’m working on needs to chill, then I’m glad it’s there.

The real win here is that nobody has to know about my branch, which means they never have to wonder what’s in it, or if it’s up to date. This means there is no cost to my team because I have a branch that is 2 weeks out of date. There is cost to me, but no more than having multiple versions of the project checked out, or a set of patch files sitting there waiting for me to get back to it.

One more thing

The fact that every developer has a full copy of the repository on their computer, basically for free? That’s really nice. Sure, you do backups and the chances of your computer being the last one on Earth with a copy of the repo is slim, but the peace of mind is undeniable.

The Verdict

The only reason this is really a decision at all is that git is harder to use for normal stuff. The command line can be scripted, so if someone got the GUI to the point where you start with an SVN-style workflow and deviate as needed, there really would be no argument for using SVN from what I can see.

My decision is that git is the winner here, despite that massive failing, because I have faith that a combination of two things will happen: I will learn the GUIs better because it’s worth my time to do so, and someone will eventually figure out how to make them smarter and simpler.

I could not fault someone for sticking with SVN, even for a new project, because you can always import it later, but I will be starting new projects in git.

Why don’t websites have credits?

Engineers of any discipline are largely an anonymous bunch. You don’t know who designed the fuel pump in your car; I’d even wager it would be extremely difficult for you to find out if you wanted to. You don’t know who wrote the code for the OS X Dock or the Windows Start bar, or who wrote the Like button on Facebook. These people made decisions that affect you deeply every day, and you have no idea who they are.

The most interesting part of this is that those people are OK with it. If you ask them (myself included) they will tell you that it doesn’t matter, that what really matters is the quality of the work and the enjoyment you had doing it. Unfortunately, I think we’re wrong.

Should they?

I can’t seem to come up with a good framework for figuring out who wants credit, never mind who deserves it. If you so much as make a photocopy during the production of a movie, you’re probably in the credits with some high-falutin title like “First deputy assistant duplication specialist”. Music credits are tied to royalties and managed very closely. Most authors wouldn’t think about publishing something anonymously, nor would artists or sculptors. Artists always sign their work.

This is not even strictly a software issue. Video games list credits, often in the box and at the end of the game, and they even have an IMDB-like site. Nor is it an “arts & entertainment” issue; any credible scientific paper will cite other works and acknowledge contributions. Patents have names on them, even when assigned to a company.

A few software packages have listed credits. If I remember correctly, Microsoft did it on old versions of Word and Excel, and Adobe had it on old versions of Photoshop and Illustrator. I’m curious why those were removed, or at least hidden. “The Social Network” had something about Saverin being removed and re-added to the “masthead” of Facebook (although I don’t know what or where that is).

So it would seem that we might be in the minority here, perhaps due to convention rather than any specific reason. And if there’s one thing that bugs an engineer, it’s deviating from standards with no good reason.

So let’s do it.

Why do it?

  • Pride in your work – Sure, there is some pride in doing a good job anonymously, but wouldn’t you be just a little more motivated or happy now that your name is on it?
  • Being a stakeholder – We’ve all done projects we didn’t believe in, and consoled ourselves with the fact that “it’s not my project”. Well, now it is.
  • Reputation – We’ve got our resumes, but credits will verify them.
  • Honesty/Transparency – There is no good reason to withhold this information, so it should be out there.
  • All that money they spent on school – Show your parents your name on a website and watch them smile.

So who gets listed?

I think the short answer here is, everyone. Movies do it, why not websites? It could be just a big list of names, or something more detailed with contributions, dates, whatever makes sense. Let’s just start throwing some names up there, and let the de facto standards evolve on their own.

If you know of any major sites that do this well, put it in the comments. Similarly, if you can think of a good reason why this shouldn’t happen, I’d love to hear about it.