Database Naming Conventions

Time for another naming convention. This time it’s something people care more about than Java packages: databases. Here are the rules, and the reasons behind them.

Use lowercase for everything

We’ve got 4 choices here:

  • Mixed case

    • Some servers are case-sensitive and some are not. MySQL, for example, is case-insensitive for column names, case-insensitive on Windows for table names, but case-sensitive on Linux for table names (see the sketch after this list).
    • Error prone
  • No convention

    • Same reasons as mixed case
  • Upper case

    • SQL is easier to scan when the reserved words are uppercase. This is valuable when scanning log files looking for things like WHERE clauses and JOINs.
    • MySQL will always dump table names on Windows in lowercase.
  • Lowercase

    • Works everywhere. Some servers, like Oracle, will store unquoted names in uppercase, but they match case-insensitively, so you can still write lowercase.
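
A quick sketch of the hazard (hypothetical names; exact behavior depends on the server, and for MySQL on the lower_case_table_names setting):

    -- Created with a mixed-case name:
    CREATE TABLE CustomerAddress (customer_address_id INT);

    -- Fine on MySQL/Windows, fails on a default MySQL/Linux install,
    -- where table names are case-sensitive:
    SELECT * FROM customeraddress;

    -- All-lowercase names behave the same everywhere:
    CREATE TABLE customer_address (customer_address_id INT);
    SELECT * FROM customer_address;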

Only use letters, underscores, and numbers (sparingly)

  • Most servers support other characters, but there are no other characters which all the major servers support.
  • Numbers should be used as little as possible. Frequent use is typically a symptom of poor normalization (see the sketch after this list).

    • address1, address2 is OK
  • Whitespace isn’t allowed on most servers, and when it is you have to quote or bracket everything, which gets messy.
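
A minimal sketch of the smell and the fix, using hypothetical phone columns:

    -- Smell: numbered columns usually mean a missing child table
    CREATE TABLE person (
        person_id INT PRIMARY KEY,
        phone1    VARCHAR(20),
        phone2    VARCHAR(20),
        phone3    VARCHAR(20)  -- and when someone has a fourth?
    );

    -- Normalized: one row per phone number, no artificial cap
    CREATE TABLE phone (
        phone_id  INT PRIMARY KEY,
        person_id INT NOT NULL,  -- references person(person_id)
        number    VARCHAR(20) NOT NULL
    );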

Table and column names should be short, but not abbreviated.

  • You’ve seen the same thing abbreviated every way possible, like cust_add, cus_addr, cs_ad, cust_addrs. Why not just customer_address? We’re not writing code on 80-character terminals any more, and most people aren’t even writing SQL, so let’s keep it clear, OK?
  • 30 characters is considered the safe limit for portability, but give some serious thought before you go past 20.

Table names should be singular.

Yes, singular! Oh yes, I went there. I used to use plural names, because it’s more semantically accurate. After all, if each record is a person, then a group of them would be people, right? Right, but who cares. SELECT * FROM person isn’t any less clear than people, especially if you’ve got a solid convention. You don’t pluralize a class name just because its instances will end up in a vector, do you? Also:

  • English plurals are crazy, and avoiding them is good.

    • user -> users
    • reply -> replies
    • address -> addresses
    • data -> data (unless it’s geographic, then it’s datum -> data)
  • Singular names mean that your primary key can always be tablename_id, which reduces errors and saves time (see the sketch below).
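
A minimal sketch of the payoff (the column choices are my own illustration):

    -- With singular names the key convention is mechanical: table name + '_id'
    CREATE TABLE person (
        person_id INT PRIMARY KEY,
        name      VARCHAR(100) NOT NULL
    );

    CREATE TABLE reply (
        reply_id  INT PRIMARY KEY,
        person_id INT NOT NULL,  -- the foreign key names itself: person.person_id
        body      TEXT
    );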

Double Underscores for Associative Tables.

You’ve got your person table, and your address table, and there’s a many-to-many between them. This table should be called address__person. Why? Well, what if you have a legacy_customer table that also ties to address? Now you’ve got address__legacy_customer. A new developer can easily pick up this convention and will be able to break down the names accordingly. Remember, no matter what the Perl/Lisp/Ruby/etc. guys say, clarity of code is judged by how someone reads it, not how they write it.
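
Concretely, a sketch of such a table (the column definitions are illustrative):

    -- The double underscore separates the two component table names
    CREATE TABLE address__person (
        address_id INT NOT NULL,  -- references address(address_id)
        person_id  INT NOT NULL,  -- references person(person_id)
        PRIMARY KEY (address_id, person_id)
    );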

Component Names of Associative Tables in Alphabetical Order.

This rule is somewhat arbitrary, but still beneficial. There’s no good way to determine which goes first: table size, “importance”, age, who knows what else, and those assessments may change over time. Or, you might find that your manager assigned the same task to two people, and now you’ve got an address__person and a person__address table co-existing peacefully, when you only need one. Everyone putting them in the same order makes reading and writing queries easier, as in the sketch below.
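
For instance (assuming the person and address__person tables sketched above, plus an address table with a street column, which I’ve made up):

    -- Nobody has to guess between address__person and person__address
    SELECT p.name, a.street
    FROM person p
    JOIN address__person ap ON ap.person_id = p.person_id
    JOIN address a ON a.address_id = ap.address_id;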

That’s all I’ve got for now, but I encourage you to offer your own, or even refute some of the ones above (with some reasoning, of course).

New Toy Review: TeraStation Live

A few weeks ago I picked up a 2 TB TeraStation Live, and so far I have to say I’m very happy with it. I got it for $800 from buy.com, which puts the whole device at about $400 over the price of its four 500GB hard drives. I think that premium is well worth it.

It was very easy to set up, at least for someone with a base knowledge of setting up a file share. The administration is done via a slow, clunky, but straightforward and ultimately effective web UI. I have set up a few different shares with varying permissions, and haven’t had any problems so far. Among its key features: it’s virtually silent, has gigabit networking, and isn’t too big, about the size of a mini-tower PC.

The Mobile Revolution Begins! (and 3 years later the iPhone comes out)

I find the whole iPhone phenomenon interesting because people are seeing it as a technology advance when it’s really a textbook marketing/branding success (revolution?) in the making. Apple essentially tricked the entire media landscape into doing amazing PR, even “independent” sources like blogs and NPR.

Let’s be clear, the $500+ phone isn’t new. The phone with a Google map that you can drag around with your finger isn’t new. Neither is one that can play gigs of mp3s, has a real address book, a real web browser, real email, etc. Having the option from a text message to reply or voice-call the person back isn’t new. These things have existed for years; I know because I have them. Oh, and mine lets you build/install apps, the sticking point many techies complain about with the iPhone, though very few of those apps have seen any success. It’s also available for any network, uses EVDO, has wi-fi and bluetooth, has handwriting recognition, serves as an internet conduit, has a real keyboard, etc. The reason nobody had to camp out for my phone, the reason I don’t pimp it to everyone I meet? It’s running a Microsoft OS, ho hum. So from a technology/feature standpoint, the iPhone is nothing new, but even well-informed tech people seem to have fallen for the slick ads, the expert PR “reviews”, and the general fanfare.

People are opining that this was Steve Jobs’/Apple’s greatest risk, which I disagree with: they weren’t really taking a risk, because they weren’t doing anything new. They’ve added that Apple shine to devices that have existed for years, devices they’ve been able to watch people use, and they’ve fixed the mistakes. They’ve made the interface sexy with effects and constraints, a skill they’ve honed for even longer. They’ve used all their fanboys to turn a device into the Beatles.

I’ve done enough software to know how important (and potentially misleading, though that’s not the case here as far as I know) a well-executed interface is to adoption. I once co-wrote a CMS that was effectively ignored by everyone but its authors, but then I spent a day adding some logos, gradients, and javascript UI effects, and people were immediately scheduling meetings with me just to get a peek at it. Where Jobs & Co. deserve credit is that all of the “news” coverage their product is getting is lauding a revolution, instead of saying “Apple’s new iPhone is almost as capable as a 3-year-old Windows Mobile phone, but they executed the interface much better.” For more information, see the next edition of most marketing/branding books.

Facebook: 200 Hours (of dev time) FREE!

The soup du jour in the Web 2.0/startup community is Facebook’s new API. StyleFeeder has an app on it now, because it made sense and it’s a good fit. You should try it out. Phil did most of the heavy lifting, and we should pity him for it, because the API is very poorly documented, not completely built, and nerve-wrackingly unstable. But it works, and it’s pretty cool to be able to get into a place where people feel comfortable and offer them something we think is useful, something Facebook never would have gotten around to building. Not everyone is impressed with the whole idea though:

“What happens when Flickr and LinkedIn and Google and Microsoft and MySpace and YouTube and MetaFilter and Vimeo and Last.fm launch their platforms that you need to develop apps for in some proprietary language that’s different for each platform?”

Kottke obviously wasn’t doing any internet development in 1994, or he would have heard “why isn’t my website as nice as the stuff I see on AOL?”

It’s pretty clear what would happen. People would develop for Google and MySpace, people are already developing for Microsoft, and the others would be ignored. You need massive scale and a big carrot (Facebook’s is a massive, clean network of users) to pull this off. Everyone else puts their faith in the W3C. Eventually the Facebook API will be deprecated or ignored, and if there are any good ideas in it, they’ll trickle out into the mainstream.

Valleywag has a screed by someone who bought the hype and blames Facebook for it, and who is now apparently trying to generate some backlash to justify to management why it didn’t work as promised. Keep in mind that the hype was not from Facebook: I didn’t get any emails from them or see any ads; it was driven solely by tech pundits, bloggers, and VC panic. This person is sad that they didn’t get millions of users overnight and that Facebook “undercut” developers by preventing them from spamming their apps all over the place. If your app is useful or fun, it’s going to grow on its own, just like with any other software distribution method. Sure, it may seem unfair that some people got in and got big before the rules tightened up, but it’s also unfair that my parents got a house with an acre of land for $32k before I could (I was 3).

PuTTY: Custom Icons

PuTTY is the most popular free SSH client for Windows. It’s very stable and very lightweight, due to the developers keeping bloat out of the source. One piece of bloat that I would like to see is the ability to pick icons, which I find very valuable when working on multiple servers, a common task for most developers. Luckily it’s pretty easy to roll your own PuTTY, so I figured I would offer a little how-to here.

  1. Download and install Cygwin. I’m not sure which packages you need, as I typically just install everything.
  2. Download putty source code from here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
  3. Open cygwin shell
  4. cygwin: mkdir putty
  5. cygwin: cd putty
  6. cygwin: unzip [wherever download is]/putty-src.zip
  7. cygwin: mv WINDOWS/PUTTY.ICO WINDOWS/PUTTY_ORIG.ICO
  8. Now put your icon (.ico) file in WINDOWS and name it PUTTY.ICO
  9. cygwin: cd WINDOWS
  10. cygwin: make -f MAKEFILE.CYG putty.exe
  11. You should now have a nice fresh putty.exe file in the WINDOWS directory; copy this wherever you wish.
  12. Copy the next icon to PUTTY.ICO and re-run make. Repeat until you have one executable per server.

Safari for Windows: No Thanks

I’m with many others, including Yuval, in saying that Apple porting Safari to Windows is unfortunate. Web developers have basically got 3 main browsers to deal with right now, and adding a 4th isn’t going to help. IE6 is dying, but still prevalent. IE7 isn’t great, but is obviously a force to be reckoned with. Firefox is bloated and slow, but comfy and relatively painless to develop against. Safari is outdated, buggy and less standards-compatible than IE7 or Firefox, often intentionally so (try styling a submit button).

The big claim Apple makes is page rendering performance. Firstly, I hope nobody takes Apple’s performance numbers seriously; they’ve been flat-out lying about them across the board for a decade or more. Even if the stats are legit, picking a browser based on page rendering speed is silly. It’s like buying a car that goes 200mph instead of 150mph when you live in Los Angeles and spend most of the day in slow traffic anyway.

I think Apple ported Safari to force people to support it, since it’s often sidelined due to low usage and its quirky behaviors. I’m really loath to support a browser that offers no new features yet has a unique set of drawbacks. The solution here is for Apple to abandon the KHTML core and put its overstated development resources into the Gecko engine, giving them the ability to Mac-ify and brand a browser without imposing increased costs on web developers. They should look to their own successful Airport line for how you can drive adoption, leverage and strengthen a standard, and maintain a strong brand at the same time.

Vista: Day 1

So, after finally calling it quits in my battle against my MacBook Pro, I retreated to Microsoft. You basically can’t buy a Dell without Vista now, so I figured I’d give it a shot. There’s been a fair amount of hype by Microsoft in favor of it, and a tremendous amount of anti-hype against it by basically everyone else. After my first day, I’ll say that neither side has much to stand on.

The system is what I’d consider an average developer box these days. Grand total with tax was less than $1150.

  • Dell E521
  • Athlon Dual-core 5000+
  • 2GB RAM
  • 250GB hard drive (no raid)
  • ATI X1300 video card
  • 20″ Dell 4:3 LCD
  • Windows Vista Home Premium

I hooked it up to my pre-existing 24″ LCD as the primary monitor. I had bought a Radeon 9250 so that I could run the second monitor on DVI, but ATI doesn’t have Vista drivers for that card, so I hooked it up to the VGA port until I can return the card for a newer one.

Here’s what I installed:

  • JDK6 – No problems
  • Eclipse 3.3M4 – No problems
  • Jetty 5 and 6 – No problems
  • JettyLauncher (eclipse plugin) – Only works with JDK 5, not sure if this is a Vista thing, so…
  • Subclipse (eclipse/subversion plugin) – No problems
  • JDK5 – No problems
  • MySQL 4 – No problems
  • Firefox 2 – No problems
  • Yahoo IM – Crashed once, but it does that all too often on XP too.
  • AIM 6 – Crashed once after I first started it, ran fine through several conversations later on.
  • Windows Mail (pre-installed, I configured for POP and SMTP over SSL) – no problems.

So the big complaint from the hordes has been performance. For a mid-range machine, with full Aero enabled on two monitors at 1920×1200 and 1600×1200, I see no lag at all. Aero is actually decent. Its only major flourish is the new “flip-3d”, where the windows stack up like something you would see on a Mac, but it’s really kind of useless, and I prefer alt-tab. The live previews when you hover over the taskbar are actually kind of nice, though not very useful. The transparency is fancy but not overdone, and the fade/shrink when you minimize is quick and nicely done. I haven’t disabled any of it after 7 hours of use, which is about 6.9 hours longer than the ridiculous XP theme lasted.

Programs launch and run faster, though installers do seem to go slower and hang for a while. It also comes with a ton of nice fonts; I’m curious whether we’ll start seeing those show up in CSS files. I set most of my stuff up with the familiar 8pt Lucida Console, but I’ll have to go through and see if they’ve added any other nice monospace fonts.

Microsoft seems to have adopted the Unix idea of security when it comes to “sudo”. Whenever you do anything that affects the OS, it prompts you to allow it to proceed. If you want to do something like edit your hosts file, you’ll need to run your editor as an administrator, which is as easy as a right click. It’s all a bit annoying, but probably just because I was tweaking things a lot. We’ll see how it plays out after a period of normal usage. The worst part is that when the prompt comes up, the whole screen flickers and takes on a lightbox-type effect, which seems like overkill and is poorly implemented.

Other than that, I hate to break it to the Microsoft PR squad and the throngs of haters, but it’s really just good old Windows. The paths are a bit different, things are called slightly different names, but from my perspective, it’s all trivial stuff. Unless I come across something tragic or wonderful, I see no reason for people to upgrade, or to resist upgrading.

Macintosh Gripes: Poor Usability

I’ve had many computers over the years, and many of those have been Apple Macintoshes. I’ve had an SE, PowerMac 7200/75, iBook, Mini, and currently a MacBook Pro. Up through about 1997 I considered myself to primarily be a Mac user, but I switched because I was using and developing for the web more and more, and Mac web browsers were slow and poor, to put it kindly. I was also developing on Microsoft Access quite a bit, and there was no Mac version (and there still isn’t). When I started at StyleFeeder I had a choice and opted for Mac because that’s what the other developers were using, and I have to admit I regret that decision.

Macintosh OSes, especially OS X, are often praised for their quality and usability. Back in the old days, I’d agree. System 7 was hands-down better than Windows 3.1 in pretty much every way I can think of. These days, I’d say the tables have turned, and I can’t think of any way that OS X is better than Windows XP. They both crash infrequently, but I’d give a slight advantage to XP, as it only crashes for me once or twice per year, while the Mac has done so at least twice since August. The Mac came with some decent software for dealing with movies and making music, but I don’t do much of that stuff, and if I did, I’m sure I could find similar Windows software.

My main cause for regret is the overall usability of the OS. There are three main issues here: the lack of keyboard access to commands, the antiquated menu bar, and the MDI.

As a full-time developer, I’m in the power user caste. I am constantly trying to find the shortest and easiest way to get things done. Nobody with half-a-clue about usability would argue that the mouse is an efficient command tool. It’s obviously the ideal way to select things and navigate spatially, but once you’ve gotten where you need to go, you are ready to start issuing commands. On Windows, every command is accessible via the keyboard. Common operations have simple key-combinations, more obscure ones will have more complex combinations. Control-P prints, Control-S saves, and so on. For tasks that are more specialized or used rarely you will likely have to use things like Alt-F-W-F (makes a new folder in explorer) or Shift-Control-F (formats code in Eclipse). These don’t need to be easy, but they are there and every user will find themselves learning a few of them depending on what they do often.

On a Mac, no such luck. Some things like printing and saving are common, but after that it’s seemingly random, left to the application developer to implement, and most don’t. So I have to stop what I’m doing and reach for the mouse all the time. In usability terms, this is a fairly significant hurdle with a high cost, which I find unacceptable. Apple reluctantly introduced “universal keyboard access”, but it’s really bad, and clearly a begrudged afterthought that makes you use the arrow keys to navigate menus.

The second gripe is the way OS X sticks the menu/command bar at the top of the screen. The logic here is that it’s easier to target something that’s at the top of the screen because the mouse won’t go past it, and also that it’s always in the same place. That makes sense in theory, but fails in practice. As mentioned above, the mouse is a last resort for issuing commands, so by the time you’ve reached for it you’ve already incurred significant expense. Spending the extra 50ms targeting the menu is a minor addition to this.

Where the fixed menu bar really fails is when you use more than one monitor, because it sticks to the top of the primary monitor. If you’re working in the second monitor, you now need to grab the mouse, drag it all the way over to the primary screen, and then drag it back.

Also, when working in more than one program, which most people do, you can’t select commands from an inactive program. You need to find a safe place to click to activate it, which is expensive and varies widely, and only then can you access the menu bar. In Windows, you can click on an inactive menu bar and it will open with that same click.

Thirdly, almost all Mac programs use what is called an MDI, or Multiple Document Interface. This means that all the windows for a program are linked together, as opposed to a Single Document Interface (SDI). Microsoft realized that MDI was a poor design in many cases, and fixed this in XP. MDI makes sense for some things, such as dialog boxes and the many windows that Photoshop uses to show things like layers and palettes. It does not make sense for things like email and word processing. Just because I have two documents open in the same word processor doesn’t mean there is any relation between the two. I should be able to alt-tab to a specific document and keep the other one minimized. A minor annoyance related to this is that you can have programs running with no windows at all, which ties up resources and is just plain confusing.

Perhaps the worst part of these problems is that they could be built into the OS as optional behaviors, but they aren’t. Apple’s stance has always been that they know best, and that options are confusing. Unfortunately they compound this defect by being extremely slow to change things even when they are clearly wrong. For proof, you need only see that they still put one-button trackpads on their frighteningly hot, overpriced laptops.

Cracked Foundation: Trac

We’ve been using Trac on a project lately, and it’s a good example of an otherwise decent product being rendered almost completely useless by a simple problem. To be clear, we’re using Trac for defect/issue tracking, so if you aren’t using that part of it, this wouldn’t affect you. The rest of the system seems OK, though I’m not sure why integrating SVN commits into tickets requires some hack via a Perl script (one that won’t work for us). However, that’s just a lack of convenience; the real problem is the way it handles the chain of custody for tickets. Here is Trac’s state diagram for tickets:

Trac Ticket State Chart

I’ve worked with many different issue tracking systems, and if you have too, you’ve probably already noticed what’s missing. I call it the “full circle” concept, for lack of a better term. The crux of the idea is that, in most cases, the person who opened the bug should be the only one to close it. Trac does not have this. The reporter can reopen the ticket, but a closed ticket basically falls off everyone’s radar, so no validation or verification takes place. Here is the state chart with the step I’m referring to:

Minimal Defect Tracking State Chart

Even on a small team, without this step we’re running into people with clogged-up queues, because it’s pretty common for developers (ticket resolvers) to also create tickets from time to time and assign them to another developer. We’ve taken the safe route of reassigning to the reporter rather than closing, so things do get validated before closure, but this makes the ticket queues much larger and more difficult to manage, and sometimes tickets don’t get closed until after the fix has been published.