Workstation Setup 2011

A new workstation means it’s time to install lots of stuff, and we’re still a long way from DropSteam.  Here’s my log from a fresh Windows 7 install in a new VM image to a functional development environment:

First, I hit Ninite and install:

  • All Browsers (I use Chrome as default)
  • Adobe Air
  • Adobe Flash
  • VLC Player
  • IrfanView
  • Inkscape
  • Paint.NET
  • Foxit Reader
  • PDFCreator
  • CutePDF (yes, you need both PDF printers, as it’s fairly common for one of them to have a problem with a particular job)
  • CCleaner (tweak settings before running so you don’t nuke more than you want to, like browser history)
  • 7-Zip
  • Notepad++
  • WinSCP
  • JDK

Then I grab the ZIP of all of the PuTTY programs.  I put installer-less programs like this in C:\bin.

Cloudberry Freeware for Amazon S3 buckets.

Download JavaDoc and install in JDK folder.

Download Eclipse (3.4, not impressed with 4.X so far) and then:

  • Set text font to 8pt Lucida Console
  • Most companies and many open source projects are still using SVN so I install the Subclipse plugin for Eclipse.
  • I’m not a huge fan of m2eclipse but I find that doing eclipse:eclipse from the command line costs you too much integration, so I use it.
  • Turn on all compiler warnings except:
    • Non-Externalized Strings – Enable as-needed
    • serialVersionUID – Not useful for most projects
    • Method can potentially be static – False positives on unit tests
  • Turn on line numbers
  • Install CheckStyle.
  • Install FindBugs.

Maven 3 seems a little rough around the edges, so I still use Maven 2.X.

Install Cygwin and add git, svn, curl, and ssh packages.

Install MySQL Community Edition.  During the installer I:

  • Change the charset to utf8
  • Fix the Windows service name to something like MYSQL5
  • Add to the Windows path
  • Add a password

JRebel.  You’re using this, right?  If not, slap yourself and then go get it.  Pay for the license out of your own pocket if you need to.

Lombok.  I have finally used this on a real project and can say it’s ready for prime-time.  It does not work with IntelliJ IDEA but I haven’t really seen any reasons to use IntelliJ that outweigh the benefits of Lombok.

Photoshop Elements because while IrfanView is great for viewing and Paint.NET is great for simple edits, you will at some point need a more powerful editor.  Also, most designers work in Photoshop, so this lets you open those files directly.

Photoshop Elements+ is basically a $12 unlock of some of Elements’ crippled features.  For me it’s worth it for tracking alone.

LastPass is useful even if you don’t store anything sensitive in it; it’s great for testing webapps with multiple users.

I use Git for my own work so we’ll need that. Don’t forget to set your name!

I also make some Windows tweaks:

  • Set desktop background to black.
  • Check “Show hidden files, folders and drives”.
  • Uncheck “Hide extensions for known file types”.
  • Set %JAVA_HOME% to the JDK directory.
  • Add Maven’s bin directory to %PATH%.
  • Add C:\bin to %PATH%.

I will obviously add more over time, but this is the stuff I know I will need.  What’s great is that almost all of it is free, and it can all be downloaded (except the original Windows install), so no disks are required like in the old days.

You might think this is an incomplete list: where is my email client, my MS/Open Office, my music player?  I don’t use those unless I have to.  Keep in mind that this is a VM, so some of that software is installed on the host OS; for the rest I prefer web-based solutions (Meebo, Google Docs, webmail), so there are no issues with having to keep updating settings.

The Most Important Performance Metric

Up until about 5-10 years ago, the performance gains in computers were huge from one generation to the next. Tasks that took all night soon took an hour, then 15 minutes, then 3 minutes, and so on. You felt these and they changed the way you worked. Eventually, hardware got beyond the point where it changed what you did and just became an “oh that’s nice” effect.

I recently decided, after months of hemming and hawing, to replace my main workstation in my home office. The previous one was a c2008 midrange Dell that had served me well. I had been using some better hardware at my job, a decent desktop with an SSD and then a decent laptop, also SSD. When I went freelance, I splurged on the best laptop I could find, with the exception of it not having an SSD, but rather twin 7200rpm drives in RAID 0. It has an amazing 17″ screen, 8GB RAM, i7 CPU, etc. It ran me $3200, and I figured I’d replace the drives at some point rather than spend the extra $1k from the manufacturer. I wasn’t in any rush because it felt as fast as my previous SSD boxes, so I wasn’t anticipating it being worth the expense, much less the time spent reinstalling.

As for my new desktop, I went similarly all-out. It’s an i7-2600k, 16GB Corsair Vengeance 1600MHz RAM (the most you can reasonably buy at the moment), and 2 x 120 GB Intel 510 SSDs in RAID on an MSI Z68A-G65 mainboard.  The video card is an nVidia GTX 460 SE, which is not really in the same league as the rest of the hardware, but this is primarily a work computer.  Add a 600W power supply, a Cooler Master HAF X case, and Windows 7 Professional, and it came in at about $1800 altogether, which actually isn’t bad. It was very easy to put together, probably 90 minutes, and booted fine the first time I powered it on.

I’ve been using it for the past day, and the difference is astonishing. It’s not just fast, it’s different, and I think I’ve just figured out why.

The most important performance metric for a modern workstation isn’t how fast it completes arbitrary tasks; it’s the percentage of your tasks that complete instantly.

This computer, compared to every other computer I’ve used, Macs, PCs, Linux, whatever, does far more things without any delay whatsoever. So much so that my rhythm is off and I’m actually a little uncomfortable at the moment (I know this will pass).  The familiar delays I was used to, even when they were 50-100ms, whether opening a file or pulling up an autocompletion, are gone.

I guess my point is that if you spend all day on a computer and think there’s no point in upgrading any more except to replace things that break, you will be pleasantly surprised if you decide to splurge a bit.

Java 7

Java 7 (AKA Java 1.7.0) is out! What does this mean, is this a Big Deal like Java 3 or 5 or a snoozer like Java 6 or something in the middle like Java 4? It all depends on what you do with it, but for most Java developers, we’re looking at a Java 4-ish release. There are some new APIs, and some really nice syntax cleanup, but ultimately it’s not a big deal.

Here’s the list of what’s changed:

The stuff I hope I don’t have to care about (AKA The Boring Stuff)

These are the under-the-hood changes I will probably only encounter when debugging someone else’s code.

Strict class-file checking

I’m not entirely sure what this is, and it doesn’t seem like too many other people are either.  I think it means that Java 7 files have to declare themselves as different than previous versions and do some pre-verification as part of compilation.  I don’t know if this is actually something new or just declared baggage.

As specified in JSR 202, which was part of Java SE 6, and in the recently-approved maintenance revision of JSR 924, class files of version 51 (SE 7) or later must be verified with the typechecking verifier; the VM must not fail over to the old inferencing verifier.

Upgrade class-loader architecture

Fixes some deadlock issues when writing non-hierarchical custom classloaders.  If you’re writing custom classloaders, this is probably great news.  Unfortunately, if you’re writing custom classloaders, you’re pretty much screwed with or without this improvement.

More information here.

Method to close a URLClassLoader

Pretty self-explanatory.  If you’re using a URLClassLoader, you may at some point want to close it.  Now you can.
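A minimal sketch of what that looks like; the jar path and class name here are hypothetical:

    import java.net.URL;
    import java.net.URLClassLoader;

    public class PluginLoaderDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical plugin jar; any jar on disk works the same way.
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { new URL("file:plugins/reporting-plugin.jar") });
            try {
                Class<?> plugin = loader.loadClass("com.example.ReportPlugin");
                System.out.println("Loaded " + plugin.getName());
            } finally {
                // New in Java 7: releases the jar so it can be deleted or replaced.
                // Previously you just leaked the file handle.
                loader.close();
            }
        }
    }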

More information here.

SCTP (Stream Control Transmission Protocol)

From what I can gather SCTP is somewhere between UDP’s lightweight “spray and pray” approach and TCP’s heavy-duty “armored convoy”.  Seems like a good choice for modern media narrowcasting and always-on sensor networks, but don’t quote me on that.

More information here.

SDP (Sockets Direct Protocol)

Implementation-specific support for reliable, high-performance network streams over Infiniband connections on Solaris and Linux.

SDP and/or InfiniBand seem to be a style of networking where you can get more throughput by bypassing the nosy TCP stack, but without completely ignoring it.  Kind of like flashing your badge to the security guard and letting all your friends in, I think?

More information here.

Use the Windows Vista IPv6 stack

I’m assuming that this also applies to Windows 7, but I’ll worry about this when my great-grandkids finally make the switch to IPv6 in 2132.

TLS 1.2

Better than TLS 1.1! (TLS 1.2 is SSL 3.3).
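If you want to poke at it, you can now ask JSSE for a TLS 1.2 context by name.  A tiny sketch, assuming the default key and trust managers are fine; whether 1.2 actually gets negotiated still depends on the other end:

    import javax.net.ssl.SSLContext;

    public class Tls12Demo {
        public static void main(String[] args) throws Exception {
            // Java 7's JSSE understands "TLSv1.2" as a protocol name.
            SSLContext ctx = SSLContext.getInstance("TLSv1.2");
            ctx.init(null, null, null); // default key managers, trust managers, and RNG
            System.out.println(ctx.getProtocol()); // prints TLSv1.2
        }
    }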

More information here.

Elliptic-curve cryptography (ECC)

The official description sums it up:

A portable implementation of the standard Elliptic Curve Cryptographic (ECC) algorithms, so that all Java applications can use ECC out-of-the-box.

XRender pipeline for Java 2D

Now, when you aren’t gaming, you can leverage some of your GPU power to do image rendering and manipulation in Java instead of just mining Bitcoins.

Create new platform APIs for 6u10 graphics features

Lets you use opacity and non-rectangular shapes in your Java GUI apps.

More information here.

Nimbus look-and-feel for Swing

Your Swing apps can now look like outdated OS X apps.

More information here.

Swing JLayer component

Useful way to write less spaghetti when trying to make Java GUI apps behave like other modern frameworks.

More information here.

Gervill sound synthesizer

Looks like a decent audio synthesis toolkit.  Not sure if this really needs to be in the JDK but now you can add reverb to your log file monitor’s beeping.

More information here.

The important stuff I don’t really use (AKA The Stuff I’m Glad Someone Thought Of)

These are the changes that probably only help if you’re on the cutting edge of certain facets of using Java.  Eventually these will probably benefit us all but I doubt many people were waiting for these.

Unicode 6.0

Also self-explanatory, new version of Unicode.  Has the new Indian Rupee Sign (₹) and over 2,000 other new glyphs, so it’s certainly important if you use those, but otherwise not so much.  I don’t know enough about switching versions of Unicode to say if this is something to be worried about, but if you have lots of it in your app, it’s probably safe to say you should do some major tests before switching.

More information here.

Locale enhancement

Locale was previously based on RFC 1766, now it is based on BCP 47 and UTR 35.

This isn’t just new data, this is a new data structure, as the number and classification of languages on computers has evolved beyond the capabilities of RFC 1766.  I’ve always been glad that Java is so robust in its internationalization support even though I’ve never really taken advantage of most of it.

Separate user locale and user-interface locale

Now this is interesting, but I can’t find any good use cases yet.  My guess is that this is part of Java’s ongoing push towards multi-platform/multi-language interfaces, notably Android.  You can set a Locale (and the rules a locale dictates) not only on geo/cultural basis but on a device/environment basis.  Locale.Category only has two entries but I wouldn’t be surprised if more of them creep in in future versions.
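Here’s roughly what it looks like in practice, a small sketch of showing the UI in one language while formatting data for another region (the locale choices are arbitrary):

    import java.text.DateFormat;
    import java.util.Date;
    import java.util.Locale;

    public class LocaleCategoryDemo {
        public static void main(String[] args) {
            // Messages, menus, etc. follow the DISPLAY locale...
            Locale.setDefault(Locale.Category.DISPLAY, Locale.GERMAN);
            // ...while numbers, dates, and currency follow the FORMAT locale.
            Locale.setDefault(Locale.Category.FORMAT, Locale.US);

            System.out.println(Locale.getDefault(Locale.Category.DISPLAY)); // de
            System.out.println(DateFormat.getDateInstance().format(new Date())); // US-style date
        }
    }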

JDBC 4.1

java.sql.Connection, java.sql.Statement, and java.sql.ResultSet are AutoCloseable (see try-with-resources below).

Also, you can now get RowSets from a RowSetFactory that you get from a RowSetProvider.  If you’re still writing JDBC this probably makes your life easier, but only in the "at least I'm not writing custom classloaders" way.
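A quick sketch of both halves, the AutoCloseable JDBC objects and the new RowSet factory chain (the connection string, credentials, and query are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.sql.rowset.CachedRowSet;
    import javax.sql.rowset.RowSetFactory;
    import javax.sql.rowset.RowSetProvider;

    public class Jdbc41Demo {
        public static void main(String[] args) throws Exception {
            // try-with-resources closes the ResultSet, Statement, and Connection
            // for you, in reverse order, even if something throws.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/test", "dev", "secret"); // hypothetical database
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT name FROM users")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }

            // The new factory chain for RowSets.
            RowSetFactory factory = RowSetProvider.newFactory();
            CachedRowSet rowSet = factory.createCachedRowSet();
            rowSet.setUrl("jdbc:mysql://localhost/test");
            rowSet.setCommand("SELECT name FROM users");
            // rowSet.execute() would run it, given a driver on the classpath.
        }
    }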

Update the XML stack

Upgrade the components of the XML stack to the most recent stable versions: JAXP 1.4, JAXB 2.2a, and JAX-WS 2.2

Your XML library is always a version that’s too old or too new, so you might as well go with the new one.

The important stuff I don’t use yet (AKA The Cool Stuff)

InvokeDynamic – Support for dynamically-typed languages (JSR 292)

The short version is that this makes things better for people implementing dynamically-typed languages on the JVM, like Clojure.  While the JVM is actually less statically typed than you might think if you’ve only used it via normal Java, methods in particular were difficult for these languages.
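You mostly only feel the benefit from inside a language runtime, but the java.lang.invoke API that backs it is plain Java and easy to poke at.  A tiny taste:

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    public class MethodHandleDemo {
        public static void main(String[] args) throws Throwable {
            // A method handle is a typed, directly invokable reference to a method,
            // which is the sort of thing dynamic-language call sites get linked to.
            MethodHandle length = MethodHandles.lookup().findVirtual(
                    String.class, "length", MethodType.methodType(int.class));
            int n = (int) length.invokeExact("invokedynamic");
            System.out.println(n); // 13
        }
    }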

The long version is here.

Enhanced MBeans

I’ve never knowingly used MBeans (Managed Beans), but they do seem useful for large systems.  Regardless, they’ve been enhanced:

Enhancements to the existing com.sun.management MBeans to report the recent CPU load of the whole system, the CPU load of the JVM process, and to send JMX notifications when GC events occur (this feature previously included an enhanced JMX Agent, but that was dropped due to lack of time).
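If you’re curious, reading those new numbers is only a couple of lines.  A sketch, with the caveat that it casts to the com.sun.management variant, so it’s JDK-specific rather than standard:

    import java.lang.management.ManagementFactory;

    public class CpuLoadDemo {
        public static void main(String[] args) {
            // The platform MXBean can be cast to the richer com.sun.management version
            // on an Oracle/OpenJDK 7 runtime to get the new load figures.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();
            System.out.println("System CPU load:  " + os.getSystemCpuLoad());  // 0.0-1.0, or negative if unavailable
            System.out.println("Process CPU load: " + os.getProcessCpuLoad());
        }
    }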

The Important Stuff I Use (AKA The Good Stuff)

These are the reasons I’ve even bothered to look at Java 7.  I’ll probably write a follow-up on each of these when I have some experience with them.

JSR 203: More new I/O APIs for the Java platform (NIO.2)

I’m including this here because I’m actually doing some data/file work where this will come in handy.  Basically this exposes a lot of I/O and Filesystem functionality that you previously had to dip into native libraries for, like actually telling the OS to copy a file rather than reading and writing the data yourself.  If you’re not doing any file-based work right now, keep a mental note to look into this the next time you type “File”.
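For example, copying a file is now a single call that the JDK can hand off to the OS, instead of a read/write loop.  A sketch with placeholder paths:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class FileCopyDemo {
        public static void main(String[] args) throws Exception {
            Path source = Paths.get("data", "input.csv");    // hypothetical paths
            Path target = Paths.get("backup", "input.csv");
            Files.createDirectories(target.getParent());
            // One call instead of a buffer-shuffling loop.
            Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Copied " + Files.size(target) + " bytes");
        }
    }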

More information here.

NIO.2 filesystem provider for zip/jar archives

Included here for similar reasons as the previous item, but I think it’s pretty cool that you can treat a zip file as a filesystem and operate within it.  Unfortunately it looks like the basic implementation is read-only, but I’m guessing the pieces are there for a read/write version.
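A sketch of reading a file straight out of an archive; the zip name and entry are hypothetical:

    import java.nio.charset.StandardCharsets;
    import java.nio.file.FileSystem;
    import java.nio.file.FileSystems;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;

    public class ZipFsDemo {
        public static void main(String[] args) throws Exception {
            // Mount the archive as a FileSystem and use the normal Files/Path API inside it.
            try (FileSystem zipfs = FileSystems.newFileSystem(Paths.get("reports.zip"), null)) {
                Path entry = zipfs.getPath("/summary.txt");
                List<String> lines = Files.readAllLines(entry, StandardCharsets.UTF_8);
                for (String line : lines) {
                    System.out.println(line);
                }
            }
        }
    }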

More information here.

Concurrency and collections updates (jsr166y)

java.util.concurrent, as far as I’m concerned, is something everyone who considers him/herself a senior Java dev should be familiar with.  It’s rare that your code will not be able to take advantage of it in some way.  One of the missing pieces was a good fork/join component, where you can split jobs and wait for them all to complete in parallel before moving on.  I actually built a version of this approach (sadly not open sourced), so I’m eager to tinker with the new standard one.  I expect that, like most of the concurrency library, you’ll need to sprinkle in some of your own helper code to make things elegant, but the hard parts are solved and built and ready for assembly.
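A quick sketch of the shape of it, splitting a sum across a ForkJoinPool (the threshold and data are arbitrary):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 10_000; // arbitrary cutoff for going sequential
        private final long[] data;
        private final int from, to;

        SumTask(long[] data, int from, int to) {
            this.data = data;
            this.from = from;
            this.to = to;
        }

        @Override
        protected Long compute() {
            if (to - from <= THRESHOLD) {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }
            int mid = (from + to) / 2;
            SumTask left = new SumTask(data, from, mid);
            SumTask right = new SumTask(data, mid, to);
            left.fork();                      // run the left half asynchronously...
            long rightSum = right.compute();  // ...compute the right half here...
            return left.join() + rightSum;    // ...then wait for the left and combine.
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;
            long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
            System.out.println(total); // 499999500000
        }
    }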

More information here.

Project Coin – Small language enhancements (JSR 334)

Ah, the sugary goodness that you’ve been waiting for.

  • Strings in switch – Not terribly useful, since we could (and probably still should) use enums, and you still have to use if blocks for anything fancy like regex matches.
  • Binary integral literals and underscores in numeric literals – I never really thought of this as a problem, much less one worth solving, but sure, why not.
  • Multi-catch and more precise rethrow – Now this, as an ardent fan of exceptions, I like.  It not only cuts down on copy/paste exception handling, it helps avoid subclassing exceptions, which is almost always the wrong approach.  More info on this later.
  • Improved type inference for generic instance creation (diamond) – Great idea that really should have always been there. Example:
    Map<String,Integer> myMap = new HashMap<>();
  • try-with-resources statement – Probably the most important syntax change.  Now you can skip all of those finally { resource.close(); } blocks that were important but verbose even for my taste.  Doesn’t really help with the situation of the close method throwing an exception of its own though, so I’m curious to see how I end up handling that situation.  (There’s a combined sketch of these after this list.)
  • Simplified varargs method invocation – Just avoids some warnings that really never should have been there in the first place.
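Here’s a little sketch that crams strings-in-switch, multi-catch, and try-with-resources into one place; the config file name is just a placeholder:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CoinDemo {
        public static void main(String[] args) {
            String env = (args.length > 0) ? args[0] : "dev";

            // Strings in switch
            switch (env) {
                case "prod":
                    System.out.println("careful now");
                    break;
                case "dev":
                default:
                    System.out.println("hack away");
            }

            // try-with-resources: the reader is closed automatically; if close() also
            // throws after the body threw, its exception is attached as a "suppressed" one.
            try (BufferedReader in = new BufferedReader(new FileReader("settings.conf"))) {
                System.out.println(in.readLine());
            } catch (IOException | RuntimeException e) {
                // Multi-catch: one handler, no copy/paste, no lowest-common-superclass hack.
                System.err.println("Couldn't read config: " + e);
            }
        }
    }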

More information here (PDF).

Summary

So basically, a bunch of “nice to have” stuff.  Code will be a little cleaner, resources will be closed a little more often.  GUI apps will be a little fancier.  Some more people might try concurrency.  Programs will run a little faster.  The next thing you know we’ll be looking at Java 8.
Readability + Kindle + Something Else

I really like my Kindle. Beyond all of the more tangible/advertised benefits it has, the most important thing it’s done for me is that I’ve been reading more since I started using it.

I also really like Readability, I think it’s an optimistic and hopeful view of the future of content on the internet, rather than the arms race of ad blockers and the AdWords-fueled plague of content scrapers.

The fact that these two things I like can join forces is also great. I can send an article to my Kindle via Readability. If I see some long, thoughtful piece, I click two buttons and it will be there for me when I settle in for the evening. Unfortunately I don’t/can’t use this as much as I’d like for two reasons.

Lost Commentary

I find most of my new/fresh content via link-sharing sites. Starting long ago in the golden age of Slashdot, I’ve gotten into the habit of checking the comments on an article before I read it. I don’t usually read the comments, I just skim them, and get a sense of how worthwhile it is to read the article. If I see a healthy discussion, or lots of praise, it’s clearly something worth spending a few minutes on. Even if I see some well-written refutations, it can be valuable in a “know your enemy” sense. If I see something like “Here’s the original article” or “How many times will this be reposted?” then perhaps I’ll just move on.

After I’ve read the article I might go back and read those comments, or perhaps even leave one. With the Kindle/Readability combo, I can’t do that. Blog posts will come through with their own comments, but for whatever reasons, there always seems to be better discussion offsite.

Linkability

The “premium” content sources like major magazines or newspapers rarely link off of their stories. I think this is conditioning from the print era, but it actually plays well to this offline system. If an author talks about another website he’ll probably include the relevant details in the article, or quote it, or include a screenshot.

Blogs, however, are chock-full of links, often without context, sometimes just for humor, but sometimes as an essential part of the article. Very few blog posts are actually viable in a vacuum. I have a special folder in Google Reader called “droid” which holds blogs that generally don’t do this, and are good for reading when I have idle time (via my phone, hence the name) and don’t want to deal with links.

Something Else

I’d like to have some way to read an article or post offline that can pull in these other sources. Perhaps a “send to Kindle” that actually generates an ebook with the article as the first chapter and comments from my favorite sites collated into other chapters. Or perhaps a Kindle app that can do this and stay updated. What I don’t want is a mobile OS-style app that pops open browser windows, as that’s an entirely different use case. A “send back to computer” would be useful for stories that require switching back to browse mode.

TLDR: Sometimes I just want to read, not browse.

Security Club

Sony has been getting repeatedly hacked. We’ve seen it before with the TJX incident, and many others, most of which never get reported, much less disclosed, or even discovered. In some of these cases, only email addresses are taken, or maybe passwords. In others, names and addresses are exposed, as well as medical conditions, social security numbers, and other very sensitive information. It seems to me that this is happening more often, and I think it’s for a few reasons.

Bigger Targets

The first reason is that targets are getting bigger and the rewards go up with size. Nobody is going to waste their time getting into a system with a few thousand random users when you can get tens of millions for the same effort. As more people use more sites, it’s only natural that there are going to be more million-user sites out there. This reason isn’t a big deal, it’s just the way things are.

Better Rewards

The second reason is that more companies are collecting more data about their users. This data is valuable, possibly the most valuable asset some of these companies have. Facebook and Google make much of their money from knowing about you: what you do online, the types of things you’re interested in, the different ways to contact you.

Large companies like Sony can afford to take whatever information you give them and cross-reference it against various databases to get even more information about you. This lets them focus marketing efforts, tailor campaigns to you, shape product development, and so on. It also lets them make the site easier to use with pre-filled information, to increase sales and conversions.

We don’t even really question when a site asks us for our name any more. What’s the harm, right? Sure, I’ll give them my ZIP code too, and maybe even my phone number, they probably won’t call me anyways, right? Now ask yourself, why do you need to give your name, mailing address and phone number to a company to play a game where you are a pseudonymous elf?

The real answer is that they don’t. They might need it for billing purposes, but billing databases are kept much more secure for reasons I’ll explain later. They ask for this information because it’s free, and because you’ll give it to them, and because it’s valuable to them. It’s probably not protected very well, and when it gets stolen everyone shrugs, changes the password on the database, emails the FBI to make it look like they care, and gets back to more important business like social media strategies.

No Penalties

The companies involved are embarrassed and probably suffer some losses as a result, but these are mostly minor injuries. The news stories spin it to make the intruders the sole criminals, and lose interest. The only people who really pay for these incidents are the people whose data has been stolen. There are no requirements on what companies have to do to protect this information, no requirements on what they need to do if it is compromised, no penalties for being ignorant or reckless. Someone might figure out that it’s going to cost them some sales, and they put some money in the PR budget to mitigate that.

This is the reason why billing information is better secured. The credit card companies take steps to make sure you’re being at least a little responsible with this information. And in the event it leaks, the company that failed to protect it pays a real cost in terms of higher fees or even losing the ability to accept cards at all. These numbers make sense to CEOs and MBAs, so spending money to avoid them also makes sense.

How to Stop It

There are obviously a large number of technological measures that can be put in place to improve security, but there’s one that is far simpler and much more foolproof. But first, let’s look at banks. Banks as we know them have been around for a few hundred years. I’d bet that you could prove that in every single year, banks got more secure. Massive vaults, bullet-proof windows, armed guards, motion detectors, security cameras, silent alarms, behavioral analysis, biometric monitors, the list goes on and on, and all of these things actually work very well. But banks still get robbed. All the time. When was the last time you heard of a bank robber getting caught on their first attempt? They are always linked to dozens of other robberies when they do get caught. Why?

Because they’re full of money.

They can make it harder to rob them. They can make it easier to catch the people who did it. But the odds don’t always matter to someone who sees a pile of money sitting there for them to take if they can bypass these tricks.

People break into networks for many reasons, but the user data is often the pile of gold that they seek. So the most effective way to stop someone from breaking in and stealing it is to not have it in the first place. This advice works in 2011, it will work in 2020. It works on Windows, OS X and Linux. It works online and offline, mobile or laptop, and so on.

“The first rule of security club is you try not to join security club: minimize the amount of user data you store.” – Ben Adida

So if you’re in a situation where you need to figure out if your data is secure enough, or how to secure it, start with this question: Do you need it in the first place? Usability people say they want it. Marketing people say they need it. If you’re an engineer, it’s a losing battle to argue those points, because they’re right. Stop convincing people why you shouldn’t do it, and put a cost on it so they have to convince each other that it’s worth it.

Anyone who went to business school knows the cost/value balance of inventory. It’s pretty straightforward to discuss whether a warehouse should be kept open or expanded or closed. Nobody wants to store anything for too long, or make too much product or have too many materials. But ask them about how much user data you should be storing and the response will be something like “all of it, why wouldn’t we?”.

So make sure that the conversion bump from using full names, asking age and gender, and doing geo-targeting covers the cost of the security measures required to protect that data. Make it clear that those costs are actually avoidable, they are not sunk. They are also not a one-time investment. They should show up every quarter on the bottom line of anyone who uses that data. And if nobody wants to pay for it, well, you’ve just solved a major part of your security problem, haven’t you?

Update 10/18/2011: “FTC Privacy Czar To Entrepreneurs: ‘If You Don’t Want To See Us, Don’t Collect Data You Don’t Need’”

Life After JSON, Part 1

XML. Simply saying that term can elicit a telling reaction from people. Some will roll their eyes. Some will wait for you to say something meaningful. Some will put their headphones back on and ignore the guy that sounds like an enterprise consultant. JSON is where it’s at. It’s new. It’s cool. Why even bother with stupid old XML? JSON is just plain better. Right?

Wrong. But that’s not to say that it’s worse, it’s just not flat-out better, and I’m a little concerned that it’s become the de facto data format because it’s really not very good at that role. I think its success is a historical accident, and the lessons to be learned here are not why JSON is so popular, but why XML failed.

Without going into too much detail, XML came out of the need to express data with more complex relationships than formats like CSV allowed. We had OOP steamrolling everything else in the business world, HTML was out in the wild showing that this “tag” idea had promise, and computers and I/O were finally able to handle this type of stuff at a cost level that didn’t require accounting for every byte of space.

In the beginning, it worked great. It broke through many of the headaches of other formats, it was easy to write, even without fancy tools, and nobody was doing anything fancy enough in it that you had to spend a ton of time learning how to use it. After that though, things got a little, actually a lot … messy. User njl at Hacker News sums it up very nicely:

I remember teaching classes in ’99 where I needed to explain XML. I could do it in five minutes — it’s like HTML, but you get to make up tags that are case-sensitive. Every start tag needs to have an end tag, and attributes need to be quoted. Put this magic tag at the start, so parsers know it’s XML. Here’s the special case for a tag that closes itself.
Then what happened? Validation, the gigantic stack of useless WS-* protocols, XSLT, SOAP, horrific parser interfaces, and a whole slew of enterprise-y technologies that cause more problems than they solve. Like COBRA, Ada, and every other “enterprise” technology that was specified first and implemented later, all of the XML add-ons are nice in theory but completely suck to use.

So we have a case here of a technology being so flexible and powerful that everyone started using it and “improving” it. We started getting questions from recruiters like “can you program in XML” and hearing things like “they used to do C++ but now they’re trying to use a lot more XML”. We had insanely complex documents that were never in step with the schemas they supposedly used. We had big piles of namespaces nobody understood in the interest of standards and reusable code.

Where the pain was really felt was in the increasing complexity of the libraries that were supposed to make XML easy to use, even transparent. There was essentially an inexhaustible list of features that a library “needed” to support, and thus none of them were ever actually complete, or stable. 15 years later there are still problems with XML parsing and library conflicts and weird characters that are legal here but not there.

So, long before anyone heard of JSON, XML had a bad rep. While the core idea of it was sound, it ultimately failed to achieve its primary goal of being a universal data interchange format that could scale from the simplest document to the most complex. People were eager to find a solution that delivered the value XML promised without the headaches it delivered. In fact, they still are, as nothing has really come close yet, including JSON, but perhaps we can build on the lessons of both XML and JSON to build the real next-big-thing.

XML ended up being used for everything from service message requests and responses to configuration files to local data stores. It was used for files that were a few hundred bytes to many gigabytes. It was used for data meant to represent a single model to a set of models to a stream of simple models that had no relation to each other. The fact that you could use it for all of these things was the sexy lure that drew everyone to it. Not only could you use it for all of these things but it was actually designed to be used for all of these things. Wow!

Ultimately, I think this was the reason for its downfall, and JSON’s unsuitability for doing everything has, not surprisingly, been the reason for its ascendance on its home turf (JavaScript calling remote services). Does it really make sense for my application configuration file to use the same format as my data caches and as my web services? Even now it’s hard for me to say that it’s a bad idea, because the value of having one format to parse, one set of rules to keep in mind, one set of expectations I don’t need to cover in my documentation is nice. But with XML and JSON both proving this to be false in different ways, we have to take it as such.

The problem on the horizon is that as JSON becomes the go-to format for new developers, the people who have been told all along that XML is bad and horrible and “harmful” are using it for everything. We’re heading towards a situation where we have replaced a language that is failing to do what it was designed to do with a language that is failing to do things it was never designed to do, which I think would actually be worse. Even if it’s not worse, it’s certainly still bad.

If we abide by the collective verdict that XML is not the answer, or at least it’s an answer we don’t like, and my prediction that JSON isn’t either, what is? And why is nobody working on it? Or are they?

To be continued…

Books as Clutter

Like the culture at large, I’m moving from physical media to digital. I’m slowly getting rid of almost all paper documents via my scanner. I haven’t bought a CD in years. I’ve never been a collector of movies. I haven’t had a roll of film developed since the early 90s. Our printer isn’t even usually hooked up, and when it is it’s usually to sign-and-scan a contract or something, as I haven’t found a great replacement for that yet.

Even more than the digital conversion, I don’t even bother with much physical media. Files are backed up to online services like S3 or Rackspace via Jungledisk. I have some 1TB external drives for peace of mind and Verizon’s inevitable billing errors, but never burn anything to DVD or CD.

I kept all of my old CDs, because I wasn’t comfortable with throwing out full-quality versions of something. There is FLAC, but at the time I switched a few years ago I wasn’t happy with the FLAC-encoding tools, so I went with 256k VBR MP3 files, and figured I’d re-encode them again someday and then be able to toss the discs. This argument doesn’t make a ton of sense given that I now pay for degraded copies of new music, but that’s a little psychology I’ll put off analyzing.

Books, however, are tough for me. I’ve been using a Kindle for a while, and love it, as do most who have one. I look at my bookshelf and think “this doesn’t really need to be here”. I’ve tossed a number of books, but I pick up an old Choose Your Own Adventure book and the innumerable hours I spent reading and re-reading them come back to me. The thought of throwing it away is unsettling.

I will probably never even read these books again. I’m not sure if my potential future kids would bother with them, but the sentimentality runs too deep. So I keep them, and even my minimalist girlfriend probably understands. It’s not like I have thousands of them, there’s probably 100 books I can’t toss.

Some books I keep because technology just hasn’t caught up to them yet. Cookbooks, picture books, and so on. These will probably go eventually, I end up tossing a few each time I look through the shelves. The sentimental books that are signed by the author or were gifts I keep too, I don’t see a real clutter issue there either, and again, I don’t have too many of these.

The quandary comes with new books. These books have no sentimental value yet, nor will they ever, and I think that’s part of the trouble. The part of me that wants to move into the future, to be more mobile, more organized, more free of physical possessions has not yet found a decisive victory over the nerd who did a book drive for his Eagle project and spent those precious half-days of school lost in the annex at the Ames Free Library.

I’ve always looked at books as some strange kind of investment. I spend $10 on a book, and read it. I can read it again years later, or give it to someone else; there’s some residual value (not monetary, selling books is hardly worth the trouble IMO). But now I click “buy it now” and I still realize what is arguably the real value of the book, yet I have nothing to put on my shelf and page through from time to time. Nothing to jog my memory when I see it, or spark a conversation when someone else sees it. Nothing I can hand to a friend and say “you really need to read this”.

I’m not even that concerned with Amazon going away or revoking my access to these books, while that would be unfortunate, I can always buy them again somewhere else. Physical books can be destroyed or stolen too, probably even more easily than e-books. Nor am I too concerned with the privacy issues, although I do recognize that lack of concern is a privilege not everyone has. The idea of an oppressive government “burning” or suppressing a book is real as well, but I think computers are so numerous now that this knowledge will find a way to survive. One small hard drive can hold literally millions of books, I’m sure at least one copy will survive.

Unfortunately this isn’t a very constructive post, as I don’t think there is an actual solution to this. I guess this is more of a eulogy. It’s a problem faced by most generations that see the things they grew up valuing being devalued, and it stings for someone who, if you asked him anywhere from age 5 to 15, probably would have said the most valuable/important thing he owned was his book collection.

To switch or not to switch, part 3

Continued from Part 2, we’re down to 41 languages that you could potentially build a modern, database-driven application with. I’d like to knock another 10-12 off the list before I get into trying the languages themselves, but I’m running out of black-and-white rules to do it with.

Concurrency is a big part of scaling, but that’s a tough characteristic to pin down. Any JVM-based language is going to have access to a great threading model, but people have been able to scale languages that lack threads entirely, or, like Python (Edit: Python has threads), have poor or difficult threading, to decent volumes as well, so clearly threads are not a requirement.

I spend much of my time writing web software, so robust support of that, like Java’s Servlet specification and third party additions like Spring MVC, is nice to have as well. Ruby and Python have made many of their gains based on the strengths of their web frameworks. However, an otherwise good language could add a web framework relatively easily, so again it’s not a requirement.

The way a language handles text is important when working with users, databases, web services, etc., but this can be addressed with libraries. Object orientation can be nice too, as can other models. Interpreted vs. compiled, machine code vs. byte code, exceptions, static typing, dynamic typing; the list of details goes on, and they matter, but not so much that I can’t live without any particular one. I certainly prefer static typing, checked exceptions, painless threading, garbage collection, run-anywhere bytecode, running in a virtual machine, but I know smart people who can make valid arguments against every one of those, and maybe I just haven’t given the alternatives enough of a chance.

What is really important to me about a language is its community, leadership, and, for lack of a better term, motive. For this reason I’m going to knock the Microsoft-led CLI languages off the list. I know that CLI is a standard, and that the open source implementation is not beholden to it, but I’m simply not going to program in the Microsoft ecosystem. I think they just don’t care about open source and independent development. This eliminates:

  • C#
  • Boo
  • Cobra
  • F#

It’s kind of a shame because F# looks interesting and is something I’d like to tinker with some day. I also think C# is actually a pretty good language, and, while it initially seemed like Java Jr., it evolved to have some powerful features, more so than Java in some ways. Unfortunately, even with Mono in the picture, it’s still a Microsoft product to me.

It should definitely be noted that I’m not exactly happy about Oracle owning Java now either. Due to Java’s inertia, I haven’t seen any Oracle decisions affect me yet, and I’ve probably got a number of years before I have to deal with that if and when they decide to do something bad. They could kill it entirely and people would still be using it for a long time.

Along similar lines, Objective-C’s fate seems to be tied closely with the whims of Apple. I don’t see any interest in using it for non-Apple OS projects, so it is bumped off the list as well.

  • Objective-C

Also some of these languages have such a small community or slow pace of development that they don’t seem worthy of investing in:

  • ALGOL 68
  • Clean
  • Dylan
  • GameMonkey Script
  • Mirah
  • Unicon

Three of these languages are active, but their communities (and therefore the languages) are solving very different problems: Processing with its graphics and visualizations, Scratch with its introductory/training focus, and Vala with its close ties to GNOME applications.

  • Processing
  • Scratch
  • Vala

This leaves 27 languages moving on to part 4, most of which I’m definitely looking forward to learning more about:

  • Ada
  • Clojure
  • Common Lisp
  • D
  • Eiffel
  • Erlang
  • Falcon
  • Fantom
  • Factor
  • Go
  • Groovy
  • Haskell
  • Io
  • Java
  • JavaScript
  • Lua
  • MUMPS
  • Pike
  • Pure
  • Objective Caml
  • Python
  • Ruby
  • Scala
  • Scheme
  • Squeak
  • Tea
  • Tcl

Too much RAM?

I’ve been spec’ing out a new desktop and am intrigued by something I’m not sure I’ve seen before. It’s actually possible to get a computer with too much RAM now. 32GB is not really high-end, or expensive, and yet I can’t really think of any consumer/business application that would use that. Programs like Photoshop or video editing or scientific/engineering software have pretty much infinite appetites, so for those it’s really a question of diminishing returns.

As for me, I could literally put an entire Windows 7 virtual machine on a RAM disk* with a ton of code, some decent-sized databases, and give that VM 4-8GB of RAM, and still have physical memory to spare. That said, I’m certainly going to get (at least) 32 because it’s a cheap upgrade from 8 or 16, so if I find a use for it, I’ll report back.

Making claims about the future can haunt you forever, even if you didn’t actually say it, so I’m not saying that we’ll never need 32GB, or that there will ever actually be such a thing as “too much RAM”, but I’m curious how long it will be until we do.

*You might think that the RAM disk is dead with the availability of SSD, but RAM is still dramatically (50-100x) faster than even high-end SSDs like Fusion-io, thanks to the limitations of the SATA or PCI bus.

To switch or not to switch, part 2

Continued from Part 1, I’m looking at candidates to replace my current main language: Java. I’m not actually eager to get rid of Java, I still really like it, and enjoy working in it, but I need to convince myself that it’s the right choice for me to continue to invest unpaid time in, or I need to find something that is. My career is, optimistically, 1/3 over, I’ve got 20-30 years left tops, and while I’d actually bet that Java programmers will be needed in 30 years, I’m not sure how much interesting stuff will be happening.

So, let’s add a couple more rules to winnow the set.

Rule 3: It must have some kind of network database support.
Almost everything I do involves a database at some point. The volumes of data and network architectures we deal with today rule out simple file I/O, or even local-only databases. I did not look especially hard for the answer to this question; in my opinion, if it isn’t easy to find, it’s not sufficient. Technically, I could write/port my own driver, but if nobody else has done it, I have to suspect that the users are solving very different problems than I am. This eliminates:

  • Agena
  • ATS
  • BASIC
  • BETA
  • Diesel
  • E
  • FORTH
  • Icon
  • Ioke
  • Logo
  • Maple
  • MiniD
  • Miranda
  • Modula-3
  • Nu
  • Reia
  • Sather
  • SQL
  • Self
  • SPARK
  • Squirrel
  • Timber

Rule 4: This is a tricky one, but I’m going to say it must not have “fallen from grace”. This is essentially the state that Java is entering, depending on who you ask. It’s perfectly functional, and widely used, but it’s had its day and isn’t hip anymore. This doesn’t exclude languages that are just old, but were never at the top, like Eiffel, but I don’t see any reason to abandon Java and go with COBOL.

  • C
  • C++
  • COBOL
  • Fortran
  • Pascal
  • Perl
  • PHP
  • Visual Basic .NET

Now, some of those languages, like C, are still very popular, and important. You could even say that they are continuing to get better and stay modern. C will probably outlive most of these languages, as none of them are strong candidates to rewrite Linux in yet. My argument is that nobody is really using C to solve any new problems in new ways. This leaves 41 languages that are active, capable of doing at least basic database operations, and have not entered decline.

  • Ada
  • ALGOL 68
  • Boo
  • C#
  • Clean
  • Clojure
  • Cobra
  • Common Lisp
  • D
  • Dylan
  • Eiffel
  • Erlang
  • F#
  • Factor
  • Falcon
  • Fantom
  • GameMonkey Script
  • Go
  • Groovy
  • Haskell
  • Io
  • Java
  • JavaScript
  • Lua
  • Mirah
  • MUMPS
  • Objective Caml
  • Objective-C
  • Pike
  • Processing
  • Pure
  • Python
  • Ruby
  • Scala
  • Scheme
  • Scratch
  • Squeak
  • Tcl
  • Tea
  • Unicon
  • Vala