Amazon Showrooms

Amazon caught a lot of heat this past holiday season over some improvements to its shopping app.  It made it easier than ever to find out that you probably don’t need to buy that blender at Sears, when you can get it for 30% less on Amazon and don’t even have to carry it home.  There were cries that small businesses couldn’t compete with this and would all be dead soon.

There is nothing at Best Buy or Barnes & Noble that you can’t get on Amazon (or many other online stores).  It’s rare that it will not be cheaper online, even during a sale (which typically just brings the price down to a normal online price).  Is it sustainable to have a store where I can go and hold something, and then order it from somewhere else?  No.  Should we feel bad for the big box stores?  No.  Should we feel bad for the shopkeeper who sells a particular niche at a high markup without adding value?  No.

Stores that only sell commodity products are a recent innovation to take advantage of a temporary imbalance.  They will eventually go the way of dodos, video rental stores, and record labels.  We’re still going to have a few, because there are enough “need it now” purchases to sustain the Targets and Wal-Marts, and we’ll probably still have a few high-end ones where excellent service matters like Nordstrom, but most of the stores out there are turning into showrooms.

What if Amazon bought BJs?

(BJs is a consumer warehouse/bulk goods store, like Costco and others).  My Prime membership takes the place of my BJs membership.  Instead of walking around with a giant shopping cart and driving home with mass quantities of things, I simply browse the aisles for products I like.   When I see one, I scan it with my phone, and it’s on my doorstep the next day.  There are a few people on staff who might help, and there’s a hotline to specialists who understand the products and can answer my questions.  No need for a massive loading-dock infrastructure, or inventory control, or 50′ tall ceilings to heat, or many of the other overhead expenses that yield the current retail markups.

What about the little guy?

I’d like to see our shops go back to actually making things and/or adding value.  Custom products, not “regional dealers”.  There will definitely be fewer of them, but this will free up space and lower rents for the people who just want a spot where they can sell their craft, or for people to provide useful services instead of distribution.

Readability + Kindle + Something Else

I really like my Kindle. Beyond all of the more tangible/advertised benefits it has, the most important thing it’s done for me is that I’ve been reading more since I started using it.

I also really like Readability. I think it’s an optimistic and hopeful view of the future of content on the internet, rather than the arms race of ad blockers and the AdWords-fueled plague of content scrapers.

The fact that these two things I like can join forces is also great. I can send an article to my Kindle via Readability. If I see some long, thoughtful piece, I click two buttons and it will be there for me when I settle in for the evening. Unfortunately I don’t/can’t use this as much as I’d like for two reasons.

Lost Commentary

I find most of my new/fresh content via link-sharing sites. Starting long ago in the golden age of Slashdot, I’ve gotten into the habit of checking the comments on an article before I read it. I don’t usually read the comments, I just skim them, and get a sense of how worthwhile it is to read the article. If I see a healthy discussion, or lots of praise, it’s clearly something worth spending a few minutes on. Even if I see some well-written refutations, it can be valuable in a “know your enemy” sense. If I see something like “Here’s the original article” or “How many times will this be reposted?” then perhaps I’ll just move on.

After I’ve read the article I might go back and read those comments, or perhaps even leave one. With the Kindle/Readability combo, I can’t do that. Blog posts will come through with their own comments, but for whatever reason, there always seems to be better discussion offsite.

Linkability

The “premium” content sources like major magazines or newspapers rarely link off of their stories. I think this is conditioning from the print era, but it actually plays well to this offline system. If an author talks about another website he’ll probably include the relevant details in the article, or quote it, or include a screenshot.

Blogs, however, are chock-full of links, often without context, sometimes just for humor, but sometimes as an essential part of the article. Very few blog posts are actually viable in a vacuum. I have a special folder in Google Reader called “droid” which are blogs that generally don’t do this, and are good for reading when I have idle time (via my phone, hence the name) and don’t want to deal with links.

Something Else

I’d like to have some way to read an article or post offline that can pull in these other sources. Perhaps a “send to kindle” that actually generates an ebook with the article as the first chapter and comments from my favorites collated into other chapters. Or perhaps a Kindle app that can do this and stay updated. What I don’t want is a mobile OS-style app that pops open browser windows, as that’s an entirely different use case. A “send back to computer” would be useful for stories that require switching back to browse mode.
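As a sketch of what that collation could look like: take the article plus the comment threads gathered from link-sharing sites, and emit them as ordered chapters ready for ebook packaging. Everything here (the data shapes, the chapter naming, the function name) is hypothetical, not any real service’s API.

```python
def build_reading_bundle(article_title, article_body, discussions):
    """Collate an article and its offsite discussion into ordered chapters.

    'discussions' maps a source name (e.g. a link-sharing site) to a list
    of comment strings; each source becomes its own chapter after the
    article, so you can read the piece first and the commentary after.
    """
    chapters = [(article_title, article_body)]
    for source, comments in discussions.items():
        chapters.append((f"Discussion on {source}", "\n\n".join(comments)))
    return chapters
```

A real version would feed these chapters into an ebook builder and push the result over Whispernet, but the collation step is the part that no service offers today.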

TLDR: Sometimes I just want to read, not browse.

Security Club

Sony has been getting repeatedly hacked. We’ve seen it before with the TJX incident, and many others, most of which are never reported, much less disclosed, and some are never even discovered. In some of these cases, only email addresses are taken, or maybe passwords. In others, names and addresses are exposed, as well as medical conditions, social security numbers, and other very sensitive information. It seems to me that this is happening more often, and I think it’s for a few reasons.

Bigger Targets

The first reason is that targets are getting bigger and the rewards go up with size. Nobody is going to waste their time getting into a system with a few thousand random users when you can get tens of millions for the same effort. As more people use more sites, it’s only natural that there are going to be more million-user sites out there. This reason isn’t a big deal, it’s just the way things are.

Better Rewards

The second reason is that more companies are collecting more data about their users. This data is valuable, possibly the most valuable asset some of these companies have. Facebook and Google make much of their money from knowing about you: what you do online, the types of things you’re interested in, and the different ways to contact you.

Large companies like Sony can afford to take whatever information you give them and cross-reference it against various databases to get even more information about you. This lets them focus marketing efforts, tailor campaigns to you, shape product development and so on. This also lets them make the site easier to use with pre-filled information, to increase sales and conversions.

We don’t even really question when a site asks us for our name any more. What’s the harm, right? Sure, I’ll give them my ZIP code too, and maybe even my phone number, they probably won’t call me anyways, right? Now ask yourself, why do you need to give your name, mailing address and phone number to a company to play a game where you are a pseudonymous elf?

The real answer is that they don’t. They might need it for billing purposes, but billing databases are kept much more secure for reasons I’ll explain later. They ask for this information because it’s free, and because you’ll give it to them, and because it’s valuable to them. It’s probably not protected very well, and when it gets stolen everyone shrugs, changes the password on the database, emails the FBI to make it look like they care, and gets back to more important business like social media strategies.

No Penalties

The companies involved are embarrassed and probably suffer some losses as a result, but these are mostly minor injuries. The news stories spin it to make the intruders the sole criminals, and lose interest. The only people who really pay for these incidents are the people whose data has been stolen. There are no requirements on what companies have to do to protect this information, no requirements on what they need to do if it is compromised, no penalties for being ignorant or reckless. Someone might figure out that it’s going to cost them some sales, so they put some money in the PR budget to mitigate that.

This is the reason why billing information is better secured. The credit card companies take steps to make sure you’re being at least a little responsible with this information. And in the event it leaks, the company who failed to protect it pays a real cost in terms of higher fees or even losing the ability to accept cards at all. These numbers make sense to CEOs and MBAs, so spending money to avoid them also makes sense.

How to Stop It

There are obviously a large number of technological measures that can be put in place to improve security, but there’s one that is far simpler and much more foolproof. But first, let’s look at banks. Banks as we know them have been around for a few hundred years. I’d bet you could show that banks got more secure every single year. Massive vaults, bullet-proof windows, armed guards, motion detectors, security cameras, silent alarms, behavioral analysis, biometric monitors, the list goes on and on, and all of these things actually work very well. But banks still get robbed. All the time. When was the last time you heard of a bank robber getting caught on their first attempt? They are always linked to dozens of other robberies when they do get caught. Why?

Because they’re full of money.

They can make it harder to rob them. They can make it easier to catch the people who did it. But the odds don’t always matter to someone who sees a pile of money sitting there for them to take if they can bypass these tricks.

People break into networks for many reasons, but the user data is often the pile of gold that they seek. So the most effective way to stop someone from breaking in and stealing it is to not have it in the first place. This advice works in 2011, it will work in 2020. It works on Windows, OS X and Linux. It works online and offline, mobile or laptop, and so on.

“The first rule of security club is you try not to join security club: minimize the amount of user data you store.” – Ben Adida

So if you’re in a situation where you need to figure out if your data is secure enough, or how to secure it, start with this question: Do you need it in the first place? Usability people say they want it. Marketing people say they need it. If you’re an engineer, it’s a losing battle to argue those points, because they’re right. Stop arguing about why you shouldn’t collect it, and put a cost on it so they have to convince each other that it’s worth it.

Anyone who went to business school knows the cost/value balance of inventory. It’s pretty straightforward to discuss whether a warehouse should be kept open or expanded or closed. Nobody wants to store anything for too long, or make too much product or have too many materials. But ask them how much user data you should be storing and the response will be something like “all of it, why wouldn’t we?”.

So make sure that the conversion bump from using full names, asking age and gender, and doing geo-targeting covers the cost of the security measures required to protect that data. Make it clear that those costs are actually avoidable, not sunk. They are also not a one-time investment. They should show up every quarter on the bottom line of anyone who uses that data. And if nobody wants to pay for it, well, you’ve just solved a major part of your security problem, haven’t you?

Update 10/18/2011: “FTC Privacy Czar To Entrepreneurs: ‘If You Don’t Want To See Us, Don’t Collect Data You Don’t Need’”

affinity.txt

SEO sucks. It’s a fun little game to play for a while, but at the end of the day almost everyone loses. The searchers lose because they can’t find the best stuff any more. The search engines lose because their searchers see worse results and are less happy. Legitimate creators and businesses lose because they have traffic siphoned off by spammers and scrapers. They’re forced to waste brainpower and money on this ridiculous game that, from where I’m standing, they’re losing.

I’m not pretending that there was some golden age without spammers; they’ve always been there, and they always will be. Originally we had straight-up content matches, then keywords. Search quality was approaching unusability before Google came on the scene. They did a good job: PageRank put a trust network in the mix, and it worked great for a while. Eventually that well was poisoned too. They’ve done many things since, and I bet many of them helped. The new site blacklist is a good step but certainly a tricky one to use. The rise of Bing is actually a good thing; I think Google is in a much better position to take risks and change things at 70 or 80% of the market than at 95%.

So let’s stop talking about SEO as a black art. Let’s stop making our sites worse to prolong a losing battle. Let’s take that energy and put it into something else. Put your content out there in the best format you can. Forget code-to-content ratios and maximizing internal link structures that don’t benefit your users.

Instead, let’s think of ways that we can explicitly affect results. Here’s one: Think robots.txt meets social graph meets PageRank. site.com lists other sites in its /affinity.txt file, and defines some coarse relationship. Something like this:

widgetfactory.com/affinity.txt

www2.widgetfactory.com self
*.widgetwiki.org follow
*.widgetassociation.net follow

So we’ve got the “self” tag that basically says “this is another version, or a related version, of me.” If www2.widgetfactory.com/affinity.txt has a “widgetfactory.com self” entry, you’ve got yourself a verified relationship. We’ve also got something like a follow tag that says “we like these sites and think they’re valuable, you should go there (and follow links from us).” It’s basically a vote. Unidirectional votes are useful for quality, and mutual votes are a big clue about transferring trust.
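As a sketch of how a crawler might consume these files, here’s a minimal parser and the mutual-“self” check described above. The file format is the made-up one from this post, and the function names are my own, so treat this as an illustration rather than a spec.

```python
def parse_affinity(text):
    """Parse the body of a hypothetical /affinity.txt file into a list of
    (pattern, relation) pairs, e.g. ('*.widgetwiki.org', 'follow')."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        pattern, _, relation = line.partition(" ")
        entries.append((pattern.strip(), relation.strip()))
    return entries


def mutual_self(site_a, entries_a, site_b, entries_b):
    """A 'self' relationship is verified only when both sites claim each
    other; a one-sided claim proves nothing."""
    return (site_b, "self") in entries_a and (site_a, "self") in entries_b
```

Feeding it the widgetfactory.com example above yields three entries; if www2.widgetfactory.com’s file lists `widgetfactory.com self` back, `mutual_self` returns True and the crawler can treat the two domains as one site.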

How many votes do you get? No idea, I don’t see any reason to limit it. I think those things will work themselves out. I don’t see regular sites managing thousands of links in there, I think they just link to enough to make it meaningful. Or maybe there is some hard limit, so there’s less of a guessing game on how to pick who goes there. 100 per site? 1000?

Now you might think “this is just PageRank,” but I think it’s actually different. First, it’s much easier to spot foul play. There are far fewer domains than pages, so the graph is much smaller and easier to traverse. Junk sites are going to stick out like a sore thumb. A spammer can’t really abuse this channel by building artificial networks of trust, because it would be so easy to kill them en masse. It’s also difficult to mask or overwhelm, unlike link farms.

Does this hurt the little guy? Not any more than spammers, I think. I think even though the expression of the data is simple, the interpretation of it can be very complex. If you’ve got a little blog that just blathers on about computer stuff and doesn’t even have any ads on it, you may not need much trust to rise up on specific content searches. If you’ve got a site with 3 million pages that look an awful lot like wikipedia pages, and your only votes are from other similar sites, and your whole cell has no links from the larger graph, well, maybe, despite your flawless, compact markup and impeccable word variances, you’re not really adding much value, are you?

I’m not saying this particular idea solves all of our problems; heck, I’m sure there are some problems with it as described. But I think approaches like this will not only affect the quality of our search results, they will ultimately affect the quality of the web overall.

When will Google buy VMware?

Google’s Chromebooks are starting to go mass-market. For those that don’t know, these are essentially laptops that only have a web browser on them. No Windows, no OS X or Linux. To many people, this seems ludicrous. You need apps, right? You need data?

The truth is, the majority of people already only use browser “apps”, which we used to call “websites”. Google has been leading the charge on this, by pushing the envelope on in-browser apps with Google Docs and the Chrome App Store. There are other players too. Apple is training people to buy apps, and not worry about having to reinstall them when their hardware fails. Dropbox is training people to sync everything. Amazon is training people to have virtual CD shelves. Steam is training people to have virtual game libraries. Citrix and LogMeIn are training people to work on remote desktops, and so on.

These are all coming together to get people to the point where we basically go back to dumb terminals. Your computer is nothing more than a local node on the network. That, however, is not the interesting part, people have been saying that for years.

The interesting part, to me, is that it’s not actually going to be the typical early adopters going there first. My girlfriend’s computer literally has nothing on it. She only uses browser apps and iTunes, which is connected to our NAS where her photos are also stored. She uses GMail, Facebook, Google Docs, etc. With the exception of syncing her iPod, which I have to assume someone will figure out how to do in the ChromeOS ecosystem, I’m not sure she would really notice any difference.

Now my computers are a long way from there. I’ve got development environments, SQL servers, mail servers, all sorts of infrastructure set up. I could certainly move to a remote desktop or a remote terminal on a server, but the change would be much more disruptive and not without some costs.

Along a different path, we’ve seen a long progression of advances in virtualization. I actually do most of my work in VMs now, for a number of reasons, one of which is that I’m not dependent on a particular piece of hardware. If my laptop is destroyed or stolen, I’m back up to speed very quickly. The only thing I’d need to do is install VMware, plug in my drive, and I’m good to go.

I think these two paths are going to meet up soon. I think ChromeOS is a way to get the low-demand computer users on board. If Google buys VMware, they can come at it from the other end as well. I think VMs will get leaner while browsers get more robust, and we’ll end up with a hybrid of the two. A lightweight OS that is heavily network/app/web based? I wonder where Google would get one of those? Oh, right, they already did.

The Ultimate Music App

There’s an ever-growing number of online music services out there, but none of them have really nailed it for me. Here’s my list of demands:

  • Instant Purchase – Simple one or two click purchase, which adds it to my portfolio. Downloading from one place and uploading to another is dumb.
  • Standard format/no DRM – This is why subscription-based services won’t work.
  • Automatic Download/Sync – As seamless as Dropbox, maybe even with a few rules (per playlist, etc).
  • Smart Playlists – The only reason I use iTunes is that I can set up playlists with dynamic criteria, like “stuff I like that I haven’t heard in 2 weeks”. This entails tracking what I listen to and being able to rate stuff.
  • Upload My Own – No reason for me to have to buy things again. I’m fine with paying a small extra fee for this, but I should also be able to work that off by buying new stuff. Amazon hosts stuff I’ve bought from them for free, but charges me for uploads, so in the long run they could actually end up costing me more. They should give me a 50MB bonus per album to upload other files.
  • Mobile – My phone is my music player now, I should be able to stream/sync/download from it as well as my computer.
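To make the Smart Playlists demand concrete, here’s a minimal sketch of the kind of rule I mean, assuming the service tracks a rating and a last-played timestamp per track. The field names and thresholds are made up for illustration; no real service exposes this exact shape.

```python
from datetime import datetime, timedelta


def smart_playlist(library, min_rating=4,
                   not_heard_for=timedelta(weeks=2), now=None):
    """Filter a library down to 'stuff I like that I haven't heard in
    2 weeks'.  Each track is a dict with hypothetical 'title',
    'rating' (1-5), and 'last_played' (datetime, or None if never
    played) fields."""
    now = now or datetime.now()
    return [
        track for track in library
        if track["rating"] >= min_rating
        and (track["last_played"] is None
             or now - track["last_played"] >= not_heard_for)
    ]
```

The point is that the criteria are data the service already has (or could trivially collect); the playlist is just a query over it that re-evaluates itself every time you hit play.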

Bonus Features

  • API – Let me have another program talk to your service to do things like recommendations and missing tracks.
  • Podcasts – This doesn’t necessarily have to be done in-service, if the API allowed uploads someone else could do it, but it seems pretty trivial to add on if all of the above things are in place.

Don’t Really Care

  • Sharing – Nice to have but I’d be fine with a service I can’t share. I’d prefer the option to sign into more than one account at a time.

Why don’t websites have credits?

Engineers of any discipline are largely an anonymous bunch. You don’t know who designed the fuel pump in your car; I’d even wager it would be extremely difficult for you to find out if you wanted to. You don’t know who wrote the code for the OS X Dock or Windows Start bar, or who wrote the Like button on Facebook. These people made decisions that affect you deeply every day, and you have no idea who they are.

The most interesting part of this is that those people are OK with it. If you ask them (myself included) they will tell you that it doesn’t matter, that what really matters is the quality of the work and the enjoyment you had doing it. Unfortunately, I think we’re wrong.

Should they?

I can’t seem to come up with a good framework for figuring out who wants credit, never mind who deserves it. If you so much as make a photocopy during the production of a movie, you’re probably in the credits with some highfalutin title like “First deputy assistant duplication specialist”. Music credits are tied to royalties and managed very closely. Most authors wouldn’t think of publishing something anonymously, nor would painters or sculptors; artists always sign their work.

This is not even strictly a software issue. Video games list credits, often in the box and at the end of the game, and they even have an IMDb-like site. Nor is it an “arts & entertainment” issue; any credible scientific paper will cite other works and acknowledge contributions. Patents have names on them, even when assigned to a company.

A few software packages have listed credits. If I remember correctly, Microsoft did it on old versions of Word and Excel, and Adobe had it on old versions of Photoshop and Illustrator. I’m curious why those were removed, or at least hidden. “The Social Network” had something about Saverin being removed and re-added to the “masthead” of Facebook (although I don’t know what or where that is).

So it would seem that we might be in the minority here, perhaps due to convention rather than any specific reason. And if there’s one thing that bugs an engineer, it’s deviating from standards with no good reason.

So let’s do it.

Why do it?

  • Pride in your work – Sure there is some pride in doing a good job anonymously, but wouldn’t you be just a little more motivated or happy now that your name is on it?
  • Being a stakeholder – We’ve all done projects we didn’t believe in, and consoled ourselves with the fact that “it’s not my project”. Well, now it is.
  • Reputation – We’ve got our resumes, but credits will verify them.
  • Honesty/Transparency – There is no good reason to withhold this information, so it should be out there.
  • All that money they spent on school – Show your parents your name on a website and watch them smile.

So who gets listed?

I think the short answer here is, everyone. Movies do it, why not websites? It could be just a big list of names, or something more detailed with contributions, dates, whatever makes sense. Let’s just start throwing some names up there, and let the de facto standards evolve on their own.

If you know of any major sites that do this well, put it in the comments. Similarly, if you can think of a good reason why this shouldn’t happen, I’d love to hear about it.

Fire the user experience designer

This post makes a case for having a specialized “user experience designer,” arguing that usability and interaction design are too complicated to be handled by someone responsible for other tasks. This is false.

If you are on a team responsible for a website or something similar, EVERYONE on your team should understand usability and interaction design. It’s not a special skill, it’s a core competency, like communication skills and ethics. The real experts out there are rare, and I mean “you’ll probably never even meet one” rare. Most people who specialize in it are just washed-up designers or coders.

You need your designers thinking about how people will interact with your program, or you’re going to end up with brochureware. You need your programmers thinking about it or you’re going to end up with a clumsy UI. You need your QA people thinking about it or you’re going to end up with spotty test plans. You need your managers thinking about it to understand what’s important. You need your salespeople thinking about it to compare against your hapless competition.

Having someone responsible for it is a bad idea because not only are they probably going to suck at it, it’s just going to make everyone else lazy.

Locked Doors, Open Windows

Most people with any clue about interaction design know that Jakob Nielsen is a jackass. There are thousands of other usability professionals who offer opinions as fact and don’t take their own advice, but Nielsen was there in the early days and somehow caught on with his obvious or wrong ideas.

His latest “alertbox” (apologies for linking to such a horrible looking site) says that users are so completely dumb and clumsy that they can’t type passwords correctly, and that masking is a bad idea. Wow. I’ve never mentioned or linked to Jakob Nielsen on this blog before, but I feel a duty to contribute what meager link juice I have to making this astonishing bit of advice the highest ranked page on that site. What would cause someone to suggest that this first layer of security is a detriment?

In his words: “More importantly, there’s usually nobody looking over your shoulder when you log in to a website. It’s just you, sitting all alone in your office, suffering reduced usability to protect against a non-issue.”

Really? My apologies Mr. Nielsen. All these years I thought your ideas were bad because you just made stuff up and wanted to sound like you knew what you were talking about. Little did I know that your lack of clue regarding how people use computers is the result of the fact that you don’t work with actual people. You should do one of your infamous studies (preferably of indeterminate size and method, as usual) and see if people log in to websites from exotic venues like the “train station” or maybe even a “meeting”.