Archive for October, 2011

Apt to get lost in listings

After spending hours upon hours sifting through many and varied apartment rental listings, it became apparent to me that a better way to compare rental listings would be a great benefit to consumers. Similar to the Schumer box summary for credit card terms, a standardized table of terms, costs, and details of a property rental could go a long way towards cleaning up the always-painful process of apartment hunting.

Many apartment listings (like those on most apartment hunters’ go-to resource, Craigslist) do allow for basic filtering based on monthly rent, number of bedrooms, and so on. However, without reading each individual listing it’s often impossible to make truly accurate comparisons. For example, one apartment may bill water usage separately while another may include water and heat in the price of rent. If both apartments are listed at $1500 per month, it’s not clear that the first apartment could cost another $50 per month (or $600 per year!) for water usage.

Different markets bundle costs in different ways — for example, I don’t know a single Boston renter who pays a separate water bill, but in Portland it seems to be the norm — so it makes sense to spell out all of the standard living expenses associated with the property. It may not be possible to provide exact dollar amounts for each cost; one tenant may use more or less heat than the next, for instance. But simply noting that a separate expense exists is more information than many listings currently provide, and more ambitious landlords could provide utility usage details (three year averages of past usage, for example). Detailing the cost of parking separately from the base rental price would also be hugely beneficial, regardless of whether the prospective tenant owns a car.[1]

Including a floor plan could also save both landlords and renters a lot of wasted time. There are many apartments that may meet a renter’s criteria in terms of total floor area (e.g. 1000 square feet), but a particular room may not meet the renter’s needs (e.g. 10 feet wide to fit a particular piece of furniture with a comfortable margin). It serves neither the landlord nor the would-be renter for the latter to spend the time and effort visiting the property just to measure it. It may take time to draw up floor plans for each unit a landlord owns, but it’s a one-time expense and all prospective tenants will gain from it. (And come on, I bet everyone knows a few out-of-work graphic designers who would be more than willing to fire up Illustrator and draft floor plans at reasonable rates.)

On the technical end, it would behoove all involved to draft a standard way to represent this information digitally. Off the cuff I’d say that an XML schema could do the trick, with floor plans included in the file as SVG. Once a standard format is created, it would be easy to use any XML-capable application (desktop, mobile, or web) to compare, sort, and display listings.
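To make the idea concrete, here is a minimal sketch of how such a format might be consumed. No such schema actually exists; every element and attribute name below (`listing`, `rent`, `utility`, `included`, `estimate`) is invented for illustration. The sketch parses two hypothetical listings and sorts them by effective monthly cost, folding in utilities billed separately — the water-bill scenario from earlier.

```python
import xml.etree.ElementTree as ET

# Two toy listings in a made-up format; all tag and attribute names
# here are hypothetical, not part of any real standard.
LISTINGS_XML = """
<listings>
  <listing id="a">
    <rent currency="USD">1500</rent>
    <bedrooms>2</bedrooms>
    <utility name="water" included="false" estimate="50"/>
  </listing>
  <listing id="b">
    <rent currency="USD">1500</rent>
    <bedrooms>2</bedrooms>
    <utility name="water" included="true"/>
  </listing>
</listings>
"""

def effective_rent(listing):
    """Advertised rent plus estimated costs for utilities billed separately."""
    rent = float(listing.findtext("rent"))
    for util in listing.findall("utility"):
        if util.get("included") == "false":
            rent += float(util.get("estimate", 0)),[0] if False else float(util.get("estimate", 0))
    return rent

def effective_rent(listing):
    """Advertised rent plus estimated costs for utilities billed separately."""
    rent = float(listing.findtext("rent"))
    for util in listing.findall("utility"):
        if util.get("included") == "false":
            rent += float(util.get("estimate", 0))
    return rent

root = ET.fromstring(LISTINGS_XML)
for listing in sorted(root, key=effective_rent):
    print(listing.get("id"), effective_rent(listing))
```

With a shared format, the two $1500 listings immediately sort differently: listing “b” (water included) comes out at $1500, while listing “a” comes out at $1550 — exactly the hidden difference that free-text listings obscure.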

Landlords could still be free to write up any rental listing they saw fit, but the standardized summary would need to be provided as an addendum (or at the very least, on demand). I suspect that if enough consumers favored listings which included the summary, most landlords would be inclined to just go ahead and always include it to save themselves the time of dealing with individual requests.[2]

I created a quick and simple first draft[3] of what such a summary could look like — see the PDF below, which has some example values filled in.

Property rental summary example (PDF, 62 KB)

  1. More on this in a forthcoming review of Donald Shoup’s The High Cost of Free Parking.
  2. Supposedly this is how the free market works, but more often than not information disclosure has to be mandated.
  3. The floor plan in particular is rough. More detail would be necessary to be useful; specifically, measurements for each room are key.

Back to the Future

Browsing the web has changed plenty since the debut of the dub-dub-dub in the early 1990s, but the basic design of the graphical web browser is still remarkably similar to the first entries in the field. From NCSA Mosaic 1.0 (1993) to Internet Explorer 6 (2001) to Google Chrome 14 (2011), a degree of consistency has been firmly established and users now have solid expectations about the core functions of the browser.

So when a website decides to roll its own navigation controls and warns, “Don’t use your browser’s Back button,” it places an enormous cognitive burden on the user. By explicitly warning users not to follow their instincts, the website developer is implicitly acknowledging the likelihood that users will rely on those instincts.

The recent push by browser developers to strip browser chrome down to the bare necessities means that the Back button is one of the few UI elements left outside the web content itself. All browsers place the Back button in the prominent upper left corner; some go as far as making it larger than the Forward button, which makes the Back button the largest and most visible widget in the browser. Telling users to simply forget about that very large and very useful button is not a recipe for success.

“But my website is special! It’s OK.” No, it’s not. Jakob Nielsen’s Law of the Web User Experience states that “users spend most of their time on other websites.” And if you design a website that fails under normal, expected usage, you can bet that your users will spend all of their time on other websites.

10,000 reasons why

The United States Department of Defense budget for 2010 was $680 billion (if you need to see the zeros to put that in perspective, that’s $680,000,000,000).

To simplify the following what-if, let’s pretend it was $500 billion[1]. One percent of one percent (one ten-thousandth) of that is $50 million. Let’s say we reduce the DoD budget by that 0.01% and divvy up the $50 million, giving each state a cool $1 million[2]. Keep that $1 million figure under your hat for a few.

I was previously involved with a fantastic non-profit organization, the Boston Area Gleaners[3]. Gleaning both reduces waste and assists hunger relief efforts; specifically, it fills the niche of providing fresh produce to food relief agencies, which are often supplied primarily with dried and canned goods. In 2010, the organization gleaned 37,545 pounds (17,030 kg) of fresh, local produce from farms and salvaged 74,000 pounds (33,566 kg) of retail food, all of which was then distributed to numerous food relief agencies in the Boston area. The organization’s operating expenses for 2010 were less than $80,000, including the value of in-kind donations and pro bono work[4]. Let’s round that up to $100,000.

Coming back to that $1 million per state, the Boston Area Gleaners’ entire operation could be funded with 10% of that amount. That still leaves 90% ($900,000) for other organizations and projects, not to mention the other $49 million for the other 49 states.

To recap: By reducing the defense budget by 0.01% (that is, leaving 99.99% of its current funding intact), five hundred such food relief and waste reduction programs across the nation could be fully funded.
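The whole chain of arithmetic is small enough to check in a few lines. This is purely a sanity check of the post’s simplified what-if figures, not actual budget data:

```python
# Sanity-check the arithmetic above (all figures come from the post's
# simplified what-if, not from actual budget line items).
dod_budget = 500_000_000_000   # rounded-down 2010 DoD budget, in dollars
cut = dod_budget // 10_000     # one ten-thousandth: a 0.01% reduction
per_state = cut // 50          # split evenly across the 50 states
org_budget = 100_000           # one gleaning org's rounded-up annual expenses

print(cut)                             # 50000000  ($50 million)
print(per_state)                       # 1000000   ($1 million per state)
print(per_state // org_budget)         # 10 organizations per state
print(50 * (per_state // org_budget))  # 500 organizations nationwide
```

The numbers hold up: 0.01% of $500 billion funds ten $100,000 organizations in each of the fifty states.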

We are the 99(.99)%.

  1. $180 billion is an astronomical amount of money to just wave away. Just keep in mind that there’s more money tied up in the DoD than my hypothetical example demonstrates.
  2. Equal distribution to all states doesn’t factor in population, density, etc. Again, just keeping the numbers simple.
  3. I was a volunteer and a member of the board of directors. I did not receive any income from the organization.
  4. More details available in the 2010 annual report. Note that 2010 retail salvage operations were for a single Trader Joe’s location. Think for a second about how many grocery stores and other retail food outlets there are in the Boston area.

Gettin’ Higgy wit It

Human interface guidelines go far beyond the idea of a simple visual style sheet — “Buttons should have X units of padding” or “Window title bars should be dark gray” — and provide a comprehensive framework for developing consistent and enjoyable user interfaces. As my interest in the overall user experience grew, I dove into the HIGs for the major desktop environments: Mac OS X, Windows, GNOME[1], and KDE. The level of thoroughness and strictness of each HIG roughly mirrored my layman’s opinion of each environment, though there were some surprises.

Mac OS X

Far and away the most useful HIG, in my opinion. Sure, Apple has been known to deviate from their own HIG (sometimes in baffling, trivial, or confusing ways), but the baseline from which they work is much clearer in both its explicit specifics and its implied intentions.

Of particular interest to me were the menu guidelines, for both the menu bar and contextual menus. I was working on improving both menu types in a major product for the company I was working for at the time, and Apple’s HIG provided clear guidelines for standardizing the common elements, along with rationale for managing custom elements. One thing that sets Apple’s HIG apart from the pack is that it doesn’t shy away from explicitly telling you not to do something (contrast with the Windows HIG, below).

OS X Lion’s bothersome tendency towards skeuomorphic nonsense notwithstanding, Apple is unsurprisingly the leader when it comes to defining the user experience of their platform.

Windows

The Windows HIG is expansive, but it suffers from the platform’s long history of allowing a myriad of ways to accomplish any given task. The ability to say no is an important quality of any editor, and the strength of the Windows HIG is sometimes diluted by allowing similar widgets which operate in subtly different ways. If developers don’t have a clear directive of which widgets to use for a given situation, it stands to reason that most users won’t grok the differences in behavior when one developer chooses A and another chooses B.

I’ll be interested to see what the Windows 8 HIG looks like, as the Metro UI will require a tighter rein to maintain a consistent experience.

GNOME

Having used Linux in general and GNOME in particular only sporadically at best, I found GNOME’s HIG to be a good deal more thorough than I expected. It provides concise guidelines for most standard uses and, importantly, includes explanatory text to help define the rationale. This supporting text helps inform decisions when a situation arises that the HIG doesn’t explicitly cover.

As one might expect, the GNOME HIG is not as comprehensive as the Mac OS X HIG (or as restrictive, some might say), but it does show that a collaborative project can execute a clear vision.

KDE

I have had the least hands-on experience with KDE. Working primarily from a vantage point of the stereotype that KDE appeals to the most devoted hackers and tweakers, I found the KDE HIG to mostly reinforce that view. In contrast with the GNOME HIG, which gave me a real sense of the GNOME design approach, the KDE HIG was an incomplete collection of loose guidelines. That’s not to say anything directly about the KDE environment itself, but the HIG didn’t paint a very clear picture. One might infer that a lack of clearly defined guidelines would manifest itself in the end user’s experience, but I’d have to demur on that point out of personal inexperience with KDE.

Beyond

Evaluating these HIGs individually and collectively was an enlightening exercise. Not only did it bring each platform into sharper focus, but seeing where there was (and wasn’t) common ground between the HIGs revealed a de facto universal desktop standard of sorts.

HIGs are also available for each of the major mobile platforms — iOS, Android, and Windows Phone 7[2]. The design choices outlined in each provide a look into both current and future mobile development, and as OS X Lion and Windows 8 are demonstrating, possibly the future of the desktop as well.

  1. This was prior to GNOME 3.0. I’m looking forward to exploring both GNOME 3.0 and Unity when I get the time, and I plan on posting after my first hands-on experiences with them.
  2. webOS also has a HIG, which could be an interesting read, but perhaps of muted practical relevance.

UX in PDX

After receiving a tip to check Calagator (calendar + aggregator) for tech-related events in the Portland area, I decided to attend the October meeting of CHIFOO (pronounced “ky – FOO”), the Computer-Human Interaction Forum of Oregon.

First on the agenda was the “CHIFOOd” meet-up at a pub[1] for some food, drinks, and discussion, which proved to be a good introduction for a newcomer like myself. The speaker for the night’s event (whom I arbitrarily sat down next to before realizing who he was) proved to be as approachable and talkative as anyone else there. And perhaps fitting for a meeting of people who explore the human side of computer interaction, it was there that I first heard that Steve Jobs had died that day.

After polishing off some sliders and pints[2], it was time to amble over a few blocks to the main event. Thomas Tullis presented a program titled, “Why It’s Time to Move Beyond the Usability Lab,” which discussed a number of engaging examples of how and why to explore options beyond the traditional methods. In addition to keeping the floor open for quick questions during his presentation, several times during the program all in attendance were able to participate in live polls via text messaging. (I did acquiesce and buy a cheap-o burner mobile phone before embarking on our cross-country drive. I had the phone on me that night and abstained from voting in the first poll, but eventually gave in for the second and subsequent polls. Damn you, snakes!)

All in all, a very interesting program and a group whose meetings I’m sure I’ll be attending more of in the future.

  1. In spite of (because of?) Calagator’s technology focus, “beer” is one of the most prominent entries in the site’s tag cloud.
  2. This pub in particular offers the choice of US or Imperial pints. I was tempted to ask for metric, but ultimately restrained myself.

Self-checkout lanes are checking out

It turns out that my avoidance of self-checkout lanes at supermarkets wasn’t just a demonstration of my personal tendency to be a stubborn old curmudgeon: major chains like Albertson’s and Big Y are phasing them out in favor of standard service lanes (Boston.com, Consumerist.com).

Of the many things I have principled objections to, using automated tools isn’t one of them — my use of ATMs vs human bank tellers is easily 20:1 in favor of robots, for example[1]. But tools have to fit the job, work well, and most importantly, offer a noticeable benefit to the end user. Self-checkout lanes at supermarkets, in my experience, offered fewer advantages than they did drawbacks. Though there was a chance of enabling a speedier exit from the store, it was more likely that I would encounter a computer barking at me that there was or wasn’t some expected item in the bagging area, an item or coupon that couldn’t be scanned, or some other unspecified failure which required a manager’s attention.

These problems highlight how important it is to consider the overall user experience. Automation may be alluring to a business looking to shave costs, but when its effect on customer satisfaction is considered, the total cost of ownership may be much higher than expected. (Not to mention some of the other concerns mentioned in the linked articles, such as intentional and unintentional theft.) Sometimes a new technology just isn’t better than its predecessor.

  1. ATMs are often finicky and have a slew of minor usability issues of their own, but nonetheless generally earn a passing grade. I’ll be signing up for a new bank soon, so mayhap a future post will chronicle my first attempt to bumble through an ATM system I’ve never used before.