Universal Principles of Design

Based on Adam’s writeup I pulled Universal Principles of Design – a new book by Will Lidwell, Kritina Holden, and Jill Butler – off the shelf and carried it to the cash register. Adam’s take on it is accurate. Given my experience teaching – where people with undergraduate degrees in communication design from the Parsons School of Design still don’t know many of these principles – I’d say just about every novice to advanced designer could benefit from this book.

The book does a very good job at talking in simple ways that cut across design disciplines. It simultaneously shows the application of a principle to the Macintosh GUI and the Segway. Rare are the times when graphic, industrial, architectural, and software designers have overlapping areas of interest. The qualities of various sorts of rubber will keep industrial designers talking all night, and I couldn’t care less, and furthermore couldn’t keep up even if I did care. For this reason I’m curious to see where other efforts to unite these designers will go. One such effort is Interaction Designers, a group that strives to speak across these boundaries.

Another is the Interaction Design Institute Ivrea, and specifically the Hub, a community blog launched by the lovely Molly Steenson. Here Andrew sums up the difficult going in the early posts as well as the great potential. The immediate sharing – and the resulting learning – that blogging has given many of us threatens to leave universities behind. Likewise, there are topics covered in these schools that aren’t disseminated to the rest of us, and this blog becomes a welcome conduit.

Semantic Web: Paul Ford responds to Clay Shirky

Usually a fan of Clay Shirky, I heaved a heavy sigh at yet more strawman arguments directed against the Semantic Web efforts. I was pleased to see Paul Ford took the time to respond at length, giving concrete examples as well as demonstrating the techniques on his own site.

The critics of the Semantic Web, methinks, simply lack patience. If everyone only thought one, three, or five years into the future we’d never solve the very hard problems. The Semantic Web as a practical reality might be 10 or 20 years off, but that’s not really so long. And if the W3C weren’t doing this work, we’d all be sitting around complaining, “Someone has to think about the future of the web, coming up with the strategic plan for web technology and use. Why isn’t the W3C doing this?!”

One-to-Many Recommendations

Mark Hurst set up a nice little site called the Uncle Mark 2004 Gift Guide and Almanac. The cynical might see it as an inflated Amazon affiliate play, but I like the idea of looking to particular people we trust for recommendations. I’d love to see, for example, a list of David Byrne’s favorite new records. A while ago I wrote about a record store in Hamburg that operates this way (see April 24 entry). And isn’t this the idea behind the Oprah Book of the Month?

Published
Categorized as Unfiled

Readers are forgiving about superficial disorganization

‘Readers are forgiving about superficial disorganization,’ writes the author behind http://www.hypertextnavigation.com/. The site is a few years old, but in many ways he’s dealing with the same issues IAs do today, such as the taxonomy dance. This bit, to me, describes a place of tension where meaning meets presentation, where a user’s experience meets organization…

having a coherent, interesting concept for a site is more important than having an organized site. Coherence of the ideas being communicated is not the same as coherence of presentation. Coherent thinking *leads* coherent presentation. Ideas are *good*, before they are *organized*. Readers are forgiving about superficial disorganization if the information is interesting enough to them.

Published
Categorized as Unfiled

Where Nav Meets Taxonomy

I had a great session at work recently massaging a general taxonomy to be navigable. Hunched over wireframes and a hierarchical view of the taxonomy with a programmer, business analyst, and manager, we were all able to communicate and understand the issues.

This is an important area of IA that is getting to be more like science and less like art. Personally, I still think bottom-up design sucks when used for navigation. The idea that you can, say, create a taxonomy without knowing who will use it or how is just ridiculous, and the more different users and uses you apply to it the more its usefulness is diluted; the effect is proportional.

For example, a hierarchical taxonomy may not be balanced (an equal number of children for each parent); it may be deeper in some places than others. This may make it difficult to pull the data out and put it into a standard template (which is the advantage of having the info organized that way ahead of time), leaving you with some pages that have too little content and others that have too much. With the philosophy that the UI is designed for the users’ needs, it’s the taxonomy that is the problem here.
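The imbalance problem is easy to see in miniature. Here’s a rough sketch (the taxonomy and all names are made up) that walks a hierarchy and reports depth and fan-out per branch – exactly the unevenness that breaks a one-template-fits-all page design:

```python
# Sketch: measure how uneven a hierarchical taxonomy is.
# The taxonomy is a nested dict: {category: {subcategory: {...}}}.
# Leaf nodes are empty dicts. All names here are hypothetical.

def branch_stats(node, depth=1):
    """Yield (depth, fan_out) for every non-leaf node in the tree."""
    if node:
        yield (depth, len(node))
        for child in node.values():
            yield from branch_stats(child, depth + 1)

taxonomy = {
    "Products": {
        "Widgets": {"Small": {}, "Large": {}},
        "Gadgets": {},                      # shallow branch
    },
    "Support": {
        "FAQ": {}, "Contact": {}, "Manuals": {}, "Downloads": {},
        "Returns": {}, "Warranty": {},      # wide branch
    },
}

stats = list(branch_stats(taxonomy))
print("max depth:", max(d for d, _ in stats))        # some branches go deeper
print("fan-out range:", min(f for _, f in stats),
      "-", max(f for _, f in stats))                  # some parents far wider
```

A branch that is two levels deep and another that is six items wide can’t share the same page template gracefully, which is the point above.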

Depending on the labels, leaf nodes may not be findable from the top of the tree (can someone look at the top-level category and ‘smell’ what’s a few levels down?). If we have this problem, we might start collapsing some levels. Then we look to see if this results in pages that are too long and balance the levels and the size of the pages: the taxonomy dance. But our information isn’t static, and we can’t always predict how the taxonomy and the information inside it will change.

That said, sometimes you have an underlying structure like a hierarchical taxonomy and need to stick a user interface on it. When this happens it’s best to have a layer of abstraction between the two so that the UI can serve the needs of the user. The layer of abstraction might just be very clever database queries. But this assumes the database is modeled to allow a flexible UI, for example, not hard coding hierarchical associations. I still haven’t arrived at a method we can follow when doing this; it’ll take collaboration with a database programmer to get there.
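To make the abstraction-layer idea concrete, here’s a minimal sketch (schema and names are hypothetical, not from any real project): the taxonomy lives in a flat parent-pointer table, and the UI only ever talks to a small query function, so the interface isn’t welded to the hierarchy:

```python
# Sketch of an abstraction layer between a stored taxonomy and the UI.
# Each row just points at its parent rather than hard-coding a fixed
# hierarchy, so the UI-facing query can slice the tree however the
# interface needs. Schema and names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE category (id INTEGER, parent INTEGER, label TEXT)")
db.executemany("INSERT INTO category VALUES (?, ?, ?)", [
    (1, None, "Products"),
    (2, 1, "Widgets"),
    (3, 1, "Gadgets"),
    (4, None, "Support"),
])

def nav_children(parent_id):
    """The UI calls this; it never touches the hierarchy directly."""
    rows = db.execute(
        "SELECT label FROM category WHERE parent IS ? ORDER BY id",
        (parent_id,))
    return [label for (label,) in rows]

print(nav_children(None))  # top-level nav
print(nav_children(1))     # children of 'Products'
```

If the navigation design later flattens or regroups levels, only the query layer changes; the stored taxonomy stays put.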

Later… I had asked “can someone look at the top-level category and ‘smell’ what’s a few levels down?” That’s important for directed search, where you may be looking for grandchildren. It’s not as important for exploratory browsing: if you can smell the child categories under each parent, you can gradually work down the generations.

How Designers Follow Constraints

Notes on Web Site Designs: Influences of Designers Experience and Design Constraints (PDF) by Aline Chevalier and Melody Y. Ivory, which ‘demonstrates that the designers’ levels of expertise (novice and professional) as well as the design constraints that clients prescribe influences both the number and the nature of constraints designers articulate and respect in their web site designs.’ It’s part of the WebTango project at the iSchool, University of Washington.

‘We assert that understanding designers’ activities and identifying difficulties they encounter are essential to improving web site quality.’

‘we found studies showing that constraints are extremely important for understanding and for solving a design problem’

‘there is a wide gap between designers’ articulation of constraints and designers’ effective implementation of them.’

Those pesky designers! Seriously, reading this I feel like we could do a better job making constraints explicit in our personas and scenarios. Most I see are filled with nice details about the fictional character, meant to make them more real, that don’t add anything to the design process.

Also, I feel like we need something in-between the personas/scenarios and the design, an interaction model. More on this in a future post.

Even the experienced designers could only satisfy 75% of the constraints they were given. While they achieved up to 95% of the client constraints, they couldn’t satisfy more than half of the user constraints.

‘professional designers in the condition without constraints were able to infer client constraints, because they had contextual knowledge acquired through experience (stored as mental schemata)’

‘Results from the first two studies show that professional and novice designers encounter difficulties in effectively considering users’ needs during the design process, even though they focus mainly on users’ needs during the evaluation process.’

‘we argue that heuristic evaluation with ergonomic criteria suggested by Nielsen (2000) has not been adapted for web site designers (who have no human factors knowledge), because the ergonomic criteria are both too abstract and too numerous. Our hypothesis is that it would be more effective to provide designers with a subset of ergonomic constraints that respect the users’ real needs.’

Absolutely. There are simply too many guidelines to follow these days. There has to be a way of winnowing them down. Design patterns might help. A better design method might help.

Results: ‘1. Help novice designers to consider both user and client constraints. 2. Help professional designers to focus more so on user than client constraints or at least help them to strike a balance between the two actors. 3. Help designers, regardless of their levels of expertise, to consider and implement ergonomic constraints in their sketches.’

For the first two points, we suggest developing a knowledge-based system that fits the designer’s level of expertise (see Fischer et al., 1991). Specifically, the system should provide the following support:


  • The system should help novice designers to identify constraints that need to be respected in the web site design.
  • This system should also help novice designers to generate new constraints, through a design step oriented on the expectations of the client and the users. The system could help designers determine, based upon the current state of the design activity, additional information the designer may need to consider. For example, the system could propose questions for novice designers to ask the client.
  • The system should help professional designers deal with a client who has many expectations, in particular, to help the designer consider more user constraints. For example, the system could suggest relevant constraints that the designer did not consider.

As solutions they suggest a focused questionnaire designers could use to evaluate designs through the design process, or an automated tool to evaluate the design. Both are probably helpful, but to truly advance I think we need to improve the method itself, not just devise better ways to find design flaws.

Thank you Aline and Melody.

Published
Categorized as Process

Transitional Volatility

Notes on Transitional Volatility (PDF) by David Danielson (2003), also the topic of his master’s thesis. It’s a rare, rigorous look at the common guideline to ‘make navigation consistent’ in a world that has big websites where the navigation must change from time to time. His findings show that complete consistency is not always the best route.

Essentially he studied how users reacted to changes in the navigation appearance from page to page. He tested use of three versions of one site, each with a different navigation scheme. The ‘full overview’ had a full, site-map-like outline in the left nav; the ‘partial overview’ only listed the second and third level categories for the currently selected top level category; and the ‘local context’ listed the second level choices for the currently selected top level category plus the third level categories for the current second level category (see screen shots in the paper). Users were given directed-search tasks, or low-complexity fact-finding missions.
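The three conditions can be sketched as selection rules over a three-level tree (the tree and names are invented for illustration, not from the paper):

```python
# Sketch of Danielson's three navigation conditions as selection rules
# over a three-level hierarchy. Each function decides which categories
# would appear in the left nav; names are hypothetical.

tree = {
    "A": {"A1": ["A1a", "A1b"], "A2": ["A2a"]},
    "B": {"B1": ["B1a"], "B2": ["B2a", "B2b"]},
}

def full_overview(tree, path):
    # The entire outline, regardless of where the user is.
    return {top: tree[top] for top in tree}

def partial_overview(tree, path):
    # Second and third levels, only for the selected top-level category.
    top = path[0]
    return {top: tree[top]}

def local_context(tree, path):
    # Second level under the selected top category, plus third level
    # only under the selected second-level category.
    top, second = path[0], path[1]
    return {top: {s: (tree[top][s] if s == second else [])
                  for s in tree[top]}}

print(local_context(tree, ("A", "A2")))
```

Written this way, the volatility point is visible in the code: moving between top-level categories swaps out everything in the partial overview, while the local context changes more gradually.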

His sample site used a ‘well-formed’ and limited hierarchy, such that each of the two upper levels’ items always had subordinates, and no item at the third level had any subordinates.



From the abstract: The results suggest an interesting pattern of interaction effects: When users are provided with partial overview navigation support, navigational volatility predicts increased disorientation, decreased perceived global coherence and decreased ease of navigation. In contrast, when provided with a more locally focused navigation scheme, navigational volatility predicts increased perceived site size and increased perceived global coherence. The results generally supported a model with a direct causal link from navigational volatility to disorientation.

Transitional Volatility: the extent to which users encounter changes in the Web interface as they move within or between sites.

On user expectations: The user becomes habituated within the recent navigation patch. The user predicts content and navigation option changes in page-to-page transitions. The user reorients at the destination page of a transition. The destination page becomes part of the recent navigation patch, continuing the cycle. So it’s not just the page right before the change that sets expectations; the user has a subtle and complex memory of pages they’ve experienced.

His incorporation of user expectations integrates the volatility question with a main challenge of navigation, the ‘behind-the-door’ problem of user’s ability to understand what a link will lead to.

On causes of disorientation: moving among unrelated information topics in the Web space appears not to have been related to disorientation. More importantly for the purposes of this discussion, top-level switches do not, in and of themselves, cause disorientation… disoriented users consciously recognize the hyperlinks that will lead to navigational changes (navigational predictability), and then decide that such change is desirable… it was the home page link in the top-left corner of the page, just as one might have expected, that was "inviting" disoriented users, not the top-level hyperlinks. [Spool would love this] A reasonable conclusion seems to be that the all-or-nothing nature of users’ navigational volatility distributions in the Partial Overview condition were more noticeable, and more disorienting, than the more subtle and graded changes typically encountered with a Local Context navigation scheme… The extent to which a user is habituated in a navigation patch may make the changes seem even more dramatic when they do occur

The take-away is (to oversimplify) keep navigational changes subtle. The subtlety should be in proportion to how much the user becomes habituated to the navigation. Later we see that subtle changes may themselves be helpful, assisting users in perception of connections among links.

On perceptions of site size and complexity: The study results suggest that navigational volatility leads to increased site size perception… A significant effect was not found linking perceived global coherence to perceived site size.

On perceived global coherence: navigational differences (navigational volatility) allowed users with Local Context support to see connections between distal pages they otherwise would not have seen – and so led them to view such pages as more related. Fascinating, a change in navigation actually helped the users perceive connections.

On context: The broader goal may be to precisely determine the set of factors affecting a navigator’s ability to map hyperlink attributes at a source page to characteristics of the hyperlink’s destination page. As this investigation shows, the factors will not be limited to hyperlink attributes (such as what the link’s text snippet itself says), but, rather, will extend to broader contextual factors, such as the sorts of volatile transitions the user has already been exposed to. So just looking at scent, or just shape, ain’t enough. It’s a complex interaction of several components.

Variables: he nicely controls for navigation structure and page layout. I wonder if it’s additionally possible to control for info scent and shape?

Thank you David.

Later, a comment from JJG offers a fun name for this: ‘One new wrinkle is this notion that the more navigation changes from page to page, the larger and more complex the site will be perceived to be. Call it the Seven Veils Effect: by alternately showing and hiding what’s available, you create the impression that there’s more to see than there really is.’ I’d argue that you’re just cleverly showing what’s actually there, but I love the name.

Nielsen’s Model of User Expertise

Plowing through research on navigation, just about everyone cites user expertise as a factor, regardless of the task being studied.

I’d like a better, more quantifiable, way to summarize a user’s level of expertise in a persona. This could lead to generalizations about what types of interaction will work for certain types of user.

Jakob Nielsen, in Usability Engineering (as cited here), divides computer users into six categories along three dimensions based on the user’s experience: users with minimal computer experience and users with extensive computer experience, for the dimension of knowledge about computers in general; novice users and expert users, for the dimension of expertise in using the specific system; and users ignorant about the domain and users knowledgeable about the domain, for the dimension of understanding of the task domain.
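As a sketch of the quantifiable persona summary I’m after, Nielsen’s three dimensions could be carried as a small data structure (the field names and value labels are my paraphrase, not Nielsen’s):

```python
# Sketch: Nielsen's three dimensions of user experience as a structure
# a persona could carry. Field names and labels are my own paraphrase.
from dataclasses import dataclass
from typing import Literal

@dataclass
class UserExpertise:
    computers: Literal["minimal", "extensive"]    # computers in general
    system: Literal["novice", "expert"]           # this specific system
    domain: Literal["ignorant", "knowledgeable"]  # the task domain

# A persona that knows the subject matter cold but is new to our system:
persona = UserExpertise(computers="extensive", system="novice",
                        domain="knowledgeable")
print(persona)
```

Something this terse could sit at the top of a persona and feed directly into generalizations about which interactions will work for which users.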

Thanks Jakob.

Published
Categorized as People

Usable Browse Hierarchies

Notes on Toward Usable Browse Hierarchies for the Web (.doc) by Kirsten Risden of Microsoft Research (1999). Looking at Yahoo-style category navigation with a sampling of sub-categories, she seems to suggest that using polyhierarchies is a way of compensating for ambiguous categories and labels. Well yes, one might say, but if it helped people find what they’re looking for, isn’t this just an academic argument? But the point is well taken: if you’re turning to polyhierarchies to solve a findability issue, then perhaps your categories (or any categories) aren’t the solution.

Thanks Kirsten.

What Do Web Users Do?

Notes on What Do Web Users Do? An Empirical Analysis of Web Use (PDF) by Andy Cockburn and Bruce McKenzie, University of Canterbury, New Zealand. It was published in 2000, meaning the work was done earlier, but I still found the results useful.

They looked at the title, URL and time of each page visit, how often they visited each page, how long they spent at each page, the growth and content of bookmark collections, as well as a variety of other aspects of user interaction with the web.

They only looked at 17 people, but gathered a lot of data on them. All used Netscape v4.x browsers.

Page views per person: The mean daily page visit count was approximately 42 pages for each user per day… …earlier studies… had approximate daily visit count means of fourteen (Catledge & Pitkow 1995) and twenty one (Tauscher & Greenberg 1997).

How often they revisited pages: Previous studies have shown that revisitation (navigating to a previously visited page) accounts for 58% and 61% of all page visits. Our study shows that page revisitation is now even more prevalent, accounting for 81% of page visits when calculated across all users.

This raises questions of how we can focus our sites, or individual pages, given how they are revisited, especially if a goal of the site is loyalty. Put another way, if users are loyal to certain pages, how should that affect the navigation?

Temporal aspects: The results show that browsing is rapidly interactive. Users often visit several pages within very short periods of time, implying that many (or most) pages are only displayed in the browser for a short period of time. Figure 3 shows that the most frequently occurring time gap between subsequent page visits was approximately one second, and that gaps of more than ten seconds were relatively rare.

Whereas Dillon discusses navigation as meaning in a context of info seeking, this result talks more about users having route knowledge, again with implications for navigation. A typical user comment: ‘I’ve never bookmarked the library’s search page. I keep forgetting because once I’m there I start my search rather than thinking to bookmark it. Anyway, I’ve got a good shortcut. First, I click `Home’ which takes me to the Department’s homepage, then I click on the link to the University’s homepage, and from there I click on `Departments’ and then `Libraries’. It takes quite a few clicks, but it doesn’t take too long.’

So when devising an interaction model, it’s good to consider the nature of the content and navigation as well as whether the users are new or repeat.

A community doesn’t exhibit homogeneous web use: These results show that there was a surprising lack of overlap in the pages visited by this fairly homogeneous community of users.

Conclusions: the authors offer now-familiar recommendations, such as support revisitation, design pages to load quickly, shorten navigation paths, and minimize transient pages. Of course doing this in the real world is harder. Should all pages load quickly, or is it alright for ‘destination’ pages (with target content) to be larger? If we have a lot of information but must shorten navigation paths, should websites be smaller?

Thank you Andy and Bruce.

Published
Categorized as People

AAC: music to my ears

If you’re using iTunes/iPod/QuickTime 6 you can start ripping music in AAC format, the audio spec for MPEG-4. The encoding is better than MP3, giving higher-quality sound at the same bit rate (or smaller files at the same quality). MP3s couldn’t match CDs for sound quality because they relied on earlier, imperfect perceptual coding algorithms; AAC comes much closer.

You can make this change in iTunes preferences. Upping the bit rate also helps; I’ve got mine set at 192 kbps. 128 kbps is more common and uses less space, but 192 sounds better. You can always downsample later if you run out of space, but you can’t upsample without re-ripping the track.
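For a rough sense of what the bit-rate choice costs in disk space, here’s the back-of-the-envelope arithmetic (my numbers, not Apple’s):

```python
# Back-of-the-envelope file sizes for a 4-minute track at the two
# common bit rates mentioned above. kbps means kilobits per second.
def track_mb(kbps, minutes):
    bits = kbps * 1000 * minutes * 60
    return bits / 8 / 1_000_000  # convert bits to megabytes

for rate in (128, 192):
    print(f"{rate} kbps: {track_mb(rate, 4):.1f} MB")
# 128 kbps: 3.8 MB
# 192 kbps: 5.8 MB
```

So the jump to 192 costs roughly half again as much space per track, which is the trade-off being made here.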

You can tell Apple is behind this migration: they’re selling songs in their store in AAC format, and iTunes even has a ‘Convert to AAC’ menu choice, making the process easy. I’ve been holding off on the mass ripping of my CDs due to MP3’s limitations, but now I’m ready.

Published
Categorized as Audio