
Bulletin, June/July 2007


Data-Driven Design: Using Web Analytics to Validate Heuristics

by Andrea Wiggins

Andrea Wiggins received her MSI from the University of Michigan School of Information. She is a web analytics practitioner and associate instructor for the University of British Columbia's Award of Excellence in Web Analytics certificate program. Her website is www.andreawiggins.com. She can be reached by email at akwiggins<at>gmail.com.

Web analytics, the practice of web traffic analysis, provides intelligence for marketers and executives responsible for proving return on investment (ROI). However, web analytics’ greatest potential lies in improving the online user experience. When analytics data is shared with the design team, a subtler and more sophisticated user-experience design can emerge.
Web analytic data is mined from site activity logs and can offer a wealth of information about user behavior. Information architects, when interviewed about the web analytic measures of greatest value to their craft, center their comments on the value of analytics in providing context for decisions and heuristic assumptions. A variety of heuristics are in common use, and here we discuss ways that web analytics can be applied to Robert Rubinoff’s user experience audit [1]. 
His four basic factors of user experience – branding, functionality, usability and content – constitute a heuristic evaluation framework. This framework aims to provide “a quick-and-dirty methodology for quantifying the user experience,” rather than a fixed algorithmic approach or a completely subjective one. Rubinoff’s audit is a flexible assessment tool, with which the site development team selects audit statements for each factor and uses them to evaluate how well the site serves its users. A selection of audit statements provides the basis for the discussion of the role web analytics can play in evaluating the validity of the assessment.
Using Heuristics
Providing a context for heuristic assessments is a useful application of web metrics in a site redesign project. Analyzing pre-existing web traffic data can yield insights into user behavior and measure how well a site design meets user needs [2] [3] [4]. By comparing user data from web analytics to an information architect’s heuristic evaluation, a basic form of validation can emerge for otherwise subjective performance measures [5].
Web analytics is not a silver bullet against subjectivity, however; web traffic measurement will never be able to reveal the complete user picture, and this limitation is a good reason to use web analytics in combination with other tools such as surveys, customer databases and user testing to evaluate the experience design. Web analytics reveals what users do, but not why they do it, so measures for evaluation must focus on actions that can be most directly mapped to user intent. This focus is a significant challenge, as many heuristics are subjective and abstract, but the value of this effort extends beyond the design project. Creating a framework for measurement is critical to future evaluation of the success and value of strategic but intangible investments like information architecture.
Validating Rubinoff’s User Experience Audit
To provide context for design, Rubinoff’s user experience audit presents useful parameters (statements) that information architects can easily assess. His example uses four broad, equally weighted categories with five evaluative statements in each. The tool’s most useful quality is its complete customizability. This flexibility, however, requires that care be taken so that the audit statements reflect the site owners’ and users’ goals. 
Because every domain operates in a different context, information architects and web analysts will achieve greatest success by consulting on the evaluative points that will be most verifiable and valuable for each website. By selecting success measures cooperatively early in the project’s definition or discovery phase, the design and evaluation will be in alignment from the start. Both the information architect and web analyst will then be better positioned to prove the value of their services and assure that the project’s focus remains on business and user goals. 
Creating an appropriate analysis of site traffic to support the information architect’s needs requires generating validation measures for the site that are congruent with the heuristic considerations the information architect uses. Since the heuristics or user experience audit statements will vary among information architects and projects, a selection of representative evaluative points is examined here, drawn directly from three categories in Rubinoff’s user experience audit:

  • Branding
    • The site provides visitors with an engaging and memorable experience.
    • Graphics, collaterals and multimedia add value to the experience.
  • Usability
    • The site prevents errors and helps users recover from them.
    • The site helps its visitors accomplish common goals and tasks.
  • Content
    • Link density provides clarity and easy navigation.
    • Content is structured in a way that facilitates the achievement of user goals.
    • Content is appropriate to customer needs and business goals.

Ideally, the most relevant metrics or key performance indicators (KPI) should be comparable both before and after a site is redesigned, although direct comparisons are often impossible after fundamental changes to site structure are implemented. By nature, a site redesign generates new points of measurement, typically enhanced by improved data collection strategies, and only a handful of the site’s previous KPI might still be directly applicable.

Branding. A popular yet extremely subjective measure of the online brand experience requires judging whether the site provides visitors with an engaging and memorable experience. Because direct measurement of brand value is elusive in any medium, the most direct measure of this ephemeral value is the ratio of new to returning visitors. The desirable return visitor ratio for a site will vary. For example, tech support sites typically prefer a low proportion of return visitors, but for an e-commerce site, a continual infusion of new visitors is needed to ensure viability, even when returning visitors have a higher conversion value. To effectively measure the online brand experience, the team must identify the ideal proportion of return visitors as a KPI specific to the business goals of the site. The development team should track this statistic over time to evaluate their ongoing success in providing a positive brand experience [6] [7]. 
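
As a minimal sketch of how this KPI might be tracked, the following Python fragment tallies a hypothetical export of visit records, each carrying a date and a new-visitor flag; the field names and the target share of returning visitors are illustrative assumptions rather than recommendations.

    from collections import defaultdict

    # Hypothetical visit records exported from a web analytics tool:
    # (date, is_new_visitor)
    visits = [
        ("2007-05-01", True), ("2007-05-01", False),
        ("2007-05-02", True), ("2007-05-02", True), ("2007-05-02", False),
    ]

    TARGET_RETURNING_SHARE = 0.40  # illustrative KPI target for this site

    daily = defaultdict(lambda: {"new": 0, "returning": 0})
    for date, is_new in visits:
        daily[date]["new" if is_new else "returning"] += 1

    for date in sorted(daily):
        counts = daily[date]
        share = counts["returning"] / (counts["new"] + counts["returning"])
        flag = "on target" if share >= TARGET_RETURNING_SHARE else "below target"
        print(f"{date}: {share:.0%} returning ({flag})")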

The length of the average and median site visit, in both time and pages viewed, also provides a level of verification of the information architect’s assessment of the brand experience. Using the statistics for the proportions of site visits of each particular visit length, the design team can set a specific goal for engagement length as a KPI for ongoing analysis of site performance.
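
A hedged illustration of that approach appears below: it buckets invented visit durations and page counts and reports the share of visits meeting an assumed engagement goal of two minutes and three pages.

    from statistics import mean, median

    # Hypothetical visits: (seconds on site, pages viewed)
    visits = [(35, 1), (140, 4), (600, 9), (75, 2), (310, 6), (20, 1)]

    durations = [seconds for seconds, _ in visits]
    pages = [count for _, count in visits]
    print(f"duration: mean {mean(durations):.0f}s, median {median(durations):.0f}s")
    print(f"pages:    mean {mean(pages):.1f}, median {median(pages):.1f}")

    # Illustrative engagement KPI: visits lasting 2+ minutes with 3+ page views.
    engaged = sum(1 for seconds, count in visits if seconds >= 120 and count >= 3)
    print(f"engaged visits: {engaged / len(visits):.0%}")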

The ability to validate the assessment of an interactive element’s brand value is directly dependent upon whether the interaction was created with measurement in mind. Engaging online experiences like multimedia games, Flash and AJAX are all measurable – as is the ratio of visitors returning to interact with them and those visitors’ behaviors – but only if the design incorporates JavaScript tagging to report key interactions. In a redesign, KPI for interactive site elements should be considered in the development of the experience design. For example, the length of time users spend with an interactive element, whether averaged or examined by audience segment, provides a concrete idea of how long users are engaged by the application. The average number of return visits would be a simple statistic to assess how memorable the experience is.
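
One possible shape for that reporting, assuming the tagging writes one event per interaction with an element name, an audience segment and the seconds engaged (all hypothetical fields), is sketched here.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical interaction events reported by JavaScript tags:
    # (element, audience segment, seconds engaged)
    events = [
        ("product_configurator", "new", 45),
        ("product_configurator", "returning", 180),
        ("flash_tour", "new", 30),
        ("product_configurator", "returning", 95),
        ("flash_tour", "returning", 60),
    ]

    engagement = defaultdict(list)
    for element, segment, seconds in events:
        engagement[(element, segment)].append(seconds)

    for (element, segment), seconds in sorted(engagement.items()):
        print(f"{element:22s} {segment:10s} "
              f"avg {mean(seconds):5.1f}s over {len(seconds)} interactions")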

Another branding element that Rubinoff suggests evaluating is the benefit that graphics and multimedia bring to the experience. Aside from ensuring that the graphics do not make the pages load too slowly, measuring the experiential value of graphics can be more difficult unless they happen to be clickable graphics leading the user to further content. Tools such as Crazy Egg’s heatmap overlay of visitor click patterns can immediately reveal whether clickable graphics drive more traffic than text links.

Usability. Rubinoff’s usability statements echo several of Jakob Nielsen’s usability heuristics, and because web analytic technology is often sold as a usability tool, there is a temptation to believe that it can replace proven usability testing. It cannot. Traffic analysis, no matter how reliable and sophisticated, cannot provide the thoroughness or level of insight that lab testing achieves. However, by combining usability testing with web analytics, customer satisfaction information and A/B optimization, usability testing expenses can be cut dramatically, purportedly without loss in quality. A/B optimization uses pair-wise comparisons to elicit user preferences, as in the optometrist’s familiar “Which is better – A or B?”
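
As a hedged sketch of such a pair-wise comparison, the fragment below computes conversion rates for two design variants and a two-proportion z-test; the visit and conversion counts are invented for illustration.

    from math import erfc, sqrt

    # Hypothetical results of an A/B comparison.
    visits_a, conversions_a = 5400, 162   # variant A
    visits_b, conversions_b = 5350, 214   # variant B

    rate_a = conversions_a / visits_a
    rate_b = conversions_b / visits_b

    # Two-proportion z-test (two-sided) for the difference in conversion rates.
    pooled = (conversions_a + conversions_b) / (visits_a + visits_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_b - rate_a) / std_err
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal distribution

    print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")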

Error prevention and recovery is a commonly selected point for evaluation, and information architects can easily determine whether a site offers appropriate error handling. It is harder to quantify the value that error handling creates for the site’s users. The most direct measures are simple proportions: the percentage of site visits including a 404 (file not found) error, the percentage encountering a 500 (server) error and especially the percentage of visits ending with an error. A quick check can reveal whether errors merit further investigation, and the analysis becomes more interesting with the addition of navigation analysis. For example, examining the pages most commonly viewed one step before and after a user error can allow the development team to recreate the error so that it can be understood and remedied.
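
A minimal sketch of those proportions, assuming visit records that list the HTTP status of each page view in order (a simplification of real log data), might look like this.

    from collections import Counter

    # Hypothetical visits: ordered lists of (page, HTTP status) pairs.
    visits = [
        [("/", 200), ("/pricing", 200), ("/order", 500)],
        [("/", 200), ("/old-promo", 404), ("/", 200)],
        [("/", 200), ("/contact", 200)],
    ]

    total = len(visits)
    with_404 = sum(1 for v in visits if any(status == 404 for _, status in v))
    with_500 = sum(1 for v in visits if any(status == 500 for _, status in v))
    ending_in_error = sum(1 for v in visits if v[-1][1] >= 400)

    print(f"visits with a 404:         {with_404 / total:.0%}")
    print(f"visits with a 500:         {with_500 / total:.0%}")
    print(f"visits ending on an error: {ending_in_error / total:.0%}")

    # Pages most commonly viewed one step before an error, to help recreate it.
    before_error = Counter(v[i - 1][0]
                           for v in visits
                           for i, (_, status) in enumerate(v)
                           if status >= 400 and i > 0)
    print("pages preceding errors:", before_error.most_common(3))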

Another favored usability heuristic asks whether the website helps its visitors accomplish their goals and common tasks. Analyzing task completion success rates helps to determine whether a site meets this usability goal. Task completion is now a relatively straightforward set of web analytic measures achieved by applying scenario or conversion analysis to specific, ordered tasks and examining leakage points and completion rates. Leakage points, the places where users deviate from the designed process, should be of particular interest: when visitors leave a process unexpectedly, where do they go, and do they eventually return? 
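
A hedged sketch of such a scenario analysis follows; the ordered funnel pages and visit paths are hypothetical, and commercial tools report the same figures as funnel or conversion reports.

    # Hypothetical ordered task: the checkout funnel pages, in sequence.
    FUNNEL = ["/cart", "/shipping", "/payment", "/confirmation"]

    # Hypothetical visit paths (ordered page views).
    paths = [
        ["/cart", "/shipping", "/payment", "/confirmation"],
        ["/cart", "/shipping", "/help/shipping-rates"],   # leaks to help content
        ["/cart"],                                         # abandons immediately
        ["/cart", "/shipping", "/payment", "/confirmation"],
    ]

    entered = [p for p in paths if FUNNEL[0] in p]
    for page in FUNNEL:
        reached = sum(1 for p in entered if page in p)
        print(f"{page:15s} reached by {reached}/{len(entered)} visits")

    # Leakage: where visits go immediately after leaving the funnel early.
    for p in entered:
        last_step = max(i for i, page in enumerate(FUNNEL) if page in p)
        if last_step < len(FUNNEL) - 1:
            idx = p.index(FUNNEL[last_step])
            next_page = p[idx + 1] if idx + 1 < len(p) else "(exit)"
            print(f"leaked after {FUNNEL[last_step]} to {next_page}")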

The straightforward percentage calculations that comprise conversion analysis are simple to determine once the appropriate page traffic data has been collected. However, determining the significance of conversion rates is another matter entirely. Conversion rate definitions and goals are often KPI determined by committee, and understanding these measures will facilitate a deeper understanding of the business and user goals that the design is intended to support. A primary difficulty often lies in choosing the tasks to assess; these measures are best defined with the assistance and resultant buy-in of the parties who are ultimately responsible for driving conversion. Shopping cart analysis is the most common application of this type of evaluation; online forms completion carries just as much weight for a lead generation site as checkout does for an e-commerce site.

Even the most common type of conversion analysis, shopping cart analysis, can be somewhat misrepresentative of what it seeks to measure. Since a significant proportion of shoppers engage in online research to support offline purchasing, a high shopping cart abandonment rate may not be as negative as it would first appear. Another notable trend in shopping cart behaviors is that customers often make use of the shopping cart as a calculator for their purchase totals and shipping rates, so frequent shopping cart item deletions can also be misleading – perhaps the customer is simply on a budget. Shopping cart analysis can prove very useful for determining which shopping tools and cross-marketing opportunities may add value. 

Many other user behaviors can indicate the site’s success in supporting user goals. Pogo-sticking, where users bounce back and forth between two levels of a site’s architecture, usually indicates that the organization of the content is problematic. Pogo-sticking behavior is an excellent example of the benefits of collaboration. In this case the web analyst identifies the phenomenon and may be able to infer some of the user needs behind it to share with the design team. The information architect, armed with the knowledge of specific user behaviors and needs, can design a more successful, usable site [8] [9] [10].
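
One way a web analyst might flag pogo-sticking in clickstream data is sketched below, assuming each visit is an ordered list of pages and that a parent-child relationship can be read from the URL path, both simplifying assumptions.

    from collections import Counter

    # Hypothetical visit paths (ordered page views).
    paths = [
        ["/products", "/products/widget-a", "/products", "/products/widget-b",
         "/products", "/products/widget-c"],
        ["/products", "/products/widget-a", "/checkout"],
    ]

    def is_parent(parent, child):
        # Simplified: a child page sits one path segment below its parent.
        return child.startswith(parent.rstrip("/") + "/")

    pogo = Counter()
    for path in paths:
        for first, second, third in zip(path, path[1:], path[2:]):
            # Parent -> child -> same parent again suggests pogo-sticking.
            if first == third and is_parent(first, second):
                pogo[first] += 1

    for page, count in pogo.most_common():
        print(f"possible pogo-sticking around {page}: {count} occurrences")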

Content. The evaluative points Rubinoff provides for the content category include simple queries about the site’s navigation, organization and labels. These foundational characteristics of a site’s information architecture are enormous territories for measurement [11]. There are many ways to attempt to measure these heuristics, and the applicability of any given measure is entirely dependent on context. Despite challenges in data collection and analysis, navigation analysis findings are typically too valuable to disregard, and they have a host of applications from scenario analysis to behavioral modeling. One way to use navigation analysis findings in context is to apply them to augment page traffic statistics. 

Performing navigation analysis on the most popular content pages of the site shows the paths users traverse to arrive at and depart from those pages. Looking for trends in visits to content versus navigation pages can also be indicative: if navigation pages are among the site’s most popular pages, there is probably good reason to spend some time considering ways the site’s navigation might better support user goals. Examining the proportions of visits using supplemental navigation, such as a sitemap or index, can also reveal problems with primary navigation elements. In these cases, however, web analytic data is more likely to point out the location of a navigation problem than to identify the problem itself.
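
As a small, hedged illustration, the fragment below estimates the share of visits that touch supplemental navigation pages such as a sitemap or index; the page URLs are assumptions.

    # Hypothetical supplemental navigation pages.
    SUPPLEMENTAL = {"/sitemap", "/site-index"}

    # Hypothetical visit paths.
    paths = [
        ["/", "/products", "/products/widget-a"],
        ["/", "/sitemap", "/support/returns"],
        ["/", "/site-index", "/about"],
    ]

    using_supplemental = sum(1 for p in paths if SUPPLEMENTAL & set(p))
    print(f"visits using supplemental navigation: "
          f"{using_supplemental / len(paths):.0%}")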

Determining whether content is appropriate to visitor needs and to business goals is a complex problem. To validate the information architect’s analysis of content value for visitors, individual content pages or classes of pages could be examined and compared on such measures as the proportion of returning visitors, average page viewing time, external referrals to the page and visits with characteristics indicative of bookmarking or word-of-mouth referrals. 

Content group analysis is another common approach to measuring the value of website content. Site content performance is often measured by dividing the content into logical, mutually exclusive groupings and monitoring traffic statistics and user behaviors within these content groups. The most useful content group analysis will slice and dice the data across several sets of content groupings. These groupings may include audience-specific content tracks, product-related content comparisons at numerous levels of granularity (often presented in a drill-down report that allows a simultaneous view of every sub-level of a content group) and site feature-specific content groupings.
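
A hedged sketch of slicing page traffic across one such grouping, using a page-to-group mapping invented for illustration, follows.

    from collections import defaultdict

    # Hypothetical mapping of pages to content groups (top level/sub-group).
    CONTENT_GROUPS = {
        "/products/widget-a": "products/widgets",
        "/products/widget-b": "products/widgets",
        "/products/gadget-x": "products/gadgets",
        "/support/returns": "support",
    }

    # Hypothetical page view counts for one reporting period.
    page_views = {"/products/widget-a": 1200, "/products/widget-b": 800,
                  "/products/gadget-x": 450, "/support/returns": 300}

    totals = defaultdict(int)
    for page, views in page_views.items():
        group = CONTENT_GROUPS.get(page, "ungrouped")
        totals[group] += views               # sub-group level
        top_level = group.split("/")[0]
        if top_level != group:
            totals[top_level] += views       # rolled up, drill-down style

    for group in sorted(totals):
        print(f"{group:20s} {totals[group]:6d} page views")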

To determine how well site content matches user expectations, few tools can outperform search log analysis [6]. If analysis of site search query terms reveals a significant disparity between the language that the site visitors use and that which the site employs, the chances are good that the content does not fit user needs. The terms used to find the site in visits referred by search engines are another source of information for comparing user needs to site content, and these results should be considered in proportion to the volume of traffic that search engines deliver to a site. 
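
The comparison can start as simply as the following hedged sketch, which flags frequent internal search terms that never appear in the site's own copy; both word lists are hypothetical.

    from collections import Counter

    # Hypothetical internal site-search queries.
    queries = ["cheap flights", "flight deals", "airfare sale", "cheap flights",
               "low cost flights"]

    # Hypothetical site vocabulary extracted from page copy and labels.
    site_text = "value fares special offers book your journey travel packages"
    site_vocabulary = set(site_text.split())

    term_counts = Counter(word for q in queries for word in q.split())
    missing = {term: count for term, count in term_counts.items()
               if term not in site_vocabulary}

    # Frequent user terms absent from the site's language suggest a vocabulary gap.
    for term, count in sorted(missing.items(), key=lambda kv: -kv[1]):
        print(f"{term:10s} searched {count}x but absent from site copy")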

A final heuristic from Jakob Nielsen merits mention: validating the match between the site and the real world. In Nielsen’s words, “the system should speak the users’ language, with words, phrases and concepts familiar to the user…Follow real-world conventions, making information appear in a natural and logical order” [12]. Closely related to the question of whether site content aligns with user needs is the slightly different question of whether the site vocabulary matches that of the site’s users. If the content is right but the words are wrong, the site will have difficulty attracting its audience. 

A straightforward example shows the results of using a more common word in the navigation labeling versus a more site-centric word. In the original design of a regional live professional theatre’s website, the site section for content related to actors, playwrights, staff and board was labeled “Artists.” In a redesign of the site’s information architecture, the original content and structure of the content in the “Artists” site section was left intact, but the section and its top-level page were renamed “People” in keeping with the more common use of this content category throughout the Internet. Figure 1 shows the immediate impact of this change to visits to that page.


Figure 1. When a page’s name was changed from “Artists” to “People,” visits to the page immediately increased by nearly 90% on average. A T-test of visit numbers over 13 days on either side of the name change event shows a significant difference, with a p-value of 0.0002.
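
For readers who want to reproduce this kind of test on their own data, a sketch using SciPy appears below; the daily visit counts are invented stand-ins, not the figures behind Figure 1.

    from scipy import stats

    # Invented daily visit counts for 13 days before and after the label change;
    # the real analysis used the site's own analytics data.
    before = [42, 38, 45, 40, 37, 44, 41, 39, 43, 36, 40, 42, 38]
    after  = [76, 81, 74, 79, 85, 72, 78, 80, 77, 83, 75, 79, 82]

    t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)
    pct_change = (sum(after) / len(after)) / (sum(before) / len(before)) - 1
    print(f"average change: {pct_change:+.0%}, t = {t_stat:.2f}, p = {p_value:.4g}")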

By comparing the user’s words to the website’s text and mining data from search engine visit referral terms and onsite search queries, web analytics can identify problems with language and terminology. If the site has multiple audiences with significantly different vocabularies, such as investors, doctors and patients, comparing search terms and site text for the pages designed for these specific audience segments offers more targeted evaluation of whether the site’s labels and content meet user expectations. The same search term analysis can also provide insight into what users expected to find on the site but did not. This helps identify business opportunities or gaps in search indexing. For this reason, null results deserve a more thorough analysis than is typically afforded, but at minimum, the frequency of null search results should be monitored for anomalies, in keeping with general error handling analysis.

Conclusion
When the web analyst and information architect are evaluating the same qualities of a website, web analytics user data can serve as verification for information architecture heuristics, even when the available methods may reflect only a few facets of a complex heuristic concept or user experience audit statement. An information architect’s heuristic evaluation of the user experience is often subjective and gains value with a fact-based understanding of the actual user experience. Access to web analytic data during the earliest phases of a website redesign process informs the site’s architecture from the very beginning, ideally allowing for a shorter engagement and resulting in a design that significantly improves the site’s usability for its primary audiences’ needs. In the larger picture, including measurement in site design helps prove the return on investment of intangible investments in web presence. 

Resources
For Web Analytics

Web Analytics Association. (Professional association for web analytics practice) www.webanalyticsassociation.org

Web Analytics Demystified. (“A site dedicated to making web analytics professionals more successful.”) www.webanalyticsdemystified.com

Crazy Egg. (Free analytics tool for identifying location of clicks on a page) www.crazyegg.com

Google Analytics. (Free web analytics ASP using client-side data collection) www.google.com/analytics

Mentioned in the Article

[1] Rubinoff, R. (2004, April 24). How to quantify the user experience. Retrieved April 14, 2007, from http://www.sitepoint.com/print/quantify-user-experience

[2] Chatham, B. (2005). Integrated user profiles boost web analytics. Cambridge, MA: Forrester Research, Inc.

[3] Manning, H. (2004). Persona best practices: Developing your customer research plan. Cambridge, MA: Forrester Research, Inc.

[4] Sterne, J. (2002). Web metrics: Proven methods for measuring web site success (3rd ed.). New York: Wiley.

[5] Peterson, E. T. (2004). Web analytics demystified: A marketer's guide to understanding how your web site affects your business (1st ed.): Celilo Group Media, CafePress. For further information see the Web Analytics Demystified Website (www.webanalyticsdemystified.com).

[6] Inan, H. (2002). Measuring the success of your website: A customer-centric approach to website management. Frenchs Forest NSW, Australia: Prentice Hall.

[7] Peterson, E. T. (2005). Web site measurement hacks. Sebastopol, CA: O'Reilly.

[8] Beyer, H., & Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. San Francisco: Morgan Kaufmann Publishers.

[9] Holtzblatt, K., Wendell, J. B., & Wood, S. (2005). Rapid contextual design: A how-to guide to key techniques for user-centered design. San Francisco: Elsevier/Morgan Kaufmann.

[10] Nielsen, J. (2000). Designing Web usability. Indianapolis, IN.: New Riders.

[11] Rosenfeld, L., & Morville, P. (2002). Information architecture for the World Wide Web (2nd ed.). Cambridge, MA: O'Reilly. 

[12] Nielsen, J. (2005). Heuristics for user interface design. Retrieved April 13, 2007, from www.useit.com/papers/heuristic/heuristic_list.html