Launching a new feature-set is seldom a home run on the first pitch. It can take inning after inning of foul tips, singles, and doubles to iterate your way around the bases.

Without the right approach to analytics, that new experience you’ve carefully crafted is more likely to end up in the feature graveyard — adding to your team’s technical debt, rather than gracing your site’s homepage.

While there’s no shortage of analytics offerings, each with their own turnkey reporting capabilities, meaningful product analysis requires more than out-of-the-box answers. Anyone can copy and paste a report into a slide deck.

Simply regurgitating data, producing what analytics guru Avinash Kaushik calls “data pukes,” isn’t helpful. In fact, data pukes can do more harm than good because they leave the numbers open to interpretation.

Analysis, when done well, has a narrative style that considers the constituents from C-level to individual contributor. It produces a visceral desire to know what happens next — a clear path through a sea of data that moves us forward and never leaves us feeling lost.

It provides the back-up for how conclusions are reached, particularly when contesting strongly held views. It blends the quantitative with the qualitative to supplement the ‘what’ and ‘how much’ with the ever-important ‘why’ behind the data. And, most importantly, complete analysis has actionable recommendations.

All of this might sound complex; something suited for a well-trained analyst. And while thoughtful analysis does take some rigor, it doesn’t require an advanced degree — just a curious mind and a systematic approach.

Here’s my battle-tested process for effective, in-depth, feature analysis from resourcing to presentation.

Since there’s a lot of ground to cover, this is a two-part post.

Part I covers planning, resourcing, tracking integration, and testing — the crucial first steps that make effective analysis feasible. Part II covers analysis and presentation. Both can be found below.

Because examples are helpful, let’s assume the feature is a multi-step process like an e-commerce product finder experience. But any multi-step process will do. As for analytics packages, I’ll reference Google Analytics (GA) since it’s ubiquitous, but most of the top players have similar capabilities.

Resourcing

Proper analysis starts at a feature’s inception. Any product manager will tell you to define your success criteria early, whether it’s improved conversion, new accounts, or better retention rates.

However, the core metric is the tip of the iceberg.

The first, and arguably most crucial phase in analysis planning, is resourcing. Resourcing occurs on two levels.

First, when you’re pitching a feature and advocating for time on the roadmap, block out additional development time beyond what’s needed for the initial launch.

Not only do you want to make sure you budget time to integrate your metrics (typically measured in hours, not days), you also want to account for time to iterate post launch.

Roadmaps get full fast. If you don’t carve out time in advance for iteration you’ll lose out. Set this expectation early; otherwise you’ll get valuable data and not be able to do anything with it.

Planning & Integration

Thoughtful tracking planning and integration are crucial to effective analysis. For a complex feature set, I break tracking into two groups: business metrics and UX metrics. Your business metrics are typically pretty easy to measure thanks to GA’s Goals and Advanced Segment capabilities.

User-experience-driven metrics, like funnel completion rates, take a bit more configuration, but are no less important. After all, a successful experience, one in which the overwhelming majority of users complete your multi-step process, is far more likely to produce a successful business result. What’s key to both is clearly defined site Goals.

The free Google Analytics version offers up to 20 configurable Goals that can be defined on a few dimensions such as reaching a destination URL (great for our example), hitting an engagement threshold (like page views per session or time on site) or even triggering a custom ‘Event’ (more on that later).

Custom URL goals can also include a funnel report that measures completion rates page-by-page — perfect for measuring the efficacy of multi-step processes. Since Goals are limited they are great for high-level business and user experience objectives that don’t change from release to release.
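One wrinkle worth noting: if your multi-step flow lives on a single page, the steps won’t have distinct URLs for a destination Goal or funnel to match. A common workaround is virtual pageviews. Here’s a minimal sketch using analytics.js, where the ‘/finder/step-N’ paths and the trackFinderStep helper are hypothetical, chosen to match whatever Goal and funnel configuration you set up:

```js
// Minimal sketch: give each step of a single-page 'finder' flow a virtual
// URL so a destination Goal and its funnel can match on it.
// The '/finder/...' paths and this helper are hypothetical examples.
function trackFinderStep(stepNumber) {
  ga('set', 'page', '/finder/step-' + stepNumber);
  ga('send', 'pageview');
}

trackFinderStep(1); // call as each step renders
```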

Detailed Usage Tracking

While GA Goals and segmentation analysis will enable you to tell if your users are successfully completing your process and driving your business objectives, it’s the detailed usage tracking that will put you on the path for how to iterate on your feature.

If you see a 20% drop between two stages of your funnel, knowing things like scroll depth, cursor insertion, error rates, or time on page will indicate what to iterate on to bring up the larger success metrics.

In GA, these interactions are measured with explicitly crafted ‘Events,’ which must be custom-configured in the client-side code. Map your Events out field by field, and be sure to establish an intuitive labeling scheme so your reports logically align with your interface. Avoid generic names like “homepage-test” in favor of specific names like “primary-module-hover.” Remember your data has multiple audiences, so try not to make the organization too complex.
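To make the labeling scheme concrete, here’s a minimal analytics.js sketch. The selectors, categories, and labels are hypothetical examples; the point is that every name maps directly to something visible in the interface:

```js
// Sketch of Event tracking with interface-aligned names.
// Selectors and category/action/label strings are hypothetical.
document.querySelector('.primary-module').addEventListener('mouseenter', function () {
  // ga('send', 'event', category, action, label)
  ga('send', 'event', 'finder', 'hover', 'primary-module-hover');
});

document.querySelector('#inline-help').addEventListener('click', function () {
  ga('send', 'event', 'finder', 'click', 'inline-help-open');
});
```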

An approachable core data source is extremely valuable; especially as your team scales.

Of course, most site changes don’t require such an extensive approach. You may just want to influence a single metric like improved scroll rates or bounce rate reductions.

If you find yourself questioning how much to invest in tracking, ask yourself whether the item is something a user can experience. If it is, track it. Better to track it and never need it than to skip it and regret it later.

Qualitative Metrics

When preparing your integration, consider what your analytics package can’t tell you. Most quantitative analytics packages won’t tell you how users regard a feature’s helpfulness, nor will they give you customer suggestions. And while a ‘faster horses’ approach rarely leads to great experiences, knowing the emotional drivers behind behaviors can produce powerful insights. That’s where qualitative research comes in handy — layering attitudinal findings on top of the behavioral.

The good news is that there are a number of qualitative research options; many of which are fairly turnkey and inexpensive. From targeted email surveys and on-site intercepts to traditional focus groups or one-on-one interviews, you’ll find an option for your budget and skill set.

Whichever method you choose, consider including structured questions around key topics like helpfulness and ease of use. Having answers on a numerical scale is great for benchmarking as you iterate. If your audience includes management consultants or board members, consider integrating something with industry benchmarks, like a Net Promoter Score (NPS).
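For reference, NPS is just arithmetic over a 0–10 “how likely are you to recommend us” scale: the percentage of promoters (9–10) minus the percentage of detractors (0–6). A quick sketch, with made-up scores:

```js
// Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
function netPromoterScore(scores) {
  var promoters = scores.filter(function (s) { return s >= 9; }).length;
  var detractors = scores.filter(function (s) { return s <= 6; }).length;
  return Math.round(100 * (promoters - detractors) / scores.length);
}

netPromoterScore([10, 9, 8, 7, 6, 3, 10]); // → 14
```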

While qualitative research takes time, much of it can be done with little to no engineering resource. You’ll find the investment time well spent. Quantitative findings invariably lead to qualitative questions, and vice versa.

Testing

Like any good feature, you need to test your tracking before launch. There are a couple of ways to approach this. First, you can simply use a browser debugging tool to make sure your Events are appropriately firing as you test the experience.

A free Chrome browser extension called the ‘Event Tracking Tracker’ is worth the download. Second, you can create a staging or test instance of your GA profile, which monitors your test infrastructure and not your production servers.
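One common way to wire up that staging instance is to pick the tracking ID by hostname when the page loads. A minimal sketch, with hypothetical property IDs and hostname:

```js
// Route staging traffic to a separate GA property so test Events never
// pollute production data. IDs and hostname are hypothetical placeholders.
var trackingId = window.location.hostname === 'staging.example.com'
  ? 'UA-XXXXXXX-2'  // staging property
  : 'UA-XXXXXXX-1'; // production property
ga('create', trackingId, 'auto');
ga('send', 'pageview');
```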

I recommend doing both. There’s nothing worse than discovering after launch that your tracking data is broken. Finally, make a friend on the QA team. Not only can they help test your integration, they also contribute great ideas on what to measure.

With your metrics properly integrated and tested you’re ready for launch (assuming all the other planets have aligned of course).

As a sanity check, verify that the data you want is being captured on your production GA instance the day after it’s live, but hold off on any meaningful analysis until at least a few days have transpired. The amount of time and traffic you need varies depending on your business and the nature of the feature.

PART II

Now that you’ve integrated your metrics, tested them, and launched your new offering, it’s time to dive in and prepare your analysis.

For multi-step features, like the product ‘finder’ experience in our example, I typically divide feature analysis into five buckets:

  1. Traffic & Merchandising
  2. Completion & Utilization
  3. Customer Perception
  4. Executive Summary
  5. Recommendations

This structure serves as the outline for your presentation deck and nicely follows a user’s path from discovery to completion, which can be grounding should you have any linear thinkers in your audience.

Traffic & Merchandising

Before diving into a feature’s performance, offer some context on the efficacy of promotion. There are two benefits to this approach. First, starting with feature discovery is a nice way to get your audience thinking about the customers’ journey.

Second, feature merchandising, and the resulting traffic, is just as important as the feature itself. If your users cannot find it or, worse, don’t find the positioning compelling, then it doesn’t matter how great it is.

To tell this story effectively, weave screenshots of each promotional placement, alongside the corresponding click data, into your presentation. Include other, similarly treated promotional elements and their respective click-conversion numbers as well.

Without this context, raw numbers standing alone are simply data with little value. “Great, 50k unique visitors saw the ‘Finder’ last week. Is that good?”

Finally, make sure you check your traffic sources. Understanding the nature of your traffic is key to understanding utilization. A large email blast to your house file can drive lots of less-qualified traffic, which can result in significantly different behaviors than a simple on-site promo would.

Completion & Utilization

When it comes to in-feature behaviors, I recommend starting with the headline. In multi-step processes, that’s your funnel completion rate. Funnel reports are powerful and shouldn’t be reserved for checkout metrics alone.

Give your abandonment rate at each layer of the experience along with top exit choices. Where your customers are heading can be illuminating as to why they left – your customers are voting with their clicks.
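The underlying math is simple; what matters is presenting it step by step. A sketch of the arithmetic, using hypothetical per-stage session counts like those a GA funnel report provides:

```js
// Step-to-step abandonment from per-stage session counts (hypothetical data).
var funnel = [
  { step: 'landing', sessions: 5000 },
  { step: 'step-1',  sessions: 3200 },
  { step: 'step-2',  sessions: 2600 },
  { step: 'results', sessions: 2100 }
];

funnel.slice(1).forEach(function (stage, i) {
  var prev = funnel[i]; // slice shifts the index, so funnel[i] is the prior stage
  var abandoned = 100 * (1 - stage.sessions / prev.sessions);
  console.log(prev.step + ' → ' + stage.step + ': ' + abandoned.toFixed(1) + '% abandoned');
});
```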

Again, don’t show your funnel alone. Marry the data to the on-screen experience. Chances are your audience doesn’t have each screen memorized like you do.

Keep the data and the experience apart and you’ll find yourself pogo-sticking back and forth from slide to site – which isn’t a great way to keep your audience from looking at their phones.

With completion rates addressed your audience is ready to dive into more detailed performance metrics. As I stated in Part I, your Google Analytics (GA) Goals will help tell you if you’re being successful against your business and broad UX goals, but it’s the detailed usage tracking that will put you on the path for how to iterate on your feature.

For GA users, this is where you’ll rely heavily on custom ‘Event’ tracking. What volume of users experience an error and on which fields? What is the usage rate for inline help content? Which UI elements go unused?

Don’t offer raw numbers — give percentages. With some simple math every UI element has a conversion number, so make sure you can measure yours. Each of these is a dial you can influence through a refined design.
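As an example of that simple math, a UI element’s conversion is just its unique interaction count divided by the unique pageviews where the element appears. With hypothetical numbers for the inline-help Event from earlier:

```js
// A UI element's "conversion": unique Events divided by the unique
// pageviews of the screen that shows it (hypothetical numbers).
var helpOpens = 480;     // unique 'inline-help-open' Events
var stepTwoViews = 2600; // unique pageviews of the step showing the link
console.log((100 * helpOpens / stepTwoViews).toFixed(1) + '% opened inline help');
// → 18.5% opened inline help
```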

And sometimes those refinements are the difference between a feature’s sunset and its continued promotion.

For each of your slides, make sure you have your back-up. In site analytics terms, your back-up is the path to replicate your findings.

Anyone who has spent time in GA will tell you that it can be hard to replicate a carefully crafted query if you’re not paying attention. There are often several ways to approach basically the same ‘ask’ of your data, so it’s easy to come up with a different approach and a different result.

This becomes particularly important when you want to revisit queries as you iterate and look for performance improvements. Additionally, if you’re challenging someone’s pet project or a widely held notion, you should expect challenges to your methodology. Be ready.

For bonus points, consider running a cohort analysis. A cohort is a defined group of users at a point in time. In our example you can easily take Advanced Segments of users who did, and did not, utilize the product finder and compare their respective cohorts.

From there you can see if users of your feature exhibit other positive behaviors like stronger user retention, longer session duration, increases in other goal completions, and of course, transactions.
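Building those Advanced Segments is easier if finder usage is stamped on the session itself. One approach, an assumption on my part rather than something GA requires, is a session-scoped custom dimension set the moment a user enters the finder:

```js
// Mark sessions that used the finder via a session-scoped custom dimension.
// Assumes 'dimension1' was registered in the GA admin UI; the names are
// hypothetical. An Advanced Segment on this dimension splits the cohorts.
ga('set', 'dimension1', 'used-finder');
ga('send', 'event', 'finder', 'start', 'finder-entered');
```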

Customer Perception

After you’ve walked through usage data, your audience is apt to have questions. As stated in Part I, quantitative findings address the ‘what’ and ‘how many’ questions, but often lead to qualitative questions of the ‘why’ variety.

This is the circular defense of good metrics: qualitative backs up quantitative, and quantitative backs up qualitative. Absent any qualitative findings, snap judgments get voiced, hidden agendas may bubble up, and office politics can rear its ugly head.

Hearing directly from customers on the usefulness and appeal of your offering, in addition to their suggestions, can dispel many a pet theory and put your meeting (and your product) back on track.

Executive Summary

If you’ve crafted a clear narrative backed with key insights and data points then the heads should already be nodding.

Obviously your initial business goal is the headline. If you’re not hitting your initial objective, don’t shy away from it. Do you need a handful of tweaks or a substantial pivot?

Your summary is your opportunity to demonstrate that you have a thorough understanding of what’s transpiring and why. Your supporting points should follow the flow of the story to drive home the key messages from promotion to completion and customer regard to cohorts.

Fortunately, thanks to your diligent preparation, you’ve avoided the worst possible outcome – immeasurable results.

Recommendations

Finally, it’s time for your recommendation set. As with the summary, these should nearly be foregone conclusions. Keep your recommendations targeted and specific.

Each recommendation should address the questions of “what,” “why,” and “how much.” If you have time, engage your design team and sketch UI modifications to bring ideas to life. Even if you have pre-approved time to iterate on the feature post-launch, come prepared to defend your ideas. If there’s one thing you can count on, it’s that everyone wants a piece of the roadmap.

If you’re in a mid-size or large organization, make sure you have a groundswell of alignment before the meeting. The larger and more political your organization, the more the meeting before the meeting matters.

As for the presentation itself, don’t get too worked up about it. If you’ve done your homework, you’ll know 10x more than your colleagues. Focus on making sure your analysis tells a clear story.

Start by writing slides by hand to rough out the flow and basic contents. This will keep you from wasting time on polish before you’re ready.

Once you have a draft in place, pilot your findings with a friendly (but critical) audience to find any head-slapping gaps or errors before the larger group. However, even with a dry-run, you’ll undoubtedly get some questions you hadn’t considered.

Visually capture any inquiries as you go so your audience feels they’ve been heard, then circle back later. Following up will build trust in you as an analyst over time.

If you present a slide deck like this to an audience that has only seen data pukes in the past, they will be floored by the depth of analysis that can be done with modern analytics techniques.

You’ve presented them with a data-driven story that details how your feature was received by its audience, why it performed against its goals the way it did, and where it needs to go next.

The feature is now in the seventh-inning stretch and you know which pitcher to put on the mound.

About the author: Alex Berg is the Director of Strategy & Analytics for Fell Swoop – a digital design firm in Seattle. Prior to Fell Swoop Alex held leadership roles with Ritani, Wetpaint, Expedia, and Blue Nile. Follow him on Twitter @alexwberg.
