Avoid the Redesign Trap

Website redesigns.

They can sound like a great idea when your website is beyond repair. Often, the research that’s done for this type of project leads to an experience that looks better, feels better and just HAS to be better. Unfortunately, this better-looking, better-feeling experience sometimes comes with a lower conversion rate than the one you’re already used to.

It happens ALL THE TIME.

Why it can happen

When you completely overhaul a website you get rid of all the bad things, but you also get rid of all the good things.

Think about it. You or your marketing team spent a lot of time tweaking and optimizing that old website to get the most out of it. There was probably a lot of good stuff in there.

Now it’s replaced with a shiny new website built on modern research and by highly-paid experts. But it’s new. Nobody has spent any time optimizing it yet. It’s like you started scaling a new mountain.

To make it worse, because so many things change in a redesign, it’s very difficult to identify the exact changes that are causing the conversion rate decrease. 

So what should we do?

Sometimes we just have to reinvent and redesign. Incrementally optimizing something that is obviously not what you need doesn’t make sense.

Imagine if all we had ever done was incrementally optimize the first website ever.

The first website ever, August 6, 1991. The Internet would be much different if all we ever did was incrementally optimize this bad boy.

Follow a few guidelines, apply your best judgment about what will work in your organization, and you can reinvent your digital experience while minimizing the risk to your conversion rate.

#1. Build a project plan that includes optimization

What often happens: Company X starts planning for the launch of its new website. The plan includes resources for research, design, content, production, quality assurance and launch, all laid out in a nice schedule. It’s a necessary step that helps allocate teams, uncover gaps, estimate costs and set expectations with stakeholders or clients.

The problem: We assume that our project is going to yield the optimal scenario – a great-looking website that converts better. Because of this, we assume the team can move on to other work after the launch, we schedule other work that depends on the new website, and we fail to give ourselves a safety net.

It’s a huge gamble (many times a multi-million dollar one). 

Build your plan with optimization in mind: With any website redesign, the expectation must be set early that optimization will be needed. If it turns out it isn’t needed because you nailed it the first time, you can use those dedicated resources to make that great site even better; but don’t bet on it. Everyone, up to the CEO or VP sponsoring the project, must understand that launching a new website is not the objective. The objective is to launch a website that increases revenue, leads, content consumption or whatever your KPI is.

Plan plenty of time for optimization and make sure everyone knows it will be needed. 

#2. Minimize your risk with A/B testing

What often happens: Company X sets a launch date. On this date, the old website ceases to exist and the new website is shown to all visitors. For better or worse, they take the plunge.

The problem: In this scenario all visitors are exposed to the new website and the old website is history. Again, this is a huge gamble.

If the new website has a lower conversion rate, it will have a negative effect on your ENTIRE user population, leading to huge losses that can be difficult to recover from. Even if you analyze website performance pre- and post-launch, you still expose yourself to extreme risk. Additionally, pre/post comparisons include many environmental differences, making a fair comparison impossible.

Always run an A/B test: Before deploying the new website to everyone, run an A/B test with only 10%-15% of your visitors. During this initial test, measure all critical KPIs, and analyze any differences between version A and B.

This quickly uncovers whether the new website is better or worse than the old one. It also gives you a glimpse of your future conversion rate before you open the floodgates. If you learn that your new website converts lower than your previous one, you now have the opportunity to learn, analyze, and optimize before letting all your visitors see it.
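To make this concrete, here is a minimal sketch of how you might compare the two experiences once the split is running. It assumes you have already pulled raw visitor and conversion counts for each experience out of your testing or analytics tool; the numbers below are made up.

```python
# Minimal sketch: compare conversion rates from a 10%-15% A/B test split.
# Visitor and conversion counts are assumed to come from your analytics or
# testing tool; the figures below are hypothetical.
from math import sqrt

def compare_conversion(visitors_a, conversions_a, visitors_b, conversions_b):
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled two-proportion z-test for the difference in conversion rates
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return rate_a, rate_b, (rate_b - rate_a) / se

old_rate, new_rate, z = compare_conversion(
    visitors_a=45_000, conversions_a=1_350,  # old site (control, ~90% of traffic)
    visitors_b=5_000, conversions_b=130,     # new site (~10% of traffic)
)
print(f"Old site: {old_rate:.2%}  New site: {new_rate:.2%}  z = {z:.2f}")
# A common rule of thumb: |z| >= 1.96 before calling the difference real
# at roughly 95% confidence.
```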

#3. Have a measurement plan

What often happens: Nothing out of the ordinary. Most companies have reports that track advertising campaigns, email campaigns, visitor count, conversions, conversion rate and some basic engagement metrics.

The problem: Regular reports usually do not provide you with actionable insights. If conversion rate decreases after a redesign, it could be for 100 different reasons. It’s similar to getting a fever – you know something is wrong, but you don’t know exactly why. If you have just regular reports when you launch a new website, you may know what happened as a result of the new site, but you’ll have no idea why. 

Do some investigative analysis: Prepare to do some serious investigative analysis as soon as that new website has any user behavior data. Before you even start the redesign, do the following things:

  • Create a measurement plan that clearly defines how the new website will be evaluated against the old one. This will help you and all stakeholders understand how success will be measured.
  • Assign your best analyst or analysts to the task of identifying the pages on the new website that have poorer engagement than their counterparts on the old website – and, if possible, why (a rough sketch of this comparison follows the list).
  • Make sure everything you plan on measuring and investigating has proper tracking. It’s pretty frustrating to begin an important analysis only to find out that there’s no tracking or it’s inaccurate.
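As a rough illustration of that page-level comparison, the sketch below assumes you can export per-page engagement metrics for both sites to CSV; the file names and columns (page, bounce_rate) are hypothetical stand-ins for whatever your analytics tool gives you.

```python
# Rough sketch: flag new-site pages whose engagement is worse than the old site.
# Assumes two CSV exports with hypothetical columns: page, bounce_rate (0-1).
import csv

def load_pages(path):
    with open(path, newline="") as f:
        return {row["page"]: row for row in csv.DictReader(f)}

old_pages = load_pages("old_site_pages.csv")
new_pages = load_pages("new_site_pages.csv")

for page, new in new_pages.items():
    old = old_pages.get(page)
    if old is None:
        continue  # page has no equivalent on the old site
    delta = float(new["bounce_rate"]) - float(old["bounce_rate"])
    if delta > 0.05:  # more than 5 percentage points worse
        print(f"{page}: bounce rate up {delta:.1%} vs. the old site")
```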

#4. Let the data guide your optimization

What often happens: The new website launches and conversion rate decreases. Top-level metrics are often tracked and evaluated, but web usage patterns are not investigated. Eventually, a design/user experience expert is brought in to fix the problem. Company X then implements creative changes dictated by this expert and crosses its fingers once again that the conversion rate increases.

The problem: Without investigative data, you really don’t know where the problem areas are. This leads to poor attempts at fixing the situation. Design and user experience experts can certainly help, but without carefully analyzing site usage patterns, you severely decrease your chances of success.

Let the data guide you: Take the time to understand how visitors are using the new website. Bring in that awesome analyst to investigate the situation and draw insights from the usage data or visitor feedback (surveys and user testing are great tools for this). Then use those insights to steer the design and user experience experts in the right direction.

#5. Have a flexible schedule

What typically happens: A timeline is created with hard dates in order to fit other projects into the year’s schedule. The new website must be launched on a specific date and all work on it must stop at a certain point so that the people on that team can start working on other important projects.

The problem: If the new website doesn’t convert as well as the old one, and the team is no longer available to do some much-needed optimization, bad things will happen.

Having hard timelines on such an important project puts you between a rock and a hard place – you’ll either have to live with the new website’s poor conversion or put off other important work.

Plan to win, not to finish on a certain date: Set the expectation that the new website project does not end until the new experience is at least as effective as the old one. The goal should be to create something better, not meet a deadline.

The team working on this project needs enough time to run multiple iterations of analyzing web behavior, drawing out insights and A/B testing their hypotheses until they have an experience that improves business KPIs to an acceptable level.

In summary

Launching a new website that hurts conversion is worse than having done nothing at all, but I’m not suggesting that you shouldn’t try to innovate and redesign as needed.

Most of the time, website redesigns produce insightful research, great design and more efficient code. However, the best teams understand the need to learn how their audience uses the new experience and aim to deliver something that looks great, feels intuitive and converts higher than what they had before.

Anything else is gambling.

The ROI of Reporting is $0

Scenario #1: Reporting Data

Let’s pretend that I sent you an email nicely communicating that you earned $75,000 last year. In this email I broke down how $65k was from your salary, $5k was from your bonus and $5k was from some stock dividends.

How much is that email worth to you?

What if I included pie charts and graphs that compare your income to the average person in your state and showed your income growth over the last 5 years? What’s it worth to you now?

My best guess would be something close to $0.

Actually, it would probably have negative value, because I wouldn’t have figured that out for free.

Scenario #2: Using Data to Drive Action

Now, instead of just telling you what happened last year, what if I also told you that your three co-workers, who aren’t nearly as capable or experienced as you, earned 30% more than you did? Also, I just looked up that old extra car you haven’t driven in 4 years, and you could probably get about $10k for it on eBay – not to mention the savings on registration and insurance once you get rid of it.

What is this new information worth? Well, if you successfully ask for and receive a 30% raise and sell that car, my email was worth at least $17k (before taxes)!

There’s Only Value in Analytics if You Act on It

Everybody has web analytics nowadays. Google gives it away for free (or at least in exchange for your data) and even provides you with an easy-to-read dashboard right out of the box. But if all you’re doing with your data is simply reviewing it and keeping score, then I hope you’re not paying too much for it. There’s little value in that.

Now, if you hire an analyst to dig into your data and discover something that could help increase your web conversion or optimize your ad campaign mix, that’s worth something. Assuming you act on this newfound information, this analytics stuff can have some serious value.

Of course, that analyst probably won’t be free, but if you don’t hire her, you’ll never know what to improve. You might simply keep doing what you’re doing, missing a ton of opportunities along the way. Or worse yet, you might take action based on somebody’s opinion and make a bad move that hurts your business.

What About A/B Testing?

What if you asked for a raise and didn’t get it? You could still try selling that old car and gain something, but that’s not really the point. At the very least, if you don’t get the raise you now learned that you’re underpaid and what you may be worth in the job market. You have new knowledge that gives you options and puts you in a better position. If you never tried for that raise because you didn’t know what you were worth, you’d be stuck where you were and not know what you’re missing. That’s called being a sucker.

Testing is the smartest action to take when you have an insight that may help you reach your objectives faster. Even if you don’t get what you hoped for, at least you’ll learn that it wasn’t a good idea and not go down that path.

Get a win or avoid a loss – that’s some good ol’ ROI.

Why a Failed A/B Test is a Great Thing

I’ve been reading Failing Forward by John C. Maxwell, and in it, he describes how failing is a necessary step to being truly successful. It made me think of how many companies that start A/B testing encounter “unsuccessful” tests and let that experience become a roadblock to building a successful optimization program. These “failures” can even lead to a loss of credibility throughout the organization, which makes it difficult to expand the program or even keep going.

In reality, a “failed” A/B test can be as good as finding a new experience that increases conversion, and it is a much better option than deploying a new experience without an A/B test. However, just as being successful requires having the right perception of failure, gaining something from a “failed” test requires a clear understanding of WHY it is actually a good thing.

A “FAILED” A/B TEST CAN SAVE YOU

What’s better, a company that reports no growth in their quarterly report or one that reports negative growth?

To put it another way, what is more likely to get you fired? Failing to get that new $10 million client or losing a once loyal $10 million client?

The benefit of running an A/B test where your test experience performs worse than the original is that the test reveals the bad experience while only a slice of your traffic sees it. If it wasn’t for the test, you could have just deployed that bad experience to everyone. It saves you from your own and others’ bad ideas!

Whether you conduct an A/B test or just deploy a new experience without a test, the same three outcomes are possible:

  1. A higher conversion rate than the old experience
  2. A lower conversion rate than the old experience
  3. The same conversion rate as the old experience

As you can see, two out of the three possibilities don’t favor finding a better experience. This means you have to be prepared to face a situation where your test doesn’t yield a better experience and be able to communicate to others why it’s still a good thing – especially at the beginning, when you are trying to gain momentum and build a culture that embraces learning.

In an A/B test where your new experience is worse than the original, you will realize that you are better off with what you already have. You can then turn off the test and continue with your existing experience with minimal impact to your business. If you set up a well-designed experiment with proper tracking, you can even isolate exactly what doesn’t work, which can lead to other improvements across your website or to new ideas – but that’s a topic for another post.

PLAYING WITH FIRE: DEPLOYING WITHOUT A TEST

Some companies just like to pull the trigger without testing and then try to analyze the before and after results. If you deploy something without an A/B test, you subject your entire audience to an experience that may or may not be better than what you already had. Because you’re likely trying to run a business, this can have serious financial implications. There’s another word for this; it’s called gambling.

To make matters worse, when you try to compare new-experience data to old-experience data, it is difficult to determine how much of the difference is due to the actual experience change. Many factors come into play when you compare two different time periods, making it impossible to really understand the impact. Here is just a small sample of those differences:

  • Seasonality
  • Different prices and discounts
  • Varying competitor tactics
  • Weather changes
  • Holidays
  • Important news around the country/world
  • Technological problems at your company
  • Tracking differences
  • The list goes on!

In a worst-case scenario, some of these factors may even create the perception that a new, inferior experience is better than the previous one. This creates a false positive that can be very detrimental if you don’t realize the mistake early on.

Depending on your level of risk aversion, avoiding a disaster can be just as good as, if not better than, finding something better. And although avoiding a major mistake never gets the glory of making things better, those who know better realize it is an invaluable ability to have.

Optimizing for Customer Lifetime Value

A/B split testing, when accompanied by a sound process and methodology, is an effective way of optimizing conversion rates. However, for many companies that have subscription business models, conversion is only the beginning of a longer relationship with customers. For these companies, Customer Lifetime Value (CLV) is the most critical KPI of all.

CLV Challenges in A/B Testing

Measuring CLV of an A/B test on your website can be a challenge for various reasons. The following are some examples:

  • Using conversion rate as an indicator: A conversion rate increase does not always indicate an increase in CLV. A company may increase conversion by testing a discount off a new customer’s first month, but if these new customers are less likely to renew, it can actually hurt CLV.
  • Testing period vs. CLV period: CLV can take months to accurately calculate. Among other things, companies must consider conversion rate, average order value, renewal rate (i.e. “stick rate”) and future upsells. It is rare and often impractical for a website A/B test to run for multiple months or years. Usually A/B tests last only a few weeks, which can make it difficult to connect the content tested on the site to its long-term effect (a back-of-the-envelope sketch of this timing problem follows the list).
  • Extreme sensitivity to slight inaccuracies: Many of the popular testing platforms use JavaScript to swap and deliver content. Because browser settings and connection speed can affect JavaScript, sometimes it doesn’t execute 100% as desired and you can get a small number of unwanted effects. Because CLV is affected by compounding factors such as renewals and upsells, it is extremely sensitive to undesired testing effects caused by JavaScript weirdness or test setup mistakes.
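To see why a few weeks of testing can’t capture CLV, here is the back-of-the-envelope estimate referenced above, built only from the inputs named in the second bullet (first order value, renewal “stick rate”, upsells). All numbers are hypothetical, and the simple geometric retention model is an assumption for illustration.

```python
# Back-of-the-envelope CLV estimate; every number here is hypothetical.
first_order_value = 50.00      # average order value at conversion
monthly_renewal_value = 20.00  # subscription renewal amount
avg_upsell_value = 5.00        # expected upsell revenue per renewal
stick_rate = 0.85              # probability a customer renews each period

# With a simple geometric retention model, expected renewals per customer
# are stick_rate / (1 - stick_rate).
expected_renewals = stick_rate / (1 - stick_rate)

clv = first_order_value + expected_renewals * (monthly_renewal_value + avg_upsell_value)
print(f"Estimated CLV: ${clv:,.2f}")
# A two-week A/B test only observes first_order_value; the renewal and upsell
# terms play out over months, which is why conversion rate alone can mislead.
```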

Tips to Creating a Test Built for CLV Measurement

Working with a client seasoned in the discipline of direct marketing and testing, we used the following tactics to create A/B tests that make it possible to accurately measure the impact on CLV:

  • Use unique IDs for each test experience: To measure the CLV of an A/B test, tie conversions to renewals and upsells with a unique ID per test experience. For example, your website can assign one unique ID to experience A orders and a different one to experience B orders. These unique IDs should show up in your transaction data and also be passed to your customer tracking system. The IDs can be used to tie CLV back to specific test experiences and to continue measuring test impact long after the test has been deactivated (a rough sketch of this approach follows the list).
  • Don’t trust summary reports. Analyze detailed results: All testing platforms provide summary reports of how each test experience performs against the control. However, this type of reporting lacks the detail required to determine whether you’re looking at clean results, or to analyze by the unique ID described above. Some popular platforms provide detailed transaction data containing fields like product IDs or descriptions, number of products per order, revenue per order, transaction IDs and other fields you can customize. With this level of detail you can review results carefully and identify any discrepancies that might inaccurately influence your CLV.
    Detailed test data in Test&Target can be found in the “Type” drop-down under “Audit”
  • Be creative and work around the tech imperfections: When small data inaccuracies have a big impact on long-term ROI, even a small amount of bad data is unacceptable. For some of our clients, a 0.5% shift in conversion can have as much as a $20 million impact! Recently, we were struggling with a testing platform inaccurately assigning unique IDs about 5% of the time: IDs from experience A were being applied to orders in experience B and vice versa. To solve this problem, we used Test&Target to split traffic and reload the test pages with a unique campaign code per experience. The campaign code was then fed into a content management system that displayed the test experience associated with the code. That unique ID was connected to users in the appropriate test experience and passed along with the order information. The result: the 5% inaccuracy was reduced to 0.1%.
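The sketch below illustrates the unique-ID idea in the abstract: tag each order with its experience ID, then join the accumulated transactions (orders, renewals, upsells) on that ID months later. The function names, field names and file name are hypothetical and not tied to Test&Target or any other platform.

```python
# Hypothetical sketch of per-experience IDs for long-term CLV measurement.
import csv
from collections import defaultdict

EXPERIENCE_IDS = {"A": "EXP-A", "B": "EXP-B"}

def tag_order(order, experience):
    """Attach the experience ID before the order is written to the transaction
    system and passed to the customer tracking system."""
    order["experience_id"] = EXPERIENCE_IDS[experience]
    return order

def clv_by_experience(transactions_csv):
    """Average revenue per customer for each experience, computed from a
    transaction export that includes orders, renewals and upsells."""
    revenue = defaultdict(float)
    customers = defaultdict(set)
    with open(transactions_csv, newline="") as f:
        for row in csv.DictReader(f):
            exp = row["experience_id"]
            revenue[exp] += float(row["amount"])
            customers[exp].add(row["customer_id"])
    return {exp: revenue[exp] / len(customers[exp]) for exp in revenue}

# Months after the test ends:
# print(clv_by_experience("transactions.csv"))
```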

In Summary

If your business decisions are made based on CLV, then that is the KPI that needs to be measured in any optimization effort. Other KPIs like conversion rate and revenue are good indicators, but ultimately they are only inputs to CLV.

By using unique IDs per experience and sharing them with your customer tracking systems, you can tie everything together and continue to review performance after your test period has ended. Analyzing detailed test transaction data can help you confirm your data integrity or uncover inaccuracies that might otherwise lead to poor decisions. And finally, being creative with your test setup and deployment can help you overcome imperfect out-of-the-box testing solutions.

What Type of Optimization Tests Are There?

Recently, one of my colleagues asked me about the different types of conversion optimization tests that are possible. I started to explain the differences between A/B and multivariate testing, but he quickly stopped me. That was not what he wanted to know.

What he WANTED was to learn about the different types of things that could be tested on a specific web page or process. Some simple examples: testing the hero image, testing the button or even testing an entire redesign. I think this is a question at least a few people out there have, so this post contains a list of different test types along with the pros and cons of each.

Redesign Tests

In a redesign test, everything is fair game and can be changed. You can change, move, remove and add whatever your heart desires in the name of making the page better.

Pros: Redesign tests are great for when you have something that you know is a bad experience and need to try something completely different. At that point having incremental optimization is not going to cut it. To paraphrase one of my favorite authors, Seth Godin, sometimes what you need is to start over, not to optimize something that is bad. You could be climbing up a short hill and be completely missing the giant mountain next to you (Yahoo vs. Google is a perfect example of this).

Cons: When you change too many things at once, it’s practically impossible to learn what helped or hurt your page. You may have used a new hero image that increased conversion by 10% but changed an important description that decreased conversion by 15%. Since you changed both at once, all you would know is that you had a decrease in conversion of 5%, which would cause you to disregard an awesome hero image. Redesigns can be great when done right, but you will give up a lot of learning and they should be used sparingly.

Tip: If you are going to do a redesign, make sure you maximize this opportunity by doing a lot of due diligence in understanding what your visitors want. Don’t just assume that you can think like a visitor. A lot is on the line, and you should make the most of all the work you are doing. Take the time to really dive into your web analytics, do usability testing, read surveys and go through use cases.

Description Tests

Words matter, A LOT. Changing what you say in a page title, photo caption, call to action or product description can have dramatic effects with practically no creative assets needed.

Pros: Description tests are a great way to incrementally improve a page that you don’t think is that bad. Additionally, since most of these tests involve testing some kind of HTML text, you aren’t likely to need a designer. They are also pretty easy to implement because changing text usually isn’t too technically complicated.

Cons: A lot of the time, people don’t take the time to read on the Internet, so a common occurrence with description tests is that you see no conversion difference. In order to make an impact, you need to make sure that you are testing descriptions that people actually read.

Tip: Look through your web analytics to identify any keywords that may drive significant traffic to your page or look through your own internal search from that page to identify any popular keywords. You are more likely to get visitors’ attention if you use keywords or phrases that you know people are interested in.

Promotional Tests

In promotional tests, you can experiment with different prices, promotions and the way you position promotions to determine what visitors respond to most and how much they respond.

Pros: Promotional tests can be very useful in determining an optimal price for your products and how to best position promotions. They give you the freedom to try promotions on a sample population and ensure that you offer only the most effective one to the entire population.

Cons: Promotional tests where you offer discounts can be trickier to interpret. In most cases, larger discounts create higher conversion rates, but if your discounts are larger than your conversion increases, you may start eating into your profits. It takes a little more sophisticated monitoring and analysis to ensure that you are making a good business decision. Additionally, you can really upset your potential customers if you’re not careful. Be wary of offering different pricing to different people. Some big companies have received a lot of bad PR for these types of tactics.
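To illustrate how a discount can lift conversion and still lose money, here is a quick profit-per-visitor comparison; the conversion rates, order value, unit cost and discount are all hypothetical.

```python
# Quick profit-per-visitor check for a promotional test; numbers are hypothetical.
def profit_per_visitor(conversion_rate, order_value, unit_cost, discount):
    return conversion_rate * (order_value - discount - unit_cost)

baseline = profit_per_visitor(conversion_rate=0.020, order_value=100, unit_cost=60, discount=0)
promo = profit_per_visitor(conversion_rate=0.024, order_value=100, unit_cost=60, discount=15)

print(f"Baseline: ${baseline:.2f}/visitor  Promo: ${promo:.2f}/visitor")
# Baseline: 0.020 * 40 = $0.80; promo: 0.024 * 25 = $0.60. A 20% lift in
# conversion still earns less per visitor because the discount ate the margin.
```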

Tip: Try providing the same discount to everyone, but positioning it differently in your test variations. You may find that your customers respond more to a percentage discount (e.g. 25% off) than to a flat dollar discount (e.g. get $50 off $200). Or it’s possible that visitors are insensitive to $9.99 off but are more likely to convert if you offer $10 off.

Image Tests

Sometimes just using a different image on your page can make big differences. If you’re a vacation-planning site, lifestyle images of people having fun may work better than a nice shot of the beach. On the other hand, if you’re selling furniture, visitors may want to see the piece of furniture by itself.

Pros: I think all tests can be fun, but these tend to be especially easy and fun. Imagery is something that people actually pay attention to, and sometimes the right image can really move the needle in the right direction.

Cons: I can’t really think of any cons to image testing. But if you can, please comment!

Tip: Many times less is more. If you have a page with 3 smaller images, try testing one bigger image that makes an impact. When there are too many things competing for your attention, nothing stands out.

Design Tests

This is where you try to optimize by testing colors, font types and sizes, shapes or spacing. These tests are very popular among design-centric brands.

Pros: Design-oriented tests not only can help improve your conversion, they can also allow you to test different design prototypes to ensure they don’t hurt conversion before you release them to the world. Remember that avoiding the implementation of something that hurts conversion is probably more important than finding something that improves conversion.

Cons: Design-oriented tests usually require help from a designer, which can make them more resource intensive. Additionally, it can be hard to find evidence that changing things like button color or font size is likely to increase conversion, so you’re sometimes left creating less data-driven test variations.

Targeting Tests

Many will say that targeting is not the same as testing. I agree, but you can test your targeting approach to ensure you are being as effective as you can be. For example, you may be targeting based on a referring keyword, but conversion rate may be higher if you targeted based on visitors being Mac users.

Pros: Targeting tests can really help you capture hard-to-reach customers by fine-tuning your personalization tactics. These types of tests are the next level once you have harvested all the low-hanging fruit with your other testing.

Cons: Testing different targeting tactics is not as straightforward as testing visual items and requires a pretty deep understanding of your visitors. You will also need to invest in a tool that allows you to target because free tools such as Google Analytics Experiments do not do this.

Tip: For the most part, don’t listen to anyone who says they tried targeting and it doesn’t work. Except in rare situations, targeting will help increase conversion if you find a recipe that resonates with users. As I mentioned in the cons section, good targeting requires a deep understanding of your visitors, but when you figure it out, you can really provide a great experience and improve conversion.

These are some of the more popular types of tests, but it is probably not an exhaustive list. I welcome any of my fellow testing practitioners to add to it or provide their own perspective.