
5 Common Mistakes in Website Testing and How to Avoid Them

Here at Sentient, we are well versed in the world of website experimentation. Whether you are a conversion specialist, marketer, or ecommerce manager, we want to share our take on five common mistakes made in website testing and how to avoid them.

1. Ending Tests Too Early

If you’re not letting tests run long enough, you won’t have enough data to know whether something is a true winner or loser. You need a big enough sample size to get statistically significant results; this may mean running your test for at least a week. After the test is completed, make sure to look at unique results, the impact on conversions, and the statistical significance of the results.

In a recent interview with conversion expert Eric Nalbone, VP of Marketing at Bellhops, we found that tools that let customers check in on and micromanage tests actually encouraged poor testing practices: inexperienced testers would pull out of tests too early, or try to change KPIs after seeing incremental drops, before the tests had run their full course.

If you want a crash course on A/B Testing Statistics, check out this post by CXL. In it you can learn how statistical significance is used, along with other important factors to consider when analyzing your results, such as confidence intervals, margin of error, and statistical power.
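
To make those terms concrete, here is a minimal Python sketch of the two calculations behind most A/B test readouts: the p-value of an observed difference between control and variation, and a rough estimate of how many visitors each variation needs before a given lift becomes detectable. All visitor and conversion numbers are invented for illustration.

    # Minimal sketch of A/B test statistics; the numbers below are made up.
    from math import sqrt
    from scipy.stats import norm

    def ab_test_significance(conv_a, n_a, conv_b, n_b):
        """Two-sided two-proportion z-test comparing variation B against control A."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * norm.sf(abs(z))  # two-sided p-value
        return p_a, p_b, p_value

    def required_sample_size(baseline, lift, alpha=0.05, power=0.8):
        """Approximate visitors needed per variation to detect a relative lift."""
        p1, p2 = baseline, baseline * (1 + lift)
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return int(((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2)

    p_a, p_b, p_value = ab_test_significance(conv_a=200, n_a=5000, conv_b=240, n_b=5000)
    print(f"control {p_a:.2%} vs variation {p_b:.2%}, p-value {p_value:.3f}")
    print("visitors per variation to detect a 10% lift on a 4% baseline:",
          required_sample_size(baseline=0.04, lift=0.10))

If the p-value is above your chosen threshold, or traffic hasn’t yet reached the required sample size, the honest answer is to keep the test running rather than call a winner.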

2. Not centering your tests around a central hypothesis

Your experiments need rhyme and reason, a method to the madness. Testing things at random will produce inaccurate or inconsistent results and waste precious time and money. Testing toward the goal of proving or disproving a hypothesis will help you structure your experiments so that you can reach a valid conclusion. Without one, your tests will be a blob of random actions and events without a central theme, like an essay with no thesis statement. You may still find 10 things to test on the site at the surface level, but without a reason to test you are just guessing at what might drive improvement.

One thing you can do is pay attention to your visitors and your data and think about how and why they do what they do. From this, you can create a hypothesis about how to improve your visitors’ experience so that they are more likely to convert. A simple hypothesis could be: “Customers aren’t confident checking out because the information is not clear.” You may arrive at this hypothesis in a number of ways: interviewing your customers, creating surveys, looking at heatmaps, or looking at where customers drop off in the funnel.

Once you’ve arrived at this hypothesis, you can structure your experiment around solving the issue. For instance, you may make the product photographs larger or make the product details easier to read. You may also include things like “free shipping” or “24/7 customer support” near your CTA buttons to give your visitors more confidence in buying. Creating tests around making the information clearer and more appealing will show you whether your hypothesis holds and whether addressing it does indeed impact your conversion rate.

Hint: Make sure to wrap your hypothesis around customer pain points. Figure out what is causing friction for them and how you can address this issue so you can make the experience better for your customer. Better experiences lead to higher conversion.

3. Only focusing on conversion rate

Pure conversion rate metrics like checkout, add to cart, or book a trip aren’t necessarily everything. You also need to look at other factors that affect your revenue, such as upsells. For instance, if a travel company implements a change to its site that results in more tickets booked but significantly fewer add-ons such as car rentals, the change may not be a good choice, because net revenue may actually go down. Focusing purely on conversions would not paint the whole picture. Think about other metrics you might want to optimize for besides conversions, such as average order value or revenue. This will help you see the whole picture and successfully pursue your goals.
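
As a quick illustration, here is a toy calculation, with every number invented, in which the variation wins on bookings but loses on revenue per visitor once add-on revenue is counted.

    # Toy example (all numbers made up): conversion rate up, revenue per visitor down.
    def revenue_per_visitor(visitors, bookings, ticket_price, addon_rate, addon_price):
        """Average revenue per visitor, counting both tickets and add-ons."""
        ticket_revenue = bookings * ticket_price
        addon_revenue = bookings * addon_rate * addon_price
        return (ticket_revenue + addon_revenue) / visitors

    control   = revenue_per_visitor(visitors=10_000, bookings=300, ticket_price=120,
                                    addon_rate=0.40, addon_price=90)
    variation = revenue_per_visitor(visitors=10_000, bookings=330, ticket_price=120,
                                    addon_rate=0.20, addon_price=90)

    print(f"control:   {control:.2f} per visitor")    # 300 bookings, 40% add a car rental
    print(f"variation: {variation:.2f} per visitor")  # 330 bookings, only 20% add a car rental

In this made-up case the variation books 10% more tickets yet earns less per visitor, which is exactly the trap a conversion-only readout hides.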

4. Failing to optimize for each traffic source

Your visitors come from a variety of backgrounds, from different age and gender demographics to users accessing your site on mobile devices versus desktops. Whatever the mix, you will need to segment your traffic and give each segment a unique experience that resonates with its demographics, device, and desires. For instance, traffic from Facebook will convert differently than visitors who came in through AdWords. You need to test and optimize your site for each of these segments and deliver a specific experience, in terms of site functionality, UX, content, and so on, to each audience segment. Once you’ve mastered this, you can prioritize the sources that drive the most traffic to your site and have the biggest positive impact on your conversions.
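
Here is a minimal sketch, using made-up session data, of reading results per traffic segment instead of relying on one blended conversion rate.

    # Per-segment conversion rates from made-up session records.
    from collections import defaultdict

    # Each record: (traffic source, device, converted?)
    sessions = [
        ("facebook", "mobile", False), ("facebook", "mobile", True),
        ("adwords", "desktop", True), ("adwords", "mobile", False),
        ("email", "desktop", True), ("facebook", "desktop", False),
    ]

    totals = defaultdict(lambda: [0, 0])  # (source, device) -> [visits, conversions]
    for source, device, converted in sessions:
        totals[(source, device)][0] += 1
        totals[(source, device)][1] += int(converted)

    for (source, device), (visits, conversions) in sorted(totals.items()):
        print(f"{source:9s} {device:8s} {conversions / visits:6.1%} of {visits} visits")

With real traffic volumes, a breakdown like this shows which segments are pulling the blended number up or down and which are worth their own tailored experience.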

5. Not testing enough

Testing is not a one-and-done process or something that you do when you get to it. Testing should be a part of your team’s ethos (and not just on your website!). While it is easy to say that you should always be testing, developing new team habits or expanding your testing infrastructure can be difficult and expensive. And if you are not an experienced optimizer, you may not know where to begin.

The Solution

Not to worry, we have some things that can help you organize a test plan and learn some techniques for doing research, developing hypotheses, and running experiments.

  • Digital Marketer’s Guide to CRO: In this guide, we explore how to come up with hypotheses so you understand what needs to be tested, and which resources you can leverage to help with your upfront research.
  • The Big Book of Ideas: Coming up with ideas of what to test, even after you have created your hypothesis, can be challenging. In this video, Sam Nazari, Sentient’s Head of Sales Engineering, explores different ideas you can incorporate into your experiments.
  • Have a test plan: Using a test plan, or having a template you can rely on to organize your experiments and align your team, can really help improve and streamline your testing efforts. There are many templates out there you can try, or you can download our own test plan here.

Whatever your testing situation, we hope you take these pieces of advice to heart. Remember that with testing, as with many things, practice makes perfect. The more you test, the more you will know what works, and the better you can make your online experience. We hope these resources help you along your testing journey.