January 27, 2020
Bo Chipman, Senior Vice President, Data Science
No one debates the benefit of creating personalized, relevant consumer experiences. Consumers expect companies to tailor experiences to their preferences, and they reward companies that deliver.[i] On the business side, studies show personalization generates positive outcomes. Companies that personalize[ii]:
· Reduce acquisition costs by as much as 50%
· Increase revenues by up to 15%
· Improve marketing spend efficiency by 10% to 30%
The corporate website is a natural starting point for personalization. For most companies, it serves as a billboard, storefront, service center and more. Given the website’s central role in many customer interactions, most marketers understand that investing in testing to create a personalized customer experience will pay dividends. However, many companies struggle to create value from testing despite their best efforts. This includes many companies that have invested in best-of-breed testing technology.
In our experience, companies often struggle with testing because they consider it a technology problem when, in fact, a successful testing program relies equally on people and process. We see many companies that do not get the expected return on testing software because they fail to invest in other elements of a robust testing program.
This blog highlights a handful of the challenges we have seen. It’s not meant to be exhaustive. Rather, we want to illustrate the importance of people and process using some common examples.
What are some of those challenges? Here are five we see frequently. We welcome other examples from readers. Everyone has a favorite.
2. Test Selection: Low-potential tests can quickly reduce support for testing, since it makes little sense to invest scarce resources in activities that have limited business impact. Access to development resources is a real issue, but in many cases, marketers select tests that have limited potential for other reasons. The primary challenge we see is the absence of a structured process for evaluating and prioritizing tests. Marketers select tests for implementation based on hunches rather than a standard evaluation framework. Without an evaluation framework, they may never consider many high-potential tests, resulting in missed opportunities.
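To make the idea of a standard evaluation framework concrete, here is a minimal sketch loosely modeled on the common PIE approach (Potential, Importance, Ease), in which each candidate test is rated on a 1–10 scale across the three dimensions and ranked by its average. The candidate tests and scores below are hypothetical illustrations, not a prescribed framework.

```python
def pie_score(potential, importance, ease):
    """Average of three 1-10 ratings; higher means run the test sooner."""
    return (potential + importance + ease) / 3

# Hypothetical backlog of candidate tests with illustrative ratings.
candidates = [
    ("Homepage hero redesign", {"potential": 8, "importance": 9, "ease": 3}),
    ("Checkout button copy",   {"potential": 5, "importance": 8, "ease": 9}),
    ("Footer link reorder",    {"potential": 2, "importance": 3, "ease": 10}),
]

# Rank the backlog: highest average score first.
ranked = sorted(candidates, key=lambda c: pie_score(**c[1]), reverse=True)
for name, scores in ranked:
    print(f"{pie_score(**scores):.1f}  {name}")
```

Even a simple scoring rubric like this forces an explicit, comparable rationale for every test, which is the point: hunch-based selection loses to any shared framework applied consistently.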
3. Test Design and Measurement: Even high potential tests will not provide business value if they are not designed and measured correctly. Unfortunately, many companies do not have a process to ensure tests produce meaningful results. Problems happen at all stages of the testing process. Common examples include fuzzy hypotheses, bad test designs, insufficient sample sizes and failure to measure statistical significance. These failures often occur when marketers do not involve trained Data Scientists in testing.
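Two of the failure modes above, insufficient sample sizes and unmeasured significance, can be checked with standard formulas. The sketch below, using only the Python standard library, estimates the sample size per variant needed to detect a lift in conversion rate with a two-sided two-proportion z-test (normal approximation), and computes a p-value for observed results. The baseline and target rates are illustrative inputs, not benchmarks.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a shift in
    conversion rate from p1 to p2 (two-sided two-proportion z-test)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # critical value for significance level
    z_b = nd.inv_cdf(power)           # critical value for desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a difference in conversion rates,
    where x converted out of n visitors in each variant."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2_hat - p1_hat) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative: detecting a lift from 5% to 6% conversion requires
# roughly 8,000+ visitors per variant at alpha=0.05 and 80% power.
print(sample_size_per_variant(0.05, 0.06))
```

Running the numbers before launch is exactly the step that a trained data scientist brings to the table: a test sized by gut feel often ends before it can detect the lift it was designed to find.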
4. Test Governance: Good tests can go bad in implementation. Many things can happen, but one avoidable issue we see often is conflicts between tests. In many companies, multiple groups ‘own’ different aspects of the website. These groups often pursue their own testing agendas with no formal coordination or communication to ensure the integrity of every test. There is no centralized calendar. Audiences overlap. Experiences vary in unexpected ways. Test results are corrupted.
5. Software Release Governance: Lack of coordination and communication can undermine tests in other ways. Another common example is the absence of robust communication around software releases. Data issues are by far the most common reason tests fail, and those issues often occur because of communication gaps within complex organizations. In many companies, the groups that own tagging and testing operate independently. On more than one occasion, we have seen tests spoiled by well-planned tagging upgrades. In most cases, the tagging group did everything right, except notify one key constituent: the testing team.
Even if you have never experienced the issues outlined above, you have no doubt encountered situations where testing broke down because of people and/or process. Such issues occur because marketers too often assume testing software will solve all their problems. Software can certainly enable testing, but it is insufficient to enable an effective program. High impact testing requires investment in people, process and technology. Marketers that ignore the first two factors often do not get the value they expect from software investments. We will provide additional perspective on resource and process needs in future posts.
[ii] Matt Ariker, Jason Heller, Alejandro Diaz, and Jesko Perrey, “How marketers can personalize at scale,” Harvard Business Review, November 23, 2015.