So, you successfully ran some tests based on your strategic goals for the year. You tracked them effectively and you have clear results that indicate your winning package. Congratulations! You’ve already done more to improve your program than many other fundraisers.
Now what?
If you answered, “implement the control,” you’d be right. Sort of.
Implementation should also be approached as a level of retesting. This is what I call “optimizing”. Optimizing differs from testing because it is more refined, more like a series of subtle micro-tests. The process involves drilling down on your tested campaign or a single tactic and moving it incrementally closer to ultimate (and measurable) success. Optimizing comes with a tremendous sense of gratification as you creep up the ladder of success rung by rung.
Within your optimization, your winning package or component should stand as an autonomous control but also be layered within new micro-tests. The new tests should explore different angles of the winning package or component to isolate what, exactly, was the key driver of its success.
What an example optimization might look like…
You tested 4 different copy messages in your previous year’s holiday social ad campaign. One of them was a clear winner. This year, your approach should be:
- Run the winning copy word for word as your control ad.
- Come up with 3 theories as to what it was about that copy that made it the winner, and develop 3 new ad sets, each hyper-focusing on one of those theories.
- Run a new test of all 4 ads against each other and see whether the control still wins.
What not to do: if your original test had a clear winner and loser, don’t test brand new assets and copy against your control again. That isn’t building off of your past learnings. That’s sweeping the chess board off the table and starting from scratch again. Instead, refine the existing elements to make your winner the winningest.
What if you don’t have the budget or capacity to fully optimize your test results? You can still learn from your implementation rollout, if you know what to look for.
Timing is everything
First, consider the factors surrounding your test. Be ruthless in your interpretation of what your results actually show. If you tested at one time of year but plan on rolling out the winning strategy at a different time of year, don’t be surprised if the results change.
One option you have is to run your winning test in exactly the same campaign as you tested it in. That’s a solid approach if you want a predictable result. You can use timing to your advantage. Implementing your winning strategy at other times of the year can be a low-cost approach to optimization!
Consider implementing the winning test package at the time of year when your fundraising response is the strongest. For example, if you want to maximize revenue at year end, you could implement new findings from your tests to ensure the greatest return. Assuming your test was relatively generic (not season or time dependent), shifting the timing is the optimization that will have the greatest financial impact.
Conversely, consider slower times of year or the campaigns that are struggling. Moving your winning strategy to this time allows you to optimize your program strength year-round.
If none of your campaign factors have changed but your results don’t match your original test, consider that your results may have been influenced by outside factors: the economy, world events, recent news items.
Beware of confirmation bias
Confirmation bias is when you’ve already made a decision about the outcome regardless of what the data tells you. Whether consciously or unconsciously, you only seek information that backs your assumptions. Inquiring with a closed mind leads to only one place: a false reality. When analyzing test results, it is essential that you leave your assumptions at the door.
But not imposing your own biases upon your donors is very challenging. We all have ideas as to why donors will give to one ask and not another. But treating those assumptions as fact will lead to missed opportunities. “But that’s why we test!” I hear you cry. While that’s true, we need to be even more diligent about not imposing our biases on test results. There are many reasons why test results can go one way or another, and your bias may be blinding you. Look at every image, every word, every channel to break down why the winner won. Then prove it by doing it again.
Repetition dilutes response
When you implement your winning test, you need to exercise moderation. I wish we lived in a world where we could leverage the same campaign over and over and it would produce the same results each time. Sadly, wallpapering your winning package or strategy all over your program will only lead to heartbreak. The fact is, donors become blind to the devices we use to grab their attention. This could be anything from a particular ask that supports your core mission to the size of your direct mail envelope or the use of emojis in your email subject line. What was once so cool and inspiring is now just another thing your donor has become blind to. Keep your winning strategy fresh and effective by using it sparingly.
Integration matters
I see it again and again. Organizations see a lift when they test bundling strategies together, particularly when using an integrated multi-channel approach to campaigns. Then, when they go to implement their optimized strategy, for whatever internal reasons, they end up breaking apart the happy little integrated family, thinking that each piece will succeed independently of the others.
Remember why integration matters in the first place. When you break up the multiple touchpoints, upend the consistent messaging, or alter the cadence, results will vary accordingly.
Fundraising is like a Jenga tower. Each channel and component is simply one block. The more blocks you stack together, the higher your tower. A single block does not a tower make. And the more blocks you remove from an existing campaign structure, the more the tower shrinks or even collapses.
Always be testing. To a point.
There is an ROI factor to consider with your ongoing testing. It costs money to test, and there can be risk. For larger organizations, the cost of testing represents a fraction of the potential gains of that test, but you should also consider your resource investment. I’ve seen organizations cripple their staff and vendors with the capacity burden of obsessive testing, leaving them unable to focus on strategic work that could really move the needle on their program, such as mapping out donor journeys or developing a legacy giving moves management system.
I sometimes recommend having an investment year that focuses on an area that is in trouble. Say your acquisition response rate has been diminishing year over year. Take an entire year to develop a new package and roll out a head-to-head package test, and the following year will enjoy better results and lower costs! If you never give yourself a chance to reap the rewards, there is no point in testing! Your net revenue will remain flat (or diminish year over year) despite your program performing better.
The new normal
The goal of every annual program should be controlled and sustainable growth. Testing and effective implementation of the results of those tests are a foundational piece of your program’s success. While there are many factors to consider, building your program based on hard data results is fruitful and highly rewarding. Your Jenga tower awaits!