Demystifying Performance Algorithms: What actually makes algorithms work

Depending on where you go, the prevailing opinion within a marketing team ranges from "what's an algorithm?" to "conversion algorithm or nothing", but it's incredibly rare to find a nuanced, rigorous understanding of algorithms: how they work and why they work. If you ever ask why they work, you may get an answer about machine learning and whatever other vague technobabble reps and help articles are pushing, but none of it actually illuminates what the algorithm is doing or how it impacts your campaign performance. This article demystifies that with data and offers strategic takeaways.

Algorithms and Sorting

One of the key purposes of the algorithm is to sort audiences based on signals that suggest a user's likelihood to complete a given campaign objective. It's worth double-clicking on the idea of "sorting", because it is the primary mechanic that gives algorithms their power, and it means algorithms function as an aggressive modifier to targeting.

When you run a digital campaign, you design a target audience based on demographic and behavioral patterns; that audience might be analogous to your total addressable market. But when you apply engagement or conversion algorithms, the algorithm further whittles that audience down to reach only the users it believes will meet the objective.

The Test

A consumer financial services client runs a full-funnel marketing strategy with campaigns intended to reach users across social media. On Meta, campaigns were deployed with a variety of algorithmic objectives in order to balance upper funnel needs for reach against lower funnel needs for attributable conversions. Other facts about the test:

  1. Due to internal limitations around creative development, creative variety was relatively limited and largely the same between algorithms.

  2. Targeting was the same between algorithms.

All of this is to say that the key differences between the campaigns are purely in the algorithm and the different bid strategies those algorithms enable.

Results

The question becomes: (1) did the campaigns successfully optimize for their respective objectives and (2) if so, how do the algorithms achieve this?

The algorithms were effective at optimizing for their respective objectives, but the results also revealed a side effect that demonstrates exactly why the algorithms are effective and how they work. Figure 1.1 is a table showing the number of unique accounts reached by each campaign and the overlap between each pair of campaigns. The higher the percentage, the more the two campaigns were reaching the same audience.

The table below shows the intersection between the four algorithmic objectives run against the same audience: Reach, Impressions, Traffic and Conversions. The first two columns show the reach counts of the first and second objectives in each pair, respectively.

Figure 1.1
Objective pair              Aud 1    Aud 2    Combined    Overlap %
Reach x Impressions         79M      39M      91M         22.91%
Reach x Traffic             79M      35M      107M        6.44%
Reach x Conversions         79M      19M      74M         8.29%
Impressions x Traffic       39M      35M      74M         0.25%
Impressions x Conversions   39M      19M      50M         14.28%
Traffic x Conversions       35M      19M      50M         6.75%
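
As a rough check on how an overlap percentage like this can be derived, the sketch below uses inclusion-exclusion on deduplicated reach: shared users ≈ Aud 1 + Aud 2 − Combined. The exact metric definition the platform uses isn't documented here, so treating the percentage as shared users divided by the sum of the two reaches is an assumption; it approximately reproduces the Figure 1.1 values, with small differences attributable to the counts being rounded to the nearest million.

```python
def estimated_overlap(reach_a: float, reach_b: float, combined: float) -> tuple[float, float]:
    """Estimate shared users between two campaigns from deduplicated reach.

    Assumes 'combined' is deduplicated, so by inclusion-exclusion:
        shared = reach_a + reach_b - combined
    The percentage is expressed against the sum of the two reaches
    (an assumption about how the table's Overlap % was calculated).
    """
    shared = max(reach_a + reach_b - combined, 0.0)
    return shared, shared / (reach_a + reach_b) * 100

# Example: the Reach x Impressions row from Figure 1.1 (counts in millions)
users, pct = estimated_overlap(79, 39, 91)
print(f"~{users:.0f}M shared users, ~{pct:.1f}% overlap")  # ~27M shared users, ~22.9% overlap
```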

Despite sharing the same audience, each campaign reached a section of the audience that was nearly mutually exclusive to the other campaigns. Rather than reaching the same users in different contexts, placements, times of day, etc. based on what is most likely to maximize the given campaign objective, the audience is essentially presorted into these categories with little to no chance of crossing over into another category.

This is evidence of sorting: The algorithm sorts users into different buckets based on how it identifies their likelihood to complete the various campaign objectives. If a user is in the "likely to click" bucket, they are unlikely to also be in the "likely to convert" bucket or "likely to view the landing page" bucket.

The fact that the algorithm does this is not particularly novel, but what is a novel finding is:

  1. How intense the sorting is and how little overlap there is between groups. For example, traffic users and conversion users had an overlap of only 6.75%, despite the fact that conversion users have to click through to and view the landing page before they can ever reach the conversion event.

  2. That the sorting still exists for upper funnel algorithms. It makes sense that "users likely to convert" is a specific subset of the total audience, but what delineates a Reach user from an Impression user when the goal is to serve ad impressions either way?

This has significant implications for how brands should structure their digital media buys. The sorting mechanism means that using these algorithms may severely limit your campaign's reach. Instead of reaching everyone in the audience when they are most likely to convert, you are instead reaching the fraction of the audience that the algorithm has already decided is primed for that campaign objective.

Takeaways

  • Any campaign not using a full-funnel approach to algorithms should assume that some degree of sorting will take place, limiting audience reach and influencing spend projections.

  • Running multiple objectives against the same users is unlikely to contribute to higher frequency, since the audiences each algorithm reaches are largely mutually exclusive.

  • Users likely to convert are essentially held hostage. If you are running traffic campaigns hoping to find the needle in a haystack that converts, forget it. All of those needles were moved to an entirely different bucket; they're not even in the haystack anymore.
