18.11.21 / news
Author: Kuba Puzyna
Let’s start with the data.
Currently, 85% of iPhones run on iOS 14.
With this update, Apple introduced new privacy settings. As a result, as many as 87% of iOS 14 users (95% in the United States) have denied advertising networks and apps the ability to track their activities.
While Android holds a strong position in Poland (around 90% of the market), its share is weaker in Europe (around 63%) and loses to iOS in the United States (57.5% for iOS).
The share of iOS devices varies dramatically depending on the group in question. For example, among high-earning women interested in fashion and living in large cities, the proportion of iPhone users is much higher than in the general population.
A lot. But in the context of this discussion, it primarily means that both reports and optimization techniques based solely on data from platforms like Facebook, Google, Twitter, Snapchat, or TikTok are now largely unreliable.
For years, advertising networks spoiled agencies with reliable data based on reasonable attribution models. This data alone was sufficient for campaign reporting and allowed for quick decisions regarding budget allocation, tactic selection, or campaign adjustments.
Today, that’s no longer the case. Agencies that fail to adapt to new analytical foundations will see worsening results—both in their reports and actual outcomes.
The reason is that, due to the trends described above, advertising networks are increasingly losing conversion data.
For instance, an iPhone user on iOS 14 might click an ad, visit a site, and make a purchase, but their actions may not be recorded as a conversion by the ad network.
If an agency reports campaign results solely based on data from Facebook or Google, its results will underestimate the actual actions taken by these users. While this discrepancy is frustrating, it’s not the main problem. After all, the conversion happened.
The bigger issue is that this situation leads to poor decision-making—for example, investing in tactics that actually generate fewer conversions. A tactic whose purchases the ad network can fully link to ad impressions will look stronger in reports than one where some conversion data is lost, even if the latter drives more sales in reality.
As a result, focusing on improving reported data may harm the actual results most important to the client. The agency’s interest (creating good-looking reports) conflicts with the client’s interest (achieving better overall outcomes).
In our reports, we no longer rely on ad network data about user activities outside their platforms. For example, from Facebook, we gather data about reach, impressions, clicks, and CTRs, but metrics like CPA, conversions, and ROAS are considered unreliable.
Instead, we use data on all transactions taken directly from the client's store. And instead of ROAS (the ratio of ad-attributed revenue to ad spend), we calculate MER, the Marketing Efficiency Ratio: total store revenue divided by total ad spend.
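The difference between the two metrics can be sketched in a few lines. The figures below are hypothetical, chosen only to show how lost attribution understates ROAS while MER, which needs no attribution at all, is unaffected:

```python
# Hypothetical illustration — not real campaign data.

def roas(attributed_revenue: float, ad_spend: float) -> float:
    """ROAS: revenue the ad network managed to attribute, divided by ad spend."""
    return attributed_revenue / ad_spend

def mer(total_revenue: float, total_ad_spend: float) -> float:
    """MER: total store revenue divided by total ad spend; no attribution needed."""
    return total_revenue / total_ad_spend

ad_spend = 10_000            # total online ad spend
total_revenue = 80_000       # all transactions, straight from the store
attributed_revenue = 40_000  # what the ad network still "sees" post-iOS 14

print(f"ROAS: {roas(attributed_revenue, ad_spend):.1f}")  # understated by lost tracking
print(f"MER:  {mer(total_revenue, ad_spend):.1f}")        # immune to attribution loss
```

If tracking losses cut `attributed_revenue` in half, reported ROAS halves with it, while MER stays put, which is exactly why we report the latter.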
This decision comes with obvious challenges. Comparing online ad spend against total revenue means the result reflects many factors, of which the campaign may not be the key driver. This leaves room for interpretation. How do online ad efforts contribute to observed revenue changes? Which tactics were crucial to the results? Which groups only click, and which actually purchase?
This shift increases the importance of analytical skills within the team. Using disrupted ad network data combined with intuition and experience, the team must hypothesize how optimizations translate into the final outcomes.
This reporting system only works with ongoing, comprehensive management of all online advertising activities for a client. If an agency handles only a portion of a brand’s activities, referring to aggregate data makes no sense. For us, this isn’t an issue, as we always work with clients on these terms.
Probably not. Major players are actively developing new measurement tools to address these challenges. We can expect analytical solutions to emerge soon.
For now, it’s better to rely on broad, trustworthy insights than detailed but inaccurate data.