3 AdTech Horror Stories and What You Can Learn from Them

In my six years working in adtech, I have come across various functions and operations, each basking in the glory of how its product and engineering life cycles have evolved. I have met countless vendors, all of them promising me the best tools, proxy services and crowdsourcing. Their confident salespeople have intimidated me with their terminology, impressed me with their techniques and, at times, surprised me with their pricing.

AdTech has been evolving at an unbelievable pace and the innovations across its many dimensions have been incredible. Unfortunately, as a quality operations manager, I still have a few struggles which do not seem to have a permanent solution. Ad rendering under proxy- and carrier-based targeting is at the top of that list. I currently head an ad quality team which works 24/7, manually reviewing, testing and verifying every demand source that runs (or intends to run) on our platform. My team is based out of India and global delivery is channeled through this center. This means that, sitting in our office in Bangalore, we are required to test how a particular ad will be served on an ‘X’ device, on ‘Y’ operator, in ‘Z’ country, which is very tricky.
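To make that device/operator/country problem concrete, here is a minimal sketch of how a remote QA team might assemble one test combination before fetching an ad tag. Everything here is illustrative: the proxy endpoints, the User-Agent strings, the `X-Carrier` header name and the lookup tables are made-up examples, not real services or a real vendor API.

```python
from dataclasses import dataclass

@dataclass
class TestProfile:
    user_agent: str       # emulates the 'X' device
    proxy_url: str        # emulates the 'Z' country exit node
    carrier_header: dict  # emulates the 'Y' operator, where supported

# Illustrative lookup tables -- in practice these would come from the
# proxy vendor's documentation and a maintained device-UA database.
DEVICE_UA = {
    "pixel-7": "Mozilla/5.0 (Linux; Android 14; Pixel 7) AppleWebKit/537.36",
    "iphone-14": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",
}
COUNTRY_PROXY = {
    "DE": "http://de.exit.example-proxy.net:8080",
    "KR": "http://kr.exit.example-proxy.net:8080",
}

def build_profile(device: str, carrier: str, country: str) -> TestProfile:
    """Assemble the headers and proxy needed to impersonate one combination."""
    return TestProfile(
        user_agent=DEVICE_UA[device],
        proxy_url=COUNTRY_PROXY[country],
        # Some operators are identified via injected headers; this name
        # is a placeholder, not a standard.
        carrier_header={"X-Carrier": carrier},
    )
```

Each (device, operator, country) triple becomes one profile, and the ad tag would then be fetched once per profile through the matching proxy, so the rendered creative can be compared against what the advertiser promised.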

From being an Ad Operations executive half a decade ago to leading a team of ad quality professionals today, I have gathered many experiences, both good and bad. Each one has been a lesson I would not trade for anything else.

Lesson #1 – Big Budget Is Not Always Might or Right

Some time before AppLift, at the beginning of my stint in ad quality, I was ecstatic each time my friends from the Sales team landed a big-budget campaign. We were a small startup and any budget was welcome; if it was a four-digit value, that was icing on the cake. I clearly recall a heavy-budget campaign from a known agency somewhere in Western Europe: the bells on the floor rang, beer mugs clinked and plans were made for how the big sales commission would be spent. It was a flashy JavaScript ad which hit all the right notes: great images, sparks and blinks. After the initial euphoria, the campaigns were set up, and since this was our star advertiser, we quickly approved the ads.

The initial two days were spent basking in the glory of how happy our supply partners were and how well the budget was being spent. This is where the horror began…

The third day started with multiple emails from all channels sending us screenshots and clippings from various users, media, publishers and, last but not least, my manager. Amidst the whole frenzy, it took me a while to realize that our ‘star’ advertiser was a shady agency redirecting its JS ads to a pornographic website. These ads were placed on our premium inventory, hence the damage was huge. One of the key placements where these ads were served was the military website of a South Asian country; I can only imagine what a high-ranking military official went through: claims to be the best and brightest, and pleas to the nation’s youth to join the forces, followed by a pole dancer in skimpy clothes towards the end of the page. Horrendous.

My first lesson learnt: a big budget does not always mean a ‘star’ advertiser.

Lesson #2 – Check, Check and Re-Check Until You Are Sure

During one of my previous startup experiences, we suddenly realized the need to scale up our team. Work was pouring in from everywhere and approvals for new hires became a priority. I scheduled quick interviews and we soon hired a small battery of people. Their training and induction were completed in a breeze and the Ad Operations team grew to a sizable number.

Work resumed and soon the new joiners started reviewing and approving ads independently. In our rush to prove ourselves, we also promoted senior ad quality resources to other teams. The review process was trimmed: we eliminated sample checks and reduced the frequency of campaign audits. Then, unexpectedly, a giant e-commerce advertiser acted up: their campaign to promote a specific ‘new year sale’ did not serve at all. The deal had been made well in advance and everyone had great hopes for the campaign. At ungodly hours, we had to backtrack and work out what could possibly have gone wrong. Every team was shaken awake in the middle of the night; the ad serving and technical teams checked everything on their end, from servers to traffic sources.

We finally discovered that the ads had been tagged with an irrelevant category by a newly hired ad quality executive. They were supposed to be tagged as ‘Commerce’ but were mistakenly tagged as ‘Contraceptive’ (in the huge list of categories in the drop-down, the latter sat just below the former, hence the error). This caused the ads to be blocked on relevant sites, and the campaigns failed to deliver. Since the whole concept was to promote a sale on one particular day, there was nothing we could do.
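A tiny amount of tooling can guard against exactly this class of mistake: a category picked from a drop-down that sits right next to the intended one. The sketch below is an assumption-laden illustration, not our actual system; the category list, the sensitive-category set and the function name are all hypothetical.

```python
# Illustrative category list -- a real taxonomy would be far larger.
CATEGORIES = ["Commerce", "Contraceptive", "Dating", "Education"]

# Assumption: the platform maintains a list of categories that should
# never be assigned without a second reviewer's sign-off.
SENSITIVE = {"Contraceptive", "Dating"}

def needs_second_review(chosen: str, advertiser_vertical: str) -> bool:
    """Flag assignments where the chosen category is sensitive, or does
    not match the vertical the advertiser declared at onboarding."""
    if chosen in SENSITIVE:
        return True
    return chosen != advertiser_vertical
```

Under this rule, tagging our e-commerce advertiser as ‘Contraceptive’ would have been routed to a second reviewer on two counts: the category is sensitive, and it contradicts the declared vertical. A cheap check, compared with a campaign that never serves.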

Lesson learnt, the hard way: check, confirm and audit. Follow the processes, hold elaborate training and assessment sessions, whatever effort it takes…

Lesson #3 – Technology Is the Means to an End, Not the End Itself

During yet another adtech startup experience, as we took on additional work with shorter turnaround times, I was keen to incorporate newer tools and devices to automate and reduce our effort. Soon we began testing our processes by trial and error until we reached a combination of services and tools which greatly helped reduce our SLAs (Service Level Agreements, here the minimum time we needed to approve an ad). While some tools worked wonderfully, others failed miserably; our luck ran out when we could not judge which ones belonged in which basket. We then bought a fancy piece of software to help us flag a particular format of unapproved ads, in this case ‘auto download’ ads. As the name implies, these ads download a file directly onto the user’s device without their consent. At the time, these ads were all the rage in China and South Korea, and many leading game developers swore by their good conversion rates. In other regions, these ads were less accepted, and brand publishers completely disapproved of their autonomous behavior.

We started using the new tool immediately after a botched due diligence, discontinuing our older practice of testing these ads manually. As you must have guessed by now, we were soon blocked by a branded supply partner, in this case a leading news app from the US. Their policy clearly stated that they did not accept auto downloads, considered them very bad for the user experience, and held them to be a threat to users’ devices (isolated cases had been reported earlier where malware was auto-downloaded through such ads, corrupting the mobile device). It turned out that our new tool came with a limited quota: once we hit the threshold, we could still enter the URLs of the tags, but they would not actually be tested and hence never flagged. This was a huge bug in the tool, and we raised it with the vendor in parallel. The terms of violation were clear, and we paid a huge penalty for the damage to our partner (far more than we had ever expected to earn with them).
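The underlying failure was that the tool failed open: past the quota, unscanned tags looked identical to clean ones. A safer integration fails closed, routing anything the tool could not actually test into a manual review queue. Here is a minimal sketch of that idea; `scan_fn` is a stand-in for the vendor's API call, and its return values are assumptions for illustration.

```python
def review_tags(tags, scan_fn, quota):
    """Return (approved, manual_queue). Never auto-approve an unscanned tag.

    tags:    ad tag URLs to check
    scan_fn: callable returning "clean" or a violation label (assumed API)
    quota:   number of scans the vendor tool will actually perform
    """
    approved, manual_queue = [], []
    for i, tag in enumerate(tags):
        if i >= quota:
            # Quota exhausted: fail closed and send to manual review,
            # instead of silently letting the tag through untested.
            manual_queue.append(tag)
            continue
        if scan_fn(tag) == "clean":
            approved.append(tag)
        else:
            manual_queue.append(tag)
    return approved, manual_queue
```

The design choice is the whole lesson: the automated tool trims the manual workload while its quota lasts, but the default for anything it did not verify is a human, not an approval.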

Lesson learnt, the harder way again…

Does this sound familiar? What are your adtech horror stories? Let us know in the comments!