Inside the brains of three digital analysts – 4 questions / 12 answers
Stéphane Hamel, Consultant
Age: old enough to know a hell of a lot about analytics, but not quite old enough to pretend to know it all, and not so old as to have forgotten what it is 🙂 (turning 48 on August 29)
Job title: depends… sometimes “Owner/founder” if I need to borrow money at the bank… or “digital analytics thought leader” if I’m talking to my peers, or simply “consultant” if the person doesn’t have a clue what the heck I’m doing (but I don’t really like it when people think I’m a computer engineer)
Hobbies: since I live in what is considered an old house in North America (200 years old), I have developed an interest in history, old architecture, genealogy, historical novels, etc., and of course, I do a fair share of home renovations too!
Sergio Romero, Vistaprint
Job title: Director, External Marketing & Performance Analytics
Hobbies: Movies, books, basketball, riding my street and mountain bikes
Stuart McMillan, Schuh Limited
Job title: Deputy Head of Ecommerce
Hobbies: Photography, DIY, walking, climbing
1. Can tools really find interesting correlations for humans to review, or must we always do our own data diving?
It’s an interesting question. With the advent of tag management tools, we were promised that instrumentation – getting the right data to enlighten business decisions – would be easier than ever before. Interestingly, what happened is that the “hard stuff” of the past became a lot easier, but at the same time, expectations and tracking requirements increased – so in the end, getting the right data to enlighten business decisions is still an art and a science. The same goes for analysis. Many things are now easily done, but at the same time, business questions are getting more elaborate and complex. But there are interesting initiatives going on – one of them is called InboundMuse (http://inboundmuse.com/), out of Malta (I’m on their advisory board). They are actively working on a solution that leverages Big Data, artificial intelligence, natural language and smart brains to provide automated, tailored and applicable insight.
I think this has a lot to do with the problem we want to solve, more than with the ability of algorithms and tools to find relevant insights. Depending on the business, it’s relatively easy to ask the right questions so that unsupervised algorithms answer those questions, especially in the area of optimization: paid search bidding, customer value estimation, next best product recommendation…
But there are other areas in which the involvement of analysts – a human touch – is critical. Here, business strategy aspects come to my mind. Many times it’s necessary to have business sense, take into account qualitative information, look at results in a given way, etc., and this is something algorithms cannot easily or accurately do. Experience and flexibility to decide next steps are key, and human decisions are likely to be required. Additionally, the risk with unsupervised tools is always that we could over-fit the current problem we are trying to explain, while losing the ability to do any kind of inference. Net/net, a balance between these two approaches is probably the best way to tackle analytical problems, depending on specific needs. On one hand we’ve got the power and efficiency of tools and algorithms; on the other hand we have the human brain for non-structured problems.
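The over-fitting risk described above can be shown with a minimal, hypothetical Python sketch (the data and model choices are invented for illustration): a very flexible model reproduces the sample it was given almost perfectly, yet a simple model tracks the true underlying relationship better on fresh data – the flexible one has "explained" the sample but lost the ability to infer.

```python
import numpy as np

# Invented data: the true relationship is a straight line, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 2 * x + rng.normal(0, 0.1, size=x.size)

# A flexible model (degree-7 polynomial: enough parameters to pass
# through all 8 noisy points) vs a simple one (a straight line).
flexible = np.polynomial.Polynomial.fit(x, y, deg=7)
simple = np.polynomial.Polynomial.fit(x, y, deg=1)

# Errors against the sample we fitted on:
train_flex = float(np.mean((flexible(x) - y) ** 2))    # near zero
train_simple = float(np.mean((simple(x) - y) ** 2))    # small, but not zero

# Errors against fresh points from the true (noise-free) relationship:
x_new = np.linspace(0, 1, 50)
y_new = 2 * x_new
err_flexible = float(np.mean((flexible(x_new) - y_new) ** 2))
err_simple = float(np.mean((simple(x_new) - y_new) ** 2))
```

The flexible fit "wins" on the data it has seen and loses on data it hasn't – which is the trade-off an analyst has to watch for when an unsupervised tool reports a perfect fit.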
I think tools could find many interesting things for us, without question. But interesting does not mean useful. We all work in companies with finite resources and a range of business pressures and objectives; in most cases we still need human analysis to turn data into business actions. As far as I am concerned, these actions are all that matter: what decisions do we need to make to further our objectives?
2. What will be the outcome when the irresistible force (Big Data) meets the immovable object (government regulation)?
I honestly think the “first contact” was established quite a while ago! Big Data didn’t create the words “ethics” and “laws”… but it certainly accelerated and amplified the need for guidelines (ethics) and boundaries (laws). I think what worries me most is the widening chasm between various regions – the USA vs Canada vs Europe, for example. Things that are perceived as acceptable, even as competitive advantages, in one region are simply illegal or certainly unethical in others. As a Canadian, I certainly get worried when I see the number of companies constructing profiles out of massive social media data collection combined with other sources (which, based on my understanding of PIPEDA, is illegal in Canada – see https://www.priv.gc.ca/resource/fs-fi/02_05_d_16_e.asp) – with absolutely no end-user consent, control or even knowledge that this is happening.
We already have many examples of this situation, and the limit is probably the individual’s privacy. That’s what governments need to regulate. An alternative to doing this through laws and regulation, which is a kind of interventionist approach to the problem, is to empower people to make their own decisions. Do they want corporations to use data related to them in order to improve the experience they have with those corporations? To get better, more suitable offers? For many people the answer is probably YES, because they are not sharing extremely sensitive information on social networks, etc. Others could see this as too intrusive, and their answer would be NO. But again, I think governments need to set a minimum threshold so that malicious corporations or individuals do not take advantage of all the information that is currently available on the web.
That depends on one thing: how we use the data. Government is the representation of the collective will of the public, and so far big data has not had enough demonstrable value for the public. Therefore, the public have no reason to support it – even if that benefit was advertising which sucked less. Even Amazon, with all their computing power, all their data, all their inventory, were widely criticised for their poor recommendations during “Prime Day”. If data is the new oil, we’re currently mostly turning it into smoke.
3. When should you use marketing mix modelling instead of attribution?
First, I remember entertaining an audience of marketers (in fact, mostly consultants from smaller agencies focused on AdWords optimization) about the power of the attribution reports offered by GA, only to realize in the end that very few people were actually using them or even aware of what they were… Yes, yes, this is true. So allow me to share how I understand the subtle but important difference between “marketing mix” and “attribution”… Both techniques can lead to interesting optimization and great improvements in outcomes, but I see the former as being more of a planning tool (a fair dose of art, judgement and experience combined with regression analysis of past performance, forecast to assess and plan future marketing tactics), while the latter is interested in the weighted impact of specific events/touch points toward desired outcomes, based on different models familiar to marketers, such as “upper funnel”, “lower funnel”, and everything in between. In other terms, and if I oversimplify a bit, one is about what happens before people hit your website, the other is focused on what happens once they are there. In the end, I often say “it depends on what you can control, manage and influence”. I think attribution might be the easier place to start because of the ubiquitous availability of Google Analytics, the lower cost of conducting attribution analysis, and the vast number of articles written on this topic. But be forewarned: attribution or marketing mix modelling in the wrong hands could lead to catastrophic results.
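To make the planning-vs-touchpoint distinction concrete, here is a minimal, hypothetical Python sketch (the channel names, spend figures and the 40/20/40 split are all invented for illustration): a tiny marketing-mix-style regression of weekly revenue on channel spend, next to a position-based attribution rule applied to a single visitor’s path.

```python
import numpy as np

# --- Marketing-mix style: regress revenue on channel spend ---
# Invented weekly data; a real MMM would add baseline, adstock,
# saturation and seasonality terms.
spend = np.array([
    [10.0, 5.0],    # [tv, search] spend per week
    [20.0, 5.0],
    [10.0, 10.0],
    [30.0, 15.0],
])
revenue = np.array([40.0, 60.0, 60.0, 120.0])
coefs, *_ = np.linalg.lstsq(spend, revenue, rcond=None)
# coefs estimates the average revenue contribution per unit of spend.

# --- Attribution style: divide credit among one visitor's touch points ---
def position_based(path, first=0.4, last=0.4):
    """40/20/40 'U-shaped' rule: first and last touch get 40% each,
    middle touches share the remaining 20%."""
    credit = {ch: 0.0 for ch in path}
    if len(path) == 1:
        credit[path[0]] = 1.0
        return credit
    middle_total = 1.0 - first - last
    if len(path) == 2:
        # no middle touches: split the middle share between first and last
        first += middle_total / 2
        last += middle_total / 2
        middle_total = 0.0
    middle = middle_total / max(len(path) - 2, 1)
    for i, ch in enumerate(path):
        if i == 0:
            credit[ch] += first
        elif i == len(path) - 1:
            credit[ch] += last
        else:
            credit[ch] += middle
    return credit
```

The contrast is the point: the regression models spend before anyone reaches the site, while the attribution rule divides credit among the touch points of people who did.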
Why do we have to choose? There are definitely situations in which MMM is really helpful, such as with corporations that run marketing campaigns not only on digital channels but also on traditional channels (TV, radio, newspapers…). But my question is: why shouldn’t we take the best of both worlds? Generally speaking, revenue or profit attribution is not a problem of perfect information – we need to work with assumptions, estimates, etc. – so having different ways of measuring the same effect will reduce uncertainty and help us better understand the interactions across channels. In my opinion this is not a question of which to use, but of how to make those methodologies work together.
I would say, as with any model of how to spend advertising budgets: keep it as simple as possible until you can prove that making it more complex generates a greater return.
4. Why are advertisers using retargeting so poorly/annoyingly? Why don’t they understand frequency caps?
Because of the “automagical” aspect of it? They turn the switch on, see some positive impact, and don’t realize they might be hurting their longer-term outcomes and reputation (not to mention pissing off their potential and existing clients!). Maybe because it’s easier to use a tool than it is to understand a concept? Maybe because vendors are awesome at putting very powerful tools in the hands of unprepared marketers – heck, I think most marketing classes taught in universities don’t even mention the concept of retargeting (at least not in the way it is applied in the digital economy). Once in the workforce, who wants to learn (and I mean really learn and understand) about a concept when short blog posts bragging about the awesomeness of clicking a few buttons and pulling a few levers seem to work so well? Ok, enough sarcasm. There’s that fantastic thing called evolution and natural selection – the market will weed out those marketers and organizations that are too annoying (although some people don’t believe in evolution, but that’s another story!)
I think they all understand, but there are two aspects affecting their decisions on retargeting and online display campaigns: contractual constraints and marginal effects. Sometimes it’s the agreement with the publisher that stops the advertiser from limiting the number of impressions per customer. They agree on the general terms (CPC, CPO…), the volume of impressions, the location of the banner on the page, etc. But many of those publishers do not allow advertisers to explicitly limit the number of impressions per customer per day. And even if the advertiser can do that, the capping on one site may not take into account the viewability of banners on other sites. On the other hand – and this is even more concerning – many advertisers assess campaigns as a whole, so the overall return of the campaign is positive, but they don’t realize that the marginal effect of the last X impressions shown to a customer is much lower than that of the first ones, or even negative. Fortunately there are more and more tools that allow advertisers to manage this information in order to optimize retargeting campaigns.
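The marginal-effects point above can be sketched in a few lines of Python (all the lift and cost numbers are invented for illustration): a campaign whose total return is positive can still contain impressions whose marginal return is negative – which is exactly what a well-chosen frequency cap should remove.

```python
# Invented per-impression conversion lift for one customer, showing
# diminishing (and eventually negative) marginal returns.
lift_per_impression = [0.050, 0.020, 0.008, 0.002, -0.001, -0.003]
cost_per_impression = 0.004
value_per_conversion = 1.0  # revenue per conversion, normalised

def profit(n):
    # Cumulative profit from showing the first n impressions.
    return sum(lift_per_impression[:n]) * value_per_conversion - n * cost_per_impression

# Assessed "as a whole", the campaign looks fine...
total = profit(len(lift_per_impression))

# ...but the marginal profit of each extra impression tells another story:
marginals = [profit(n) - profit(n - 1)
             for n in range(1, len(lift_per_impression) + 1)]

# The frequency cap that maximises profit per customer:
best_cap = max(range(1, len(lift_per_impression) + 1), key=profit)
```

With these numbers the campaign is profitable overall, yet every impression beyond the third destroys value – the kind of effect that stays invisible when only the campaign total is reported.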
This may seem overly simplistic, but “annoyance” probably doesn’t come into the equation; it’s just a question of whether the retargeting is bringing in sales. Another consideration is that for many, spending caps kick in and effectively reduce the frequency with which an ad is seen. Getting frequency caps right isn’t easy – it can be a good way to actually lose sales. Are frequency caps the best way to manage a particular customer’s exposure to an ad?
Yes, customers get banner blindness; but let’s face it, there is going to be *something* in that banner slot, so does the customer become blind to the particular banner or to the whole banner slot? But with all the ad-blocking that is happening, both with add-ons and with some of the new changes coming natively in browsers, things are going to have to change.