No! We don’t promise to find all the coverage connected to your brand, product or issue. That’s because we’ve seen, time and time again, that volume-based media effectiveness evaluation is misleading. Instead of treating every piece of coverage as equal, Metricomm identifies the coverage that has a real impact on audience behaviour and business outcomes. That’s why clients often use us in addition to traditional media monitoring and evaluation.
Often, nothing! Our toolkit and algorithm find the coverage that is most likely to be seen, read and reacted to. If we’re undertaking a bespoke project, we’ll need a brief – the more we know about your communication challenges, your competitors, your internal KPIs and so on, the more we can tailor our analysis and insights to make sure you only pay for what you need. If you want us to incorporate other data sets such as Google Analytics, sales revenue, paid advertising results or social media data, we’ll simply ask you to grant us access to those sources.
Our solution is highly flexible in pricing, reporting frequency and deliverables. We work on a retained or project basis. Our team will work with you to suggest the most appropriate and cost-effective report to meet your needs. Some of our clients want a campaign report at short notice, or a competitor report to feed into annual strategy and planning. Others want regular reports to track KPIs over time or to supplement other media monitoring or brand tracking activities. All you have to do is tell us what works for you!
There is an over-reliance on sentiment as an indicator of likely stakeholder response. Dividing content into positive / neutral / negative is a very blunt approach, which cannot account for the nuances and anomalies we see in coverage. Instead of sentiment, we analyse using seven key emotions to get granular insights into why people feel a certain way and what this might mean for your brand or organisation. Often, negative coverage – for example about the climate crisis – results in positive consumer behaviour change!
We know that correlation does not equal causation. That’s why we use independent data sets, such as Google search, and look for concrete evidence of a solid statistical relationship between our audience data and those independent measures. This allows us to say with a very high degree of certainty that one set of results has a direct relationship to the other. These relationships do not always exist; it’s our job to pinpoint when and why commercial outcomes can be attributed to online media coverage – in other words, the cases where there’s only a one-in-a-million chance that those outcomes were caused by anything other than the media coverage.
Yes, but we only use it where it’s the best tool for the job! In the same way, we still use human analysts to validate that the insights make sense and to make recommendations that address the ‘So what?’ question. AI helps us to gain very detailed information about the topics, themes and issues that appear in media content and are driving audience behaviour at any given time. Our approach to AI differs in one other respect – we use independent training data sets to avoid the confirmation bias that would otherwise be inevitable.
Beyond doubt. There are times when media coverage makes a significant contribution to moving the audience from awareness to interest, desire and action. We see this in Google search outcomes, website visit analysis and Google Shopping trends, as well as quantifiable search uplift that increases the chances of paid advertising meeting its objectives.
Yes – we don’t have a crystal ball, but we can look backwards in time. Usually we find that 6-7 years’ worth of data is enough! Often it only takes 12-18 months’ data to situate something in context. Whatever the timeframe we’re interested in, this makes our competitor and reputation benchmarking, campaign evaluation and media effectiveness research much more accurate and actionable than merely looking at what’s happening now.
We like to think we’re boardroom friendly. That means we’re focused on revealing insights that help clients improve performance and move the needle on media effectiveness, whether they come from a campaign, a sector study or a reputation benchmark. In other words, our purpose is to provide insights and recommendations that show what’s going well and what might be improved, either in the short-term or as part of long-term strategy.
Traditional media lists are all well and good, but segmenting by national / regional / local / consumer and lifestyle / business and trade is a little old-fashioned. Metricomm identifies the media that work best for your sector or brand, so you can cultivate winning media relationships and save valuable time and resources.
Our multi-lingual team has worked in markets all around the world, providing salient and focused insights across a broad range of sectors, including higher education, financial services, FMCG, consumer electronics, alcoholic beverages, automotive, TV, travel and hospitality, charities and more.
No, our clients receive ready-made reports, complete with our insights, conclusions and recommendations. This saves them time and means they can get on with the day job, leaving the analysis to our team of experts.
In the real world, nobody can tell you the exact size of the audience for a piece of media coverage. Even systems that track online articles can only tell you how many people visited the page, which is not the same as readership. In addition, such trackers are limited to the publishers prepared to share this data, and privacy laws already in place in many markets – combined with rapidly growing public concern over ‘big brother’ tactics – will limit them further. Through its proprietary algorithm, Metricomm produces audience data that delivers highly reliable results (see below) and removes any issues or concerns over privacy. Informed by many years of academic research, market knowledge and experience, our audience figures are based on statistical analysis and techniques similar to those recommended by Google to address the removal of third-party cookies.
When we analyse independent data sets, such as Google search, alongside our audience data, we use robust and rigorous statistical techniques to scrutinise the results. The minimum level of statistical significance we ever accept is 98%, and the vast majority of our work is carried out at significance levels between 99.9% and 99.999%. Given that Google search data is reliable, the very high levels of statistical significance we consistently see when our audience data is analysed with Google search data – a completely independent data set – mean that our data must also be reliable. If this were not the case, such robust results would not be possible.
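For readers who think in p-values rather than percentages, the significance levels quoted above map directly onto p-value thresholds. As a minimal sketch (the function name and the sample p-values are hypothetical, not Metricomm’s results):

```python
def significance_label(p_value: float) -> str:
    """Map a p-value to the confidence bands described above.

    98%      -> p < 0.02     (the minimum ever accepted)
    99.9%    -> p < 0.001
    99.999%  -> p < 0.00001
    """
    if p_value < 0.00001:
        return "significant at 99.999%"
    if p_value < 0.001:
        return "significant at 99.9%"
    if p_value < 0.02:
        return "significant at 98%"
    return "not accepted"

# Hypothetical p-values from four separate tests:
for p in (0.000004, 0.0007, 0.015, 0.08):
    print(p, "->", significance_label(p))
```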
Got any other questions for us? Just pop your details in the form below and we’ll get back to you asap!