Garbage in, garbage out!

We constantly emphasise to Metricomm clients the importance of using the right data to evaluate the effectiveness of media coverage. As communication professionals work increasingly with data to illustrate the results of their hard work, how can they be sure that they’re not using data which will obscure or skew the true insights?
Here are a few basic tips to help you verify whether you (or your evaluation agency) are using reliable data sets to determine the outcomes of media coverage:
1. Sift out the sewage
The traditional media evaluation industry is based on volume of coverage: media monitoring is used to find every single piece of coverage about your organisation or brand, and that becomes the starting point for ‘insight’. The problem with this ‘boiling the ocean’ approach is that it doesn’t remove the flotsam and jetsam that nobody reads. Treating every piece of media coverage as equally influential produces highly inaccurate results. The only way to avoid this is to calculate what the audience actually engages with. In our experience, approximately 20% of media coverage produces 80% of the audience impact, as the sketch below illustrates.
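To make the idea concrete, here is a minimal Python sketch of that filtering step. It assumes each coverage item carries an engagement figure (readership, clicks, or whatever your measurement provides); the field names, numbers and the 80% threshold are illustrative, not a description of Metricomm’s methodology.

```python
# Minimal sketch: isolate the small share of coverage that drives most of the
# audience impact, assuming each item carries an "engagement" score.
# Field names and the 80% threshold are illustrative assumptions.

def high_impact_subset(coverage, threshold=0.80):
    """Return the items that together account for `threshold` of total engagement."""
    ranked = sorted(coverage, key=lambda item: item["engagement"], reverse=True)
    total = sum(item["engagement"] for item in ranked) or 1
    subset, running = [], 0.0
    for item in ranked:
        subset.append(item)
        running += item["engagement"]
        if running / total >= threshold:
            break
    return subset

coverage = [
    {"outlet": "National daily",  "engagement": 12_000},
    {"outlet": "Trade title",     "engagement": 3_500},
    {"outlet": "Aggregator copy", "engagement": 40},
    {"outlet": "Local reprint",   "engagement": 15},
]

core = high_impact_subset(coverage)
print(f"{len(core)} of {len(coverage)} items drive 80% of engagement")
```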
2. Avoid introducing bias
There can be a big gap between the key messages an organisation thinks it is communicating and what actually gets across to the audience in coverage. Even if you’re using AI to analyse content, if the model is trained only to look for pre-determined key messages you risk missing important insights about how media coverage is influencing audience perceptions. The ‘unknown unknowns’ can be very important in shaping opinions and reputation, and you must be able to identify them within the data set.
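As a simple illustration of how a pre-determined message list can hide the ‘unknown unknowns’, the sketch below counts frequent terms in coverage that fall outside a fixed key-message list so they can be flagged for human review. The message list and articles are invented for the example; this is not a substitute for proper content analysis.

```python
# Minimal sketch of the "unknown unknowns" problem: if analysis only counts
# pre-determined key messages, everything else in the coverage stays invisible.
# Here we surface frequent terms that fall outside the message list so a
# human can review them. The message list and articles are illustrative.

import re
from collections import Counter

key_messages = {"innovation", "sustainability", "growth"}

articles = [
    "Growth slowed this quarter amid a product recall",
    "Recall widens as regulator opens investigation",
    "Sustainability report praised, but recall dominates headlines",
]

words = Counter()
for text in articles:
    for token in re.findall(r"[a-z]+", text.lower()):
        if len(token) > 4 and token not in key_messages:
            words[token] += 1

# "recall" and "regulator" never appear in the key-message list,
# yet they are clearly shaping perceptions.
print(words.most_common(5))
```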
3. Check for false positives
With any kind of automated platform, it’s very easy to wrongly classify content at the data-gathering stage. For example, searching for content about the retailer Next can produce oceans of false positives because ‘next’ is such a common English word. Similarly, with sentiment analysis some words can be positive in the context of one brand, but toxic for another. ‘Crash’ is a great example – disastrous for financial services but a strong selling point for one TV show we’ve analysed. Unless you can carefully inspect the underlying data and have a mechanism to clean it thoroughly, false positives will riddle the analysis with mistakes; a simple illustration follows below.
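A crude example of screening out this kind of false positive: require a disambiguating context term alongside the ambiguous brand name before treating a match as relevant. The context terms below are assumptions made for illustration; a production pipeline needs far richer classification plus human inspection of the underlying data.

```python
# Minimal sketch of filtering false positives for an ambiguous brand name.
# A bare keyword search for "next" matches almost anything; requiring at least
# one disambiguating context term is a crude but illustrative first filter.

import re

CONTEXT_TERMS = {"retailer", "fashion", "high street", "stores", "clothing"}

def mentions_next_the_retailer(text: str) -> bool:
    lowered = text.lower()
    if not re.search(r"\bnext\b", lowered):
        return False
    return any(term in lowered for term in CONTEXT_TERMS)

print(mentions_next_the_retailer("Next reports strong high street sales"))   # True
print(mentions_next_the_retailer("The next meeting is scheduled for June"))  # False
```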
4. Question the statistics
Statistics and findings can be inaccurate (see above) or used selectively to highlight the ‘big numbers’ and pretty graphs we think executives like to see. In truth, those executives often consign media evaluation reports to the bin, where they rightly belong. Statistics can be powerful if they are robust and used consistently, so that cause and effect can be demonstrated at very high levels of confidence.
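For completeness, here is a small sketch of what ‘robust and used consistently’ can look like in practice: testing whether cleaned coverage impact actually moves in step with a business outcome, and reporting the confidence level rather than a single headline number. The figures are invented, and correlation on its own does not prove cause and effect; it is only a starting point for that argument.

```python
# Minimal sketch: check whether weekly media impact (from the cleaned subset of
# coverage) tracks a business outcome, and report the confidence level rather
# than a single headline number. The figures are invented for illustration,
# and correlation alone does not demonstrate causation.

from scipy.stats import pearsonr

weekly_impact  = [120, 340, 90, 410, 280, 150, 370, 60]    # cleaned coverage impact
weekly_traffic = [1.1, 2.9, 0.8, 3.5, 2.4, 1.3, 3.1, 0.7]  # e.g. site visits (000s)

r, p_value = pearsonr(weekly_impact, weekly_traffic)
print(f"correlation r = {r:.2f}, p = {p_value:.4f}")
```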