Forget everything you might have heard or read about there being no magic bullet for media evaluation.  In reality measuring PR – as with everything else in marketing and communication – is  simply down to understanding the effect it has upon audiences.  The crucial point is not to be taken in by the silly audience numbers currently used as PR metrics.  Address that and “hey presto”, the magic bullet of PR measurement appears.

Here we explain how Metricomm has addressed this challenge and in doing so reveal that:

  • Media coverage is more powerful than has previously been understood
  • Media evaluation costs can actually be reduced

‘Audience’ metrics currently used by the PR industry might look impressive, but they mean little in terms of the true audience being reached and influenced.  At the same time volume metrics might be simple and easy to use but, again, reveal nothing about the audience being reached and influenced.  In fact, as we shall see later, volume metrics generate a very misleading view of what is really happening.

Given all of this, why should anyone be surprised that PR has struggled to compete against other marketing activities, where increased digitisation has provided real and meaningful audience data?  The goal we therefore set ourselves was to determine meaningful audience data for use in media evaluation; and in view of the increasing move to digitisation, we took the decision to focus on online and broadcast media.  This has been vindicated as print media have declined even more rapidly during the pandemic, while fake news continues to be a major concern for users of social platforms.

Our first step was to carry out extensive research into the human factors that determine reading, viewing and listening behaviour in the real world.  For media coverage to have any influence people obviously have to see it; and with the vast number of articles and other content available today the likelihood of that happening for all but the very biggest news stories is remarkably small.  Second, having seen coverage people then have to read it.  Again, the likelihood of them doing so is small, with 45% of readers who load an article leaving within 15 seconds (source: Chartbeat).  Third, for media coverage to be truly effective the content must grab them sufficiently to generate a reaction or interaction, such as visiting a website or carrying out a Google search for more information.
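The three-stage funnel described above – see, read, react – can be sketched as a simple chain of probabilities.  This is purely illustrative and is not Metricomm's actual algorithm; every figure below is an assumption, except the 45% bounce rate cited from Chartbeat:

```python
# Toy funnel model (illustrative only, NOT Metricomm's algorithm):
# the engaged audience is the slice of the headline audience that
# sees, reads, AND reacts to a piece of coverage.
publication_audience = 1_000_000  # headline 'audience' figure (hypothetical)
p_see = 0.02    # chance a reader encounters this article at all (assumed)
p_read = 0.55   # of those, share staying past ~15 seconds (cf. Chartbeat's 45% who leave)
p_react = 0.10  # of those, share who search or visit a site (assumed)

engaged = publication_audience * p_see * p_read * p_react
print(f"Engaged audience: {engaged:.0f} of {publication_audience:,}")
```

Even with generous assumptions, multiplying the stages together shows why the engaged audience is a tiny fraction of the headline 'audience' figures commonly quoted.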

Various factors typically affect the likelihood people will get to this stage, with the most important being the type and emotion of the content – you can read more about this here.  Those who do are what we call the ‘engaged audience’ and are critical to understanding the true effectiveness of media coverage.  It is worth noting here that in today’s world many of the traditional rules of ‘good PR’ no longer apply.  For instance, long articles are just as likely, if not more likely, to be read than short ones, even when viewed on a mobile phone, while images are not as important for online coverage.

Taking all of these factors into account required the development of an algorithm to arrive at a figure for the engaged audience for each piece of coverage.  This has now been used and extensively tested for over three years by some of the world’s leading organisations and the results are consistently and remarkably good.

Having established a reliable and robust algorithm for determining the engaged audience for media coverage it is just as crucial to understand where and how this should be applied.  This is where the PRSV – or Public Relations Search Value – comes into its own.  In simple terms PRSV determines the probability that people will find or go to different media for their news, either directly or via a search engine, such as Google.

It has always been assumed – and common sense would suggest – that increasing coverage will increase effectiveness.  In the real world, however, this is not the case.  The reason for this counter-intuitive fact is what is known as the ‘Pareto effect’.  More commonly known as the ‘80/20 rule’, this states that 80% of outcomes are the result of just 20% of their causes, which in the case of PR means 80% of effectiveness is driven by just 20% of media coverage.
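The 80/20 claim is easy to check against any list of per-article engagement figures.  A minimal sketch, using hypothetical numbers chosen only to show the shape of the calculation:

```python
# Illustrative sketch (hypothetical figures): rank articles by engaged
# audience, then measure what share of total engagement the top 20% delivers.
engaged = [50000, 20000, 9000, 4000, 2000, 900, 500, 300, 200, 100]  # per article
engaged.sort(reverse=True)
top20_count = max(1, len(engaged) // 5)  # the top 20% of articles
top20_share = sum(engaged[:top20_count]) / sum(engaged)
print(f"Top 20% of articles drive {top20_share:.0%} of the engaged audience")
```

With real coverage data the split will rarely be exactly 80/20, but heavily skewed distributions of this kind are what the Pareto effect describes.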

The consequences of this are far-reaching and important.  After all, if just 20% of media coverage is truly effective, does this mean that the other 80% is wasted?  The answer, fortunately, is that the 80% still performs a useful function in helping to raise awareness and deliver key messages.  However, the key fact remains that only 20% of media coverage truly affects audience behaviour, though as we will see later this 20% is typically more powerful than has ever been realised.

Another question this raises echoes the famous advertising quotation attributed to the American department store magnate John Wanamaker: “Half the money I spend on advertising is wasted; the trouble is, I don’t know which half”.  Fortunately this is not the case with public relations.  PRSV can identify the impact of specific media on audience reaction and behaviour, enabling the 20% of media most likely to be driving effectiveness to be identified, monitored and updated on an ongoing basis.

Given that four-fifths of media coverage has little impact it is easy to understand why volume of coverage is such a poor metric.  Crucially, it means that volume of coverage should only be used as a last resort for PR measurement: at best it provides a limited understanding of PR effectiveness while at worst it can lead to very poor – and in many cases, incorrect – decision-making.

The great news for the PR industry is that while only 20% of media coverage is truly effective, this 20% is far more effective than has been widely understood.  Indeed, far from being the “poor relation”, which is how many marketing and advertising people see PR – most notably and recently Sir Martin Sorrell, who claimed digital natives associate it with “press releases” and “gin-soaked lunches” – this means it must now be taken very seriously indeed as a key part of the marketing mix.

Further important consequences include the ability to measure PR effectiveness both reliably and robustly with much lower volumes of coverage.  As well as significantly reducing costs this also makes the analysis of entire sectors and industries perfectly feasible.

Of course, this all sounds great – but can we prove it?  As always, the best way to prove anything is to show a real-life example.  The higher education sector is ideal because of the large number of UK universities, but these could just as easily be fmcg brands, business organisations, TV programmes, services or goods.  The principle is exactly the same.

Results from this real-life example for the six months to September of this year (figure 1) show a clear corridor either side of the straight line, indicating a strong relationship between the PRSV engaged audience reached by online media coverage for each university and Google searches for them.  Google searches are an excellent way of understanding what people are really interested in, as outlined in Seth Stephens-Davidowitz’s excellent book, Everybody Lies.

Something we have to be very careful about here is that “correlation does not imply causation”.  After all, both media coverage for universities and interest in them are driven largely by the academic year, in which case we would expect media coverage for universities to coincide with interest in them.  Right?

Wrong!  It is absolutely the case that media coverage of universities, as with many things, is highly seasonal and the volume of coverage does, indeed, change accordingly.  However, 2020 has been anything but a normal year and media coverage of universities has not followed anything close to the usual pattern.  As can be seen by looking at figure 2, which also shows Google searches for the same 79 universities but this time plotted against the volume of coverage for each one, the clear corridor has gone.  What this indicates is a poor relationship between volume of online media coverage and Google searches.

Furthermore, using PRSV we know precisely when any effect of media coverage will begin.  Knowing this, while taking account of any other activity that might be happening at the same time, enables us to remove any doubt that the results we see are being generated by the media coverage being analysed.  Of course, social media can also play a role, but again we know exactly when this has been driven by the media coverage.  Either way, we can address the correlation-versus-causation question and answer it accordingly.

Additional proof of the effectiveness of the PRSV engaged audience can be seen in the statistical analysis of the university sector data, which reveals a relationship between PRSV engaged audience and Google searches so strong that its significance level is 99.99999999%.  In the simplest terms this means the likelihood it can be explained by chance alone is under one in a billion.  We can also tell from the statistical analysis that 44% of the Google searches for universities are explained by the PRSV engaged audience generated by the online media coverage.  The remaining 56% will, of course, be down to other factors, including any underlying seasonal effects.
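In a simple linear analysis, the “% of searches explained” figure is the square of the correlation coefficient between the two series.  The sketch below, using made-up data in place of the real university figures, shows how the calculation works:

```python
import math

# Hedged sketch with made-up, pre-normalised data: how a correlation between
# engaged audience and search interest yields a "variance explained" figure.
engaged_audience = [1.0, 2.1, 2.9, 4.2, 5.1, 5.8, 7.0, 8.2]  # hypothetical
google_searches = [1.3, 1.9, 3.2, 3.8, 5.5, 5.6, 6.8, 8.4]   # hypothetical

def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance over product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(engaged_audience, google_searches)
print(f"r = {r:.3f}; variance explained (r^2) = {r * r:.0%}")
```

The quoted significance level would come from testing r against the null hypothesis of no relationship; with real data a statistics library such as SciPy's `pearsonr` returns both r and the p-value directly.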

By any standard, these are extraordinary results for any marketing discipline.  This is the real power of online media coverage.  In fact, we should not be too surprised by them, given that the first place 90% of us turn to when we want more information is online, including Google search.  Google’s own algorithm now takes account of the quality of sources, with trusted online news media inherently appearing at or near the top of search results.

Once again, “hey presto”, the ‘magic’ bullet is at work.  Except, of course, this is not magic but the use of sound logic, rigorous data and robust statistical analysis, which holds true for brands, organisations, services, goods, etc, just as much as it does for universities.

It is worth saying at this point that as well as the engaged audience, PRSV also determines the search engine rank of online media coverage achieved, which is extremely valuable for SEO purposes.  In addition, PRSV can be used for any site and not just media coverage, which in today’s increasingly digitised world means conferences, exhibitions and any other online presence.

So, what does this mean for the PR and media evaluation industries?  In short it means that the true impact of online media coverage, which is where most people now get their news, can be measured robustly and accurately.  Key to this is understanding the real audience driven by media coverage and other PR activities and ignoring the silly numbers currently used, which are not only meaningless but dangerous.  The same is true for volume of coverage.  It is also true for advertising value equivalents (AVEs), which though supposedly ‘outlawed’ by the media evaluation industry continue to be used by many within it.

It also means that while a client organisation still needs to understand how it is being covered across the media, it should not have to pay for evaluation of all media coverage.  At best this is unnecessary; at worst it dramatically increases the likelihood of poor decision-making.  As well as meaning that media evaluation costs can and should be reduced, this also means that critical competitor analysis should now become cost-effective.


Figure 1.  The vertical axis shows people’s interest in UK universities measured by Google searches for each one over the six months to September, 2020.  The horizontal axis shows PRSV engaged audience for each university across the same six month period.  All data has been ‘normalised’ to allow direct comparison with the volume of coverage chart below (note: ‘normalisation’ is a standard procedure used to enable direct comparisons between different types of data).
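The ‘normalisation’ mentioned in the caption can be sketched as follows.  Min-max scaling is assumed here purely for illustration; the analysis does not specify which normalisation method it uses:

```python
# One common form of normalisation (min-max scaling to the 0-1 range),
# assumed here for illustration only.
def normalise(values):
    """Rescale a list of numbers so the smallest becomes 0 and the largest 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

searches = [120, 450, 310, 980, 75]  # hypothetical raw search counts
print(normalise(searches))  # each value now lies between 0 and 1
```

After this rescaling, series measured in very different units – search counts and engaged-audience figures, say – can be plotted on the same axes and compared directly.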


Figure 2.  The vertical axis again shows people’s interest in UK universities measured by Google searches for each one over the six months to September, 2020.  The horizontal axis uses exactly the same media coverage as figure 1, but this time by volume for each university across the same six month period.  All data has been ‘normalised’ to allow direct comparison.