Real-Time User-Generated Content: The Best (and Least Leveraged) Leading Indicator of Initiative Success



3 tips for harnessing this secret weapon


Here’s a reality that few brands want to acknowledge: up to 95% of new consumer product launches fail. The reasons vary from marketing mishaps and pricing failures to bad formulas and faulty packaging. Clearly, not every new product failure can be avoided after launch, especially for propositions that simply lack a core benefit, target too small an audience, or are missing some other foundational element.


But for many new initiatives, small adjustments can be the difference between success and failure. And there is no better (or quicker) leading indicator of initiative success than user-generated content – reviews and social feedback. Why? Because it is real-time and unfiltered – no one is sitting behind a one-way mirror telling you what you want to hear. These are real consumers who have experienced your brand, giving feedback – the good, the bad, and the ugly.


The Challenge


Separating one-off comments in reviews of a new launch from genuine trends and patterns is difficult. After all, you don’t want to follow a single comment off a cliff and adjust an entire proposition based on an “n of 1”.


To understand this point further, 4Sight looked at a number of recently launched CPG products. One launch from Clean & Clear is a perfect example of why digging deeper is imperative: even products from the same line can elicit very different reactions.

Here’s the background: just under a year ago, Clean & Clear introduced a new line of Lemon + Vitamin C cleansers and toners. On the whole, the line has been well received by customers.


Looking at reviews on the brand’s own website, Target, Amazon, and Influenster – after removing syndicated duplicates and verifying purchases – an interesting story emerged. Right away, it’s clear that the brand recognizes the importance of reviews: most of the reviews were collected through promotions, either on the company’s website or through VoxBox on Influenster. However, two offerings from the same line tell very different stories.


First, the good: Clean & Clear Zesty Scrub.

[Chart: Clean & Clear Zesty Scrub review rating and review count over time]

The star rating has remained consistently above 4.0, at times reaching 4.6. Overall, across all sites, Clean & Clear Zesty Scrub has an aggregated average star rating of 4.23 across just over 1,000 total reviews. You may notice in the chart above that reviews spike around February and March. This is due to incentivized reviews: over 80% of the reviews collected during those months were solicited by the brand. As of mid-September 2019, the scrub had a 4.4 rating on the brand’s website, 4.2 on Influenster, and 4.5 on Amazon. Because the bulk of the reviews sit on Influenster, those reviews carry more weight, which explains the overall star rating of 4.23. That average aligns with the star ratings when viewed longitudinally over the last 10 months, from launch until now.
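
To make the weighting concrete, here is a minimal Python sketch of a review-count-weighted average. The per-site ratings come from the figures above; the per-site review counts are hypothetical, chosen only to show how Influenster’s volume pulls the blended figure down toward its 4.2:

```python
# Review-count-weighted average star rating across sites.
# Per-site ratings are from the article; per-site review counts are
# hypothetical, chosen only to illustrate the weighting effect.
sites = {
    "brand_site":  (4.4, 60),    # (avg rating, review count)
    "influenster": (4.2, 885),
    "amazon":      (4.5, 60),
}

total_reviews = sum(count for _, count in sites.values())
weighted_avg = sum(rating * count for rating, count in sites.values()) / total_reviews

print(f"{total_reviews} reviews, weighted average: {weighted_avg:.2f}")
# -> 1005 reviews, weighted average: 4.23
```

An unweighted mean of the three site ratings would be 4.37; weighting by volume is what lands the aggregate at 4.23.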


Clean & Clear Lemon Exfoliating Slices, from the same line and launched at the same time, tell a different – and perhaps more alarming – story.


[Chart: Clean & Clear Lemon Exfoliating Slices review rating and review count over time]

The overall aggregated average star rating for the exfoliating lemon slices is 3.85. As of mid-September, they have a 4.2 rating on the brand’s website, 3.8 on Influenster, and 4.1 on Amazon, with Influenster again hosting the bulk of the reviews. But the 3.85 overall rating alone doesn’t give us the full picture.


The star rating peaks very early, close to 5 in November and December, then drops dramatically at the start of 2019. Notice that as the number of reviews rises, the rating keeps falling, hitting a low of 3.1 in April. That trend is the jumping-off point: the immediate comparison is with the “sister launch”, but a comparison with previous launches can and should be done as well, along with a deep dive into why the rating dropped so heavily. In this case, scent looks like one of the top negative drivers behind the poor reviews.
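
That kind of deep dive can start with something as simple as tallying theme keywords in low-star reviews. Here is a minimal sketch assuming you have already collected (rating, text) pairs; the sample reviews and the keyword taxonomy are hypothetical:

```python
from collections import Counter

# Hypothetical review data: (star rating, review text).
reviews = [
    (2, "The lemon scent is overpowering and smells artificial."),
    (3, "Left my skin dry, and the smell lingers for hours."),
    (5, "Love the packaging and the gentle exfoliation."),
    (1, "Scent gave me a headache and the texture was too rough."),
]

# Hypothetical theme taxonomy: theme -> keywords to match.
themes = {
    "scent": ("scent", "smell", "fragrance"),
    "texture": ("texture", "rough", "abrasive"),
    "skin_feel": ("dry", "irritat", "burn"),
    "packaging": ("packaging", "jar", "pouch"),
}

# Tally which themes appear in low-star (<= 3 star) reviews.
negative_drivers = Counter()
for rating, text in reviews:
    if rating <= 3:
        lowered = text.lower()
        for theme, keywords in themes.items():
            if any(kw in lowered for kw in keywords):
                negative_drivers[theme] += 1

for theme, count in negative_drivers.most_common():
    print(f"{theme}: {count} low-star mention(s)")
# -> scent: 3 low-star mention(s)  (and so on)
```

In practice you would run this over the full review corpus and track the counts month over month, but even a crude tally surfaces which theme keeps appearing as the rating slides.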


What does all of this tell us? A few things you need to do:


1. Monitoring reviews and social comments monthly at first, and then quarterly after 4-5 months, is critical to understanding what users are saying about the product – its efficacy, scent, packaging, benefits, and so on. But that’s not enough.


2. Have processes in place to act on the drivers behind the feedback quickly – course-correcting immediately, before issues become systemic. Acting on feedback first requires a deep dive into the drivers, then taking steps to fix the negative drivers and fuel the positive ones. Having that process in place at launch is critical.


3. Benchmark new initiatives against competitors (or previous launches) so you can put early indicators in context. You don’t want to jump into action on a handful of reviews, but understanding how those first few comments compare – in terms of positive and negative drivers – to competitors or your own brand’s previous launches enables quicker action (see the sketch below).
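
To illustrate that benchmarking in code, here is a minimal sketch comparing a new launch’s month-by-month average rating against a prior launch at the same point in its lifecycle. All numbers are hypothetical; only the shape of the new launch’s trend (an early peak followed by a slide) echoes the slices example above:

```python
# Hypothetical month-by-month average star ratings, aligned by
# months since launch. Numbers are illustrative only.
new_launch = [4.9, 4.8, 4.1, 3.8, 3.5, 3.1]  # e.g., the new initiative
benchmark = [4.5, 4.4, 4.3, 4.3, 4.2, 4.2]   # e.g., a prior launch

ALERT_GAP = 0.5  # flag when the new launch trails the benchmark by this much

for month, (new, old) in enumerate(zip(new_launch, benchmark), start=1):
    flag = "  <-- dig into drivers" if old - new >= ALERT_GAP else ""
    print(f"Month {month}: new {new:.1f} vs. benchmark {old:.1f}{flag}")
```

A strong month one or two can mask a slide that a benchmark makes obvious; aligning by months-since-launch is what lets a handful of early reviews trigger action with confidence.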


Taking these simple steps won’t fix every new product problem, especially for launches that probably should never have happened in the first place (wrong target, poor product performance, and so on). But for many of the 95% that fail, putting these simple steps in place will not only help “save the initiative”, it can help turn around growth.


Reach out to hear about 4Sight’s ALIRT solution, which helps enable this type of agility: https://www.4sightadvantage.com/contact-us
