
Making Sense of Public Ratings in the Product Selection Process

02 May, 2023

Selecting the best solution among many candidates for your internal systems is a tough ask. Vendor offerings are virtually endless, and there are too many considerations to balance when your priorities can be miles away from the criteria behind the rating you are looking at. Leveraging scores from the public domain and peer reviews may help, but it does not necessarily solve the problem, especially when you are considering novel technology.

Based on our experience, here are the top five biases to account for when you use public scores to identify appropriate solution providers.

Provider bias. Sticking to your favorite rating provider is like trying to see the room through a keyhole. You will be guided toward one slice of the market, which is neither comprehensive nor inclusive of novel approaches and multitools. The more sources you factor in, the more coverage and diversity you gain for your selection. Still, weigh the trustworthiness of each source before letting it stand in for your own assessment.
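To make the multi-source idea concrete, here is a minimal sketch in Python. The source names and trust weights are illustrative assumptions, not a recommendation of specific platforms; the point is simply a trust-weighted mean across whichever sources have rated a product.

```python
# Illustrative trust weights per rating source; the names and values
# below are assumptions to replace with your own judgment.
SOURCE_TRUST = {
    "gartner_peer_insights": 1.0,  # broad coverage, published methodology
    "g2": 0.8,
    "niche_blog": 0.4,             # useful for novel tools, less rigorous
}

def blended_score(scores_by_source: dict[str, float]) -> float:
    """Trust-weighted mean over whichever sources rated the product."""
    pairs = [(SOURCE_TRUST[src], score)
             for src, score in scores_by_source.items()
             if src in SOURCE_TRUST]
    total_trust = sum(w for w, _ in pairs)
    if total_trust == 0:
        return 0.0
    return sum(w * s for w, s in pairs) / total_trust

print(round(blended_score({"gartner_peer_insights": 4.2, "niche_blog": 4.9}), 2))  # 4.4
```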

Input volume bias. On every peer review platform, the final rating is an aggregation of opinions. Mind the sample size, though: some solutions have fewer than ten reviews, each of which can be five stars, which would inevitably push them into the top of your ranking. To normalize this distortion, weight each score by a coefficient that reflects the number of opinions behind it.
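One well-known way to build such a coefficient is a Bayesian average, which pulls thinly reviewed scores toward a platform-wide prior. A minimal sketch follows, where prior_mean and prior_weight are illustrative values you would tune per platform.

```python
def bayesian_average(raw_score: float, review_count: int,
                     prior_mean: float = 3.5, prior_weight: int = 20) -> float:
    """Pull scores backed by few reviews toward the prior mean.

    prior_weight acts as a number of 'virtual' reviews at prior_mean,
    so a handful of real five-star reviews cannot dominate the ranking.
    """
    return ((prior_weight * prior_mean + review_count * raw_score)
            / (prior_weight + review_count))

# A 5.0 from 3 reviews no longer outranks a 4.6 from 200 reviews:
print(round(bayesian_average(5.0, 3), 2))    # 3.7
print(round(bayesian_average(4.6, 200), 2))  # 4.5
```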

One may counter that novel technology and net-new vendors cannot accumulate many reviews simply because they are fresh to market. True, but consider the survival odds: roughly 6% of security companies cease operations within five years of inception, 24% get acquired by bigger players who then decide the product's destiny, and of the rest, only a select few make it to the top of their niche.

Vendor alignment bias. Have you ever received an invite from a provider to rate them after a successful implementation? Guess what: unsuccessful implementations are normally not followed up with an invite, and not every frustrated engineer wants to spend more time filling out feedback forms to share a bad experience with the product. This is where trust in public ratings gets shallow.

Temporal bias. Another criterion to consider is the “aging” of scores. Bigger companies ship major releases every six months and roll out new features on a quarterly cadence. Temporal adjustments are necessary to keep inputs relevant, under two main assumptions:

1. Negative reviews and scores tend to feed features into the development cycle and will most likely be answered by a “fix” within a year.

2. Positive reviews should decay in value as the industry moves forward, introducing new ideas, frameworks, and requirements over time.

The general rule of thumb: scrape the scores and apply a discount coefficient for every six-month period between the review date and the current date. Still, be mindful of how data is presented in the aggregated view on each peer review platform, and read its methodology to avoid discounting twice. Gartner, for example, halves the impact coefficient (the weight of one entry) annually, and that decay is already built into its final scoring.
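As a rough sketch of that rule of thumb, the snippet below discounts each review once per full six-month period of its age. The discount factor of 0.7 per half-year is an assumption chosen to compound to roughly one half per year, in line with the annual halving mentioned above.

```python
from datetime import date

def discounted_weight(review_date: date, today: date,
                      discount: float = 0.7) -> float:
    """Weight of one review after stepwise six-month discounting.

    0.7 per half-year compounds to roughly 0.5 per year, mirroring
    an annual halving of a review's impact (an assumed tuning value).
    """
    half_year_periods = (today - review_date).days // 183
    return discount ** max(half_year_periods, 0)

def discounted_score(reviews: list[tuple[date, float]], today: date) -> float:
    """Aggregate (review_date, stars) pairs into a recency-weighted mean."""
    weights = [discounted_weight(d, today) for d, _ in reviews]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * stars for w, (_, stars) in zip(weights, reviews)) / total

reviews = [(date(2021, 4, 1), 5.0), (date(2023, 3, 1), 3.5)]
print(round(discounted_score(reviews, today=date(2023, 5, 2)), 2))
# 3.79: the aged five-star review barely moves the result
```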

Mathematical bias. As the final step, we recommend fact-checking the ratings at the top of your list. Behind all those numbers, you may lose sight of your original requirements for the product. Add three MUSTs and three MUST NOTs as a filter and remove solutions that fail these high-level criteria.
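A minimal sketch of such a hard filter follows; the capability flags are hypothetical placeholders for your own MUSTs and MUST NOTs.

```python
# Hypothetical capability flags; replace with criteria from your requirements.
MUSTS = {"saml_sso", "api_access", "on_prem_option"}
MUST_NOTS = {"eol_announced", "no_audit_logging", "cloud_only"}

def passes_hard_filter(capabilities: set[str]) -> bool:
    """Keep a solution only if it offers every MUST and no MUST NOT."""
    return MUSTS <= capabilities and not (MUST_NOTS & capabilities)

candidates = {
    "VendorA": {"saml_sso", "api_access", "on_prem_option"},
    "VendorB": {"saml_sso", "api_access", "on_prem_option", "eol_announced"},
}
shortlist = [name for name, caps in candidates.items()
             if passes_hard_filter(caps)]
print(shortlist)  # ['VendorA']
```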

Selecting a solution based on industry ratings and peer reviews should account for the statistical biases that emerge from the collection, processing, and presentation methods of public sources. With such an approach, you will be able to lean on broader industry experience, make informed decisions, and build a repeatable practice that accelerates your selection of solutions that truly meet your needs.

At CPX, we have built a data-driven selection process. Through multiple filters we create a targeted sample of solutions for our clients, which is then funneled into RFI and in-lab POC pipelines. Given the depth of in-lab testing we put solutions through during the POC, an efficient pre-selection methodology is imperative. An appropriately tuned workflow allows us to accelerate selection, validation, and assurance for our clients.

Written by Konstantin Rychkov

Despite their usefulness, public ratings of software providers must be used with caution. In this blog, we share considerations and suggestions for a guided approach to accounting for and eliminating bias in the vendor selection process.

#VendorManagement #VendorDueDiligence #ProductRating
