Project History VII-IX: Why the Best Go/No-Go Research Starts by Trying to Kill the Project

Intro

Teams often fail not because of a lack of data, but because of research that confirms what they already believe. The answer? Start by trying to kill your project.

Today’s article explores the mindset and biases you bring to an insights project supporting a go/no-go decision and how these impact the way you conduct primary research.

I'll share three case studies from projects where we used awareness of bias as a tool and delivered more than a mere box-ticking exercise.

Being aware of bias is challenging, both as an individual and as a team, as anyone with even a cursory knowledge of popular psychology and human biases (e.g., the work of Kahneman and Tversky) will appreciate.

If you have the introspective ability to assess your biases around the decision and discuss them as a team ahead of the project, I encourage you to do so.

The other option is to bring in external support and have them make the assessment objectively.

For my part, when I work with a new team, I take a sense check of the team's attitude during the discovery phase, asking about their current plans and thinking, and also noting what they are not saying (an assessment that continues throughout the project).

Regardless of which approach you take, the following six tips will be helpful.

Six tips

The mindset you need to take into primary research is “Let’s invalidate or disprove our internal viewpoint.”

Below are six tips that can help you do this:

  1. Tip 1 is to recruit experts with diverse views. You can assess this from their public opinions and articles (for example, in Alzheimer’s, there are many self-declared amyloid-hypothesis skeptics) or by adding a screening question or two that lets you gauge it (e.g., “Do you use this type of medicine?” or “Do you perform this kind of surgery?”).
  2. Tip 2 is to consider how you can adjust your discussion guide to mitigate your bias. One helpful way is to ask comparative questions rather than absolute ones, to gain a more nuanced view and avoid respondents giving answers they think you want to hear. For example, rather than asking, “Are there unmet needs in this indication?”, where the answer will almost always be yes, ask, “Do you think the unmet needs in this indication are sizeable compared to other indications you work in, and why?” This will provide a more accurate assessment of the strength of feeling and encourage the expert to elaborate on the topic in greater detail.
  3. Tip 3 is the importance of following up on and challenging ideas during the interview. Even if the questions are written in a “standard way,” you can challenge answers to surface contrary findings. If an expert answers in the positive, follow up with something like, “Some of your colleagues answered/might answer in the negative; could you try to explain why they might think this?” This draws out good counterpoints (even where optimistic respondents would not naturally offer them).
  4. Tip 4 is similar: ask “pre-mortem” style questions. When considering a potential drug candidate or approach to treating a disease, ask, “Imagine this drug failed to be commercially or medically successful when it launched. Why did that happen?” This hypothetical scenario will surface counterpoints to progressing the project.
  5. Tip 5 is that if you have several people conducting the interview, one can be given a devil’s advocate role and asked to challenge answers from the opposite viewpoint. They can focus on follow-ups that push back or verbalise scenarios in which the opposite situation happens.
  6. The final tip, number 6, concerns project reporting. Where there is internal bias toward “go”, rather than presenting the reasons to continue with the project and afterward detailing risks or challenges, start with why you shouldn’t go, explaining the case against, and only then discuss the positives. This frames the decision in a different light, especially when presenting the research back to the team or more senior stakeholders.

Case Studies

These three projects will help demonstrate the use of some of these tips.

Project A

The client was 95% certain that there was no appetite for a particular diagnostic test outside the one market where it had already launched, as the test was a) not (yet) well validated and b) not reimbursed.

However, when we built the guide and conducted the interviews, rather than trying to confirm there wasn’t demand, we instead tried to find demand (Tip 2).

This meant that questions and probes were built around finding uses: pushing the experts on how the limited, early-stage, but positive evidence for the test could be used as an argument to employ it, and pressing physicians to consider ordering it despite no or limited reimbursement (a reverse Tip 4).

We pushed so hard that several of the physicians got mildly irritated that we “weren’t getting our heads around there was no demand” (Tip 3).

In the end, the project achieved its aim: a case against developing the test for other markets (for now) that rested not on the product lead’s gut or a few box-ticking responses, but on detailed quotes and feedback from a dozen experts across the globe.

Project B

In this project, the R&D team developing the product was enthusiastic about incorporating a series of potential features into a new piece of software.

The Insights Lead wanted to be supportive, but needed to bring the team back to reality: they might not need every bell and whistle in the first release, and focusing on fewer, critical features would shorten development time and get the product to market sooner.

We recruited pathologists with varying levels of experience, of different ages, and from different regions across the target market (Tip 1).

We then used a relative assessment approach to see which features would be most valuable (Tip 2). To do this, we asked about each feature independently, had respondents rate it on a Likert scale, and got them to talk through the benefits and any redundancies.

As expected, most were optimistic about getting new features and tools to help them.

We then came back and asked respondents to rank the features in order from 1 to 4.

During the discussion, we pushed back where respondents were overwhelmingly positive about multiple features (Tip 3), asking which one or two they would choose if they could pick only that many, and why.

This follow-up discussion revealed a series of additional positive and negative opinions about the features that wouldn’t be used as much (Tip 4), which we hadn’t captured in the standalone assessment of features.

From this, we built a ranking table showing the most popular features to prioritize in a minimum viable product, and what could wait for version 2, as well as some ideas on how to develop the less popular features more effectively, which the Insights Lead took back to the R&D team for discussion.
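To illustrate why the forced ranking added information the standalone ratings could not, here is a minimal sketch in Python. The feature names and scores are entirely hypothetical (none of these numbers come from the actual project); it simply tabulates mean Likert rating alongside mean forced-choice rank and sorts by the latter.

```python
from statistics import mean

# Hypothetical data for illustration only: feature names and scores are
# invented, not the project's actual results.
# Likert ratings (1-5, higher = more valuable), one score per respondent:
likert = {
    "Feature A": [5, 5, 4, 5, 4],
    "Feature B": [5, 4, 5, 4, 5],
    "Feature C": [4, 4, 3, 4, 4],
    "Feature D": [4, 3, 4, 3, 3],
}
# Forced-choice ranks (1 = top priority) from the "rank 1 to 4" exercise:
forced_rank = {
    "Feature A": [1, 1, 2, 1, 2],
    "Feature B": [2, 2, 1, 3, 1],
    "Feature C": [3, 4, 3, 2, 3],
    "Feature D": [4, 3, 4, 4, 4],
}

# Build (feature, mean Likert, mean rank) rows, then sort by mean rank:
# the forced trade-off separates features that near-identical Likert
# ratings cannot.
rows = [
    (name, round(mean(likert[name]), 2), round(mean(forced_rank[name]), 2))
    for name in likert
]
rows.sort(key=lambda r: r[2])  # lower mean rank = higher priority

print(f"{'Feature':<12}{'Mean Likert':>12}{'Mean rank':>11}")
for name, lk, rk in rows:
    print(f"{name:<12}{lk:>12}{rk:>11}")
```

In this made-up example, Features A and B tie on mean Likert rating, but the forced-choice ranks split them cleanly, which is exactly the effect the "if you could only choose one or two" follow-up relies on.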

Project C

On this project, the client conducted an exploratory landscaping exercise to determine whether to initiate a vaccine program for a specific condition with no existing vaccines, either internally or through acquisition.

Initially, the overall stance was neutral, as this was one of many ongoing efforts by the team. Because of this, and an overall lack of consensus on several key aspects of the condition, we had to adopt an open-minded approach to the discussion guide; Tip 2 therefore wasn’t applicable.

However, when we started interviewing experts, we had good variety in terms of geographic location and setting (Tip 1), and we received both negative and positive responses on whether developing a vaccine would be worthwhile.

I pressed each respondent on their responses to build depth beyond the obvious (Tip 3), aided by messages from the client, who was a silent listener challenging their viewpoints. We also conducted pre-mortems (Tip 4) on the vaccine failing to be adopted (and why), and for those who were negative, constructed scenarios of success and asked them to explain how these could be achieved.

I was the only interviewer on the call, but the client, listening in, played devil’s advocate (Tip 5) by posing deeper challenges where I missed any.

In the final report, I focused on the negatives initially (Tip 6), linking several loosely related topics to demonstrate an overall challenge from Payers/Governments, patients, and physicians for adoption, before layering in the positives.

The team decided to pause any activity in this space and review at a later date, prioritising other projects in the interim.

Wrap up

Bias is inevitable. By adopting a ‘kill it first’ mindset, you transform research from validation into stress testing.

In a world where Phase 3 trials routinely cost tens of millions of dollars, the value of stress-testing your bias is obvious.

If you have a go/no-go project coming up and want to discuss how to apply some of these tips and challenge your team’s biases, get in touch.
