

The Future is Here: Testing out ChatGPT with Win/Loss Analysis

This is a Crayon guest post from one of our partners, PSP Enterprises. Visit the Crayon Partner Directory to learn more!

“Let’s talk about how I can use GPT to help me with competitive intelligence.” That’s how I started the conversation, the way every OpenAI ChatGPT session begins.

If you haven’t tried ChatGPT yourself, stop what you are doing right now, go to www.openai.com, and sign up for free access to this generative AI tool. This is the breakthrough chatbot you’ve read about in the news, the one that has been trained on just about everything that can be learned from the web.

It’s exciting and a little scary to have this powerful new technology at your fingertips. AI this smart was science fiction just a few months ago. It can outscore most humans on legal bar exams, write and debug computer programs, compose poetry, even pen a blog or two (although not this one). So, why not go to the source and find out from ChatGPT itself what can be done?

According to ChatGPT, there’s very little it can’t handle — from data collection and monitoring, sentiment analysis, competitor profiling, trend analysis, scenario planning, competitive benchmarking, to content analysis — it’s got all the bases covered. 

But how about win/loss, my specialty? “Could GPT actually conduct a win/loss interview?” I asked, somewhat incredulously. “How would you do it?”

All you need to do is ask!

Oh, yes, it could be done: “GPT could potentially conduct a win/loss interview using its advanced natural language understanding and generation capabilities, although it would require careful setup and integration with appropriate communication channels.” 

It went on to list out the steps, starting with:

  • Question preparation
  • Establishing a communication channel (email, chatbot, or web-based form)
  • Conducting the interview (in a clear and engaging manner, with follow-up questions to seek clarification as needed)
  • Analyzing responses to find key themes or trends behind the factors causing wins or losses
  • Generating a report that informs strategic decision-makers and suggests ways to improve sales and marketing
  • And even learning from the process to get better and better at conducting win/loss interviews over time

ChatGPT cautioned me, though, that “while GPT can provide valuable insights through conducting win/loss interviews, it may not be able to fully replicate the nuances of human conversation or pick up on certain non-verbal cues. Nevertheless, using GPT for win/loss interviews can still offer a scalable and efficient way to collect valuable feedback and insights from customers.”
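Out of curiosity about what that setup might look like in practice, here is a minimal sketch of a scripted interview loop along the lines ChatGPT described, using the OpenAI Python client. Everything specific in it — the system prompt, the question list, the model name — is my own illustrative assumption, not something ChatGPT or PSP prescribes.

```python
# Minimal sketch of a chat-driven win/loss interview loop.
# Assumes the official `openai` Python package (v1.x) and an API key in the
# OPENAI_API_KEY environment variable; the system prompt, question list, and
# model name are illustrative, not a prescribed methodology.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are conducting a win/loss interview. Ask one question at a time, "
    "probe for specifics, and keep a courteous, engaged tone."
)

QUESTIONS = [
    "Which vendors did you evaluate, and how did you build your shortlist?",
    "What criteria mattered most in your decision, and how did you weight them?",
    "Where did the winning vendor stand out, and where did the others fall short?",
]

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

for question in QUESTIONS:
    # Conduct the interview; in a real deployment the answers would arrive via
    # email, a chatbot widget, or a web form rather than input().
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": input(f"\n{question}\n> ")})

    # Ask the model for a follow-up probe based on the answer so far.
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    follow_up = response.choices[0].message.content
    messages.append({"role": "assistant", "content": follow_up})
    messages.append({"role": "user", "content": input(f"\n{follow_up}\n> ")})

# Analyze the responses and generate a report from the same conversation.
messages.append({
    "role": "user",
    "content": "Summarize the key themes behind this win or loss and suggest "
               "ways to improve sales and marketing.",
})
report = client.chat.completions.create(model="gpt-4", messages=messages)
print(report.choices[0].message.content)
```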

I asked: What strategies could ChatGPT deploy to keep the respondent engaged, move quickly through mundane matters, and spend more time drilling down into the interesting and important issues?

Always a willing servant, ChatGPT came up with lots of suggestions: it would personalize the conversation, adapt its line of questioning to the interviewee’s responses, use empathetic language, acknowledge the respondent’s feelings or concerns, express “genuine interest” in their experience, and draw on all the standard interviewing techniques of open-ended questions, active listening, and so on.

So, I asked it to go ahead and use all of these strategies and to please compose a win/loss interview transcript to find out why a customer bought an Audi A6. I gave it instructions to conduct the interview following exactly the same process I use for my win/loss interviews. 

Out popped a finished transcript, with an entirely plausible set of competitive car models evaluated, competitive criteria considered, importance and vendor ratings, rationales for the ratings, and so on. I then asked for and received briefs for BMW, Mercedes-Benz, and Lexus with recommendations on actions they could take based on the interview.
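If you want to try something similar, the whole request fits in a single prompt. Here is a rough sketch with the OpenAI Python client; the prompt wording below is my own paraphrase and does not reproduce PSP’s actual interview process.

```python
# Sketch of prompting the model for a synthetic win/loss transcript and vendor briefs.
# Assumes the `openai` package (v1.x); the prompt is an illustrative paraphrase,
# not PSP's interview guide.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Compose a realistic win/loss interview transcript with a customer who just "
    "bought an Audi A6. Cover which competing models they evaluated, the criteria "
    "they considered, how important each criterion was, how they rated each car, "
    "and why. Then write a short brief for BMW, Mercedes-Benz, and Lexus with "
    "recommended actions based on the interview."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```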

It was absolutely jaw-dropping. It took me only 20 minutes to get this far, and I didn’t write a single line of computer code.

The future of ChatGPT & win/loss

So, is it time to hang up my gloves and call it a day? What’s left for the human practitioner?

Well, as of April 2023, anyway, ChatGPT can’t interview human subjects with the spoken word, so “conversations” are all text chats in a window. Right away, I think that disqualifies it as a technology for win/loss interviewing in the way that I do it today. It’s already hard enough to get respondents to agree to a win/loss interview on the phone.

Asking them to do it through a chat window with an AI bot might have some novelty appeal, but it sounds like a recipe for disaster: low response rates, low completion rates, and, at best, short, pat answers to questions rather than the detailed, emotionally rich responses you should expect in a win/loss interview.

Someday, perhaps not too far off in the future, there will be very humanlike AI avatars on the front end, but for now, we are stuck with text.

Taking a hybrid approach to win/loss analysis

In the meanwhile, I have been investigating how well ChatGPT could help with the day-to-day work of win/loss. Since it’s so good at summarizing, setting it to work on analyzing a win/loss interview transcript was a natural test. So, I fed it material drawn from PSP’s sample win/loss interview transcript. 

Its capabilities here were once again astonishing: it (mostly) accurately stated what the interview was about and summarized the reasons why the winner won and the loser lost. However, here, too, I soon ran into real-world limitations that you need to know about:

1. Limited document length

Unfortunately, the versions of ChatGPT you can access on OpenAI’s website limit the length of the document you can paste into the chat window for summarization to about 2,700 words, roughly 20 minutes of conversation. 

My real-world interviews are typically about two to three times that length depending on how long and fast the respondent can talk.
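Until longer context windows are widely available, one workaround is to split the transcript into pieces that fit under the limit, summarize each piece, and then summarize the summaries. A rough sketch, where the file name, word budget, and prompts are assumptions drawn from my own testing rather than anything OpenAI documents:

```python
# Workaround for the paste-length ceiling: chunk the transcript by word count,
# summarize each chunk, then merge the partial summaries. The ~2,500-word budget
# reflects the limit I observed, not a documented figure.
from openai import OpenAI

client = OpenAI()
MAX_WORDS = 2500

def summarize(text: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

words = open("long_interview_transcript.txt").read().split()
chunks = [" ".join(words[i:i + MAX_WORDS]) for i in range(0, len(words), MAX_WORDS)]

partials = [
    summarize(chunk, "Summarize the win/loss points raised in this part of the interview.")
    for chunk in chunks
]
print(summarize("\n\n".join(partials),
                "Combine these partial summaries into one win/loss summary."))
```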

2. Confabulation

“Confabulation” is the not-so-small problem of the GPT software making things up. Things seem to be going fine, and then all of a sudden it comes out with a whopper.

In my win/loss interview summarization tests, this showed up as ChatGPT including in its list of issues things that weren’t said at all, or that were mentioned but ruled out elsewhere in the discussion as significant issues.

The problem with ChatGPT is that these errors sound completely plausible and are delivered in the same confident tone as the correct information. So, you must check its work and not let yourself be bamboozled!
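There is no automatic cure for confabulation, but a crude cross-check can at least flag summary claims whose wording never appears in the transcript, so you know where to verify first. A naive sketch follows; the stop-word list and the 50% threshold are arbitrary choices of mine, and a flag only means “check this by hand.”

```python
# Naive sanity check: flag summary lines whose content words rarely appear in the
# source transcript. It only surfaces candidates for human review; it cannot prove
# a claim true or false.
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "was", "were", "that", "their"}

def suspicious_lines(summary: str, transcript: str) -> list[str]:
    transcript_words = set(re.findall(r"[a-z']+", transcript.lower()))
    flagged = []
    for line in summary.splitlines():
        words = [w for w in re.findall(r"[a-z']+", line.lower()) if w not in STOPWORDS]
        if words and sum(w in transcript_words for w in words) / len(words) < 0.5:
            flagged.append(line)
    return flagged
```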

3. Limited inferencing

As smart as ChatGPT seems to be, it can miss points that require more inferencing than it can handle. In my testing, for example, ChatGPT stated that the customer did not evaluate references, but this was not true: they had assessed customer references as part of a Gartner consultation. 

This logical connection was simply missed. Or, when I asked what mistakes one of the vendors made, it couldn’t answer concretely, so instead it gave me a list of generic mistakes that vendors could make.

Note that ChatGPT 4.0, OpenAI’s latest version, is supposed to be smarter than ChatGPT 3.5 in this regard, and my testing showed this to be true. (ChatGPT 3.5 is free; you have to pay $20/month for limited access to ChatGPT 4.0.) ChatGPT 4.0 did a better job of finding mistakes that vendors made, for example. But it also seemed much more aggressive and possibly more prone to confabulation.

4. Data privacy

OpenAI warns you that conversations with ChatGPT are not private, so you can’t use it with confidential materials, such as raw win/loss interview transcripts. Redacted and anonymized interview transcripts should be safer, but I have not used ChatGPT with real client materials.
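If you do want to experiment with material that is closer to the real thing, a sensible first pass is to scrub obvious identifiers before anything leaves your machine. Here is a minimal sketch with regular expressions; the patterns and the name list are illustrative and nowhere near exhaustive, so a human still needs to review the output.

```python
# Minimal redaction pass before sending transcript text to an external service.
# Patterns cover emails, phone numbers, and a hand-maintained name list only;
# real redaction needs a human review.
import re

KNOWN_NAMES = ["Acme Corp", "Jane Doe"]  # hypothetical names to scrub

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    for name in KNOWN_NAMES:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(redact("Call Jane Doe at +1 617-555-0100 or jane.doe@acme.com about the Acme Corp deal."))
```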

Adding the human touch to AI win/loss analysis

You have to marvel at the power of these AI technologies and their accelerated pace of improvement. It’s hard to deny that they will have a transformative impact on all knowledge workers, win/loss practitioners included.

But big-ticket commercial win/loss, especially because it depends on asking a busy business manager for time to discuss potentially sensitive matters, is probably one of the least appropriate targets for complete automation. 

Robotic or overly scripted interviews are a well-known pitfall for human interviewers already, and while ChatGPT isn’t the stereotypical robot that talks like a robot, it still comes across as a facsimile, with an odd yet pervasive numbness, a lack of feeling, that will not engage people the way a good interview must.

It makes much more sense to use AI to super-charge conventional surveys and push them beyond their current limits as confirmatory research tools. But make no mistake, something big has happened and there is no going back.

Ken Schwarz
Ken is the Managing Principal of PSP Enterprises, a win/loss consultancy in Lexington, Massachusetts. Ken manages all aspects of win/loss interviews and analysis for high-technology clients. He draws on his 25 years of experience in enterprise IT sales, marketing, product development, and service delivery at HPE/SimpliVity, Pegasystems, GE Digital, and Progress Software.