During the 2024 Super Bowl, Discover Card aired a commercial called “Robot” that explored the difficulty of telling AI apart from a real person. When Jennifer Coolidge praised Maya, the Discover rep, for sounding like a human being (which she was), the rep retorted:
“Wait, are you a robot?” – Maya, Discover rep
“Oh, how would I prove that I’m not?” – Jennifer Coolidge
Of course, the joke is now on the insights industry, as the gap between real people and AI survey bots grows ever smaller. Researchers are now tasked with verifying the authenticity of their respondents.
Questioning the Quality of Our Data
In the early days of online research, it was easy to identify bad actors because their tricks were fairly simple. Now, with AI, you start by questioning your best responses, the ones with perfect grammar, as the most likely to be AI-generated.
This points to a significant advantage of Insights Communities. AI survey bots can answer an ad hoc survey, but joining and actively participating in an ongoing community is an entirely different matter.
The research industry has largely taken sample quality for granted. Now, with AI, the potential for digital imposters increases exponentially. Ironically, in our pursuit of larger samples and tighter confidence intervals, we have opened the door to fraudulent respondents.
Quant results from online communities have typically been dismissed as directional, like the results of a few focus groups. I would argue that survey results from an insights community are as reliable as, if not superior to, those from ad hoc projects.
In the early days of online surveys, researchers validated sample integrity by calling every tenth respondent to verify their identity. As those quality standards have eroded, we have set ourselves up for the rise of survey bots.
After all, someone, or in this case “something,” has to fill the respondent gap.