AI in the Research Industry

The ‘Big Picture’ review

By Piers Lee, Deputy Editor of Asia Research Media, MD of BVA BDRC Asia

The buzz around Artificial Intelligence (AI) in market research is nothing new, but Asia Research took a step back for a fresh, big-picture review—how AI is transforming the industry and what’s coming next.

At the end of 2024, we conducted in-depth interviews with key players—clients, suppliers, and tech firms—to uncover how AI is being used, its benefits and pitfalls, and what the future holds for market research agencies (MRAs).

We followed this with a quantitative survey to gauge how these issues resonate across a wider audience. The findings will be published in Asia Research throughout 2025.

While AI’s advantages are well-documented, we zoomed in on its applications and limitations, and sought to shed light on the enduring role of MRAs in an AI-driven world.

In this edition of Asia Research, we share key feedback from the range of stakeholders interviewed on the subject.

PHONG HUYNH, Head of Research and Trends at Circles.Life:

Phong heads the insight function at Circles.Life, a global telco tech startup. He comments that AI can be adopted faster by more disruptive brands: start-ups need to move much faster to gain competitive advantage, and often don’t have time for the traditional model of end-to-end research consulting, with its back and forth between client and agency. “Our model at Circles is to establish an in-house research team and use a range of research technologies. Within this, AI is increasingly a tool we are experimenting with.”

But Phong also highlighted some concerns that AI will not give younger generations the proper training in the foundations of market research. AI might be used to provide shortcuts without providing context around its answers.

It is possible that the traditional market research agency will disappear, and its staff will transform into strategic business advisers, with AI tools guiding businesses to better data-driven decision-making. The other scenario is that agencies become the innovation engine of consumer behavioural science.

MARTIN ANSO, Senior Manager of Research and Insights (international bank)

Martin highlights some of the additional challenges that AI faces within more regulated industries such as financial services. But confidentiality issues are manageable: you can take a version of ChatGPT and put it in-house and on premises. It is trained outside and then brought inside.

While confidentiality can be controlled, there is a wider issue of ethics and responsibility, e.g. how AI is used in the processing of information and in decision-making. We would be accountable for any bad decisions made by AI, so we need to know the risks inherent in the solutions AI generates. For this reason, we might need to reassess vendors who have already been onboarded if we know they are using AI in their analysis, such as how AI is used to drive an outcome.

Other limitations of AI are that some markets and products, e.g. within corporate banking, are very complex. Sometimes this is down to a lack of sufficient data points, but for more sophisticated industries it is also hard for AI to make a judgment about “what does this actually mean?” and how to connect a finding to the business.

While there is still some hesitancy in the use of AI, it has the potential to be a great tool for synthesising findings across different research pieces and different data sets, e.g. one vendor might provide data around one product, and another vendor around another. With AI there is an opportunity to link them.

Martin comments that there will still be a role for humans in problem definition, e.g. not taking a brief from a client at face value but pushing back and challenging, asking: “What are you really trying to do with this?” Hence, we like to work with agencies that have cross-industry experience, to provide perspectives we would not otherwise consider, or simply best practice.

PERCIVAL (VAL) PASTRANA, Consumer Science Lead, AMEA/Global, Sanofi Consumer Healthcare (Opella Healthcare)

Val highlighted potential pitfalls of AI around culture and nuance. We would need to know what the AI has been “trained on”, e.g. a US-trained model might not transfer effectively to the Japanese market. There is also “the curse of averages”: the tendency of AI to report on averages, which might not highlight the significance of granular findings that could have large implications.

For synthetic data, established brands may be more cautious about its use, but “challenger brands” could be open to experimentation. Again, the risks can be managed, e.g. by implementing “safety nets” such as running A/B tests with and without synthetic data to verify its results.

Looking forward, we might see some fragmentation of the supply market as more specialist vendors with highly skilled staff provide the value add. To use AI properly, staff will still need to know the basics and principles: AI depends on well-crafted prompts, and researchers will need to become more skilled at prompting with the right questions and stimulus.

But we will still work with the big agencies, and we will still need to use them for their databases of “normative scores”. We will also need client servicing teams in the various markets where we have offices.

RUSSELL CARTER, Regional Manager, Dynata, SE Asia

Russell Carter from Dynata says that one of the challenges with AI is where organisations or individuals may be reluctant to engage with the new technology, which risks leaving them behind in their industry.

While synthetic data could be seen as a major threat to panel providers, Russell points out that “human thoughts and involvement will always be important” and that synthetic data is “machine learnt” from other data; so while there is a role for synthetic data, it still needs to be grounded in real human inputs.

Synthetic data will play a strong role in areas like product development, where there are large banks of data for normative scores. It will be more about “increasing the level of research that is done, rather than taking it away”. Synthetic boosts also have their role, e.g. for niche groups that are hard to reach via surveys.

AI will increase the efficiency of MR agencies and remove mundane work such as questionnaire design and reporting. Research will be enhanced, and agencies will be pushed up the value chain. The role of the researcher will be to understand the business that the research is applied to; AI will allow the researcher to become a better business consultant.

TIM WRAGG, Global CEO of Human8

Tim highlights issues around AI hallucinations: findings are stated so eloquently within the AI outputs that they are often “seductively plausible”, but they can be simply wrong.

Privacy and security concerns are huge, for example confidential data used to train AI might leak, and AI could also give bad actors a route into a company’s systems; hence a lot of AI is being developed in-house. Human8 is focusing on proprietary AI: while there are generic algorithms that you can apply, “we want ownership of the AI, and we want to train it ourselves. We are developing our own AI to cover the entire process, e.g. desk research, survey design, analysis, workshopping, data visualisation, co-creation, framing concepts, and storytelling.”

AI makes people with relatively little experience more productive, e.g., with two years’ experience you could probably operate like a mid-level person. Agencies will use AI to provide better advisory services to clients; the “traditional full-service agency” is not a survivable position, so they will also need more consultative, often senior, profiles that know how to prompt effectively with the business goal in mind. Agencies could end up with more of an hourglass hierarchy, which is good because agencies have always struggled with mid-level people.

RENEE SMITH, EVP Global Innovation, Toluna

Renee highlights challenges in AI adoption in healthcare and financial services, sectors that are heavily regulated. “Clients might ask about activities in any research that might involve AI. We have even created AI model Trust Cards that include the purpose and foundation of the AI model, the training and imported data, the limits of the model, etc. Or even far more reaching checks such as, ‘Did you investigate that it (AI) will do no harm?’ For the latter, we build sensitivity controls into probing questions to ensure that it will not be asking questions that are too personal. But this can be switched off according to the nature of the study.”

Renee Smith also comments on the broader implications of AI. “There is still fear out there that AI will take over our jobs. It’s in rapid transformation mode, and there are change management issues within organisations, and you will naturally get resistance. But once we have good predictive models, there is the potential for MRAs to expand their services, e.g., to step into the areas of creative agencies, by using AI. Not necessarily because they want to, but clients might ask MRAs to ‘optimise my concept’ prior to going into the field.”

“Augmented Intelligence (AI plus human) is the way to go, hence we have a set of AI champions to check how AI has reached its conclusions. We say to staff that ‘you have more time to do insights’, but what does that actually mean? Does it mean they spend a whole day on one set of recommendations, or that they do more data analysis? We still need to envision what the ‘researcher of the future’ would look like.”

JON SMETHERHAM, Regional Director, 2CV

Jon Smetherham from 2CV says that “While AI is a valuable tool in analysis, there are still gaps in capturing communication, like nonverbal cues and body language; and there are biases in Large Language Models, meaning you might miss marginalised groups. Especially somewhere as diverse as SE Asia, if you are looking across markets you might not get those cultural nuances.”

“We’ve also seen a lot of mis-selling of AI that raises clients’ expectations to unrealistic levels that will fail to be delivered, e.g., really fast turnaround, predictive analytics that could be unreliable, etc.”

“Synthetic data has considerable potential—panel data quality issues are well-known, while niche audiences are hard to find, so we’re actively considering what a blended approach might look like and what the use cases might be.”

“Often overlooked is the environmental impact of AI, apparently every 50 queries run through ChatGPT equates to approximately one Coke can’s worth of water in air conditioning to keep the data centre cool!”

“There will be more self-serve AI tools, and I’d expect insight teams to take more in-house, while the MRA will get leaner with fewer project managers, but lots more analysts.”

CONCLUSIONS

Arguably, the consumer insight sector is adopting AI faster (and more enthusiastically) than most other industries. This is driven by the wider range of benefits that AI brings to the sector, with solutions being offered by a plethora of specialist vendors.

As partly an “academic pursuit”, consumer insight is both able (and eager) to explore the benefits and limitations of AI to optimize its commercial applications.

But the limitations of AI are well recognised and highlight the importance of maintaining human oversight in decision-making processes where context, ethics, creativity, and emotional intelligence are vital.

Human judgment brings in experience and understanding of context, culture, nuance, and application to specific verticals; but with AI, also expect major restructuring of the supply side of the business over the next few years.

This article was first published in the Q1 2025 edition of Asia Research Media