On 22 August I hosted two ESOMAR AI Taskforce Community Circles, the first at 6am UK time and the second at 4pm UK time. These sessions focused on identifying the Strengths, Weaknesses, Opportunities, and Threats being generated by AI in the context of Research and Insights. In this post I highlight some of the key points from the discussion, distilling the contributions of nearly 200 people.
AI is generating experimentation and new thinking. Lots of people are already finding labour-saving options, for example coding open-ended comments. People are using it to speed up their desk research and hypothesis exploration. Tools such as translation and transcription are in everyday use. Others are creating draft screeners and questionnaires. People refer to using LLMs as research assistants or co-pilots. One interesting phrase was ‘enhanced search’.
The key message was that lots of people have started using AI and especially LLMs. With every technological revolution there are the early adopters, and there were plenty of them in the Community Circles.
Weaknesses include hype and overclaim, with attention being diverted towards AI and away from other good prospects. Bias and IP concerns are widespread. AI is too biased towards Western society in general and English in particular. Sometimes it is hard to assess the sources, and sometimes they have been fabricated (the so-called hallucinations).
The costs for LLMs can be high and there are concerns about data protection and intellectual property. One of the dangers is that its mistakes and fabrications are very plausible.
The conversation about opportunities focused on the next three years. One opportunity is that the weaknesses can be addressed. Others include faster creation of questionnaires and design plans; word-smithing social media posts and reports; using hallucinations to be creative; synthetic data and new ways of solving problems; AI-powered research designs and analysis; and using AI to help stakeholders define research better.
Further opportunities include the creation of smart DIY tools, allowing non-researchers to conduct research more safely; growth in the utilization of secondary data; and chatbots that really work.
Fraud is already a concern, but it could become a much bigger problem. AI might remove real humans from the research process. There are threats from misinformation, deep fakes, and deliberate disinformation. The business ecosystem could change, with clients bypassing suppliers and new companies (such as tech companies) entering the industry. Will the shift from primary research to secondary research eventually result in less secondary research, as less new primary data is generated?
Primary research could become cheaper with AI, but it could become more expensive as more and more steps are required to control fraud. Legislation by governments often ends up favouring the larger companies, helping promote oligopolies. There is concern about how new researchers will learn research skills. Bias could become worse and there could be deliberate misuse of AI systems.
The threat to the planet via climate change and the risk to humankind from AI going wrong should be noted.
Operationally, we need to utilize the strengths and ameliorate the weaknesses. However, the real challenge relates to the Opportunities and Threats.
Ideas suggested for initiatives that ESOMAR and others should undertake include: rules about transparency and adding watermarks, the creation of ethics committees, and increased skillsets in the domain of AI. We should declare when data collection or reporting is based on Generative AI.
Another suggestion was a set of AI questions, like the questions ESOMAR provides for people using panel companies.
Want to join us?
The quickest way to get connected is to visit the ESOMAR Taskforce page and sign up. You can attend Community Circles, listen to podcasts, access materials, and consider attending the ESOMAR AI Forum in Amsterdam on 18 October.
Watch the Community Circle recordings:
SWOT for AI Insights | AM Session