Ethics of AI in QDA
Ethics are at the forefront of all research. The rise of Gen-AI poses new ethical questions to consider in the practice of QDA.
Navigating the ethics of AI use in QDA
When considering the use of AI to facilitate qualitative analysis, there are a number of ethical considerations to take into account.
These pages are under construction. Get in touch with suggestions of more materials to add.
The ways Large Language Models are developed raise ethical implications for their use.
Use of data without consent.
- Gen-AI models are trained on materials that humans have produced - artwork, writing, research, music etc. There is a movement to ensure those creators are treated fairly. See FairlyTrained.org which certifies Gen-AI models that are trained fairly.
- Penguin Random House (PRH) intends to amend its copyright wording to state "No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems". See the post by Matilda Battersby, October 18th 2024, on The Bookseller Blog.
Have you considered the environmental impact not only of developing LLMs but also of using them?
- Kolbert, E. (2024). The obscene energy demands of AI. The New Yorker.
See also the page on Guidance on AI for QDA.
Franzke, A.S. (2021). An exploratory qualitative analysis of AI ethics guidelines. Journal of Information, Communication and Ethics in Society, 20(4), pp. 401-423.
When using AI tools for qualitative analysis it is of utmost importance that you are aware of the data privacy issues - whether you are uploading audio/video for automated transcription and/or using Generative-AI tools for analytic tasks.
Check out carefully the full details of how the tools you are intending to use handle data. Consider the following:
- which AI model(s) are being used? Some CAQDAS-packages use more than one, or give you a choice; others do not.
- what assurances are provided that research data uploaded to the AI model(s) will not be used to train AI models?
- are other third-party vendors used to process research data? If so, which ones, and can you trust them?
- do all the companies involved in processing research data comply with relevant national or regional information and data privacy regulations (e.g. GDPR, CCPA, HIPAA)?
- how is research data uploaded to the AI model, how long is it retained, and by which companies?
- how detailed is the information provided by the tools you are using about data security and privacy?
Below we list links to data security information for several CAQDAS-packages that use Generative-AI models.
- AILYZE Data Security information
- ATLAS.ti AI Data Security and Privacy
- CoLoop Security and Ethics information and information about what happens to data when uploaded
- MAXQDA AI data protection information
- NVivo AI Assistant Terms of Use
- QInsights Privacy Policy
- Quirkos Privacy Policy
- Reveal Privacy Policy
- Transana Data Privacy and Security information
We are used to brokering informed consent from participants contributing to our qualitative studies - but have you considered what participants may feel about their data being analysed with the use of AI?
If you intend to use AI for any aspect of qualitative data analysis, you must be upfront about it with participants and seek their consent to use their data in this way.
The question of bias is important when considering the development and use of LLMs.
- Ashwin, J., Chhabra, A., & Rao, V. (2023). Using Large Language Models for Qualitative Analysis can Introduce Serious Bias. World Bank Policy Research Working Paper 10597.
Have you considered the question of authorship and getting your research work published when using AI for any aspect of qualitative analysis?
Is it really YOUR work if you have used AI as part of the analysis process? The answer will depend in part on how, when, and to what extent you have used particular tools. You should expect to be questioned about this, and be able to justify your answers to those questions.
Transparency is paramount, so document your use of AI in detail so that you can communicate it clearly.
Also be aware that you may not be able to publish your research where you intend if you have used AI to facilitate the process.
Make sure you know the guidelines on AI use from the publishers where you want your work to appear. Below are a few to get you started thinking about the issues involved in publishing if you have used AI.
- Sage Publications Guidelines for Authors.
- Taylor & Francis AI Policy.
- Committee on Publication Ethics (COPE) position statement on Authorship and AI tools.
- Bristol University Press (home of Policy Press) Policy on authorship and AI.