EPFL recognizes the huge potential of generative Artificial Intelligence tools capable of producing synthetic media such as text or images [1][2]. Like any tool, they have their advantages, but we need to be aware of their limitations and of the major risks they present. We encourage the use of generative AI across our range of activities in an informed, responsible and transparent manner. To make the most of this potential, some recommendations are in order.
A key principle: always remember to remain critical when using these tools.
- Do not use such tools to learn new things or to search for information. They often produce plausible nonsense and can lead you to believe that their output is true or real when it isn't.
- Do not use them to generate content whose veracity or form you are unable to check: for example, text in a foreign language you do not master.
- Do not use them when your data is sensitive: never input confidential, private or personal information about yourself or others into these tools. Always reflect first on the nature of the information you are sharing, because once you enter it into one of these tools, it is no longer confidential.
Conversely, these tools can serve you well in the following situations:
- When you want to be surprised: for example, to generate ideas.
- When you can check the accuracy of the result generated by the AI tool: for example, only generate code that you can run and verify yourself (see the sketch after this list).
- When you want help with the form of your production, rather than with its contents: for example, to improve the wording of your text, to summarise a passage that is too long or to overcome writer’s block.
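To make the "run and check it yourself" point concrete, here is a minimal sketch of what such a check can look like in practice. The median function and its test cases below are hypothetical illustrations, not code from EPFL guidance; the function stands in for output you might receive from a generative AI tool.

```python
# Minimal sketch: verifying hypothetical AI-generated code before trusting it.
# median() stands in for code produced by a generative AI tool; the checks
# below are ones you write and run yourself.

def median(values):
    """Hypothetical AI-generated function: return the median of a list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Exercise the generated code on cases whose answers you already know,
# including edge cases the tool may have overlooked.
assert median([3, 1, 2]) == 2
assert median([1, 2, 3, 4]) == 2.5
assert median([5]) == 5
print("All checks passed.")
```

If such checks fail, or if you cannot write them at all, treat the generated result as unverified and fall back on the recommendations above.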
Whatever the use, keep the following risks in mind:
- Plausible nonsense [3]: we generally tend to trust machines more than ourselves (automation bias [4]), which makes us all the more vulnerable to the apparent plausibility of the content generated by these tools, even when it is completely false or incorrect.
- Environmental impact: these tools are among the least energy- and water-efficient ways to accomplish many tasks, so avoid using them when another tool will do the same job with less impact (for example, searching the web, or even watching videos).
- Privacy: by using generative AI tools, you share your data with private companies and lose control over it.
- Bias: these tools suffer from various biases, whether gender bias (e.g. in machine translation [5] or image generation [6]) or bias based on ethnic origin or religious orientation (e.g. in text generation [7]). Evaluate the results carefully and think critically.
Be transparent: reference the use of generative AI tools in your work, and remember that you will need to be able to justify decisions taken on the basis of AI-generated results.
Although a full list of potential uses would be extremely long, a few examples serve as illustrations:
- direct students more effectively to documents relevant to their questions
- assist pedagogical advisors in their analysis of course evaluation comments
- assist teachers in the creation of teaching material
- summarize lengthy legal arguments
- write program code for research applications or project descriptions
An email previously sent to EPFL students and teachers laid down guidelines for the use of generative AI by students during their studies. The Federal Administration has prepared a helpful fact sheet regarding the use of generative AI tools in its services [8], as well as a set of guidelines on the broader underlying issues [9]. Our sister institution in Zürich also has an FAQ page relating more specifically to ChatGPT [10].
[1] Barraud, E., Petersen, T., Overney, J., Aubort, S., & Brouet, A.-M. (2023). Intelligence artificielle: amie ou concurrente? [Artificial intelligence: friend or competitor?]. Dimensions, 8. EPFL. Link
[2] Rochel, J. (2023). ChatGPT: 6 questions fondamentales [ChatGPT: 6 fundamental questions]. Link
[3] Hardebolle, C., & Ramachandran, V. (to appear). SEFI Editorial for the Special Interest Group on Ethics: https://go.epfl.ch/plausiblenonsense
[4] Suresh, H., Lao, N., & Liccardi, I. (2020, July). Misplaced trust: Measuring the interference of machine learning in human decision-making. In Proceedings of the 12th ACM Conference on Web Science (pp. 315-324). Link
[5] Schiebinger, L., Klinge, I., Sánchez de Madariaga, I., Paik, H. Y., Schraudner, M., and Stefanick, M. (Eds.) (2011-2021). Gendered Innovations in Science, Health & Medicine, Engineering and Environment. Link
[6] Nicoletti, L., & Bass, D. (2023). Humans are biased. Generative AI is even worse: Text-to-image models amplify stereotypes about race and gender — here's why that matters. Bloomberg. Link
[7] Abid, A., Farooqi, M., & Zou, J. (2021). Large language models associate Muslims with violence. Nature Machine Intelligence, 3(6), 461-463. Link
[8] Instruction sheets for the use of AI within the Federal Administration: https://cnai.swiss/en/products-other-services-instruction-sheets/
[9] Guidelines on Artificial Intelligence for the Confederation. General frame of reference on the use of artificial intelligence within the Federal Administration. https://www.sbfi.admin.ch/dam/sbfi/en/dokumente/2021/05/leitlinien-ki.pdf.download.pdf/leitlinien-ki_e.pdf
[10] https://ethz.ch/en/the-eth-zurich/education/ai-in-education/chatgpt.html