Ethical and Privacy Concerns

Privacy and AI-generative tools

Generative AI tools, such as large language models (LLMs), collect and process large amounts of personal data, raising concerns about student and user privacy and data security. For example, OpenAI’s privacy policy for ChatGPT allows the company to access any information entered into the tool. (See OpenAI’s FAQ on how it may use information shared with ChatGPT.)

There is a risk that this data may be used for unwanted or malicious purposes or that it may be compromised in a data breach.

If you choose to use an LLM in your course, use only Google Gemini or Microsoft Copilot, accessed through your Brandeis account. Brandeis has entered into licensing agreements with Google and Microsoft that ensure user data is protected and not used for further model training.

Additional equity, ethical, and accessibility concerns

  • Many AI tools are currently free, but this may change. If you incorporate these tools into your assignments, choose options that all students can access.

  • Consider avoiding assignments that would disproportionately benefit students who can pay for premium AI tools.

  • Educate students on the limitations and potential biases of AI-generated content, and encourage them to use it responsibly. AI tools are only as unbiased as the data they are trained on: if the training data contains bias, the generated results will too. In this way, AI tools can perpetuate the biases present in their training sets, discriminating against certain groups of people and reinforcing pre-existing inequalities and stereotypes.

  • Just as AI tools can perpetuate bias, they can also perpetuate misinformation. Depending on the data they were trained on, these tools can generate content that is inaccurate, misleading, or harmful.

  • Text-generating AI tools produce output based on the massive data sets of text on which they were trained. As a result, it can be difficult to determine who is responsible for the content created, who its author is, and whether there is any accountability for the results.

  • Given the wide variety of current and in-development AI tools, note that not every tool has been designed to be accessible to all users.