Artificial Intelligence (AI) Task Force

Ethical Considerations with AI

The Brandeis Center for Teaching and Learning has spent the past several months developing ethics frameworks for the use of AI. Below you will find links to the group's work, along with other resources on topics such as spotting algorithmic bias and mitigating the risk of working with bad data.

Brandeis Resources

  • Ethical and Privacy Concerns - Overview by the Brandeis Center for Teaching and Learning
    This website covers equity, ethics, privacy, and accessibility concerns in using AI at our institution.
  • Brandeis University Ethics Framework for Students - Ethics AI Framework
    This website provides a visual set of general guidelines for using generative AI in coursework, assignments, and university research. Although written for students, these guidelines apply to faculty and staff as well.

Ethical Considerations and Algorithmic Bias

The following articles were reviewed and hand-picked by faculty members of the AI Task Force to help our community better understand potential shortcomings of AI platforms.

  • Refusing GenAI in writing studies: a quickstart guide: Wordpress
    Sano-Franchini, McIntyre, and Fernandes (2024) offer a guide to generative AI “refusal” based on disciplinary goals and informed “principled choice.” Addressing ethical concerns in a list of 10 premises, the writers discuss issues including language and power, linguistic homogenization and white supremacy, extractive technology and labor exploitation, intellectual property and citation justice, and environmental impact. Geared specifically for the discipline of writing studies, the guide provides a useful overview of ethical issues to consider in higher education more broadly.
  • Understand Algorithmic Biases - When AI Gets It Wrong
    Generative AI has the potential to transform higher education—but it’s not without its pitfalls. These technology tools can generate content that’s skewed or misleading (Generative AI Working Group, n.d.). They’ve been shown to produce images and text that perpetuate biases related to gender, race (Nicoletti & Bass, 2023), political affiliation (Heikkilä, 2023), and more. As generative AI becomes further ingrained into higher education, it’s important to be intentional about how we navigate its complexities.
  • Algorithmic bias and fairness: A critical challenge for AI - Just Think AI
    Just Think AI (2024, May 21)
  • Bias in AI algorithms and recommendations for mitigation - PLOS Digital Health
    Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X. C., Moukheiber, M., Khanna, A. K., ... & Mathur, P. (2023)

Environmental Impacts of Generative AI

  • The environmental impacts of AI—Primer: Hugging Face (an AI and machine learning community)
    Luccioni et al. (2024) offer a primer on the environmental impacts of generative AI. The article discusses energy use, water consumption, supply chain minerals, and greenhouse gas emissions. The researchers also survey different national initiatives and propose a set of technical, behavioral, organizational, and policy interventions.
  • The uneven distribution of AI’s environmental impacts: Harvard Business Review
    Ren and Wierman (2024) discuss generative AI in terms of environmental sustainability initiatives and the problem of “environmental inequity.” The article argues that the negative environmental impacts of AI are not distributed evenly, with more vulnerable communities often suffering worse effects.
  • Explained: generative AI’s environmental impact: MIT News
    Adam Zewe’s (2025) MIT News article offers a reader-friendly overview of the environmental impacts of generative AI. The article covers data centers, energy consumption (including electricity and fossil fuels), and water usage. It also compares the energy costs of generative AI and standard Internet queries.
  • Making an image with generative AI: MIT Technology Review
    Melissa Heikkilä’s (2023) short news article notes that different AI tasks consume energy at different rates. Image generation, in particular, requires significantly more energy than standard text generation.
  • Reconciling the contrasting narratives on the environmental impact of large language models: Scientific Reports
    Ren et al.’s (2024) open-access article considers contrasting narratives that view LLMs as either a “sustainability problem” or a “sustainability solution.” Comparing the energy efficiency of LLMs vs. human labor, the study finds that LLMs offer potential environmental benefits through the automation of some tasks. The researchers consider broader societal impacts, including job displacement and the spreading of misinformation and bias, and the article concludes that careful planning will be necessary to balance labor, environmental, and other ethical concerns.