Possible AI Syllabus Statements
Instructors are required by the University to include a policy in all their syllabi regarding the use (and misuse) of generative AI (e.g., ChatGPT) in their courses. Rather than outright prohibiting the use of generative AI tools, faculty are asked to develop AI policies for their classes that allow students to learn how to use generative AI tools ethically and effectively in their discipline. A policy helps ensure that your expectations for appropriate interaction with generative AI tools are clear to students. Instructors are also asked to articulate the policy clearly for students and to have regular conversations in class about the use of generative AI, especially before major assignments.
Below are examples of statements you may adopt for your own policy. Feel free to modify them or create your own to suit the needs of your course. Please include a reference to the Library Guide on citing generative AI in your course policy and assignment instructions.
Examples of AI Permissive Policies
A fully-encouraging draft policy (from Harvard University)
This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT for all assignments and assessments. Any such use must be appropriately acknowledged and cited. It is each student’s responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course.
(Source)
"Use is freely permitted with no acknowledgement" Example Statement (from the University of Delaware)
Students are allowed to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or Dall-E 2) on assignments in this course; no special documentation or citation is required.
(Source)
Example Statement from EDUC 6191: Core Methods in Educational Data Mining (from the University of Pennsylvania)
Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time.
The university's policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.
(Source)
Example Statement from Advanced Quantitative Analyses (from Clemson University)
Artificial Intelligence Policy: Are all of our classes now AI classes?
A. I expect you to use AI (e.g., ChatGPT, Dall-e-2) in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill, and I will provide basic tutorials about how to leverage it for our work. However, be aware of the limits of these software systems.
B. AI is vulnerable to discrimination because it can inadvertently (or intentionally) perpetuate existing biases present in the data it is trained on. For example, if an AI system is trained on data that contains a bias against a certain group of people, the system may make decisions that are unfair or discriminatory towards that group.
C. There are several reasons why AI systems can perpetuate discrimination:
(i) Bias in the training data: If the training data contains biases, the AI system may learn and replicate those biases in its decision-making.
(ii) Lack of diversity in the training data: If the training data does not include a diverse range of examples, the AI system may not perform well on diverse inputs, which may lead to discrimination.
(iii) Lack of transparency: Some AI systems can be difficult to understand and interpret, making it challenging to detect and correct for biases.
(iv) Lack of accountability: Without proper oversight and accountability, it can be difficult to identify and address discrimination in AI systems.
(v) It is important to keep in mind that these biases can be unconscious, unintended and hard to detect, but they can have serious consequences if they are not addressed.
D. AI can be a valuable tool for augmenting human decision-making and critical thinking, but it is not a replacement.
E. AI is a tool, just like a pencil or a computer. However, unlike most tools, you need to acknowledge using it. Pay close attention to whatever information you use in your own work that is produced from AI, and explain how/what you used at the end of assignments. My recommendation is to screenshot and save everything (i.e., what prompts you used, what answers were produced, where, why, and how). This is new territory, but basic attribution rules still apply. Cite everything; otherwise you are likely violating academic integrity policies.
F. If you provide minimum effort prompts, you will get low quality results. You will need to refine your prompts to get better outcomes. This will take time and practice.
G. Don't trust anything the system says. Assume it is wrong, unless you already know the answer and can verify it with trusted sources. It works best for topics you deeply understand.
H. Use your best judgment to determine if/where/when to use these tools. They don't always make products easier and/or better.
I. Large language models and chatbots are "look back" machines. They don't advance knowledge (yet). ChatGPT-3 uses data from 2021 and earlier (a lot has changed since 2021).
Note: some of this was written with AI. OpenAI. (2021). GPT-3 API. Retrieved from https://beta.openai.com/docs/api-reference/introduction
(Source)
Example Statement from Specialization for Insects (from the Wharton School University of Pennsylvania)
I expect you to use AI (ChatGPT and image generation tools, at a minimum), in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill, and I provide tutorials in Canvas about how to use them. I am happy to meet and help with these tools during office hours or after class.
Be aware of the limits of ChatGPT:
If you provide minimum effort prompts, you will get low quality results. You will need to refine your prompts in order to get good outcomes. This will take work.
Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check in with another source. You will be responsible for any errors or omissions provided by the tool. It works best for topics you understand.
AI is a tool, but one that you need to acknowledge using. Please include a paragraph at the end of any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results. Failure to do so is in violation of the academic honesty policies.
Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstance.
(Source)
Examples of AI Mixed Policies
Allowing GenAI that you use in your discipline
These are Generative AI technologies that I use on a regular basis in my discipline: [example 1], [example 2], and [example 3]. I am giving you permission to use them in this class. If you come across something else that I'm unaware of, let me know so we can discuss its use in the class.
Mixed draft policy (from Harvard University)
Certain assignments in this course will permit or even encourage the use of generative artificial intelligence (GAI) tools such as ChatGPT. The default is that such use is disallowed unless otherwise stated. Any such use must be appropriately acknowledged and cited. It is each student’s responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course.
(Source)
Example Statement on the Use of AI-Assisted Programming Tools (from Georgetown University)
Large language models, such as ChatGPT (chat.openai.com), are rapidly changing the tools available to people writing code. Given their use out in the world, the view we will take in this class is that it does not make sense to ban the use of such tools in our problem sets or projects. For now, here is my guidance on how these can and should be used in our class: First and foremost, note that output from ChatGPT can often be confidently wrong! Run your code and check any output to make sure that it actually works. Such AI assistants will give you a good first guess, but these are really empowering for users who invest in being able to tell when the output is correct or not. If you use ChatGPT or similar resources, credit it at the top of your problem set as you would a programming partner. Where you use direct language or code from ChatGPT, please cite this as you would information taken from other sources more generally.
(Andrew Zeitlin, McCourt)
(Source)
Example Statement on the Use of ChatGPT (from Georgetown University)
Part of treating others with respect is giving appropriate credit for ideas and scholarly works (including code). If you consult with other students on an assignment, report this in the work that you turn in. If in your code you use a library or implementation from another source, indicate that as well (minimally by including a URL in a comment). Do not generate new content with prompt-based AI tools like ChatGPT or Copilot without permission from instructors unless specifically allowed by the assignment. (Using, for example, Grammarly as a language aid is OK.) Instructors reserve the right to request an oral explanation of answers.
(Nathan Schneider, Computer Science)
(Source)
Example Statement (from the University of Pennsylvania)
You may use AI programs (e.g., ChatGPT) to help generate ideas and brainstorm. However, you should note that the material generated by these programs may be inaccurate, incomplete, or otherwise problematic. Beware that such use may also stifle your own independent thinking and creativity.
You may not submit any work generated by an AI program as your own. If you include material generated by an AI program, it should be cited like any other reference material (with due consideration for the quality of the reference, which may be poor).
Any plagiarism or other form of cheating will be dealt with severely under relevant Penn policies.
(Source)
Example Statement from CORE-2096: Digital Literacies and Intercultural Learning (from the American University in Cairo)
Transparency: When/if you use Artificial Intelligence (AI) platforms in your assignments, please write a note to clarify where in your process you used AI and which platform(s) you used. We will discuss this more throughout the semester in class, and you are encouraged to reflect on this in your writing as well. Please note that what the AI writing tools generate is often inaccurate and you may have to exert effort to create something meaningful out of them. I also hope that when the assignment is about reflecting on your own opinion or experience, you will do so.
(Source)
Example Statement from Introduction to Critical Theory (from George Washington University)
Policy on the use of generative artificial intelligence tools:
Using an AI-content generator such as ChatGPT to complete an assignment without proper attribution violates academic integrity. By submitting assignments in this class, you pledge that they are your own work and that you attribute the use of any tools and sources.
Learning to use AI responsibly and ethically is an important skill in today’s society. Be aware of the limits of conversational, generative AI tools such as ChatGPT.
- Quality of your prompts: The quality of its output directly correlates to the quality of your input. Master “prompt engineering” by refining your prompts in order to get good outcomes.
- Fact-check all of the AI outputs. Assume the output is wrong unless you cross-check the claims with reliable sources. Current AI models will confidently reassert factual errors. You will be responsible for any errors or omissions.
- Full disclosure: Like any other tool, the use of AI should be acknowledged. At the end of your assignment, write a short paragraph explaining which AI tool you used and how you used it, if applicable. Include the prompts you used to get the results. Failure to do so is a violation of academic integrity policies. If you merely use the instructional AI embedded within Packback, no disclosure is needed; that is a pre-authorized tool.
- Fine-tune your research questions by using this tool: https://labs.packback.co/question/. Enter a draft research question, and the tool can help you find related, open-ended questions.
- Brainstorm and fine-tune your ideas; use AI to draft an outline to clarify your thoughts.
- Check grammar, rigor, and style, or get help finding the right expression.
(Source)
Example Statement from HI 371, Baseball as American History (from Bentley University)
A Few Words about Generative AI (e.g. ChatGPT)
Writing is integral to thinking. It is also hard. Natural language processing (NLP) applications like ChatGPT or Sudowrite are useful tools for helping us improve our writing and stimulate our thinking. However, they should never serve as a substitute for either. And, in this course, they cannot.
Think of the help you get from NLP apps as a much less sophisticated version of the assistance you can receive (for free!) from a Bentley Writing Center tutor. That person might legitimately ask you a question to jump-start your imagination, steer you away from the passive voice, or identify a poorly organized paragraph, but should never do the writing for you. A major difference here, of course, is that an NLP app is not a person. It’s a machine which is adept at recognizing patterns and reflecting those patterns back at us. It cannot think for itself. And it cannot think for you.
With that analogy in mind, you will need to adhere to the following guidelines in our class.
Appropriate use of AI when writing essays or discussion board entries:
- You are free to use spell check, grammar check, and synonym identification tools (e.g., Grammarly, and MS Word).
- You are free to use app recommendations when it comes to rephrasing sentences or reorganizing paragraphs you have drafted yourself.
- You are free to use app recommendations when it comes to tweaking outlines you have drafted yourself.
Inappropriate use of AI when writing essays or discussion board entries:
- You may not use entire sentences or paragraphs suggested by an app without providing quotation marks and a citation, just as you would for any other source. Citations should take this form: OpenAI, ChatGPT. Response to prompt: “Explain what is meant by the term ‘Triple Bottom Line’” (February 15, 2023, https://chat.openai.com/).
- You may not have an app write a draft (either rough or final) of an assignment for you.
Evidence of inappropriate AI use will be grounds for submission of an Academic Integrity report. Sanctions will range from a zero for the assignment to an F for the course.
I’m assuming we won’t have a problem in this regard but want to make sure that the expectations are clear so that we can spend the semester learning things together—and not worrying about the origins of your work.
Be aware that other classes may have different policies and that some may forbid AI use altogether.
(Source)
Example Statement from CS6750: Human-Computer Interaction; CS7637: Knowledge-Based AI (from the Georgia Institute of Technology)
We treat AI-based assistance, such as ChatGPT and Github Copilot, the same way we treat collaboration with other people: you are welcome to talk about your ideas and work with other people, both inside and outside the class, as well as with AI-based assistants. However, all work you submit must be your own. You should never include in your assignment anything that was not written directly by you without proper citation (including quotation marks and in-line citation for direct quotes). Including anything you did not write in your assignment without proper citation will be treated as an academic misconduct case.
If you are unsure where the line is between collaborating with AI and copying from AI, we recommend the following heuristics:
- Never hit “Copy” within your conversation with an AI assistant. You can copy your own work into your conversation, but do not copy anything from the conversation back into your assignment. Instead, use your interaction with the AI assistant as a learning experience, then let your assignment reflect your improved understanding.
- Do not have your assignment and the AI agent itself open on your device at the same time. Similar to above, use your conversation with the AI as a learning experience, then close the interaction down, open your assignment, and let your assignment reflect your revised knowledge. This heuristic includes avoiding using AI assistants that are directly integrated into your composition environment: just as you should not let a classmate write content or code directly into your submission, so also you should avoid using tools that directly add content to your submission.
Deviating from these heuristics does not automatically qualify as academic misconduct; however, following these heuristics essentially guarantees your collaboration will not cross the line into misconduct.
(Source)