With the continued development of artificial intelligence tools like Copilot and ChatGPT, students, faculty, and administrators are considering how to use and evaluate AI-generated text in academic writing.
This page of resources is intended for faculty, students, staff, and administrators. It is a work in progress and is updated regularly (last update: August 20, 2025). While UMB published an AI Governance Policy in May 2025, specific programs and schools may have additional AI usage policies. We also recommend that student writers check with their course instructors before using an AI tool for any coursework (e.g., writing assignments, discussion posts, research).
In July 2025, the National Institutes of Health (NIH) also published guidance, entitled Supporting Fairness and Originality in NIH Research Applications, “on the appropriate usage of artificial intelligence (AI) to maintain the fairness and originality of NIH’s research application process.”
AI Resources at UMB
- Center for Information Technology Services (CITS): Microsoft Copilot. UMB students, staff, and faculty have access to Microsoft’s GenAI tool Copilot. When logged in with their UMB credentials, users “are in a protected environment” (Copilot Chat, para. 1).
- Health Sciences and Human Services Library (HSHL): AI and Information Literacy: Overview. This multipage guide provides an overview of strategies to use GenAI tools like Copilot and ChatGPT in a higher education environment.
- Faculty Center for Teaching and Learning (FCTL): Artificial Intelligence (AI). This site provides sample prompts and strategies instructors can use with tools like ChatGPT or Copilot to spark ideas, streamline content generation, and enhance teaching, although all AI-generated responses should be reviewed before use.
Citing GenAI
- New York Medical College: Citing AI. This site provides an example of how to cite a GenAI tool like Copilot or ChatGPT in APA style and according to the style guide of the American Medical Association (AMA).
- American Psychological Association (APA): How to Cite ChatGPT. This post on the APA’s official blog provides details on how to cite ChatGPT and other GenAI tools (including Copilot).
Authorship Guidelines
The following links are a selection of authorship guidelines by major academic publishers:
- Springer Nature
- Sage Journals
- Elsevier
- The Journal of Biotechnology, published by Elsevier, provides some helpful sample language on how to declare the use of generative AI in scientific writing (select “Declaration of generative AI in scientific writing” on the left-hand menu).
(Critical) AI Literacy
- Anna Mills: Critical AI Literacy and Critical Assessment. This free module introduces students to the "nature, risks, and ethics of language models like ChatGPT" and Copilot.
- Bill Tomlinson, Andrew W. Torrance, and Rebecca W. Black (2023): ChatGPT and Works Scholarly: Best Practices and Legal Pitfalls in Writing with AI. This article, published in SMU Law Review Forum, introduces a thoughtful framework on fair use, plagiarism, and legal considerations when writing with AI (open access via arXiv, an open-access archive for research papers hosted and supported by Cornell University).
- Charlotte Huff (2024): The Promise and Perils of Using AI for Research and Writing. This article, published on the American Psychological Association's (APA) website, highlights how AI tools can streamline routine or early-stage tasks in psychology research and writing, such as literature reviews and initial drafting, while encouraging critical oversight and careful evaluation of any AI-generated content.
Ethical Challenges around AI
- Salvador Santino F. Regilme (2023): Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South. Published in the SAIS Review of International Affairs, this article makes the case for responsible AI governance that emphasizes workers' rights, ecological preservation, and balanced global progress.
- Jeremy Forest Price & Shuchi Grover (2025): Generative AI in STEM teaching: Opportunities and Tradeoffs. This report, commissioned by the Community for Advancing Discovery Research in Education (CADRE) and supported by grant funding via the National Science Foundation (NSF), explores how GenAI can support key aspects of STEM instruction, including equitable curriculum design, lesson planning, assessment, and teacher development, while thoughtfully addressing the ethical, instructional, and inclusion challenges such tools may pose.
- Reporters Sans Frontières (Reporters Without Borders, RSF) (2023): Paris Charter on AI and Journalism. This charter, developed by Reporters Without Borders, outlines ten ethical principles for using AI in media, emphasizing transparent use, human oversight, content authenticity, and the preservation of journalistic integrity in the AI era.
- Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet (2025): This statement was developed after the 2025 Artificial Intelligence (AI) Action Summit held in Paris, France. It expresses a commitment to develop AI that is open, inclusive, ethical, sustainable, and grounded in human rights for the benefit of both people and the planet. It was signed by 58 countries (the United Kingdom and the United States were not among its signatories).
Teaching (Writing) With AI
- The Modern Language Association (MLA) and the Conference on College Composition and Communication (CCCC) have created the MLA-CCCC Joint Task Force on Writing and AI. This task force has issued two working papers that provide guidance for instructors who either teach writing or use writing as a form of assessment in their classrooms:
- MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations (2023). This paper presents a principled overview of the risks and benefits of generative AI in writing and literature programs and offers recommendation-driven guidance for educators, administrators, and policymakers to develop ethical, mission-aligned policies and foster critical AI literacy.
- Generative AI and Policy Development: Guidance from the MLA-CCCC Task Force (2024). This document offers scenario-driven guidance for developing thoughtful, pedagogically aligned generative AI policies, emphasizing how instructors can set clear parameters to ensure AI enhances, not replaces, human writing processes.
- Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes (2024): Refusing GenAI in Writing Studies: A Quickstart Guide. This guide articulates "refusal" as a principled and deliberate response within writing studies and for anyone who uses writing in their teaching, encouraging educators and students to choose thoughtfully when and how not to use AI rather than defaulting to prohibitive bans.
- Neil Selwyn, Marita Ljungqvist, and Anders Sonesson (2025): When the Prompting Stops: Exploring Teachers’ Work around the Educational Frailties of Generative AI Tools. This article investigates how teachers navigate the substantial effort required to review and reshape AI-generated content, revealing the gaps between AI’s promise and the complex realities of its classroom use.
- Association for Writing Across the Curriculum (2023): Statement on Artificial Intelligence Writing Tools in Writing Across the Curriculum Settings. This statement emphasizes that writing is a mode of learning and that overreliance on AI generators can limit student development and enculturation into disciplinary discourse.
External Link Disclaimer
The links above are provided as a convenience and for informational purposes only; they do not constitute an endorsement or approval by UMB of any of the products, services, or opinions of the corporations or organizations that created the content on these sites. UMB bears no responsibility for the accuracy, legality, or content of any external site or of subsequent links. Contact the external site for answers to questions regarding its content.