A draft statement on AI at SUNY Geneseo for discussion by the community. Contributors: Paul Schacht, Paul Jackson, Laurie Fox, Alexis Clifton, Matt Pastizzo, Dan DeZarn, Amy Sheldon, Melanie Blood, David Warden.
SUNY Geneseo’s 2024–2025 “Ideas that Matter” theme, Artificial Intelligence (AI), prompted many discussions of the issues and opportunities AI presents. One result of these discussions was a task force in Academic Affairs charged with exploring the potential for a draft SUNY Geneseo statement on AI. Constituencies from across the campus were consulted in crafting the draft statement, which appears below.
In its current form, the statement is proposed for endorsement by the SUNY Geneseo College Senate.
Given the newness of AI technology, individuals differ in their knowledge and understanding of its impact and risks. As the technology evolves, it may be necessary to revise the statement below. Any proposed revisions will be the result of continued broad-based input from the campus community and will be brought back to the College Senate for its endorsement.
This document sets forth a framework of definitions, principles, and guidelines intended to help our campus meet the challenges and leverage the opportunities presented by Artificial Intelligence (AI), particularly Generative Artificial Intelligence (GenAI). (These terms are defined below under Definitions.)
This is not a policy document, although it may prove useful in developing any policies deemed necessary in the future to govern the use of AI at Geneseo. In the interest of staking out a broad area of shared practice, however, the document does seek to establish a number of norms.
The document’s description of opportunities and benefits afforded by AI should not be construed as cheerleading for AI in general. Its description of risks and harms presented by AI should not be construed as deprecation of AI in general. How to regard these opportunities, benefits, risks, and harms, in themselves and on balance, must remain a matter of individual judgment.
Although the present document is not intended as a policy, it is important to remember that as a State Entity, SUNY Geneseo is bound by New York State policies, including Policy No. NYS-P24-001, Acceptable Use of Artificial Intelligence Technologies.
We are also bound by any applicable provisions of the State University of New York Policies of the Board of Trustees, SUNY system policies, labor-management agreements to which the system is party, policies of the College, and state and federal law.
As a practical matter, SUNY Geneseo must adhere to policies adopted by our accreditor, the Middle States Commission on Higher Education (MSCHE). In addition, certain academic programs must follow the policies of profession-specific accrediting bodies, and all programs would do well to consult any statements or recommendations developed by their disciplinary organizations or associations.
In its Glossary, the New York State Office of Information Technology Services defines Artificial Intelligence as “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. The definition does not include basic calculations like Excel formulas, basic automation, or pre-recorded response systems.”
The same Glossary defines Generative AI (GenAI) as “AI that is capable of generating text, images, or other media, using generative models. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.”
The present document concerns itself primarily but not exclusively with GenAI, for it is GenAI that has so far dominated the public conversation about AI’s impact within and beyond academia.
While GenAI is not new, it suddenly seems to be everywhere: not only in free-standing tools with names such as ChatGPT, Claude, Gemini, Copilot, DALL-E, and Midjourney, but also, increasingly, as an affordance built into other tools, such as search engines, word processors, image editors, email and messaging interfaces, mobile devices and apps, coding environments, websites, cars, and household appliances.
Given the growing prevalence of GenAI, ignoring it is not an option. Rather, as a campus community we must commit collectively to engaging with this new technology critically, responsibly, and competently.
Engaging with GenAI does not require embracing it. It does not require accepting its proliferation as inevitable. It does require that we educate ourselves about the technology so that we understand how it works, what it can and cannot do, and how we might begin to navigate some of the issues surrounding it: issues of trust, transparency, integrity, reliability, and social responsibility, to name just a few. From this starting point, we can engage one another in discussion of GenAI in the classroom and beyond. Where faculty and staff are comfortable engaging students in actually using GenAI, doing so may create opportunities for collaborative exploration and deeper, richer conversation.
Under the leadership of the appropriate member of the President’s cabinet, each division or office should, over time, establish clear expectations regarding acceptable use of GenAI in accomplishing its work, including when and how GenAI may appropriately be used (if it may be used at all). Consistent with the New York State policy on Acceptable Use of Artificial Intelligence Technologies, referenced earlier, GenAI tools should be employed only in ways that involve human oversight, follow principles of fairness and equity, and provide transparency by disclosing when and how such tools have been employed.

The same criteria should govern the use of GenAI tools in teaching and learning. In the classroom, faculty should establish clear expectations regarding students’ use of GenAI to complete assignments, and they should state these expectations on their syllabi. A detailed syllabus statement will provide guidance that distinguishes among different tools and uses, such as idea-generation, problem-solving, outlining, summarizing, knowledge-seeking, spell-checking, and grammar-correction. For their part, students are entitled to know when and how their faculty are using GenAI tools for such purposes as creating assignments and assessments, grading submitted work, and generating communications.
Engaging with GenAI responsibly means, among other things, recognizing that certain types of data should not be provided to GenAI tools as inputs. These include personally identifiable, confidential, and sensitive information.
Engaging with GenAI critically means, among other things, recognizing that AI tools replicate the biases and misinformation in their training data, cannot distinguish fact from falsehood, frequently invent their own facts (a phenomenon sometimes called “hallucination”), and cannot even perform basic calculations or data analysis reliably. It means understanding that AI is having and will likely continue to have widespread and significant social, economic, political, and environmental effects: threatening job security, destabilizing conventional understandings around intellectual property, polluting civic discourse, and contributing to climate change—but also leveling the playing field for individuals with certain types of disabilities, opening new avenues for creativity, and providing new tools for advancing individual and public health.
Engaging with GenAI competently means, among other things, understanding when and how to best use AI as an effective aid to brainstorming and creativity, a useful tool for condensing and organizing information, or a powerful means of surfacing patterns in large quantities of data. It means, as well, understanding that AI performs these tasks through processes of statistical analysis and inference, without itself understanding what it is doing. To most people, the word “intelligence” implies conscious awareness and the ability to reflect on one’s own thoughts and actions. At times, GenAI tools may appear to exhibit these properties, but in fact they are nothing more than sophisticated machines for predicting the next plausible word, pixel, or other bit of data.
As we engage with GenAI at Geneseo, individually and as a community, we should ask ourselves continuously how and when we can use this technology in ways that uphold and advance our values. Below are some questions to consider under our values of learning, creativity, belonging, civic engagement, and sustainability. Many more questions could be raised in connection with each value, and some questions could be repeated under more than one value. Those below are simply intended to help get substantive conversations going.
New York State policy on Acceptable Use of Artificial Intelligence Technologies carves out narrow exceptions for inputting “personally identifiable, confidential, or sensitive information” to AI systems consistent with “applicable laws, rules, regulations, notices, and policies” (p. 3), but Geneseo users should not determine for themselves whether a particular situation meets the criteria for an exception. Rather, users should not input information of this kind into any AI system or tool without prior authorization from Geneseo’s Department of Computing and Information Technology. ↩