AI Policy
Introduction
This is a living document that reflects my current approach to AI usage in various aspects of my professional life. AI usage is defined as any interaction with generative artificial intelligence.
Section 1: For My Research
Throughout my research process, I treat AI as a competent research assistant. One of my primary uses is building, expanding, and checking analytic code. In most cases, I independently determine how I want to explore the data, and then I turn to AI tools to speed up and structure the process of writing the necessary code. Sometimes I ask these tools to suggest additional ideas, such as new graphics or complementary analyses. I critically evaluate all AI-driven suggestions, and if I keep any of them, I assume final responsibility for their inclusion.
Each stage of my analytic process remains driven by good data stewardship and a commitment to transparent research. AI has become an increasingly useful component of these processes, but I ensure that its usage does not contradict my underlying principles. For example, I continue to prioritize thoughtful data preparation and cleaning informed by my prior training. I have been able to enhance this process with AI’s ability to produce large amounts of code based on written guidance and existing examples. If AI writes code, I verify each line myself to ensure I can understand and validate the output. I have further bolstered the data cleaning and preparation stage through AI’s ability to record decisions and assumptions made during the process.
In the analysis stage, I rely on AI most heavily to build graphics that clearly communicate my findings. When creating the underlying analytical models, I predominantly perform this work without AI, drawing on past experience and existing research. Sometimes I will share ideas with AI and ask it questions, treating this as another way to inform my decisions. I validate all claims by reviewing the references AI provides, and I understand that I hold full responsibility for conclusions shaped by these discussions. More often than informing the modeling process, I use AI to build succinct graphics that communicate the results, and I check all illustrations to ensure they align with my raw output.
Beyond data analysis, I use AI to support my ideation and writing processes. In most cases, I first formulate the content independently from AI, with only minor support such as sentence rewriting. After I have a substantial first draft, I use AI most heavily to provide a first review of the work. I draw on discussions with Sebastian R. Jilke, in addition to open-source content provided by Scott Cunningham, to optimize this feedback (Cunningham, 2026). Cunningham’s repository has informed the prompts that I use during this stage while also providing overarching structure and guiding steps for consistent AI performance across time. I highly recommend starting with his work for anyone who is interested in “AI collaboration.”
I use AI for a few other assorted research tasks, and some of these areas, admittedly, require further personal reflection. When I have written enough original content myself, I will co-create with AI to finalize certain deliverables, including conference abstracts and research presentations. Beyond these, I have also used AI to broadly summarize existing literature or to curate a research report that serves as a starting point for a project. These last two uses give me slight pause, even though I do not include any of this content in my final products. My hesitancy is driven by two main concerns. First, allowing AI to inform my initial thinking, even when this information (in the form of a literature review or report) is taken as one of many sources, means that any biases embedded in such tools can be present from the very first steps, and such biases are harder to verify than lines of analytic code. Second, I am unclear on the implications of feeding resources into AI. I often ask AI to attend to the 4-5 main articles driving my research during a project collaboration, and I am not sure whether there are important ethical or logistical considerations when using AI tools to pick up on themes across multiple papers. My ongoing task is to continue to wrestle with these questions.
Cunningham, S. (2026). MixtapeTools. GitHub. https://github.com/scunning1975/MixtapeTools/tree/main
Section 2: For My Teaching
I do not substantially use AI to inform my teaching material. Course content is derived from a number of reputable resources that I curate, including textbooks, news articles, government communications, and open-source outlets. The structure and substance of my lectures are not informed by AI unless otherwise stated, and I cite AI usage just as I cite other sources. In the future, I may ask AI to reformat or bolster my slides, but I currently see no need. I assume responsibility for the final content delivered in my teaching.
As an instructor, I accept the responsibility of creating meaningful lectures that hold relevance for my students. This includes thoughtful material and exercises that support the skills and knowledge needed in our rapidly changing world. Concrete steps to incorporate AI-inclusive lessons in my classroom include mixed-media projects that focus on bridging the gap between information acquisition and communication. Additionally, I prioritize components of the classroom experience that cannot be fully replicated through generative AI, including peer interactions and instructor-provided feedback.
Section 3: For Revisions and Grading
I do not substantially use AI for any tasks that require my feedback. Since AI tools are widely available, I do not see value in combining my personal thoughts with generative input; I leave the generative input to be acquired by the original author. Revisions and grading reflect my personal responses to the submitted work without the additional influence of AI. The only two ways that I use AI during these tasks are to (1) suggest ways for me to communicate my fully formed idea in a clear, context-appropriate manner and (2) provide resources that can further inform my response.
Section 4: For Student Expectations
Classrooms remain a stronghold for developing critical thinking and acquiring knowledge. This unique context necessitates careful thought when incorporating generative AI. I draw inspiration from the AI Assessment Scale to structure my expectations of students. In most cases, I do not police or ban the use of these tools; rather, I largely promote a “level 3” approach. At this level, AI may be used to help complete the task, including idea generation, drafting, feedback, and refinement. Students should critically evaluate and modify AI-suggested outputs, demonstrating their understanding. Students are required to cite their AI usage following APA style guidelines (Perkins et al., 2024). Students are also encouraged to make initial attempts at tasks independently from AI, bolstering their ability to understand and navigate situations without assistance.
Perkins, Furze, Roe, & MacVaugh. (2024). The AI Assessment Scale.
