Session 2: Ethics, Trust, and Transparency in GenAI Use

Overview

This session centers on the ethical challenges of using generative AI in classroom/learning settings. We will discuss issues of trust between students and faculty, responsible use of AI tools, and the ethics of disclosure when syllabi lack clear AI policies. Participants will consider questions such as: When is AI use appropriate? How should expectations be communicated? What does academic integrity mean in an age of AI assistance? Through guided discussion, we will explore how classrooms can balance innovation with fairness, accountability, and mutual trust.

Meeting Date and Location

Thursday, March 5th in RNS 356 from 11:30 to 12:30 CST

Note: This does not conflict with either SOAR discussions or faculty meetings

Reading Materials

  • Mollick’s discussion of AI as a creative tool (the excerpt quoted below)
  • Tyler Kingkade, “To avoid accusations of AI cheating, college students are turning to AI,” NBC News

Discussion Questions

Trust, Assessment, and Academic Integrity

There is no reliable way to detect whether a piece of text is AI-generated. Research has shown that AI detectors produce false positives at significant rates, and that they disproportionately flag writing by non-native English speakers.

  1. How do we maintain trust between faculty and students in a world where, unless work is produced in class, there is no accurate way to determine whether it is human-created?

  2. For classes where knowledge cannot be meaningfully assessed through timed exams, what are some broad-stroke ways to ensure core learning objectives are being met?

    • Feel free to be general or discipline-specific.

  3. Every school and instructor will need to define what counts as acceptable AI use.

    • Choose a specific discipline and describe:
      • What acceptable AI use might look like
      • What unacceptable AI use would be

AI, Creativity, and Meaningful Work

In his discussion of AI as a creative tool, Mollick argues:

“A lot of work is time-consuming by design. In a world in which the AI gives an instant, pretty good, near-universally accessible shortcut, we’ll soon face a crisis of meaning in creative work of all kinds. This is, in part, because we expect creative work to take careful thought and revision, but also that time often operates as a stand-in for work” (120).

  1. In general, if time is no longer a reliable proxy for effort, what might be some of the markers we should use to assess:
    • Quality?
    • Sincerity?
    • Others?
  2. How does this translate to an academic context?

The Rise of Humanizer AIs

In a recent NBC News article titled “To avoid accusations of AI cheating, college students are turning to AI”, Tyler Kingkade explores how some college students who have been falsely accused of using AI to cheat are turning to so-called “humanizer AIs” to “dumb down” their writing. As one of them put it,

“I have to do whatever I can to just show I actually write my homework myself.”

  1. If a student writes their own work and then runs it through a “humanizer” to avoid being falsely flagged, is that cheating? What, exactly, would make it unethical?

  2. The article suggests that students now feel responsible for proving that their work is human-generated rather than institutions proving that it is not. What does this shift in burden of proof mean for trust between students and faculty?

One of the students featured in the article, who had resorted to checking their own work with AI detectors, said:

“But it does feel like my writing isn’t giving insight into anything — I’m writing just so that I don’t flag those AI detectors.”

  1. If students routinely use tools to mask their writing style, what happens to the idea of a “student voice”?

General Questions

  1. If we accept the premise that detection is unreliable and that students will adapt strategically, what kinds of assignments would make “humanizer” tools unnecessary? For classes in your discipline, what would an assessment look like that cannot be meaningfully “humanized” by AI?

  2. Many of the tools mentioned in the article are paid services. How might unequal access to “humanizers” and detection-avoidance strategies deepen existing educational inequalities? Should universities intervene in this market, and if so, how?

Post-Session Reflection

As I reflected on our conversation in this second session, I realised that the discussion in the room was easiest to summarise around the three themes below.

1. Transaction Costs and Incentives: This Isn’t New, So Why Does It Feel Different?

One thing we can all agree on: cheating and plagiarism are not new phenomena. Students have always been able to pay others to write their papers, copy work, or find ways around assessments. And yet there was broad agreement that something has genuinely shifted. To put it in economic terms (I am also an econ major, after all), the transaction costs of cheating have fallen dramatically. There is far less friction: what once required effort, money, or risk now takes seconds. When incentives to cut corners already exist and the barriers to doing so collapse, the behaviour follows. The boundaries around what even constitutes cheating have also blurred, which makes it harder to build shared norms around honesty when the lines themselves are contested.

2. Current Strategies Are Losing Strategies in the Long Run

An interesting proposition was put to the room: every tool or approach currently used to detect or deter AI-assisted work is, structurally, a losing bet. AI detection tools carry high false positive rates and cause disproportionate harm to non-native English speakers. The downstream consequences are immense: students falsely accused of using AI have begun running their own genuine work through “humanizer” tools just to avoid being flagged. The idea of a student voice is being eroded from both sides: by students who use AI to generate their work, and by the humanizer tools that other students now use to keep their authentic work from being flagged. Meanwhile, the technology will only improve, including its ability to replicate an individual student’s voice, which means even the conversational and relational heuristics that experienced educators rely on will erode over time. There is a real risk that the majority of faculty non-instructional time gets consumed by policing suspected misuse. That is not what anyone came to teaching to do, and it makes the environment worse for everyone. Formal institutional policies banning the use of AI detectors were raised as one concrete step worth considering.

3. Pedagogical Shifts: Documentation as the Norm

If the baseline assumption is that students want to learn, then the right question is not “are they cheating?” but “why are they turning to AI in the first place?” The answers likely include: not seeing the purpose of an assignment, feeling under-supported, managing stress, or simply operating with a different sense of which tools are legitimate. Rooting out dishonesty is a losing frame; our focus should be on realigning incentives. This points toward a broader rethink of assessment. If AI can easily complete the tasks being set, it may be because those tasks were never measuring much to begin with. Assessments built around defining or describing have a low ceiling; assessments that require students to document their process, reasoning, and reflection at each stage are inherently harder to outsource, and more educationally valuable regardless. Some assessments may not lend themselves to “reflection” in the traditional sense, but documentation of process is itself a form of reflection. Having to justify why you used particular techniques, constructs, and so on at each step is a way to check how well you actually understand them and to reinforce your learning over time. It can be tedious, but it is worth it.