Session 1: GenAI and the Future of Teaching & Learning
Overview
This session explores how generative AI is reshaping education and what that means for teaching and learning. Using Co-Intelligence by Ethan Mollick as a guiding text, we will examine how AI tools may augment or transform classroom practices, the way faculty impart knowledge, and the way students engage with the learning process. The discussion will focus on practical and conceptual questions about how AI is already influencing higher education and where it may be headed. No prior reading is required - discussion questions are designed to provide all relevant context, so all participants can engage meaningfully regardless of familiarity with the material.
Reading Materials
- Co-Intelligence: Living and Working with AI by Ethan Mollick.
Meeting Date and Location
Thursday, February 19th in RNS 356 from 11:30 to 12:30 CST
Note: This does not conflict with either SOAR discussions or faculty meetings
Discussion Questions
Chapter 3: Rules for Working with AI
In Chapter 3, Mollick introduces four rules for working with AI:
- Always invite AI to the table
- Be the human in the loop
- Treat AI like a person (but tell it what kind of person it is)
- Assume this is the worst AI you will ever use
How, if at all, would you revise these rules?
- What would you add or subtract?
Do you think these rules make sense for the purpose of figuring out whether, and how, we should incorporate AI tools into our classrooms?
If you already make use of AI tools (particularly Generative AI tools), what are some of your own personal rules for working with AI?
- Why do you consider those rules important?
What rules would you make for student use of Generative AI?
Co-Intelligence and Subject Matter Expertise
Mollick describes a powerful tension between having access to AI outputs across domains and having the ability to properly work with those outputs, a tension he frames as “co-intelligence.” He writes:
“The issue is that in order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI’s output, we need subject matter expertise” (181).
What might be some of the implications of this tension for universities and colleges?
As a faculty member, how would you define subject matter expertise?
As a student, how would you define subject matter expertise?
A student is very unlikely to have subject matter expertise in a course they are actively enrolled in.
- Given this, are there ways to meaningfully incorporate AI use into classrooms that support learning and growth?
- Can AI assist students in developing subject matter expertise as you define it?
AI, Tutoring, and the Future of Teaching
Benjamin Bloom’s 1984 paper “The 2 Sigma Problem” showed that students receiving one-to-one tutoring performed two standard deviations better than those in conventional classroom settings. Bloom challenged educators to find scalable methods that could achieve similar results.
Mollick writes:
“This is where AI comes in… AI will reshape how we teach and learn, both in schools and after we leave them… They won’t replace teachers but will make classrooms more necessary… and they will destroy the way we teach before they improve it” (160).
- What changes are you already feeling or seeing in the classroom—as students or as teachers?
Post-Session Reflection
In our first session, participants divided into small groups to explore four key themes: changes already observed in the classroom, a re-evaluation of Mollick’s Rules for Working with AI, the concept of “The Tyranny of the Blank Page,” and the role of Subject Matter Expertise.
What Changes Are We Already Seeing in the Classroom?
There was general agreement that while some improper use of GenAI tools by students does occur, widespread cheating at St. Olaf does not appear to be a significant problem (at least not yet). That said, some faculty are already adjusting how they assess students in response to that possibility. The most common shift has been a return to in-person assessments. Across disciplines, students reported that at least some of their classes have moved away from take-home final essays or exams. Faculty expressed frustration with this shift, noting that take-home assessments allow for more interesting, open-ended questions and give students the time to engage deeply with difficult material in ways that timed, in-person settings simply do not.

One participant reflected on Mollick’s comments about AI tutors, pointing out that a fundamental aspect of learning is being in a shared space with peers, all struggling through a concept together, and coming to understand that failure is a natural and valuable part of the process. The concern raised was that an AI tutor cannot replicate that experience for students who have not yet developed a tolerance for failure in the way a human tutor or classroom can. There was also broad agreement that the GenAI models most people have access to (the free versions) tend to affirm and agree with users rather than push back, which can lead students down the wrong path. A human tutor, by contrast, can offer thoughtful resistance while remaining encouraging.

This naturally led to a discussion of Benjamin Bloom’s 1984 paper, “The 2 Sigma Problem,” which demonstrated that students receiving one-to-one tutoring performed two standard deviations better than those in conventional classroom settings. The question the group raised in response was: what are students performing toward, and for whom?
Learning has both quantitative dimensions (familiar to anyone who has ever taken a standardized test) and qualitative ones, such as failure tolerance and social interaction. While we can attempt to measure the former, the latter is far more difficult to capture, and so-called “quantitative measures” of qualitative traits are generally weak proxies.

The group also discussed how GenAI tools have made it easier for students to act on their underlying incentive structures. Consider the difference between a math major focused primarily on earning strong grades as a signal to future employers, and a math major who intends to pursue graduate study and needs genuine mastery of the subject. These two students have very different motivations. In the past, even the grade-focused student had to put in significant effort to achieve that signaling effect, and even deciding to cheat required far more effort than it does today. With the availability of GenAI tools, the required effort is reduced considerably.
Reassessing Mollick’s Rules for Working with AI
Of Mollick’s four rules, the one that generated the least controversy was “Assume this is the worst AI you will ever use.” The rule “Treat AI like a person,” however, was far more contentious. To paraphrase one participant: for perhaps the first time in human history, we have machines capable of mimicking our language with apparent fluency, and treating those machines as people would be a serious mistake with real potential for harm. We are already seeing the consequences in the rise of AI romantic relationships, AI therapists, and other forms of problematic attachment. We should instead treat AI like a parrot. When a parrot says something wise, you don’t (or at the very least shouldn’t) conclude “that’s a really wise parrot”; you wonder “how was that parrot trained?” This is precisely the stance we should take with GenAI - a curiosity about its training data as a way of understanding its output. For more on this, see “On the Dangers of Stochastic Parrots” by Emily Bender et al.

The room was roughly split on the rule “Always invite AI to the table.” Some argued it is the most effective way to discover which tasks AI genuinely helps with, while others maintained that certain activities should be considered fundamentally human - a theme I hope to explore more fully in the second session.
On the “Tyranny” of the Blank Page
The conversation in the room suggested that describing the process of wrestling with “the blank page” as tyrannical is largely a matter of perspective. To some, the blank page represents possibility - at that point, anything could still happen. When we default to GenAI for idea generation, we lose something distinctive about the way we think about and approach problems - anchoring on first ideas is very real. And even where AI does boost individual creativity, it limits group creativity and how teams think, because everyone ends up generating similar ideas, as researchers at the Wharton School found. As one of my teachers once put it, there’s something to be said for learning to “sit in the suck” and seeing what comes of it.
What if we defined the prompt itself as the creative idea? This question generated quite a bit of discussion about whether prompting is creative (it can be), whether you can trademark a prompt (short answer - it depends), and whether you own the output of a particularly creative prompt. That last question dovetailed into the ethics of these models - they’re all trained on something, and if you don’t own the underlying “something,” you probably don’t own the output of your prompt either. For more on ethics, join us on March 5th! (Yes, I am a shameless self-promoter, deal with it.)
Co-Intelligence and Subject Matter Expertise
We ran out of time before fully addressing this theme, but based on the discussion I was able to observe, a central challenge is determining if and when students have developed sufficient expertise to meaningfully engage with GenAI tools in their coursework. In many subject areas (upper-level mathematics, for example), the versions of these tools available to most people are not yet capable of doing much more than serving as a cautionary tale about the dangers of outsourcing your thinking.