Friend or Foe? Three Perspectives on AI from the 2023 ATXpo
Saint Mary’s recently hosted the annual Academic Technology Expo, bringing together ten top Bay Area schools to discuss effective education through technology. The topic on everyone’s mind? AI in higher education.
Long considered the global tech epicenter, the San Francisco Bay Area is on its way to becoming the “AI capital of the world.” It’s only fitting, then, that the region's leading colleges and universities are driving conversation and innovation.
On October 3, 2023, 160 attendees arrived on Saint Mary’s campus for the tenth Academic Technology Expo (ATXpo). Each year, the conference brings together faculty, staff, and students from ten Bay Area schools to demonstrate and learn cutting-edge educational practices, from automated content management systems to VR mock courtrooms. SMC Chief Information Officer James Johnson, a member of the ATXpo leadership team, played a key role in the College’s hosting of the event. And this year—ten months after software company OpenAI released ChatGPT to the wider world—there was a large-language elephant in the room, one that needed addressing.
Previous ATXpo hosts include UC Berkeley, Santa Clara University, and Stanford University, which established the conference in 2013. While Saint Mary’s has taken part in ATXpo for years, 2023 marks the first time the College has hosted the event.
In his opening remarks, Saint Mary’s Provost and Executive Vice President Corey Cook described the event as a natural fit for the College’s commitment to transforming lives. “Ultimately, today is about helping us develop tools and strategies to engage our students,” he said.
If you weren’t able to attend this year’s ATXpo, don’t fret! Here are three key highlights from the conference.
Living Through History—and Responding to It
In a matter of months, generative AI has radically reshaped the educational landscape. As a result, technology professionals have effectively become AI first responders, equipping their campus communities while striving to pave a way forward.
During the ATXpo Lightning Talks, three IT specialists—John Bansavich of the University of San Francisco, Colin Justin of Santa Clara University, and Kenji Ikemoto of Stanford University—described how their campuses have responded to the emergence and rapid evolution of AI. “In the very beginning,” Ikemoto said, “academic integrity was a big concern, and the need for the grading policy, really.” Based on faculty feedback and brainstorming, Ikemoto and his team at Stanford developed the “Artificial Intelligence Teaching Guide.” Designed for educators at every grade level, the free and accessible guide will help instructors explore the pedagogical possibilities of AI and create appropriate course policies for this academic year.
But it’s not all doom and gloom, Justin noted. Santa Clara ran a series of summer workshops entitled “AI is On Your Side,” encouraging faculty to explore how AI could streamline workflows and assignments. The university also plans to offer faculty grants for innovative teaching with AI. Above all, Justin said, “We really want all of our programming around AI to stay human-centered.”
AI Hallucinations: The Pitfalls and Possibilities
Later, during the IdeasLab portion of the day, we connected with Carl Thelen, longtime Instructional Technologist at Saint Mary’s and co-instructor of the beloved Jan Term course on swordplay. Back in January 2023, Thelen decided to experiment with Google Bard to create a quiz. He asked his students to watch the film The Princess Bride, then instructed the chatbot to generate a few questions to test them. Right off the bat, one question stood out: “What is the name of the poison that Prince Humperdinck gives to Princess Buttercup?”
The issue: Humperdinck never poisoned Buttercup. The chatbot had, in the words of AI experts, “hallucinated.” Chatbots like Bard have soaked up hundreds of gigabytes of text—from books, Wikipedia, and other Internet sources—which allows them to rapidly make connections and generate human-like responses. But because some of that information is wrong or biased, they can also offer up inaccuracies and stereotypes. “Sometimes the places they will take you will be great; sometimes they'll be stupid,” Thelen said. “Whatever comes through, you have to cross-check.”
But there’s potential in the mistakes, he continued. With the Princess Bride questions, he decided to include some hallucinated ones, testing whether students actually watched the film. In another instance, Thelen used the program Pictory to create a video on the history of the lightsaber, that iconic Jedi weapon in the Star Wars films. When the resulting video was riddled with errors, he decided to try another program, Perplexity. It directed him to a pre-existing YouTube video that was “exactly what I needed,” he said.
The lesson for Thelen? Sometimes, AI is most helpful when it isn’t generating.
Mightier Than the Sword? AI and Writing
At a later faculty panel, five educators and administrators were candid about their efforts to grapple with AI’s impact on college writing. Saint Mary’s Sheila Hassell Hughes has been learning on her feet: She returned to teaching writing this fall after seven years as Dean of the School of Liberal Arts. Hughes decided to call her first-year Writing as Inquiry course Inquiry in the Digital Age. “We are trying to get at digital literacy in a variety of ways,” she said.
With each writing assignment—personal narrative, rhetorical analysis, and argumentative essay—students are asked to think critically about their relationship to ChatGPT and other technology. At some point in the semester, too, Hughes will share another list of assignments—a list recommended to her by ChatGPT. She had asked the chatbot to design “a series of first-year writing assignments where ChatGPT could be used but not do the work for them.” She’ll have her students compare those writing prompts to her own.
None of the chatbot’s recommendations were perfect, Hughes noted. “They were all like, ‘Well, have the students reflect about what they did, and then they won’t be able to use ChatGPT.’” But generative AI tools can reflect, too, she said. It’s part of what makes the tools so valuable—and so human, argued another panelist, Ryan Sloan, from UC Berkeley’s College Writing Programs.
Instead of being immediately dismissive, Sloan said, “I think there are opportunities… to adjust and learn and play with the strategies that then can be layered back into a student's own process of inquiry.” Humans are inherently “mimetic,” he added. It makes sense that their technology would be, too.
READ MORE: Adjunct Associate Professor Kenneth Worthy's recent piece for Times Higher Education, "Run from the chatbots or embrace them? There is a middle way"
Hayden Royster is Staff Writer at the Office of Marketing and Communications for Saint Mary's College. Write him.