
Colleges are going to have to put ChatGPT on the curriculum

Generative AI is becoming more powerful and the education system cannot afford to ignore it. This fall semester needs to be a period of rigorous questioning and experimentation for teachers at all levels. Yes, we will read scholarship and write essays in class, but we will also use generative AI individually and together

September 07, 2023 / 10:06 IST
In retrospect, my late summer to-do list was laughable. Among 20 other items to accomplish “before the semester begins” was this innocuous bullet point: “Write my AI policy.”

That’s like writing “prepare for storm” while in the eye of the hurricane.

Forecasters in the media had warned me since the spring, so why wasn’t I better prepared? In part because I’m old-fashioned, a late adopter. I’m a scholar of ancient, timeless things, a professor of theology who also teaches Greek and Latin and Coptic. I’m more comfortable decoding papyri unearthed from the desert than re-coding chatbots in the cloud. And I suppose I had stayed put during previous waves of educational technology, which were usually overhyped. Indeed, I had experimented with the generative AI platform ChatGPT when it was first released — and was not impressed.

ChatGPT can’t adjudicate the good from the bad, I had thought. It writes stilted prose, with occasional hallucinations and low aptitude for direct quotation. It’s a powerful aggregator of internet discourse, to be sure. But I thought there was a five-year window to figure out how to adapt our educational methods and goals to generative AI.

Nonetheless, I had blocked off a recent morning to read up on the technology, plug in some of my favorite essay prompts for my classes, and then write my AI policy. But a lot had changed in a year. GPT-4 was now generating decent work on complicated questions, in mere seconds per essay. With just a few minutes of refining prompts, editing and plugging in quotations, these would be above-average student essays. I had not yet seen an excellent essay worthy of an A grade, but the competence to produce a good (albeit formulaic) one was now evident. Some prompts:

  • “Give me some options for a bold thesis statement about the future of abortion policy.”
  • “Were Jesus’ teachings in the Sermon on the Mount really good advice for daily life?”
  • “Analyse the strengths and weaknesses of Professor Michael Peppard’s scholarly writings.”

It gives surprisingly coherent and meaningful responses to all of these, and its criticisms of my own published work are, sadly, accurate.

Some of my assignments require creative or first-person writing. So I prompted GPT-4 to write a personal essay about a young girl who had just made a perilous migration from a violent family in Honduras to a bus stop in Texas, and whose only remaining possession was a rosary given to her by her grandmother. Not only did the AI write a coherent story on the first attempt, but it also used metaphors accurately and made symbolic connections that read realistically: “My family was fractured — broken shards that could never form a complete picture again. … Before I left, Abuela handed me a rosary. ‘Your North Star,’ she whispered, as she pressed it into my palm.”

Is this an excerpt from a work of great literature? No, but GPT-4 produced a competent narrative with some poignant moments. It generated the pivotal metaphor of rosary as “North Star” — a doubly meaningful symbol for the Catholic migrant’s journey northward.

Maybe I shouldn’t have been surprised. So much of literature’s meaning and emotion emerges from the manipulation of symbols, and large language models like ChatGPT have been coded specifically to do just that. Not only did I now understand the power of this technology to disrupt education, but I also saw the Hollywood writers’ strike with new eyes.

As someone whose career has been built on analytical reading and generative writing, I needed someone to talk me off the professional ledge, to tell me the storm isn’t as scary as it seems. I called up my friend Mounir Ibrahim, who works at Truepic Inc, a leader in digital-content authenticity. After a long conversation, he convinced me that what I am seeing now is already old technology, and that current capacities are far beyond what I am using on a publicly available interface. He persuaded me to change my educational methods and assessments immediately and, in this new world of AI, to reassess what education is for.

This fall semester needs to be a period of rigorous questioning and experimentation for teachers at all levels. If AI can generate a cogent essay template about the role of religion in the Roman Empire, then should I retain this essay prompt in my class? More generally, is learning to write an analytical essay still a central goal of a liberal education? If not, what else should we be doing?

Perhaps we should reconfigure our courses to emphasise the aspects of thinking and learning that we do better than AI does. We humans are (as of now) better at asking questions, critical thinking, building and maintaining human relationships, analysing and preventing bias, evaluating aesthetics, solving problems about the present and future, making ethical decisions and exercising empathy. What would it look like to build our courses around these features of our learning?

This semester will be in “sandbox mode,” as the gamers say, an exploratory mixing of the old world with the brave new one. Yes, we will read scholarship and write essays (in class on blue books), but we will also use generative AI individually and together. We will increase the frequency and modes of group collaboration and the development of higher-order questions that AI does not ask. I will re-introduce the most ancient assessment, the individual oral exam, while also requiring students to use generative AI on their first take-home essay.

Most importantly, we will critique the biases, omissions and falsehoods of generative AI, in the model that I am calling “require and critique.” For some assignments, students will use generative AI and then, as their evaluated work, offer higher-order criticisms of its outputs based on other sources and inputs from our course. Finally, we will devote substantial time and effort to ethical analysis — the ultimate mode of intelligence that remains unique to humans, for now.

I know I’m still not ready. But the waves of some storms are too big to ignore or resist. The only choice, it seems, is to ride them.

Michael Peppard is a Professor of Theology at Fordham University. Views are personal and do not represent the stand of this publication.

Credit: Bloomberg
