


‘A brave new world’: How SU professors are adapting to AI in classrooms

Maxine Brackbill | Photo Editor

Hamid Ekbia, the director of the Autonomous Systems Policy Institute at Syracuse University, teaches Information Studies 300: AI and Humanity to help students build a better understanding of artificial intelligence.


In response to the rapid rise of artificial intelligence, Syracuse University’s School of Information Studies launched the class Information Studies 300: AI and Humanity this fall.

The class is an introductory course for the Autonomous Systems Policy Institute, an SU initiative launched in 2019 that focuses on the intersection of technology, society and policy. It is taught by Hamid Ekbia, a professor at the Maxwell School of Citizenship and Public Affairs and the director of ASPI.

“All of us, as citizens, need some level of understanding of AI and that is what this course is going to do,” Ekbia said.

Ekbia said the course follows a “bottom-up” approach to build a robust foundational understanding of AI, which he sees as essential for students at SU.



“This class is meant to be on AI, but from a very multidisciplinary perspective,” Ekbia said. “The goal is to get students from all over campus and from all different disciplines.”

Although many SU professors have included specific language and guidelines in their syllabi addressing the use of AI in their classrooms, Ekbia intentionally didn’t. Instead, he is opting to teach students how to use AI in a responsible and ethical way.

“Rather than telling people what to do or what not to do, I’m going to teach them to develop a clear sense of how to do this themselves,” he said.

Ekbia is opting to teach students how to use AI in a responsible and ethical way through the course Information Studies 300: AI and Humanity, which launched this fall.

Maxine Brackbill | Photo Editor

The use of AI in academic settings has been reflected in classes and syllabi throughout SU as programs like ChatGPT – a widely used chatbot driven by AI technology – have increased in popularity. ChatGPT became the fastest-growing consumer application in history after reaching 100 million monthly active users in January, only two months after its launch, according to Reuters.

Nina Iacono Brown, an S.I. Newhouse School of Public Communications professor who specializes in communications law, said SU’s current Academic Integrity Policy, updated in 2021, already indirectly addresses the use of AI in academic work.

“Our academic integrity guidelines are pretty clear: what is acceptable and what is not acceptable,” Brown said. “So when a professor asks a student to write a response to something, for example, the expectation is that the student is writing the response, not AI.”

SU faculty and instructors are also encouraged to include a statement on whether and how artificial intelligence should be included or prohibited in their syllabi, according to SU’s Center for Learning and Student Success syllabus recommendations.

Dan Pacheco, a Newhouse professor of practice and the Peter A. Horvitz Chair of Journalism Innovation, said he sets specific guidelines in each of his class syllabi that allow the use of AI, as long as students disclose how and when they use it. Some classes he teaches, like Magazine, News and Digital Journalism 545: Virtual Reality Storytelling, have more flexible rules than others.

“We are at a unique inflection point in human history with next-generation Generative Artificial Intelligence coming online over the past year,” Pacheco wrote in his syllabus for Journalism 221: Foundations of Data and Digital Journalism, which he sent to The Daily Orange. “If you use generative AI to assist in the performance of your coursework, including assignments, you must disclose it.”

Pacheco doesn’t allow students to use generative AI to write stories or generate and analyze data. Students are permitted to use generative AI in other ways, such as to get instructions on a task or to help write HTML code. Pacheco said he is set to teach a Newhouse class next semester on artificial intelligence for media professionals.

“We need to start using these tools in an educational setting in responsible ways that line up with how industries are using them,” he said.

Alex Richards, an assistant professor of Magazine, News and Digital Journalism in Newhouse, also allows some use of AI in his classes. He said AI can play an important role as a “tutor” for students to help them understand anything they want at any time. But he cautions students against relying on it too heavily.

“Generative AI is not acceptable to use when it’s doing the work for you,” Richards wrote in the generative AI and large language model policy in his class syllabi, which he sent to The D.O.


The temptation of AI in classrooms, Richards said, presents its own set of problems by encouraging a “short-cutting” approach that can stifle critical thought.

Yüksel Sezgin, an associate professor of political science in Maxwell, also worries that AI will diminish students’ ability to think independently and critically. Sezgin has banned not only the use of AI in his classes but all technological devices in his classroom.

In response to the rising popularity of ChatGPT, Sezgin said he stopped giving take-home exams this year, something he had done every previous year.

“I have to keep a fair playing ground for all my students between those who cheat and those who don’t cheat,” Sezgin said. “That is my role as the educator.”

Beyond fairness concerns, many SU professors have raised the negative implications of AI, such as structural biases and misinformation.

“The biggest concern that I have, especially as a person of color, is the biases that exist in large language models,” Pacheco said. “They were trained on data from the internet that reflects our cultural biases, so our AI basically will then use those patterns and reflect them back outward.”

Pacheco said the best way to avoid feeding the “beast of bias” is to build the time and space for conversations about issues exacerbated by AI and to teach students how to navigate these technologies responsibly.

Another concern Richards raised was that AI could misinform students who use it as a research tool.

“AI creates something that sounds like it should be right and in the process of doing that, it certainly can be right, it can be correct,” Richards said. “But it will also make things up, it will hallucinate, it will generate facts that aren’t facts.”

Jasmina Tacheva, an assistant professor in the iSchool, said she has concerns about AI’s broader societal implications beyond academia, such as the environmental impacts of data centers and pay rates for AI workers who train models.

“My appeal to all of us is not to be swept away by the AI hype cycle; instead, we ought to understand these technologies for what they are – mirrors reflecting our society, with all of its inherent complexities and challenges,” Tacheva wrote in an email to The D.O.

Despite these concerns, Brown acknowledged the educational benefits of AI, as long as professors and students keep in mind that these technologies are still emerging and inherently flawed.

As AI becomes more prevalent in educational settings and beyond, Richards said, it’s now up to educators to navigate its place in the classroom.

“It’s just sort of a brave new world,” Richards said. “We’re going to have to find ways to respond quickly to AI to keep the whole college experience meaningful and worthwhile.”
