Area colleges tackle challenges, opportunities of AI in the classroom

TEACHING MOMENT: UNC Asheville professor Stephanie O'Brien teaches a senior seminar called Media Industries & Artificial Intelligence. Photo by Elijah LaPlante

A year ago, Stephanie O’Brien was confused — and a little intimidated — by the idea of artificial intelligence. Now she teaches a class on the subject at UNC Asheville.

“I was one of those people that was scared to death by AI,” says O’Brien, a lecturer in mass communication at the school. “I thought, ‘Oh my gosh, it’s going to be robot overlords. How do we deal with this?’”

But after taking an online course offered by the Knight Foundation in October, she realized her students were falling behind in their knowledge of the rapidly advancing technology. So she took another course that gave her the training needed to teach a senior seminar called Media Industries & Artificial Intelligence, which UNCA began offering this semester.

“I knew that if I was going to truly prepare my students for their careers, no matter what they’re doing in the media, I had to find a way to teach this right and learn about it,” she explains.

O’Brien’s crash course is not unusual. Public interest in AI skyrocketed after the late 2022 launch of OpenAI’s chatbot, ChatGPT. Since then, many college officials have been scrambling to learn about a technology that could dramatically transform higher education.

UNCA and Western Carolina University have appointed task forces that will soon make AI policy recommendations to university officials. And Mars Hill University adopted a syllabus policy to provide guidance to faculty members and students about the use of AI in the classroom.

Among the issues the schools are sorting through are academic integrity, digital literacy, privacy concerns, the impact of AI on curriculum and best classroom practices for professors and students who use it.

Officials agree there is no time to waste because many professors already require the use of chatbots in classroom assignments. A chatbot is a computer program designed to simulate conversation with humans. When you type a message, the program interprets it and generates a reply; in tools like ChatGPT, that work is done by a large language model trained on vast amounts of text to predict a likely response.
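For readers curious what that exchange looks like under the hood, here is a minimal sketch in Python using OpenAI's client library. The model name and system prompt are placeholders, and real classroom chatbots add interfaces and safeguards on top of a loop like this.

```python
# Minimal sketch of a chatbot's request/response loop using the
# OpenAI Python client. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The running conversation: each turn is appended so the model sees context.
history = [{"role": "system", "content": "You are a helpful teaching assistant."}]

while True:
    user_text = input("You: ")
    history.append({"role": "user", "content": user_text})

    # The model receives the whole conversation so far and predicts a reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```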

“A year ago, we were just starting down this path and we were doing presentations on ‘What is AI?’” says Chris Cain, director of MHU’s Center for Engaged Teaching & Learning. “Professors wanted to know, ‘Can we use AI?’ And now it’s ‘How do we use it? How much should our students use it?’ This is the world we live in now, because the cat’s out of the bag.”

AI assignments

Chris Duncan, an adjunct business professor at Mars Hill, started lecturing at the school in January 2023, shortly after ChatGPT became available. Because his job involves thinking strategically about educational trends, he decided to familiarize himself with how chatbots work.

“First, I just started experimenting with it, but then I realized it was such an essential skill for today’s generation looking for jobs that I incorporated it into class,” explains Duncan, who is also an executive-in-residence at the school. “They’re actually required to use it in many of their assignments, but within boundaries.”

For example, he says, he will have students write a marketing plan on their own. Then he will ask them to use a prompt to get an AI program to produce a plan. The students then combine the best elements of each plan into a final product.

MACHINE LEARNING: Chris Duncan, executive-in-residence and adjunct professor, has incorporated AI into his business classes at Mars Hill University. At left is senior Stephen Hood. Photo courtesy of Duncan

“It’s still their thought process, but they’ve had this assistant alongside of them, called AI, throughout. What it teaches them is the limitations of AI, which are substantial, but also how to use it as an efficiency tool and as an idea generator.”

Such an approach is becoming more common at colleges around Western North Carolina.

O’Brien, for instance, requires students in a videography class to use at least two AI resources on an end-of-semester project. One student used ChatGPT to help generate interview questions for a podcast and will use AI tools to clean up the final product. Another is going to explore the ethics of artificial voice generation by producing a podcast in which two AI-generated fake voices debate each other.

Students must save all their prompts and responses so that O’Brien can review their processes.

“What’s not different is they still have to go through all the research steps,” she says. “They still have to cross-check everything. They still have to get scholarly articles and look at media sources.”

Ken Sanney, a professor of business administration and law at Western Carolina University, teaches a class on alternative dispute resolution. For one assignment, he had students negotiate against a Microsoft Copilot chatbot.

“Their classmates and themselves are both at a very beginning stage of learning how to negotiate, how to mediate, whereas the AI chatbot’s going to be drawing upon the large language models and all the other information that it has available to it,” he says. “It gives them some critical feedback that they don’t necessarily get from their classmates.”

MHU’s Cain has even used ChatGPT to spark classroom conversations. “I’ll ask a question, and if nobody answers, I’ll put it in AI, and we’ll get a response back. All of a sudden, they want to talk about it. So I can use it as a kind of a response generator for students.”

Questions of integrity

But not everyone is sold on AI as a classroom tool.

“There are professors who do think that [using] AI is just plagiarism based on experiences that they’ve had with students using it for assignments,” says Ashlee Taylor, a senior at Mars Hill. “So it’s still a very iffy thing with a lot of professors.”

Even some students are skeptical.

“One came to my office and asked about it,” says WCU’s Sanney. “He said, ‘I don’t really want to [use AI on an assignment]. I don’t trust it. And I don’t want to get any of my information [from] it. And I don’t want to ever be accused of cheating.’ I thought that was interesting coming from a 20-year-old.”

Educators who embrace AI agree its use creates legitimate concerns about plagiarism, cheating, misrepresentation and accuracy.

“Having an AI write a paper for you, we all agree that’s cheating, right?” says MHU’s Cain. “But at what point does it cross that line? Is asking ‘What are some good topics for a biology paper or a mathematics paper?’ cheating? It’s really interesting to see the differences in what one professor might believe is acceptable and another says is not acceptable.”

Professors can use tools designed to catch AI-generated work, but many have opted against them.

“They’re just not that reliable,” O’Brien says. “You can find those stories all over the news of people falsely accusing students. It is very difficult to tell. I go by this mantra, that I am going to trust my students first.”

Questions of academic integrity are among the myriad issues colleges are trying to address through updated policies.

Mars Hill’s suggested syllabus policy, which faculty members are not required to follow, states: “Responsible and productive academic use of Generative AI is possible if AI-produced content is acknowledged with explicit citation, fact-checked using human-produced authoritative sources of information and edited for clarity and accuracy by the final human user. Students at MHU should know that irresponsible use of Generative AI may involve them in violations of MHU’s academic integrity policies … because it may involve the use of unauthorized material, or plagiarism, and/or unauthorized collaboration on classroom assignments.”

WCU’s working group is likely to present some suggested tweaks to the school’s academic integrity policy to the provost’s office by the end of the semester, says Sanney, who chairs the group. The current policy defines plagiarism as “representing the words or ideas of someone else as one’s own.” The phrase “someone else” will likely be changed to reflect the possibility of machine-generated words or ideas, he explains.

And UNCA’s task force has a subgroup that is focused on the impact of AI on curriculum and faculty, says Marietta Cameron, dean of natural sciences and a member of the computer science department. She is one of three people heading the group, which also will present recommendations this month on institutional philosophy, student life, and university employees and operations.

Ethical concerns about AI go beyond the possibility of students using it to cheat, Cameron says.

“We already have a problem with people not knowing what is credible information,” she says. “AI compounds it even further. We give authority to [AI] and forget that we still have fallible people that are behind these technologies. It reflects our virtues and our weaknesses and our vices and our biases.”

Sanney says a WCU nursing professor used an AI assignment to alert students to the limitations of AI. She required students to use a chatbot to generate an autobiography. Almost all discovered that the chatbot made up information about them. “Her point is, if you’re going to turn in stuff that’s AI generated, not generated by you, you likely aren’t going to know the difference between what it gets right and what it gets wrong,” he says.

Privacy is another concern for colleges, which are responsible for safeguarding students’ education records under the Family Educational Rights and Privacy Act. Use of chatbots can involve sharing sensitive educational data with third parties without proper consent, potentially leading to privacy breaches and other security risks.

Chatbots can access personal information in a variety of ways, including asking users to enter personal data during conversations or drawing on information stored in databases connected to the chatbot, such as user preferences or records of past interactions.
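As a concrete illustration of that second path, here is a hypothetical Python sketch of a chatbot backend retaining user messages. The names are invented for illustration, but the pattern, in which every turn is logged and then forwarded to a third-party model, is what creates the exposure colleges worry about under FERPA.

```python
# Hypothetical sketch of why chatbot conversations raise privacy questions.
# The names here (conversation_store, handle_turn) are invented for illustration.
from datetime import datetime, timezone

conversation_store: list[dict] = []  # stand-in for a vendor-side database

def handle_turn(user_id: str, text: str) -> list[dict]:
    # Every message is retained so the bot can recall "past interactions,"
    # including anything sensitive a student pastes in, such as grades.
    conversation_store.append({
        "user": user_id,
        "text": text,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    # The accumulated history is what gets sent along to a third-party model,
    # the point at which records can leave the institution's control.
    return [m for m in conversation_store if m["user"] == user_id]
```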

For that reason, Sanney says, WCU allows professors to use only the university-approved Microsoft Copilot chatbot.

The students’ views

Like some of their professors, many students didn’t know much about AI a year ago.

MHU’s Taylor first dabbled with the technology at the end of her junior year. As a senior, she finds herself using it almost daily.

“When I have to write emails or when I have to do any kind of written communication to somebody, I get very nervous because I don’t know how to do it,” says Taylor, a business and psychology major from Franklin. “So I’ve actually used it to help me learn how to write more professional-sounding emails. And I’ve found it very helpful with helping me learn how to do a business plan template for [Duncan’s class].”

LIFE SKILLS: Ashlee Taylor, a senior at Mars Hill, thinks using AI in the classroom has been beneficial. “I’m not as stressed knowing that I do have that tool,” she says. Photo courtesy of Chris Duncan

Fellow Mars Hill senior Stephen Hood has had a similar experience working on a business plan for Duncan’s class.

“Whenever I write that stuff, I just jumble it all into one big paragraph because I don’t really know when to stop,” explains Hood, a business administration major with a concentration in finance. “It’s not my strong suit. And so I would put it into AI, and then it would help me have stopping points and know when to kind of transition into the next thing.”

Both think a knowledge of AI will benefit them as they look for jobs, and the numbers seem to back them up. A recent survey from Washington State University found that 74% of U.S. professionals think graduates should have AI experience before entering the workforce.

“Being AI literate is crucial when entering the job market, especially since the use of artificial intelligence is becoming prominent in the field of mass communications,” says Parker Lacewell, a UNCA senior from Winston-Salem who is taking O’Brien’s AI class. He hopes to work in strategic communications.

But among others who will soon be graduating, there are doubts about AI.

“I still have some students who are very skeptical,” O’Brien says. “Here at UNCA, we have a lot of students who think very much about the community and society, and there are many that are worried about job loss. So we try to think about ways that AI will help with workflow and free up, let’s say, journalists to actually go out and talk to people.”

For many educators, a big challenge is making sure students become proficient at AI without using it as a crutch.

“An overreliance on AI tools can hinder development of critical thinking and problem-solving skills,” Sanney says. “We’re looking at some guidelines and guidance for faculty there. But we’re not recoiling from the technology as an institution.”

As a computer science expert, Cameron has pondered the implications of AI far longer than most people in the academic world. She says universities must make sure not to compromise their core values when adopting it.

“The technology should not change an institution’s mission. It should enhance it. The mission should not be enslaved to technology.”


