Exploring the Intersection of AI and Higher Education

Insights for Digital Content and SEO

Data compiled by Courtney Herda

Written by Courtney Herda, Olivia Maurel, and Jean Brown

Businesses have websites, and marketing teams have digital campaigns – all needing compelling, well-written content. New AI writing tools like ChatGPT seemingly offer a quick and easy way to produce it, but, as many are discovering, it’s what you do with this content that counts. Now, AI is being used to detect AI to prevent plagiarism and copyright infringement. And red flags have been raised (and frantically waved) about the accuracy of the content these AI writers generate.

All a bit late for a horse that has already bolted, don’t you think?

But that is the nature of technology – constantly evolving faster than we can keep up. As a Marketing Director and an adjunct professor of Marketing at the University of Tennessee’s Haslam College of Business, I am responsible for understanding these changing industry trends and for helping my team and students navigate them.

Since my work overlaps both sectors, I took the opportunity to explore the implications of some of these concerns and risks for businesses, marketing teams, and Higher Education (HE) institutions. As part of this exploration, I created an assignment for my students to expose them to AI writers and related technologies and to help them consider the various challenges around them.

As my findings and conclusions reflect, I won’t be relying too heavily on AI writing tools anytime soon.

AI Risks and Concerns In Higher Education

While its applications in marketing are proving controversial, the role of AI (and AI writers, specifically) in higher education (HE) is equally nuanced. From virtual tutors and self-testing tools for students to applications that can automatically grade papers or generate assessments, emerging AI technology promises to transform, enhance, and “shake up” how colleges and universities handle many of their regular administrative, teaching, learning, and research activities.

While AI’s opportunities abound, so do the risks for HE institutions. One primary concern is the impact AI writers like ChatGPT will have on academic scholarship and intellectual property (IP) rights. Another, according to M’hammed Abdous in his 2023 op-ed (opinion piece) for Inside Higher Ed, is the significant lack of research into the “ethical implications of implementing AI in higher education.”

OpenAI, the creator of ChatGPT, has acknowledged similar risks and partnered with leading universities to research, predict, test, and try to mitigate the ways in which tools like ChatGPT can be misused for disinformation purposes.

Since ChatGPT launched in November 2022, educators have also raised questions and concerns about the facilitation of cheating through AI. In response, existing anti-plagiarism software companies like Turnitin have updated their tools to offer AI-detection capabilities. Products by new players in the AI space provide students and educators with AI-powered solutions that can detect AI-generated content and verify the ownership of written materials.

Interestingly, while OpenAI released its own AI classifier to help detect AI-written content, it has since discontinued public access to the tool due to its low accuracy.

So, how reliable and accurate are other AI and academic plagiarism “checkers”? And, as similar technology emerges for identifying AI-written web content, what are the implications for businesses and digital marketers looking to leverage the power of AI technology in their marketing and Search Engine Optimization (SEO) efforts?

Google’s language is clear: 

"If you use automation, including AI generation, to produce content to manipulate search rankings, that violates our spam policies." 

Disregarding these spam policies puts businesses at risk of ranking lower in Google search results or not being ranked at all. Google has also placed the onus on marketers to create valuable, authentic, and human-focused content, over and above content generated solely to help with online rankings. 

Against this backdrop of Google’s Helpful Content System – a ranking signal that identifies and rewards helpful, people-first web content over content with low value-add – understanding how search engines can (and eventually will be able to) scan, analyze, and rank the value of AI-written content will directly impact a marketing team’s SEO and content recommendations. 

AI and Plagiarism Concerns: Closer to Home

For years, there have been tools that help university and college students cut corners in completing academic assignments, leading to anti-plagiarism software like Turnitin. Such plagiarism checkers look for similarities between a student’s work and text found elsewhere online and in research databases. 

My class at the Haslam College of Business, Digital Marketing Strategy, introduces students to the latest and most effective digital marketing tools, strategies, and tactics they can execute within a larger marketing and media strategy.

The next wave of digital marketing tools features AI writers – natural language processing tools like ChatGPT – with access to large data sets of text that can turn simple questions, prompts, and keywords into blogs, long-form essays, and copy for social media posts, paid ads, and marketing emails. 

For this article, I set up an assignment for my students over two semesters to help familiarize them with new AI technology and to test the ability of AI-checking tools that promise to detect AI-generated and augmented content. The study also aimed to assess the capacity of current anti-plagiarism software to catch copyright infringements in AI-written essays. I also wanted to explore the potential implications of these infringements on digital marketers’ SEO activities. 

The Prompt: “Will AI Replace Digital Marketing?”

AI writers or “chat models” like ChatGPT require text prompts (or inputs/instructions) from users to generate a response (or text output). Much like a Google search, these prompts take the form of keywords, queries, or reference text, which the AI writer then uses to generate “answers” by applying its language model to its existing knowledge base.

For new learners preparing to enter the workforce, advancements in AI software like ChatGPT will continue to disrupt their world of work. I felt it fitting (and topical) to give my thirty-nine (39) spring semester students and twenty-nine (29) fall semester students the following prompt to input into either ChatGPT or another AI tool of their choice (most used ChatGPT):

“In 200-350 words, will digital marketing be replaced by AI?”
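The students used the chat interfaces directly, but the same exchange can be reproduced programmatically. Below is a minimal sketch using the OpenAI Python SDK; the model name and client setup are illustrative assumptions, not the configuration the students worked with.

```python
# Minimal sketch: sending the assignment prompt to a chat model via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "In 200-350 words, will digital marketing be replaced by AI?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The generated essay comes back as the first choice's message content.
print(response.choices[0].message.content)
```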

Students weren’t permitted to edit or reword content outputs from the AI writer to avoid skewing results; I ran 68 AI-generated responses “as is” through the AI detector (GPTZero) and a plagiarism checker (Unicheck). 
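For readers curious what that batch-checking step looks like in practice, here is an illustrative sketch that loops over saved submissions and posts each one to an AI-detection HTTP endpoint. The endpoint URL, request payload, and response fields are placeholders; GPTZero and Unicheck each have their own commercial APIs, and this is not their documented interface.

```python
# Illustrative batch check: run each saved submission through a hypothetical
# AI-detection HTTP API. The URL, payload, and response fields are placeholders.
from pathlib import Path
import requests

DETECTOR_URL = "https://detector.example.com/v1/predict"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                   # placeholder credential

for path in sorted(Path("submissions").glob("*.txt")):
    text = path.read_text(encoding="utf-8")
    resp = requests.post(
        DETECTOR_URL,
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()
    # Assumed response shape: {"ai_probability": 0.87}
    print(f"{path.name}: {result.get('ai_probability')}")
```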

The students were also asked to answer the following questions about the experiment for self-reflection purposes, some of which we will discuss later.

  1. Do you think the AI checker will flag your content as written by an AI?

  2. Do you think your professor will be able to tell your content was written by an AI?

  3. Do you think the plagiarism checker will flag your content as plagiarized?

  4. Do you think the content created is better or worse than the original content you would have made? 

  5. What is the value of AI-generated content? 

  6. What are the drawbacks of AI-generated content?

About Our Tools

GPTZero

Created by 22-year-old Princeton University student Edward Tian, GPTZero is a reaction to the ethical questions about students using AI-created content in the higher education classroom environment. 

Ironically, the tool leverages deep learning technology and language modeling (i.e., AI) to determine if content is AI-generated. It can also detect if other software has been used to modify content to bypass AI detection. GPTZero is considered one of the best tools in AI detection and is the most frequently used AI checker, with over one million subscribers. 

Unicheck

Unicheck (by Turnitin) is a free online plagiarism-checking tool educators can use to detect potential plagiarism in academic essays or theses. According to Unicheck, plagiarism is “wrongly appropriating ideas or words without giving credit to the source” or the “theft of one’s intellectual property, be it an image, music sample, invention, or concept.”

The tool analyzes a document for similarities between its text and existing published web content. It identifies a piece of content’s “average similarity to other web sources,” which it expresses as a percentage – known as a similarity score.
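Unicheck’s matching algorithm isn’t public, but the idea behind a similarity score can be illustrated with a naive n-gram overlap: the share of a document’s word sequences that also appear in a comparison source. The sketch below is a toy approximation for intuition only, not Unicheck’s method.

```python
# Toy similarity score: percentage of a document's word 3-grams that also
# appear in a comparison text. Illustrative only; not Unicheck's algorithm.
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(document: str, source: str, n: int = 3) -> float:
    doc_grams = ngrams(document, n)
    if not doc_grams:
        return 0.0
    shared = doc_grams & ngrams(source, n)
    return 100.0 * len(shared) / len(doc_grams)

# Prints 80.0, i.e., an 80% similarity to this single source.
print(similarity_score("AI will not replace digital marketing entirely",
                       "Many argue AI will not replace digital marketing"))
```

Real checkers compare documents against large databases of web pages and papers and aggregate matches across many sources, but the percentage output is read the same way.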

GPTZero Results: Spring Versus Fall Semester Submissions

2023 Spring Semester Results:

Our team ran the first batch of AI submissions through GPTZero. GPTZero flagged 25 of the 39 submissions (64%) as wholly AI-generated, while the remaining fourteen (14) were identified as human-written.

Even between semesters, the detection results shifted as the technology continued to evolve. Nevertheless, the majority of GPTZero’s determinations were accurate responses to a technology that was still relatively new.

2023 Fall Semester Results:

Again, all submissions were evaluated by GPTZero. The results were somewhat different by the fall semester, owing to improvements made to the GPTZero model to increase its accuracy in detecting AI-written content.

100% of the fall semester assignments were identified as “mixed” – i.e., including some degree of AI-created content. The introductions and conclusions were the most likely to be flagged. In 18 of the 29 papers, more than 50% of the content was deemed to be AI-created. In fact, GPTZero flagged all submissions as having at least 20% AI-written content, with an average of 60% AI-generated text. 

Unicheck: Spring Versus Fall Semester Plagiarism Scores  

Spring Semester Results

In the spring semester, the average similarity of the 39 samples to other web sources was 13.65%. The highest score was a 32.3% similarity.

Seven of the 39 samples scored above 25%, a level typically flagged as “yellow,” or highly suspicious of plagiarism. However, most journals and schools expect to see less than 10% similarity in academic essays, documents, or research articles. Only 17 of the 39 samples scored below 10% and would thus be considered acceptable.

Fall Semester Results

Unicheck detected a higher similarity percentage (to other sources) among the fall semester submissions. The average similarity was nearly 30%. The highest score was 60%, and the lowest was 9%. 

Most submissions were identified as having 25-35% similarity to other web properties, typically resulting in academic questioning, honor code violations, or suspicion of cheating from the professor and/or academic department. 

Who’s Fooling Whom?

When we asked the Spring Semester students if they thought an AI checker would flag the writing sample as AI-written, 23 said yes, and 16 said no.

However, they didn’t believe I (their professor) could detect the AI-written content when reviewing their submissions. Twenty-six (26) students said that I would not be able to tell the difference between AI-generated content and their own writing, and 13 thought I would – nearly the inverse of the previous result.

By the Fall semester, almost all students believed GPTZero and Unicheck could identify the content as AI-created (27 out of 29); the same number of students also believed I could identify AI-written content. 

Student Reflections: Trends and Analysis 

The following trends emerged from a review of the 68 AI-generated writing samples and the students’ responses to questions about the writing quality and potential value and drawbacks of AI-written content. 

  • Do you think the content created is better or worse than the original content you would have made? 

  • What is the value of AI-generated content? 

  • What are the drawbacks of AI-generated content? 

While no thematic analysis tools or methods were applied to identify these themes, they were relatively consistent and easy to pinpoint across student submissions in a small sample size like this. 

Who Does It Better?

The students were asked to evaluate whether the writing sample provided by ChatGPT was better or worse than their quality of writing. The feedback from the Spring Semester students was mixed, with some stating that the vocabulary and grammar were superior to what they typically used – “Mature vocabulary and accurate punctuation is used throughout the entire paper which can seem a little suspicious.” – but that the writing felt repetitive, the sentences were short, and there were no citations.

Twenty-six (26) Fall Semester students responded that ChatGPT’s outputs were better than theirs, and 13 said they were worse.

Formulaic Structure. Several students noted that the writing samples started by restating the prompt, using the same style of transition phrases, with “First,” “in addition,” and “in conclusion” reappearing several times in several papers. Very few college students tend to write in this format, so it does stand out. 

Repetition of Themes. The AI content often repeated the same concept in multiple ways throughout several small paragraphs, restating key ideas instead of providing tangible evidence or examples. It lacked critical thinking in providing specific examples and often focused on broad generalizations in answering the question. 

  • “One drawback that I noticed was though it provided a lot of detail, it still felt a little repetitive when reading it over.”

  • “I think a major drawback of AI-generated content is that it can be repetitive. It took me six times to generate a paper with the correct word count. But each attempt to generate a paper was similar to the last. Some were longer, and some were more summarized. But overall, they all said the same thing.”

Low Creativity. While many people think language processing tools like ChatGPT can “understand” the prompts given to them by users, AI writers are more like sophisticated autocomplete tools, which are “trained to predict the next words in a text, based on what has already been typed.”

According to GPTZero, this AI-writing “pattern” results in AI content that lacks the nuances of human writing and doesn’t vary much in tone or style. This lack of variation is one of several ways GPTZero’s model identifies AI-generated content. 
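GPTZero’s publicly described heuristics center on perplexity (how predictable a text is to a language model) and burstiness (how much that predictability varies across sentences). As a rough illustration of the “lack of variation” idea, the sketch below uses sentence-length variation as a crude stand-in for burstiness; it is not GPTZero’s actual model.

```python
# Crude "burstiness" proxy: variation in sentence length. Uniform, evenly sized
# sentences (a pattern common in AI output) yield a low score; human writing,
# which mixes short and long sentences, tends to score higher.
import re
import statistics

def sentence_length_burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: standard deviation relative to the mean length.
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "AI helps marketers. AI saves teams time. AI writes quality drafts."
varied = "AI helps. But when a campaign flops, someone still has to figure out why, and that part is stubbornly human."
print(sentence_length_burstiness(uniform))  # low variation
print(sentence_length_burstiness(varied))   # higher variation
```

A single crude metric like this would misfire constantly on its own; the point is only that low variation is the kind of statistical signal AI detectors rely on.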

These “generic” qualities of ChatGPT’s output weren’t lost on my students either. 

Many students who found the samples inferior to something they would write felt that the AI writing lacked creativity. The lack of evidentiary support and the generalized answers meant the writing lacked the stylistic quirks common to most writers and showed no out-of-the-box thinking.

  • “A lot of great writings are great because of the human creativity, and I think ChatGPT lacks that. I also think the paper was very generic.”

  • “It made good arguments and had great ideas… I just think there could be a little more creativity in how the thoughts are presented. I think that is the main drawback of the AI-generated content.”

  • “The odds of AI generating text exactly how the user wants is not great, however, it still produces quality work in a condensed amount of time.”

No Citations. One concern many of the students expressed was that there were few, if any, instances of citations used in their AI content. Most students are quite sensitive to plagiarism – particularly in a university setting – and are accustomed to citing all their work, even when not explicitly requested. There was a fear of submitting content without citations in case it would be flagged automatically as plagiarism or non-human-initiated content. 

Superior Vocabulary. While disheartening to me as their professor, several students flagged the AI writer’s grasp of vocabulary as superior to their own. Longer words, more academic terminology, and specialized language were consistently present across many writing samples. 

Professional Tone. The final theme we detected across the writing samples was that students felt the AI-written submissions had a more professional, almost technical tone than their own writing. Many students felt that they tend to write in a more conversational tone that sounds more like how they speak, with less formality and more slang or Gen-Z lexicon.

  • “I think if this was something I wrote, it would have had a more organic tone, and the use of quotes would be much less and in more effective positions.”

  • “This response was like reading from a textbook.”

  • “I think most professors nowadays can tell the writing styles of college students, and after playing around with different chatbots, I don’t think AI shares that same style.”

  • “It did answer the question, but it did not sound like something I would have written.”

What is the Value of AI-generated Content?

Students identified increased innovation and efficiencies as the top value AI writers could provide to industries.

  • “There can be a lot of value in AI-generated content as long as it is used effectively. The use of AI can help to eliminate a lot of superficial jobs in different industries, helping to create a more effective and innovative environment.”

  • “It will dramatically increase the efficiency of businesses and their employees. Time is money, and if Chat GPT can save employees time, then they can allocate this towards something else.”

They also felt that AI writers helped with content ideation and sped up the writing process.

  • “The value of the AI-generated content was the quality and speed.”

  • “AI can give people a rough draft that, with some editing, could be something completely acceptable and even successful.”

  • “The most effective way to use it is to generate ideas and the skeleton of a new project.”

  • “It has value because it might assist with coming up with more ideas to talk and write about and help [with] figuring out where to start with a topic.”

  • “The value is that it helps speed up the time for people to come up with creative ideas. It can also offer a new perspective to anything you put in and give new ideas that you might’ve not thought of before.”

What are the Drawbacks of AI-generated Content?

A function of Higher Education is to train students to be critical thinkers. Often, this is achieved by having them learn to research, read, analyze, and summarize information and data for academic essays. One concern expressed was that a growing dependency on AI writers would cut this process short.

  • “I feel that soon enough, AI will be so common that students will not have to write papers or critically think for themselves anymore.”

  •  “It is really easy to put an assignment into the AI instead of doing the work yourself. It will prevent students from using their brains at full capacity and practicing creativity.”

  • “Obviously, one of the major drawbacks of AI-generated content is the fact that people might not actually know what they are talking about. Just because you read over the content doesn't mean you are educated on it. If someone were to do a presentation based on AI-generated content, when the presenter is faced with questions deeper than the content generated, they may have no idea what to say.”

  • “[I]t is taking away the need and desire for humans to establish a solid work ethic. The idea of having a robot do your work instead of you putting in the work is present here.”

In the context of copywriting for digital marketing, some students felt a missing human “touch” would negatively impact digital messaging and strategies.

  • “Sometimes it can sound very robotic and not actually like a human wrote it.”

  • “Especially in digital marketing, content can be a lot less meaningful and not as customer-driven because of how data is interpreted [by AI].”

  • “For a company trying to appeal to its people, it’s sometimes better to write from a more human perspective… [which] can show more emotion, which may draw some people’s attention.”

  • “One drawback is that it lacks the emotional piece. Most consumers are tied to a brand because they have that emotional connection, and AI does not offer that.”

  • “The drawback is that some things will lack the human touch. The AI may not take into account the things a human would when generating content.”

Can AI Content Be Human-first and Reliable Enough?

With Google’s Helpful Content System signal in mind, my students’ observations above are not far off from expert guidance – especially as far as SEO content and search rankings are concerned. As Google Search Relations Team member John Mueller emphasized in a 2023 episode of the Search Off The Record podcast, “Let’s Talk Ranking Updates,” content created with “an actual audience in mind that you know would come to it directly” is one useful way to navigate Google’s ever-evolving ranking systems.

Whether AI writers can deliver reliable, accurate, human-first content without the necessary training and human intervention remains to be seen.

System architect Rich Dominelli highlighted that AI is just a “statistical model of weights” that “has no desires or needs or agency of its own.” It simply “[emits] what it thinks you want to the best of its ability.” Because of this, AI could easily return false statements that it thinks are correct or is “trying to convince you…[are] correct.” The lack of accuracy and reliability is risky and concerning in an academic context, just as much as in a business context. Misleading customers is dangerous.

AI’s real value is in its ability to multiply creativity. A recent Google study found that 80% of advertisers already use at least one AI-powered Search ad product. AI-driven tools can help brands scale their creative outputs to boost their productivity and impact the success of their marketing campaigns.

Most evident from this project, however, is that while plagiarism checkers and AI detectors are improving, they still have a way to go before they can identify AI writing with 100% accuracy.

As AI writers like ChatGPT continue evolving, AI checkers will no doubt continue to evolve too, getting better at distinguishing human-written from AI-created content. However, educators’ concerns about academic plagiarism and scholarship remain well-founded. Marketers, webmasters, and business owners could also see an impact on their website content as Google and other search engines adopt similar tools to identify (and potentially penalize) AI-generated content.