Education & Family
AI in Baltimore Schools: Academic Innovation or Sophisticated Cheating?
With generative AI tools at their fingertips, local students and teachers get familiar with the technology's risks and rewards.

Generation Z, often referred to as Gen Z, is the demographic cohort following the Millennial generation, typically defined as individuals born between 1997 and 2012. As the first generation to grow up with the internet, social media, and advanced technology from an early age, Gen Z is distinct in its values, behaviors, and interactions with the world. This generation is poised to shape the future in profound ways, especially in terms of digital communication, social justice, and consumer trends.
To the 11 students in Goucher College associate professor Lana Oweidat’s Theories and Practices in Composing, Tutoring, and Teaching honors course, the opening lines of the essay on Gen Z—and the two pages that followed—were reasonably good. Asked to assess the essay, the students, all tutors-in-training for Goucher’s Writing Center, quickly noted some strengths: The essay presented facts and provided seemingly solid data to back up its assertions. But there were weaknesses, too. There was no thesis and the essay flowed poorly, for starters.
Worse, with a little digging, the students encountered a larger flaw: shoddy sourcing.
“It turns out there was a source, but it was not accurate,” says Oweidat, who is chair of the department of Professional and Creative Writing and director of Goucher’s Writing Center. Despite its problems, the essay earned Cs and Bs from the tutors-in-training. Not bad considering the writer was none other than ChatGPT, the popular generative artificial intelligence tool.
“I think [the students] were surprised that the essay was pretty good,” says Oweidat. “Even I was surprised at what it came up with in five or 10 seconds.”
The essay review was part of a November 2024 class that taught students about AI tools and talked through steps to take when tutors suspect a student has used AI in their work without disclosure. As peer tutors, “They should have that knowledge to be able to tell students what AI is capable of doing and also help students navigate the issue of using AI ethically,” says Oweidat.
Gaining such knowledge and skill is only part of the future tutors’ training, but it’s a vital one at a time when AI tools are easily accessible and poised to be woven into work and life in ways we can only imagine—and some we can’t.
Across the Baltimore region, educators are grappling with a future that is already here. Once a staple of science fiction, generative AI is now a real tool that needs to be understood, harnessed, and, yes, sometimes restricted or even banned.
The educators we spoke to were surprisingly sanguine about the possibilities of AI—even excited about what it could bring to the classroom—and generally favor a proactive approach that embraces the technology. At the same time, they recognize that such tools come with risks and that the development of best practices remains a work in progress.
So what is generative AI? It's a transformative type of artificial intelligence capable of creating original content—from essays (everything from the History of Feminism to My Trip to the Grand Canyon) and computer code to modern art, classical music scores, podcasts, and more—and streamlining a variety of tasks. As generative AI tools continue to evolve and gain traction, educators face the challenge of ensuring students have the skills they will need in an AI-enabled future, including how to use AI tools effectively, ethically, and responsibly without becoming overly reliant on them as a replacement for in-depth research, synthesizing information, and critical thinking.
While so-called “adaptive” artificial intelligence tools (those capable of learning and adapting to new information) have been used in education for years, generative AI tools are a more recent development—ChatGPT, for example, became widely available to the public in November 2022.
“Right now, we are still in a place where educators are learning about AI,” says Tara Nattrass, managing director of innovation strategy at ISTE+ASCD, a nonprofit that seeks to help educators in K-12 and higher education use technology to improve education. “They are thinking through opportunities and risks.”
In addition to beginning to develop AI usage guidelines and revisiting their honor codes, many schools are seeking to enhance their digital literacy efforts. Educators are also exploring the ways AI can help streamline their own work—by doing things like assisting with administrative tasks, creating lesson plans, crunching data, or even tailoring learning to a student’s specific needs, which can create a more equitable learning experience for all.
And herein lies the paradox of AI: It has the potential to be simultaneously one of the best and worst things to happen to modern education. It presents tantalizing opportunities to improve the academic experience, enhance efficiency, free up time for higher-order thinking, and open doors for the future success of students. But, yes, it can also enable sophisticated cheating and even unintentional misuse by users who don't understand its limitations and risks.
“I FIND MYSELF USING CHATGPT EVERY DAY. IT’S EXTREMELY USEFUL IN SCHOOL.”
In Maryland schools, colleges, and universities, guidelines for the use of AI vary by district, institution, and even individual educator. Current practices can range from an outright ban on the use of generative AI by students, to allowing its use in specified ways, to a full-throated embrace of the technology. As Oweidat notes, not everyone is comfortable with incorporating AI into the classroom.
“People are still processing and trying to do what’s best for students, of course,” she says. “But some faculty are still grappling with the complexity of this and they’re not there yet.”
Aware of this complexity, the University of Maryland has been proactive. In November 2023, the university assembled a President's Commission on Artificial Intelligence to explore the use of AI on campus. And in April 2024, it officially launched the Artificial Intelligence Interdisciplinary Institute at Maryland, "focusing on responsible and ethical AI technology."
Students, meanwhile, are making use of AI tools in and out of school. In a spring 2024 campus-wide survey at the University of Maryland, 41 percent of students said they have used generative AI for academic purposes—and usage has almost certainly grown since the survey was conducted. The top three reported uses were generating ideas, improving content, and summarizing concepts. Students across institutions say they also use AI to help with everything from organizing their work, to writing code, to getting a quick synopsis of a long technical paper. They may use it to get answers to questions they feel silly asking in class or to punch up a paragraph they've written but aren't happy with.
For Towson University sophomore Krishan Patel, AI tools have recently become an invaluable resource. Although he’d used them since high school, “Honestly, before this semester started, I did not [expect to use] AI as often as I do now,” he says. “I find myself using ChatGPT every day. It’s extremely useful in school.”
For his algebra class, Patel found that having an AI tool solve math problems and show its work helped him practice solving similar problems on his own. For a speech class, Patel was able to use AI to brainstorm ideas. In both cases, his professors allowed and even encouraged such use.
And while Patel found his professors have been largely clear on allowable use—all addressed AI in their syllabi, with most prohibiting its use—students using AI for schoolwork can sometimes find themselves faced with tricky ethical questions. Like, is it cheating to ask AI for ideas for a paper you've been assigned? Or to summarize a reading for you? What about taking AI-generated editing suggestions that border on rewriting? The answer, for now at least, often is "it depends."
At Goucher, the honor code prohibits the unauthorized use of generative AI to write papers and essays or to complete other assignments and directs students to ask their instructor and check course policies to determine if AI tools are allowed at any stage of their work. When instructors allow AI use, the rule of thumb is that anything taken from AI and incorporated into texts must be cited, says Oweidat.
Still, “there’s a lot of gray area,” admits Oweidat. “That’s why it’s very hard to come up with, ‘and this is how we do it.’ It’s complicated, and it’s an evolving issue.”
Even for areas that aren’t gray—having AI write large chunks of an essay without disclosure or permission, for example—potential solutions pose problems of their own.
“AI detectors are unreliable and the implicit bias within them will often flag work of multilingual learners and other students as being AI-generated even when it is not,” says Nattrass. “And we are now faced with this environment of distrust sometimes in our schools with teachers asking, was this generated by AI [or] was it not?”
And it’s not just educators who worry about the use and misuse of AI. Students also worry—that their peers will use AI to gain an unfair advantage or that their own use of AI will get them into trouble. In the UMD survey on AI use, a majority of student respondents said they were worried about how to use generative AI while maintaining the university’s code of academic conduct.
In addition to worries about cheating or unethical use, AI presents other challenges. As Oweidat’s students discovered, large language models like ChatGPT sometimes “hallucinate,” or create inaccurate content. That means their output must be verified by the user, says Soheil Feizi, a University of Maryland associate professor of computer science and founder and CEO of RELAI, which provides tools aimed at improving AI reliability.
“One of the main issues we see [is that] when people use AI models, they trust them. And that can obviously have significant consequences,” says Feizi, who is currently on leave from University of Maryland but plans to return to the classroom in the fall.
Educators who use AI in their work also need to be wary of the limitations of some AI tools, which might create content that perpetuates stereotypes or relies on misinformation or disinformation, says Nattrass.
The use of AI in education raises other concerns, too: privacy, equity (not all students may have access to the technology), and the worry that an overreliance on AI will erode students' skills in crucial areas.
“WE’RE ALSO GOING TO HAVE TO GET THEM READY FOR THEIR FUTURE AND THEIR FUTURE IS GOING TO INCLUDE AI…”
At McDonogh, a private college-preparatory school in Owings Mills for students from pre-K through 12th grade, embracing AI is a logical extension of a long-standing pedagogical approach that seeks to develop subject mastery in students while also fostering the skills to think deeply, explore a range of perspectives, and collaborate.
“There are certain things when you’re talking about a school that value the liberal arts and sciences that’s fundamental, that [students] are always going to have,” says McDonogh Associate Head of School Kate Mueller. “But we’re also going to have to get them ready for their future and their future is going to include AI—and a lot of different technologies for that matter.”
In spring 2023, McDonogh 9th graders worked through a multi-session course with Inspirit AI, an online artificial intelligence education program developed and taught by Stanford and MIT alumni and graduate students. As 10th graders, the students are continuing their AI instruction this year. Programs like this one seek to help students understand AI's workings, potential uses and limitations, and ethical considerations.
Taking a proactive approach to technology use, allowing students to use AI in a structured environment, and talking through ethical decision-making create an environment that should limit the potential negatives, says Aisha Bryant, McDonogh's director of educational technology. As part of their AI education, students practice creating an AI prompt, evaluating the response, and considering the ethics of using the AI-created content before deciding how much or how little of it to use.
“Sometimes students, with pressure or being up against a deadline, don’t make the right choices,” says Bryant. But exploring and discussing ethical choices helps them develop as responsible users of technology.
“By the end, they’re not even thinking, ‘Okay, write my essay for me,’” she says. “AI just becomes more like an assistant, because we have shown them how many different ways they can use it and they truly understand how AI works.”
As exciting as the potential for AI is, Mueller points out that it’s just one of many tools students may use, none of which replace human-centered learning. “We want good, well-rounded citizens to graduate from McDonogh and to make sure they are building relationships and interacting with each other,” she says.
To that end, some educators are redesigning assignments to be more creative and open-ended, leaning toward tasks like group projects, oral presentations, and interactive learning experiences. The bonus: It’s harder for students to outsource that kind of work to AI, and such assignments build the durable skills students will need to thrive in an AI-integrated future.
As they move toward that future, educators and institutions will need to take steps to ensure all students have equal access to AI tools, says University of Maryland’s Feizi.
“What worries me is that we will have some people that are quite good at using these AI tools—and even better, they can contribute to these tools—and they are the ones who will see the majority of the benefits and that will potentially increase the economic gap and other disparities.”
For Goucher’s Oweidat, the path forward holds more exercises like the essay evaluation her tutors-in-training tackled, plus continued exploration and discussion of AI with faculty and students.
“The more I use the tool,” says Oweidat, “the more I teach about it, learn about it, the more comfortable I become and the more excited I become about the possibilities, which are endless really.”