If You Haven’t Jumped Into AI Yet Part 3
Ramifications of AI for Education
This is Part 3 of a series. Part 1 | Part 2 | Part 3
When working with students, there are several important considerations for incorporating AI into the learning process.
If implemented correctly, AI can be an exceptional tool for learning at all levels, not just the upper grades. But it is not a tool to hand to students to explore without direction. Planning its use within a learning environment takes an understanding of its capabilities.
It also takes an understanding of AI’s weaknesses and problems.
As educators, we need to diligently plan how AI fits into the learning process in our classrooms. Having students do general “exploration” of AI is not something I recommend. It is up to the teacher to determine how to scaffold learners’ use of AI in a structured setting.
Ensuring that all students can develop AI skills will play an important role in equity and access; AI skill frameworks need to reach students across all trajectories, particularly because education is key to economic mobility (Ascione, 2024).
Let’s take a closer look at some of the problems AI might create for educators.
Anthropomorphism
Anthropomorphism is a term for humanizing technology, such as attributing human characteristics to an AI. This can take several forms.
First, it is important not only to understand but to convey to learners that LLMs do not really “understand” content. LLMs generate text by drawing on patterns in the writing they were trained on to “predict” which words best fit an answer to your question or prompt. This is not understanding! At times it can be eerily accurate, but never let that override your caution: LLMs cannot understand like humans.
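To make “prediction, not understanding” concrete for learners, here is a toy sketch in Python. The word probabilities are invented for illustration; a real LLM learns billions of such patterns from training text, but the principle is the same: the model only knows which word is statistically likely to come next.

```python
import random

# Toy "language model": for each context, how likely each next word is.
# These numbers are invented for illustration -- a real LLM learns them
# from enormous amounts of text, but it still stores likelihoods, not meaning.
next_word_probs = {
    "the capital of france is": {"paris": 0.92, "lyon": 0.05, "nice": 0.03},
    "photosynthesis takes place in the": {"chloroplast": 0.85, "leaf": 0.10, "cell": 0.05},
}

def predict_next_word(context: str) -> str:
    """Sample the next word from the stored probabilities: prediction, not comprehension."""
    probs = next_word_probs[context.lower()]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(predict_next_word("The capital of France is"))          # usually "paris"
print(predict_next_word("Photosynthesis takes place in the"))  # usually "chloroplast"
```

The point to convey to learners: nothing in that table “knows” anything about France or biology; it only knows what usually comes next.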
It is important to note that some companies, particularly OpenAI, have explicitly promoted the human-like conversational nature of their products. At the May 2024 event where they introduced GPT-4o (the “o” stands for “omni”), almost the entire demonstration focused on verbal interaction with ChatGPT. They see a competitive advantage in emphasizing the “magic” of interacting with their AI. In other words, they promote anthropomorphism as a selling point, which educators need to take with a grain of salt.
As educators, we need to cut through the “magic” thinking and focus on the capabilities that matter for learning. Coach learners that AI is not a “friend” but a tool, akin to a calculator. Help them realize that AI cannot understand as humans do; it only imitates narrow aspects of human interaction, and it has no perspective grounded in life experience. As you practice interacting with AI, you start to see which questions it can answer solidly and which questions expose its weaknesses. Learners need to develop that skill, and educators need to help them develop it.
Accuracy
If you thought deciphering the validity of content from the web was difficult, the advent of AI just made it harder. I recommend teaching learners to be skeptics and to treat everything coming from an AI as a possible “hallucination.” Once you have determined a response is valid, then run with it.
One of the problems of using AI is consistency: teachers may test ahead of class how an AI reacts to a question or prompt, but because responses are generated probabilistically rather than retrieved from a fixed source, the same prompt can produce something different in another setting. You cannot always depend on consistency with AI (Cool Cat Teacher Blog, 2024).
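You can see this for yourself by sending the identical prompt twice and comparing the answers. Below is a minimal sketch, assuming the official openai Python package, an API key in your environment, and a model name (gpt-4o-mini here) that may change over time; any hosted chat service behaves similarly.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

prompt = "In two sentences, explain why the Roman Empire declined."

# Ask the identical question twice. Because the response is sampled rather
# than looked up, the two answers will usually differ in wording and may
# differ in substance -- exactly why a prompt tested before class can behave
# differently in front of students.
for run in (1, 2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Run {run}: {response.choices[0].message.content}\n")
```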
Hopefully, it is becoming clear why I highly recommend using AI services that cite their sources. Without those citations, it becomes ten times as much work to judge the validity of an AI response. While citations by themselves are no guarantee of accuracy, they make that determination much easier. Besides, recognizing the importance of citation is something for students at all levels to learn.
Using AI Content for Cheating
This is a huge concern for many educators. Because AI can reword material, it is nearly impossible to search on specific verbiage to detect plagiarism. While there are tools that claim to detect AI-generated writing, the task is much harder given the nature of generative AI.
Educators tend to focus on the tools used to plagiarize. In reality, the problem of cheating reflects more on the learning process and grading procedures of the classroom than on the tools being used. In a traditional setting where learners primarily have one shot at a summative grade, there is every incentive in the world to cheat. On the other hand, in a mastery-based learning environment, where all work receives formative feedback for improvement throughout the learning process, cheating becomes a lesser issue. When students realize the focus is on the growth of their learning as they improve their drafts, cheating with an AI becomes almost irrelevant.
Teachers need to set expectations for how to use AI in the learning process. For example, in my graduate class, I established a “policy” about AI use. I encourage everyone to create the first draft in their own words; don’t use AI to create the first draft. As they refine their own words, incorporating AI can then help identify gaps in their thinking. When students turn in a paper, I return it with instructional comments only, no grade. They can use the comments to improve and resubmit. In that setting, depending on AI to create the work from scratch becomes less relevant, especially when everyone has opportunities to improve their learning by resubmitting. Is it possible someone will still use AI to do all the heavy lifting? Sure, but it is less tempting now. In a setting focused on improving learning, as opposed to submitting for a grade, “cheating” with AI makes less sense.
If students using AI to cheat is a high concern, consider rethinking your classroom learning and grading procedures. A mastery-based approach with an emphasis on formative feedback for growth is the best way to remove the incentive to plagiarize with AI.
Privacy
Some companies have banned the use of AI because employees’ queries were giving away too many “secrets” about their work that could potentially fall into the hands of competitors. The privacy of our chat inquiries is always a concern to keep in mind.
It is important to note the various ways you can access AI products:
- Websites where all the content and processing is on the other end of the cloud
- Apps that access these web services, where all the content and processing is still on the other end of the cloud
- Apps on your device or computer that do the processing locally, without handing information over to the cloud
From a privacy perspective, the last option is the most desirable, but unfortunately, almost all the popular services sit at the other end of the spectrum. Be aware of the kinds of questions posed to AI and caution learners against giving away too much personal information, especially with conversational AI tools. When possible, use AI tools that process everything locally on the machine, as in the sketch below.
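For anyone curious what the local option looks like in practice, here is a minimal sketch assuming the open-source Hugging Face transformers library and a small demonstration model (distilgpt2). Classroom-grade local tools will differ, but the privacy principle is the same: after a one-time model download, the prompt is processed entirely on the device.

```python
# Local text generation: nothing typed here is sent to a cloud service.
# Assumes `pip install transformers torch` and a one-time model download.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "A short note to families about our upcoming science fair:"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```

A tiny model like this produces rough text, but it demonstrates the trade-off: weaker output in exchange for prompts that never leave the machine.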
Can’t I Just Wait for AI Hype to Fade Away?
I don’t see that happening. Microsoft has built AI into nearly every segment of its services and tools, everything branded as Copilot. It has even branded a specific class of Windows PCs (Copilot+ PCs) with minimum processing-power standards for handling AI tasks. Apple recently announced extensive plans at its developer conference to incorporate AI across all its platforms, something it calls Apple Intelligence. Google has not only released Gemini but has begun placing AI-generated overviews at the top of its search results pages. All indications are that AI is becoming part of nearly all the software tools we use.
It is to your advantage to learn about AI sooner rather than later, even if you are a huge skeptic. It will not be something you can ignore. Understanding its limitations as well as its strengths is to everyone’s benefit.
I have been around long enough to remember when email first came to schools. At first it was a novelty for early-adopter educators, but over time it became the primary communication tool in buildings. People who chose to wait it out soon found themselves out of the loop on what was happening around them. The level of AI adoption by technology companies alone is an indication that you can’t ignore AI much longer, any more than someone could ignore email in the early days of its implementation.
So dig in and figure out how you will implement AI in the classroom. If you have colleagues who are holding out, you may want to start encouraging them to move forward. You will be doing them a service.
This is Part 3 of a series. Part 1 | Part 2 | Part 3
Resources
Ascione, L. (2024, May 2). New group targets AI skills in education and the workforce. eSchool News. https://www.eschoolnews.com/digital-learning/2024/05/06/new-group-targets-ai-skills-in-education/
Cool Cat Teacher Blog. (2024, June 8). Some big AI problems: The ELIZA effect and more. https://www.coolcatteacher.com/some-big-ai-problems-the-eliza-effect-and-more/