Let's throw it out there: AI tools like ChatGPT can make our lives much easier by simplifying certain tasks. Instead of spending hours trying to put ideas into words, these tools can spit out sentences and paragraphs in a matter of seconds, turning a rough prompt into the output you wanted. In short, ladies and gentlemen, these tools have become the magic answer for reducing our workload. But how does that work out in the classroom?
Quite frankly, it is no secret that teachers make frequent use of AI tools in their daily tasks. From drafting detailed lesson plans to generating quizzes and tests, AI tools have changed the way teachers create materials for the classroom. The sheer speed at which these tools produce materials makes a compelling case for keeping them in the educational arsenal.
Now, let's look at the other side of the desk. Students are using AI tools for just about everything, from homework assignments to class projects. The question then becomes: are students getting assistance to complete their work, or are they letting AI do the work for them entirely, undermining any claim that they can produce quality work of their own? I think the answer is: a little bit of both.
AI and laziness can go hand in hand. Students exposed to AI tools without any ethical training may abuse the system and produce no original work of their own. Instead, they may depend on tools like ChatGPT, Google Gemini, and Microsoft Copilot to do all of their work...without breaking a sweat. These students forgo critical thinking in favor of letting automatic tools do everything for them. Some people may call this "working smart". Others may call it "cheating". Whatever name you choose for this situation, one thing is very clear: safeguards must be put in place to minimize the issue.
To determine whether students are writing original words or regurgitating whatever the AI platform throws out, teachers should know their students' work well enough to judge it accordingly. For example, if a student submits a term paper, have that student present it to the class and defend the material. If a student writes a paper arguing that one product is better than another, that student should be able to address the whole class and answer questions from his or her peers.
Another way to reduce over-dependence on AI tools is to disallow them when possible. For an important exam, administering it in pencil-and-paper form forces students to reason through the questions on their own.
The same goes for essays: having students write them without a computer forces them to come up with something concrete that they actually know and can talk about, instead of having a machine do all of their work.
Technology itself can also help detect abuse of technology. Teachers can require students to work in Google Docs or Microsoft Word with version history or "Track Changes" enabled, so the evolution of their ideas stays visible. An essay built up over several sessions looks very different from one pasted in all at once.
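For teachers comfortable with a little scripting, that revision trail can even be checked programmatically. What follows is a minimal sketch, not a tested tool: it assumes you have a Google Cloud project with the Drive API enabled, OAuth credentials already loaded into a creds object, the document's file ID, and the google-api-python-client library installed. It simply lists when each saved revision of a Google Doc was made.

```python
# Rough sketch: pull a Google Doc's revision timestamps via the
# Google Drive API (v3). "creds" and "file_id" are placeholders;
# you would supply real OAuth credentials and the document's ID.
from googleapiclient.discovery import build

def revision_times(creds, file_id):
    """Return the timestamp of every saved revision of a Doc."""
    drive = build("drive", "v3", credentials=creds)
    response = drive.revisions().list(
        fileId=file_id,
        fields="revisions(id,modifiedTime)",
    ).execute()
    return [rev["modifiedTime"] for rev in response.get("revisions", [])]

# A paper drafted over several days leaves revisions spread across many
# sessions; a paper pasted in wholesale often shows only one or two.
```

The point is not to automate accusations; it is to give a quick signal of which papers grew gradually and which appeared fully formed, so the teacher knows where to ask follow-up questions.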
There is no easy answer to whether AI should be allowed or banned for students. Clearly, AI tools offer real benefits, such as automating tasks and generating ideas. By the same token, they can be abused when students depend on them completely, unable to produce a single original idea and presenting work that the system clearly created. One thing is for sure: safeguards must be created to detect and curb dishonesty by students.
By: Nick The Computer Teacher (Pedro Nicolas Payano)
