PANTSU PROPHET



THE REAL PROBLEM WITH AI IN THE CLASSROOM

Look at the state of modern education today and the biggest problem, at every level, is artificial intelligence. Traditional homework assignments, where students answer a list of questions by writing something original, are no longer viable unless they are completed entirely in the classroom or another supervised environment. Students can simply paste the prompt into an AI and copy out the answer without retaining any of the content. They can even do this for the kind of robust, well-written theses that separate graduate students from the rest. So what is to be done? And what is the real danger here?

In many ways, this fear reminds me of the status Wikipedia had back when I was in university. Wikipedia was a kind of bogeyman that was taboo to acknowledge in any field of higher education. But everyone knew we consulted it behind closed doors. Of course, we weren't allowed to cite Wikipedia in assignments, and there is good reason for that. But many of us misunderstood why it was so forbidden.

The real issue with Wikipedia is not that it contains misinformation. Any honest college professor (who isn't 1000 years old) won't claim that because Wikipedia can be edited by anyone with no oversight, it will give you a totally distorted understanding of the field. The problem with Wikipedia is that it flattens out rich areas of academic research. It is written "by committee" and, in its effort to provide an overview of all knowledge in a field, becomes bland and general. Have you ever sat down and tried to actually read a long Wikipedia article from beginning to end? I have. And let me tell you, even the most boring and dry textbook was more entertaining. Will Wikipedia give you genuine knowledge? Yes. But it does so in a way that often lacks context, lacks relevance, and gives the illusion of finality. In trying to create an overview of all human knowledge, Wikipedia's articles are very often less than the sum of their parts.

AI is much the same, but to an even greater degree. The real problem with AI is not that it will usually give students a wrong answer, although it should be said that the tendency of AIs to get things wrong and make up strange stuff is far, far greater than Wikipedia's ever was. But even when the information AIs give us is accurate, the danger they present is that students will increasingly rely on the authority of an AI and outsource their thinking to it. AIs give the illusion that every question has already been decided and settled.

This is the real danger of AI and many other modern technological achievements. Some old fogeys complain that zoomers are less intelligent because of their reliance on smartphones. But this complaint trades on an equivocation. Zoomers may not remember it, but I am old enough to remember a time when we had conversations and would often think something like, "What was that actor's name? I can't remember..." We would have to stop and try to remember, or think it through with others, instead of just pulling out a phone and looking it up. We ignore just how much has changed now that we can constantly check these kinds of things. So if we define intelligence as the mere knowledge of facts, zoomers often surprise me with how well-informed they are about all kinds of things, far exceeding most of the kids I remember from high school.

What does pose a challenge, however, is the way this knowledge seems increasingly outsourced and irrelevant to them. They know so much more, but it is a very low level of knowledge. The crudest form of learning is mere recall; the higher levels involve creative integration and creative engagement. And the danger of AI is that it skips all the levels where the things we really care about happen. I always say that an AI can tell you what is true, but not how you feel about it. At least, not yet. And the danger of excessive AI use is that it makes students more and more comfortable with being told how to feel, outsourcing their thinking and their feelings alike.

So what should educators do? I remember hearing Wesleyan University president Michael S. Roth give an interesting answer: have students look at an AI's response explicitly and evaluate it. Ask: what does this leave out? What does it get right? Use it alongside sources written by humans. I think this is probably the pragmatic answer for educators going forward. You can't just act like AI doesn't exist and warn your students never to use it on the honor system. You have to take it head-on and teach students to keep thinking for themselves in spite of the availability of AI. But it's the kind of change that educators will have to get used to.

