Hi,
Many people complain that students use AI to answer questions, yet at the same time they ask AI to create courses, produce content, and even handle part of the decision-making within the environment. That is a fairly obvious contradiction. In my view, the responsibility is not the AI’s; accountability for mistakes, bias, or poor pedagogy lies with the institution that chose to adopt it and put it to work that way. The tool did not decide on its own to go into Moodle and perform tasks; someone chose it, configured it, and gave it room to act.
When AI helps the teacher by suggesting paths, summarizing data, improving texts, translating them (as I do), and creating images, and all of that still goes through human review, it remains an assistant. But when it is designed to act on its own, interfere with the student’s path, adapt content without monitoring, and make decisions that affect learning without real review, it has stopped being mere support, and that is exactly where the risk lies.
In the end, using AI to support the student, offering guidance and helping them find content in Moodle, is extremely useful; using AI to outsource pedagogical decision-making is something else entirely, and a very dangerous path.
P.S. If the student is going to receive content that is 100% generated by AI, then it would be better to give them Google Gemini’s study assistant, which is fantastic.
Best regards,
Eduardo Kraus
Translated using ChatGPT