AI for pedagogical purposes

by Marcel Almeida -
Number of replies: 2

Hi,

If an LMS uses AI for pedagogical purposes and relies on an agent to execute actions in the environment, who should be held accountable when the AI fails, reproduces biases, or makes an improper pedagogical intervention:

  • the agent, or
  • the institution?

When does an LMS stop being merely a supervised "assistant" and become an autonomous "agent"? At what point does AI support turn into a risky outsourcing of pedagogical decision-making?

In reply to Marcel Almeida

AI for pedagogical purposes

by Eduardo Kraus -

Hi,

Many people complain that students use AI to answer questions, yet at the same time they ask AI to create courses and content, and even to handle part of the decision-making within the environment. That is a fairly obvious contradiction. In my view, the responsibility is not the AI's: whoever is accountable for mistakes, bias, or poor pedagogy is the institution that chose to adopt it and put it to work that way. The tool did not decide on its own to go into Moodle and perform tasks; someone chose it, configured it, and gave it room to act.

When AI helps the teacher, suggests paths, summarizes data, improves texts, translates them (as I do), creates images, and all of that still goes through human review, it remains an assistant. But when it is designed to act on its own, interfere with the student’s path, adapt content without monitoring, and make decisions that affect learning without real review, then it has stopped being just support, and that is exactly where the risk lies.

In the end, using AI to support the student is extremely useful for giving guidance and helping the student find content in Moodle, but using AI to outsource pedagogical decision-making is something else entirely, and it is a very dangerous path.

P.S. If the student is going to receive content that is 100% generated by AI, then it would be better to give them Google Gemini’s study assistant, which is fantastic.

Best regards,
Eduardo Kraus

Translated using ChatGPT

Average of ratings: Useful (4)
In reply to Eduardo Kraus

AI for pedagogical purposes

by Marcel Almeida -

Hi,

"Human oversight" often ends up being more symbolic than effective, and in contexts of overload and constant pressure for efficiency, there is a growing tendency to accept automated decisions without careful analysis. The contradiction you point out (criticizing students while using AI institutionally) does indeed exist, but perhaps the central issue is not exactly inconsistency, but rather the absence of clear criteria to guide these two uses.

Finally, in practice, no one will be held accountable: neither those who designed the system, nor those who configured the AI, nor those who chose to forgo pedagogical mediation at certain points. Still, it does open an interesting discussion.