AI Manager

Maintained by Peter Mayer, Philipp Memmel, ByCS Lernplattform
The local_ai_manager is a powerful Moodle plugin that enables the integration of AI functionalities for different tenants. Tenants are separated by specific user fields such as institution and department. The plugin has a modular structure, supports a variety of language models, and can easily be extended.
Latest release:
458 sites
494 downloads
19 fans
Current versions available: 2

The local_ai_manager acts as a central interface for connecting and managing different language models within a Moodle system. Tenant separation is realized through user fields such as institution and department, which allows AI resources to be clearly demarcated and managed per tenant.
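
How the tenant is derived is internal to the plugin, but conceptually it amounts to reading one of those user fields. The following minimal sketch illustrates the idea only; the function name and the fallback value are hypothetical and are not part of the plugin's API:

    <?php
    // Conceptual sketch only: derive a tenant identifier from the user's
    // "institution" profile field, falling back to "department" and finally
    // to a hypothetical default tenant. This is not the plugin's actual code.
    function resolve_tenant(stdClass $user): string {
        if (!empty($user->institution)) {
            return $user->institution;   // e.g. "school-4711"
        }
        if (!empty($user->department)) {
            return $user->department;    // secondary separation field
        }
        return 'default';                // hypothetical fallback tenant
    }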

Main functions:
  1. Modular architecture: The plugin is designed to support different language models (e.g. ChatGPT, Ollama, Gemini) and can easily be extended to support other models due to its subplugin structure.
  2. Define purposes: Administrators can define specific deployment scenarios (purposes) for the language models to provide different configurations for different use cases.
  3. Tenant administrators: Each tenant administrator has control over whether and which AI functionalities are activated for the users of their tenant.
  4. Credit management: Each tenant can independently procure credit and make it available to their teachers and students. This enables flexible and needs-based use of the AI tools.
  5. Detailed statistics: The tenant admin can view detailed statistics about usage per user and per language model. Additional statistics beyond the default ones can be enabled via capabilities.
  6. User control: The tenant admin can enable and disable each user individually.
  7. Role control: Each user is assigned a role, so the tenant admin can configure different language models for different roles, e.g. gpt-4o-mini for students and gpt-4o for teachers.
  8. Integration of self-hosted AI tools: In addition to external language models, AI tools hosted by your own organization (e.g. Ollama) can also be seamlessly integrated (see the sketch after this list).
  9. Extensibility: The plugin is designed to support future extensions and the integration of new AI tools.
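
To give a rough idea of what a connector for a self-hosted tool (item 8) ends up doing, here is a hypothetical sketch of a call against Ollama's documented /api/generate endpoint. This is not the plugin's connector code; the host, model and prompt are assumptions, and the endpoint format follows the maintainer's advice in the comments further down.

    <?php
    // Hypothetical sketch of a request to a self-hosted Ollama instance.
    // Not the plugin's connector code; host and model are placeholders.
    $payload = json_encode([
        'model'  => 'llama3.1',              // an assumed locally pulled model
        'prompt' => 'Summarise the last forum post.',
        'stream' => false,                   // ask for a single JSON answer instead of a stream
    ]);

    $ch = curl_init('https://your_ollama_host/api/generate');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 120,       // local models can be slow to answer
    ]);
    $raw = curl_exec($ch);
    curl_close($ch);

    $result = json_decode((string) $raw, true);
    echo $result['response'] ?? 'No "response" field in the Ollama reply.';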

The local_ai_manager provides a flexible and scalable solution that enables educational institutions to efficiently use and manage state-of-the-art AI technologies.

You need other plugins to work with the ai_manager: 

Contributors

Peter Mayer (Lead maintainer)
ByCS Lernplattform

Comments

  • Plugins bot
    Tue, 24 Sep 2024, 17:00
    Approval issue created: CONTRIB-9696
  • Aaron Tian
    Mon, 22 Sep 2025, 11:05
    This plugin gives extensive support for AI integration. Thank you so much, and if a later version could provide OpenAI-API-compatible support for alternative providers, that would be amazing!
  • Philipp Memmel
    Mon, 22 Sep 2025, 12:45
    Hi Aaron, thank you for your reply. Basically this already exists, but we were hesitant to expose the option, because there is no such thing as a truly OpenAI-compatible API: they all, at least to some minor extent, have their own ways of, for example, returning errors. We did not want users to believe that our plugin is buggy because they are trying to use "OpenAI-compatible" APIs. :) But we are likely to add a switch soon to allow this, as we of course see the need for it. Thanks for your response!
  • Muhamad Oka Augusta
    Mon, 29 Sep 2025, 20:46
    Is this plugin working? I keep getting a 404 error in AI chat and Tiny AI; the other AI tools don't work either. I tried Gemini AI first, which didn't work, then Vertex AI, which didn't work either.
  • Muhamad Oka Augusta
    Mon, 29 Sep 2025, 21:34
    The Gemini 1.5 Flash model has been retired; that's why I kept getting the 404 error. Just in case someone else has the same issue and is as foolish as I am.

    It would be nice if we could type in our own model name instead of choosing from a dropdown, so that if another model is retired we can simply enter a replacement ourselves and prevent further issues.
  • Inti Garces Vernier
    Tue, 7 Oct 2025, 21:00
    Hi, I'm trying to test the plugin with a local Ollama server and the llama3.1 model.
    The Ollama server receives the request:
    l will not be utilized
    Oct 07 14:56:06 debian-moodle.nos.localdomain ollama[33671]: llama_context: CPU output buffer size = 0.50 MiB
    Oct 07 14:56:06 debian-moodle.nos.localdomain ollama[33671]: llama_kv_cache_unified: CPU KV buffer size = 512.00 MiB
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: llama_kv_cache_unified: size = 512.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (f16): 256.00 MiB, V (f16): 256.00 MiB
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: llama_context: CPU compute buffer size = 300.01 MiB
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: llama_context: graph nodes = 1126
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: llama_context: graph splits = 1
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: time=2025-10-07T14:56:07.688+02:00 level=INFO source=server.go:1289 msg="llama runner started in 104.42 seconds"
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: time=2025-10-07T14:56:07.697+02:00 level=INFO source=sched.go:470 msg="loaded runners" count=1
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: time=2025-10-07T14:56:07.699+02:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: time=2025-10-07T14:56:07.701+02:00 level=INFO source=server.go:1289 msg="llama runner started in 104.44 seconds"
    Oct 07 14:56:07 debian-moodle.nos.localdomain ollama[33671]: [GIN] 2025/10/07 - 14:56:07 | 200 | 1m45s | 127.0.0.1 | POST "/api/generate"

    but it seems that when the Ollama server tries to send the answer, Moodle fails:
    Error code: generalexceptionmessage
    * line 190 of /local/ai_manager/classes/local/prompt_response.php: TypeError thrown
    * line 69 of /local/ai_manager/tools/ollama/classes/connector.php: call to local_ai_manager\local\prompt_response::create_from_result()
    * line 216 of /local/ai_manager/classes/manager.php: call to aitool_ollama\connector->execute_prompt_completion()
    * line 82 of /local/ai_manager/classes/external/submit_query.php: call to local_ai_manager\manager->perform_request()
    * line ? of unknownfile: call to local_ai_manager\external\submit_query::execute()
    * line 253 of /lib/external/classes/external_api.php: call to call_user_func_array()
    * line 83 of /lib/ajax/service.php: call to core_external\external_api::call_external_function()

    Any hint will be appreciated!
  • Philipp Memmel
    Tue, 7 Oct 2025, 22:05
    Hi,
    it's a bit difficult to debug just from your output. Maybe it's sufficient to raise the timeout in the AI manager admin configuration setting?
  • Inti Garces Vernier
    Wed, 8 Oct 2025, 00:17
    Hi, but using the built-in Ollama provider I receive a correct answer. I've made that work using https://localhost/ollama (served by a reverse proxy), but with local_ai_manager I can't get any response unless I use https://localhost/ollama/api/generate. Did you ever try Ollama with local_ai_manager?
    Thanks
  • Philipp Memmel
    Wed, 8 Oct 2025, 02:28
    I'm not sure I understand. You are supposed to define the whole Ollama endpoint, which would be https://your_ollama_host/ollama/api/generate. It may differ from what you have to insert into the core_ai Ollama provider, but local_ai_manager is not related to the core_ai subsystem at all, so it is implemented differently.
  • Inti Garces Vernier
    Wed, 8 Oct 2025, 03:44
    Sorry, I probably haven't explained it well. I used the core_ai subsystem to test the Ollama server. It works, but I want to use local_ai_manager, and in that case Moodle fails to parse the result sent by the Ollama server. So I need to know the parameters I must use to set up the connection from local_ai_manager to the Ollama server. The model is llama3.1.
  • Philipp Memmel
    Wed, 8 Oct 2025, 04:01
    It's supposed to work by providing the endpoint in the format https://your_ollama_host/api/generate. That should be sufficient.
  • Inti Garces Vernier
    Wed, 8 Oct 2025, 04:54
    OK, thanks. I will set a longer timeout; as you said, that could be the cause.
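
For readers who hit the same TypeError: the trace above suggests the plugin received a reply it could not turn into a prompt response, for example an empty body or a proxy error page after a timeout. A minimal, hypothetical sanity check of a raw /api/generate reply (this is not the plugin's own parsing code) could look like this:

    <?php
    // Hypothetical sanity check of a raw /api/generate reply; not the plugin's parsing code.
    // A proxy timeout or an HTML error page makes json_decode() return null,
    // which is the kind of value that can later surface as a TypeError.
    function looks_like_ollama_reply(?string $raw): bool {
        $data = json_decode((string) $raw, true);
        return is_array($data)
            && array_key_exists('response', $data)  // the generated text
            && !empty($data['done']);               // Ollama marks finished generations with done=true
    }

    var_dump(looks_like_ollama_reply('{"response":"Hello!","done":true}'));  // bool(true)
    var_dump(looks_like_ollama_reply('<html>504 Gateway Time-out</html>'));  // bool(false)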