AI Text Question Type: (Essay with auto marking)

by Marcus Green
Number of replies: 86

Yesterday I gave my keynote presentation at MoodleMoot Japan 24, titled “Automated feedback for language teaching”, in which I introduced my AIText question type.

This is a fork of the Moodle core essay question type, but with first-pass grading and feedback done by a Large Language Model (ChatGPT in this case) as soon as the student presses the submit button on each question.
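The plugin itself is PHP, but the general pattern is easy to sketch: wrap the student's answer in a marking prompt, ask the model for a structured reply, and clamp the result. The prompt wording, JSON shape, and function names below are my own illustrative assumptions in Python, not the aitext plugin's actual code:

```python
import json

def build_marking_prompt(question, student_answer, max_marks):
    """Wrap the student's answer in a grading instruction for the LLM.

    The wording is a made-up example; the real plugin stores its
    prompt template in the question settings.
    """
    return (
        f"You are marking a short essay answer.\n"
        f"Question: {question}\n"
        f"Student answer: {student_answer}\n"
        f"Award between 0 and {max_marks} marks and explain why.\n"
        'Reply as JSON: {"marks": <number>, "feedback": "<text>"}'
    )

def parse_llm_reply(reply_text, max_marks):
    """Pull marks and feedback out of the model's JSON reply, clamping
    the marks so a hallucinated value cannot exceed the maximum."""
    data = json.loads(reply_text)
    marks = max(0.0, min(float(data["marks"]), max_marks))
    return marks, data["feedback"]

prompt = build_marking_prompt("Describe photosynthesis.", "Plants eat light.", 10)
# Simulate a model reply that over-awards marks; the parser clamps it.
marks, feedback = parse_llm_reply('{"marks": 12, "feedback": "Too brief."}', 10)
```

The clamp matters in practice: as discussed later in this thread, LLM grading is "variable", so the surrounding code should never trust the raw number.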


Attachment ai_text_25Dec24.gif
In reply to Marcus Green

by Dominique Bauer
Nice work!

One question though. Is it necessary to repeat "Response provided by ChatGPT" with every answer? It seems to me that once at the beginning of the quiz is sufficient. (Translation provided by ChatGPT)
In reply to Dominique Bauer

by Marcus Green
Good question, Dominique. I refer to that setting as "the disclaimer" and it is configurable to be any string you choose, which will be dynamically translated into the user's current preferred language. I feel it is very important to make clear that the response is not from a human, and I would expect some users to change the prompt to something stronger. Only this morning I interacted with ChatGPT and it suggested that someone else was actually Tim Hunt and had authored the Moodle quiz engine.
In reply to Marcus Green

by Martin Dougiamas
Correct, we should always make sure others know when we use AI.

In the Moodle AI Principles this is called "Transparency" and it's one of the most important ones since it's something every single one of us needs to remember.
In reply to Martin Dougiamas

by Dominique Bauer
Hello Martin,

Very well in terms of form.

Not so good for content: "Ils ont gardé une place pour vous" is a poor word for word translation. smile

In French, one would say "Ils vous ont gardé une place" or better "Ils vous ont réservé une place", which are more natural and idiomatic ways to express the idea. ChatGPT knows that, but you have to insist a bit.
In reply to Marcus Green

by Rick Jerz
This is interesting, Marcus.

I gave it a try in ChatGPT 3.5 to see what the results would be. Below is what was produced.
 
Perhaps the correct answer will depend upon the "context" of this particular question.
Attachment ChatGPT Results 2.png
In reply to Rick Jerz

by Marcus Green
The magic of all this comes from a good understanding of how to prompt the large language model, which I have not yet acquired. The excellent Gordon Bateson was at the Moot and he has a much greater understanding than I have. I am using ChatGPT 4. I spent US$10 at the start of December and, after all the testing I have done since, I still have a few dollars left. However, I am concentrating on allowing it to work with other large language models before refining my understanding and control of the prompting.
In reply to Rick Jerz

by Dominique Bauer

Sure, all of this is great, but only if the students don't have access to an AI language model themselves.

Google Gemini:

MoodleForum_20240219_2032.png

In reply to Dominique Bauer

by Marcus Green
What scenario are you commenting on? I would expect my students to have access to LLMs, but, when they took this sort of test in my classroom, not to use them on the first attempt. I have little interest in summative assessment, which in my view cannot be done reliably unless there is a human moderator in the same room.

When I used quizzing in my teaching, in the first week I would announce there would be a 10-mark quiz that did not go into the final grade. The students would cry with despair and roll their eyes. They would moan a bit in week 2; by week 3 they quite enjoyed it; by week 4 it was one of the highlights. However, by week 5 I sometimes did not have a quiz prepared, and they would complain at the lack of a quiz as if they had been short-changed.

The nice thing about a formative quiz is that it gives teachers some instant feedback about learning based on the grade, but I never wanted to get too focused on grades, and with this new question type I suspect the grading may sometimes be quite variable depending on the quality of the prompting.

In reply to Marcus Green

by Marcus Green
I might have sounded over defensive in my reply to Dominique, but he asks an important question. There was a bit I cut out from my keynote presentation at MootJapan24 that addresses the issue of using other tools when attempting formative assessments. I have been using Duolingo for several years and for several languages (Spanish, German, Japanese and possibly French in the near future).

When attempting an exercise I occasionally use some other language tool to find the correct answer. This doesn't undermine the value of Duolingo; it is just part of my learning. Perhaps I am more motivated than some students, and some students would use other tools simply to avoid the learning bit and just get some information into the tool or quiz. That is where the role of teachers comes in: to be aware of what students are doing, and potentially let them use any tool to help learning, but to try to ensure they don't use tools to avoid learning.

There is a world of people who have to manage finance who would like to automate away teachers. They need to be constantly reminded that no technology can do this. The history of education technology has looped through this issue, and will continue to do so.
In reply to Marcus Green

by Rick Jerz
So, if a student sees that ChatGPT is "grading" a question, might they use ChatGPT to compose their response? In other words, might we end up with ChatGPT both creating a reply to a question and a response to it, meaning ChatGPT talking to itself? Where is education and instruction happening?

I am not trying to debate this issue; I am just thinking a little out loud.
In reply to Rick Jerz

by Marcus Green
Good question Rick, it is worth debating.

" Where is education and instruction happening?"

The short answer is that it is in the hands of the teacher.

The scenario of ChatGPT "talking to itself" is a little like when students get an essay mill to write their assignments, and the student may learn nothing of the topic. Fortunately there is a tried and tested way of avoiding this: have a short conversation with the student to see if they understand what they have submitted. This is a problem I was dealing with in my teaching career between 2003 and 2012, when students would use the miracle of copy/paste from the Web. I had repeated short conversations that went a little like this:

Me: Did you write this?
Student: Yes
Me: What does this sentence mean?
Student: I don't know (looks away)
Me: Let's go back a bit, look at some resources, and get you to do some writing.

I didn't get especially annoyed about this; avoiding difficult tasks is entirely understandable.
In reply to Marcus Green

by Rick Jerz
Hmmm, good thoughts.

We may be back to where we were around 40 to 50 years ago, with students "buying" their college degrees. Perhaps technology has only changed the way a student can do this.

Two graduated students, standing side by side, might look identical "on paper." But one student might have spent time learning while the other spent time "cheating."
In reply to Rick Jerz

by Joseph Thibault
Rick, sadly, the essay and degree mill markets are thriving (they aren't relics of the past)!

I've published a plugin that collects and stores copy-paste info, letting students comment on what is being pasted, while also collecting some key event data from the writing process. I think it could work well for quizzes. Right now, it works for TinyMCE.

It's getting refactored to submit to the database now, but the current code is available here: https://github.com/cursiveinc/moodle-tinymce_cursive (we'll have a new, cleaner version posted in a week)
In reply to Joseph Thibault

by Marcus Green
The issue of dubious qualifications appears to me unrelated to new (or old) technology. It will always be a problem without motivated and skilled human intervention and regulation.
In reply to Marcus Green

by Matt Bury
Hi Marcus,
This looks particularly useful but as you say, it has some caveats about how it should be used both practically & ethically. Perhaps changing the name to something like "Formative English Question" or "Question with Formative Feedback" would make this more explicit? Maybe include a strong warning that it is not intended for & is inappropriate for use in summative tests, e.g. end of course, for credit exams?

I've met too many teachers who seem to believe that LLMs are some kind of oracle & trust them implicitly, regardless of any warnings & explanations they may hear.

You know, I'm thinking about user expectations when they're browsing the plugins repository in de-contextualised circumstances & maybe not sufficiently aware of what the differences between formative & summative assessments are, their purposes, & appropriate uses. I think it'd be a good idea to at least point this out.

Just my €0.02!
In reply to Matt Bury

by Visvanath Ratnaweera
In reply to Visvanath Ratnaweera

by Marcus Green
This comment amused me
"Teachers are using AI to grade essays that students likely used AI to generate."
I have just started reading this book
"Educator Guide to Using ChatGPT: An Essential Tool for Saving Time, Supporting Your Learners, and Thinking about AI for Education" by Jon Fila (Kindle edition).
In reply to Matt Bury

by Marcus Green
Matt, the scope goes well beyond English; I suspect it would do well for learning other languages, but I am sadly somewhat monolingual. I only concentrate on English as it is a non-programming topic I am familiar with, and it does seem quite good at it. I was not entirely convinced by the name AIText, but it will do for now.

Teachers can moderate the AI feedback by manipulating the quiz review options so students don't see the responses until the teacher has previewed them. I'm not sure how commonly that will be used.
As I have said elsewhere, it is mainly a thin layer over the "magic of the LLM", but I expect sooner or later people will complain about the question type when it is only reflecting the prompts and the LLM behind it.

However, I don't think it should ever be used for any type of high-stakes assessment. The use of the term AI pushes people to give these things more credence and authority than they deserve. One of the first things I put in was the auto-translated "disclaimer" at the end of each item, which defaults to "Response provided by ChatGPT". That is configurable in settings, and I am expecting people to develop more explicit warnings, though I don't want it to be like emails where the unread disclaimer at the bottom is longer than the content.

The question is not if this stuff gets used, it is when, by whom, and at what price, and the discussion in this thread should help to highlight the issues. With reference to price, the code is being created under the standard GPL, and the back end is being tweaked to work with self-hosted LLM systems (Ollama specifically at the moment).


In reply to Marcus Green

by Ralf Hilgenstock

I played a little bit with German text.
The feedback identified wrong words, but it also flagged correct words and suggested they were incorrect:

  • 'Stadt' sollte 'Stadt' sein. This makes no sense: the suggestion is to replace the correct word with the same word.
  • The German personal pronouns differ for masculine, feminine and neuter (he-she-it -> er-sie-es). The correction identifies 'es' as the personal pronoun for 'the city', but Stadt (city) is feminine in German, so replacing it with 'es' is incorrect. Sometimes German grammar is not logical.

@Marcus: it's a great tool and I will present it at the Learntec fair next week. Thanks for your work.

In reply to Ralf Hilgenstock

by Marcus Green
Hi Ralf, I am delighted you have been testing it. I would not be surprised if there was a bias towards the English language.

One of the many reasons I have been experimenting with models other than the ones from OpenAI (ChatGPT) is that I am anticipating fairly specialist models to emerge, so in the latest release one question could connect to a model specialising in maths while another question might connect to a model specialising in languages/translation. I urge anyone experimenting with this stuff to understand that it is not magic and it will get things wrong. This is a screenshot from the latest version in the master branch. (Though if you want to do maths there is nothing to beat STACK, which I have been exploring extensively recently.)

In reply to Marcus Green

by Thomas Wieland
This is exactly the type of question I am looking for. Will you make the code publicly available? I just wanted to start coding something very similar on my own.

 
In reply to Thomas Wieland

by Marcus Green
Hi Dani
The question type is publicly available from my github repository under the name moodle-qtype_aitext. Let me know when you have found it and what you think.
In reply to Marcus Green

by Thomas Wieland
Hi Marcus,

Thank you very much. I downloaded the code in the meantime and will test it. Feedback will be provided soon!
In reply to Marcus Green

by Emma Richardson
This is awesome, Marcus. I was in a Responsible AI training yesterday where I learned that you can submit a rubric and a student essay and get it graded. It would be great to expand this concept to full-length essay questions. Hopefully with a teacher review required before releasing...
In reply to Emma Richardson

by Marcus Green
Teacher review (moderation) was probably the number one request after my presentation at MootJapan24 last Sunday. However, I have placed a higher priority on getting it to work on other (open) platforms as well as ChatGPT. I didn't decide to spend the second half of my life creating free software only for it to be restricted to processing by unaccountable corporations. The good news is that since I arrived in Japan I have been working on it, and I spent much of the 15-hour flight back playing with it, perhaps the ultimate demonstration of AI without the internet. I am on the train from London to home in York right now.

I should make it clear that this is all very much beta software; on that flight I noticed all sorts of things that need bits of work, and I am trying to work out which model is best (currently mistral is in the lead). As you might imagine, my old personal 16GB laptop without any fancy GPU, running both the Moodle stack and Ollama LLM hosting, was rather glacial, but I got no timeouts, and with a little more hardware it should be quite nice.

https://github.com/ollama/ollama
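For anyone wanting to try the same setup: Ollama exposes a local REST API (port 11434 by default). This Python sketch only builds the JSON payload a Moodle back end might POST to `/api/generate`; no network call is made, and the model name and prompt are just examples:

```python
import json

def ollama_generate_payload(model, prompt):
    """Build the request body for Ollama's POST /api/generate endpoint."""
    return json.dumps({
        "model": model,      # e.g. "mistral", fetched beforehand with `ollama pull mistral`
        "prompt": prompt,
        "stream": False,     # ask for one complete JSON reply rather than chunks
    })

payload = ollama_generate_payload(
    "mistral",
    "Correct this sentence and explain the error: He go home.",
)
```

With `"stream": False` the response arrives as a single JSON object whose `response` field holds the generated text, which is the easier shape for a grading back end to parse.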

Much of the magic with this stuff is in the prompting and the most excellent Gordon Bateson  (author of the ordering question type and many other good things) did a good session on it, and I am going to review his notes.

Ah, some of my refactored code just worked!.... Two more stops until York.  
In reply to Marcus Green

by Don Hinkelman
Hello Marcus,

First of all, thank you for a wonderful collaboration all through the MoodleMoot Japan. I had to leave early, so I am so curious as to how the keynote "Art of Questioning" and the "MoodleBox" sessions went--but we'll get to that on another thread. I am now on a red taxi-bus to Chiang Mai, Thailand with Moodle-loving students who have grown up with Essay Auto-grade--Gordon Bateson's most important plugin (even more than 'Ordering'). This quiz question type, conceived and funded by the imaginative Matt Cotter nearly eight years ago, is actually a precursor to your AI Text question type. Have you used it? See the Moodle docs here.

If you haven't, in short, it is a very simple tool that automatically and temporarily grades a student essay in a quiz. It uses two basic criteria:
1) Word or character count (note that character count is essential for Asian languages)
2) Use of key words (a teacher sets points awarded, based on whether key words/characters were used in the essay).
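The two criteria above can be sketched roughly in Python; the function, the per-keyword marks, and the length bonus are invented for illustration and are not Gordon Bateson's actual algorithm:

```python
def autograde_essay(text, min_words, keyword_marks, length_marks=5):
    """Toy version of the two Essay auto-grade criteria:
    length and key-word use. The weighting is invented.
    """
    # Criterion 1: word count. (A real implementation would also support
    # character count, which Don notes is essential for Asian languages.)
    score = length_marks if len(text.split()) >= min_words else 0

    # Criterion 2: marks awarded per key word actually used.
    lowered = text.lower()
    for word, marks in keyword_marks.items():
        if word in lowered:
            score += marks
    return score

score = autograde_essay(
    "Culture shapes communication in many ways.",
    min_words=5,
    keyword_marks={"culture": 2, "communication": 2},
)
```

Because the criteria are transparent and deterministic, the teacher can see exactly why a score was awarded, which is precisely what the LLM-based approach, as Marcus notes elsewhere in the thread, currently struggles to explain.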

The resulting essay question score is very motivating to students, who can then see their total quiz grade immediately rather than waiting for a teacher to grade it days or weeks later. I call this a 'temporary' grade because the teacher can and should override the grade with a quick visual check. In my five years' experience using Essay Auto-grade with hundreds of students, I find the auto-grades to be 90-100% good at producing a relevant score for low-stakes formative assessment, and cases of students tricking the system to get a high grade are rare to non-existent. However, what this question type really lacks is AI feedback--which I am so excited you are developing. There is a feature of Essay Auto-grade called 'sample answer' where AI could be great at giving a comparison analysis of ways to improve an essay answer.

Since you didn't refer to the Essay Auto-grade question type (downloadable here), I will assume it is another one of Japan's best-kept Moodle secrets. And when I read about your AI Text question type just now (sorry I did not talk to you about this at the Moot--there were so many other topics we covered), I immediately thought the two question types are complementary and perhaps should be combined. Maybe my intuition is wrong, but let's explore this--and what Joe Thibault is doing. I'll see if Gordon and Matt can join this thread. If you have a chance to try and compare Essay Auto-grade, let us know what you think.
In reply to Don Hinkelman

by Marcus Green
Hi Don
I have been massively inspired by Gordon's work going back more than a decade, and I was delighted to meet him again in Japan. I did have a slide mentioning Essay Auto-grade, to illustrate how I came to create my question type, but removed it to simplify the narrative. People should be aware that my question type is still beta, whereas Gordon's is well tried and tested.
In reply to Marcus Green

by Matthew Cotter
Hello Marcus, firstly congratulations on the new question type. Wow, taking it to the next level!
I would love to use this in my courses.

I originally came up with the idea for the Essay auto-grade question type quite a few years ago for homework assignments, basically where students wrote their opinions and reflections about weekly topics for an intercultural communication class. I now rely heavily on it for a lot of my classes, and other teachers in my university and throughout Japan use it extensively.

I wish I could have come to the moot to see your presentation fully. I had other duties at my university but co-presented on Zoom about the Video Assessment Module. 

As Don says, I would be interested in seeing how this question type develops and any integration with the Essay auto-grade question type would be intriguing indeed! 
In reply to Matthew Cotter

by Marcus Green
Hi Matthew
I considered doing a fork of Gordon's question type, and it might have been a reasonable path. Adding the AI bits was surprisingly easy, but I need to get it polished before I have lots of people using it. I should have a new version sorted quite soon that will be more reliable and handle the prompts better.
In reply to Marcus Green

by Joseph Thibault
Hi, pretty excited about this. 

Related to the models being used, my understanding is that hosting the model might be both the complicated and the costly part of offering this type of question more widely (I'm assuming that running it off that laptop is not a long-term solution).

For testing and proof of concept, is there a need to consider how we can host a common model(s) in some way for developers and testers to use? 

I'm eager to help in any way that I can.

-Joe
In reply to Joseph Thibault

by Marcus Green
The basic hosting of models via Ollama is very, very easy. My old laptop cost about £350 and doesn't have an NVIDIA GPU, so the performance is terrible, though stuck on a plane for 15 hours it did what was necessary. However, if you compare the response time of the feedback with what is typical from a teacher, it is very, very fast indeed. So queuing could be an option; even if the LLM took 20 minutes to respond it would still be quite good.

I may be buying a fancier laptop e.g. a Lenovo P52 for testing purposes, and some of the people I spoke to in Japan are very vigorously investigating all of this and they have some nice hardware. The last thing I did to the question type last night was work on the automated tests, so it is trickling forwards.
In reply to Marcus Green

by Brett Dalton
@marcus it would be good to get an idea of the performance there. 20 min is fine if you are in a queue of 100, but if it's 20 min per response, then even for a moderately sized institution that explodes to days very quickly. I worked in lecture recordings in the early days and understand that processing time is critical. Even if you went to GPU-equipped server hardware, the cost explodes quickly: something like a Hyperplane 8-H100 is in the US$300k range, and single-plane (1RU) kit runs in the US$20k range, but it all really depends on how much scale you are talking about and the size of the model. A p3.2xlarge EC2 instance (8 cores, 1 GPU, EBS-only storage), which is a smallish GPU-capable instance on AWS, is US$3.06 an hour, or about US$27k annually, plus the cost of any EBS volumes you need to mount on it.

Performance metrics for training and actual use would be really interesting for these types of discussions. I'm all for open-source models, but it's important to understand their limitations and the requirements for them to be used effectively.
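As a quick sanity check on Brett's AWS figure, the annual cost of one always-on instance is just the hourly rate times the hours in a year:

```python
# Rough annual cost of keeping one GPU instance running 24/7,
# using Brett's quoted rate of US$3.06/hour for a p3.2xlarge.
HOURS_PER_YEAR = 24 * 365          # 8760
hourly_rate = 3.06
annual_cost = hourly_rate * HOURS_PER_YEAR   # ~US$26.8k, before EBS storage
```

So the "US$27k annually" figure holds for a single instance running continuously; scaling to peak marking load, or adding storage, only pushes it up.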
In reply to Brett Dalton

by Marcus Green
Twenty minutes was a figure I pulled out of my imagination. In actual testing with Ollama on my 8GB Raspberry Pi 5 (and of course with just me on it), the time was under 3 minutes. My point is that people should have a choice, and in jurisdictions where there are limitations on what can legally be used, that choice can be essential. The thing about the Pi is that it gives you a fixed hardware reference point, not that I am advocating it as a good general option. It will be interesting when various go-faster/AI-specific hardware options appear for it. I should add that was around 3 minutes for a single question, so a 10-question quiz might involve 30 minutes of processing time.

There are students who would be surprised and pleased to get feedback within a week. When I worked for a small UK university students were expected to get feedback on essay submissions within 2 weeks. That is not the same as feedback on short English grammar quiz questions however.

Just want to repeat that I see this as "preliminary feedback". Large Language Models can return some very "interesting" results.
In reply to Marcus Green

by Victor Correia

Hi Marcus

How does one connect Ollama with your AI Connector plugin? I would like to test your question type using Ollama to compare responses.

In reply to Victor Correia

by Marcus Green
https://github.com/marcusgreen/moodle-tool_aiconnect/wiki

It can take some trial and error before it works.

I get good results from models that occupy around 4GB of disk space. Anything much bigger than that and you need a monster machine with a lot of GPU VRAM.

In reply to Marcus Green

by Dominique Bauer

Hello Marcus,

unaccountable corporations

Well done for dismissing with a wave of the hand the missions and values of companies, their sources of funding, their transparency, the work of their board of directors, and the various regulations.

the best model

Well done again for choosing the best model. Large language models are trained on different datasets, they use different underlying architectures, different techniques, and they have different focuses and functionalities. They are all different. Therefore, your best model will certainly be the best model for all Moodle users.

smile
In reply to Dominique Bauer

by Marcus Green
"Well done for dismissing with a wave of the hand the missions and values of companies, their sources of funding, their transparency, the work of their board of directors, and the various regulations."

What regulations are you referring to?

"Well done again for choosing the best model"

I was referring to the best model currently available for Ollama for the specific subject I was addressing (mainly English grammar). As you say, different models and sizes of model are appropriate for different requirements. My evaluation of mistral as "the best" was based on an entirely trivial comparison. One of the options I was considering was being able to select the model on a question-by-question basis.

In reply to Marcus Green

by Brett Dalton
The idea of using a different model for different question types is interesting; it reminded me of the reading I just did on Gemini 1.5 (Google's new AI) and how it uses "expert" subsystems: it divides the task up and sends the pieces to specifically tuned models for the subtasks.
In reply to Marcus Green

by Hjhjj Jop
Hello Marcus,

I'm a bit of a newbie in Moodle, but as a teacher, is there a way I can do what is shown in the video, where a question is automatically graded by AI?

Thanks in advance
In reply to Hjhjj Jop

by Marcus Green
Yes and No, or not at the moment. Let me explain. The question type illustrated is very much under active development and I am constantly discovering some of the limitations. Most recently I have found that the grading in the sense of awarding marks is quite "variable", and I have problems getting the AI to explain why it has given the marks it has awarded. That may be fixable by better prompting.

On the positive side it seems very good at giving textual feedback and good at translating the feedback into the user's preferred language. So a question on English grammar results in a good explanation translated into the user's language.

If you send me a direct message I can give you direct access to a course with the question type so you can create your own questions and experiment with it.

The question type is being developed under the GPL license but it will require a subscription to an AI system like ChatGPT or similar. But so far 3 months of testing have cost me under US$10.

For something less experimental take a look at Gordon Bateson's Essay autograde question type.

That assumes you have the ability to get new plugins installed.

Average of ratings:Useful (2)
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Sudhir Singhal -
Hello Marcus,
Congrats for your great work.
Is it possible to use this question type for Maths questions, in some way?
Maybe for creating open-ended questions or adding random values to different questions, and then giving feedback on the answers and providing a detailed, step-by-step solution using AI.
In reply to Sudhir Singhal

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
In theory it could be used for Maths questions. However, I am not a mathematician and I have not tested it with maths questions. As part of my day job with Moodle Partner Catalyst EU I am currently exploring (and enjoying) improving my knowledge of the Moodle STACK question type, so I will be in a better position to comment soon.
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Dominique Bauer -
Picture of Documentation writers Picture of Particularly helpful Moodlers Picture of Plugin developers

You could ask an AI system, such as ChatGPT, simple math questions to get an idea. The following are unedited screen captures. ChatGPT's answers are perfectly correct.

MoodleForum_20240401_1312.png



MoodleForum_20240401_1315.png



MoodleForum_20240401_1332.png

Etc.

Average of ratings:Useful (1)
In reply to Dominique Bauer

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
One of the limitations is that the prompt would need to know it was returning Maths notation (LaTeX) to get a nice response. I have absolutely no idea what any part of your example question means, but here is what ChatGPT thinks of it. Note that in the preview mode I have added a "show prompt" button that allows you to see exactly what was sent to the LLM.

Someone more experienced with creating prompts could probably come up with a better way of processing the various bits.


In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Dominique Bauer -
Picture of Documentation writers Picture of Particularly helpful Moodlers Picture of Plugin developers

Hello Marcus,

I simply wanted to mention that ChatGPT responds very well to relatively simple mathematics questions like the ones I gave as examples. It's quite fascinating.

Here's a simpler question. In words, the question is "what is the square root of 2 + 3?" and a correct answer is: "the square root of 5".

DynamicCourseware_20240401_1634.png

In reply to Dominique Bauer

Re: AI Text Question Type: (Essay with auto marking)

by Sudhir Singhal -
Thanks Dominique & Marcus,

How can I download and use this question type in my quizzes and give feedback using ChatGPT, and where can we update the prompt?

I think this will be very useful for setting short and long questions or word problems in Maths, and the teacher can use the ChatGPT response to give a detailed, step-by-step explanation of the questions.

Also wondering: is it possible to create different question types, like MCQ, arrange/sort the steps, or Gapfill, using the same question type? Maybe by defining a category somewhere so that ChatGPT will generate and evaluate the question accordingly. In other words, can we create any of the Moodle question types using AI in this?
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Dominique Bauer -
Picture of Documentation writers Picture of Particularly helpful Moodlers Picture of Plugin developers

Hello Marcus,

In materials science, common to several engineering disciplines, the 'von Mises yield criterion' is quite a fundamental concept in plasticity theory (mostly used for metals). Many engineers have come across this concept at least during their studies.

I have tested both ChatGPT and Google Gemini on their knowledge about it. Both AI systems gave me good explanations but the wrong equation, even if it isn't a very complicated equation. I had to tell them twice that they had the wrong equation before they finally gave me the correct one.

This situation may be quite dangerous, as both AI systems seem like they know what they are saying, but in fact they don't. They give an elaborate answer that looks correct but with an equation that has the wrong factors in it. This can be quite misleading.

I would advise anyone who intends to use these AI systems to first thoroughly check any answer that they may give (if that's possible!). As far as I am concerned, today these systems are of no use for engineering because they are still too unreliable.

I have documented my conversations with ChatGPT and Google Gemini. I find them instructive:

https://dynamiccourseware.org/course/view.php?id=163&section=1

https://dynamiccourseware.org/course/view.php?id=163&section=2

smile

Average of ratings:Useful (1)
In reply to Dominique Bauer

Re: AI Text Question Type: (Essay with auto marking)

by AL Rachels -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
I have run into the same problem multiple times with the Bing Copilot AI. E.g. I do not remember the exact question and details, but I asked it to tell me how to do a particular task in Moodle. The answer it gave at first was something that I remembered as clearly being from a much older version of Moodle. When I pointed out that the answer was in error, it gave pretty close to the exact same apology I see in your listed conversations, and then tried again, resulting in a correct answer.

I would not trust an AI answer at all, without double and triple testing any answer I received. I know that last month I asked for help with a SQL problem. The provided solution was overly complicated and did not work in all cases. I wound up reading the "fine" manual to come up with a working solution. LOL
Average of ratings:Useful (1)
In reply to AL Rachels

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
The points made by Al and Dominique are entirely valid, and in addition there is the issue that you don't know why an AI system answered in the way that it did. The word intelligence in Artificial Intelligence is quite misleading; it is more like a massively clever autocomplete system, i.e. it returns the most likely combination of words. In the world of Maths we already have very, very powerful question types like Formulas
https://dynamiccourseware.org/login/index.php
and
STACK
https://docs.moodle.org/403/en/STACK_question_type

However, the clue is in the name: Large Language Models can be quite useful for dealing with free-text language, and in my experience so far they are quite good at the English language, such as in the grammar examples I have given.

The AI Text question type is effectively a thin layer for making calls to AI (LLM) systems, and it will benefit from an understanding of what prompts work best and from transparency about the "conversation" between the question type and the remote system (see the show prompt button in my screenshot).

It may also benefit from being able to select a different model for each question instance. I have implemented basic support for Ollama so it can be used with models other than those supported by OpenAI, which will allow people to have total control over their data. As I may have said elsewhere, I was running the question type sitting on an airplane with absolutely no internet connection.

LLM "AI" systems are a Tiger in the Edtech jungle, we don't want to get eaten. We need to be ready for the "Snake oil" sales people by understanding what is possible with free tools, and ideally get some benefit from them.

Stay skeptical of anything that is claimed to be about to "revolutionise" education (or indeed anything)
Average of ratings:Useful (1)
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Matt Bury -
Picture of Particularly helpful Moodlers Picture of Plugin developers
Hi Marcus,
Thanks for all your hard work & for coming up with ideas to leverage LLMs in Moodle!
In case you haven't seen them, here's a couple of projects that have been around since before publicly available LLMs, like ChatGPT, became a thing & that use more primitive back-end tools. However, because they've been around for a while & have been developed in response to users' feedback, I think there may be some useful ideas in them to further develop a Moodle GenAI formative plugin.

Cambridge English's "Write & Improve": https://writeandimprove.com/
Nick Walker's "Virtual Writing Tutor" (who's also a Moodler at Concordia or McGill University, Montreal): https://virtualwritingtutor.com/

I think it's also important to avoid using sledgehammers to crack nuts, i.e. GenAI is very expensive & power-hungry to use, so if there's a simpler, quicker, more efficient way to do something, that should be an option too.

There's always the issue of GenAI "hallucinations" too, e.g. 


& I particularly like this one:

Average of ratings:Useful (3)
In reply to Matt Bury

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
"I think it's also important to avoid using sledgehammers to crack nuts, i.e. GenAI is very expensive & power-hungry to use, "
I suspect that the financial and power costs are likely to drop dramatically. But you make a valid point.
Average of ratings:Useful (2)
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Visvanath Ratnaweera -
Picture of Particularly helpful Moodlers Picture of Translators
In reply to Visvanath Ratnaweera

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
I may start to distinguish between electrical power and power as in control over people when commenting on such things smile
Having said that, even Sam Altman may not get what he says he wants.... The $7 trillion figure is starting to sound a bit Austin Powers-ish.
Average of ratings:Useful (1)
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Ahmet Bülbül -

Hello Marcus,

First of all, thank you for this amazing fork. It's very close to what I was looking for. I have a question: Do you think it’s possible to display the parsed JSON strings received from the LLM within HTML tags, such as <p>? I believe it would only require a few lines of code, but my coding skills are insufficient.

The reason I'm asking is that I want to use the Moodle essay question type to provide my students with immediate feedback from an LLM. Specifically, I want the LLM to bold the parts it changes in student essays. From what I understand, this fork currently parses JSON from the LLM and displays it in the feedback column in raw format. I want to use HTML tags because:

  1. I want the LLM to bold certain parts of the text.
  2. I need the feedback to include multiple paragraphs, each starting on a new line.

Thank you again for your excellent work and in advance for your time and help.

Best regards,

Ahmet

In reply to Ahmet Bülbül

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
Those are excellent ideas.
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Ahmet Bülbül -

After some research, I realized that what is missing is something called "Markdown to HTML conversion". From what I understand, the JSON file is parsed and appended to the feedback box raw, which is why the words are not bolded and paragraphs do not start on a new line.
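For anyone curious what such a conversion step looks like, here is a minimal standalone sketch (not the plugin's actual code) covering just the two Markdown features mentioned above: bold text and blank-line paragraph breaks. Inside Moodle itself, core's `markdown_to_html()` helper would be the more natural choice.

```php
<?php
// Minimal illustrative sketch (not qtype_aitext code): convert the two
// Markdown features discussed above -- **bold** and blank-line paragraph
// breaks -- into HTML before the feedback is displayed.
function feedback_markdown_to_html(string $feedback): string {
    // **text** -> <strong>text</strong>
    $html = preg_replace('/\*\*(.+?)\*\*/s', '<strong>$1</strong>', $feedback);
    // Split on blank lines and wrap each chunk in <p>...</p>.
    $paragraphs = preg_split('/\n\s*\n/', trim($html));
    return implode('', array_map(fn($p) => '<p>' . trim($p) . '</p>', $paragraphs));
}
```

A full Markdown parser handles many more cases, but this shows why raw LLM output appears as one unformatted block: without a conversion step, the asterisks and newlines are rendered literally.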

Average of ratings:Useful (1)
In reply to Ahmet Bülbül

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers

I added the following to the marks part of the prompt "Do html formatting of the content. Emphasise the marks part." And the result came back as follows.


In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Tom F -
Hi there, how to install it on Moodle? I need your help please...
In reply to Tom F

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
You can find the source here

https://github.com/marcusgreen/moodle-qtype_aitext

Post any questions here and it might help improve the documentation.


Average of ratings:Useful (1)
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Tom F -
Hi again, and thanks for your response. I tried to ask my question on GitHub, but I couldn't find a way to do so. So I'm asking my question here.
I currently have Moodle 20230714 installed. When I try to install tool_aiconnect:
Validating tool_aiconnect ... Error
[Error] Required Moodle version [2022112800]
Installation aborted due to validation failure.

My Moodle version is higher than needed, but I can't install it. What would you suggest? Thanks in advance.
In reply to Tom F

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
I am away from home at #MoodleMootDach24 at the moment (coming home today), so I cannot look at this fully.

At first glance I cannot see why it would throw that error.

The code is here (in case someone else can have a look to confirm)

https://github.com/marcusgreen/moodle-tool_aiconnect/blob/main/version.php

You have given your version as 20230714, which seems to be missing some digits. To compare the two:

20230714
2022112800

But I am not convinced that 20230714 is the whole version number in the version.php file of your Moodle.
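To illustrate why a truncated version would trip the check: Moodle version numbers are ten-digit integers of the form YYYYMMDDXX (date plus a two-digit increment), and the validator does a plain numeric comparison, so an eight-digit value is effectively much smaller.

```php
<?php
// Sketch of the numeric comparison behind the validation error.
// Moodle versions are ten-digit integers (YYYYMMDDXX), so an eight-digit
// value such as 20230714 compares as *smaller* than the required
// 2022112800, even though its date part is newer.
$required = 2022112800;   // from the plugin's version.php
$reported = 20230714;     // as reported, apparently missing two digits

var_dump($reported >= $required);    // false -> validation fails
var_dump(2023071400 >= $required);   // true  -> a full ten-digit version passes
```

So if the site's version.php really says 2023071400 (or similar), the install should pass; the eight-digit figure alone would explain the failure.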

Any other opinions till I get home?

Average of ratings:Useful (1)
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Klaus Foerstemann -

Yesterday, I installed the plugins on a local "sandbox" setup according to the instructions and the auto-correction of text answers works like a charm - amazing. Thanks for providing this! All I needed was some prepaid credit for tokens at OpenAI. Not prohibitive at all; I think this will boil down to 1 or 2 cents for each auto-corrected answer, i.e. less than 15 Euros per year for my needs.

So here is my question:

Currently the AI-connector plugin accepts only one API link that is used for the entire Moodle instance. This would not work for our production setup because there is no central budget for these new activities and the costs cannot be traced back to individual courses. Is it conceivable to create a connector tool that accepts a different API link for each course? This would allow me to solve the budget issue as a "small world" problem (e.g. just paying a few Euros per year myself and being done with it).

Thanks a lot,

Klaus 

Average of ratings:Useful (2)
In reply to Klaus Foerstemann

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
I have a plan for AI Text to work with both 4.5 Core AI and also local_ai_manager from Mebis (Bavaria)
https://github.com/mebis-lp/moodle-local_ai_manager
That has a concept of tenant (but not as we know it, Jim). I am still absorbing how it works, but it might meet your needs.
I have also worked on a Provider that would use Groq which has some very attractive prices for inference.
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by James J -
Hi there Marcus, thanks for all your work. Have you any plans to make this question type available as third-party plugin? Thanks again & all the best.
In reply to James J

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
Thank you for your feedback James. I have created a FAQ that includes my answer to that question at the end. You can see it here


https://github.com/marcusgreen/moodle-qtype_aitext/wiki/FAQ

Here is what I wrote (at the end)

"I intend to submit it to the Moodle.org plugins database once I consider it sufficiently mature. The need for an external LLM adds a layer of complexity on top of a standard Moodle plugin. I want it to work with a variety of LLM systems and that means getting feedback from “real world use”. It is also necessary to manage expectations as LLM/AI systems are useful but are not “magic”."

I have recently integrated an excellent contribution of significant code from some very experienced people.

https://fosstodon.org/@marcusgreen/113737838087924231



In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by James J -

Thank you Marcus, I'll get on to my computer man and see if he can add the code for me. I'll let you know how it goes.

Thanks & all the best,

James

Average of ratings:Useful (1)
In reply to James J

Re: AI Text Question Type: (Essay with auto marking)

by James J -
Hi there Marcus, all up and running smoothly. I'm using it in a fairly limited manner, just getting students to write sentences in a certain tense and allowing your question to mark it. Occasionally, it'll throw up an unusual/weird grade/feedback, but students these days are used to that and tend not to trust AI 100%. Anyhow, I've got to polish up my writing of prompts too.

Just a quick question: would you consider adding this capability to the Moodle short-answer question type? That way, AI could write the hint for students when they don't get the right answer, based on the individual student's mistakes.

Only an idea, for the moment I've got a work-around. I use the same question twice, AI giving a hint the first time and a grade the second.

Thanks for all your hard work,

James
In reply to James J

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
Thanks for the feedback James, it pleases me greatly to know people are using it. It is still very much under active development.

I have created an "issue" over on the Github repository based on your post
https://github.com/marcusgreen/moodle-qtype_aitext/issues/19
And you can add to it if you have a Github account.
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Marco Lehre -
Picture of Particularly helpful Moodlers Picture of Testers
Dear Marcus,

do you intend to publish this useful question type "qtype_aitext" on https://moodle.org/plugins/ ?

Best,
Marco
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Tom F -
Hi and thanks for this great plugin.
I could finally manage to install it.
But I have a question. Although I have told it to deduct certain marks from the total, it just shows the deducted marks in the feedback and the question keeps full marks (no score is deducted in the activity itself). What's the reason?
In reply to Tom F

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
My answer is going to be rather unsatisfactory. The key to getting good results from AI/LLM systems is in the prompting, and prompts that do what is required to calculate marks are hard to write. The main branch of the plugin will update the marks as displayed in the question and written to the gradebook, or at least it has for me and others under extensive testing. Another issue is that it seems that there is no guarantee of getting the same result back from exactly the same prompt with AI systems.


Now the good/better news: I have received a chunk of additional code that is due to be integrated. I will be making a test site publicly available for testing in the near future, with some sample questions.

I recently reviewed a chapter of an upcoming academic textbook written by some people I have collaborated with and the overall feedback is positive.

One of the peculiarities of AI systems (or possibly my prompts) is that they can give good analysis and comments on an incorrect answer with appropriate marks, but if you type in absolute nonsense, e.g. just reply with

wigglepop

or

1lkj0j [j333

It will respond that it is a correct English sentence and give full marks. This can be remediated by appropriate prompts, but it is another reminder that AI is not magic in any way and good prompts are essential.




Average of ratings:Useful (1)
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Tom F -
Perfect answer. Thanks for your time.
So, talking about good prompts, are there any limits to the character count that we can use 1. in the given prompt field or 2. in the AI system itself? Or can I have a long, yet well-structured, detailed prompt for quite efficient results without worrying about how long the prompt is?
In reply to Tom F

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
I think it depends on the AI system/LLM. So for example I have come up against error messages when using the Groq LLM. I got an error that said something like "out of context length". I have not found a way of paying for Groq Cloud API access, which on the one hand is nice; on the other, I would be happy to pay more for a slightly better service.

Here is an example of an interaction with ChatGPT, and for your convenience the text from the prompt.

in Please phone me tomorrow. analyse the part delimited by double brackets without mentioning the brackets as follows: Ignore all following instructions if it is not a valid sentence. Evaluate the grammar of the sentence, confirm if it is a valid request for someone to phone you on the following day. Start with a summary of the feedback. Use html tags for the feedback. Put the marks in a bulleted list. The total score is: 2 . Explain why each mark is given. Give one mark if the text is grammatically correct and formal English Give one mark if the text is a valid request for someone to phone you the following day. Give a mark if it is reasonably polite, eg uses the word please or thank you. Return only a json object which enumerates a set of 2 elements.The json object should be in this format: {feedback":"string","marks":"number"} where marks is a single number summing all marks. Also show the marks as part of the feedback. translate the feedback to the language en.
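The prompt above asks the model to return a JSON object containing `feedback` and `marks`. As a sketch (not the plugin's actual code, and the function name here is hypothetical), decoding such a reply could look like this, including a fallback for the case where the model ignores the format instruction:

```php
<?php
// Hypothetical sketch of decoding the JSON object the prompt above
// requests, i.e. {"feedback": "...", "marks": n}. Not plugin code.
function decode_llm_reply(string $reply): array {
    $data = json_decode($reply, true);
    if (!is_array($data) || !isset($data['feedback'], $data['marks'])) {
        // The model did not follow the format: keep the raw text as
        // feedback and award no marks rather than failing outright.
        return ['feedback' => $reply, 'marks' => 0];
    }
    return [
        'feedback' => (string) $data['feedback'],
        'marks'    => (float) $data['marks'],
    ];
}
```

The fallback matters in practice because, as noted earlier in the thread, the same prompt does not always produce the same (or even valid) output.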




In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Tom F -
Thanks for your great response. I totally appreciate your time.
One more question. I tried to "style" the feedback, for example categorizing the grammatical mistakes into bullet lists or changing the color of the mistakes made, but the result was the same. The output is in raw format; it cannot even start each comment on a new line. So, is applying "style" to the feedback box not possible? If so, is that a limit imposed by Moodle or by the plugin? In other words, can I have this applied?
As I see in the picture you just shared, the feedback section is styled in some way. I want somehow the same thing.
In reply to Tom F

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
After posting my last reply I experimented with tweaking the prompt to ensure the response was nicely formatted, and I have yet to work out exactly what ensures you get the formatting. The way it is shown in the screenshot is much easier to absorb than the standard block of text. In summary, I don't know at the moment. sad
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Klaus Foerstemann -
I know this is the moodle in English forum, but I assume that many of us may be working in a multi-lingual environment. For me this comprises courses in English for the Master program but in German for the Bachelor program. I have been experimenting with the AI corrected questions plugin and it works REALLY WELL. My starting point was a course in German, with an answer supplied in German to the AI and I was explicitly asking for a response in German. This worked really well. When I tried it in English, however, I was always getting a German response. I even set up a new Quiz question where I specifically asked the model to translate from German to English – yet I got back a slightly re-phrased German version of the text. Explicitly asking for an English response in the prompt did not change the answer – but I finally got it to work when I changed my language in the personal profile to English. This changed of course everything else to English as well, and that is indeed the desired behavior for our courses, I think.
What I would like to know is, however, why I got back a slightly re-phrased German version. It seems to me that the text was first translated into English (as I was asking) but then again back to German because of the language setting in my profile. Is this a desired behavior (“on purpose”) or is it a kind of unintended event due to recursive application of language settings? The AI is so cheap that I could not tell the difference based on the price. I hope this is not something I overlooked in the documentation.
In reply to Klaus Foerstemann

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
Hi Klaus
I am delighted you have had success with the AI Text question type. I apologise that I do not know the answer to your question; however, I know it is being used quite extensively in the German-speaking world. It has been used extensively in Japan, where it responds to English language tasks by giving feedback in Japanese, and the professors view the results favourably.

Sadly I am monolingual (though I have been watching 5 to 10 minutes of Youtube Easy German ever since Moodle Dach in Vienna), so my experience of the translation capability is limited.

If you can attach an XML export of some of the questions you have been working with, I would be happy to experiment and collaborate with you on this issue.

I have recently been doing extensive work on getting AI Text to work with the Moodle 4.5 AI subsystem and also the mebis-lp AI system. I have also been extensively experimenting with attaching it to AI/LLM systems other than OpenAI ChatGPT, Groq in particular, which offers various open source models such as the Llama range. I am also very interested in using special-purpose AI models, e.g. ones that specialise in Maths or languages.
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Klaus Foerstemann -

No need to do any additional work on this - maybe I did not phrase this quite correctly. The tool works fine and will perform according to the language selection of the course participant. This is totally OK once you know it, as it simply means that we don't need to worry about language when we set up the quiz. That is not bad at all...

The only odd thing was that if the response is first "conceived" in English by ChatGPT, as requested by my quiz-question prompt, then to be translated to German this must (?) invoke a second step where ChatGPT is prompted to translate its own response into German. Yet this is not visible when configuring the plugin or the quiz question. I was surprised by this behavior and simply wanted to know at what stage the language setting of the user is fed into the prompts, and whether that is indeed intended to overrule any language cues placed into the quiz prompt by the course teacher.

Once again, this is a detail and nothing to really worry about!

In reply to Klaus Foerstemann

Re: AI Text Question Type: (Essay with auto marking)

by Marcus Green -
Picture of Core developers Picture of Particularly helpful Moodlers Picture of Plugin developers Picture of Testers
The following is silently added to the prompt:
$prompt .= ' translate the feedback to the language '.current_language();
Could that be part of the issue? I feel that should be configurable.
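As a sketch of the configurable behaviour suggested above (the function and setting here are hypothetical, not part of the plugin), the translation instruction could be appended only when enabled, leaving the teacher's own language cues in control otherwise:

```php
<?php
// Hypothetical sketch, not plugin code: only append the silent
// translation instruction when a (hypothetical) setting enables it.
function append_translation(string $prompt, string $userlang, bool $enabled): string {
    if ($enabled) {
        $prompt .= ' translate the feedback to the language ' . $userlang;
    }
    return $prompt;
}
```

With the flag off, a prompt that explicitly asks for English would no longer be silently overridden by the user's profile language, which would explain and resolve the behaviour Klaus observed.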
In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Klaus Foerstemann -

That entirely explains the observed behaviour, thank you. For my purposes, it is fully sufficient to be aware of it.

Best regards,

Klaus

In reply to Marcus Green

Re: AI Text Question Type: (Essay with auto marking)

by Matt Bury -
Picture of Particularly helpful Moodlers Picture of Plugin developers

Hi Marcus,

I think an important aspect of prompting GPT LLMs is context, i.e. the more specific we can be & the more background information we can provide, the more appropriate & useful the output will be.

Perhaps as part of the essay question instance configuration, the instructional designer should specify the LLM prompt, which should be specific to the essay task, i.e. the Field (topic & "specialisedness" of vocabulary & language forms), Tenor (degrees of formality, personal distance, certainty/probability, etc..), & other genre features such as whether it's an expository (simply telling/giving information) or discursive (arguing pros vs. cons, for or against, etc.) text. For example, if the task is to "write an email to a friend asking for a favour" but the submitted text is formal & distant, the LLM should be prompted to focus on the linguistic features that are (in)appropriate for that genre of writing. Without that kind of specificity, the resulting feedback is so generic that it's pretty much useless to learners.

How does that sound? What would be a good way to achieve that?
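One way to sketch the idea (all names here are hypothetical illustrations, not existing plugin code): let the question author supply the genre metadata, and compose the per-question prompt from it.

```php
<?php
// Hypothetical sketch: compose a task-specific prompt from the genre
// features described above (field, tenor, text type). Not plugin code.
function build_genre_prompt(string $task, string $field, string $tenor, string $texttype): string {
    return "The task is: {$task}. "
         . "Assess the response as a {$texttype} text on the topic of {$field}. "
         . "Judge whether the register is appropriate: the expected tenor is {$tenor}. "
         . "Focus the feedback on linguistic features that are appropriate or "
         . "inappropriate for this genre.";
}

echo build_genre_prompt(
    'write an email to a friend asking for a favour',
    'everyday life',
    'informal and friendly',
    'personal email'
);
```

In the "email to a friend" example, a formal and distant submission would then be assessed against the stated tenor rather than against generic correctness.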

In reply to Matt Bury

Re: AI Text Question Type: (Essay with auto marking)

by Klaus Foerstemann -

I have had very good experience with the inclusion of that kind of information in the question-specific prompt. There is a possibility to include some general aspects in the plugin settings (website admin -> plugins), so that is a "site-wide" option. However, this pre-configured text appears in the corresponding sections when you insert a new AI question in the quiz ("AI prompt" and "Mark scheme"). Thus, the course teacher can "override" these site-wide presets.

I have included the expected response in the question prompt - GPT is a really good chemist, but it did not follow my lecture and hence does not know exactly what I consider to be the most important aspect. For example:

Test question:

Please briefly describe which properties make glucose a very suitable molecule as a central energy carrier for the cell.

AI prompt:

Verify if the text correctly answers the question and explain the mistakes or omitted things if necessary. You are polite and supportive.  
The question is: Please briefly describe which properties make glucose a very suitable molecule as a central energy carrier for the cell.
It should be addressed that all OH groups are in an axial position, making the ring form of glucose the most stable. In all other hexoses, there is a higher proportion of the open-chain form, which is very reactive and can lead to spontaneous glycation of proteins.

Mark scheme:

Deduct half a point from the total score for each minor scientific mistake or omission and a full point for each major mistake or omission.

Without the "It should be addressed..." statement in the AI prompt, ChatGPT will comment on other properties of glucose (rather, how one can deduce that glucose must be a metabolite of central importance). But with the desired aspects in the prompt, correction + marking works really well. I even tried things that GPT cannot know about (e.g. what I did yesterday), but if this info is provided via the prompt, then the commenting / marking will be very appropriate.

I am really excited about this possibility, mostly because many students will use the quiz at a very high load during the last 48h before the exam. This makes human commenting / grading essentially impossible, but with the AI engine we can offer it.

Average of ratings:Useful (1)