Hi Gareth, thank you for your excellent questions.
I have split your question into a new thread as the other one was getting very long.
I have created some documentation for the question type that covers some of the wider issues relating to this type of technology, which you can see here:
https://github.com/marcusgreen/moodle-qtype_aitext/wiki
1) Could the tool detect if a response was AI generated before expending time pre-marking?
Possibly, but it is not something I am interested in. When I was a teacher, I found a short conversation with students to be quite an effective way to confirm whether a student had created the work they submitted. I write about this here:
https://github.com/marcusgreen/moodle-qtype_aitext/wiki/Cheating
2) What do students feel about a machine telling them, a human, how to communicate in their human language?
The feedback I have received is that students assume the LLM response is "correct". This came up in a webinar I took part in last week, which you can watch here. It was also discussed in an academic paper written about AI Text, which is referenced in the presentation.
3) As education is expensive, what do students feel about their fee going towards AI instead of a member of staff to assess their work? As I get the impression that staff will check only if there are issues?
The cost of inference (the work done by the LLM) is very low on a per-student basis. My first year of experimentation with the question type incurred costs of well under USD $100, and I have yet to hear of cost being raised as a concern in any feedback. The cost of inference has dropped hugely and I predict it will continue to drop.
"I get the impression that staff will check only if there are issues?"
My advice is that staff should always check. The issue with these systems is not that they get things obviously wrong, but that they repeatedly get things right until people become complacent, and then inaccuracies creep in.
4) As AI learns the human language and then corrects humans, will this cause the language to stagnate? Over time human language evolves and adapts; we change it to suit the needs of the environment in which we exist. But if AI causes a static, definitive state of the language (like a baseline version in software) where new human adaptations are rejected as false, then is this a negative aspect?
My short answer is no. Language is dynamic and will continue to evolve, and LLM systems will track human use through those changes, though in the same way traditional media affects how language is used (e.g. "6 7"), LLMs will become part of that loop.
5) Will / could AI introduce new concepts and language constructs of its own accord, and we, the humans, become what AI wants us to be rather than the other way around?
I think that the people in charge of the big AI companies may attempt to bend language and beliefs to their own views (see Groqpedia), but I suspect that LLMs will become commodities and undermine their plans. A clue to this is the way the Chinese models have been highly competitive on price and performance and have helped lower the cost of the US alternatives.
6) Is there scope for local AI solutions, such as https://www.raspberrypi.com/products/ai-kit/ ? For which I have no idea if it is capable enough of running such a language model.
Yes. On my trip to Japan to present, I ran an LLM on my modest laptop (a Lenovo X280) when I had no internet connection. The performance was slow but OK for testing. The hardware in that link doesn't help with running what is necessary for what I do, but I have run models on standard Raspberry Pis, where the response time is measured in minutes rather than seconds (e.g. 3 or 4 minutes). That may seem slow, but I think it is still potentially very useful, and I anticipate the arrival of hardware to accelerate the process. I am a big fan of the MoodleBox project and have been buying the latest Raspberry Pi each time one comes out to see what can be done with it. I will continue to do this, and I have been collaborating with people who get EdTech into low-resource places, e.g. those with intermittent power and little to no internet connectivity.
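To give a flavour of what "local inference" looks like in practice, here is a minimal sketch that builds a feedback request for a locally hosted model served via Ollama's HTTP API (a common way to run LLMs on a laptop or Raspberry Pi). The model name "llama3.2", the endpoint, and the grading prompt are illustrative assumptions, not part of the AI Text question type itself.

```python
import json

# Hypothetical sketch: construct the JSON body that Ollama's
# /api/generate endpoint expects. Model name and prompt wording
# are placeholders for illustration only.
def build_feedback_request(student_answer: str, model: str = "llama3.2") -> str:
    payload = {
        "model": model,
        "prompt": (
            "Give brief, constructive feedback on this student answer:\n"
            + student_answer
        ),
        "stream": False,  # ask for one complete response, not a token stream
    }
    return json.dumps(payload)

body = build_feedback_request("Photosynthesis converts sunlight into energy.")
print(body)

# To actually send it you would need a running Ollama instance, e.g.:
#   urllib.request.urlopen(
#       urllib.request.Request(
#           "http://localhost:11434/api/generate",
#           data=body.encode(),
#           headers={"Content-Type": "application/json"},
#       )
#   )
```

On a Raspberry Pi the POST itself is instant; it is generating the response that takes the minutes mentioned above.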
7) Should AI only be used for what we can’t do (within the same relative duration) instead of what we can?
AI/LLM-enabled EdTech is just another tool. When you have a big shiny new hammer, it is tempting to see everything as a nail. There has been a lot of talk of using LLMs to generate learning material, but to quote Dr Tim Hunt of the Open University:
"'As far as I can see, "lack of content" is not a problem the world suffers from. If anything, the opposite. "'
By contrast, giving students feedback specific to them is a significant task that absorbs time teachers could spend on the things technology cannot do. It is rare for a student to say their teacher inspired them by the quality and amount of marking they did.
It is worthwhile being aware of some EU policy on the use of AI in education:
https://artificialintelligenceact.eu/annex/3/
Annex III: High-Risk AI Systems Referred to in Article 6(2)
3. Education and vocational training:
(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;
(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;