Justin, thanks for the question.
First, I should acknowledge that AI detectors, including this one, are not infallible. They can be gamed by various "humanizer" or "undetection" tools, or by students' prompting, and they have been shown to falsely identify human writing as AI, especially with ESL writing or in the lower grades. They should not be used as the sole evidence in deciding the acceptability of student work.
The 1% shown above appears after the report has been run (i.e., a teacher or admin has clicked to "call" the API for that particular text).
Originality.AI does a sentence-by-sentence check and returns the data to provide a more granular report within Moodle (thanks to Amit!). Once the call completes, this is displayed to the teacher on a new page:
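For anyone curious what "sentence-by-sentence data" looks like in practice, here's a minimal sketch of flattening such a response into rows a report page could display. The response shape and field names (`blocks`, `result`, `fake`) are my assumptions for illustration only, not Originality.AI's documented schema:

```python
# Hypothetical per-sentence scan response -- field names are assumptions,
# NOT the documented Originality.AI schema.
sample_response = {
    "score": {"original": 0.99, "ai": 0.01},
    "blocks": [
        {"text": "First sentence of the submission.", "result": {"fake": 0.02}},
        {"text": "Second sentence of the submission.", "result": {"fake": 0.85}},
    ],
}

def summarize(response, ai_threshold=0.5):
    """Flatten a scan response into (sentence, flagged?) pairs for display."""
    rows = []
    for block in response["blocks"]:
        ai_prob = block["result"]["fake"]
        rows.append((block["text"], ai_prob >= ai_threshold))
    return rows

for sentence, flagged in summarize(sample_response):
    print(("AI?  " if flagged else "Human") + " | " + sentence)
```

The threshold here is arbitrary; the real plugin presumably applies whatever classification the API itself returns.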
It also includes an unlisted URL, available to the teacher/admin, for viewing the full report on the Originality.AI site, which gives the same information with a bit more curb appeal (though the facts remain the same):
The classification of AI is clearly labeled with a legend on each "full report" page:
According to the site documentation and disclaimers: "This score reflects our AI's confidence in predicting that the content scanned was produced by any popular AI tool such as ChatGPT, GPT-4o, Gemini 1.5 Pro, Claude 3, Mistral, Llama 3, etc. A score of 90% Original and 10% AI should be thought of as 'We are 90% confident that this content was created by a human' and NOT that 90% of the article is Human and 10% AI."
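In other words, the score is a document-level confidence, not a proportion of the text. A tiny helper of my own (not part of the plugin) to make the distinction concrete:

```python
def describe(original_pct):
    """Phrase an Originality.AI-style score as a confidence statement,
    not as a proportion of the text. Illustrative helper, not plugin code."""
    ai_pct = 100 - original_pct
    return (f"We are {original_pct}% confident this content was written by a human "
            f"(NOT: {ai_pct}% of the text is AI).")

print(describe(90))
```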
I hope that helps!