Incidentally:
// One needs no clever way. The idea of exams is primarily to give an indication of the ability of the individual to understand and take in the subject matter. That doesn't rely on format of exam, or the syllabus. //
This is also utter nonsense. For one thing, syllabuses change, which makes comparison across years difficult: if students in 2020 were tested on their understanding of something that was *not* tested in 2019 (or vice versa), then it's a comparison of apples with oranges. Even between contemporaneous syllabuses comparison is difficult: although they cover more or less the same material, different exam boards emphasise different aspects and style their questions differently, which can make it easier or harder for some students to understand what is being asked and how to answer it.
This also ignores the element of luck in exams. No exam can test everything, so the only safe approach is to understand all the material; but, in principle, a student who attempted to learn 95% of the course could be beaten by a student who learned only 5% of it, if the exam happens to focus on that 5% -- and it should be obvious that in that case the exam has ranked the students incorrectly. This happened in my own case: in my final-year university exams I gave up studying one module because I simply didn't understand it, had no interest in it, and was utterly bored with the whole thing, so I read the first five pages of an 80-page set of lecture notes and hoped they would come up, resigning myself to a zero if they didn't. In the event, they did, and I got 65%. That is luck, not skill. The exam failed to assess me correctly.
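To put a rough number on that kind of luck, here is a minimal sketch in Python. The figures are entirely invented (a 16-topic course, a paper that samples 4 topics); it just computes the chance that at least one of a student's revised topics appears on the paper:

```python
from math import comb

# A rough sketch of the luck argument, with invented numbers:
# suppose a course has 16 examinable topics and the paper samples 4 of them.
# A student who revised only k topics can only score if one of them comes up.

N, m = 16, 4  # topics in the course, topics on the paper (both assumptions)

def p_at_least_one(k):
    """Probability that at least one of the k revised topics is examined."""
    return 1 - comb(N - k, m) / comb(N, m)

for k in (1, 2, 4, 8):
    print(f"revised {k:2}/{N} topics -> "
          f"{p_at_least_one(k):.0%} chance something revised comes up")
```

Under those made-up numbers, my read-five-pages gamble pays off about one time in four -- and which quarter of candidates it pays off for has nothing to do with ability.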
A separate flaw is that exam performance depends heavily on mood and circumstances. Somebody I know once failed an exam because it fell on the day their grandfather died, and they spent the entire three hours crying. I don't think anyone could pretend that exam was a fair assessment either.
This is a wider discussion now than it was in 2020, but the point is that exams are a terrible way of assessing ability in the first place. They measure your ability to regurgitate information in a short space of time, the luck of which topics come up and which don't, and your mood on the day you sit the paper. I'm not saying ability doesn't matter, but ability to do what? To be good at exams, not good at the subject.
This does bear on any predictive model, because ultimately a student's performance on an exam is only partially connected to their ability at the subject. A bright student can be expected to do well, and a student who hasn't learned the material can be expected to do badly, but that is an expectation, not a certainty. On the other hand, any attempt to compensate for the randomness amounts to arbitrarily assigning revised grades on the strength of little more than a coin toss.
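As a sketch of why that compensation can't work, here is a toy model, again with invented numbers: treat each mark as true ability plus a random luck term (mood, question choice, and so on), and count how often a single sitting ranks two students the wrong way round:

```python
import random

# A minimal sketch of the noisy-measurement point, with invented numbers:
# model each mark as true ability plus Gaussian luck, then estimate how
# often the weaker student outscores the stronger one on a single sitting.

TRIALS = 100_000
NOISE_SD = 10                    # assumed luck component, percentage points
ability_a, ability_b = 70, 60    # hypothetical "true" abilities

inversions = 0
for _ in range(TRIALS):
    mark_a = ability_a + random.gauss(0, NOISE_SD)
    mark_b = ability_b + random.gauss(0, NOISE_SD)
    if mark_b > mark_a:
        inversions += 1

print(f"weaker student outscores stronger one in "
      f"{100 * inversions / TRIALS:.1f}% of sittings")
```

With a ten-point ability gap and a ten-point luck term, the weaker student comes out ahead in roughly a quarter of sittings; and because the luck term is unobservable for any individual, no after-the-fact adjustment can distinguish a lucky script from an able one.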
Ofqual's approach was flawed from the start, and, since you are trying to achieve the same thing they were aiming for, so is yours.