I’ve just come back to Memrise after a while away, and I’ll say I like the new look and feel (which I distinctly remember went through a period of looking somewhat amateurish and visually challenging). So far I’ve enjoyed using it to brush up on some kanji I was having trouble remembering, but one thing seems odd: for multiple-choice questions where the prompt is in the language being learned (Japanese, in this case), the choices are all English words that are superficially similar to each other. What’s the point of that? As an English speaker, there’s no chance I’ll think something means, say, “farther” rather than “father” just because they’re similar-sounding words. Furthermore, all of the choices except the correct one are invariably words I haven’t encountered in the course so far, so I immediately know they’re wrong anyway.
Surely what would make sense is for the decoys to be a) (English) words I’ve seen in this course so far, so I know they’re plausibly correct, b) words that other users have frequently chosen incorrectly, i.e. there’s evidently some reason they’re “likely” choices, or ideally c) words that are actually the answers for similar prompt words (in the target language). In my particular case that last option would need some way of judging the visual similarity between kanji characters, which may be a bit much to ask for (though I’m sure there must be data online somewhere that already groups characters into visually similar sets). Honestly, I’d be happy with a) alone, as currently none of the decoys serve any purpose; I can dismiss them instantly as words I simply haven’t seen yet.
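To make a) and b) concrete, here’s a rough sketch of the kind of selection logic I have in mind (Python, with entirely made-up data structures like `confusion_counts` — I obviously have no idea what Memrise’s internals actually look like):

```python
import random

def pick_decoys(correct_word, learned_words, confusion_counts, num_decoys=3):
    """Pick multiple-choice decoys only from words the learner has already seen.

    correct_word:     the English answer for the current prompt (excluded from decoys)
    learned_words:    English words the learner has encountered in this course so far
    confusion_counts: hypothetical map of word -> how often other users wrongly
                      chose it for this prompt (option b); empty dict if unknown
    """
    # Option a): restrict the candidate pool to words already seen in the course
    candidates = [w for w in learned_words if w != correct_word]

    # Option b): prefer words that other users have frequently confused with the answer
    candidates.sort(key=lambda w: confusion_counts.get(w, 0), reverse=True)

    # Take the most "confusable" candidates, then sample to avoid showing the
    # exact same decoy set every time
    top = candidates[:num_decoys * 2]
    return random.sample(top, min(num_decoys, len(top)))


# Example with made-up course data
learned = ["father", "mother", "older brother", "younger sister", "teacher"]
confused = {"mother": 12, "teacher": 1}
print(pick_decoys("father", learned, confused))
```

Even just the first step (drawing decoys from words I’ve already learned) would make the choices genuinely worth reading.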
Is there any control course creators and/or learners have over how decoy answers are chosen?