Using sorters (prefer_uncertain or prefer_high_scores) results in Prodigy showing me the same data samples with different predictions

Hi! This is kind of expected, or rather, what the sorter selects depends on the data you feed in. Typically, that's (score, example) tuples covering all possible analyses and predictions for a given example, so you may see multiple versions of the same example, each with a different scored prediction. The sorter then prioritises the predictions with the highest, lowest or most uncertain scores.
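To make the idea concrete, here's a minimal sketch of a sorter consuming (score, example) tuples. This is not Prodigy's actual implementation (the real prefer_uncertain is smarter, e.g. it adapts to the score distribution rather than using a fixed cutoff); it just illustrates how one input text can appear several times in the stream with different predictions, and how the sorter picks among them:

```python
def prefer_uncertain_sketch(scored_stream, margin=0.2):
    """Yield examples whose score is close to 0.5, i.e. the model
    is most uncertain about them. Simplified illustration only."""
    for score, example in scored_stream:
        if abs(score - 0.5) <= margin:
            yield example

# The same text shows up three times, once per candidate label:
scored = [
    (0.92, {"text": "Apple is great", "label": "ORG"}),
    (0.48, {"text": "Apple is great", "label": "FRUIT"}),
    (0.05, {"text": "Apple is great", "label": "PERSON"}),
]

selected = list(prefer_uncertain_sketch(iter(scored)))
# Only the uncertain FRUIT prediction passes the filter, so you'd
# be asked about that version of the example in the annotation UI.
```

So "duplicates" in the queue are really distinct (score, prediction) candidates for the same underlying text, and the sorter's job is to decide which of them are worth your annotation time.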

If you're collecting binary annotations, filtering duplicates and only allowing one version of each example doesn't make much sense, because you'd then only ever give feedback on a single prediction out of many (e.g. the first label the model predicts).