I recently updated Prodigy, and I noticed that my `textcat.batch-train` commands started producing identical results on every run. I opened up the code and noticed these additions:
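From what I can tell, the new lines unconditionally seed the global random number generators, something like the sketch below. I'm paraphrasing from memory here, so the exact modules and the seed value `0` are my assumption, not necessarily what's in the source:

```python
import random

import numpy

# Seeding both RNGs with a fixed constant makes every shuffle and
# weight initialisation identical across runs, so training becomes
# fully deterministic.
random.seed(0)
numpy.random.seed(0)
```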
Are these holdovers from debugging or something, or is this on purpose? It seems strange to me to want to force the runs to be deterministic like this. It was useful to run things a few times and get a small distribution of results for a given training set (roughly as in the sketch below).
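Concretely, my old workflow was effectively the following, where `train_once()` is a hypothetical stand-in for a single `textcat.batch-train` run (here it just draws a number so the sketch actually runs):

```python
import random
import statistics

def train_once() -> float:
    # Hypothetical stand-in for one batch-train run; returns the
    # evaluation accuracy. A dummy draw keeps the sketch runnable.
    return 0.80 + random.random() * 0.05

accuracies = []
for seed in range(10):
    # A different seed per run gives a different shuffle and
    # initialisation, so the results form a spread rather than
    # collapsing to a single repeated number.
    random.seed(seed)
    accuracies.append(train_once())

print(f"mean={statistics.mean(accuracies):.3f} "
      f"stdev={statistics.stdev(accuracies):.3f}")
```

That spread (the stdev) is what tells me how sensitive a given training set is to run-to-run variance, which is the information the fixed seed takes away.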