Fixed random seed

I recently updated Prodigy, and I noticed that my textcat.batch-train commands started producing the same results on every run. I opened up the code and found these additions:

    fix_random_seed(0)
    random.seed(0)
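For context, a self-contained version of that seeding, assuming fix_random_seed is spaCy's spacy.util helper (which is what Prodigy appears to be calling):

    import random

    from spacy.util import fix_random_seed  # assumption: the helper Prodigy imports

    fix_random_seed(0)  # seeds numpy's RNG (and the GPU RNG when one is in use)
    random.seed(0)      # seeds Python's built-in random module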

Are these holdovers from debugging, or is this on purpose? It seems strange to me to force the runs to be deterministic like this. It was useful to run things a few times and get a small distribution of results for a given training set.

It's usually useful to know that if you run the same thing twice, you'll get the same results. There's definitely a use case for varying the random seed and looking at the variance, but development is usually much harder if you can't make a change, undo it, and get back the same result you were getting.
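A toy illustration of that point, standard library only (nothing Prodigy-specific here):

    import random

    def shuffled(seed=None):
        # stand-in for any training step that consumes randomness
        rng = random.Random(seed)
        items = list(range(10))
        rng.shuffle(items)
        return items

    assert shuffled(0) == shuffled(0)        # fixed seed: identical on every run
    print(shuffled(None) == shuffled(None))  # unseeded: almost always False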

The simplest fix is to add an argument that sets the random seed; we can also add that option to the built-in recipe in a future version.
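As a sketch of what that could look like, here's a copy of the recipe that exposes the seed, using Prodigy's plac-style argument annotations. The recipe name and body are hypothetical; only the seeding logic is the point:

    import random

    import prodigy
    from spacy.util import fix_random_seed  # assumption: the same helper as above

    @prodigy.recipe(
        "textcat.batch-train-seeded",  # hypothetical name for a copy of the recipe
        dataset=("Dataset to train from", "positional", None, str),
        seed=("Random seed; pass -1 to skip seeding", "option", "s", int),
    )
    def batch_train_seeded(dataset, seed=0):
        # Seed everything up front so a given seed gives a repeatable run,
        # while -1 restores the old nondeterministic behaviour.
        if seed >= 0:
            fix_random_seed(seed)
            random.seed(seed)
        # ... the rest of the training logic stays as in textcat.batch-train ...

Running that a few times with -s 0, -s 1, -s 2 and so on gets you the spread of results from before, while any single seed stays reproducible.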