I'm training on a machine with 4 GPUs and was wondering if there's a way to leverage all of them to improve training speed. The command I'm currently running uses only one GPU:
python -m prodigy train --ner my_ner_data --gpu-id 0
This works fine, but is there a way to pass something like --gpu-id 0,1,2,3 to leverage all four available GPUs?
Thanks.