Is there any way to set the
device flag on the CLI so that we can enable GPU support in batch training?
If not, it would probably be fairly easy to add to the standard models. In the meantime, how would one go about overloading a standard model so that we can add the flag ourselves? Sadly, the guides for this aren't up yet.
The GPU support in spaCy isn't great at the moment, which is why there isn't really a prominent API for it. On large batches it's about two to three times faster than CPU; on small batches it's slower. So I think it'll likely be slower for Prodigy.
What do you mean by overloading a standard model? One way to add a command-line flag would be to have a new recipe that wraps some of the existing ones.
I haven’t run this, so some of the details may be incorrect — but something like this should work:
```python
import prodigy

# Copy the built-in recipe's plac-style CLI annotations and add a GPU flag
cli_args = dict(prodigy.recipes.textcat.teach.__annotations__)
cli_args['gpu_id'] = ("GPU device", "option", "-g", int)

def gpu_wrapper(*args, gpu_id=-1, **kwargs):
    # gpu_id is consumed here; everything else passes through to the recipe
    return prodigy.recipes.textcat.teach(*args, **kwargs)

gpu_wrapper.__annotations__ = cli_args
```
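To see the trick in isolation, without depending on Prodigy at all, here's a self-contained sketch of the same pattern: copy a function's plac-style CLI annotations onto a wrapper and add one extra flag. The `train` function and its arguments below are made up purely for illustration.

```python
def train(dataset: ("Dataset to use", "positional", None, str)):
    """Stand-in for a built-in recipe function."""
    return f"training on {dataset}"

# Copy the original annotations, then add the extra flag for the wrapper
cli_args = dict(train.__annotations__)
cli_args["gpu_id"] = ("GPU device", "option", "-g", int)

def gpu_wrapper(*args, gpu_id=-1, **kwargs):
    # gpu_id would be used here to select the device;
    # everything else is passed through unchanged
    return train(*args, **kwargs)

# The CLI layer reads annotations off the function object,
# so attaching the copied dict exposes the new -g option
gpu_wrapper.__annotations__ = cli_args
```

Because the annotations are copied into a new dict first, the original function's CLI signature is left untouched; only the wrapper advertises the extra `-g` option.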