Trained model not working on new dataset

Not that I'm aware of.

In theory, moving the full folder for the spaCy pipeline should work on any machine. What's odd, though, is that if some file had been missed when loading the model, you'd normally get an explicit error rather than just "Killed".

For example, if there's a problem with the meta.json file in a spaCy pipeline, spaCy raises a descriptive error instead of silently killing the process.

Can you check whether your "Datascientist" machine is running into memory problems? On Linux, a bare "Killed" message usually means the operating system's out-of-memory killer terminated the process.
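As a rough sketch of that check (Linux-only, reading /proc/meminfo with the standard library; on other platforms it just returns None):

```python
def available_memory_mb():
    """Return available RAM in MB on Linux, or None if it can't be determined."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    # Value is reported in kB; convert to MB
                    return int(line.split()[1]) // 1024
    except FileNotFoundError:
        pass  # not Linux, or /proc not mounted
    return None

print(available_memory_mb())
```

Compare that number against the size of your pipeline folder on disk; transformer-based pipelines in particular can need several GB of headroom to load.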

If not, on the "Datascientist" machine, let's forget Prodigy for a second and see if you can load/run a sample sentence:

import spacy
nlp = spacy.load("path/to/pipeline")
doc = nlp("This is a sample sentence")

# confirm you can run your model on the Datascientist machine
for entity in doc.ents:
    print(entity.text, entity.label_)

GPUs could be another factor! Were GPUs used to train the model, and/or did you use spacy-transformers anywhere in your workflow?

GPUs can yield more accurate models, but they can also be trickier to handle. Here's a FAQ on GPUs in spaCy.
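As a rough first check on the target machine, you can test whether NVIDIA drivers are even installed. This is only a proxy (a hypothetical helper; the spaCy-level check is spacy.prefer_gpu(), which returns False when no GPU is usable):

```python
import shutil

def has_nvidia_driver():
    # The nvidia-smi CLI ships with NVIDIA drivers, so its presence on
    # PATH is a rough proxy for GPU availability on this machine.
    return shutil.which("nvidia-smi") is not None

print(has_nvidia_driver())
```

If this prints False but the model relies on spacy-transformers, inference falls back to CPU, which tends to be much slower and heavier on RAM, and could plausibly push the process into an out-of-memory kill.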