How to edit existing texts that were added to a dataset using db-in

I have been using dataturks.com in the past and now I am trying out prodigy.

I have converted my dataturks json file to the format that prodigy uses and I have added it into a dataset using the following command:

prodigy db-in dataset_name output.jsonl --rehash --overwrite

Now, let's say I want to search for a particular text within this dataset to change something – is that something I can do? The main question I'm asking is: can I modify annotations after I've added them to the dataset using db-in? A search feature would be cool too.

Off topic, you guys have built an amazing tool, we've been using it for all our NER tasks.

Regards
Mihai Vinaga

Hey, and thanks! :smiley:

Datasets in Prodigy are append-only by design – I've written some more about that concept on this thread:

Prodigy gives you direct access to the datasets via its Python API – so you can use that to implement any filtering or search logic you need, over any fields in the JSON records. You could do a simple keyword search over the "text" values, or do something more complex with regular expressions (or even spaCy if you want a more advanced NLP-powered search :smiley:).

from prodigy.components.db import connect

db = connect()
examples = db.get_dataset("dataset_name")
for eg in examples:
    # do something here...
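For example, a simple keyword or regex search over the "text" values could look like this – a minimal sketch in plain Python, assuming the examples are dictionaries with a "text" field as in the usual Prodigy format (the sample data here is made up):

```python
import re

# Hypothetical examples, in the shape db.get_dataset would return
examples = [
    {"text": "Apple acquires a startup", "answer": "accept"},
    {"text": "Rain expected tomorrow", "answer": "accept"},
]

# Simple keyword search over the "text" values
keyword_hits = [eg for eg in examples if "startup" in eg["text"].lower()]

# Or something more flexible with a regular expression
pattern = re.compile(r"\bacquires?\b", re.IGNORECASE)
regex_hits = [eg for eg in examples if pattern.search(eg["text"])]
```

The same pattern works for any field in the JSON records – you could filter on "spans", "answer" or custom metadata just as easily.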

examples here is a list of dictionaries representing the individual examples. If you've found examples you want to edit, you could either export them to a file and re-annotate them (if you want to change entity spans or other more complex things), or edit them directly in your script and then save the result (the previously correct examples plus the edited ones) to a new dataset.
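Exporting the matched examples to a file for re-annotation is just a matter of writing one JSON object per line – the JSONL format Prodigy reads. A rough sketch (the file name and sample data are placeholders):

```python
import json

# Hypothetical examples you've found and want to re-annotate
matched = [
    {"text": "Apple acquires a startup", "answer": "accept"},
    {"text": "Rain expected tomorrow", "answer": "accept"},
]

# Write one JSON object per line -- the JSONL format Prodigy reads
with open("file_for_reannotation.jsonl", "w", encoding="utf8") as f:
    for eg in matched:
        f.write(json.dumps(eg) + "\n")
```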

Thank you Ines for the response – I can't imagine how you guys are able to develop such an incredible library (spaCy) and tool (Prodigy) and still be able to answer every "Tom, Dick and Harry".

Let's say I've extracted 10 documents from my dataset1 and placed them in a file called file_for_dataset2. I would then use the prodigy ner.manual dataset2 ... file_for_dataset2 command to create a new dataset called dataset2 and correct everything in it.

After I am done, can I run something like prodigy review dataset1 dataset2 to merge the changes of dataset2 into dataset1?

Thanks, we try our best :slightly_smiling_face: It's actually very nice to be close to the developers who are using our tools.

The review recipe only really makes sense if you want to go over all existing annotations, merge them, resolve conflicts and create a "master annotation". For instance, if you have multiple annotators working on the same data with many disagreements.

If you know that your annotations are correct, you could just divide your data into two parts: the 10 documents you want to re-annotate, and the rest. Then re-annotate the 10 documents, and import the rest (the other documents that don't need to be changed) into the same dataset. That should be much quicker.
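The split itself only takes a few lines of Python. Here's a rough sketch, assuming you identify the documents to fix by their "text" value (any other field, like a hash or ID, would work the same way – file names and sample data are placeholders):

```python
import json

# Hypothetical full export of dataset1 (e.g. from prodigy db-out)
all_examples = [
    {"text": "needs fixing"},
    {"text": "already correct"},
]

# Texts of the documents you want to re-annotate
to_fix = {"needs fixing"}

# Route each example to one of two JSONL files
with open("to_reannotate.jsonl", "w", encoding="utf8") as fix_file, \
     open("unchanged.jsonl", "w", encoding="utf8") as rest_file:
    for eg in all_examples:
        target = fix_file if eg["text"] in to_fix else rest_file
        target.write(json.dumps(eg) + "\n")
```

You'd then re-annotate to_reannotate.jsonl with ner.manual into the new dataset, and import unchanged.jsonl into that same dataset with db-in.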