Auto Save

Tags: solved, front-end, usage, done
(Akshita Sood) #1

Is there an option to autosave the data in the dataset after every nth annotation ?

0 Likes

(Ines Montani) #2

Yes, annotations are saved automatically once a full batch (via the batch_size setting, which should default to 10) is collected. Prodigy allows you to undo/change annotations in the “history” in the sidebar. After that, they’re sent back to the server in batches. The only time you have to save manually is when you want to stop annotating in the browser, to ensure that the entire history and outbox is cleared and saved.
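For reference, the batch size can be overridden in the prodigy.json config. A minimal example, explicitly setting the default value of 10 (the value here is just illustrative):

```json
{
  "batch_size": 10
}
```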

1 Like

#3

It seems even with batch_size=1 the auto-save is 2 examples behind, so if you just close the window at the end of a labeling session you lose the last 2 labels. For example I had labeled 14 examples in the UI, but only had 12 in the DB until I hit the “save” button manually. Is there a way to fix this?

0 Likes

Choose task item based on previous annotation
#4

Any ideas how this could be fixed? Debugging the bundle.js sounds a bit painful since it’s minified… Without autosave working correctly on every example, it’s hard to make other things work like surviving page refreshes/closing the browser.

0 Likes

(Ines Montani) #5

I’ve posted an idea in the thread referenced above:

The main thing is that this should be decoupled from the batch size and become a separate setting – either a boolean instant_submit or a maximum number of answers to keep (if set to 0, every answer would be saved immediately). You could still combine it with a batch size of 1, which would have the effect you’re looking for – but users could also send out larger batches while still having answers sent back instantly (e.g. to do single updates to the model or trigger something else on every update).

So this is definitely something I can implement for the next release.

0 Likes

#6

Awesome! Looking forward to it.

0 Likes

(Ines Montani) #7

@taavi What do you think of the name answer_batch_size for the setting?

I first wanted to call it auto_save_threshold, but that’s possibly too cryptic. answer_batch_size is analogous to the regular batch size setting and ultimately, that’s really what it is: it’s the number of examples sent back to the server at once. Prodigy will wait until one batch is full. So setting "answer_batch_size": 1 means that we only need to wait for one answer == auto-save.

0 Likes

#8

Sounds good to me. I’m not too picky about the setting name :slight_smile:

0 Likes

(Mike Ross) #9

Hi @ines – will this enable the use case from the other thread linked above? I’m hoping that when a user submits an answer, it goes immediately to the db, so that the data iterator can query the DB in its __iter__() method, and choose the very next item to yield based on the user’s answer. I’m only asking because answer_batch_size=1 sounds like there might still be a batch that would cause a delay of 1 – that is a “wait for one answer”. But I might just be thinking about it wrong. Thank you so much for your help on this!

0 Likes

(Ines Montani) #10

An answer batch size of 1 would mean that as soon as there’s 1 answer in the app, it gets sent back.

For your scenario, there’s still a problem that’s more difficult to solve in general: you cannot know how long the requests will take. While the answer is sent back, a new batch will already be requested (also since the queue is empty) and it will most likely arrive before the answer is received, stored in the database and retrieved from there in your stream.

Prodigy tries its best to always make sure that there are enough questions – after all, annotation can be pretty fast and the annotator should never see a “Loading…” message in between. You could probably work around that by waiting to yield out the next task until you have the previous one – but once you’ve implemented that, you might find that it creates a pretty frustrating annotation experience.
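The workaround described above could be sketched roughly like this – a pure-Python simulation where a plain list stands in for the Prodigy database, and all names are hypothetical:

```python
import time

# Simulated store of saved answers; in a real recipe this would be the
# Prodigy database, repopulated by the update callback as answers arrive.
saved_answers = []

def blocking_stream(examples):
    """Yield task i only once the answer to task i-1 has been saved."""
    for i, eg in enumerate(examples):
        while len(saved_answers) < i:
            time.sleep(0.01)  # poll; the annotator sees "Loading..." meanwhile
        yield eg

examples = [{"text": "one"}, {"text": "two"}]
stream = blocking_stream(examples)
first = next(stream)                                  # yielded immediately
saved_answers.append({**first, "answer": "accept"})   # simulate the save
second = next(stream)                                 # unblocked by the save
```

The polling loop is exactly the "frustrating" part: every task after the first waits on a round trip before it can be shown.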

0 Likes

(Mike Ross) #11

This sounds like it will work perfectly for our use case, where we are not concerned about a few seconds of “Loading…” while waiting to yield. Any sense for when a version of Prodigy with the new answer_batch_size parameter will be released?

0 Likes

(Ines Montani) #12

It’s the last ticket on my list for Prodigy 1.7!

And yeah, in that case, I’m pretty sure the idea would work. Ultimately, you’d just have to orchestrate this between two closures: the stream generator function and the update callback.
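That orchestration could look roughly like this – a sketch with hypothetical names, where the shared state lives in the enclosing scope so both closures can see it:

```python
def make_stream_and_update(examples):
    """Build a stream generator and an update callback that share state."""
    answers = []  # shared between the two closures

    def stream():
        for i, eg in enumerate(examples):
            # Wait until the previous answer has been received, so the
            # next question can depend on it (busy-wait kept minimal here).
            while len(answers) < i:
                pass
            yield eg

    def update(batch):
        # Called by the server with each batch of answers.
        answers.extend(batch)

    return stream, update

examples = [{"text": "a"}, {"text": "b"}]
stream_fn, update = make_stream_and_update(examples)
tasks = stream_fn()
t1 = next(tasks)
update([{**t1, "answer": "accept"}])  # the callback unblocks the stream
t2 = next(tasks)
```

In a real recipe you would return `stream()` and `update` from the recipe function as the `"stream"` and `"update"` components, so the server wires them together.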

0 Likes

(Ines Montani) #13

Okay, so I ended up going with "instant_submit" after all (sorry for the back and forth), because the answer batch size would just lead to many confusing scenarios that were internally consistent and “logical”, but not always what you’d expect. But instant submission is now available in v1.7.0 and should work as expected :smiley: Thanks again for your patience on this!
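In the config, this is a single boolean flag; a minimal prodigy.json enabling it might look like this:

```json
{
  "instant_submit": true
}
```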

0 Likes

#14

I finally got a chance to try this out and it’s nice, thanks for implementing it!

There’s only one more small issue: the UI calls “get_questions” before it calls “give_answers”, so if we implement any kind of refresh or browser-restart resilience, the last example is shown twice, because it’s not yet saved by the time we’re asking for more examples.

By default, whenever you close the browser or refresh, examples that were previously shown are lost forever, since the stream will just pull the next one. In a scenario where we’re trying to get a whole dataset labeled without missing values, that doesn’t work so well - sometimes people’s batteries die or whatever, and they need to be able to return to the task.

0 Likes

(Ines Montani) #16

Thanks!

The very first request always needs to be get_questions, because there needs to be a queue of questions. After that, the app tries to make sure there are always enough questions in the queue so that the annotator never hits a block or needs to wait long for the next batch. Also see my comments on this from earlier in the thread:

Yes, in that case (at least if you want to handle everything in a single session), you probably want your stream to be an infinite loop that keeps repeating the stream until all examples are confirmed to be in the dataset. See here for an example.
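Such an infinite loop could be sketched like this – pure Python, with a set standing in for the dataset; in a real recipe you would check the Prodigy database instead, and all names here are illustrative:

```python
def infinite_stream(examples, is_annotated):
    """Keep re-sending unannotated examples until all are in the dataset."""
    while True:
        sent_any = False
        for eg in examples:
            if not is_annotated(eg):
                sent_any = True
                yield eg
        if not sent_any:
            break  # every example is confirmed annotated

# Simulated "dataset": texts whose answers have already been saved.
annotated = {"b"}
examples = [{"text": "a"}, {"text": "b"}, {"text": "c"}]

stream = infinite_stream(examples, lambda eg: eg["text"] in annotated)
seen = []
for eg in stream:
    seen.append(eg["text"])
    annotated.add(eg["text"])  # simulate the answer being saved
```

Because the outer loop re-walks the source until nothing is left unannotated, examples lost to a closed tab or a dead battery come around again on the next pass.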

0 Likes

#17

Yeah I have the endless loop logic implemented, but you see how it’s impossible for me to not show the last example twice, if the “save” doesn’t go through before the “fetch”, right? Of course on a fresh load, the app needs to fetch example(s) first, but when saving, the only reason to read before writing is a minor speed boost…

0 Likes

(Ines Montani) #18

Yes, I definitely know what you mean. I’ll have a look and test if we can delay requesting a new batch in the “instant submit” mode until the answers are sent. I think the tricky part here is that the request posting the answers doesn’t only have to be made, the answer also has to be present in the database already before the next batch is requested.

0 Likes