Sure! The "id" field on the token is the token ID, i.e. its index within the text. These IDs are assigned automatically by Prodigy and let you map annotated spans back to their token positions. The first token receives the ID 0, the second token the ID 1, and so on.
It’s also identical to spaCy’s Token.i attribute, for example:

import spacy

nlp = spacy.blank("en")  # a blank pipeline is enough for tokenization
doc = nlp("Tabdeli Yhi Log Laskengy")
print([token.i for token in doc])
# [0, 1, 2, 3]
The input and task hashes are unique IDs that help Prodigy identify annotations that apply to the same input text. My comment on this thread explains this in more detail. You can also find more information on the hashing functions in your PRODIGY_README.html, available for download with the Prodigy package.
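To make the distinction a bit more concrete, here's a rough sketch of the idea (not Prodigy's actual implementation, which uses its own hashing scheme and keys): the input hash is derived only from the raw input, while the task hash also takes the annotation question into account, e.g. the label or spans.

```python
import hashlib
import json

def input_hash(task):
    # Illustrative only: derived from the raw input, e.g. the "text".
    key = json.dumps({"text": task.get("text")}, sort_keys=True)
    return hashlib.md5(key.encode("utf8")).hexdigest()

def task_hash(task):
    # Also takes the question into account, e.g. the "label" and "spans".
    # Two questions about the same text share an input hash but get
    # different task hashes.
    key = json.dumps(
        {
            "input": input_hash(task),
            "label": task.get("label"),
            "spans": task.get("spans"),
        },
        sort_keys=True,
    )
    return hashlib.md5(key.encode("utf8")).hexdigest()

eg1 = {"text": "hello world", "label": "GREETING"}
eg2 = {"text": "hello world", "label": "OTHER"}
print(input_hash(eg1) == input_hash(eg2))  # True – same input
print(task_hash(eg1) == task_hash(eg2))    # False – different question
```

So when Prodigy sees two examples with the same input hash but different task hashes, it knows they're different questions about the same text.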
I'm not 100% sure I understand the question correctly. The output you're exporting with prodigy db-out contains all annotations stored in the database for that dataset, i.e. everything you've labelled in the web app and saved. So if you load in more texts, annotate them and then save them to the dataset, they'll be included in the exported data.
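In case it's useful: db-out exports newline-delimited JSON (JSONL), so each line is one annotated task, including its "answer". A minimal sketch of loading such an export back in (the example records here are made up):

```python
import json

# Two made-up records in the JSONL format produced by db-out:
# one task per line, each with the original input plus the "answer".
exported = '''{"text": "hello world", "answer": "accept"}
{"text": "foo bar", "answer": "reject"}'''

examples = [json.loads(line) for line in exported.splitlines()]
accepted = [eg for eg in examples if eg["answer"] == "accept"]
print(len(examples), len(accepted))  # 2 1
```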
If you haven't seen it already, you might also want to check out the "First steps" guide. It explains the most important terms and concepts, and shows a simple Prodigy workflow: