A bit more detail for anyone curious:
CVAT-DATAUP is a CVAT-compatible fork. Today it adds workflow signals (submit/review/accept) + dataset/class distribution views.
Next thing we’re building is “eval close to the dataset”: compare runs, slice failures, click from metric to the exact images/labels.
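To make "slice failures, click from metric to the exact images" concrete, here is a minimal sketch in plain Python — hypothetical data and helper names, not the actual CVAT-DATAUP API — of comparing per-class error counts between two eval runs and pulling up the images behind a given class's failures:

```python
from collections import Counter

# Hypothetical per-image predictions from two eval runs:
# each record is (image_id, true_class, predicted_class).
run_a = [("img1", "cat", "cat"), ("img2", "dog", "cat"), ("img3", "cat", "dog")]
run_b = [("img1", "cat", "cat"), ("img2", "dog", "dog"), ("img3", "cat", "dog")]

def errors_by_class(run):
    """Count misclassified images per ground-truth class."""
    return Counter(t for _, t, p in run if t != p)

def failing_images(run, cls):
    """Images where the given class was mislabeled -- the 'click through' step."""
    return [img for img, t, p in run if t == cls and t != p]

a, b = errors_by_class(run_a), errors_by_class(run_b)
# Classes whose error count grew from run A to run B (a regression slice):
regressed = {c: (a[c], b[c]) for c in set(a) | set(b) if b[c] > a[c]}

print(failing_images(run_a, "dog"))  # → ['img2']
```

The idea is just that every aggregate number stays joinable back to image IDs, so a metric is always one step away from the labels that produced it.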
If you’re using CVAT in production, I’d love to learn:
how you enforce label consistency across annotators
what your QA sampling strategy looks like
how you debug regressions (what do you look at first?)
If you want to try it, get in touch :)