# Batch Range Inference

## Overview
Batch range inference runs a model independently on every frame in a user-specified range `[start_frame, end_frame]`. Unlike recurrent inference (where each frame's output feeds into the next), batch range inference treats each frame as a standalone prediction: no state is carried between frames.
## User Workflow
- Configure model and slot bindings as for single-frame inference.
- Click ▶▶ Run Batch in the Properties panel.
- A dialog prompts for Start frame and End frame.
- The View widget’s progress bar shows per-frame progress.
- On completion, all predictions are written to the DataManager at their respective frame indices.
## Architecture

### `SlotAssembler::runBatchRange()` (synchronous, legacy)
The original synchronous loop lives in `SlotAssembler::runBatchRange()`. For each frame in the range:

1. `assembleInputs()` encodes dynamic and static inputs for the frame.
2. `model->forward()` runs the model.
3. `decodeOutputs()` decodes the output tensors into the DataManager at that frame.

A `ProgressCallback` is invoked before each frame so the UI can update.
### `SlotAssembler::runBatchRangeOffline()` (async, current)

The async version, `runBatchRangeOffline()`, runs on a `BatchInferenceWorker` (a `QThread` subclass). Key differences from the synchronous path:
- **MediaOverrides**: Accepts a map of `data_key → shared_ptr<MediaData>` clones so the worker has its own FFmpeg decoder state, avoiding seek contention with the UI thread.
- **decodeOutputsToBuffer()**: Instead of writing decoded outputs directly to the DataManager via `addAtTime()`, this helper returns `FrameResult` structs containing decoded geometry (`Mask2D`, `Point2D<float>`, `Line2D`) via a `DecodedOutputVariant`.
- **BatchInferenceResult**: The worker accumulates all `FrameResult`s into a `BatchInferenceResult` struct. On completion, the main thread bulk-writes results to the DataManager using `NotifyObservers::No`, then calls `notifyObservers()` once per affected data key.
- **Cancellation**: Accepts `std::atomic<bool> const & cancel_requested`, checked before each frame; partial results are still returned.
### `BatchInferenceWorker` (QThread)

Defined in an anonymous namespace in `DeepLearningPropertiesWidget.cpp`, following the `PipelineWorker` pattern from `MLCoreWidget`. The constructor receives copies of all bindings and a cloned `MediaData`. The worker emits `progressChanged(int, int)` via a Qt queued connection so the main-thread progress bar updates naturally.
## UI Behaviour During Async Inference
- “Run Batch” becomes “Cancel Batch” while running.
- “Run Single” and “Run Recurrent” are disabled.
- No `processEvents()` calls are needed; the Qt event loop runs freely.
- On completion or cancellation, results are written and the UI is restored.
## Progress Reporting

The `BatchInferenceWorker` emits `progressChanged(int, int)` from the worker thread. This signal is connected to `DeepLearningPropertiesWidget::batchProgressChanged` via a Qt queued connection, which in turn drives the `DeepLearningViewWidget` progress bar.
## Key Files

| File | Role |
|---|---|
| `BatchInferenceResult.hpp` | `FrameResult`, `DecodedOutputVariant`, `BatchInferenceResult` structs |
| `SlotAssembler.hpp` / `.cpp` | `runBatchRange()` (sync) and `runBatchRangeOffline()` (async) |
| `DeepLearningPropertiesWidget.cpp` | `BatchInferenceWorker`, `_onRunBatch()`, `_onBatchFinished()` |
| `DeepLearningViewWidget.cpp` | Progress bar update slot |
| `DeepLearningWidgetRegistration.cpp` | Signal/slot wiring |