Create new datasets from formatting pipelines #1566
Conversation
Desktop only. Sets up some scaffolding for doing the same with transcoding pipelines in the future.
BryonLewis
left a comment
Some unused props in JobConfigFilterTranscodeDialog.vue.
Use the constant instead of ['filter', 'transcode'] in one location.
Duplicate job.on('exit', () => ...) calls.
There is a suggestion about refactoring the job.on('exit') handling; it isn't a hard requirement for this PR, just a possible way to simplify some of the logic at the end of the runPipeline function.
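The suggested consolidation could look roughly like this. This is a hypothetical sketch, not the PR's code: the function name, the step list, and the use of a plain EventEmitter in place of the real child-process job object are all assumptions for illustration.

```typescript
import { EventEmitter } from 'events';

// Hypothetical sketch: gather the post-pipeline steps into one list and
// attach a single 'exit' handler, instead of registering
// job.on('exit', ...) in two places inside runPipeline.
export function onPipelineExit(
  job: EventEmitter,
  steps: Array<(code: number | null) => void>,
): void {
  // 'once' also guards against the handler firing twice.
  job.once('exit', (code: number | null) => {
    steps.forEach((step) => step(code));
  });
}
```

Each post-pipeline action (import the new dataset, refresh renderers, clean up) then becomes one entry in `steps`, so the exit path reads top to bottom in a single place.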
There may be another thing that Matt wants: support for importing new annotations when running a filter. Once the system knows whether it is a filter job or a transcode job, it would either copy over the existing annotations, or, if the pipeline creates new annotations, copy those over instead.
client/dive-common/components/JobConfigFilterTranscodeDialog.vue
One issue is that when track annotations are generated alongside images/videos, the annotations correspond to the new images, not the original sequence. I've asked claude to fix this on a local branch, and it has, in a commit that uses the pipeline prefixes to determine which pipelines produce image or video output. That is one way to do it, though I'm worried it isn't the best. Maybe DIVE can auto-detect when images are produced, or, as a fallback, pipelines could have some specifier in their headers that indicates this? Alternatively that commit could probably be taken as-is, though I'm worried I'll eventually put a pipeline in Utilities or somewhere else that produces image outputs.
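The prefix-based approach described above might look like the following. This is purely illustrative: the prefix list and the function name are assumptions, not taken from the commit.

```typescript
// Hypothetical sketch of prefix-based detection: pipelines whose names start
// with one of these prefixes are assumed to produce new image/video output.
const MEDIA_PRODUCING_PREFIXES = ['filter_', 'transcode_'];

export function producesNewMedia(pipelineName: string): boolean {
  return MEDIA_PRODUCING_PREFIXES.some((prefix) => pipelineName.startsWith(prefix));
}
```

The fragility noted in the comment is visible here: any pipeline that produces media but lives outside the listed prefixes would silently be missed, which is why a header specifier or output auto-detection might be more robust.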
My other concern, besides the annotation issue, is that having the default output filename end in a random hash might be counterintuitive for non-programming users. Maybe instead of random characters at the end, the default could be something more legible, e.g. [origin_name]_[pipeline_postfix][n], where the postfix is enhance, debayered, or just filtered by default, and n is an integer starting from 1, like sequence_name_filtered1.
It's a timestamp, which should still prevent collisions, and we don't have to track any state while creating the default name. As an enhancement I can see updating the bulk pipeline table to let users pick a name for each dataset, but if everything else here seems OK, I'd rather do that as a follow-up.
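A minimal sketch of the timestamp idea, assuming a helper of this shape (the function name and exact format are illustrative, not the PR's code):

```typescript
// Hypothetical sketch of a timestamp-based default dataset name. An ISO
// timestamp is stateless and monotonic per run, so two pipeline runs started
// at different times never collide.
export function defaultDatasetName(originName: string, pipelineType: string): string {
  // Replace characters that are awkward in file/dataset names.
  const stamp = new Date().toISOString().replace(/[:.]/g, '-');
  return `${originName}_${pipelineType}_${stamp}`;
}
```

A follow-up that lets users pick names in the bulk table could simply pre-fill the input with this value.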
BryonLewis
left a comment
For any remaining issues, just create a task in the backlog for future PRs.
@BryonLewis if you could PTAL the newest commit, I updated it.
@mattdawkins this causes the viame pipelines to write those files directly to the new dataset directory instead of having to move them there later, as claude suggests in 31281e3.
I fixed a small issue with what the conditional was checking, and I also did a small modification to the handling of deleting datasets that are found inside of DIVE_Output_Jobs. Just check over what I did and I think this is good to merge.
Added an additional commit to simplify the DIVE_Output_Jobs checking logic. It took a bit of remembering, but I needed to update the tests: the tests before didn't load the meta.json file. Loading the meta.json file uses mock-fs, which requires that each mocked file's contents be a string value (hence the JSON.stringify()).
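For reference, the mock-fs requirement described above looks roughly like this. The paths and the fixture shape are assumed for illustration, not copied from the repo.

```typescript
// mock-fs wants every mocked file's contents as a string (or Buffer), so a
// meta.json fixture object must go through JSON.stringify before mocking.
export const meta = { id: 'dataset1', type: 'image-sequence' }; // assumed fixture shape
export const mockedFiles: Record<string, string> = {
  '/viame-data/dataset1/meta.json': JSON.stringify(meta),
};
// mockFs(mockedFiles); // the actual mock-fs call, omitted to keep this snippet dependency-free
```

Code under test then reads and JSON.parses the mocked file exactly as it would a real meta.json on disk.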
Fix #1453
Changes
Pipelines of the type filter and transcode now save their output to a new directory under VIAME_DATA. This data is imported after the pipelines run. If running pipelines in bulk, a default name is given to the datasets that are created. If running one of these pipelines on a single dataset from the main data view, a new modal window prompts the user to name the new dataset themselves.
A new function sendToRenderer has been created to send messages from the main process to renderer processes. It is used as part of this set of changes to tell the renderers to refresh the available datasets, meaning that newly created datasets are shown to the user right after they are available.
Sometimes after import it is clear that a dataset needs to be converted to a web-friendly format. This conversion happens in the same job as the original pipeline, after the new data is ingested.
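A sketch of what sendToRenderer might look like: the name comes from this PR, but the signature, the channel name, and the target abstraction are assumptions. In Electron, the targets would come from BrowserWindow.getAllWindows() and send would be win.webContents.send.

```typescript
// Minimal renderer-target abstraction, assumed for illustration; in the real
// app this would be Electron's webContents.
interface RendererTarget {
  send(channel: string, ...args: unknown[]): void;
}

// Hypothetical sketch of sendToRenderer: broadcast one IPC message from the
// main process to every open renderer so they all refresh their dataset lists.
export function sendToRenderer(
  targets: RendererTarget[],
  channel: string,
  ...args: unknown[]
): void {
  targets.forEach((target) => target.send(channel, ...args));
}
```

After a pipeline job finishes importing its new dataset, a single call like `sendToRenderer(windows, 'refresh-datasets')` (channel name assumed) would make every open window pick up the new entry.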
Testing
Test filter and transcode pipelines from both the bulk pipeline menu and dataset view pipeline selector. Ensure that new datasets are created with the expected names, and the resultant data is visible.