
Forward destination_data_layer in graph_proxy::unfold#401

Open
olantwin wants to merge 3 commits into Framework-R-D:main from olantwin:fix/graph-proxy-unfold-destination-layer

Conversation

@olantwin
Contributor

@olantwin olantwin commented Mar 8, 2026

Playing around with different data layers, I noticed that it wasn't possible to specify the destination layer for an unfold. This PR attempts to fill that gap.

Comments are very welcome; as I'm just starting with Phlex, it's possible or even likely that I overlooked a better way to do this.

With the PR code, a registration like the following becomes possible:

m.unfold<tw::ClusterSplitter>(
    "find_candidates",
    &tw::ClusterSplitter::has_more,
    [](tw::ClusterSplitter const& splitter, std::size_t index) {
        return splitter.emit(index);
    },
    concurrency::unlimited,
    "event_candidate")
    .input_family(product_query{
        .creator = "merge_window",
        .layer = "time_window",
        .suffix = "tw_data"})
    .experimental_when("tw_module:trigger")
    .output_products("candidate");

All the best,
Oliver

graph_proxy::unfold was missing the destination_data_layer parameter and
used create_glue(false) which produces glue<void_tag> instead of
glue<Splitter>. This meant algorithm authors could not specify the child
layer for unfold, and the Splitter template was not forwarded correctly
to glue::unfold.

This commit forwards the parameter and constructs glue<Splitter>
directly, matching what framework_graph already does.
@greenc-FNAL
Contributor

Review the full CodeQL report for details.

@knoepfel knoepfel force-pushed the fix/graph-proxy-unfold-destination-layer branch from 1ddb400 to 51fa5fe Compare March 19, 2026 01:49
@Framework-R-D Framework-R-D deleted a comment from github-actions bot Mar 19, 2026
  auto unf,
- concurrency c = concurrency::serial)
+ concurrency c = concurrency::serial,
+ std::string destination_data_layer = "")
Member
Good catch, @olantwin! I think we want destination_data_layer to not have a default. So can you swap the order of concurrency c and std::string destination_data_layer and remove the latter's default?

Contributor Author
Done.

@codecov

codecov bot commented Mar 19, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main     #401      +/-   ##
==========================================
- Coverage   84.43%   84.31%   -0.13%     
==========================================
  Files         127      127              
  Lines        3329     3329              
  Branches      564      564              
==========================================
- Hits         2811     2807       -4     
- Misses        325      326       +1     
- Partials      193      196       +3     
Flag      | Coverage Δ
unittests | 84.31% <ø> (-0.13%) ⬇️

Flags with carried forward coverage won't be shown.

Files with missing lines   | Coverage Δ
phlex/core/graph_proxy.hpp | 90.00% <ø> (ø)

... and 4 files with indirect coverage changes


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6d7eb1a...24ec1c8.



3 participants