CVS-178009: Fix provider tests based on code review #880
base: ovep-develop
Conversation
Looks good to me. I will comment in the mainline about the tests.
MayureshV1
left a comment
Looks good.
Dmitri from the ORT team has also reviewed the change.
This implementation would fail the test case when single ops fall back to running on the default CPU EP. It needs more time to rework, so I'm converting this to a draft. For now, we will revert these test changes. PR #883 FYI. @yuslepukhin
Force-pushed from 8ec98b6 to 6ae5d7e
Force-pushed from 0649789 to 4cebc5d
Pull request overview
This PR addresses provider test issues identified during code review by updating OpenVINO execution provider test exclusions and support configurations. The changes primarily involve migrating from deprecated test APIs to newer configuration methods and adding OpenVINO-specific test variants with adjusted expected outputs.
Key changes:
- Migrated test exclusion patterns from the deprecated test.Run() parameter to the ConfigExcludeEps() and RunWithConfig() API (see the sketch after this list)
- Added OpenVINO-specific test cases with adapted expected values due to different quantization rounding behaviors
- Updated OpenVINO capability definitions to enable previously disabled operations
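As a rough illustration of the API migration described above, the before/after pattern looks something like the sketch below. Only ConfigExcludeEps() and RunWithConfig() are names taken from this PR; the test name, opset, and tensor values are made up for the example.

```cpp
// Illustrative sketch only; not code from this PR.
OpTester test("QuantizeLinear", 21);
test.AddInput<float>("x", {4}, {0.0f, 1.0f, 2.0f, 3.0f});
test.AddInput<float>("y_scale", {}, {1.0f});
test.AddOutput<uint8_t>("y", {4}, {0, 1, 2, 3});

// Deprecated pattern: excluded EPs passed directly to Run().
// test.Run(OpTester::ExpectResult::kExpectSuccess, "", {kOpenVINOExecutionProvider});

// Pattern this PR migrates to: configure the exclusion, then run with that config.
test.ConfigExcludeEps({kOpenVINOExecutionProvider});
test.RunWithConfig();
```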
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| slice_op.test.cc | Added OpenVINO to excluded providers for tests with ONNX shape disagreement |
| resize_op_test.cc | Migrated to new exclusion API and added OVEP-specific test variants for uint8 resize operations |
| quantize_linear_test.cc | Migrated to new exclusion API and added OVEP-specific quantization tests with adjusted expected outputs |
| cast_op_test.cc | Added exclusion for OpenVINO when input size is zero |
| loop_test.cc | Added OpenVINO to excluded providers for iteration count test |
| quantize_ops_test.cc | Migrated contrib op tests to new API and added OVEP-specific test variants |
| data_ops.cc | Removed operations from model-only support list and added UINT32 type support |
Comments suppressed due to low confidence (1)
onnxruntime/test/contrib_ops/quantize_ops_test.cc:1
- The expected output differs from the original test (line 535 expects Int4x2(-1, 1) while this OVEP variant expects Int4x2(0, 1) for the second pair). Add a comment explaining why OpenVINO's Int4 quantization produces a different value for the -3.0f input.
// Copyright (c) Microsoft Corporation. All rights reserved.
Co-authored-by: Copilot <[email protected]>
This reverts commit 8b81a24.
[&onnx_name](const auto& ov_parameter_info) { return ov_parameter_info.get_names().contains(onnx_name); });
bool matched_names = it != ov_parameters.end();

if (it == ov_parameters.end()) continue;
If we can't match ONNX I/O names to OV I/O names, we shouldn't just skip it. We should fail.
In this case, OV has optimized out this input because it is unused. How should we handle this case?
Hmm, in that case, perhaps we only allow skipping if there are fewer OV I/O parameters than ONNX ones. We should also log a warning that we couldn't find the ONNX I/O name in the resulting OV compiled model and that it may have been optimized out.
Thoughts?
Can we add a mechanism in the OV ONNX FE to avoid the parameter being optimized out? This needs more investigation into how to keep the unused Parameter.
I mean sure, ideally OV doesn't remove I/O, but in the time between now and a potential future where it doesn't, I'd prefer that we don't silently skip binding I/O.
As an aside, do we know the FE is removing the input? Or are you proposing that there be an ONNX FE setting that prevents later passes from removing model I/O?
I have tried removing the code in the OV ONNX FE; remove_dangling_parameters deletes the floating (unused) parameters in the model. The unused parameter may have a 0-sized buffer, which causes an issue when allocating the tensors. How about just outputting a warning?
std::shared_ptr<ov::Model> Graph::convert() {
  convert_to_ov_nodes();
  // Drops unused (dangling) Parameters from the model; this is where the unmatched ONNX input disappears.
  remove_dangling_parameters();
  auto function = create_model();
  set_metadata(function);
  return function;
}
@sgbihu, if we need an OV change we will not intercept this for ORT 1.24 and can plan those test changes later.
For any changes we make to the tests, can we ensure functionality with OV 2025.4?
auto shape_size = ov::shape_size(shape.get_shape());
if (0 == shape_size) {
  has_dynamic_io_ = true;
}
Can you elaborate a bit more on this case? The shape came from the OV compiled model, and the shape is reported as static but has a zero dimension? Or maybe no dimension? Does that actually make it dynamic?
Another option is to disable this test case due to the 0-sized buffer.
I'd prefer we don't have specific handling in I/O binding for a case OV doesn't support, but if we keep it, just comment why it's there so it can be more easily evaluated if we want to change it in the future.
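If the zero-size check stays, one way to satisfy this request might look like the sketch below. The code lines are the ones quoted in this thread; the comment text is only a reading of the discussion above (0-sized buffers from unused inputs failing allocation), so it should be verified before use.

```cpp
auto shape_size = ov::shape_size(shape.get_shape());
if (0 == shape_size) {
  // Workaround (assumed reason): the OV compiled model can report a static shape with
  // zero total elements (e.g. for an unused/optimized-out input), and binding a
  // 0-sized buffer fails during tensor allocation. Treat the model as having dynamic
  // I/O so the binding is deferred instead.
  has_dynamic_io_ = true;
}
```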
[&onnx_name](const auto& ov_parameter_info) { return ov_parameter_info.get_names().contains(onnx_name); });
bool matched_names = it != ov_parameters.end();

if (it == ov_parameters.end()) {
I'd prefer that we only do this if onnx_input_map.size() > std::distance(ov_parameters.begin(), ov_parameters.end()) and we check it after we handle session_context.enable_causallm.
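A possible shape of that suggestion, purely as a sketch: the variable names are taken from the diff hunks quoted in this thread, and the enable_causallm handling and surrounding code are not shown here, so the placement is assumed.

```cpp
if (it == ov_parameters.end()) {
  // Only tolerate a missing name match when OV exposes fewer parameters than the
  // ONNX graph does, i.e. the input may have been optimized out of the compiled model.
  const auto ov_param_count =
      static_cast<size_t>(std::distance(ov_parameters.begin(), ov_parameters.end()));
  if (onnx_input_map.size() > ov_param_count) {
    LOGS_DEFAULT(WARNING) << "ONNX input '" << onnx_name
                          << "' was not found in the OV compiled model; it may have been optimized out.";
    continue;
  }
  ORT_THROW("Could not match ONNX input '", onnx_name, "' to an OV parameter.");
}
```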
auto shape_size = ov::shape_size(shape.get_shape());
if (0 == shape_size) {
  has_dynamic_io_ = true;
}
I'd prefer we don't have specific handling in I/O binding for a case OV doesn't support, but if we keep it, just comment why it's there so it can be more easily evaluated if we want to change it in the future.
ericcraw
left a comment
I'm going to be out for the holidays; as long as we address [this], consider it an approval from me. (#880 (comment))

Description
Motivation and Context
https://jira.devtools.intel.com/browse/CVS-178009