
Conversation


@sgbihu sgbihu commented Dec 9, 2025

Description

  • Microsoft forbade setting the error for integer types.
  • Use the new API to enable/disable the tests.

Motivation and Context

https://jira.devtools.intel.com/browse/CVS-178009

@yuslepukhin

Looks good to me. I will comment in the mainline about the tests.


@MayureshV1 MayureshV1 left a comment


Looks good.
Dmitri from the ORT team has also reviewed the change.

@MayureshV1 MayureshV1 marked this pull request as draft December 10, 2025 00:23
@MayureshV1

This implementation would fail the test case when single ops fall back to running on the default CPU EP. It needs more time to rework. Converting to draft.

For now, we will revert these test changes. PR #883

FYI. @yuslepukhin

@MayureshV1 MayureshV1 changed the title Fix provider tests based on code review CVS-178009: Fix provider tests based on code review Dec 10, 2025
@sgbihu sgbihu force-pushed the fix_provider_tests branch from 8ec98b6 to 6ae5d7e Compare December 11, 2025 03:17
@sgbihu sgbihu force-pushed the fix_provider_tests branch from 0649789 to 4cebc5d Compare December 15, 2025 23:30
@MayureshV1 MayureshV1 marked this pull request as ready for review December 15, 2025 23:44
@MayureshV1 MayureshV1 self-requested a review December 15, 2025 23:45

Copilot AI left a comment


Pull request overview

This PR addresses provider test issues identified during code review by updating OpenVINO execution provider test exclusions and support configurations. The changes primarily involve migrating from deprecated test APIs to newer configuration methods and adding OpenVINO-specific test variants with adjusted expected outputs.

Key changes:

  • Migrated test exclusion patterns from the deprecated test.Run() parameter to the ConfigExcludeEps() and RunWithConfig() API
  • Added OpenVINO-specific test cases with adapted expected values due to different quantization rounding behaviors
  • Updated OpenVINO capability definitions to enable previously disabled operations

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.

Show a summary per file:

  • slice_op.test.cc: Added OpenVINO to excluded providers for tests with ONNX shape disagreement
  • resize_op_test.cc: Migrated to the new exclusion API and added OVEP-specific test variants for uint8 resize operations
  • quantize_linear_test.cc: Migrated to the new exclusion API and added OVEP-specific quantization tests with adjusted expected outputs
  • cast_op_test.cc: Added an exclusion for OpenVINO when the input size is zero
  • loop_test.cc: Added OpenVINO to excluded providers for the iteration count test
  • quantize_ops_test.cc: Migrated contrib op tests to the new API and added OVEP-specific test variants
  • data_ops.cc: Removed operations from the model-only support list and added UINT32 type support
Comments suppressed due to low confidence (1)

onnxruntime/test/contrib_ops/quantize_ops_test.cc:1

  • The expected output differs from the original test (line 535 expects Int4x2(-1, 1) while this OVEP variant expects Int4x2(0, 1) for the second pair). Add a comment explaining why OpenVINO's Int4 quantization produces a different value for the -3.0f input.


[&onnx_name](const auto& ov_parameter_info) { return ov_parameter_info.get_names().contains(onnx_name); });
bool matched_names = it != ov_parameters.end();

if (it == ov_parameters.end()) continue;


If we can't match ONNX I/O names to OV I/O names, we shouldn't just skip it. We should fail.

@sgbihu (author)


In this case, OV has optimized out this input because it is unused. How should we handle this case?


Hmm, in that case, perhaps we only allow skipping if there are fewer OV I/O parameters than ONNX ones. We should also log a warning that we couldn't find the ONNX I/O name in the resulting OV compiled model and that it may have been optimized out.

Thoughts?

@sgbihu (author)


Can we add a mechanism in the OV ONNX FE to avoid the parameter being optimized out? This needs more investigation into how to keep the unused Parameter.

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I mean sure, ideally, OV doesn't remove I/O but in the time between now and a potential future that it doesn't, I'd prefer that we don't silently skip binding I/O.

As an aside, do we know the FE is removing the input? Or are you proposing that there be an ONNX FE setting that prevents later passes from removing model I/O?

@sgbihu (author)


I have tried removing the code in the OV ONNX FE; remove_dangling_parameters deletes the floating parameters in the model. The unused parameter may have a 0-sized buffer, which causes an issue when allocating the tensors. How about just outputting some warnings?

std::shared_ptr<ov::Model> Graph::convert() {
    convert_to_ov_nodes();
    remove_dangling_parameters();
    auto function = create_model();
    set_metadata(function);
    return function;
}


@sgbihu, if we need an OV change, we will not intercept this for ORT 1.24 and can plan those test changes later.
For any changes we make to the tests, can we ensure functionality with OV 2025.4?

auto shape_size = ov::shape_size(shape.get_shape());
if (0 == shape_size) {
  has_dynamic_io_ = true;
}


Can you elaborate a bit more on this case? The shape came from the OV compiled model, and the shape is reported as static but has a zero dimension? Or maybe no dimension? Does that actually make it dynamic?


@sgbihu sgbihu Dec 18, 2025


The reason is that the result is a 0-size tensor. OV can't create a 0-size buffer. We need to set this as dynamic to avoid allocating the buffer.

@sgbihu (author)


Alternatively, we can disable this test case due to the 0-size buffer.


I'd prefer we don't have specific handling in I/O binding for a case OV doesn't support, but if we keep it, just comment why it's there so it can be more easily evaluated if we want to change it in the future.

[&onnx_name](const auto& ov_parameter_info) { return ov_parameter_info.get_names().contains(onnx_name); });
bool matched_names = it != ov_parameters.end();

if (it == ov_parameters.end()) {


I'd prefer that we only do this if onnx_input_map.size() > std::distance(ov_parameters.begin(), ov_parameters.end()) and we check it after we handle session_context.enable_causallm.

auto shape_size = ov::shape_size(shape.get_shape());
if (0 == shape_size) {
  has_dynamic_io_ = true;
}


I'd prefer we don't have specific handling in I/O binding for a case OV doesn't support, but if we keep it, just comment why it's there so it can be more easily evaluated if we want to change it in the future.


@ericcraw ericcraw left a comment


I'm going to be out for the holidays; as long as we address [this], consider it an approval from me. (#880 (comment))
