[common] Add support for cuBLASLt GEMM for GroupedTensor #2502
base: main
Conversation
Signed-off-by: Pawel Gadzinski <[email protected]>
for more information, see https://pre-commit.ci
- Add FP8 scale_inv pointer handling in nvte_grouped_gemm for proper FP8 GEMM
- Fix random padding in tests to ensure 16-byte alignment for all dtypes
- Reorder GroupedGemmSetupWorkspace members for natural alignment
- Remove debug prints

Signed-off-by: Pawel Gadzinski <[email protected]>
Signed-off-by: Pawel Gadzinski <[email protected]>
/te-ci L0
Greptile Summary

The implementation follows established patterns from the codebase, includes proper validation, and has thorough test coverage.

Confidence Score: 4/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant nvte_grouped_gemm
    participant Validation
    participant Operand Selection
    participant Setup Kernel
    participant cuBLASLt
    User->>nvte_grouped_gemm: Call with A, B, C, D, alpha, beta
    nvte_grouped_gemm->>Validation: Check SM >= 100 (Blackwell)
    nvte_grouped_gemm->>Validation: validate_grouped_gemm_inputs()
    Validation-->>nvte_grouped_gemm: OK
    nvte_grouped_gemm->>Operand Selection: select_grouped_operand(A, transa)
    Operand Selection->>Operand Selection: Check FP8 TN layout requirements
    Operand Selection->>Operand Selection: Choose row-wise vs column-wise data
    Operand Selection-->>nvte_grouped_gemm: A_sel (dptr, dtype, trans, use_columnwise)
    nvte_grouped_gemm->>Operand Selection: select_grouped_operand(B, transb)
    Operand Selection-->>nvte_grouped_gemm: B_sel (dptr, dtype, trans, use_columnwise)
    nvte_grouped_gemm->>Setup Kernel: Allocate setup workspace
    nvte_grouped_gemm->>Setup Kernel: launch_grouped_gemm_setup()
    Setup Kernel->>Setup Kernel: setup_grouped_gemm_kernel<<<blocks, threads>>>
    Note over Setup Kernel: Per-tensor computation:<br/>- Compute A/B/C/D pointers from offsets<br/>- Compute M/N/K from dimensions<br/>- Fill alpha_ptrs, beta_ptrs arrays
    Setup Kernel-->>nvte_grouped_gemm: Workspace populated
    nvte_grouped_gemm->>cuBLASLt: init_matrix_layouts(descA, descB, descC, descD)
    nvte_grouped_gemm->>cuBLASLt: init_matmul_desc(op_A, op_B)
    nvte_grouped_gemm->>cuBLASLt: set_fp8_scale_pointers() if FP8
    nvte_grouped_gemm->>cuBLASLt: select_grouped_gemm_algo() with avg hints
    cuBLASLt-->>nvte_grouped_gemm: Algorithm selected
    nvte_grouped_gemm->>cuBLASLt: cublasLtMatmul()
    Note over cuBLASLt: Execute grouped GEMM:<br/>D[i] = alpha[i] * op(A[i]) @ op(B[i]) + beta[i] * C[i]
    cuBLASLt-->>nvte_grouped_gemm: GEMM complete
    nvte_grouped_gemm-->>User: Return
```
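The per-tensor setup step in the diagram can be modeled on the host. The sketch below is a hypothetical illustration of what the setup computes: each matrix's data pointer is the group's base pointer plus a running element offset, with per-matrix dimensions recorded alongside. The types and names here are illustrative stand-ins, not the actual TE structures.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical host-side model of the pointer-array construction that
// setup_grouped_gemm_kernel performs on the GPU.
struct SetupResult {
  std::vector<const float *> ptrs;  // per-matrix data pointers
  std::vector<int64_t> m;           // per-matrix row counts
};

SetupResult build_pointer_arrays(const float *base,
                                 const std::vector<int64_t> &rows_per_tensor,
                                 int64_t k) {
  SetupResult out;
  int64_t offset = 0;  // element offset into the contiguous group buffer
  for (int64_t rows : rows_per_tensor) {
    out.ptrs.push_back(base + offset);
    out.m.push_back(rows);
    offset += rows * k;  // advance past this matrix's rows*k elements
  }
  return out;
}
```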
Additional Comments (4)
- tests/cpp/operator/test_grouped_gemm.cu, line 75 (link) logic: missing columnwise_data in move assignment
- tests/cpp/operator/test_grouped_gemm.cu, lines 336-337 (link) logic: kSameFirst and kSameLast test cases use identical shape tuples. Should kSameFirst and kSameLast have different shape patterns to properly test the respective scenarios?
- tests/cpp/operator/test_grouped_gemm.cu, lines 367-378 (link) logic: missing case for InputCase::kFP8Delayed
- transformer_engine/common/include/transformer_engine/gemm.h, lines 265-266 (link) style: The avg_m, avg_n, avg_k parameters are not documented in the function comment. What do these average dimension parameters represent and how should they be computed?

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
4 files reviewed, 4 comments
- Remove unused alignment parameter from GroupedGemmSetupWorkspace::from_buffers
- Simplify select_grouped_operand by removing dead code branches
- Add GroupedOperandSelection.tensor field to avoid passing tensor separately
- Extract set_fp8_scale_pointers and init_matrix_layouts helpers
- Add safety check for FP8 on Hopper column-wise fallback
- Support NULL C tensor when beta=0 (uses D as placeholder)
- Remove unused get_scale_inv() from test
- Add use_null_c test parameter and test case
- Fix documentation: alpha/beta are single element tensors only

Signed-off-by: Piotr Gadzinski <[email protected]>
Signed-off-by: Pawel Gadzinski <[email protected]>
Additional Comments (1)
- transformer_engine/common/include/transformer_engine/gemm.h, line 266 (link) syntax: Documentation incorrectly states alpha[i] and beta[i]. The implementation uses a single alpha and beta value for all matrices in the group (batch stride is 1 in init_matmul_desc at cublaslt_gemm.cu:1404), not per-matrix scaling.
4 files reviewed, 1 comment
- Change alpha/beta from single values to per-matrix arrays
- Validate alpha/beta have exactly num_tensors elements
- Update kernel to index alpha_ptr[idx] and beta_ptr[idx]
- Move alpha/beta validation to validate_grouped_gemm_inputs
- Update tests to use per-matrix alpha/beta arrays
- Update documentation

Signed-off-by: Piotr Gadzinski <[email protected]>
Signed-off-by: Pawel Gadzinski <[email protected]>
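The per-matrix alpha/beta change above implies a shape check on the scaling arrays. A minimal sketch of that invariant, using a hypothetical helper (the real check lives in validate_grouped_gemm_inputs and operates on NVTE tensors, not std::vector):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// After the change, alpha and beta must each hold exactly one value per
// matrix in the group, so the kernel can index alpha[idx] / beta[idx].
bool valid_per_matrix_scaling(std::size_t num_tensors,
                              const std::vector<float> &alpha,
                              const std::vector<float> &beta) {
  return alpha.size() == num_tensors && beta.size() == num_tensors;
}
```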
Signed-off-by: Piotr Gadzinski <[email protected]> Signed-off-by: Pawel Gadzinski <[email protected]>
/te-ci
```cpp
                       NVTEMatmulConfig config, cudaStream_t stream, const int64_t *avg_m,
                       const int64_t *avg_n, const int64_t *avg_k);
```
The average sizes seem like advanced configs that would be better to leave out of the top-level API. Can we move them inside NVTEMatmulConfig?
It was a suggestion of @ptrendx - customers may potentially want to use them if they know something more about the shapes. For example, if there are multiple tensors with K dimension D and one tensor with K dimension equal to 1, then telling cuBLAS that the average dim is D will potentially result in better performance.
We should definitely expose these options, but it would be better to put them in NVTEMatmulConfig rather than the top-level function signature. If you look at nvte_cublas_gemm:
TransformerEngine/transformer_engine/common/include/transformer_engine/gemm.h, lines 109 to 112 in 97a09c2:

```cpp
void nvte_cublas_gemm(const NVTETensor A, const NVTETensor B, NVTETensor D, const NVTETensor bias,
                      NVTETensor pre_gelu_out, bool transa, bool transb, bool grad,
                      NVTETensor workspace, bool accumulate, bool use_split_accumulator,
                      int math_sm_count, cudaStream_t stream);
```
You see advanced options like pre_gelu_out and use_split_accumulator that are only needed for specialized cases. It's even worse for use_split_accumulator, since that's only relevant for FP8 on Hopper and now it's sitting uselessly in the API forever. Compare with the v2 API, which is much closer to the original GEMM API:

TransformerEngine/transformer_engine/common/include/transformer_engine/gemm.h, lines 131 to 133 in 97a09c2:

```cpp
void nvte_cublas_gemm_v2(int transa, int transb, const float *alpha, const NVTETensor A,
                         const NVTETensor B, const float *beta, const NVTETensor C, NVTETensor D,
                         NVTETensor workspace, NVTEMatmulConfig config, cudaStream_t stream);
```
The idea of NVTEMatmulConfig is to hold these advanced, kernel-specific options in a way where we can add or deprecate them easily without breaking API changes.
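The config-object pattern argued for here can be sketched as a growable options struct: adding or deprecating a knob changes the struct, not the function signature, so it isn't a breaking API change. The struct and setter below are illustrative stand-ins for the opaque NVTEMatmulConfig handle, not its actual definition.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Illustrative options bag: unset fields mean "let the library pick a
// default"; new advanced knobs can be appended without touching the GEMM
// entry point's signature.
struct MatmulConfigSketch {
  std::optional<int64_t> avg_m;  // optional cuBLAS heuristic hints
  std::optional<int64_t> avg_n;
  std::optional<int64_t> avg_k;
};

void set_avg_k(MatmulConfigSketch &cfg, int64_t v) { cfg.avg_k = v; }
```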
I added an analogous NVTEGroupedMatmulConfig.
It's worth noting that cuBLAS does not support pre_gelu_out and use_split_accumulator for grouped GEMM.
Signed-off-by: Pawel Gadzinski <[email protected]>
Additional Comments (2)
- transformer_engine/common/include/transformer_engine/gemm.h, line 256 (link) syntax: Documentation mentions a non-existent config parameter - this parameter is not in the function signature on lines 276-280
- tests/cpp/operator/test_grouped_gemm.cu, line 141 (link) logic: Alignment calculation appears incorrect - multiplying 16 bytes by 8 bits gives 128 bits, not the intended 16-byte alignment
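The alignment comment boils down to keeping everything in bytes. A minimal sketch of rounding an offset up to a 16-byte boundary (an illustration, not the test's actual code):

```cpp
#include <cassert>
#include <cstddef>

// Alignment is expressed in bytes, not bytes-times-bits: 16 is the
// 16-byte boundary itself, not 16 * 8 = 128.
constexpr std::size_t align_up(std::size_t offset, std::size_t alignment = 16) {
  return (offset + alignment - 1) / alignment * alignment;
}
```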
6 files reviewed, 2 comments
Signed-off-by: Pawel Gadzinski <[email protected]>
/te-ci L0
Signed-off-by: Pawel Gadzinski <[email protected]>
Additional Comments (1)
- tests/cpp/operator/test_grouped_gemm.cu, line 485 (link) syntax: incorrect version check - should be 130100, not 130200. The API requires cuBLAS 13.1+ (version 130100), but this conditional check uses 130200. This mismatch means tests will be skipped even on cuBLAS 13.1.
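For context on the numbers in this comment: cuBLAS encodes its version as major*10000 + minor*100 + patch, so cuBLAS 13.1.0 is 130100 and 13.2.0 is 130200. A sketch of the corrected guard (the constant name is illustrative; the real check compares against CUBLAS_VERSION):

```cpp
#include <cassert>

// Gating on >= 130100 admits cuBLAS 13.1 runtimes; gating on 130200
// would wrongly skip them, which is the bug flagged above.
constexpr bool grouped_gemm_available(int cublas_version) {
  return cublas_version >= 130100;  // cuBLAS 13.1.0
}
```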
8 files reviewed, 1 comment
Signed-off-by: Pawel Gadzinski <[email protected]>
/te-ci
Signed-off-by: Pawel Gadzinski <[email protected]>
Additional Comments (4)
- transformer_engine/common/gemm/cublaslt_grouped_gemm.cu, line 445 (link) style: Use const_cast sparingly - casting away const from a_base could be risky if the caller expects the data to remain unchanged
- transformer_engine/common/gemm/cublaslt_grouped_gemm.cu, lines 451-455 (link) style: The comment mentions "Test stores A" but this is implementation code, not test code - update the comment for clarity
- tests/cpp/operator/test_grouped_gemm.cu, lines 282-283 (link) style: Commented-out code should be removed before merging
- transformer_engine/common/gemm/cublaslt_grouped_gemm.cu, lines 313-318 (link) style: Variable naming could be clearer - rowa/cola/rowb/colb could be confused with actual row/column counts vs storage dimensions. Consider renaming to lda_rows/lda_cols etc. or adding clarifying comments
8 files reviewed, 4 comments
Signed-off-by: Pawel Gadzinski <[email protected]>
Signed-off-by: Pawel Gadzinski <[email protected]>
```cpp
inline int64_t compute_avg_first_dim(const transformer_engine::GroupedTensor *t) {
  // logical_shape[0] is either num_tensors*M (uniform) or sum_of_M (varying first)
  // In both cases, dividing by num_tensors gives the average
  return static_cast<int64_t>(t->logical_shape.data[0]) / static_cast<int64_t>(t->num_tensors);
}
```
What is logical_shape.data[0]? Do we have access to this field if we want CUDA graphs (i.e. without a D2H copy)?
logical_shape represents the total shape of all tensors in the grouped tensor. For example, we can have a grouped tensor with logical shape [10, 10] containing tensors of shape [3, 10] and [7, 10]. The logical shape must be constant to use CUDA graphs.
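A numeric illustration of the averaging this enables: with logical shape [10, 10] and member shapes [3, 10] and [7, 10], the logical first dimension is 10 over 2 tensors, giving average M = 5. The helper below mirrors compute_avg_first_dim without the TE types.

```cpp
#include <cassert>
#include <cstdint>

// Whether the group stores a uniform shape (num_tensors * M rows) or varying
// shapes summing to sum_of_M rows, dividing by num_tensors yields the average.
inline int64_t avg_first_dim(int64_t logical_first_dim, int64_t num_tensors) {
  return logical_first_dim / num_tensors;  // integer average, as in the PR
}
```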
I see, this could be an issue for future CUDA-graph support of dynamic-shape MoE. Basically, it works like this: for an input buffer of shape [10, 10], the [10, 10] shape is static, but an on-device tensor says the splits are [[3, 10], [3, 10]], so the actual valid shape is [6, 10]. The excess shape would be wasted GEMM performance (?)
That being said, this function is only used for a cuBLAS heuristic, so it's nothing fundamental. Can we support injecting static values directly here? Assuming users know their model well, they can set those values manually and avoid needing these shapes on host, since that would mean a D2H copy and therefore no CUDA graphs.
The only problem would be that the wrong statistics are computed - no CUDA errors or incorrect memory reads, am I right?
There is support for custom statistic injection here.
Wrong statistics can be avoided if we zero-pad, but that's another story; need to check with the cuBLAS folks.
BTW, does this PR have BF16 grouped GEMM? Is the grouped GEMM optimization SM100-only for now?
It supports BF16, and there are plans to support other devices as well.
Co-authored-by: Przemyslaw Tredak <[email protected]> Signed-off-by: Paweł Gadziński <[email protected]>
Additional Comments (4)
- transformer_engine/common/gemm/cublaslt_grouped_gemm.cu, lines 449-453 (link) style: const_cast removes const correctness. While the comment says cuBLAS won't modify the data, this breaks type-safety guarantees. Consider whether the cuBLASLt API could accept const void** for input matrices, or document why this is necessary.
- tests/cpp/operator/test_grouped_gemm.cu, lines 136-145 (link) style: Random padding for alignment testing is good, but the random seed is hardcoded (12345). Verify tests are deterministic and document why this specific padding strategy was chosen.
- transformer_engine/common/gemm/cublaslt_grouped_gemm.cu, lines 582-585 (link) style: Fast accumulator mode is always enabled for FP8, but split accumulator isn't supported by the cuBLASLt grouped GEMM API. Check that this accuracy trade-off is documented in the public API so users are aware.
- transformer_engine/common/gemm/cublaslt_grouped_gemm.cu, line 514 (link) style: The SM100 (Blackwell) requirement is very restrictive. Verify this is a hard requirement of the cuBLASLt grouped GEMM API and document the limitation clearly for users.
8 files reviewed, 4 comments
Signed-off-by: Pawel Gadzinski <[email protected]>
Signed-off-by: Pawel Gadzinski <[email protected]>
Signed-off-by: Pawel Gadzinski <[email protected]>
Additional Comments (3)
- transformer_engine/common/gemm/config.h, lines 47-48 (link) logic: attr_sizes calculation for optional fields includes the full std::optional size rather than just int64_t. When nvte_get_grouped_matmul_config_attribute copies the value to a buffer, it only copies sizeof(int64_t) (lines 150-160 in config.cpp), but attr_sizes declares sizeof(avg_m), which is sizeof(std::optional<int64_t>) (typically 16 bytes, not 8).
- transformer_engine/common/gemm/cublaslt_grouped_gemm.cu, line 449 (link) style: const_cast removes the const qualifier from input pointers A, B, C. While the comment notes this is safe because cuBLAS won't modify the data, it relies on cuBLAS internal behavior and could break if the cuBLAS implementation changes - consider verifying this is documented in the cuBLAS API.
- tests/cpp/test_common.cu, lines 1093-1102 (link) style: Random padding logic may produce non-deterministic test behavior. While the seed is fixed (12345), random padding affects memory layout, which could expose alignment issues inconsistently across test runs - consider documenting that this is intentional for broader coverage.
9 files reviewed, 3 comments
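The attr_sizes issue above comes down to a size mismatch that is easy to demonstrate: on mainstream ABIs std::optional<int64_t> carries an engaged flag plus padding, so sizeof on the member over-reports what the getter actually copies.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <optional>

// What the attribute getter copies out is the contained int64_t (8 bytes);
// what sizeof(std::optional<int64_t>) reports is typically 16 bytes.
constexpr std::size_t copied_size = sizeof(int64_t);
constexpr std::size_t declared_size = sizeof(std::optional<int64_t>);
```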
Additional Comments (5)
- Check if
- Verify the dimension logic is correct for both row-wise and column-wise layouts with all transpose combinations
Description

Adds an nvte_grouped_gemm API using cuBLASLt grouped matmul for batched GEMM on tensors with varying shapes. A GPU kernel (setup_grouped_gemm_kernel) converts the NVTEGroupedTensor format (contiguous buffer + offsets) to what cuBLAS requires (pointer arrays + per-matrix M/N/K).

New API

Computes D = alpha * op(A) @ op(B) + beta * C for groups of matrices with potentially different shapes.

Type of change
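A minimal CPU reference for one group of the operation D = alpha * op(A) @ op(B) + beta * C (no transposes, row-major) is useful for sanity-checking; this is purely illustrative, since the real API operates on NVTEGroupedTensor handles rather than std::vector.

```cpp
#include <cassert>
#include <vector>

// Naive single-group reference: D = alpha * A @ B + beta * C, row-major,
// A is m x k, B is k x n, C and D are m x n.
void gemm_ref(int m, int n, int k, float alpha, const std::vector<float> &A,
              const std::vector<float> &B, float beta,
              const std::vector<float> &C, std::vector<float> &D) {
  for (int r = 0; r < m; ++r)
    for (int c = 0; c < n; ++c) {
      float acc = 0.f;
      for (int p = 0; p < k; ++p) acc += A[r * k + p] * B[p * n + c];
      D[r * n + c] = alpha * acc + beta * C[r * n + c];
    }
}
```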
Changes

- GroupedGemmSetupWorkspace struct for the cuBLAS workspace layout
- test_grouped_gemm.cu comparing against nvte_multi_tensor_gemm (FP8/BF16, various shapes and transpose layouts)

Checklist: