[slim tensor migration 1/n] copy and paste slim tensor into ExecuTorch #16304
base: gh/gasoonjia/67/base
Conversation
This stack aims to migrate SlimTensor into ExecuTorch to serve as the internal tensor representation of the CUDA backend. This diff introduces the c10 dependencies SlimTensor requires by copying the c10 headers it needs that are not already present in the ExecuTorch tree. Note that, to unblock SlimTensor first, this diff only copies the required c10 files as-is rather than matching them to the current c10 in PyTorch. We will sync them with the latest c10 and move them into `executorch/runtime/core/portable_type/c10/c10/` after the SlimTensor migration is done. Differential Revision: [D89417354](https://our.internmc.facebook.com/intern/diff/D89417354/) [ghstack-poisoned]
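As a purely illustrative sketch of the mechanical change described above (the "before" directory layout is an assumption, not taken from this PR; the "after" target directory is the one named in the PR text), the copy amounts to rewriting includes like:

```cpp
// Before: SlimTensor's standalone c10 copy (path hypothetical):
// #include <c10/util/Exception.h>

// After the planned move into the ExecuTorch tree:
// #include <executorch/runtime/core/portable_type/c10/c10/util/Exception.h>
```

No logic changes accompany the path rewrite at this stage; syncing the file contents with upstream c10 is deferred to a later diff.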
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16304
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 1 Cancelled Job, 2 Unrelated Failures as of commit 59a7260 with merge base c61b2ed.
NEW FAILURES - The following jobs have failed:
CANCELLED JOB - The following job was cancelled. Please retry:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This stack aims to migrate SlimTensor into ExecuTorch to serve as the internal tensor representation of the CUDA backend. This diff introduces the c10 dependencies SlimTensor requires by copying the c10 headers it needs that are not already present in the ExecuTorch tree. Note that, to unblock SlimTensor first, this diff only copies the required c10 files as-is rather than matching them to the current c10 in PyTorch. We will sync them with the latest c10 and move them into `executorch/runtime/core/portable_type/c10/c10/` after the SlimTensor migration is done. Differential Revision: [D89417354](https://our.internmc.facebook.com/intern/diff/D89417354/) ghstack-source-id: 330099391 Pull Request resolved: #16304
…o ExecuTorch" This stack aims to migrate SlimTensor into ExecuTorch to serve as the internal tensor representation of the CUDA backend. This diff copies the SlimTensor class into the ExecuTorch codebase with ZERO changes to logic, namespaces, classes, etc. The only change is the include paths. In the diffs stacked above it I will gradually update the SlimTensor class to make it suitable for ExecuTorch and reviewer-friendly. This diff will not land until all updates are done. Differential Revision: [D89417354](https://our.internmc.facebook.com/intern/diff/D89417354/) [ghstack-poisoned]
Pull Request resolved: #16304 This stack aims to migrate SlimTensor into ExecuTorch to serve as the internal tensor representation of the CUDA backend. This diff copies the SlimTensor class into the ExecuTorch codebase with **ZERO** changes to logic, namespaces, classes, etc. The **ONLY TWO** changes are the include paths and the Buck target files. In the diffs stacked above it I will gradually update the SlimTensor class to make it suitable for ET, and to keep the updates reviewer-friendly. This diff will not land until all updates are done. ghstack-source-id: 330156779 @exported-using-ghexport Differential Revision: [D89417354](https://our.internmc.facebook.com/intern/diff/D89417354/)
Instead of copy-pasting all the c10 headers, can you depend on the ones already copied into ExecuTorch core?
Yes, that's the plan. You can refer to the diff summary or the PR's top comment for what I'm going to do.
…o ExecuTorch" This stack aims to migrate SlimTensor into ExecuTorch to serve as the internal tensor representation of the CUDA backend. This diff copies the SlimTensor class into the ExecuTorch codebase with ZERO changes to logic, namespaces, classes, etc. The ONLY TWO changes are the include paths and the Buck target files. However, the following items are blocking SlimTensor from landing:

1. Inconsistent namespaces. SlimTensor currently has its own namespace structure (`standalone::`, `standalone::c10`, etc.), which is not in line with ExecuTorch's.
2. Self-contained exception-handling strategy. SlimTensor uses `STANDALONE_CHECK`, `STANDALONE_INTERNAL_ASSERT`, `STANDALONE_INTERNAL_ASSERT_DEBUG_ONLY`, or directly throws `std::runtime_error` to check conditions and raise errors. We should align SlimTensor's error-handling strategy with ExecuTorch's approach.
3. Duplicated c10 functions. SlimTensor currently ships its own copies of the c10 utilities it uses; some of them already exist in `executorch/runtime/core/portable_type/c10/`, and we should deduplicate.
4. Duplicated data types. SlimTensor holds and uses several data types that ExecuTorch already has and should simply reuse, such as `ArrayRef`, `Span`, `irange`, and `IntArrayRef`.

This diff will not land until all of the above is done. In the diffs stacked above it I will gradually update the SlimTensor class to make it suitable for ET, and to keep the updates reviewer-friendly.

There are other imperfections, but I think we can solve them gradually after the migration is done:

1. Move all c10 files under `executorch/runtime/core/portable_type/c10/`. It would be great to gather all c10 headers together.
2. Sync the newly introduced c10 files with pytorch/pytorch. These c10 files have been decoupled from pytorch/pytorch for a while and need some extra work to sync.
3. Reuse ExecuTorch macros.

Differential Revision: [D89417354](https://our.internmc.facebook.com/intern/diff/D89417354/) [ghstack-poisoned]
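Item 2 above (aligning error handling) can be prototyped with a thin compatibility shim. This is a hypothetical sketch, not code from the PR: ExecuTorch's own check macros (e.g. `ET_CHECK_MSG`) abort rather than throw, so this stand-in throws `std::runtime_error` only so the behavior is observable in a plain host program.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical shim: keep SlimTensor call sites spelled STANDALONE_CHECK
// while routing them to a single, swappable failure path. In-tree, the
// body would delegate to an ExecuTorch check macro instead of throwing.
#define STANDALONE_CHECK(cond, msg)                                    \
  do {                                                                 \
    if (!(cond)) {                                                     \
      throw std::runtime_error(std::string("check failed: ") + (msg)); \
    }                                                                  \
  } while (0)

// Example call site, as SlimTensor-style code might use the macro.
int checked_divide(int a, int b) {
  STANDALONE_CHECK(b != 0, "division by zero");
  return a / b;
}
```

Funneling every call site through one macro first means the eventual switch to ExecuTorch's error handling is a one-line change in the shim rather than a sweep across the codebase.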
Stack from ghstack (oldest at bottom):
This stack aims to migrate SlimTensor into ExecuTorch to serve as the internal tensor representation of the CUDA backend.
This diff copies the SlimTensor class into the ExecuTorch codebase with ZERO changes to logic, namespaces, classes, etc. The ONLY TWO changes are the include paths and the Buck target files.
However, the following items are blocking SlimTensor from landing:
1. Inconsistent namespaces (`standalone::`, `standalone::c10`, etc.), not in line with ExecuTorch's.
2. A self-contained exception-handling strategy (`STANDALONE_CHECK`, `STANDALONE_INTERNAL_ASSERT`, direct `std::runtime_error` throws) that should be aligned with ExecuTorch's approach.
3. Duplicated c10 functions, some of which already exist in `executorch/runtime/core/portable_type/c10/` and should be deduplicated.
4. Duplicated data types (`ArrayRef`, `Span`, `irange`, `IntArrayRef`, etc.) where SlimTensor should reuse ExecuTorch's versions.
This diff will not land until all of the above is done. In the diffs stacked above it I will gradually update the SlimTensor class to make it suitable for ET, and to keep the updates reviewer-friendly.
There are other imperfections, but I think we can solve them gradually after the migration is done.
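To illustrate why the duplicated data types are cheap to deduplicate: SlimTensor's `ArrayRef`/`IntArrayRef` and ExecuTorch's equivalents are all non-owning (pointer, length) views. The sketch below is a hypothetical stand-in, not the actual SlimTensor or ExecuTorch type; migrating a call site between two such types mostly changes the parameter's spelling.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical minimal ArrayRef-like view: just a pointer and a length,
// mirroring the shape of both SlimTensor's and ExecuTorch's view types.
template <typename T>
class ArrayRefSketch {
 public:
  ArrayRefSketch(const T* data, size_t size) : data_(data), size_(size) {}
  // Implicit conversion from std::vector, as the real types allow.
  ArrayRefSketch(const std::vector<T>& v) : data_(v.data()), size_(v.size()) {}
  size_t size() const { return size_; }
  const T& operator[](size_t i) const { return data_[i]; }

 private:
  const T* data_;
  size_t size_;
};

// A sizes() consumer written against the sketch; porting it to another
// (pointer, length) view type would not change its body.
int64_t numel(ArrayRefSketch<int64_t> sizes) {
  int64_t n = 1;
  for (size_t i = 0; i < sizes.size(); ++i) {
    n *= sizes[i];
  }
  return n;
}
```

Because the representations are identical, the deduplication planned above is mostly a rename plus include-path change rather than a behavioral migration.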
Differential Revision: D89417354