refactor: llguidance#288
Conversation
Merge Protections: Your pull request matches the following merge protections and will not be merged until they are valid.
🟢 Enforce conventional commit: Wonderful, this rule succeeded. Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/
jakelorocco left a comment:
Looks like there are some mypy errors as well.
I don't know why there is a mypy error. Mypy passes locally for me.
Force-pushed from 3ebb8bd to 8dce1f2.
We've seen this happen recently, and it came down to mypy versions, so that could be the cause. I re-ran the checks on the latest branch. If those still fail and mypy still passes for you locally, try checking your mypy version.
With an updated lockfile, mypy passes. The test run was terminated in CI, but it passes locally. If you approve, I will merge this.
Force-pushed from 938d919 to 62923fe.
@guicho271828, can you remind me if we ever got the local vllm backend running on mac (or if you remember a discussion about that)?
I tried installing this version of the mellea package on my mac and was getting versioning errors.
Details
(mellea) ➜ mellea git:(pr/guicho271828/288) uv pip install -e '.[all]' --all-extras --group dev -r pyproject.toml
Using Python 3.12.0 environment at: /opt/homebrew/Caskroom/miniforge/base/envs/mellea
× No solution found when resolving dependencies:
╰─▶ Because only the following versions of nvidia-cudnn-frontend are available:
nvidia-cudnn-frontend<=1.13.0
nvidia-cudnn-frontend==1.14.0
nvidia-cudnn-frontend==1.14.1
nvidia-cudnn-frontend==1.15.0
nvidia-cudnn-frontend==1.16.0
nvidia-cudnn-frontend==1.17.0
and nvidia-cudnn-frontend>=1.13.0 has no wheels with a matching platform tag (e.g., `macosx_15_0_arm64`), we can conclude that
nvidia-cudnn-frontend>=1.13.0 cannot be used.
And because flashinfer-python==0.5.3 depends on nvidia-cudnn-frontend>=1.13.0 and vllm==0.13.0 depends on flashinfer-python==0.5.3, we
can conclude that vllm==0.13.0 cannot be used.
And because only vllm<=0.13.0 is available and mellea depends on vllm>=0.13.0, we can conclude that your requirements are
unsatisfiable.
It looks like the newest vllm version I can install on a Mac is 0.11.0.
The vllm engine can't seem to get enough resources to run locally on my Mac anyway, so maybe we can just add a note somewhere that "mellea[vllm]" doesn't work on Macs?
To run vllm locally, vllm assumes an NVIDIA GPU; its dependency chain (flashinfer-python, nvidia-cudnn-frontend) has no macOS wheels. So yes.
29588fb disables installing vllm on Darwin.
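For reference, skipping a dependency on macOS is usually done with a PEP 508 environment marker such as `sys_platform != 'darwin'`; the exact marker used in 29588fb is my assumption. A minimal sketch of how such a marker evaluates, using the `packaging` library:

```python
# Sketch only: evaluates a platform marker like the one presumably used to skip
# the vllm extra on macOS. Requires the third-party 'packaging' library.
from packaging.markers import Marker

marker = Marker("sys_platform != 'darwin'")
# False on macOS, True elsewhere, so a requirement spec like
# "vllm>=0.13.0; sys_platform != 'darwin'" would simply be skipped on Macs.
print(marker.evaluate())
```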
Force-pushed from 27a9159 to 91a9e1a.
jakelorocco left a comment:
Please also confirm that the huggingface / vllm tests run with the most recent changes.
  device = torch.device("mps")
- model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+ # model = model.to(device=device)  # this part does not pass mypy; possible misconfiguration
This code does not pass mypy, and the error existed before this PR; it was not written by me.
Annoyingly, without fixing it, I cannot commit any changes with pre-commit enabled, especially because all code in docs/ is now also checked by pre-commit.
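For context, here is a hedged sketch (my own illustration, not code from this repository) of how the MPS move is often written so the assignment stays explicit; whether any particular variant satisfies mypy depends on the torch/transformers stubs in use:

```python
# Illustration only: load a causal LM and move it to Apple's MPS backend when available.
import torch
from transformers import AutoModelForCausalLM

model_id = "org/some-model"  # hypothetical model id, not the one used in the docs
model = AutoModelForCausalLM.from_pretrained(model_id)
if torch.backends.mps.is_available():
    model = model.to(torch.device("mps"))  # positional device argument; reassign the result
```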
Force-pushed from d3c9044 to 81ee868.
Two huggingface tests failed, which are out of scope for this PR:
-> structured output failure due to a weak model
-> test for error handling, discussed in #432

One vllm test failed:
-> structured output failure due to a weak model
…e is any env name with "mellea"
…, disable vllm on osx
…, disable vllm on osx
…, disable vllm on osx
Force-pushed from 81ee868 to 6b558b5.
6205b36