False Positive on CI tests: test-readme #1315

Open
Jack-Khuu opened this issue Oct 18, 2024 · 2 comments
Labels
actionable: Items in the backlog waiting for an appropriate impl/fix
bug: Something isn't working
CI Infra: Issues related to CI infrastructure and setup
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@Jack-Khuu
Contributor

🐛 Describe the bug

In CI, there are a few tests that should be flagged as failing, but are currently marked as green.

Specifically, they seem to revolve around the test-readme unit tests; see the surfacing PR (#1309) for examples.

Versions

NA

@Jack-Khuu added labels: bug (Something isn't working), actionable (Items in the backlog waiting for an appropriate impl/fix) on Oct 18, 2024
@mikekgfb
Contributor

mikekgfb commented Nov 5, 2024

@seemethere @malfet @kit1980 can you please have a look at why these tests are not flagged as failing?

Ditto from yesterday: https://github.com/pytorch/torchchat/actions/runs/11634438558/job/32413340991?pr=1339 is shown as passed even though commands in the test failed and aborted with an error indication.

@mikekgfb
Contributor

And it is still happening.

One possible explanation is that somewhere the code is catching the exception, pretty-printing it, and then exiting with a non-error code. Failures such as https://github.com/pytorch/torchchat/actions/runs/12243820522/job/34154220414?pr=1404 show that the code continues executing (see the test run dump below), even though the first command in the generated test is `set -eou pipefail`, which instructs the shell to abort immediately on the first error and report a failure to the caller. (Recursively, the workflows also use this setting so that the first test error cascades all the way to the top and makes the test fail.)

```
+ python3 torchchat.py generate stories15M --prompt 'write me a story about a boy and his bear'
## Running via PyTorch 
  Downloading https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt...
  Downloading https://github.com/karpathy/llama2.c/raw/master/tokenizer.model...
  NumExpr defaulting to 6 threads.
  PyTorch version 2.6.0.dev20241013 available.
  Moving model to /Users/runner/.torchchat/model-cache/stories15M.
  
  Downloading builder script:   0%|          | 0.00/5.67k [00:00<?, ?B/s]
  Downloading builder script: 100%|██████████| 5.67k/5.67k [00:00<00:00, 5.30MB/s]
  Traceback (most recent call last):
    File "/Users/runner/work/torchchat/torchchat/torchchat.py", line 96, in <module>
  Using device=mps 
  Loading model...
      generate_main(args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/generate.py", line 1235, in main
      gen = Generator(
    File "/Users/runner/work/torchchat/torchchat/torchchat/generate.py", line 293, in __init__
      self.model = _initialize_model(self.builder_args, self.quantize, self.tokenizer)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 603, in _initialize_model
      model = _load_model(builder_args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 465, in _load_model
      model = _load_model_default(builder_args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 427, in _load_model_default
      checkpoint = _load_checkpoint(builder_args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 412, in _load_checkpoint
      checkpoint = torch.load(
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1359, in load
      return _load(
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1856, in _load
      result = unpickler.load()
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/_weights_only_unpickler.py", line 388, in load
      self.append(self.persistent_load(pid))
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1820, in persistent_load
  Time to load model: 0.10 seconds
      typed_storage = load_tensor(
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1792, in load_tensor
      wrap_storage=restore_location(storage, location),
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1693, in restore_location
      return default_restore_location(storage, map_location)
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 601, in default_restore_location
      result = fn(storage, location)
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 467, in _mps_deserialize
      return obj.mps()
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/storage.py", line 260, in mps
      return torch.UntypedStorage(self.size(), device="mps").copy_(self, False)
  RuntimeError: MPS backend out of memory (MPS allocated: 1.02 GB, other allocations: 0 bytes, max allowed: 15.87 GB). Tried to allocate 256 bytes on shared pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
+ echo ::group::Completion
Completion
  + echo 'tests complete'
  tests complete
  + echo '*******************************************'
  *******************************************
  + echo ::endgroup::
```
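
For illustration, here is a minimal sketch (hypothetical, not the actual torchchat code) of how a swallowed error plus a clean exit can defeat `set -eou pipefail`: the shell only aborts on commands that return a non-zero status, so if the entry point prints the error but still exits 0, the step keeps going and the job is reported green.

```bash
#!/bin/bash
# Hypothetical reproduction of the suspected behavior described above.
set -eou pipefail

# Stand-in for "python3 torchchat.py generate ...": the command fails
# internally and pretty-prints an error, but still returns status 0.
run_generate() {
  echo "RuntimeError: MPS backend out of memory" >&2
  return 0   # the failure is swallowed; set -e never triggers
}

run_generate

echo 'tests complete'   # still reached; the step is marked as passing
```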

mikekgfb added a commit to mikekgfb/torchchat-1 that referenced this issue Jan 24, 2025
source test commands instead of executing them.  
(Possible fix for pytorch#1315 )
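
A rough sketch of the distinction this fix relies on (script and file names here are hypothetical): a generated test script executed as a child process without its own `set -e` keeps going after a failing command and exits with the status of its last command, so the caller sees success; sourcing the same commands runs them in the current shell, where `set -eou pipefail` is already in effect and the first failure aborts the step.

```bash
set -eou pipefail

# Executing as a child process: if generated-test-commands.sh does not set
# -e itself, it continues past failures and exits with the status of its
# final command (often a trailing echo), so this call appears to succeed.
bash ./generated-test-commands.sh

# Sourcing instead: the commands run in this shell, so the first failing
# command trips set -e here and the CI step is reported as failed.
source ./generated-test-commands.sh
```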
@Jack-Khuu added labels: CI Infra (Issues related to CI infrastructure and setup), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on Feb 4, 2025
Jack-Khuu added a commit that referenced this issue Feb 21, 2025
* Update run-readme-pr-macos.yml

source test commands instead of executing them.  
(Possible fix for #1315 )

* Update run-docs

source instead of exec

* Update README.md

Somebody pushed all the model exports into exportedModels, but we never create that directory. We should create it, and also add this step to the user instructions, because storing into a directory that doesn't exist is not good :)

* Update multimodal.md

The multimodal doc needed an end-of-tests comment.

* Update ADVANCED-USERS.md

We need to download files before using them. We expect users to do this, but we should spell it out; besides, if we extract the commands for testing, the test obviously fails without the download.

* Update native-execution.md

An unquoted `(` triggers an "unexpected token" error in macOS zsh.

* Update run-readme-pr-macos.yml

          # metadata does not install properly on macos
          # .ci/scripts/run-docs multimodal

* Update run-readme-pr-mps.yml

          # metadata does not install properly on macos
          # .ci/scripts/run-docs multimodal

* Update ADVANCED-USERS.md

install wget

* Update run-readme-pr-macos.yml

          echo ".ci/scripts/run-docs native DISABLED"
          # .ci/scripts/run-docs native

* Update run-readme-pr-mps.yml

          echo ".ci/scripts/run-docs native DISABLED"
          # .ci/scripts/run-docs native

* Update run-docs

switch to gs=32 quantization
(requires consolidated run-docs of #1439)

* Create cuda-32.json

add gs=32 cuda quantization for use w/ stories15M

* Create mobile-32.json

add gs=32 for stories15M
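
For context, a torchchat quantization config of this kind is a small JSON file passed to `--quantize`. The sketch below is an assumption about what a gs=32 config might contain, based on the format shown in the existing quantization docs, not the actual contents of cuda-32.json or mobile-32.json:

```bash
# Hypothetical example only; field names follow torchchat's documented
# --quantize JSON format, with groupsize 32 so tiny models like stories15M
# (hidden dim 288, not divisible by 256) can be quantized without a
# groupsize mismatch.
cat > cuda-32.json <<'EOF'
{
  "executor": {"accelerator": "cuda"},
  "precision": {"dtype": "bf16"},
  "linear:int4": {"groupsize": 32}
}
EOF

python3 torchchat.py generate stories15M --quantize cuda-32.json --prompt "Hello"
```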

* Update run-readme-pr.yml

Comment out tests that currently fail, as per summary in PR comments

* Update install_requirements.sh

Dump location of executable to understand these errors:
https://hud.pytorch.org/pr/pytorch/torchchat/1476#36452260294

2025-01-31T00:18:57.1405698Z + pip3 install -r install/requirements.txt --extra-index-url https://download.pytorch.org/whl/nightly/cpu
2025-01-31T00:18:57.1406689Z ./install/install_requirements.sh: line 101: pip3: command not found

* Update install_requirements.sh

dump candidate locations for pip
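
A minimal sketch of what such a debug dump might look like; the first three lines mirror the snippet quoted later in this PR description, and the remaining lines are additional guesses rather than the actual change:

```bash
# Print where (if anywhere) pip-related executables live on the runner,
# without failing the step when a lookup comes up empty.
which pip || true
which pip3 || true
which conda || true
python3 -m pip --version || true
ls /usr/bin/pip* /usr/local/bin/pip* 2>/dev/null || true
```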

* Update README.md

Some of the updown commands were getting rendered. Not sure why or when that happens.

* Update run-docs

The README switched from llama3 to llama3.1, so replace llama3.1 with stories15M.

* Update run-readme-pr-macos.yml

remove failing gguf test

* Update run-readme-pr-mps.yml

Remove failing gguf test

* Update run-readme-pr.yml

Can we mix `steps:` with `script: |` in GitHub workflows?

Testing 123 testing!

* Update run-docs

Remove quotes around the replace argument, as the nested quotes are not interpreted by the shell but seem to be passed through to updown.py.

We don't have spaces in the replace values, so there is no need for escapes.

* Update run-readme-pr.yml

1 - Remove the steps experiment.
2 - Add apt-get install of pip3.

Maybe releng needs to look at what's happening with pip?

* Update run-docs

remove quotes that mess up parameter identification.

* Update run-readme-pr.yml

try to install pip & pip3

* Update run-readme-pr.yml

debug

        which pip || true
        which pip3 || true
        which conda || true

* Update run-readme-pr-macos.yml

* Update run-readme-pr-linuxaarch64.yml

debug info

```
        which pip || true
        which pip3 || true
        which conda || true
```

* Update quantization.md

use group size 32 which works on all models

* Update run-readme-pr.yml

Cleanup, comment non-working tests

* Update run-readme-pr-macos.yml

Uncomment test code requiring unavailable pip3

* Update run-readme-pr-mps.yml

comment non-working tests

* Update run-readme-pr-linuxaarch64.yml

comment out test code requiring pip3

* Update run-docs

Avoid nested quotes

* Update run-readme-pr.yml

Enable distributed test

* Update install_requirements.sh

Remove extraneous debug messages from install_requirements.sh

* Update install_requirements.sh

remove debug

* Update run-readme-pr.yml

Comment out failing quantization-any (glibc version issue) and distributed (nccl usage)

* Update run-readme-pr.yml

Disable remaining tests

* Update run-readme-pr.yml

enable readme

* Update run-readme-pr.yml

remove run of readme

---------

Co-authored-by: Jack-Khuu <[email protected]>