How can we use CPU offloading when using AutoModelForCausalLM and THUDM/cogvlm2-llama3-chat-19B #1246

Triggered via issue · February 2, 2025 22:46
@nnilayy commented on #35751 (commit 62db3e6)
Status: Startup failure

self-comment-ci.yml

on: issue_comment
Get PR number
get-sha
get-tests
Reply to the comment
Create run
Matrix: Run all tests for the model (waiting for pending jobs)
Update Check Run Status
Annotations

1 error
Invalid workflow file: .github/workflows/self-comment-ci.yml#L10
The workflow is not valid.
.github/workflows/self-comment-ci.yml (Line: 10, Col: 10): The maximum allowed memory size was exceeded while evaluating the following expression:
format('{0}-{1}-{2}', github.workflow, github.event.issue.number, (startsWith(github.event.comment.body, 'run-slow') || startsWith(github.event.comment.body, 'run slow') || startsWith(github.event.comment.body, 'run_slow')))
.github/workflows/self-comment-ci.yml (Line: 10, Col: 10): Unexpected value ''
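The failing expression evaluates `github.event.comment.body` (which can be arbitrarily large) inside what appears to be the workflow's `concurrency.group` at line 10, and GitHub's expression evaluator enforces a memory cap. A minimal sketch of one possible workaround, assuming the `format()` call indeed defines the concurrency group: build the group from small fixed-size fields only, and move the `startsWith` checks on the comment body into a job-level `if`. Everything below except the filename, the three trigger phrases, and the `format()` fields is an assumption, not the repository's actual workflow.

```yaml
# Hypothetical sketch of .github/workflows/self-comment-ci.yml around line 10.
# Assumption: the failing format() expression was used as the concurrency group.
name: Self comment CI

on:
  issue_comment:
    types: [created]

# Build the group only from small fields (workflow name, issue number).
# The comment body never enters this expression, so its size cannot
# trip the expression evaluator's memory limit.
concurrency:
  group: ${{ github.workflow }}-${{ github.event.issue.number }}
  cancel-in-progress: true

jobs:
  get-pr-number:
    # The "run-slow" trigger check moves here, where a large comment body
    # is evaluated per job rather than inside the concurrency key.
    if: >-
      startsWith(github.event.comment.body, 'run-slow') ||
      startsWith(github.event.comment.body, 'run slow') ||
      startsWith(github.event.comment.body, 'run_slow')
    runs-on: ubuntu-latest
    steps:
      - run: echo "Triggered on issue ${{ github.event.issue.number }}"
```

The trade-off: the boolean "is this a run-slow comment" no longer distinguishes concurrency groups, so a trigger comment and a non-trigger comment on the same issue share a group; whether that matters depends on how the real workflow uses cancellation.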