[ChatQnA] Update the default LLM to llama3-8B on cpu/gpu/hpu (#1430) #12

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Test

name: Build latest images on push event

on:
  push:
    branches: [ 'main' ]
    paths:
      - "**.py"
      - "**Dockerfile*"
      - "**docker_image_build/build.yaml"
      - "**/ui/**"

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-on-push
  cancel-in-progress: true

jobs:
  job1:
    uses: ./.github/workflows/_get-test-matrix.yml
    with:
      test_mode: "docker_image_build"

  image-build:
    needs: job1
    strategy:
      matrix: ${{ fromJSON(needs.job1.outputs.run_matrix) }}
      fail-fast: false
    uses: ./.github/workflows/_example-workflow.yml
    with:
      node: ${{ matrix.hardware }}
      example: ${{ matrix.example }}
    secrets: inherit

Check failure on line 30 in .github/workflows/push-image-build.yml

GitHub Actions annotation: Invalid workflow file

error parsing called workflow ".github/workflows/push-image-build.yml" -> "./.github/workflows/_example-workflow.yml" (source branch with sha:3d3ac59bfb38b1e593bed0a87825bca1f8c9daad) --> "./.github/workflows/_helm-e2e.yml" : failed to fetch workflow: workflow was not found.
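
The failure is in the chain of reusable workflow calls: push-image-build.yml calls _example-workflow.yml, which at the pinned SHA in turn references ./.github/workflows/_helm-e2e.yml, and that file cannot be resolved, so the whole workflow file is rejected at parse time. As a rough illustration only (the file name comes from the error message above; the input shown is hypothetical), a minimal reusable workflow that would satisfy such a reference could look like this:

# .github/workflows/_helm-e2e.yml -- illustrative stub only; the real
# workflow's inputs and jobs are not visible in this log
name: Helm E2E (reusable)
on:
  workflow_call:
    inputs:
      example:          # hypothetical input, mirroring the name used by the caller
        required: false
        type: string
jobs:
  helm-e2e:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Helm E2E placeholder for ${{ inputs.example }}"

The actual fix in the repository might instead be to restore or rename the referenced file, or to point _example-workflow.yml at an existing workflow; the error only says the path could not be resolved at that commit.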
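
For context on the last job in the file above: the job-level "uses:" key invokes _example-workflow.yml as a reusable workflow, "with:" passes node and example as its inputs, and "secrets: inherit" forwards the caller's secrets to it. A sketch of the workflow_call contract the called file is expected to declare (input names are taken from the "with:" block above; the real file in the repository defines its own defaults and likely additional inputs):

# .github/workflows/_example-workflow.yml -- sketch of the call interface only,
# not the repository's actual implementation
name: Example workflow (reusable)
on:
  workflow_call:
    inputs:
      node:
        required: true
        type: string
      example:
        required: true
        type: string
jobs:
  run-example:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Building ${{ inputs.example }} for ${{ inputs.node }}"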