[Updated] APL guides and platform changes #7250

Open · wants to merge 2 commits into base `develop`
@@ -5,7 +5,7 @@ description: "This guide includes steps and guidance for deploying a large langu
authors: ["Akamai"]
contributors: ["Akamai"]
published: 2025-03-25
-modified: 2025-04-17
+modified: 2025-04-25
keywords: ['ai','ai inference','ai inferencing','llm','large language model','app platform','lke','linode kubernetes engine','llama 3','kserve','istio','knative']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
@@ -102,7 +102,7 @@ Sign into the App Platform web UI using the `platform-admin` account, or another

1. Click **Create Team**.

-1. Provide a **Name** for the Team. Keep all other default values, and click **Submit**. This guide uses the Team name `demo`.
+1. Provide a **Name** for the Team. Keep all other default values, and click **Create Team**. This guide uses the Team name `demo`.

### Install the NVIDIA GPU Operator

@@ -170,11 +170,7 @@ A [Workload](https://apl-docs.net/docs/for-devs/console/workloads) is a self-ser

1. Continue with the rest of the default values, and click **Submit**.

-After the Workload is submitted, App Platform creates an Argo CD application to install the `kserve-crd` Helm chart. Wait for the **Status** of the Workload to become healthy as represented by a green check mark. This may take a few minutes.
-
-![Workload Status](APL-LLM-Workloads.jpg)
-
-Click on the ArgoCD **Application** link once the Workload is ready. You should be brought to the Argo CD screen in a separate window:
+After the Workload is submitted, App Platform creates an Argo CD application to install the `kserve-crd` Helm chart. Wait for the **Status** of the Workload to become ready, and click on the ArgoCD **Application** link. You should be brought to the Argo CD screen in a separate window:

![Argo CD](APL-LLM-ArgoCDScreen.jpg)
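
If you would rather check the same state from a terminal, the Workload's Argo CD `Application` resource can be inspected with `kubectl`. This is a minimal sketch, assuming your kubeconfig targets the LKE cluster and that App Platform creates its Argo CD applications in the `argocd` namespace (verify the namespace in your installation):

```command
kubectl get applications.argoproj.io -n argocd
kubectl get applications.argoproj.io {{< placeholder "<application-name>" >}} -n argocd -o jsonpath='{.status.health.status}'
```

A `Healthy` status corresponds to the green check mark shown in the App Platform console.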

@@ -386,11 +382,9 @@ Wait for the Workload to be ready again, and proceed to the following steps for

1. Click **Create Service**.

-1. In the **Name** dropdown list, select the `llama3-model-predictor` service.
+1. In the **Service Name** dropdown list, select the `llama3-model-predictor` service.

-1. Under **Exposure (ingress)**, select **External**.
-
-1. Click **Submit**.
+1. Click **Create Service**.

Once the Service is ready, copy the URL for the `llama3-model-predictor` service, and add it to your clipboard.
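
Before using the URL elsewhere, it can be worth confirming that the endpoint responds. A minimal sketch, assuming the model was deployed under the name `llama3-model` and that the predictor exposes the KServe v1 protocol (adjust the model name and path to match your InferenceService):

```command
curl -s {{< placeholder "<llama3-model-predictor-url>" >}}/v1/models/llama3-model
```

A small JSON response reporting `"ready": true` indicates the predictor is serving traffic.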

@@ -493,11 +487,9 @@ Follow the steps below to follow the second option and add the Kyverno security

1. Click **Create Service**.

-1. In the **Name** dropdown menu, select the `llama3-ui` service.
-
-1. Under **Exposure (ingress)**, select **External**.
+1. In the **Service Name** dropdown menu, select the `llama3-ui` service.

-1. Click **Submit**.
+1. Click **Create Service**.

## Access the Open Web User Interface

@@ -5,7 +5,7 @@ description: "This guide expands on a previously built LLM and AI inferencing ar
authors: ["Akamai"]
contributors: ["Akamai"]
published: 2025-03-25
-modified: 2025-04-17
+modified: 2025-04-25
keywords: ['ai','ai inference','ai inferencing','llm','large language model','app platform','lke','linode kubernetes engine','rag pipeline','retrieval augmented generation','open webui','kubeflow']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
@@ -290,11 +290,9 @@ Create a [**Network Policy**](https://apl-docs.net/docs/for-ops/console/netpols)

1. Click **Create Service**.

-1. In the **Name** dropdown menu, select the `ml-pipeline-ui` service.
+1. In the **Service Name** dropdown menu, select the `ml-pipeline-ui` service.

-1. Under **Exposure**, select **External**.
-
-1. Click **Submit**.
+1. Click **Create Service**.

Kubeflow Pipelines is now ready to be used by members of the Team **demo**.
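
Members who prefer to verify the endpoint programmatically can query the Pipelines REST API through the same URL. This is a sketch, assuming the exposed `ml-pipeline-ui` service proxies the Kubeflow Pipelines API and that the request is made from an authenticated session; the API prefix (`v1beta1` or `v2beta1`) depends on the installed Kubeflow Pipelines release:

```command
curl -s {{< placeholder "<ml-pipeline-ui-url>" >}}/apis/v2beta1/pipelines
```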

@@ -633,13 +631,9 @@ Update the Kyverno **Policy** `open-webui-policy.yaml` created in the previous t

1. Click **Create Service**.

-1. In the **Name** dropdown menu, select the `linode-docs-pipeline` service.
-
-1. In the **Port** dropdown, select port `9099`.
-
-1. Under **Exposure**, select **External**.
+1. In the **Service Name** dropdown menu, select the `linode-docs-pipeline` service.

-1. Click **Submit**.
+1. Click **Create Service**.

1. Once submitted, copy the URL of the `linode-docs-pipeline` service to your clipboard.
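
To confirm the pipeline endpoint is reachable before wiring it into Open WebUI, it can be probed with `curl`. A sketch, assuming the Pipelines server exposes an OpenAI-compatible API secured by the pipelines API key (both placeholders are values from your own deployment):

```command
curl -s -H "Authorization: Bearer {{< placeholder "<pipelines-api-key>" >}}" {{< placeholder "<linode-docs-pipeline-url>" >}}/v1/models
```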

@@ -687,11 +681,9 @@ Update the Kyverno **Policy** `open-webui-policy.yaml` created in the previous t

1. Click **Create Service**.

-1. In the **Name** dropdown menu, select the `linode-docs-chatbot` service.
-
-1. Under **Exposure**, select **External**.
+1. In the **Service Name** dropdown list, select the `linode-docs-chatbot` service.

-1. Click **Submit**.
+1. Click **Create Service**.

## Access the Open Web User Interface

@@ -5,6 +5,7 @@ description: "This guide shows how to deploy a RabbitMQ message broker architect
authors: ["Akamai"]
contributors: ["Akamai"]
published: 2025-03-20
+modified: 2025-04-25
keywords: ['app platform','lke','linode kubernetes engine','rabbitmq','microservice','message broker']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
@@ -108,7 +109,7 @@ When working in the context of an admin-level Team, users can create and access

1. Click **Create Team**.

-1. Provide a **Name** for the Team. Keep all other default values, and click **Submit**. This guide uses the Team name `demo`.
+1. Provide a **Name** for the Team. Keep all other default values, and click **Create Team**. This guide uses the Team name `demo`.

### Create a RabbitMQ Cluster with Workloads

@@ -136,27 +137,37 @@ This guide uses an example Python chat app to send messages to all connected cli

The example app in this guide is not meant for production workloads, and steps may vary depending on the app you are using.

+### Add the Code Repository for the Example App

1. Select **view** > **team** and **team** > **demo** in the top bar.

-1. Select **Builds**, and click **Create Build**.
+1. Select **Code Repositories**, and click **Add Code Repository**.

-1. Provide a name for the Build. This is the same name used for the image stored in the private Harbor registry of your Team. This guide uses the Build name `rmq-example-app` with the tag `latest`.
+1. Provide the name `apl-examples` for the Code Repository.

-1. Select the **Mode** `Buildpacks`.
+1. Select *GitHub* as the **Git Service**.

-1. To use the example Python messaging app, provide the following GitHub repository URL:
+1. Under **Repository URL**, add the following GitHub URL:

```command
https://github.com/linode/apl-examples.git
```

-1. Set the **Buildpacks** path to `rabbitmq-python`.
+1. Click **Add Code Repository**.

-1. Click **Submit**. The build may take a few minutes to be ready.
+### Create a Container Image

-{{< note title="Make sure auto-scaling is enabled on your cluster" >}}
-When a build is created, each task in the pipeline runs in a pod, which requires a certain amount of CPU and memory resources. To ensure the sufficient number of resources are available, it is recommended that auto-scaling for your LKE cluster is enabled prior to creating the build.
-{{< /note >}}
+1. Select **Container Images** from the menu.

-1. Select the *BuildPacks* build task.

+1. In the **Repository** dropdown list, select `apl-examples`.

+1. In the **Reference** dropdown list, select `main`.

+1. Set the **Path** field to `rabbitmq-python`.

+1. Click **Create Container Image**.
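
For context, this step runs Cloud Native Buildpacks against the `rabbitmq-python` path of the repository and stores the resulting image in the Team's private Harbor registry. To reproduce the build locally for debugging, a rough equivalent with the `pack` CLI might look like the following. This is a sketch, assuming `pack` and Docker are installed; the builder image App Platform uses internally may differ:

```command
git clone https://github.com/linode/apl-examples.git
cd apl-examples
pack build rmq-example-app:main --path rabbitmq-python --builder paketobuildpacks/builder-jammy-base
```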

### Check the Build Status

@@ -176,12 +187,10 @@ The backend status of the build can be checked from the **PipelineRuns** section

Once successfully built, copy the image repository link so that you can create a Workload for deploying the app in the next step.

-1. Select **Builds** to view the status of your build.
+1. Select **Container Images** to view the status of your build.

1. When ready, use the "copy" button in the **Repository** column to copy the repository URL link to your clipboard.

-![App Build Ready](APL-RabbitMQ-build-ready.jpg)

## Deploy the App

1. Select **view** > **team** and **team** > **demo** in the top bar.
@@ -206,7 +215,7 @@ Once successfully built, copy the image repository link so that you can create a
image:
  repository: {{< placeholder "<image-repo-link>" >}}
  pullPolicy: IfNotPresent
-  tag: {{< placeholder "latest" >}}
+  tag: {{< placeholder "main" >}}
env:
  - name: {{< placeholder "NOTIFIER_RABBITMQ_HOST" >}}
    valueFrom:
@@ -259,11 +268,9 @@ Create a service to expose the `rmq-example-app` application to external traffic

1. Select **Services** in the left menu, and click **Create Service**.

-1. In the **Name** dropdown menu, select the `rmq-example-app` service.
-
-1. Under **Exposure**, select **External**.
+1. In the **Service Name** dropdown list, select the `rmq-example-app` service.

-1. Click **Submit**. The service may take a few minutes to be ready.
+1. Click **Create Service**. The service may take around 30 seconds to be ready.
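
If the Service stays pending longer than expected, the underlying pods can be checked from the command line. A sketch, assuming App Platform places Team workloads in a namespace named after the Team, such as `team-demo` (the naming convention may differ in your installation):

```command
kubectl get pods -n team-demo | grep rmq-example-app
```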

### Access the Demo App
