tpronk/ollama_demo


Using a local LLM with Python

This repo contains two simple Python scripts that use a local instance of llama3.2 (and supporting models). hello_world.py offers a basic prompt-response demo, while parse_directory.py parses a directory of docx and pptx files and runs semantic search over their contents.
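The prompt-response pattern in hello_world.py can be sketched roughly as follows. This is a minimal illustration assuming the ollama Python package; the helper names build_messages and ask are made up here, not taken from the repo.

```python
def build_messages(prompt: str) -> list:
    """Wrap a prompt as a single-turn conversation, the format ollama.chat expects."""
    return [{"role": "user", "content": prompt}]

def ask(prompt: str, model: str = "llama3.2") -> str:
    """Send one prompt to the local model and return the reply text."""
    import ollama  # pip install ollama; needs a running Ollama server
    response = ollama.chat(model=model, messages=build_messages(prompt))
    return response["message"]["content"]

# Example (with the server running and llama3.2 pulled):
# print(ask("Summarize this repo in one sentence."))
```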

The demos were tested on Python 3.12; Python 3.13 is not yet supported.

Installing dependencies

  1. Install Ollama
  2. Start a terminal session and run ollama pull llama3.2
  3. Start Ollama
  4. Install the Python requirements by running pip install -r requirements.txt

You're now good to go! Run one of the two Python scripts to try them out.
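To give a sense of how the semantic-search side of parse_directory.py can work, here is a minimal sketch: embed the query and each document chunk, then rank the chunks by cosine similarity. The embedding model name nomic-embed-text and the helper names are assumptions for illustration, not taken from the repo.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def embed(text, model="nomic-embed-text"):
    # Assumed embedding model; requires a running Ollama server
    import ollama
    return ollama.embeddings(model=model, prompt=text)["embedding"]

def search(query, chunks):
    # Rank text chunks by similarity to the query, best first
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine_similarity(q, embed(c)), reverse=True)
```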
