A Story Teller built on the Hugging Face Inference API with Stable Diffusion and an LLM
The Hugging Face Inference API is a free-to-use toolkit for trying models on the fly, which makes rapid prototyping of AI applications easy.
This project is an attempt to build a graphic storytelling application using that API.
Name | HuggingFace Space link |
---|---|
🎥💬 Book Cover (Comet Atomic) Story Teller | https://huggingface.co/spaces/svjack/Comet-Atomic-Story-Teller |
🧱 Pixel Story Teller | https://huggingface.co/spaces/svjack/Pixel-Story-Teller |
Install the dependencies with
pip install -r requirements.txt
Run Book Cover Story Teller
python book_cover_app.py
Run Pixel Story Teller
python pixel_app.py
Then visit http://127.0.0.1:7860
The above demos read the Hugging Face API_TOKEN from an environment variable; you can set it manually:
API_TOKEN = os.environ.get("HF_READ_TOKEN")
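With that token in hand, a typical Inference API call posts a prompt to a model endpoint with a bearer header. The sketch below shows the general pattern; the model id and helper name are illustrative assumptions, not the endpoints the demos actually use.

```python
import os
import requests

# The demos read the token from this environment variable.
API_TOKEN = os.environ.get("HF_READ_TOKEN")

# Hypothetical model id for illustration; book_cover_app.py and
# pixel_app.py choose their own Stable-Diffusion / LLM endpoints.
API_URL = "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5"
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def text_to_image(prompt: str) -> bytes:
    """Query the Inference API and return the raw image bytes."""
    response = requests.post(API_URL, headers=headers, json={"inputs": prompt})
    response.raise_for_status()
    return response.content
```

If `HF_READ_TOKEN` is unset, requests will fail with an authorization error, so export it in your shell before launching the apps.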
book_cover_demo.mp4
pixel_demo.mp4
The following are some results from the two demos.
book_cover_connect.mp4
pixel_ori_connect.mp4
pixel_trans_connect.mp4
For more comparison results, take a look at the videos.
The story teller handles stories of the form "Someone does something": the LLM part completes the cause, process, and result,
and the Stable Diffusion part draws images for them.
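The cause/process/result decomposition above can be sketched as a set of prompts handed to the LLM. The helper and templates below are purely illustrative assumptions, not the project's actual prompt engineering.

```python
# Minimal sketch of the "Someone does something" decomposition.
# The three stage names mirror the README; the templates are
# hypothetical examples of what would be sent to the LLM.
def story_prompts(event: str) -> dict:
    return {
        "cause": f"What caused this event? {event}",
        "process": f"Describe how this event unfolds: {event}",
        "result": f"What happens as a result? {event}",
    }

prompts = story_prompts("A girl plants a tree")
```

Each of the three completions would then be passed to the Stable Diffusion side as an image prompt.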
- '🎥💬 Book Cover Story Teller' can add a book cover to the story (click one in the left image gallery), and all images are transformed to the cover's style.
- '🧱 Pixel Story Teller' downsamples the images to a pixel style, making the output look like screenshots from pixel games.
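The pixel-style effect described above is commonly achieved by downsampling an image and scaling it back up with nearest-neighbor resampling. The sketch below shows that generic technique with Pillow; the function name and scale factor are assumptions, not the project's actual implementation.

```python
from PIL import Image

def pixelate(img: Image.Image, factor: int = 8) -> Image.Image:
    """Downsample then upscale with NEAREST to get a pixel-game look."""
    w, h = img.size
    small = img.resize((max(1, w // factor), max(1, h // factor)), Image.NEAREST)
    return small.resize((w, h), Image.NEAREST)
```

A larger `factor` produces chunkier "pixels"; nearest-neighbor resampling is what keeps the hard block edges instead of blurring them.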
svjack - https://huggingface.co/svjack - [email protected] - [email protected]
Project Link: https://github.com/svjack/Diffusion-Story-Teller