Sparkie's RAG

An effective "light in weight" RAG tool for Cloudron. There are several good RAG apps (Windmill, Dify, AnythingLLM), so why make another? Because I personally find the others too broad in scope for my needs, and too heavyweight when I want a targeted solution.

Sparkie's RAG is a simpler tool that focuses on the core features of RAG: upload documents, index them, and then query them back.

Like my eponymous faithful companion for many years (now sadly passed and much missed), Sparkie's RAG is small, compact, strong and terrier-like in information retrieval.

Features

  • It is designed for use with a self-hosted Ollama instance.

  • The system prompt is exposed in the UI and can be customised, so you can tailor the behaviour to your specific needs; a custom prompt is key to getting the style of response you want back. (See the prompt-assembly sketch after this list.)

  • It uses PostgreSQL with the pgvector extension to index the uploaded files. (See the schema sketch after this list.)

  • In my setup, I use a self-hosted Ollama deployment from Cloudron and pull these models into it: nomic-embed-text-v2-moe:latest for embeddings and cogito-2.1:671b-cloud for inference. Using an Ollama cloud model for inference and processing of the indexed data through my self-hosted instance means no GPU is required. (See the Ollama sketch after this list.)

  • Settings are stored in Postgres and configured in the UI.
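
To make the indexing concrete, here is a minimal sketch of what a pgvector-backed chunk store can look like. The `chunks` table, its columns, and the 768-dimension vector size (the output size of the nomic-embed-text family) are illustrative assumptions, not the app's actual schema:

```js
// Hypothetical pgvector chunk store; table and column names are
// illustrative, not Sparkie's actual schema.
const { Pool } = require('pg');
const pool = new Pool(); // connection details come from the PG* env vars

// One-time setup: enable pgvector and create a chunk table.
async function init() {
  await pool.query('CREATE EXTENSION IF NOT EXISTS vector');
  await pool.query(`
    CREATE TABLE IF NOT EXISTS chunks (
      id        SERIAL PRIMARY KEY,
      document  TEXT NOT NULL,
      content   TEXT NOT NULL,
      embedding VECTOR(768)
    )`);
}

// Store one chunk; the embedding array is passed as a pgvector literal.
async function insertChunk(document, content, embedding) {
  await pool.query(
    'INSERT INTO chunks (document, content, embedding) VALUES ($1, $2, $3)',
    [document, content, `[${embedding.join(',')}]`]
  );
}

// Fetch the k chunks closest to a query embedding (cosine distance).
async function topK(queryEmbedding, k = 5) {
  const { rows } = await pool.query(
    'SELECT document, content FROM chunks ORDER BY embedding <=> $1 LIMIT $2',
    [`[${queryEmbedding.join(',')}]`, k]
  );
  return rows;
}
```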
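
The split between local embeddings and cloud inference comes down to two Ollama HTTP API calls, sketched below. The /api/embeddings and /api/generate routes are standard Ollama endpoints; OLLAMA_URL is a placeholder for your own instance:

```js
// Sketch of the two Ollama calls, assuming Node 18+ (global fetch).
const OLLAMA_URL = process.env.OLLAMA_URL || 'http://localhost:11434';

// Embed text on the self-hosted instance; embedding models run fine on CPU.
async function embed(text) {
  const res = await fetch(`${OLLAMA_URL}/api/embeddings`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'nomic-embed-text-v2-moe:latest', prompt: text })
  });
  const { embedding } = await res.json();
  return embedding;
}

// Generate an answer with the cloud model; the heavy inference runs on
// Ollama's cloud, which is why no local GPU is needed.
async function generate(prompt) {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'cogito-2.1:671b-cloud', prompt, stream: false })
  });
  const { response } = await res.json();
  return response;
}
```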
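
Finally, a sketch of how a customised system prompt is typically combined with retrieved chunks at query time. This is one common pattern, not necessarily the app's exact template; buildPrompt and ask are hypothetical names, and ask reuses the functions from the two sketches above:

```js
// Illustrative prompt assembly; Sparkie's actual template may differ.
function buildPrompt(systemPrompt, chunks, question) {
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.document})\n${c.content}`)
    .join('\n\n');
  return `${systemPrompt}\n\nContext:\n${context}\n\nQuestion: ${question}\nAnswer:`;
}

// End-to-end query: embed the question, retrieve nearby chunks, generate.
// embed, topK, and generate come from the sketches above.
async function ask(systemPrompt, question) {
  const queryEmbedding = await embed(question);
  const chunks = await topK(queryEmbedding);
  return generate(buildPrompt(systemPrompt, chunks, question));
}
```

In a pattern like this, the system prompt does most of the steering, which is why exposing it in the UI matters: a tightly scoped instruction (e.g. "answer only from the supplied context") changes the style and reliability of the answers far more than any other knob.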