spha-code/RAG-ollama-system
RAG Ollama LLM App with a Flask Backend and a Local Document Directory



Structure of the Project

rag-system/
├── venv/
├── docs/
│   ├── ...
├── vectorstore/
│   └── ...
├── ingest.py
├── app.py          <-- Flask application
└── templates/
    └── index.html  <-- HTML in templates folder
  1. mkdir rag-system (run on WSL)

  2. cd rag-system

  3. Create a virtual environment with venv: python -m venv venv

  4. Activate the virtual environment

    On Windows:

    .\venv\Scripts\activate

    On macOS/Linux:

    source venv/bin/activate

  5. Install the dependencies: pip install -r requirements.txt

  6. mkdir docs and copy your files (.pdf, .txt) into the directory

  7. Install Ollama locally: sudo snap install ollama

  8. Download a model with Ollama: ollama pull deepseek-r1:1.5b

    deepseek-r1:1.5b is a 1.1 GB download.

    See here for a list of models: https://ollama.com/search

  9. Pull the embedding model: ollama pull nomic-embed-text

  10. Flask uses a templates folder by default to find HTML files. Create this folder in your project root

    mkdir templates

  11. Inside the templates folder, create index.html

  12. Open a new terminal and run ollama serve. This starts the Ollama server; do not close this terminal

  13. Run python ingest.py to build the vector store from the documents in docs/

  14. Run python app.py and open the app in your browser (Flask serves on http://127.0.0.1:5000 by default)
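Step 5 installs from requirements.txt, but the README does not list its contents. A plausible minimal set for the stack described above (Flask, the Ollama Python client, a local vector store, and PDF text extraction) might look like the following; the exact packages and pinned versions in the repo may differ:

```
flask
ollama
chromadb
pypdf
```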
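The ingest step (13) can be sketched as follows. This is a minimal illustration, assuming the `ollama` and `chromadb` Python packages and a collection named `rag`; the repo's actual implementation may use different libraries (e.g. LangChain) and chunking parameters:

```python
# ingest.py -- minimal sketch: read docs/, chunk, embed with
# nomic-embed-text (pulled in step 9), store in vectorstore/.
import os

def chunk_text(text, size=500, overlap=50):
    """Split text into overlapping chunks small enough to embed."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def ingest(docs_dir="docs", store_dir="vectorstore"):
    # Imported here so chunk_text stays usable without these packages.
    import ollama
    import chromadb

    client = chromadb.PersistentClient(path=store_dir)
    collection = client.get_or_create_collection("rag")
    for name in os.listdir(docs_dir):
        if not name.endswith(".txt"):
            continue  # .pdf files would first need text extraction (e.g. pypdf)
        with open(os.path.join(docs_dir, name), encoding="utf-8") as f:
            text = f.read()
        for i, chunk in enumerate(chunk_text(text)):
            # One embedding per chunk, same model used later for queries
            emb = ollama.embeddings(model="nomic-embed-text", prompt=chunk)
            collection.add(
                ids=[f"{name}-{i}"],
                embeddings=[emb["embedding"]],
                documents=[chunk],
            )
```

Adding an `if __name__ == "__main__": ingest()` guard at the bottom makes `python ingest.py` (step 13) work as described.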
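The Flask backend of step 14 can be sketched like this. Route names, model names, and the vector-store layout are assumptions to illustrate the retrieve-then-generate flow, not the repo's confirmed implementation:

```python
# app.py -- minimal sketch of the Flask backend: serve index.html,
# and answer questions by retrieving chunks and asking the LLM.
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

def build_prompt(question, chunks):
    """Combine retrieved chunks and the user question into one prompt."""
    context = "\n\n".join(chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

@app.route("/")
def index():
    # Flask resolves index.html from the templates/ folder (step 10)
    return render_template("index.html")

@app.route("/ask", methods=["POST"])
def ask():
    # Imported lazily; both packages are assumed installed (step 5)
    import ollama
    import chromadb

    question = request.get_json()["question"]
    # Embed the question with the same model used at ingest time
    emb = ollama.embeddings(model="nomic-embed-text", prompt=question)
    store = chromadb.PersistentClient(path="vectorstore")
    hits = store.get_or_create_collection("rag").query(
        query_embeddings=[emb["embedding"]], n_results=3
    )
    prompt = build_prompt(question, hits["documents"][0])
    reply = ollama.chat(
        model="deepseek-r1:1.5b",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"answer": reply["message"]["content"]})
```

With `ollama serve` running and the store built by ingest.py, the endpoint could be exercised with `curl -X POST -H "Content-Type: application/json" -d '{"question": "..."}' http://127.0.0.1:5000/ask`. An `if __name__ == "__main__": app.run(debug=True)` guard at the bottom makes `python app.py` (step 14) work.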
