Installing your own PrivateGPT on Mac
I stumbled across an article on how to install your own PrivateGPT so that you can have your own local LLM (Large Language Model) to chat with. Furthermore, you can ingest a bunch of your own documents so that it responds as if you were talking to a book. I couldn't resist the temptation to have my own PrivateGPT and feed it data to my own liking, so here are the steps I went through to get it going.
0. Pre-Requisite
- Pip
- Python 3.11
- Homebrew
Update pip to the latest version:
pip install pip --upgrade
1. Clone the PrivateGPT repository
git clone https://github.com/imartinez/privateGPT
cd privateGPT
2. Install Poetry
pip install poetry
poetry --version
3. Install ollama.ai
Go to https://ollama.com/ and follow the instructions. I am installing the macOS version, given that I have a Mac. Once downloaded, unzip the file and double-click the Ollama icon; it will then prompt you to copy it to the Applications folder, so follow that prompt.
Once it is installed, quit it from the menu bar by selecting “Quit Ollama”, so that the desktop app is not running in the background.
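Before moving on, it's worth a quick sanity check (not part of the official instructions) that the install actually put the `ollama` command on your PATH:

```shell
# Print the client version if the ollama CLI is installed;
# otherwise flag that the install step needs another look.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found on PATH - revisit step 3"
fi
```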
4. Install the models to be used; I am using the Mistral 7B LLM and nomic-embed-text
ollama pull mistral
ollama pull nomic-embed-text
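You can verify the pulls with `ollama list`, which prints every model stored locally; both models should appear (guarded here in case the CLI is not on your PATH):

```shell
# List locally available models; both pulls should show up.
ollama list 2>/dev/null || echo "ollama CLI unavailable - check step 3"
```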
5. Start Ollama service
ollama serve
If you encounter the following error message, it means Ollama is already running, so you need to quit it from your desktop first:
Error: listen tcp 127.0.0.1:11434: bind: address already in use
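The desktop app and `ollama serve` both bind 127.0.0.1:11434, hence the clash. A quick way to see whether something is already answering on that port (11434 is the default; yours may differ if you changed it):

```shell
# Probe Ollama's default port; the server replies "Ollama is running"
# when it is up, and curl fails fast when nothing is listening.
curl -s --max-time 2 http://localhost:11434 || echo "nothing listening on 11434"
```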
6. Now install the Ollama LLM using poetry
Run the following command in a new terminal window:
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
7. Run your PrivateGPT
PGPT_PROFILES=ollama make run
Your PrivateGPT should now be running, and you can point your browser to http://localhost:8001
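You can confirm the server is answering before opening the browser (this assumes the default port 8001 mentioned above):

```shell
# Probe the PrivateGPT port; success means the UI is being served.
if curl -s --max-time 2 -o /dev/null http://localhost:8001; then
  echo "PrivateGPT is up on port 8001"
else
  echo "PrivateGPT is not answering yet"
fi
```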
I will be posting the video shortly.
Final Note:
If you encounter issues due to a slow CPU, or you are not able to use the GPU (as in my case), you can add “request_timeout=300” where the Ollama LLM is constructed in the following file:
private_gpt/components/llm/llm_component.py
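The exact code differs between PrivateGPT versions, but the idea is to pass a longer timeout into the Ollama LLM constructor. A hypothetical sketch only (the surrounding names are illustrative, not the actual file contents):

```python
# Hypothetical sketch - match this against what your copy of
# private_gpt/components/llm/llm_component.py actually contains.
self.llm = Ollama(
    model=ollama_settings.llm_model,    # illustrative name
    base_url=ollama_settings.api_base,  # illustrative name
    request_timeout=300.0,              # the added argument
)
```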
PPS:
If you need to ingest multiple documents into PrivateGPT, you can do it using the following command:
make ingest ./folder/with/files -- --watch
This command watches the folder for changes and ingests the files within it. For details of the command, see https://docs.privategpt.dev/manual/document-management/ingestion
You can watch the install video below: