1: Depending on where you are setting up your local server, the steps will vary a bit. A simple method is to set it up on a Mac using the Ollama DMG installation and then install your desired model through the terminal.

2: Download Ollama here (the page also shows the Linux setup, which is self-explanatory): https://ollama.com/download
   Once Ollama is running, pull and run your desired model from the terminal, like so:
   ollama run llama2-uncensored

3: Once you have Ollama installed with your desired models, you will need to set up a Docker container. Docker Desktop gives you a GUI for it: https://www.docker.com/products/docker-desktop/

4: By default you should set it up to run on port 3000 (mapped to the container's port 8080) with the following command:
   docker run -d -p 3000:8080 \
     --add-host=host.docker.internal:host-gateway \
     -v ollama-webui:/app/backend/data \
     --name ollama-webui \
     --restart always \
     ghcr.io/ollama-webui/ollama-webui:main

5: Check whether the Docker container is running using:
   docker ps

6: Everything should also be visible in the Docker Desktop GUI.

7: If you want to access the AI from outside your local host, you can install ngrok, which gives you a free forwarding link to your localhost. If so, run:
   brew install ngrok/ngrok/ngrok

8: Now head over to https://ngrok.com/ and make an account. This will give you an auth token.

9: Once you have it, run this command with your token:
   ngrok config add-authtoken YOUR_TOKEN_HERE

10: That links ngrok to your account. Now all you have to do is run:
    ngrok http http://localhost:3000
    This starts ngrok in the terminal and displays the server status: account info, region, latency, and most importantly the forwarding URL you will use to access the AI from anywhere in the world, as long as your machine stays on. All sorts of connections will pop up when people try to connect, so I suggest keeping Docker Desktop and the ngrok terminal window open side by side to see everything that's happening.

11: That's pretty much it. Congrats, you've officially set up an LLM on your machine. Remember: the stronger your machine is, the larger the models (with billions of parameters) it will be able to process.
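
Bonus: if you want to sanity-check that Ollama itself is serving before blaming the web UI, you can hit its local HTTP API directly. A minimal sketch, assuming Ollama's default port (11434) and the llama2-uncensored model pulled in step 2:

   # Ask Ollama for a one-off completion over its local HTTP API.
   # Port 11434 is Ollama's default; swap in whichever model you pulled.
   curl http://localhost:11434/api/generate -d '{
     "model": "llama2-uncensored",
     "prompt": "Say hello in one sentence.",
     "stream": false
   }'

If you get a JSON response with generated text back, the model side is fine, and any remaining problems are in the Docker/web UI layer.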
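
If the web UI never comes up at http://localhost:3000, the container logs usually tell you why. These are standard Docker commands, using the container name from step 4:

   # Follow the web UI container's logs to watch for startup errors.
   docker logs -f ollama-webui

   # Restart the container after fixing whatever the logs complained about.
   docker restart ollama-webui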
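
One caveat about step 10: anyone who gets hold of the forwarding URL can reach your web UI. Recent ngrok versions (v3) can put basic auth in front of the tunnel; the credentials below are placeholders, and it's worth confirming the flag with ngrok http --help on your install:

   # Same tunnel as step 10, but visitors must enter a username and password.
   # "me:somepassword" is a placeholder -- pick your own credentials.
   ngrok http --basic-auth="me:somepassword" http://localhost:3000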