I’ve recently played with the idea of self-hosting an LLM. I’m aware that it won’t reach GPT-4 levels, but being free to use prompts with confidential data, without restrictions, would be a very nice tool for me to have.
Has anyone got experience with this? Any recommendations? I have downloaded the full Reddit dataset, so I could retrain the model on it, since selected communities provide immense value and knowledge (hehe, this is exactly what Reddit, Twitter, etc. are trying to prevent…)
The best/easiest way to get started with a self-hosted LLM is to check out this repo:
https://github.com/oobabooga/text-generation-webui
Its goal is to be the Automatic1111 of text generators, and it does a fair job at it.
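Getting it running is usually just a clone and install. A rough sketch of the manual route (the repo also ships one-click installers, and the exact steps may differ from its current README):

```shell
# clone the web UI and install its Python dependencies
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# start the server, then open the printed local URL in a browser
python server.py
```

Models go into the `models/` folder, and you can switch between them from the web interface.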
A good model that’s said to rival GPT-3.5 is the new Falcon model. The full-sized 40B version is too big to run on a single GPU, but the 7B version “only” needs about 16GB.
https://huggingface.co/tiiuae/falcon-7b
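As a sanity check on that 16GB figure: weight memory is roughly parameter count × bytes per parameter. A quick back-of-the-envelope in Python (the function name is mine, and the estimate ignores activations and the KV cache, which is where the extra headroom goes):

```python
# Rough memory estimate for model weights alone.
# Real usage is higher: frameworks add activations, KV cache, and overhead.

def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Gigabytes (1 GB = 1e9 bytes) needed just to hold the weights."""
    return n_params * bits_per_param / 8 / 1e9

falcon_7b = 7e9  # parameter count

fp16 = weight_memory_gb(falcon_7b, 16)  # half precision, as downloaded
int4 = weight_memory_gb(falcon_7b, 4)   # 4-bit quantized, e.g. for llama.cpp

print(f"fp16 weights: {fp16:.1f} GB")   # ~14 GB -> matches the ~16GB claim
print(f"4-bit weights: {int4:.1f} GB")  # ~3.5 GB -> fits consumer GPUs
```

The same arithmetic shows why quantized models are so popular for local hosting: 4-bit weights cut the footprint by 4× compared to fp16.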
There’s also the Wizard-uncensored model that is popular.
https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored
There are a ton of models out there with new ones popping up every day. You just need to search around. The oobabooga repo has a few models linked in the readme also.
Edit: there’s also h2oGPT, which seems really promising. I’m going to try it out in the next couple of days.
I’m about to start this journey myself. I found this, which looks promising: https://github.com/ggerganov/llama.cpp
Would be nice if someone here with some experience could share.
Edit: also this https://gpt4all.io/index.html
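For llama.cpp, the usual flow is to build it and point it at a quantized model file. A rough sketch (binary and flag names have changed across versions, and the model path below is a placeholder, not a real file):

```shell
# build llama.cpp (CPU-only by default; see the README for GPU builds)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# run inference against a quantized model you downloaded separately;
# the model filename here is a placeholder
./main -m ./models/model-q4_0.bin -p "Hello, how are you?" -n 128
```

The appeal is that it runs entirely on the CPU with modest RAM, so no GPU is required to get started.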
Do you need any particular Python packages, or is it all provided?
If you don’t have a good GPU, then just use gpt4all.
I personally use llama.cpp in a VM. However, if you have an Nvidia GPU with lots of VRAM you’ve got more options available, as well as much faster inference (text generation) speed.
Check out the community at !localllama@sh.itjust.works, they’re pretty experienced with running LLMs locally.
Repos you might want to (git) checkout:
I would advise not training your own model, but instead using tools like langchain and chroma in combination with an open model like gpt4all or falcon :).
So in general, explore langchain!
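The idea behind langchain + chroma is retrieval: embed your documents, find the ones closest to a question, and paste them into the model’s prompt instead of retraining anything. Here’s a toy, dependency-free sketch of that idea (the bag-of-words “embedding” is a stand-in for a real embedding model, and the example documents are made up):

```python
# Toy retrieval sketch: the concept behind langchain + a vector store like chroma.
# A real setup swaps embed() for an embedding model and docs for your own data.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding'; real systems use dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "falcon is an open model released by tiiuae",
    "llama.cpp runs quantized models on the cpu",
    "chroma is a vector database for embeddings",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the question."""
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "which tool runs models on the cpu?"
context = retrieve(question)[0]
prompt = f"Answer using this context: {context}\nQuestion: {question}"
print(prompt)
```

The final `prompt` string is what you’d hand to gpt4all or falcon, which is why this approach often beats fine-tuning for “chat with my own data” use cases.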
Not sure if you’re asking about already-trained models or whether you want to train your own.
If you just want to have fun, the small-to-medium models are pretty OK, things like Wizard Vicuna 13B or the smaller 7B. You just have to try some of them until you find what’s best for your use case. For example, I have a model running Discord bots (with different personalities), but the same model would work badly in my other projects, especially considering that with some models you can just chat, while others need instructions.
There are also recent models that approach GPT levels. The downside is that they are huge in terms of hardware cost (hundreds of GBs of RAM, multiple GPUs). But they won’t necessarily be better than a smaller, more focused model.
Get oobabooga (the Automatic1111 of chat LLMs) and then search for TheBloke on Hugging Face for models.
If you want to host a text model that’s reachable by you or anyone else securely over the internet, I suggest you turn your PC into a worker for the AI Horde. You would then be able to access the model you’re serving from anywhere, but also everyone else’s LLM and Stable Diffusion models, with priority. You would also be improving the commons.
You might find some starting points or even projects or terms to look for in this article:
Honestly, all of these are great suggestions for today, but this area is moving so fast that I would almost suggest holding off for six months to a year for a better solution to rise to the top. Capabilities grow daily, and you may put in the work to get this set up only to have a much more capable solution appear soon afterwards. Just a thought, though; if it’s mainly for a fun experiment, then try some of these out!
There is also runpod.io. You can rent quite powerful machines on an hourly basis, which gives you the possibility to run the large models. They also have templates, so the machine will be set up and ready to go in minutes. All you have to do is load the model you’d like to try via the oobabooga web interface on your machine.