I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama together with LibreChat (rough sketch of the kind of setup I mean after the questions).
Some questions to get a nice discussion going:
- Do any of you have experience with this?
- What are your motivations?
- What are you using in terms of hardware?
- Considerations regarding energy efficiency and associated costs?
- What about renting a GPU? Privacy implications?
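For concreteness, here's a minimal sketch of the kind of thing I mean: hitting a local Ollama instance over its REST API from Python. It assumes Ollama is running on its default port (11434) and that you've already pulled a model; the model name "llama3" is just a placeholder for whatever you have locally.

```python
# Minimal sketch: query a locally running Ollama instance over its REST API.
# Assumes Ollama is up on its default port (11434) and the model named below
# has already been pulled ("llama3" is a placeholder -- use what you have).
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply is a single JSON object; the generated
        # text lives in its "response" field.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Why self-host an LLM? Answer in one sentence."))
```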
I use https://github.com/oobabooga/text-generation-webui for LLMs and https://github.com/lllyasviel/stable-diffusion-webui-forge for Stable Diffusion models.
I only have 8 GB of VRAM to share between them, so I turn one off to use the other, lol.
Both work well for my needs.
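If you want to script the swap rather than eyeball it, here's a rough sketch of checking free VRAM before launching the heavier app. It assumes an NVIDIA card with nvidia-smi on the PATH, and the 6 GiB threshold is just an illustrative guess, not a measured requirement.

```python
# Rough sketch: check free VRAM via nvidia-smi before starting the heavier
# app, so you notice the other webui is still loaded before hitting an OOM.
# Assumes an NVIDIA GPU with nvidia-smi available; the threshold is made up.
import subprocess

def free_vram_mib(gpu_index: int = 0) -> int:
    """Return free VRAM in MiB for one GPU, as reported by nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits", "-i", str(gpu_index)],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    free = free_vram_mib()
    if free < 6144:  # illustrative threshold, tune for your model
        print(f"Only {free} MiB free -- shut the other webui down first.")
    else:
        print(f"{free} MiB free, good to go.")
```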