I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama with LibreChat.
Some questions to get a nice discussion going:
- Do any of you have experience with this?
- What are your motivations?
- What are you using in terms of hardware?
- Considerations regarding energy efficiency and associated costs?
- What about renting a GPU? Privacy implications?
I run Ollama on my laptop in a VM with Open WebUI. It works great, and I have plenty of models to choose from.
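For anyone who wants to poke at a setup like this from a script rather than the web UI, here’s a minimal sketch of talking to a local Ollama instance over its HTTP API. It assumes Ollama is listening on its default port (11434) and that a model such as "llama3" has already been pulled; swap in whatever model tag you actually use.

```python
# Minimal sketch: query a locally running Ollama instance over its HTTP API.
# Assumes Ollama is on the default port (11434) and the "llama3" model has
# already been pulled (e.g. via `ollama pull llama3`).
import requests

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a single non-streaming generate request to the local Ollama API."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Why self-host an LLM instead of using a cloud API?"))
```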
I was recently playing around with TTS, and it is pretty solid as well. I am thinking about taking one of the smaller Phi models and throwing it onto my Pine64 Quartz64 for a portable AI assistant while traveling. The only potential problem is how long inference takes on that hardware.
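If you want to sanity-check whether a small model is usable on a low-power board before committing, here’s a rough timing sketch. It assumes Ollama runs on the board and that a small model tag like "phi3:mini" is available in the Ollama library and has been pulled; the exact tag is just an example.

```python
# Rough sketch for timing a small model's response on low-power hardware.
# Assumes Ollama is running locally and a small model (e.g. "phi3:mini",
# a hypothetical tag here - check the Ollama library) has been pulled.
import time
import requests

def time_generation(prompt: str, model: str = "phi3:mini") -> None:
    start = time.monotonic()
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,  # small ARM boards can take minutes for longer prompts
    )
    resp.raise_for_status()
    elapsed = time.monotonic() - start
    tokens = resp.json().get("eval_count", 0)
    print(f"{elapsed:.1f}s total, ~{tokens / elapsed:.1f} tokens/s")

time_generation("Give me a one-sentence packing tip for travel.")
```

That tokens-per-second number is what will make or break the "assistant while traveling" idea, so it's worth measuring before carrying the board anywhere.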