I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama together with LibreChat.

Some questions to get a nice discussion going:

  • Any of you have experience with this?
  • What are your motivations?
  • What are you using in terms of hardware?
  • Considerations regarding energy efficiency and associated costs?
  • What about renting a GPU? Privacy implications?
@Audalin@lemmy.world replied:

    If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
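That hierarchy maps directly onto token throughput. As a rough back-of-envelope sketch (my own numbers, not from the comment): generating one token streams the full set of active weights through the processor once, so bandwidth divided by model size gives an upper bound on tokens per second. The bandwidth figures below are illustrative assumptions, not measurements:

```python
# Hypothetical back-of-envelope: token generation is typically
# memory-bandwidth-bound, since each generated token requires reading
# all active model weights once.

def tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper-bound estimate: one full pass over the weights per token."""
    return bandwidth_gb_s / model_size_gb

# Illustrative bandwidth tiers (approximate; varies a lot by hardware):
tiers = {
    "GPU VRAM (~900 GB/s)": 900.0,
    "dual-channel DDR5 (~80 GB/s)": 80.0,
    "NVMe SSD swap (~5 GB/s)": 5.0,
}

model_gb = 4.1  # e.g. roughly a 7B model at 4-bit quantisation
for name, bw in tiers.items():
    print(f"{name}: ~{tokens_per_second(model_gb, bw):.0f} tokens/s ceiling")
```

Real throughput lands below these ceilings (KV cache reads, compute overhead), but the ordering matches the comment: VRAM ≫ DDR5 ≫ swap.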

8x7B is Mixtral, yeah.
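For context on why that model matters for the memory discussion above (my own figures, roughly per Mistral's published specs): Mixtral 8x7B totals about 46.7B parameters because the eight experts share attention layers, but only two experts (~12.9B parameters) are active per token. So the per-token bandwidth cost is closer to a 13B model, while the full ~47B still has to fit in memory. A quick sketch of the footprint at common quantisation levels:

```python
# Approximate footprint estimate for Mixtral 8x7B:
# ~46.7B total parameters, ~12.9B active per token (2 of 8 experts routed).

def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """GB needed for the weights alone, ignoring KV cache and overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    total = weight_gb(46.7, bits)
    active = weight_gb(12.9, bits)
    print(f"{bits}-bit: ~{total:.0f} GB to store, ~{active:.0f} GB read per token")
```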