L_Acacia@lemmy.one to Selfhosted@lemmy.world • Guide to Self Hosting LLMs Faster/Better than Ollama (English)
llama.cpp works on Windows too (or any OS, for that matter), though Linux will give you better performance.
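To illustrate the cross-platform point: llama.cpp ships a `llama-server` binary that exposes an OpenAI-compatible HTTP API, so the same tiny client works identically on Windows and Linux. A minimal sketch, assuming a server already running on the default port with some GGUF model loaded (the port, model, and prompt are placeholders):

```python
import json
import urllib.request

# Assumes llama-server is already running locally, e.g.:
#   llama-server -m model.gguf --port 8080
SERVER_URL = "http://localhost:8080/v1/chat/completions"  # default port (assumption)

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the server and return the generated text."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        SERVER_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
#   print(ask("Say hello in one word."))
```

Because the endpoint follows the OpenAI chat schema, any OpenAI-compatible client library can be pointed at it as well.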
Buying a second-hand 3090 or 7900 XTX will be cheaper and give better performance if you are not building the rest of the machine.
You are limited by memory bandwidth, not compute, with LLMs, so an accelerator won't change the inference tokens/s.
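The bandwidth point can be made concrete with back-of-the-envelope arithmetic: during decoding, every weight is read from memory once per generated token, so the ceiling on tokens/s is roughly memory bandwidth divided by the model's size in memory. A rough sketch (the bandwidth figures are approximate spec-sheet values, and the model size assumes a ~4-bit quantized 7B model):

```python
def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed: all weights are streamed once per token."""
    return bandwidth_gb_s / model_size_gb

# A 7B model quantized to ~4 bits per weight is roughly 4 GB of weights.
model_gb = 4.0

# Approximate memory bandwidths in GB/s (spec-sheet values, assumption):
devices = {"RTX 3090": 936.0, "RX 7900 XTX": 960.0, "Dual-channel DDR5 CPU": 80.0}
for name, bw in devices.items():
    print(f"{name}: <= {max_tokens_per_second(bw, model_gb):.0f} tok/s")
```

Real throughput lands below this bound, but the ratio explains why a GPU with ~10x the memory bandwidth of a desktop CPU decodes roughly 10x faster regardless of its compute advantage.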
Yeah was kinda drunk when I wrote this comment
I use it + Portmaster + O&O to kill all of Windows' spyware. It's great software and should be recommended if you have to run Windows.
Scrubbles’s comment outlined what would likely be the best workflow. Having done something similar myself, here are my recommendations:
In my opinion, the best way to do STT with Whisper is Whisper Writer; I use it to write most of my messages and texts.
For the LLM part, I recommend KoboldCpp. It's built on top of llama.cpp and has a simple GUI that saves you from looking up the name of each poorly documented llama.cpp launch flag (the CLI is still available if you prefer). Plus, it offers more sampling options.
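Those extra sampling options are also reachable over KoboldCpp's KoboldAI-style HTTP API once it is running. A hedged sketch of a generate request; the endpoint path, default port, and parameter names are my recollection of that API (check KoboldCpp's built-in API docs), and the sampler values are purely illustrative:

```python
import json
import urllib.request

KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # default port (assumption)

def build_generate_request(prompt: str) -> dict:
    """Build a KoboldAI-style generate payload with extra sampling knobs."""
    return {
        "prompt": prompt,
        "max_length": 200,
        "temperature": 0.7,
        "top_k": 40,
        "min_p": 0.05,   # min-p sampling, one of the samplers beyond the basics
        "rep_pen": 1.1,  # repetition penalty
    }

def generate(prompt: str) -> str:
    """POST the prompt to a running KoboldCpp instance and return the text."""
    data = json.dumps(build_generate_request(prompt)).encode()
    req = urllib.request.Request(
        KOBOLD_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

# Example (requires a running KoboldCpp instance):
#   print(generate("Once upon a time"))
```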
If you want a chat frontend for the text generated by the LLM, SillyTavern is a great choice. Despite its poor naming and branding, it’s the most feature-rich and extensible frontend. They even have an official extension to integrate TTS.
For the TTS backend, I recommend Alltalk_tts. It provides multiple model options (xttsv2, coqui, T5, …) and has an okay UI if you need it. It also offers a unified API across the different models. If you pick SillyTavern, Alltalk can be reached through its TTS extension. Among the models, T5 will give you the best quality but is more resource-hungry; xtts and coqui will give you decent results and are easier to run.
There are also STS (speech-to-speech) models emerging, like GLM4-V, but I haven't tried them yet, so I can't judge the quality.