OK, you run start_linux.sh in the oobabooga folder to launch it on Linux. I've never run it on Linux myself, though.
The app can freeze the whole computer if you load models that are too big, and even the smaller models stutter.
It runs more smoothly and without memory bottlenecks. Besides, you can load any GGUF you want; you're not limited to the LLMs offered by GPT4All.
oobabooga is better than GPT4All: the software itself is more capable, and you load GGUF files through the llama.cpp loader that's integrated with it.
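For reference, llama.cpp can also run a GGUF directly from its own CLI, outside any web UI. A minimal sketch of building such an invocation — it assumes a `llama-cli` binary from a llama.cpp build is on your PATH, and the model filename is just a placeholder, not a real file:

```python
import shlex

def build_llama_cmd(model_path, prompt, n_predict=128, ctx_size=2048):
    """Build a llama.cpp CLI invocation for a local GGUF model.

    Assumes the `llama-cli` binary from a llama.cpp build is on PATH.
    """
    return [
        "llama-cli",
        "-m", model_path,      # any GGUF file, no curated model list
        "-p", prompt,          # the prompt text
        "-n", str(n_predict),  # max tokens to generate
        "-c", str(ctx_size),   # context window size
    ]

# Placeholder model path, purely illustrative:
cmd = build_llama_cmd("models/mistral-7b-instruct.Q4_K_M.gguf", "Hello")
print(shlex.join(cmd))
```

You could hand the resulting list to `subprocess.run` once the binary and a real model file are in place.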
I saw another report on the same topic; apparently three algorithms have been developed.
I used it to validate a user input format.
Maybe we just need a different type of NLP for summarization. I've noticed before that LLMs are unlikely to escape their 'base' knowledge.
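One "different type of NLP" here would be classic extractive summarization: since the output is copied verbatim from the input, it cannot drift back into a model's base knowledge. A minimal frequency-based sketch (stdlib only; the function name and scoring scheme are mine, not from any library):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score sentences by average word frequency and keep the top ones,
    preserving their original order. Purely extractive: every sentence
    in the output appears verbatim in the input."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)
```

Hybrids are common too: extract candidate sentences this way first, then let an LLM only smooth the wording, which limits how far it can stray from the source.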