I am a teacher and I have a LOT of different literature material that I wish to study and play around with.

I wish to have a self-hosted and reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see whether such a model can answer some of the subjective course questions I have set in my exams, or write short paragraphs about the topics I teach.

In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.

P.S.: I am not technically very experienced. I run Linux and can do very basic stuff. I have never self-hosted anything other than LibreTranslate and a Pi-hole!

  • h3ndrik@feddit.de · 5 months ago

    It depends on the exact specs of your old laptop, especially the amount of system RAM and the VRAM on the graphics card. It’s probably not enough to run any reasonably smart LLM, aside from maybe Microsoft’s small “phi” model.

    So unless it’s a gaming machine with 6 GB+ of VRAM, the graphics card will probably not help at all. Without it, inference is going to be slow. For that kind of computer I recommend projects that are based on llama.cpp or use it as a backend; it’s the best/fastest way to do inference on slow computers and CPUs.
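
    To give a rough idea of what that looks like in practice (just a sketch, not a specific recommendation: the model file name below is a placeholder, and you’d first download a small quantized GGUF model such as a phi variant), the llama-cpp-python bindings run entirely on the CPU:

    ```python
    # Minimal CPU inference with llama.cpp via its Python bindings
    # (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./phi-2.Q4_K_M.gguf",  # placeholder: any small quantized GGUF model
        n_ctx=2048,                        # context window; keep modest on low RAM
    )

    prompt = "Write a short paragraph about the themes of 'Wuthering Heights'."
    out = llm(prompt, max_tokens=200)
    print(out["choices"][0]["text"])
    ```

    A 4-bit quantized model in the 2–3B parameter range needs roughly a few GB of RAM, which is the main constraint on an old laptop.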

    Furthermore, you could use online services or rent a cloud machine with a beefy graphics card by the hour (or minute).