  • I’d be interested in setting up the highest-quality models to run locally, and I don’t have the budget for a GPU with anywhere near enough VRAM, but my main server PC has a 7900x and I could afford to upgrade its RAM. Is it possible, and if so how difficult, to get this stuff running on CPU? Inference speed isn’t a sticking point as long as it’s not unusably slow, but since I have access to an OpenAI subscription, there wouldn’t be much point in running lower-quality models except as a toy.


  • The issue is that, in the function passed to reduce, you’re adding each object directly to the accumulator rather than to its intended parent. These are the problem lines:

    if (index == array.length - 1) {
    	accumulator[val] = value; // always writes to the top-level object
    } else if (!accumulator.hasOwnProperty(val)) {
    	accumulator[val] = {}; // creates a nested object but never descends into it
    }
    

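    To make the problem concrete, here's a runnable reconstruction of that reduce with made-up inputs (the URL, value, and variable names are assumptions, not from the original post):

    // Hypothetical reconstruction of the buggy reduce:
    const url = "audio/volume/master";
    const value = 0.5;
    const settings = url.split("/").reduce((accumulator, val, index, array) => {
        if (index == array.length - 1) {
            accumulator[val] = value;
        } else if (!accumulator.hasOwnProperty(val)) {
            accumulator[val] = {};
        }
        return accumulator; // always the same top-level object
    }, {});
    console.log(settings);
    // => { audio: {}, volume: {}, master: 0.5 } - flat, not nested
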
    There’s no pretty way (that I can think of at least) to do what you want using methods like reduce in vanilla JS, so I’d suggest using a for loop instead - especially if you’re new to programming. Something along these lines:

    let curr = settings;
    const split = url.split("/");
    for (let i = 0; i < split.length; i++) {
        const val = split[i];
        if (i != split.length - 1) {
            // reuse the existing object at this level if there is one,
            // so repeated calls don't clobber earlier settings
            if (!curr.hasOwnProperty(val)) {
                curr[val] = {};
            }
            curr = curr[val];
        } else {
            // last segment: store the value itself instead of nesting further
            curr[val] = value;
        }
    }
    

    The important part is that every time we move one level deeper in the URL, we update curr so that we keep our place instead of always adding to the top level.
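
    Wrapped in a small helper so you can try it out (the function name and the example inputs are placeholders, not from the original post):

    // Hypothetical helper around the loop above; names are placeholders.
    function setNested(settings, url, value) {
        let curr = settings;
        const split = url.split("/");
        for (let i = 0; i < split.length; i++) {
            const val = split[i];
            if (i != split.length - 1) {
                if (!curr.hasOwnProperty(val)) {
                    curr[val] = {};
                }
                curr = curr[val];
            } else {
                curr[val] = value;
            }
        }
    }

    const settings = {};
    setNested(settings, "audio/volume/master", 0.5);
    console.log(settings);
    // => { audio: { volume: { master: 0.5 } } }

    Called twice with URLs that share a prefix, the hasOwnProperty check keeps the earlier values intact instead of overwriting them.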