Our AI-generated future is going to be fantastic.

Archive link, so you don’t have to visit Substack: https://archive.is/hJIWk

  • watersnipje@lemmy.blahaj.zone · 1 year ago

    Yeah, if you already have the hardware, then it’s not really an extra cost. But the smaller models perform worse and less reliably.

    In order to write a book that’s convincing enough to fool at least some buyers, I wouldn’t expect a Llama2 7B to do the trick, based on what I see in my work (I’m an ML engineer). But even at work, I run Llama2 70B quantized at most, not the full-size model. Full-size unquantized requires about 320 GB of GPU VRAM, and that’s just quite expensive (even more so when you have to rent it from cloud providers).
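
    For a rough sense of where that number comes from, here’s a back-of-envelope sketch in Python (my own illustration, not part of the original comment). It counts only the weights: parameters times bytes per parameter. Real inference adds KV cache, activations, and framework overhead on top, which is how full-precision 70B climbs from ~140 GB of weights toward a total footprint in the 320 GB range:

        def weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
            # Weights-only estimate, in GB: each parameter takes (bits / 8) bytes.
            return params_billion * bits_per_weight / 8

        for name, params in [("Llama2 7B", 7.0), ("Llama2 70B", 70.0)]:
            for bits in (16, 8, 4):
                print(f"{name} @ {bits}-bit: ~{weight_vram_gb(params, bits):.0f} GB for weights")

    At 4 bits, the 70B weights fit in roughly 35 GB, which is why a quantized 70B is feasible on one or two big GPUs while full precision needs a multi-GPU node.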

    Although if you already have a GPU that size at home, then of course you can run any LLM you like :)
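
    If you do have that kind of hardware, here’s a minimal sketch of what loading a quantized model can look like with Hugging Face transformers plus bitsandbytes (again my illustration, not the commenter’s setup; it assumes you’ve been granted access to Meta’s gated Llama 2 weights and have accelerate and bitsandbytes installed):

        from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

        model_id = "meta-llama/Llama-2-70b-chat-hf"  # gated repo: requires an accepted license
        quant = BitsAndBytesConfig(load_in_4bit=True)  # ~35 GB of weights instead of ~140 GB

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            device_map="auto",          # spread layers across whatever GPUs are available
            quantization_config=quant,
        )

        inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=50)
        print(tokenizer.decode(out[0], skip_special_tokens=True))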