• bron@kbin.social · 1 year ago

    So fully explaining how these systems work will be a huge project that humanity is unlikely to complete any time soon.

    Great read. This quote really stuck out to me and gave me chills. Reading about AI is so fascinating. Feels like we’re on the cusp of something big.

    • PenguinTD@lemmy.ca · 1 year ago
      Because in the end it’s all statistics and math. Humans are full of mistakes (intentional or not) and living languages evolve over time (even the grammar), so whatever we are building “now” is a contemporary “good enough” representation.

  • pezhore@lemmy.ml · 1 year ago

    Does anyone else start freaking out when we have such complex programs that researchers don’t fully understand how they work?

    • Gaywallet (they/it)@beehaw.org (OP) · 1 year ago

      For what it’s worth, a lot of medicine works this way. I’m fairly certain this isn’t the only field, either; I’d imagine studying ecology or space feels similar.

    • CarbonIceDragon@pawb.social · 1 year ago

      It does make me vaguely curious what happens if you ask one of the more powerful ones to explain, step by step, how its own program works. I don’t really expect it to be accurate: if people don’t know how the thing works, it probably won’t find much about that in its training data. But if what it learns ultimately enables it to make connections about how the real world works to some degree, could it figure out enough to give even marginally useful hints?

      • Czorio@kbin.social · 1 year ago

        Not really. It’s super fucking expensive to train one of these, so online training would simply not be economically feasible.

        Even if it were, the models don’t really have any agency. You prompt, they respond. There’s not much prompting going on from the model’s side, and even if there were, you could choose not to respond, which the model can’t really do.
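
        To make the “you prompt, they respond” point concrete, here is a minimal sketch of that interaction loop. It is purely illustrative; model_respond is a hypothetical stand-in for whatever inference API a real model sits behind.

        ```python
        # The human drives every turn: the model only produces text when
        # handed a prompt, and it has no way to start a turn on its own.

        def model_respond(prompt: str) -> str:
            # Placeholder for an actual LLM inference call.
            return f"(model output for: {prompt!r})"

        while True:
            prompt = input("you> ")   # the human can simply stop typing...
            if not prompt:
                break                 # ...and the exchange ends
            print("model>", model_respond(prompt))
        ```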