• oyo@lemmy.zip · ↑3 ↓1 · 1 day ago

      The dictionary is certainly more knowledgeable about words than you.

      • queermunist she/her@lemmy.ml · ↑4 ↓2 · 1 day ago

        Let’s abstract it further. Rip every page out of the dictionary and put it through a shredder. All the knowledge is still there; the paper hasn’t been destroyed, and someone patient enough could still piece the knowledge back together. It’s just not in a form that can be easily read.

        But is that pile of shredded paper knowledgeable?

        • wischi@programming.dev · ↑2 · 24 hours ago

          I don’t get your analogy. Put your brain through a shredder. Is it still intelligent? All the atoms are still there.

          • queermunist she/her@lemmy.ml · ↑2 · 24 hours ago

            Exactly? Both intelligence and knowledgeability are emergent. You can’t just gather all the knowledge in one place and then call that pile knowledgeable (or intelligent, for that matter). A book (or a chatbot) isn’t knowledgeable; it merely contains knowledge.

            • wischi@programming.dev · ↑2 · 23 hours ago

              I’m not a native speaker, but that sounds like semantics to me. How would you, when chatting, tell whether the other end is “knowledgeable” or merely “contains knowledge”?

              • queermunist she/her@lemmy.ml · ↑1 · 23 hours ago

                The distinction is important because an intelligent being can actually be trusted to do the things they are trained to do. LLMs can’t. The “hallucination” problem comes from these things being probability engines: they don’t actually know what they’re saying. B follows A; they don’t know why, they don’t care, and they can’t care. That’s why LLMs can’t really replace workers. At best they’re productivity software that can (maybe, I’m not convinced) make human workers more productive.

                One distinction is that it takes real work to get useful knowledge out of these things. You can’t just prompt one and expect the answer to always be correct or useful; you have to double-check everything, because it might just make shit up.

                The knowledgeability, the intelligence, still comes from the human user.

                • wischi@programming.dev · ↑1 · edited · 4 hours ago

                  To be fair, all of what you’ve said applies to humans too. Look how many flat-earthers there are, and even more people who believe in homeopathy, think vaccines cause autism, or think aliens built the pyramids.

                  But nobody calls that “hallucination” in humans. Are LLMs perfect? Definitely not. Are they useful? Somewhat, but definitely very far from the PhD-level intelligence some claim.

                  But there are things LLMs are already way better at than any single human (not humans collectively): for example, giving you a hint (it doesn’t have to be 100% accurate) about what topics to look up when you can only describe something vaguely and don’t even know what you’d search for in a traditional search engine.

                  Of course you can’t trust it blindly, but you shouldn’t trust humans blindly either. That’s why we have the scientific method: because humans are unreliable too.

                  • queermunist she/her@lemmy.ml · ↑1 · 4 hours ago

                    My boss trusts me to accurately check the quality of the parts I weld. That’s literally my job. It’s not blind trust; I’d lose my job if I couldn’t consistently produce good results. But once I was trained to do my job, I could be left to my own devices. You can’t do that with an LLM, because you’d still need a human to double-check that it didn’t hallucinate that the part was correctly welded.

                    That’s the difference - intelligent beings, once they understand something, can be trusted. Obviously if they don’t understand something, like the fact that the Earth is round, then you can’t trust them. Intelligence still requires education and training, but the difference is that educating and training intelligent beings actually produces consistent results that can be relied on.

                    Notably, “hallucinate” is a term popularized by the companies behind LLMs. It’s not really accurate, because hallucination still implies intelligence. LLMs are just pattern-recognition engines; they don’t “hallucinate”, they simply have no idea what the patterns mean or why they happen. B follows A; that’s all they know. If a sequence comes up where C follows A instead, they make a mistake, and we call that “hallucination” even though it’s really just a mindless machine thoughtlessly repeating the patterns it was trained on.
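
                    To make the “B follows A” point concrete, here’s a toy sketch in Python (a bigram counter with a made-up corpus, nothing like a real transformer): it only learns which token tends to follow which, so it can confidently emit a wrong continuation without anything inside it “knowing” that it’s wrong.

                    ```python
                    import random
                    from collections import Counter, defaultdict

                    # Toy "probability engine": a bigram model that only learns which
                    # token tends to follow which. No meaning, no world model.
                    # The tiny corpus is invented for illustration.
                    corpus = "the earth is round . the earth is flat . the moon is round .".split()

                    follows = defaultdict(Counter)
                    for a, b in zip(corpus, corpus[1:]):
                        follows[a][b] += 1  # count how often b was seen right after a

                    def next_token(tok):
                        # Sample a continuation in proportion to how often it followed tok.
                        options = follows[tok]
                        return random.choices(list(options), weights=list(options.values()))[0]

                    tok, out = "the", ["the"]
                    for _ in range(4):
                        tok = next_token(tok)
                        out.append(tok)
                    # Might print "the earth is flat ." (statistically plausible, factually wrong).
                    print(" ".join(out))
                    ```

                    Real LLMs condition on far longer context and are vastly more capable, but the training objective is still next-token prediction, which is the point here.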

        • oyo@lemmy.zip · ↑1 · 1 day ago

          Are you trying to say that the word ‘knowledgeable’ has some implication of intelligence? Because, depending on context, yes it can. Or are you trying to say that LLMs take a lot of time and/or energy to reassemble their shredded data? To answer your question, yes, the pile of shredded paper contains knowledge, and its accessibility is irrelevant to the conversation.

            • Jinarched@lemmy.ca · ↑1 · 8 hours ago

              Your exchange makes me think of the Chinese room thought experiment.

              The person inside the room has instructions and a dictionary they use to translate Chinese symbols into English words. They never leave the room and never interact with anyone. They just translate single words.

              They don’t understand Chinese, but the output of the system (the room) gives the impression that there is thinking behind the process. If I remember correctly, it was an argument against the Turing test: the claim was that computers could become extremely good at constructing answers that seem to be backed by human consciousness/thinking.
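
              (A minimal sketch of that rulebook idea, in Python; the entries are invented for illustration. The function maps symbols to symbols by pure lookup, and nothing inside it understands either language.)

              ```python
              # Minimal "Chinese room" sketch: match the input symbols against a
              # rulebook and copy out the listed reply. Pure symbol shuffling,
              # no understanding anywhere. Entries are invented for illustration.
              RULEBOOK = {
                  "你好": "你好！",                    # a greeting gets a canned greeting back
                  "你会说中文吗？": "会，说得很好。",    # "Do you speak Chinese?" -> "Yes, very well."
              }

              def room(symbols: str) -> str:
                  # The "person in the room" just looks the symbols up.
                  return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

              print(room("你会说中文吗？"))  # convincing output, zero comprehension inside
              ```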

              • queermunist she/her@lemmy.ml · ↑1 · 4 hours ago

                Right, so the parking lot covered with shredded dictionaries needs a human mind, or else it’s just a bunch of trash.

                The human inside the Chinese room, or in the parking lot picking up and organizing the trash, or in a discussion with a chatbot, is still critical to the overall intelligence/knowledgeability of the system. It’s still needed for that spark; without it, it’s just trash.

              • wischi@programming.dev · ↑1 · 4 hours ago

                I think you are right. IMHO the room actually does speak/understand Chinese, even if the robot/human in the room does not.

                There are no neurons in your brain that “understand” English, yet you do. Intelligence is an emergent property. If you “zoom in” enough, everything is just the laws of physics, and those laws don’t understand English or Chinese.

                • queermunist she/her@lemmy.ml · ↑1 · 4 hours ago

                  If we carry the thought experiment forward, the parking lot requires a human to put in energy to make the whole system knowledgeable. For knowledgeability or intelligence to emerge, we still need a human involved in the process, whether it’s a Chinese room, a parking lot covered with shredded dictionaries, or chatbot productivity software.

                  We have not eliminated the human from the process, and until we do, we cannot say that the system is intelligent or knowledgeable.