Lawsuits: OpenAI didn’t report ChatGPT user to cops to protect Altman, IPO.

  • Bane_Killgrind@lemmy.dbzer0.com · 16 days ago

    Absolutely not.

    Leaders rejected the safety team’s urgings and declined to report the user to law enforcement.

    OpenAI will “find ways to prevent tragedies like this in the future” and to continue “working with all levels of government to help ensure something like this never happens again,” Altman said.

    They already have a fucking way to prevent this, and they opted not to use it, for PR reasons. They are complicit: they provided a service that aided the planning, then chose to keep providing it and allowed further planning.

    If you post a message to a website, that message is not private from the website regardless of the method they use to receive it. They have the moral responsibility to respond to threats to life regardless of the legal responsibility they are arguing they don’t have.

    If I put a cork board up in front of my house and someone pins threats to it, then once I notice them, it's my responsibility to act on that.

      • new_world_odor@lemmy.world · 16 days ago

        It's really not. It's more like gathering a crowd of a few billion people, asking them a question, hearing the loudest answer, and assuming it's correct.

            • Skullgrid@lemmy.world · 16 days ago

              There is a huge difference between hosting an archive of conversations that took place, and providing a place where you can participate in conversations.

              This is the equivalent of looking at the archives of debates transcribed in newspapers. When you do that, you are not participating in a debate; you are reading the transcript of one.

              • Bane_Killgrind@lemmy.dbzer0.com · 16 days ago

                The model responds based on conversations it's trained on? It's a bespoke response. It's not simply showing a browsable list of responses; it's generating particular ones.

                It's literally feeding these mentally ill people responses that a human, given the same context, would be legally culpable for.