ThefuzzyFurryComrade@pawb.social to Fuck AI@lemmy.world · 2 months ago
On AI Reliability (pawb.social)
𝕸𝖔𝖘𝖘@infosec.pub · 2 months ago
Unless something improved, they're wrong more than 60% of the time, but at least they're confident.
henfredemars@infosec.pub · 2 months ago
This is an excellent exploit of the human mind. An AI being convincing and an AI being correct are two very different things.
davidgro@lemmy.world · 2 months ago
And they are very specifically optimized to be convincing.
jsomae@lemmy.ml · 2 months ago
This is why LLMs should only be employed in cases where a 60% error rate is acceptable. In other words, almost none of the places where people are currently being hyped to use them.
friend_of_satan@lemmy.world · 2 months ago
Haha, yeah, I was going to say 40% is way more impressive than the results I get.