Something from work I noticed today that will unfortunately become the norm.
If you gibbity an answer, you’re regarded as smarter than someone who searched it. The funny part is I’ve actually given it a try, and it’s wrong so often that it actually wastes my time. And, if it’s right and I don’t agree, I can say it’s wrong, and with a 98% confidence level it changes the answer.
Side note, this is exactly why the fascist tech bros are pushing hard to shove this tech down our throats.
I feel so damn old having to look up “gibbity.” Ugh.
I totally agree though, LLMs are consistently wrong when I ask a work-related question. I work in a field that isn’t particularly niche and use software that millions of people use every day, and LLMs will straight up invent entire menus rather than telling me they don’t know how to do something. Huge waste of time, especially when asking a coworker gets an accurate answer right away.
What definition did you find? I could only find it meaning either bullshit or acting foolish, neither of which makes sense in OP’s post. Based on context I’d assume it’s basically “to google” using AI, but I can’t find anything to back that up.
I read it as them treating the initialism “GPT” as an acronym, resulting in “gibbity”. I once heard a YouTuber refer to ChatGPT as “Mr. jip-uh-tee” and I think this is the same idea. I think your definition of “researching using AI” is right, they’re just saying it in a joking way.
That actually makes sense, thanks!
I initially thought OP meant “to google” too, but that didn’t track for me. Ultimately bullshitting makes sense in this context (thanks, Urban Dictionary), since they compare someone who gibbities to someone who searches.
Either way it’s ridiculous slang to me, but I’m clearly the wrong generation for it, so I’m glad to know I wasn’t the only one confused.
I’m clearly the wrong generation too haha, because this still makes no sense:
> If you ~~gibbity~~ bullshit an answer, you’re regarded as smarter than someone who searched it. The funny part is I’ve actually given ~~it~~ bullshitting a try, and ~~it’s~~ bullshitting is wrong so often that ~~it~~ bullshitting actually wastes my time. And, if ~~it’s~~ bullshit is right and I don’t agree, I can say it’s wrong, and with a 98% confidence level it changes the answer.

I used to be with ‘it’, but then they changed what ‘it’ was. Now what I’m with isn’t ‘it’ anymore, and what’s ‘it’ seems weird and scary.
I find in my line of work that if you ask it “101”-grade questions, the kind a beginner who knows next to nothing would ask, you’ll get acceptable (not good, but acceptable) answers from LLMbeciles. As soon as it requires any degree of serious technical know-how, or as soon as context starts to matter, the utility plummets dramatically from “acceptable” to “what the actual fuck are you thinking using this!?”
The problem is that asking those “101”-grade questions of your coworkers gets you acceptable or better answers more quickly, and often, as a bonus, examples straight out of your own workspace. So LLMbeciles are kind of worthless for beginner questions too.
100%. It’s a tool that will be used to control people’s perceptions and to rewrite history to the tech bros’ liking.
I’m curious as to what you’re asking it that it would waste your time.
I’m fully convinced that if wading through SEO sludge (which is mostly bloated AI shit anyway) saves you time over asking the LLM, then you’re just asking it bad questions.
Can you give me some examples of questions it answered which were wrong? I want to test it myself.
It was specifically a part # question regarding something at work that is clearly published on the mfg website.
Search does suck now (again, by tech bro design, to force us all to use their shitty AI to find anything), but it’s a lot better using DDG or Kagi, for me anyway. No Google here.