I recently set up an LLM to run locally on my desktop. Now that the novelty of setting it up and playing with different settings has worn off, I’m struggling to come up with actual uses for it. What do you use it for when not doing work stuff?
Drop-in replacement for Stack Overflow, letting ChatGPT modify my R code to do simple things, rephrasing text, and extracting equations from PDFs as LaTeX code. I also used Stable Diffusion to make some absurd Christmas cards last year.
I used openai/whisper to transcribe several thousand .wav files full of human speech (running locally). Much faster than trying to listen to them myself. It wasn’t perfect, but the error rate was within acceptable limits.
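For anyone wanting to do the same, a minimal sketch of the batch job (assuming `pip install openai-whisper`; the folder name and model size are placeholders, pick whatever suits your hardware):

```python
import whisper
from pathlib import Path

# Load a local model once; "base" trades accuracy for speed.
model = whisper.load_model("base")

for wav in sorted(Path("recordings").glob("*.wav")):
    # transcribe() runs entirely on your machine; no API calls.
    result = model.transcribe(str(wav))
    wav.with_suffix(".txt").write_text(result["text"])
    print(f"done: {wav.name}")
```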
I used ChatGPT this morning to create a Firefox extension for my favorite website (to let me speed up audio playback as desired). Just a few minutes’ back-and-forth and it works perfectly. If you’ve got a favorite site with a UI you’ve always wanted slightly tweaked, you could try making a browser extension to do that!
I feed it terms of service, service agreements, etc. and have it simplify and summarize them so I can get a general idea of what’s in them without ten minutes of reading.
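A rough sketch of one way to wire that up against a locally hosted model (assuming an Ollama server on its default port; the model name, file name, and prompt are just examples):

```python
import json
import urllib.request

# Hypothetical: the agreement saved as plain text.
tos_text = open("terms_of_service.txt").read()

payload = {
    "model": "llama3",  # whatever model you have pulled locally
    "prompt": f"Summarize the key obligations, data-sharing terms, and red flags in:\n\n{tos_text}",
    "stream": False,
}

# Ollama's local REST endpoint; nothing leaves your machine.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```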
I wonder how it does against ToS;DR.
You can use it for anything that requires a little multi-step logical thought (anything single-fact-based is a straight web search with your search engine of choice).
For example:
- Rewrite your CV.
- Reply to a letter.
- Write some code for a particular task.
- Debug a computer problem.
- Form a legal analysis of a situation ( https://www.legalcheek.com/2024/09/over-40-of-lawyers-now-use-ai-to-accelerate-their-work/ ).
- Have a conversation about a topic to help you understand yourself or a thing better (“How do I build a Ceph storage cluster in Kubernetes on Talos Linux with a Raspberry Pi, a mini PC …”). Then you can ask about alternative solutions or whatever.
- Come up with a business idea and talk it through with some‘one’: pricing, etc.
- Summarize text.
At the moment they don’t always spit out correct answers to factual questions; they’d rather give crap than say they don’t know (without anthropomorphising). When I asked Claude for the equivalent sections in another jurisdiction’s legislation, I got crap back on several occasions rather than the correct answer, but the false ‘facts’ were easy to check. The analysis itself, however, was correct. ChatGPT gave the correct answer to the original question, and I’ve had it the other way around too. So for the moment, pair them with Google or something similar whenever you ask for factual output.
They’re excellent tools for analysing situations and providing feedback, and the code they write is pretty good.
Hopefully they never get trained on social media.
Absolutely nothing, because they all give fucking useless results. They hallucinate, are confidently wrong, and aren’t even grammatically competent (depending on the model). Not even good for a draft, because I’d have to completely rewrite it anyway.
LLMs are only as good as the guys training them (who are mostly morons) and the raw data they’re trained on (which is mostly unaudited random shit).
And that’s just regular language. Coding? Hah!
Me: Generate some code to [do a thing].
LLM: [Gives me code]
Me: [Some part] didn’t work.
LLM: Try [this] instead.
Me: That didn’t work either.
LLM: Try [the first thing] again.
Me: … that still doesn’t work…
LLM: Oh, sorry. Try [the second thing again].
[…and the loop continues forever.]
One time I found out about a built-in function I didn’t know about (in LLM-generated code that didn’t work), read the manual for it, and rewrote the code from scratch to get it working. Literally the only useful thing it ever gave me was a single word (which it probably found on Super User or Stack Exchange in the first place).
Skill issue. You have to know a bit about the topic and prompt it right.
It’s for boilerplate, where you can scan it for errors with your own dev ability.
An interesting theory, except I know exactly how to do everything I’ve ever asked an LLM about. I would never trust one of these things to generate useful copy/code, I just wanted to see what it could do. It’s been shit 100% of the time. Never even gotten a useful function out of it.
Also “skill issue” is a lazy response. Try reading the post before you reply next time.
I did read it.
You can create great, very usable boilerplate with even GPT-3.5 …
You have a skill issue with your prompts.
If I can’t use the LLM by prompting it the same way I’d prompt one of my colleagues, then it’s not a skill issue; it’s a shitty LLM. I don’t care whether the problem is the input embedder, the training data, or the guy who didn’t bother to build a model that doesn’t just spit out bullshit.
If an employee gave me this quality, I’d get rid of them. Why would I waste my time on a shit coder, artificial or otherwise?
Sorry, but holding spicy autocomplete to the same standard you’d hold a human coworker to is probably the beginning of your issue. It’s clear your prompting is not working.
Well, considering the speed of your responses and your obsession with making excuses for shitty software, I’m guessing you’re an LLM, so I’m gonna start ignoring you too. Good luck surviving the hype phase.
I use GPT-4 for checking physics problems quickly. It’s much better than education forums nowadays, where you have to sign up and probably pay a subscription just to be able to view questions.
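If you’d rather script the checks than paste into a chat window, a minimal sketch against the API (assuming the official `openai` Python client and an `OPENAI_API_KEY` in your environment; the problem and answer are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

problem = "A 2 kg block slides down a frictionless 30° incline. What is its acceleration?"
my_answer = "a = g * sin(30°) ≈ 4.9 m/s^2"

# Ask the model to act as a checker rather than solve from scratch.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are checking a student's physics work. Point out any errors."},
        {"role": "user", "content": f"Problem: {problem}\nMy answer: {my_answer}"},
    ],
)
print(response.choices[0].message.content)
```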
Do you pay for the GPT-4 API or use Copilot?
I use Copilot in a private Firefox container.