A new report states that the US House of Representatives has banned its staff members from using Microsoft's Copilot AI assistant due to the risk of leaking House data "to non-House approved cloud services."
Until we either solve the problem of LLMs providing false information or the problem of people being too lazy to fact check their work, this is probably the correct course of action.
I can’t imagine using any LLM for anything factual. It’s useful for generating boilerplate and that’s basically it. Any time I try to get it to find errors in what I’ve written (either communication or code) it’s basically worthless.
My little brother was using GPT for homework. He asked it the probability of an extra Sunday in a leap year (52 weeks + 2 days) and it said 3/8. One of the possible outcomes it listed was Sunday, Sunday. I asked how two Sundays can come consecutively and it made up a whole bunch of nonsense. The answer is simple: 2/7. The sources it listed even had the correct answer.
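For anyone who wants to check the arithmetic: the two extra days are consecutive, so the sample space is just the 7 possible starting weekdays, and a pair like (Sunday, Sunday) can never occur. A short sketch:

```python
# A leap year is 52 full weeks plus 2 extra *consecutive* days.
# The pair of extra days can start on any of the 7 weekdays,
# giving 7 equally likely outcomes -- not 8.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

pairs = [(days[i], days[(i + 1) % 7]) for i in range(7)]
with_sunday = [p for p in pairs if "Sun" in p]

print(pairs)                              # 7 pairs of consecutive days
print(len(with_sunday), "/", len(pairs))  # 2 / 7
```

Only (Sat, Sun) and (Sun, Mon) contain a Sunday, hence 2/7.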
All it does is create answers that sound like they might be correct. It has no working cognition. People who ask questions like that expect reasoning about probability and days in a year; all it does is combine the two. It can't think about it.
I find it useful for quickly reformatting small tables and similar snippets for my reports. It's often far simpler and quicker to just drop the data in and say what to do than to write a short Python script.
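For comparison, this is roughly the kind of throwaway script the LLM replaces for that use case (a sketch; the whitespace-separated input format and the Markdown output are assumptions for illustration):

```python
# Sketch: turn a whitespace-separated table into a Markdown table.
raw = """name  score  runs
alice  91  3
bob    87  5"""

rows = [line.split() for line in raw.splitlines()]
header, data = rows[0], rows[1:]

md = ["| " + " | ".join(header) + " |",
      "|" + "---|" * len(header)]
md += ["| " + " | ".join(r) + " |" for r in data]
print("\n".join(md))
```

Even at a dozen lines, writing and debugging this for a one-off table is often slower than pasting the data into a chat window.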
Really? It spotted a missing push_back like 600 lines deep for me a few days ago. I’ve also had good success at getting it to spot missing semicolons that C++ compilers can’t pinpoint, because C++ is a stupid language.

You can thank all open source developers for that by supporting them.
Huh?
All LLMs are trained on open source code without any acknowledgment of, or compliance with, the licenses. So open source developers' hard work is the reason you can take advantage of it now. You can say thank you by supporting them.
Ah yes, I am aware. Gotta love open source :)
Were you under the impression that I said anything to the contrary?
No, just taking any opportunity to spread the word and support open source.
Imo human laziness is the issue. In every thread where a lot of people chime in about AI, so many talk about how it’s useless because it’s wrong sometimes. It’s basically like people who use Wikipedia but can’t be bothered to cross-reference… except lazier. They expect a machine to be flawless just because it sounds confident.
I think you’re missing the point. I don’t like Copilot/ChatGPT for important stuff because if I have to double-check their solutions, I barely gain any time. And because it’s correct more often than not, it will make me complacent over time (the professors who were patient enough to explain why we shouldn’t use Wikipedia as a primary source made the same point, which I thought made a lot of sense).
You’re going to need to fact-check any code you get online anyway, so why not have it hyper-specific to your current use case? If you’re a good developer, review doesn’t take nearly as long as manual implementation.
I very rarely grab code online, because I work in video games and it’s very hard to find good code for the things I struggle with: all the publicly available stuff is aimed at hobbyists, so it’s usually very basic or badly unoptimized.
Most of the time the stuff I can’t figure out myself isn’t even mentioned on hobbyist forums, because it isn’t needed for those projects. A recent example: asset management. For hobby projects you can usually get away with hard references to all of your assets, so it’s not even a thing.
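To illustrate the distinction being made: a hard reference loads every asset up front, while an asset-management layer resolves string IDs lazily and caches results. A minimal sketch (all names here are hypothetical, and real engines add streaming, refcounting, etc.):

```python
# Sketch: resolve asset IDs lazily through a registry, instead of
# holding hard references that force everything to load at startup.
class AssetRegistry:
    def __init__(self, loader):
        self._loader = loader  # callable: asset_id -> asset data
        self._cache = {}

    def get(self, asset_id):
        # Load on first request only; later requests hit the cache.
        if asset_id not in self._cache:
            self._cache[asset_id] = self._loader(asset_id)
        return self._cache[asset_id]

loads = []
registry = AssetRegistry(lambda aid: loads.append(aid) or f"<{aid}>")
registry.get("hero_mesh")
registry.get("hero_mesh")
print(loads)  # hero_mesh appears once: loaded on demand, then cached
```

The hobbyist shortcut is the opposite: every asset is a direct variable or import, which works fine until the project is too big to load everything at once.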
If what you want is difficult to find publicly, then that also means an LLM is going to be weak in that area as well
What you want is a “general AI” LLM, something capable of stringing together a solution from past, somewhat related solutions. We’re not there yet, so you’re basically asking it to do something beyond what it’s capable of, and it’s trying its best anyway.
Alternatively, you could try fine-tuning your own LLM, if you have access to some sort of large repository of non-public solutions.
So you’re reinventing the wheel every time? I’ve also worked in games, and we definitely used public resources whenever possible to save time and money. Asset management in particular has a lot of resources, unless you’re talking about truly huge things like MMO-scale streaming.