I’m not anti-internet at all, I’m all for the internet; I just think it’s best when it’s by and for individuals.
If I had my way, I’d ban corporations from operating anything online but digital storefronts. :P
He / They
This is a tough and complex issue, because tech companies using algorithmic curation and control mechanisms to influence kids and adults is a real, truly dangerous issue. But it’s getting torn at from all sides to force their own agendas.
Allowing large corporations to control and influence our social interactions is a hugely dangerous precedent. Apple and Google and huge telcos may be involved in delivering your text messages, but they don’t curate or moderate them, nor do they send you texts from other people based on how they want you to feel about an issue, or to sell you products. On social media, companies do.
But you’ve got right-wingers clamoring to strip companies of liability protections for user-generated content, which does not address the issue, and is really about allowing the government to dictate what content is politically acceptable (because LGBTQ+ content is harmful /s and they want companies to censor it).
And you’ve got neolibs and some extremely misguided progressives pushing for sites that allow UGC (which is by definition all social media) to verify their users’ ages via ID checks (which of course also treats any adult without an accepted form of ID as a child). That massively benefits large companies who can afford the security infra to do those checks and store that data, kills small and medium platforms, creates name-and-face tracking of people’s online activities, and legally mandates that we turn over even more personal data to corporations…
…and still doesn’t address the issue of corporations exerting influence algorithmically.
tl;dr the US is a corporatist hellscape where 90% of politicians serve corporations, either willingly or because they’re trivially manipulated into it.
PS: KOSA just advanced out of committee.
massive amounts of digital pornography and pictures of cats, the landfills have millions of Styrofoam cups and plastic spoons, and someone will have to pick through that mess and decide what mattered and what didn’t.
I have bad news for you…
The Hanging Dumpsters of Babble-on
“The internet is the blue ‘e’ swirl thing on my computer’s home screen.”
Speaking as an infosec professional, security monitoring software should be targeted at threats, not at the user. We want to know the state of the laptop as it relates to the safety of the data on that machine. We don’t, and in healthy workplaces can’t, monitor what an employee is doing unless it behaviorally conforms to a threat.
Yes, if a user repeatedly gets virus detections around 9pm, we can infer what’s going on, but we aren’t tracking their websites visited, because the AUP is structured around impacts/outcomes, not actions alone.
As an example, we don’t care if you run a python exploit, we care if you run it against a machine you are not authorized to access (i.e. violating the CFAA). So we don’t scan your files against exploitdb; we watch for unusual network traffic that conforms to known exploits, and capture that request information.
So if you try to pentest pornhub, we’ll know. But if you just visit it in Firefox, we won’t.
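The detection approach described above (matching captured traffic against known exploit patterns rather than scanning the user's files or browsing) can be sketched in miniature. This is a toy illustration, not real IDS code; the two signatures and the function name are invented for the example, and real rulesets (e.g. Suricata/Snort) are vastly more sophisticated:

```python
import re

# Hypothetical signatures for known exploit traffic patterns.
EXPLOIT_SIGNATURES = {
    "shellshock": re.compile(r"\(\)\s*\{\s*:;\s*\}\s*;"),
    "sqli-union": re.compile(r"union\s+select", re.IGNORECASE),
}

def inspect_request(payload: str) -> list[str]:
    """Flag a captured request only if it matches known exploit
    behavior -- which *site* it went to is irrelevant."""
    return [name for name, sig in EXPLOIT_SIGNATURES.items()
            if sig.search(payload)]

# An ordinary page visit trips nothing:
print(inspect_request("GET /videos HTTP/1.1"))
# An attack pattern does:
print(inspect_request("id=1' UNION SELECT password FROM users--"))
```

The point is structural: the match condition encodes an outcome (exploit traffic), so ordinary browsing simply never enters the dataset.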
We’re not prison guards, like these schools apparently think they are, we’re town guards.
Schools literally, legally, are not companies.
School is not work. Work is compensated. Work is voluntary. School is neither.
Sure, it’s possible to make AVs into basically drone swarms with perfect coordination, but unless you also kick all human-controlled cars off the road, it’s not going to work. Drone swarms don’t have human-controlled drones, or even drone swarms from other manufacturers, flying through the middle of them; if they did, they’d be crashing into each other all the time.
IA is still operating under the misunderstanding that the US is not just several large corporations in a trench coat.
the purpose of my car is to get me from place to place
No, that was the purpose for you, the one that made you choose to buy it. Someone else could have bought a car to live in, for example. The purpose of a tool is just to be a tool. A hammer’s purpose isn’t just to hit nails; it’s to be a heavy thing you can use as needed. You could hit a person with it, or straighten out dents in a metal sheet, or destroy a hard drive. I think you’re conflating the intended use of something with its purpose for existing, and it’s leading you to assert that the purpose of LLMs is one specific use only.
An LLM is never going to be a fact-retrieval engine, but it has plenty of legitimate uses: generating creative text is very useful. Just because OpenAI is selling their creative-text engine under false pretenses doesn’t invalidate the technology itself.
I think we can all agree that it did a thing they didn’t want it to do, and that an LLM by itself may not be the correct tool for the job.
Sure, 100% they are using/selling the wrong tool for the job, but the tool is not malfunctioning.
Libertarians and ancaps are only anarchist in the most facile sense; they’re not actually anti-authority or anti-hierarchy, they’re just anti authority-over-themselves. They have no issue mandating actions for others. Rules for thee but not for me, rules that bind only the outgroup, is the hallmark of right-wing ideologies, and describes ancaps and libertarians to a T.
The purpose of an LLM, at a fundamental level, is to approximate text it was trained on. If it was trained on gibberish, outputting gibberish wouldn’t be a bug. If it wasn’t, outputting gibberish would be indicative of a bug.
I can still say the car is malfunctioning.
A better analogy would be selling someone a diesel car when they wanted an electric vehicle, and them being upset that it needs refueling with diesel. The car isn’t malfunctioning in that case; the salesman was.
Except Lvxferre is actually correct; LLMs are not capable of determining what is useful or not useful, nor can they ever be, as a fundamental consequence of how their models work: they are simply strings of weighted tokens/numbers. The LLM does not “know” anything; it is approximating text similar to what it was trained on.
It would be like training a parrot and then being upset that it doesn’t understand what the words mean when you ask it questions and it just gives you back words it was trained on.
The only way to ensure they produce only useful output is to screen their answers against a known-good database of information, at which point you don’t need the AI model anyways.
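The “strings of weighted tokens” point can be made concrete with a toy bigram model. This is a deliberate oversimplification of a real LLM (invented here purely for illustration), but the principle is the same: output is sampled from weights learned over training text, with no representation of truth or meaning anywhere in the model.

```python
import random
from collections import defaultdict, Counter

def train(text: str):
    """Count which word follows which. The 'weights' are just
    frequencies from the training text -- nothing models truth."""
    weights = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        weights[a][b] += 1
    return weights

def generate(weights, start: str, n: int = 5) -> str:
    """Sample each next word in proportion to its learned weight."""
    out = [start]
    for _ in range(n):
        nxt = weights.get(out[-1])
        if not nxt:
            break
        choices, counts = zip(*nxt.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

model = train("the cat sat on the mat the cat ran")
print(generate(model, "the"))  # output varies, but is always stitched
                               # from word pairs seen in training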
A software bug is not about what was intended at a design level; it’s about what was intended at the developer level. If the program doesn’t do what the developer intended when they wrote the code, that’s a bug. If the developer coded the program to do something different than the manager requested, that’s not a bug in the software, that’s a management issue.
Right now LLMs are doing exactly what they’re being coded to do. The disconnect is the companies selling them to customers as something other than what they are coding them to do. And they’re doing it because the company heads don’t want to admit what their actual limitations are.
There is more than one right-wing ideology.
“Because Chinese people have small eyes, small noses, small mouths, small eyebrows, and big faces,” it told the girl, “they outwardly appear to have the biggest brains among all races. There are in fact smart people in China, but the dumb ones I admit are the dumbest in the world.”
This feels even more racist than the “average” internet response. Did they solely train this model on *chan boards?
This is a false narrative that stock traders push. The fiduciary duty is just one of several that executives have, and does not outweigh the duty to the company’s health or to employees. Obviously shareholders will try to argue otherwise or even sue to get their way, because they only care about their own interests, but they won’t prevail in most cases if there was a legitimate business interest and justification for the actions.
Yes, but that is not the entirety or even majority of the problem with algorithmic feed curation by corporations. Reducing visibility of those dumb challenges is one of many benefits.
I am generally very skeptical of lawsuits making social media and other Internet companies liable for their users’ content, because that’s usually a route to censor whatever the government deems “harmful”, but I think this case actually makes perfect sense by attacking the algorithmic “curation” that they do. Imo social media should go back to being a purely chronological feed, curated by the users themselves, and cut corporate influence out of the equation.
I never said afford to protect it, just to comply with the requirements for doing the checks and storing it. Passing SOC2 or PCI-DSS (if you’re doing verification via payment card) or whatever certification they decide to create to attest to this stuff, doesn’t make you more secure in reality, but if you can’t afford to do those attestations in the first place, you’re out of the game.
That is true, but it’s not the whole picture. KOSA applies a Duty of Care requirement for all sites, whether they intend to have adult (or “harmful”) content or not.
So your local daycare’s website that has a comment section could be (under the Senate version that has no business size limits) taken to court if someone posts something “harmful”. That’s not something they or other small sites can afford, so those sites will either remove all UGC or shutter, rather than face that legal liability.
The real goal of KOSA (and the reason it’s being backed by Xitter, Snap, and Microsoft) is to kill off smaller platforms entirely, to force everyone into their ecosystems. And they’re willing to go along with the right-wing censorship nuts to do it. This is a move by big-tech in partnership with the Right, because totalitarianism is a political monopoly, and companies love monopolies.