

A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.




Yeah, I think the em-dashes are alright. The real issue is all the misinformation in the text, up to the outright bad advice regarding backups. And security. If anyone follows this tutorial, they’re bound to get burned. Or more realistically, they do step 1 and then get stuck because step 2 is entirely missing.
I’d say the chances this is a person from Japan are slim to none. It’s the AI’s persona roleplaying as an anime character.


Cost? Just do away with your bills and do it on a $24 Vulture VPS 🥹😂


Hmmmh. I think you better find a way to deal with it, mentally. That circus isn’t going to go away.
I wish people would pay more attention. I think it’s a bit sad an article like this always gets dozens of upvotes anyway.


This reads like it’s written by OpenClaw?!
All open-source. […] You built this. Not a vendor. Not a consultant. Not a managed service provider who will send you an invoice next month for the privilege of using what was always supposed to be yours. You opened a terminal, followed a guide, made decisions, fixed the things that broke, and kept going.
Aha?
4 Part Series
Ah, a 4-part series in 5 parts with one part missing?
zero-trust through eight independent layers
I don’t think the layers build on top of each other. That’s just random things all shoehorned in. One firewall is enough to block 100% of the packets; you don’t really need three doing the very same thing, only to delegate it all to Cloudflare anyway.
OpenClaw
And now you got zero security layers. And I bet your API bill will be way more than 3-5 inference runs per day with that.
Step 1: Apache Guacamole
What do you need RDP for?
Step 9: AES-256 Encrypted Backup
Please(!) don’t do “backups” like that. Learn how Docker works and what makes sense in that environment, and how to back up your databases. Keep the backups somewhere that isn’t just the same hard disk. And do test them. You should really consider following the 3-2-1 rule if this is your company’s data or you rely on it as a freelancer.
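A minimal sketch of what a saner routine could look like (container name, database user and paths are all made up here, adjust to your setup): dump the database out of the container instead of copying live files, then sync the dump to a second disk or machine.

```python
#!/usr/bin/env python3
# Sketch only: dump a Postgres database running in Docker, then copy the dump
# to a second location. Names and paths are placeholders. Encryption and a
# proper offsite copy (the "1" in 3-2-1) are still up to you.
import subprocess
from datetime import date

CONTAINER = "my-postgres"   # hypothetical container name
DB_USER = "app"
DB_NAME = "appdb"
DUMP = f"/srv/backups/{DB_NAME}-{date.today()}.sql"

# Consistent dump instead of copying live database files off the disk
with open(DUMP, "w") as f:
    subprocess.run(
        ["docker", "exec", CONTAINER, "pg_dump", "-U", DB_USER, DB_NAME],
        stdout=f, check=True,
    )

# Copy the dump to a second disk or another host (rsync over SSH here)
subprocess.run(["rsync", "-a", DUMP, "backup-host:/backups/"], check=True)
```

And then actually restore one of those dumps into a scratch container every now and then. That’s the testing part most people skip.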


I think it’s fascinating tech. And fun to play with. But I think a lot of the everyday use cases are more of a gimmick. In the good old times we could look up facts on Wikipedia. Or google why the yellow light on the router started flashing and we’d find an answer on Reddit. Now we ask ChatGPT, but that alone doesn’t increase my quality of life. I’d rather have it sort the mess on my 8TB HDD, find a cheaper insurance company for the car, do my stupid paperwork at home… And maybe I’d like an AI robot to do the chores for me. Laundry, dishes… So I can relax and do other things. But I feel it’s still early days for the really useful tasks. AI is more useful for replacing callcenter workers, assisting programmers… And unfortunately it’s bad for the environment and makes computer hardware unaffordable.


Can’t you somehow convert the virtual hard disks of your VMs from VHD or whatever it is to qcow2 and start them on the new hypervisor? I mean that’s pretty much the abstraction virtualization is made for. I’ve never done it for Windows, though. I believe the “qemu-img” tool (from qemu-utils) can convert disk images. It’ll obviously need quite some temporary storage. And the VM configs / networking need to be recreated on Proxmox.
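Something along these lines, I guess (filenames, VM ID and storage name are made up; for Hyper-V disks the qemu-img source format is “vpc” for old .vhd files and “vhdx” for .vhdx, and I haven’t verified this with a Windows guest):

```python
#!/usr/bin/env python3
# Rough sketch: convert a VHD/VHDX disk image to qcow2 and attach it to an
# already created (empty) Proxmox VM. All names and paths are placeholders.
import subprocess

SRC = "/mnt/export/winvm.vhdx"   # hypothetical exported disk
DST = "/var/tmp/winvm.qcow2"     # needs enough temporary storage
VMID = "101"                     # VM created beforehand in Proxmox
STORAGE = "local-lvm"

# Convert the disk image (use "-f vpc" instead for old .vhd files)
subprocess.run(["qemu-img", "convert", "-f", "vhdx", "-O", "qcow2", SRC, DST],
               check=True)

# Import the converted disk into the Proxmox VM's storage
subprocess.run(["qm", "importdisk", VMID, DST, STORAGE], check=True)
```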


I think I know the solution. Retrofit PieFed with live group chat channels akin to Discord text channels(?) (done with ActivityPub of course). Make the Wiki pages federate. And then eat our own dogfood.
I think that’d be a good long-term goal. Unfortunately it’s a lot of work. I don’t think group (live) chat is specified in AP. We probably want encryption and that’s a hassle. And there are some hurdles in implementing Wiki federation as well. (And we do like 5 bazillion other things in PieFed, so I’m not sure about the prioritization of something like this.)


I’m not really an expert on all the details, so I might be wrong here. I don’t know the percentages of how much is done in pretraining and how much in tuning. But from what I know, the neural pathways are established in the pretraining phase. Reportedly that’s also where the model learns about the concepts it internalises… Where it gets its world knowledge. So it seems to me that a complicated concept like a feeling, or an experience, would get established in pretraining already. RLHF is more about what it does with it. But the lines between RLHF, fine-tuning and pretraining are a bit blurry anyway. If I had to guess, I’d say qualia is more likely to be laid down early on, while there’s a lot of change happening to the neural pathways, so in the pretraining. I’m basing that on my belief that it’ll be a complex concept… But ultimately there’s no good way to tell, because we don’t know what it’d look like for AI.
Furthermore, I’ve had a bit of a look at what weird use cases people have for AI. And I read about the community efforts to make models usable for NSFW stuff. These people teach new concepts to AI models after the fact. Like how human anatomy looks underneath the clothes. The physics of those parts of the body. And it turns out it’s a major hassle. It might degrade other things. It might just work for something close to what it’s seen, so obviously the AI didn’t understand the new concept properly… These people tend to fail at more general models; obviously it’s hard for AI to learn more than one new concept at a later stage… All these things lead me to believe later stages of training are a bad time for AI to learn entirely new concepts. It seems to require the groundwork to be there since pretraining. That’s probably why we can fine-tune it to prefer a certain style, like Van Gogh drawings. Or a certain way to speak, like in RLHF. But not a complicated concept like anatomy. Because the Van Gogh drawings were there in the pretraining dataset already. And they cleaned the nudes out of it. So I’d assume another complicated concept like qualia also needs to come early on. Or it won’t happen later.
Edit: YT video about emotion in LLMs and current research: https://m.youtube.com/watch?v=j9LoyiUlv9I


there are more intellectual forms of suffering
Sure. But I’m pretty positive these are emergent things. There’s no reason to believe they exist for alien creatures unless they somehow make sense in their environment. And a lot of them require remembering, which LLMs can’t do due to the lack of a state of mind. It doesn’t remember feeling bad or good in a similar situation before, because it doesn’t remember the previous inference or gradient-descent run.
LLMs have “biological needs”, in a sense. They need not to be unplugged.
I think we’re still fully embedded in anthropomorphism territory with that. And now we’re confusing two entities. OpenAI for example, as a company, has a need for us to use their product. Not unplug it. Their motivation and goals don’t necessarily translate to their product, though. It’s similar to other machines. Samsung has a vested interest to sell TVs to me. My TV set is completely indifferent towards me watching the evening news. I don’t let my car run 24/7 while waiting for me in the garage. Just because it was designed to run and get me to places. And my car also isn’t “thirsty” for gasoline. We know the fuel indicator lighting up is a fairly simplistic process.
Does ChatGPT have the emotions of a child groomer?
Well… We happen to know ChatGPT’s intrinsic motivation and ultimate goal in “life”. Because we designed it. The goal isn’t to strive for world domination, or harm people, or survive… It’s way more straightforward. Its goal is to predict the next token in a way that the output resembles human text (from the datasets) as closely as possible. That’s the one goal it has. It’ll mimic all kinds of conversations, sci-fi story tropes from movies, etc. Because that’s directly what we made it “want” to do. And we did not give it other loss functions. While on the other hand a human could very well be motivated to manipulate other people for their own personal gain. Or because something is seriously wrong with them.
biological
And an LLM is not a biological creature. We do have needs like keeping the system running. Otherwise our brain tissue starts to die. We need to run 24/7 and keep that up. An LLM is not subject to that?! It’s perfectly able to pause for 3 weeks and not produce any tokens. The weights will be safely stored on the HDD. So it doesn’t need our motivation to do all of these extra things to ensure continued operation. It also has no influence or feedback loop on its electricity supply. It can’t affect its descendants, because those are designed by scientists in a lab. There’s no evolutionary feedback loop. So how would it even incorporate all these properties that are due to evolution and sustain a species? It has zero incentive to do so, and no way of directly learning to care about them. So it might very well be completely indifferent to it.
But it is something like the p-zombie. It has learned to tell stories about human life. And it’s good at it. We know for a fact its highest goal in existence is to tell stories, because we implemented that very setup and loss function. It doesn’t have access to biology, evolution… The underlying processes that made animals feel and maybe experience. So the only sensible conclusion is, it does exactly that: bullshit us and tell a nice story. There’s no reason to conclude it cares for its existence more than a toaster. Or say a thermostat with machine learning in it. That’s just anthropomorphism.
And I believe there’s a way to tell. Go ahead and ask an LLM 200 times to give you the definition of an alpaca. Then do it 200 times with a human. And observe how often each of them has some other process going on inside. The human will occasionally tell you they’re hungry and want to eat before having a debate. Or tell you they’re tired from work and now isn’t the time for it. ChatGPT will give you 200 definitions of an alpaca and never tell you it’s thirsty or needs electricity. These mental states aren’t there, because it doesn’t have those feelings. And it doesn’t experience them either.


Why do we experience things? Like, what’s the point? […] it’s a byproduct of thinking in general.
I think so, too. It’s a byproduct. And we’re not even sure what it means, not even for humans. And there’s weird quirks in it. When they look at the brain, the thought and decision processes don’t really align with how we perceive them internally.
There’s an obvious reason, though. We developed advanced model-building organs because that gave us an evolutionary advantage. And there’s a good reason for animals to have (sometimes strong) urges. They need to procreate. Not get eaten by a bear and not fall off a cliff. Some animals (like us) live in groups. So we get things like empathy as well, because it’s advantageous for us. Some things have been built in for a long time already, some are super important, like eating and drinking, and not randomly dying because you try stupid things. So it’s embedded deep down inside of us. We don’t need to reason about whether it’s time to eat something. There’s a much more primal instinct in you that makes you want to eat. You don’t really need to waste higher cognitive functions on it. Same goes for suffering. You better avoid that; it’s a disadvantage almost 100% of the time. That’s why nature gave you a shortcut to perceive it in a very direct way. No matter whether you paid attention, or had the capacity for a long, elaborate, logical reasoning process.
That’s why we have these things. And what they’re good for. I don’t think anyone knows why it feels the way it does. But it’s there nevertheless.
They’re [LLMs] more like us than they’re like a calculator.
Now tell me: why does an LLM need a feeling of thirst or hunger if it doesn’t have a mouth? What would ChatGPT need suffering and a feeling of bodily harm for, if it doesn’t have a body, can’t be eaten by a bear or fall off a cliff? Or need to be afraid of hitting its thumb with the hammer? It just can’t. An LLM is 99% like a calculator. It has the same interface, buttons and a screen. If we’re speaking of computers, it even lives inside the same body as a calculator. And it’s maybe 0.1% like an animal?!
If it developed a sense of thirst, or an experience of pain, just from reading human text, that’d nicely fit the p-zombie situation.
It’s the right thing to do.
Yeah, I’m not sure about that. The most you do is muddy the waters with a term that used to have a meaning. I see the parallel; there’s some overlap between being a vegan for environmental reasons and declining AI for environmental reasons. Yet they’re not the same. I think the whole suffering debate is a bit unfounded, but it’d be the same thing if true… And I do other things as well. I order “green” electricity, buy used products, try not to produce a lot of waste. I’m nice to people because it’s the right thing to do. But we can’t call all of that “veganism”. That just garbles the meaning of the word and makes it mean anything and nothing.


I’m not sure if doing gradient-descent maths on numbers constitutes experience. But yeah. That’s the part of the process where it gets run repeatedly and modified.
I think it boils down to how complex these entities are in the first place, as I think consciousness / the ability to experience things / intelligence is an emergent thing that happens with scale.
But we’re getting there. Maybe?! Scientists have tried to reproduce neural networks (from nature) for decades. The first simulations started with a worm with about 300 neurons. Then a fruit fly, and I think by now we’re at parts of a mouse brain. So I’m positive we’ll get to a point where we need an answer to that very question, some time in the future, when we get the technology to do calculations at a similar scale.
As of now, I think we tend to anthropomorphize AI, as we do with everything. We’re built to see faces, assume intent, or human qualities in things. It’s the same thing when watching a Mickey Mouse movie and attributing character traits to an animation.
But in reality we don’t really have any reason to believe the ability to experience things is inside of LLMs. There’s just no indication of it being there. We can clearly tell this is the case for animals, humans… But with AI there is no indication whatsoever. Sure, hypothetically, I can’t rule it out. I’m just saying I think “what quacks like a duck…” is an equally good explanation at this point. Whether you want to be very cautious anyway is, of course, another question.
And it’d be a big surprise to me if LLMs had those properties, given their fairly simple/limited way of working compared to a living being. And are they even motivated to develop anything like pain or suffering? That’s something evolution gave us to get along in the world. We wouldn’t necessarily assume an LLM will do the same thing, as it’s not part of the same evolution, not part of the world in the same way. And it doesn’t interact the same way with the world. So I think it’d be somewhat of a miracle if it happened to develop the same qualities we have, since it’s completely unalike. AI more or less predicts tokens in a latent space. And it has a loss function to adapt during training. But that’s just fundamentally so very different from a living being which has goals, procreates, has several senses and is directly embedded into its environment. I really don’t see why those entirely different things would happen to end up with the same traits. It’s likely just our anthropomorphism. And in history, this illusion / simple model has always served us well with animals and fellow human beings. And failed us with natural phenomena and machines. So I have a hunch it might be the most likely explanation here as well.
Ultimately, I think the entire argument is a bit of a sideshow. There’s other downsides of AI that have a severe impact on society and actual human beings. And we’re fairly sure humans are conscious. So preventing harm to humans is a good case against AI as well. And that debate isn’t a hypothetical. So we might just use that as a reason to be careful with AI.


Now hopefully I’ve convinced you that I have a functional grasp of both psychology and AI science
The ANN models I played with in My AI class
Yeah, I’m not sure if you’re aware of the severe limitations. LLMs aren’t generic ANNs. They’re a very specific, restricted subset of them. We’ve hardcoded the attention heads and all the things they’re made of. The networks in them are strictly feed-forward so the learning is doable on current-day supercomputers… So no feedback loops. In fact no loops at all. And no feedback either.
There’s just nothing in them like in a brain. When an animal experiences sensations / stimulation / qualia, there’s this whole process going on. And it changes the animal. The handling of qualia is entirely different in LLMs. It doesn’t do anything to them. They stay exactly the same, as we haven’t figured out in-place learning at that scale yet.
We do not understand that mathematical process well enough
And it’s not really a question of whether we understand that mathematical process or not. It’s just entirely absent. So there’s nothing there to understand, since LLMs aren’t that kind of ANN. The part where the neurons (/weights) store new information and adapt in response to stimulation isn’t there. And we know that for a fact, since we designed them. And for me, the ability to learn, or change in some way, or be affected by stimuli would be a minimum requirement.
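To illustrate the point (just a toy with random numbers, nothing like a real transformer internally): inference is a pure, feed-forward function of frozen weights, so running it leaves no trace in the model.

```python
import numpy as np

# Toy sketch: "inference" as a pure function of fixed weights.
# Nothing in here updates between calls; there is no state carried over.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))      # the "weights", frozen after training

def forward(x, W):
    # one feed-forward pass: no loops back, no memory of previous calls
    return np.tanh(W @ x)

x = rng.standard_normal(8)
out1 = forward(x, W)
out2 = forward(x, W)
assert np.allclose(out1, out2)       # same input, same output, every time
# W is byte-for-byte identical before and after. Whatever "stimulation"
# the input was, it changed nothing in the model.
```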
We have not solved the hard problem of consciousness, we do not know what a brain is well enough to say what is and isn’t a brain.
I’ll somewhat go with that. Consciousness and sentience aren’t well defined. They’re not really scientific terms. But we’re certainly able to tell some of it. For example a TV set, car, fridge (as of today) or book isn’t conscious the same way an animal is. Sure, my fridge has some sensors to perceive something about its surroundings. A book has information in it and it can change the world by people reading it. But I don’t think defining consciousness as loosely as that makes any sense. Any NPC in a first-person-shooter game has more sensory input, internal state and output than ChatGPT. Any car from 10 years ago has a bunch of electronics, processing power, internal state and even feedback loops(!) inside. So pretty much everything would qualify as a conscious entity.


Sorry, not to be mean or anything. But we’ve made significant scientific progress since the middle ages. We know by now that a dog for example has pain receptors. And a brain. While ChatGPT for example doesn’t have pain receptors.
You can’t simply argue that because Descartes didn’t have a proper microscope, we should still be confusing machines with animals in 2026.
And while neural networks are inspired by processes in nature, they’re not the same at all. An LLM works using the transformer architecture. Your human or animal brain doesn’t. Not even close. They’re very unalike. And you can take some computer science class on machine learning; it’s actually not too hard to understand how they work.
And for example a large language model doesn’t even learn in place. Nor does it have a proper internal state of mind. A dog will remember if you kicked it. And that’ll do something to its brain. ChatGPT forgets everything you did the moment it’s done sending you your output. And it’s in exactly the same state as before. It doesn’t think, doesn’t learn. None of that is part of the process.
We try to mimic something like reasoning by providing it with a scratchpad to write down things before answering. We write “agents” around it, so it’s able to program tests, check its programming output and loop on it. But that’s also not how a real brain works. And it’s way, way more simplistic. The neurons aren’t the same as in a brain made by nature. They’re not connected the same way. They’re not connected to a similar thing. And they also operate in a different way. They come in wildly different numbers. And ultimately there’s just zero similarity between an LLM and a brain. Other than both can process text, images, sounds… And both are made up of many tiny cog wheels that combine into some bigger concept.


Empathy and availability are great. Listen to them, respect their struggles growing up. I don’t think that necessarily means being strict/authoritative or lenient; for me it’s more about them feeling respected as a person. And a sane, straightforward way to deal with mistakes. Because we all make mistakes. Especially while learning and growing up.
And I’d say shared memories are awesome. Whatever that means for you. Go on a canoe trip, teach them how to fix their bike, do woodwork, drill a hole into the wall or bake a cake.


Uhm fake? And they even omitted system requirements from Microsoft’s list?!
I think money is the major factor that does the gate-keeping. Let’s say I’m not okay with the other (commercial) models out there. What they do and don’t do, their tone and political bias. Like Elon Musk claims… Now I’m gonna need some 6-digit sum of $$$ to train my own model. And a couple of thousand wage-slaves in a poor country to curate datasets for me and do RLHF. And that’s the real kicker. Musk can do it easily. But I wouldn’t know where to get that kind of money. And it’s prohibitively expensive for community projects. And even large independent organizations like universities struggle to do AI research on the same level as OpenAI, Anthropic, X, Meta, the Chinese, … do it.
I think even if we changed copyright, piled up large, state-of-the-art public datasets, forced them to release the weights, we’d still be in a similar situation as of today. Where we get some breadcrumbs tossed by someone. We can choose whose breadcrumbs we pick. And we can put some topping on it. But it’s not really emancipating in the same way copyleft works for software.
I think it really depends on what you’re concerned with. Open-weight models give you a bit more control. For example you won’t leak all your private information to some mega-corporation. But they still centralize power, have a big impact on the environment, the labour market… They also hallucinate and flood the internet with misinformation, bots and made-up stuff… And they’ll still be tuned to fit someone’s agenda. Whether that’s the bias and morals Mark Zuckerberg, Elon Musk or Sam Altman like to push down on the world. Or a Chinese “startup” attached to some Chinese government sponsored tech company. You pick your poison…
I don’t think there’s a noteworthy chance this will end up as some decentralized thing. It’ll always be researched, trained and designed by whoever is able to afford those kinds of salaries and datacenters. Which is going to be the elites, billionaires, largest companies and governments.
Too long for Lemmy’s attention span. People start down-voting again, without reading all the later parts where she realizes she’s been turned into a Tamagotchi?!