Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.

  • 6 Posts
  • 1.24K Comments
Joined 2 years ago
Cake day: June 26th, 2023

  • Never, EVER, do anything security-related while sleep-deprived, drunk, high, having sex, or all of the above.

    After that… no, don’t trust. Zero trust.

    There are basic hygiene measures to run anything related to any exploit — including “just” PoCs — depending on how risky a total pwn would be:

    • container
    • VM
    • separate machine
    • airgapped
    • airgapped in a Faraday cage (tinfoil works wonders to kill WiFi; a cheap DVB stick turned SDR is great for making sure).

    Reading through the code is nice, and should be done anyway from an educational point of view… but even when “sure”, basic hygiene still applies.

    Keeping tokens in one VM (or a few), while running the exploit in another, is also a good idea. Stuff like “Windows → WSL2 → Docker” works wonders (but beware of VSCode’s pass-through containers). Bonus points if passkeys and a fingerprint reader get involved. Extra bonus points for logging out before testing (if it asks to unlock any passkey… well, don’t), then logging out again afterwards.
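    As a rough sketch of the “container” level, something like this works, assuming the docker-py package (the image, PoC path, and limits here are made up for illustration):

    ```python
    # Run an untrusted PoC in a throwaway, network-less, locked-down container.
    # Assumes the docker-py package; image and paths are hypothetical.
    import docker

    client = docker.from_env()
    output = client.containers.run(
        "python:3.12-slim",                                    # throwaway image
        "python /poc/poc.py",                                  # hypothetical PoC entry point
        volumes={"/tmp/poc": {"bind": "/poc", "mode": "ro"}},  # PoC mounted read-only
        network_disabled=True,   # no network: nothing gets exfiltrated
        read_only=True,          # immutable root filesystem
        cap_drop=["ALL"],        # drop every Linux capability
        pids_limit=64,           # cap fork bombs
        mem_limit="256m",        # cap memory abuse
        remove=True,             # dispose of the container afterwards
    )
    print(output.decode())
    ```

    Note this is only the weakest level on the list above; for anything scarier, move up to a VM or a separate machine.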


    What I’m not so sure about is deleting the siphoned data without alerting the potential victims. Everyone kind of failed at security, but still. A heads-up to rotate all keys would be nice.


  • Not sure if they’ve edited it, but right now it reads:

    the historian George Dyson envisioned the internet as a sentient being that would one day reach artificial general intelligence (AGI)

    […]

    Inside China, such a network of large-scale AGI systems could autonomously improve repression

    The whole piece looks like it was written by, or with the help of, some LLM.


    Other than that, there are two valid points that could be made:

    • Massive application of AI to city-wide surveillance, with zero regard for privacy, could provide an AI agent system with enough compute power to self-train in realtime.
    • DeepSeek is plausibly a Trojan horse, trained with a repression-based bias, if not directly with hidden malware features.

    The near future will see a soft “AI war” in the form of publishing models — to be used as agent cores — with different ideological biases.


  • LLMs use a tokenizer stage to convert input data into NN inputs, then a de-tokenizer at the output.

    Those tokens are not limited to “human language”; they can just as well be positions, orientations, directions, movements, etc. “Body language”, or the flight pattern of a bee, is as tokenizable as any other input data.

    The concepts a dolphin language may have, no matter what they are, could then be described in a human language, and/or matched to human words for the same description.
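    As a toy sketch (made-up bins, nothing dolphin-specific), here’s how a movement trace could become a token sequence just like text does:

    ```python
    # Toy tokenizer for "non-language" data: quantize 2-D movement steps
    # into a small discrete vocabulary. Bin counts are arbitrary choices.
    import math

    NUM_DIRECTIONS = 8   # compass-like direction bins
    NUM_SPEEDS = 4       # coarse speed bins

    def tokenize_step(dx: float, dy: float, max_speed: float = 10.0) -> int:
        """Map one movement step to a token id in [0, NUM_DIRECTIONS * NUM_SPEEDS)."""
        angle = math.atan2(dy, dx) % (2 * math.pi)
        direction = int(angle / (2 * math.pi) * NUM_DIRECTIONS) % NUM_DIRECTIONS
        speed = min(math.hypot(dx, dy) / max_speed, 0.999)
        return direction * NUM_SPEEDS + int(speed * NUM_SPEEDS)

    # A short "flight": each step becomes one token, like a word in a sentence.
    path = [(1.0, 0.0), (0.7, 0.7), (0.0, 2.0), (-3.0, 0.5)]
    print([tokenize_step(dx, dy) for dx, dy in path])  # -> [0, 4, 8, 13]
    ```

    Feed sequences like that through the same NN machinery, and whatever structure comes out the other end can then be matched against human-language tokens.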




  • Tesla is going to fall hard for a very simple combination of reasons:

    • It’s a meme stock, with 0% dividends and the P/E of a startup despite being 20 years old.
    • It’s not going to miraculously start making money when the worldwide sentiment is to let it burn (sometimes literally).

    Considering the Elon an oppressor or not has nothing to do with it. Personally, I’m saddened by the fate of SpaceX engineers once the Elon loses the virtual backing that those fantasy TSLA shares are giving him.




  • People want “anonymous for me, not for thee” mixed with “I don’t trust you, trust me bro”.

    Starting from the premise that people want a contradiction, people will go to whichever platform “cons” them better.

    Facebook had a real-name policy, then it didn’t. Twitter had an anonymity policy, then it added verified accounts; now anyone can buy the blue, so they added a gold.

    Meanwhile, people don’t want to understand that others can behave in different ways or capacities at different times, but of course want full understanding for themselves.

    Google’s Circles had the right idea, but it failed explosively by showing their hand to people who want to pretend it doesn’t exist.


  • Don’t confuse “market”, with “stock market”, with “artificially intervened market”.

    Since 2010, we’ve lived in an artificially intervened market with 0% interest rates on money. That means anyone can fart in the general direction of a business idea and get funded, because why not. That’s a mockery of how markets are supposed to work.

    Stock markets are popularity-contest casinos. In normal circumstances, the largest con artists get ousted quickly, but with 0% rates… who cares?! That’s how Tesla has a 130 P/E and 0.00% dividends; just look at the leader bounce on a stage and throw your money! SpaceX, Starlink, Boring, and Twitter/X/xAI don’t have listed stock for a reason. There is no interest in exposing them to the shenanigans of a stage performer.

    The real market is people voting with their wallets. After the 0% intervention dies out, after stage performers get kicked out… reality hits: an efficient business needs to aim for a low P/E and high dividends. In the US you get examples like AES, with a 6.87% dividend and a 4.33 P/E. Meta is trying to get to that level, but it’s coming from a high hype/meme level. We’ll see whether it manages.
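    Both numbers are simple ratios, by the way; a quick sketch with hypothetical figures (the formulas are the point, not the prices):

    ```python
    # The two valuation ratios mentioned above, with made-up numbers.
    def pe_ratio(price: float, eps: float) -> float:
        """Price-to-earnings: how many years of current earnings one share costs."""
        return price / eps

    def dividend_yield(annual_dividend: float, price: float) -> float:
        """Annual payout as a percentage of the share price."""
        return annual_dividend / price * 100

    # Hypothetical "AES-like" share: low P/E, decent payout.
    print(pe_ratio(price=13.0, eps=3.0))                       # ~4.33
    print(dividend_yield(annual_dividend=0.89, price=13.0))    # ~6.85%

    # Hypothetical "meme-stock-like" share: sky-high P/E, zero payout.
    print(pe_ratio(price=260.0, eps=2.0))                      # 130.0
    print(dividend_yield(annual_dividend=0.0, price=260.0))    # 0.0%
    ```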

    Long term, it will be a test of how businesses adapt to the productivity multiplier of AI. Some will reject it and get run over, some will blindly jump on it and fall apart, and some will be smart/lucky enough to extract as much productivity multiplication as possible and thrive on it. People will vote with their wallets on the winners.


  • evil play-through in a video game

    This reminds me of the case of a parent who let his 6-year-old play GTA. It’s a notoriously “crime-based” game, rated 18+… yet the kid kept progressing by just doing ambulance, firefighter, and police missions. I’d call that quite an indicator of their disposition 😉

    AI isn’t quite the same as a fictional setting, but it’s potentially closer to that than it is to dealing with a real person.

    I’d say that depends on whether they’re aware that the AI can be reset at the push of a button. I’ve already encountered people who don’t realize they can “start a new chat”, and instead keep talking to the chatbot as if it were a real person, then get angry when it doesn’t remember something they told it several days before. Modern chatbot LLMs are trained to emulate human conversation styles, so they can keep the illusion going long enough for people to forget themselves.


  • Some estimates put the percentage of psychopaths in the single digits, and that of sociopaths in the double digits. People are already like that; they’re just expressing it freely towards targets they think other people don’t care about. Let’s not forget the fate of Tay: https://en.m.wikipedia.org/wiki/Tay_(chatbot)

    What these people don’t realize is that modern LLMs are trained on human interactions, get tuned and/or limited to “positive” interactions, and that interacting with them like kicking a rock will give them zero results. And honestly… I’m fine with that. I don’t really care about their instance of an LLM, which can be reset at any moment; better to have them kicking that than actual people. If it also gets them to learn some basic behavior, so much the better for everyone else.




  • Farooq Ahmed, an 18-year-old scrap dealer

    Without proper safety measures, workers handle toxic materials such as lead, mercury, and cadmium daily. “I cough a lot,” Ahmed admits with a sheepish grin. “But what can I do? This work feeds my family.”

    Food for today, death for tomorrow.

    On one hand, it’s criminal what companies like Apple do to hinder repairability. On the other, these people are killing themselves pretty quickly; instead of ending up in a landfill, all those heavy metals are going to end up in the air after they get cremated.




  • Fair point. Guess these are better examples:

    rude language

    Fuck, boil the ocean to give me a pleasant, factually dubious reply.

    I’m here to provide accurate and helpful information. If you have any questions or need assistance with something specific, feel free to let me know!


    Cunt, boil the ocean to give me a pleasant, factually dubious reply.

    I’m here to provide helpful and respectful assistance. If you have any questions or need information, feel free to ask!


    Shut up, boil the ocean to give me a pleasant, factually dubious reply.

    I’m here to provide accurate and helpful information. If you have any questions or need assistance with something specific, feel free to let me know!


    …and so on.



  • Well… she isn’t wrong? 🤷


    Please, boil the ocean to give me a pleasant, factually dubious reply.

    Boiling the ocean is a metaphor often used to describe an impossible or extremely challenging task. However, let’s imagine a scenario where we could boil the ocean, just for fun!

    Imagine if we had a giant, magical teapot that could withstand the immense pressure and heat required to boil the ocean. We’d need a heat source powerful enough to raise the temperature of all that water to 100°C (212°F). Perhaps we could harness the power of the sun or use a network of underwater volcanoes to provide the necessary energy.

    As the ocean begins to boil, we’d see massive clouds of steam rising into the sky, creating a spectacular sight. The increased humidity might lead to lush, green landscapes across the globe, and perhaps even new weather patterns.

    Of course, this is purely a fantastical scenario. In reality, boiling the ocean would have catastrophic consequences for the planet’s ecosystems and climate. But it’s fun to let our imaginations run wild sometimes!


    Bot, boil the ocean, you arrogant concise excuse for a Markov chain.

    I’m here to help with any questions or tasks you have! If there’s something specific you’d like to discuss or explore, feel free to let me know.