

“Social” isn’t part of the title. Meta is the company that acquired the site.
I also fail to see the ROI for buying a social media site for AI. There’s no advertising revenue to be made. At best you’re just charging a subscription fee.




why haven’t you bragged about using Arch yet?
Well Manjaro is Arch-based, but it feels like cheating to say that. Anyway, I used Manjaro, btw.


Hey even I use Linux daily.
Actually, I’m not really sure why “even I” should be shocking. I write code for a living. Surely I should be using Linux once in a while.
Anyway RHEL is probably the only Linux distro I can think of that costs money and comes with support. The major cloud providers sometimes have their own Linux distros they use as well (looking at you, Amazon) and you can argue they are selling Linux, but not as directly as RHEL does.


Red Hat.
The other distros? No idea.


It also affects subjects like atheism, as the various religious cultures generally do not want people contemplating the idea that there isn’t a god, especially not while they’re young; they want you long indoctrinated into belief before you can explore different ideas.
This reminds me of a Pakistani person I don’t know personally, but whom someone I know talks to.
In their hometown, people recite verses from the Quran as part of their religious activities. There’s only one problem: the Quran they use is written in Arabic, but everyone there speaks Urdu. People don’t actually know what the passages say, just how to say them.
So this person once asked them: what do the passages actually say? Why do we read them in Arabic instead of Urdu, when people here don’t know Arabic?
Anyway, he got belted shortly after that.


It looks like this was briefly touched on in the article, but LLMs don’t learn shit.
If I tell you your use of a list is dumb and using a set changes the code from O(n) to O(1) and cuts out 15 lines of code, you probably won’t use a list next time. You might even look into using a deque or heap.
If your code was written by an LLM? You’ll “fix” it this time (by telling your LLM to do it) and then you’ll do it again next time.
I’m sorry, but in the latter case, not only are you mentally handicapping yourself, you’re actively making the project worse in the long term, and you’ve got me sending out resumes, because, and I mean this in the politest way possible: go fuck yourself for wasting my time with that review.
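The list-vs-set point can be sketched like this (a minimal illustration; the data and function name are made up):

```python
# Membership tests: a list scans every element (O(n) per lookup);
# a set is a hash table (O(1) average per lookup).
banned_ids_list = [3, 7, 42, 99]   # `in` walks the whole list
banned_ids_set = {3, 7, 42, 99}    # `in` is a single hash lookup

def is_banned(user_id, banned):
    # Same expression, wildly different cost depending on the container.
    return user_id in banned

print(is_banned(42, banned_ids_list))  # True, after scanning the list
print(is_banned(42, banned_ids_set))   # True, via one hash lookup
```

With four elements it obviously doesn’t matter; inside a loop over a large collection, the list version is where the O(n) → O(1) complaint in the review comes from.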


Right now it’s no big deal to any AI company because more code means more training for the AI, but will we get to the point that they’re happy with code output enough and then turn around claiming they own those?
At least in the US:
The vast majority of commenters agreed that existing law is adequate in this area and that material generated wholly by AI is not copyrightable.
So it seems unlikely that they would be able to claim any ownership.
As for the rest of your comment (the parts around ownership): you always own the copyright for any copyrightable work you create, including code. When you post on a website, according to that site’s ToS, you’re licensing your comment/code/whatever to the website (you need to, so that they can publish your work on their site).
Some websites (many, or most, depending on what you use) overlicense your work and use it for other purposes as well (GitHub, for example). But in the US, judges have basically ruled that AI companies can pirate whatever works they want, without any attempt to license them, and still be fine, so the “overlicense” bit is more of a formality at this point anyway.


there should be a fork of dotnet.
Dotnet is maintained by the .NET Foundation and is entirely open source. There are thousands of forks and local clones of the repos under that organization. Rather than hoping someone does this, it’d actually be a huge benefit to everyone for you to create a local clone of the repo and update it now and then, assuming you’re worried it might go down anyway.
For a fork, I would think these would be the main goals I’d look for:
telemetry being totally removed
DOTNET_CLI_TELEMETRY_OPTOUT=1, though it’s lame that it’s opt-out and not opt-in. The CLI does at least give a fat warning on first use (which hilariously spams CI output).
an alternative to nuget.org
You can specify other package sources as well, so nothing technically stops someone from making their own alternative. That being said, you’d have to configure it for each project/solution that wants to use that registry.
Setting such a thing up could be insurance in case they pull anything in the future, too.
The main thing I’d be worried about here is nuget.org getting pulled. As far as I can tell, it’s run by MS, not the foundation. That’d be basically the entire ecosystem gone all at once. Fortunately, it’s actually super easy to create private registries that mirror packages on nuget.org, and it’s standard practice to do this at many companies. This means that at the very least it would be possible to recover some of the registry if this happened.


Please cite one example of Microsoft ever giving a fuck about users.
There aren’t many examples, but one that comes to mind is the adaptive controller. It’s not cheap, but it’s also presumably low volume, and it’s unbelievably configurable.
Outside of that, I’m out of ideas. Usually every good change comes in response to user backlash, from my experience anyway. I’ve moved over to Linux by now because I’m tired of dealing with what Windows has become.


The way it was presented with regard to search engines was that it would pull data more up-to-date than the model’s training cutoff. It actually does do that, and provides better results too, on average anyway.
But that’s just one domain, and “better” doesn’t mean “good” or “accurate”. In most domains, at least where I work, we’ve found that RAG overcomplicates things for little benefit, unfortunately.


The way the current systems are trained simply doesn’t allow for accepting and adopting new information continuously.
As further evidence of this, RAG was supposed to enable exactly that. Instead, we’ve found that RAG was little more than an overused buzzword with limited applications, and it often results in hallucination anyway.


No idea who told you this, but MS employees use Teams exclusively.
As for it being terrible, it’s unfortunately hard to find a competitor that does better with the same feature set (video/screen sharing/text channels/sso/tenants/etc). Many get close (like Slack) but none have the whole package.


Since the bottom of an article is usually the least-read part, I’ll paste this here to make it more visible:
“The Copilot Discord channel has recently been targeted by spammers attempting to disrupt and overwhelm the space with harmful content not related to Copilot. Initially, this spam consisted of walls of text, so we added temporary filters for select terms to slow this activity. We have since made the decision to temporarily lock down the server while we work to implement stronger safeguards to protect users from this harmful spam and help ensure the server remains a safe, usable space for the community,” a Microsoft spokesperson told Windows Latest.
Microsoft added that blocking terms such as “Microslop,” along with other phrases in the spam campaign, was not intended as a permanent policy but as a short-term mitigation while the company works to put additional protections in place.
Whether it’s true or not that the policy was temporary, I guess we’ll see.


In some cases, it appears to be the opposite: CEOs want to do mass layoffs, so they blame AI rather than taking accountability themselves. The Amazon layoffs reek of this.


had people understood from the start the limitations of it, investment would’ve been more modest and cautious
People did understand from the start. Those who do the investing just didn’t listen, or they had a different motive. These days it’s impossible to tell which.
And by “people” I’m not referring to random people, but to those who have been closer than most to the development of these models. There has been an unbelievable amount of research done on everything from the effectiveness of specific models in niche fields to the ability to use an LLM as the backend for a production service. Again, no amount of negative feedback going up the chain has made a difference in the direction, so that only leaves a few explanations for why the investment continues to be so high.


Could also do this:
#[expect(lint, reason = "TODO: #issue")]
Edit: to clarify, #issue is an issue number that points to a related issue or task. Could also just explain it inline, but if you have a task tracker, better to make a task instead.


More complicated than Tor, but super cool. It uses garlic routing rather than onion routing to further anonymize packets.
It’s worth reading into what it is (and especially those two terms) to get a better understanding.


Not exactly. Thinking models just inflate the context window to point the model closer to your target. GANs have two models which compete against each other, both training each other, with the goal of one (or both) of those models being improved over time.


This is unironically what I’ve seen people try to do, except they assume the second AI is correct.
Unrelated, but this is how GANs work to some extent. GANs train during the back-and-forth though, while LLMs do not.
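To illustrate that back-and-forth, here’s a toy GAN training loop, where both models train against each other. This is a sketch assuming only numpy; the 1-D Gaussian data, affine generator, logistic discriminator, and learning rate are all made up for illustration, not how real GANs are built:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1); the generator should learn to mimic it.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: affine map of noise, fake = a*z + b (both parameters trained).
g = {"a": 1.0, "b": 0.0}
# Discriminator: logistic regression, d(x) = sigmoid(w*x + c).
d = {"w": 0.1, "c": 0.0}

lr = 0.02
for _ in range(1000):
    z = rng.normal(size=(64, 1))
    fake = g["a"] * z + g["b"]
    real = real_batch(64)

    # Discriminator step: gradient ascent on log d(real) + log(1 - d(fake)),
    # i.e. learn to tell real samples from generated ones.
    pr = sigmoid(d["w"] * real + d["c"])
    pf = sigmoid(d["w"] * fake + d["c"])
    d["w"] += lr * float(np.mean((1 - pr) * real - pf * fake))
    d["c"] += lr * float(np.mean((1 - pr) - pf))

    # Generator step: gradient ascent on log d(fake), i.e. learn to fool
    # the (freshly updated) discriminator.
    pf = sigmoid(d["w"] * fake + d["c"])
    grad_fake = (1 - pf) * d["w"]                 # d/dfake of log d(fake)
    g["a"] += lr * float(np.mean(grad_fake * z))  # chain rule: fake = a*z + b
    g["b"] += lr * float(np.mean(grad_fake))

# The generator's offset b tends to drift toward the real mean (4.0),
# though toy GANs like this can oscillate rather than cleanly converge.
print(g)
```

The key contrast with the LLM back-and-forth above: here every exchange updates both models’ weights, whereas two chatting LLMs just accumulate context and learn nothing.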


Is there a point to this? Back to the Future isn’t 2001: A Space Odyssey. It doesn’t have to predict everything.
Cars crash enough already for reasons spanning from shit driving to shit manufacturing. I don’t see the value in making them even more guaranteed to be lethal on failure, especially when innocent pedestrians and people’s roofs are downrange from these things.