The -force-d3d12 option is a param to the actual game, so it would go after %command% (or by itself if no prefixes are being added).
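For reference, a sketch of what the Steam launch options field could look like with a prefix in front (the DXVK_HUD env var here is just an illustrative prefix, not something you need):

```
DXVK_HUD=memory %command% -force-d3d12
```

And with no prefixes, the flag can just stand alone:

```
-force-d3d12
```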
Yeah, unfortunately I never use YT from a browser, and Grayjay doesn’t have DeArrow support afaik.
No idea what the state of this topic is in Japan.
Just faxed them about it. Will get back to you as soon as they respond.
I’m sure it’s just the marketing dept changing hands over time. Marketing teams are like a Scott’s Tots situation: they’re just trying to say whatever makes the product’s numbers look good in the near term. Delivering on any promises is a future marketing team’s job.
“Of all the empty promises I have made, this one is by far the most generous”
- Michael Scott/Microsoft’s marketing team
Assuming C:S2 uses DX and you’re running it through Proton/DXVK, it’s ultimately the Vulkan driver’s job to page to system memory correctly. This honestly sounds like you’re seeing a bug; in that circumstance it shouldn’t crash, it should just hurt performance from all the paging. I see a couple of older bug reports where people hit exactly this kind of problem with DXVK+Nvidia.
One other thing to try: idk if you’re running the game in dx11 or dx12 mode, but apparently both exist. If it’s currently running in dx11 mode, try the launch flag -force-d3d12. If you’re already using dx12, maybe try swapping back to dx11. Good luck!
Shared GPU memory (as described in that article) is just how Windows decided to solve the problem of oversubscription of VRAM. Linux solves it differently (it looks like it just allocates what it needs on demand and uses GART to address it, but I’d like to know more).
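If you want to watch it happen, a quick sketch (assuming an AMD card on the amdgpu driver and that card0 is the dGPU; Nvidia exposes this differently):

```
# VRAM currently in use, in bytes:
cat /sys/class/drm/card0/device/mem_info_vram_used

# System memory mapped through GART/GTT for the GPU, in bytes:
cat /sys/class/drm/card0/device/mem_info_gtt_used
```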
So I’m curious what you mean when you say you miss it. Are you having programs crash OOM when running on Linux? Because that shouldn’t be happening.
It’s not ideal to be relying on shared GPU mem anyway (at least in a dGPU scenario). Kinda like saying you have a preference on which crutches to use.
he’s big into the clickbait game
Don’t hate the player, hate the game.
Smarter Every Day did a video on using clickbait titles and thumbnails. The data is clear: everyone complains about it, but it performs far better than anything else on YT. And if the goal is to most efficiently spread educational videos to the largest number of people, then unfortunately, it’s really the only option.
TBH, the tone isn’t that different from Bill Nye. Wacky colors, loud obnoxious personality, gotta get kids excited about science somehow.
Let’s Encrypt is good practice, but IMO if you’re just serving the same static webpage to all users, it doesn’t really matter.
Given that it’s a tiny raspi, I’d recommend reducing the overhead that WordPress brings and just statically serving a directory with your site. Whether that means using WP’s static site options or moving away from WP entirely is up to you.
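If you stick with Apache, a minimal vhost sketch for serving a static export (the domain and paths are placeholders):

```
<VirtualHost *:80>
    ServerName example.com
    # Point this at wherever your static export lives:
    DocumentRoot /var/www/static-site
    <Directory /var/www/static-site>
        Options -Indexes
        Require all granted
    </Directory>
</VirtualHost>
```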
The worst-case scenario would be someone finding a vulnerability in the publicly exposed services (Apache), getting persistence on the device, and using that to pivot to other devices on your network. If possible, consider putting it in a routed DMZ: make sure the pi can only see the internet and whatever device you plan to maintain it with. That way, even if someone somehow owns it completely, they won’t be able to find any other devices to hack.
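As a sketch of what that isolation could look like on an nftables-based router (all the IPs are made-up placeholders: .50 is the pi, .10 is the machine you maintain it from):

```
table inet pi_dmz {
  chain forward {
    type filter hook forward priority 0; policy accept;

    # Let the pi reach the one machine that maintains it:
    ip saddr 192.168.1.50 ip daddr 192.168.1.10 accept
    # Block it from everything else on private ranges:
    ip saddr 192.168.1.50 ip daddr { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } drop
    # Internet-bound traffic falls through to the accept policy.
  }
}
```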
I promise you in a year you’ll be asking the same question about the same group of people.
What client are you using? Those all sound like complaints about the client, not Mastodon.
I’m all for it. All publicity is good publicity in this space. Open criticism is the first step to better open software.
Lol that’s actually hilarious. But then, why not comment on your posts too? Each post is just sitting there with an empty comments section.
Or your OS* is doing something wrong 😆
But if you’re filling up all RAM and swap, either you needed to upgrade a while ago, or you’re doing something wrong.
Btw it looks like you accidentally quoted the same sentence twice.
I disagree that it’s the same, for multiple reasons. First off, the project and its telemetry were never profit-driven. Their goal was always to use modern methods of software development to make the software better.
The fact is, these days all for-profit projects gather a ton of info without asking, then use that data to inform their development and debugging (and sell it, but that’s irrelevant to my point). To deny open source software even the option of reporting telemetry is to ask its developers to make a better product than the for-profit competition, with fewer tools at their disposal, and at a fraction of the pay (often on a volunteer basis). That’s just unreasonable.
Which is why the pushback wasn’t that they were using telemetry; it was that they were going to use Google Analytics and Yandex, which are “cheap” options but are obviously for-profit and can’t be trusted as middlemen. They heard the concern and decided to steer toward a non-profit solution instead.
But as a software dev and a Linux user, I often wish I could easily create bug reports using open source, appropriately anonymized telemetry reporting tools. I want to make it as easy as possible for the saints who volunteer their time to build a better system for me to use.
As for the issues in Tenacity, it was likely specific to what I was doing: rapidly opening and closing a lot of small audio clips and saving them to network-mounted dirs under different names. I remember having issues with simple stuff like keyboard shortcuts to open files; I had to manually use the mouse to select a redundant option every single time (don’t recall what it was). And I think it would just crash trying to save to the network-mounted dir, so I always had to save locally and copy over manually. So I switched back and continued my work.
Afaik, back when it all went down, they heard the public reaction about the telemetry thing and completely reversed course. On top of that, many distros would be sure to never distribute a build with telemetry enabled anyway. So there has never been any cause for concern. Would love to be proven wrong, though.
Also, Audacity is handy, but it’s not perfect, and I’ll gladly use a better alternative. But the last time I tried Tenacity, it had a bunch of little differences that made the tool just a bit harder to use. So I still default to Audacity.
Which is a good reminder to everyone to support your local Lemmy instances.
“Runs like shit” is expected when you’re relying on paging to system memory every frame; step 1 is avoiding a crash from an OOM/failed alloc.
The next step is to reduce paging if possible. I see C:S2 has a min spec of a 4GB GPU. Assuming they actually tuned their game for such a card on Windows, the unfortunate reality of Proton/DXVK is that there’s a bit of memory overhead and a lack of knowledge about residency priority, especially when translating a dx11 game.
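One knob that might help while it’s in dx11 mode: DXVK lets you under-report VRAM so the game budgets for a smaller card. A sketch (the value is just a guess to leave some headroom under 4GB):

```
# dxvk.conf, placed next to the game's exe (or pointed at via DXVK_CONFIG_FILE).
# Caps the VRAM DXVK reports to the game, in MiB:
dxgi.maxDeviceMemory = 3584
```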
DX12 maps to Vulkan more closely, so my hope is that the -force-d3d12 flag would give the translation layer better info to work with (ex. hopefully the game makes use of dx12 heaps and placed resources, which map 1:1 to Vulkan concepts, so the most important resources are less likely to get paged out). Strictly speaking, D3D12 under Proton goes through vkd3d-proton rather than DXVK, but the same idea applies.