• 0 Posts
  • 203 Comments
Joined 1 year ago
Cake day: June 5th, 2023




  • There’s someone I sometimes encounter in a Discord I’m in who makes a hobby of doing stuff with them. From what I’ve seen, they do more with it than just asking for a prompt and leaving it at that, at least partly because it doesn’t generally give them something they’re happy with initially, so they end up asking the thing to edit specific bits of it, in different ways, over and over until it does. I don’t really understand exactly what this entails, as what they seem to most like making it do is code “shaders” for them that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don’t like about each, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn’t really the most useful use case.




  • The first bit of that is exactly what I was trying to say; it’s almost exactly the same as an example I considered giving but left out to avoid extra length. So we’re in agreement there.

    The second, though, I think misses that there is a distinction between physical possibility and practical ability. In theory, it breaks no physical laws for me to become richer than Jeff Bezos by the end of next year. In practice, though, most pathways to achieving that level of wealth, especially that quickly, involve a whole lot of luck on very low-likelihood (but not impossible) events, so there is probably no sequence of actions I can actively decide to take that stands any reasonable chance of getting me there. There are technically sequences like “buy a string of winning lottery tickets in a row” that might do it, but because they rely on abilities I don’t have (like knowing which tickets will win in advance), I can’t actually attempt those paths. A rough sketch of the arithmetic is below.
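    (Just to make “astronomically unlikely” concrete, here is a quick Python sketch, assuming jackpot odds of about 1 in 292 million per ticket, roughly Powerball-scale. The exact figure is an assumption; it doesn’t change the conclusion.)

```python
# Rough sketch: probability of winning several independent jackpot
# draws in a row, assuming odds of about 1 in 292 million per ticket
# (roughly Powerball-scale; the exact figure doesn't matter here).
p_win = 1 / 292_000_000

for n in range(1, 4):
    # Independent draws multiply, so n wins in a row has probability p_win ** n.
    print(f"{n} win(s) in a row: about 1 in {1 / p_win**n:.3g}")

# Prints roughly:
#   1 win(s) in a row: about 1 in 2.92e+08
#   2 win(s) in a row: about 1 in 8.53e+16
#   3 win(s) in a row: about 1 in 2.49e+25
```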


  • Maybe; I suspect we just disagree on semantics without much meaningful difference. I guess a simpler way of putting what I was saying is: “if you think that the ‘means’ aren’t justified by the ‘ends’ when all is said and done, then you haven’t actually achieved the ‘ends’ at all, so whether they would have been a good thing or not is now a moot point.”


  • I’ve always thought arguments about “do the ends justify the means”, or the somewhat rarer reverse form of “is X the right thing to do regardless of the consequences”, present a bit of a false distinction. The means are part of the ends, and achieving some goal is the entire reason to take or not take any action. If you wish to achieve a certain end state, whatever state you actually end up with includes the consequences of whatever you did to get there. If those consequences make the resulting end state undesirable, that doesn’t mean your desired end state is actually bad; it means that what you desired is unachievable via that path. And if you can’t find any end state that is likely to equal what you desire once those consequences are included, then what you want may simply be something you are unable to achieve.











  • To be fair, understanding something well enough to automate it probably requires learning it in the first place. Obviously an AI that just tells you the answer isn’t going to get you anywhere, but it sounds like the user you were replying to was suggesting an AI limited enough that it couldn’t really tell you the answer to something unless you yourself went through the effort of teaching it that concept first.

    I’m not sure how doable this is in practice. My suspicion is that to actually be useful in that regard, the AI would have to be fairly advanced and merely pretend not to understand a concept until adequately “taught” by the student, if only so it could tell whether it was taught accurately and tell the student they got it wrong and need to try again, rather than reinforce an incomplete or wrong understanding. There’s also a risk that current AI used this way could be “tricked” by clever wording into revealing answers it’s supposed to act like it doesn’t know yet (on top of the existing issues with AI spitting out false information by making associations it shouldn’t actually make). But if someone actually made such a thing successfully, I could see it helping with some subjects; a toy sketch of that gating idea is below this comment.

    I’m reminded of my college physics professors, who would both let my class bring a full page of notes and the class textbook to refer to during tests, under the reasoning that a person who didn’t understand how to use the formulas in the text wouldn’t be able to actually apply them, while someone who did understand them but misremembered a formula would have the ability to look it up again in the real world. These were by far some of the toughest tests I ever had. Half of the credit also came from being given a copy of the test to do again for a week as homework, where we as a class were encouraged to collaborate and teach each other how to solve the problems, again on the logic that explaining something to someone else helps teach the explainer that thing too.
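    (A toy sketch of that gating idea, purely to illustrate the shape of the design. Everything here is hypothetical: the class name, the keyword check, the canned replies. A real version would need an actual model judging the student’s explanation rather than keyword matching, which is exactly the hard part.)

```python
# Toy sketch of a "teachable" tutor that never states the answer itself:
# it only checks a student's explanation against a hidden reference and
# says whether the teaching was good enough. All names here are made up.

class TeachableTutor:
    def __init__(self, concept: str, required_points: set[str]):
        self.concept = concept
        # Hidden reference: key ideas a correct explanation must cover.
        self._required_points = required_points

    def ask_for_answer(self) -> str:
        # The tutor refuses direct requests, no matter how they're worded.
        return f"I don't know {self.concept} yet. Can you teach it to me?"

    def receive_teaching(self, explanation: str) -> str:
        # Crude stand-in for judging the explanation: check that every
        # key idea is mentioned. A real system would need far more than
        # substring matching; this is only to show the control flow.
        text = explanation.lower()
        missing = {p for p in self._required_points if p not in text}
        if missing:
            return ("I still don't quite get it. Your explanation "
                    f"seems to be missing something about: {sorted(missing)}")
        return "That makes sense now. I think you've taught me this concept."

tutor = TeachableTutor("Newton's second law", {"force", "mass", "acceleration"})
print(tutor.ask_for_answer())
print(tutor.receive_teaching("Force on an object equals its mass times acceleration."))
```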