• 0 Posts
  • 58 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • You can’t just use an audio file by itself. It has to come from somewhere.

    The courts already have a system in place for this: anyone who seeks to introduce a screenshot of a text message, a printout of a webpage, a VHS tape with video, or a plain audio file has to introduce it as evidence through someone who testifies that it is real and accurate, with an opportunity for others to question and even investigate where it came from and how it was made/stored/copied.

    If I just show up to a car accident case with an audio recording that I claim is the other driver admitting that he forgot to look before turning, that audio is gonna do basically nothing unless and until I show that I had a reason to be making that recording while talking to him, why I didn’t give it to the police who wrote the accident report that day, etc. And even then, the other driver can say “that’s not me and I don’t know what you think that recording is” and we’re still back to a credibility problem.

    We didn’t need AI to do impressions of people. This has always been a problem, or a non-problem, in evidence.





  • I wonder if someone could set up some form of tunneling through much more mundane traffic, perhaps even entirely over a legitimate encrypted service’s regular browser interface (Discord, Slack, MS Teams, FB Messenger, Zoom, Google Chat/Meet, that kind of thing), where you just literally chat with a bot you’ve set up, instruct the bot to do things on its end, and have it forward the results back through that service’s file sending. From the outside it should look like ordinary encrypted chat with a popular service over an HTTPS connection.
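    Something like this rough sketch would be the shape of it, using discord.py as one example of such a service (the `!fetch` command, filenames, and token are placeholders, and a real setup would want error handling and size limits):

    ```python
    import io
    import aiohttp   # already a discord.py dependency
    import discord

    intents = discord.Intents.default()
    intents.message_content = True
    client = discord.Client(intents=intents)

    @client.event
    async def on_message(message):
        # Ignore the bot's own messages and anything that isn't a fetch command
        if message.author == client.user or not message.content.startswith("!fetch "):
            return
        url = message.content.split(" ", 1)[1]
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                data = await resp.read()
        # Forward the result back through the service's normal file-sending path
        await message.channel.send(file=discord.File(io.BytesIO(data), filename="result.html"))

    client.run("BOT_TOKEN")  # placeholder token
    ```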



  • Sometimes the identity of the messenger is important.

    Twitter made it super easy to use the API to periodically tweet the output of some automated script: a weather forecast, a public safety alert, an air quality alert, a traffic advisory, a sports score, a news headline, etc.
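    As a sketch of how little code one of those bots needed, here’s roughly what it looked like with the tweepy library (the credentials and the forecast string are placeholders; in practice you’d run this from a scheduler):

    ```python
    import tweepy

    # Placeholder credentials from a Twitter developer app with write access
    client = tweepy.Client(
        consumer_key="...",
        consumer_secret="...",
        access_token="...",
        access_token_secret="...",
    )

    # In a real bot this string would come from the automated script's output
    forecast = "Tonight: clear, low 48F. Tomorrow: sunny, high 72F."
    client.create_tweet(text=forecast)
    ```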

    These are the types of messages where you’d want to subscribe to the actual verified identity, and maybe even be able to forward them to others (aka retweeting) without compromising the identity verification inherent in the system.

    Twitter was an important service, and that’s why there are so many contenders trying to replace at least part of the experience.


  • Functionally speaking, I don’t see this as a significant issue.

    JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.
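    To illustrate that first point: a plain JPEG doesn’t store its quality setting at all, only the quantization tables that setting produced, so you have to open the file to make even a guess. A small sketch with Pillow (the filename is a placeholder):

    ```python
    from PIL import Image

    img = Image.open("photo.jpg")  # placeholder path
    # The "quality" number isn't recorded in the file; Pillow exposes the
    # quantization tables, whose values are what actually encode that choice.
    for table_id, table in img.quantization.items():
        print(table_id, list(table)[:8], "...")
    ```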

    Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.

    You’re right that we should ensure the metadata accurately describes whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans, where every pixel matters and needs to be trusted as coming from the sensor rather than from an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL-based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.


    • Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly recompressed even further, with zero loss of quality (there’s a small sketch of this after the list). This alone means there are benefits to adoption, if nothing else for archiving and serving old stuff.
    • JPEG XL encoding and decoding is much, much faster than pretty much any other format.
    • The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.
    • The format anticipates being useful for both screen and print. WebP, HEIF, and AVIF are all optimized for screen resolutions and fall short at the truly high resolutions appropriate for prints. JPEG XL isn’t ready to replace camera RAW files, but there’s room in the spec to accommodate that use case, too.
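    A minimal sketch of that first point, assuming libjxl’s reference cjxl tool is installed (filenames are placeholders):

    ```python
    import subprocess

    # Transcode an existing JPEG to JPEG XL; the original JPEG remains
    # bit-for-bit reconstructible. --lossless_jpeg=1 is cjxl's default,
    # shown explicitly here for clarity.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl", "--lossless_jpeg=1"], check=True)

    # A fresh lossless encode (e.g. of a screenshot); distance 0 means lossless.
    subprocess.run(["cjxl", "screenshot.png", "screenshot.jxl", "-d", "0"], check=True)
    ```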

    It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.







  • this seems really dangerous for anyone who might get stranded.

    I’d take a step back and say no, this isn’t actually as bad as some of the comments seem to suggest.

    In the vast, vast majority of building emergencies, it’s safe to shelter in place. Modern building codes generally prevent fires from spreading too far and isolate smoke to a specific part of the building.

    Then, for certain types of catastrophic disasters, being able-bodied doesn’t actually help, as people can still get stuck and need rescue from firefighters anyway.

    You need some kind of disaster Goldilocks zone, where things are bad enough that quick evacuation is helpful but not so bad that evacuation isn’t feasible, before it starts making a difference.

    And in those situations, many buildings do have evacuation chairs in the stairwells. Stronger people can assist with carrying others down the stairs, too; there are a lot of variations on two-person or single-person carries, depending on exactly what the mobility limitation is. If you live or work with or around people with mobility issues, it’s worth looking them up, or maybe taking a first aid/survival class or something.




  • reaching those limits requires more manpower in creating assets to populate these larger worlds

    Yeah, when a character design consisted of something like 30 sprites at 32x32 pixel resolution, it could basically be done by an artist in a day. Then the “physics” of gameplay could be simply defined in a two-dimensional space.

    Going to 3D, with different models of clothing, armor, weapons, and hair, requires a lot more conscious artistic choices within a broad but consistent visual design language. Each time resolution, polygon count, frame rate, or hit box count goes up, the complexity of the visual design, gameplay design, etc. goes up accordingly.

    Physical realism in games increases development cost exponentially with each generation of tech. A lot of studios simply stepped out of that rat race and went toward cartoony visuals and physics that don’t even pretend to be realistic.


  • After XP, though, the work in the core OS was basically done

    There were a lot of big things happening in computer hardware: the migration to 64-bit instruction sets and memory addressing, multicore processors, the rise of the GPU. The security paradigm also shifted toward less trust between programs, with a lot of implementation work on encryption and permissions.

    So I’d argue that Windows has some pretty different things going on under the hood than it did 20 years ago.