I am also ‘Andrew’, the admin of this server. I’ll try to remember to only use this account for posting stuff.

  • 0 Posts
  • 74 Comments
Joined 11 months ago
Cake day: February 17th, 2025


  • It’s straightforward enough to do in back-end code - just reject a query if parameters are missing - but I don’t think there’s a way to define a schema that then gets used both to auto-generate the documentation and to validate the requests. If a request fails validation, the back-end never sees it.

    For something like https://freamon.github.io/piefed-api/#/Misc/get_api_alpha_search, the docs show that ‘q’ and ‘type_’ are required, and everything else is optional. The schema definition looks like:

    /api/alpha/search:
        get:
          parameters:
            - in: query
              name: q
              schema:
                type: string
              required: true
            - in: query
              name: type_
              schema:
                type: string
                enum:
                  - Communities
                  - Posts
                  - Users
                  - Url
              required: true
            - in: query
              name: limit
              schema:
                type: integer
              required: false
    

    required is a simple boolean for each individual field - you can say every field is required, or no fields are required, but I haven’t come across a way to say that at least one field is required.
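    Since the schema can’t express an ‘at least one of’ rule, that check ends up in back-end code instead. A minimal sketch of what it might look like - the function and parameter names here are hypothetical, not PieFed’s actual implementation:

```python
# Hypothetical back-end check for an 'at least one of' rule that the
# OpenAPI 'required' boolean can't express on its own.
AT_LEAST_ONE = {"post_id", "user_id", "community_id"}

def validate_query(params: dict) -> tuple[bool, str]:
    """Return (ok, error message) for the incoming query parameters."""
    if not AT_LEAST_ONE & params.keys():
        allowed = ", ".join(sorted(AT_LEAST_ONE))
        return False, f"provide at least one of: {allowed}"
    return True, ""
```

    Each parameter in the group can still be marked required: false in the schema; the back-end just rejects the query before doing any real work if the whole group is absent.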


  • PieFed has a similar API endpoint. It used to be scoped, but was changed at the request of app developers. It’s how people browse sites by ‘New Comments’, and - for a GET request - it’s not really possible to document and validate that an endpoint needs at least one of something (i.e. that none of ‘post_id’, ‘user_id’, or ‘community_id’ are individually required, but there needs to be at least one of them).

    It’s unlikely that these crawlers will discover PieFed’s API, but I guess it’s no surprise that they’ve moved on from basic HTML crawling to probing APIs. In the meantime, I’ve added some basic protection to the back-end for anonymous, unscoped requests to PieFed’s endpoint.


  • This is the kind of thing that apps handle well - I viewed your post from Voyager, and just had to click the sopuli.xyz link to get it resolved to my instance.

    For the web browser experience: that link used to be a bit more visible (you can currently get it from community sidebars, but it used to be in post sidebars too). Someone complained, though, and it was removed from post sidebars, so I assume they’d have the same complaint if it were re-surfaced. You could just bookmark it, of course.

    The page itself shouldn’t be slow to load (it’s a very lightweight page that’s not doing anything until you click ‘Retrieve’). It doesn’t immediately redirect you to the post because the assumption was that you might want to retrieve more than one post at a time.

    That said, if you’re already viewing a page on the ‘wrong’ instance, then being able to change ‘https’ to ‘web+pf’ and have it work sounds cool (although it looks like Chrome makes highlighting ‘https’ into a 2-click experience).


  • No, I was suggesting that peertube.wtf should have asked piefed.zip for the details of the comment. That would be the most authoritative place to ask, and that’s what PieFed, MBIN, and Friendica do.

    For the comment that you made, piefed.zip would’ve signed it with your private key, and sent out 2 copies - one to technics.de and one to tilvids.com. After receiving it, technics.de is no longer involved, but tilvids.com would’ve sent the comment out to all the subscribers of ‘The Linux Experiment’. We can tell they did in fact do that, because the comment you made on piefed.zip is visible on piefed.social.

    tilvids.com doesn’t have your private key, though, and it doesn’t sign the comment with the channel’s private key either, so the question is then not ‘was the data sent out?’, but rather ‘how do remote instances know to trust that this comment was actually made by this person?’. If the author was also on tilvids.com, then it has access to the private key, so the comment can be signed when it’s sent out. If the author was from Mastodon, their comments include a cryptographic signature inside the JSON, so that can be used. For all other authors, the best thing to do - I would think - is to grab it from the source.
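    ‘Grab it from the source’ here means fetching the comment’s canonical JSON from its origin instance with an ActivityPub object fetch. A minimal sketch (the URL is illustrative, and a real server would also HTTP-sign the request):

```python
# Sketch of fetching an object's canonical copy from its home instance:
# an ordinary GET for the object's id, asking for ActivityPub JSON.
import urllib.request

def build_fetch_request(object_id: str) -> urllib.request.Request:
    """Prepare a GET for the object's home instance, asking for AP JSON."""
    return urllib.request.Request(
        object_id,
        headers={"Accept": "application/activity+json"},
    )

# e.g. build_fetch_request("https://piefed.zip/comment/123"), then send it
# with urllib.request.urlopen() and parse the JSON response - that's the
# 'ask the most authoritative place' step.
```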

    I don’t actually know what other PeerTube instances do in this circumstance though. Comparing the number of comments on the host instance, vs. other PeerTube instances, vs. PieFed, reveals no discernible pattern. For ‘The Linux Experiment’, piefed.social has comments from misskey, from piefed, and from mbin that are absent from remote PeerTube instances. Hopefully, someone who’s familiar with their code can shed more light on their internal federation - if there’s something feasible we can do to guarantee comment visibility on remote PeerTube instances, then we’ll do it.

    EDIT: I’ve just been digging through my server logs for requests of comments I made from PeerTube instances, and discovered tube.alphonso.fr - they have your comment: https://tube.alphonso.fr/w/eSYuduJSbZ9s7K4pFT3Ncd - so how fully PeerTube instances federate comments might be a policy decision that admins set, or it might just be buggy behaviour.


  • For this particular case, it’s more an instance of the software not interfering (in the sense of not changing things it doesn’t understand).

    If Lemmy doesn’t implement flairs, then community updates from it won’t overwrite flairs set on PieFed’s copy of those comms. Also, when a PieFed user sends a comment to a Lemmy community, Lemmy will just wrap it in an ‘Announce’ activity and send it out to all followers. It would be against the spec to change the content of anything it’s Announcing, so followers who receive the comment and happen to be on PieFed instances will interpret it fully, whereas Lemmy itself will just ignore any fields in the JSON that it doesn’t have a use for.
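    The Announce relay works roughly like this - a hypothetical helper, not Lemmy’s actual code - with the key point being that the wrapped object is forwarded verbatim, extra fields and all:

```python
# Hedged sketch of the Announce relay: the community's host wraps the
# original activity unchanged, so fields it doesn't understand (e.g. a
# hypothetical 'flair') survive for receivers that do understand them.
def announce(community_actor: str, original_activity: dict) -> dict:
    """Wrap an activity in an Announce without touching its contents."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Announce",
        "actor": community_actor,
        "object": original_activity,  # forwarded verbatim
    }
```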


  • I’ll just remove the ‘freamon’ one when the auto-generated one is up to date.

    The manually-generated one had 5 missing routes, which I’ve since added.

    The auto-generated one at crust has about 48 missing routes. It’s the right approach, and I’ll help out with it when I can, but - for now at least - it makes no sense to redirect people to it (either automatically or via a comment).


    Some thoughts for @wjs018@piefed.social

    /site/instance_chooser probably doesn’t need to be a route. It’s just the data format returned by /site/instance_chooser_search. As a route, it’s returning the instance info for the site you’re querying, so if you want to keep it as a route, it should probably be called /site/instance_info or something.

    In the query for /site/instance_chooser_search, nsfw and newbie are both booleans. In the rest of the API, these are sent as ‘true’ or ‘false’, but they’re ‘yes’ and ‘no’ for this route. The newbie query parameter should probably be newbie_friendly, and in the response, monthsmonitored should probably be months_monitored.

    There’s no way to exclude communities from the response to /topic/list and /feed/list: if you don’t put ‘include_communities’ in the query, it defaults to True, but if you put ‘include_communities=false’ in the query, it ends up being True as well (because the mere presence of ‘include_communities’ in the query data is treated as True).
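    That behaviour is consistent with the query value being checked for presence (a non-empty string like ‘false’ is truthy) rather than parsed. A sketch of an explicit parse that would fix it - names here are hypothetical, not crust’s actual code:

```python
# The presence-check bug: bool('false') is True in Python, so testing the
# raw query value (or just whether the key exists) makes
# 'include_communities=false' behave like True. Parse the value explicitly.
def parse_bool(params: dict, key: str, default: bool = True) -> bool:
    """Interpret 'true'/'false' (and 'yes'/'1') query values explicitly."""
    raw = params.get(key)
    if raw is None:
        return default
    return raw.strip().lower() in ("true", "yes", "1")
```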