All the public Piped instances are getting blocked by YouTube, but do small self-hosted instances that are only used by a handful of users, or just yourself, still work? I’m thinking of just self-hosting it.

On a side note, if I do it, I’d also like to install the new EFY redesign. Or is that branch too far behind?

Edit: As you can see in the replies, private instances still work. I also found the instructions for running the new EFY redesign here.

  • Lucy :3@feddit.org · 26 points · 3 months ago

    Yes. My private instance works perfectly, and I’m so happy that I chose to self-host it rn lol. Currently I’m on a quest to self-host even more Piped components, e.g. RYD_Proxy and sponsorblock-mirror, make them buildable as a PKGBUILD, and make them compatible with Unix sockets: https://git.30p87.de/piped

    • Fisch@discuss.tchncs.de (OP) · 7 points · 3 months ago

      Thanks, I’m gonna self-host it then. What you said about the Piped components sounds interesting; is there a list of them somewhere?

      • Lucy :3@feddit.org · 8 points · 3 months ago

        Usually you’ll just need the official Docker setup. It contains the frontend, backend and proxy. Technically, you only need the backend right now, but taking load off the official servers by using a self-hosted frontend is probably beneficial too, with no drawbacks. The proxy, however, is the component that anonymizes you: if you always access YouTube through your own single proxy, even on the go, that actually deanonymizes you further (kinda; it’s still better than the official app). If you have a list of proxies you can rotate through, you’d be anonymous again (rough sketch of that idea at the end of this comment).
        Then there are other components, which you can find through the larger config file of the main Piped-Backend or through TeamPiped’s GitHub profile itself. But again: stuff like RYD-proxy is only anonymous and beneficial with a rotating IP/proxy. https://github.com/TeamPiped/region-restriction-checker is good to self-host though, as (afaik) it helps decentralize things to circumvent censorship (e.g. copyright-infringement blocks, the rats). But in order to configure your instance to use the region-restriction-checker, you’ll need the larger, more complete config file.

        TL;DR:

        https://github.com/TeamPiped/Piped-Docker is basically enough. Otherwise just look through the individual repos and pick whatever sounds good. Only self-host the proxy stuff if privacy isn’t the point or if your IP is obscured.
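
        Out of curiosity, here’s what rotating through a proxy list could look like. This is just an illustration in Python with made-up proxy URLs, not how Piped itself does it:

        ```python
        import itertools
        import requests

        # Hypothetical proxy URLs; a single self-hosted proxy would tie
        # every request to one IP, which is exactly the problem above.
        PROXIES = [
            "http://proxy1.example:8080",
            "http://proxy2.example:8080",
            "http://proxy3.example:8080",
        ]
        proxy_cycle = itertools.cycle(PROXIES)

        def fetch(url: str) -> requests.Response:
            # Each call goes out through the next proxy in the rotation.
            proxy = next(proxy_cycle)
            return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
        ```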

        • Fisch@discuss.tchncs.de (OP) · 2 points · 3 months ago

          About that last part: it is about privacy, but since I currently have to use YouTube directly because the public Piped instances don’t work, self-hosting is still an improvement. Besides, I’ve started writing a GTK 4 Piped client and I need some way to test it.

      • Lucy :3@feddit.org · 6 points · 3 months ago

        I have dozens of services, and most of them start their own HTTP server, using a regular TCP socket bound to localhost and a port. As most of them are web services, I run out of standard ports pretty fast: 80, 8000, 8080, and then 8069, 8070 etc. Keeping track is a pain, and Docker just makes it worse. Also, all non-web services have standard ports - 25 and 465 for SMTP/SMTPS - which nmap identifies. In my current state, an attacker could just open a random port on my server and I wouldn’t notice.

        Unix sockets are basically just regular files that HTTP traffic is written to and read from. So e.g. gitlab-puma or piped-proxy creates the file /run/gitlab/gitlab.socket or /run/piped/proxy.socket respectively, and my reverse proxy (nginx) communicates with the service through that socket, just as it would through a regular TCP socket on localhost and a port. Except Unix sockets are easily identifiable (they are named and placed in directories depending on their service) and can be access-controlled much better: instead of anything on the whole network being able to talk to the service (assuming no firewall on the device, which usually sits behind a consumer-grade router), only members of the http group (nginx) or the service’s own user can read/write the socket. Assuming nginx is safe, and root, http and the service’s user are not compromised, not even an attacker with access to the server can read any traffic, as it’s encrypted (HTTPS) up to nginx and not readable to other users through the socket file. It’s also a bit more performant.

        The catch is: very few programs support Unix sockets, and many of the ones that do implement them incorrectly. Usually I create a specific user for a service (or a sysusers.conf file does), under which the service runs in systemd and which therefore owns the socket file. The http user is then added to the service user’s group, or the file’s group is set to http. With 770 (or 660) permissions (read and write for the owner and for all users in the group, including http) everything would be fine; however, the socket is usually created with 755, so it’s only writable by the owner and not by the group, i.e. not by http, which makes communication impossible. And since creating the file with the correct ownership and permissions beforehand makes the service believe the socket is already in use, I usually have to patch the actual program itself. Maybe I can do something with systemd’s ExecStartPost etc. though (see the sketch at the end of this comment for the ownership/permission setup).

        And Piped-Backend’s library does not support Unix sockets at all, so I’ll need to extend the incredibly complicated library itself to get what I want. Damn.
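
        For illustration, a minimal Python sketch of the ownership/permission setup described above (the socket path and the http group name are just the examples from this comment):

        ```python
        import os
        import shutil
        import socket

        SOCKET_PATH = "/run/piped/proxy.socket"  # example path from above

        # Remove a stale socket file, otherwise bind() fails with "address already in use".
        if os.path.exists(SOCKET_PATH):
            os.unlink(SOCKET_PATH)

        server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server.bind(SOCKET_PATH)

        # Hand the socket file to the http group (nginx) and allow read/write
        # only for the service user and that group: 660 instead of the usual 755.
        shutil.chown(SOCKET_PATH, group="http")
        os.chmod(SOCKET_PATH, 0o660)

        server.listen()
        # nginx can now talk to unix:/run/piped/proxy.socket while other
        # local users can't read or write the socket at all.
        ```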

        • LainTrain@lemmy.dbzer0.com · 4 points · edited · 3 months ago

          So basically you’re using Unix sockets between nginx and the internal services for finer-grained access control and because you’re running out of ports. That’s really cool! I’ll have to read up on this myself.