Telegram’s server-side software is closed source, owned and run by them exclusively, so they really have no room to talk. WhatsApp doesn’t even have OSS clients, so they’re even worse in that regard.
Here’s one I have saved in my shell aliases.
nscript() {
    local name="${1:-nscript-$(printf '%s' $(echo "$RANDOM" | md5sum) | cut -c 1-10)}"
    echo -e "#!/usr/bin/env bash\n#set -Eeuxo pipefail\nset -e" > ./"$name".sh && chmod +x ./"$name".sh && hx ./"$name".sh
}
alias nsh='nscript'
alias nsh='nscript'
Admittedly much more complicated than necessary, but it’s pretty full-featured. The first line builds the new script’s filename: it uses the name you pass in, or falls back to “nscript-” plus a random 10-character hash.
The second line writes out the shebang and a few oft-used bash flags, makes the file executable, and opens it in my editor (Helix in my case).
The third line is just a shortened alias for the function.
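For illustration, here’s what the function effectively does for a given name, with the editor step (hx) left out so it can run anywhere; “deploy” is just a placeholder name:

```shell
# What `nsh deploy` effectively does, minus opening the editor:
name="deploy"
printf '#!/usr/bin/env bash\n#set -Eeuxo pipefail\nset -e\n' > "./$name.sh"
chmod +x "./$name.sh"
head -n1 "./deploy.sh"   # → #!/usr/bin/env bash
```

The `set -e` default means any generated script aborts on the first failing command; the commented `-Euxo pipefail` line is there to uncomment when you want stricter tracing.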
Seems he’s revealing that he is either Bruce Wayne or Bane. As they’re the only two to ever escape from the pit; historically speaking.
Probably not exactly what you’re looking for, but for my personal use I just set up a repo in my git forge (gitea in my case) with a bunch of markdown files in various folders and a Hugo theme.
Every time I want to update a document, I click the link at the bottom of the “Wiki” page and edit it in Gitea’s WYSIWYG editor; similar process if I want to make a new document. When I save the changes, a CI job (native to Gitea/GitHub) uses Hugo to build the Markdown docs into a full website and syncs it to a folder on one of my servers, where it’s picked up by a web server.
Sounds complicated when I type it all out, but the only thing I can reasonably expect to become a deal breaker is the Hugo software itself, of which there are archived versions; and even if there weren’t, Hugo’s input is just Markdown, so I could repurpose it however I see fit.
You could probably do something similar with other SSGs, or even use GitHub’s Pages feature, though that adds a failure point if/when they decide to sunset or monetize it.
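For reference, the CI job described above can be sketched as a Gitea/GitHub Actions workflow; the branch, rsync target, and server path are all placeholders, not my exact setup:

```yaml
# .gitea/workflows/build.yml (or .github/workflows/) — a sketch
name: build-wiki
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: peaceiris/actions-hugo@v2   # community action; pin as you see fit
        with:
          hugo-version: 'latest'
      - run: hugo --minify
      - run: rsync -az --delete public/ deploy@my.server:/var/www/wiki/
```

Gitea Actions is intentionally compatible with the GitHub Actions workflow format, which is why the same file works on either forge.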
It’s because the original image macro this is based on was about piracy, saying something along the lines of “I bring a certain ‘just torrent it’ vibe to the conversation that the RIAA just doesn’t like.”
Their reuse of the macro is indirectly an answer to, or a continuation of, the original, and can be read as acknowledging its message.
Even if you need something just once, just install it and then uninstall it, takes like 10 seconds.
apt install foo && apt remove foo
That’s essentially what nix-shell -p does. It’s not a special feature of Nix, just Nix’s way of doing the above.
Actually using it, though, is pretty convenient; it disappears on its own when I exit the shell. I used it just the other day with nix-shell -p ventoy to install Ventoy onto an SSD, and I may not need that program again for years. Just used it with audible-cli to download my library and strip the DRM with ffmpeg. Probably won’t be needing that for a while either.
The other thing to keep in mind is that since Nix is meant to be declarative, everything goes in a config file, which screams semi-permanent. Having to do that for ventoy and audible-cli would be pretty inconvenient. That’s why nix-shell -p exists: given how Nix works, you need a subcommand for temporary, one-off operations.
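For contrast, the declarative route would mean putting the package in your system config and rebuilding; a sketch assuming NixOS and that the package is named ventoy in nixpkgs:

```nix
# /etc/nixos/configuration.nix — the permanent, declarative install
environment.systemPackages = with pkgs; [
  ventoy   # overkill for a one-off tool; hence nix-shell -p
];
```

After editing you’d run nixos-rebuild switch, and later edit the file again to remove the package, which is exactly the friction nix-shell -p avoids.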
If you’re OK with just file storage, SFTPGo has been solid for me for years now. It does SFTP, FTP, and WebDAV (like Nextcloud). The web UI isn’t as pretty, but it’s fast. Mobile apps will be various sync apps with SFTP or WebDAV support; on Android, FolderSync Pro is pretty good for keeping documents and pictures backed up.
They did. It’s called AirMessage, and it’s been around for almost three years now.
Just this one https://wallhaven.cc/w/2y2wg6
Have it hard-coded in my config so I’m less likely to faff around with my wallpaper instead of doing something productive. Less cognitive load IMO
East of the West
Wow thanks for mentioning this! I just got the first chapter and it looks to be amazing! I don’t know if it’s just Image comics style in general or not but the art style reminds me strongly of Brian K. Vaughan and Fiona Staples’s Saga as well.
That looks so nice!
Just another option: if you already know, or are willing to learn, how to write documents in Markdown (the format Lemmy supports) and to pick up some infrastructure setup, it can be free to very cheap to host a blog on something like netlify.app, GitHub Pages, or others. There are plenty of static site generators out there that can be both relatively easy and very powerful.
I currently have a private blog set up on a cloud provider that just takes markdown documents and builds those along with some templates and webpage code to create a site like this. Although I have mine hosted on a VPS with my own domain, it’s completely possible to use something like github pages, netlify.app, etc. for that. They’re both free afaik to host on, but if you want to pay for a dedicated service they are usually between 2 and 5 USD per month.
Edit: The option above isn’t activitypub software, sorry for not realizing that immediately, but it is federated in a way I suppose.
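If you go the netlify.app route mentioned above, the build configuration is small; a sketch assuming Hugo as the generator (command and output directory vary by SSG):

```toml
# netlify.toml — sketch, not a real deployment
[build]
  command = "hugo"
  publish = "public"
```

GitHub Pages is similar in spirit: a small workflow or settings toggle that runs the generator and serves the output directory.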
Raft by Stephen Baxter. Part of the Xeelee sequence series.
Currently on book 2: Timelike Infinity and I’m liking it quite a bit.
Well that’s disappointing. I’ll have to investigate further I guess. I was really hoping to set it up (at least initially) without any type of media storage.
Oh I see, I definitely misunderstood what you were asking. How is your caddy server set up? Is it serving one site per subdomain (site.your.domain) or is it one site per path (your.domain/site/)? I am running traefik so I probably won’t be able to help with specifics, but it’s worth a shot.
The way I have my monitoring set up is to poll the containers from behind the proxy layer. E.g. if I’m trying to poll Portainer, defined like this:

---
services:
  portainer:
    ...

with the service name portainer, then from uptime-kuma within the same Docker network the monitor target would look like portainer:whatever_port.
Can confirm this is working correctly to monitor that the service is reachable. It doesn’t, however, ensure that you can reach it from your computer, since that depends on your reverse proxy being configured correctly and up, but that’s what I wanted in my case.
Edit: If you’re wanting to poll the HTTP endpoint, you would prepend the scheme, like http://whatever_service:whatever_port
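To make the service-name polling above concrete, a minimal compose sketch (images are real, but the monitored port is a placeholder you’d check for your service):

```yaml
# Both containers share the default compose network, so uptime-kuma
# can resolve "portainer" by its service name.
services:
  portainer:
    image: portainer/portainer-ce
  uptime-kuma:
    image: louislam/uptime-kuma
# In uptime-kuma, the monitor target would then be e.g.
# http://portainer:9000
```

The key point is that Docker’s embedded DNS only resolves service names for containers attached to the same network, which is why this works without publishing any ports to the host.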
I believe Pictrs is a hard dependency and Lemmy just won’t work without it, and there is no way to disable the caching.
I’ll have to double check this but I’m almost certain pictrs isn’t a hard dependency. Saw either the author or one of the contributors mention a few days ago that pictrs could be discarded by editing the config.hjson to remove the pictrs block. Was playing around with deploying a test instance a few days ago and found it to be true, at least prior to finalizing the server setup. I didn’t spin up the pictrs container at all, so I know that it will at least start and let me configure the server.
The one thing I’m not sure of, however, is whether any cached data gets written to the container layer in lieu of being sent to pictrs, as I didn’t get that far (yet). I haven’t seen any mention that the backend even does local storage, so I’m assuming that no caching takes place when pictrs is not being used.
Edit: Clarifications
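From what I remember, the config.hjson edit described above would look roughly like this; the key names inside the block are from memory and may differ between Lemmy versions:

```hjson
{
  # removing (or commenting out) the pictrs block disables the integration
  # pictrs: {
  #   url: "http://pictrs:8080/"
  #   api_key: "API_KEY"
  # }
  # ...rest of the config unchanged...
}
```

If this holds, you’d also drop the pictrs service from your compose file, as described above.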
Thanks for sharing! I’ll definitely be looking into adding this to my infra alerting stack. Should pair well with webhooks using ntfy for notifications. Currently just have bash scripts push to uptime-kuma for disk usage monitoring as a dead man trigger, but this should be better as a first-line method. Not to mention all the other functionalities it has baked in.
Edit: It would also be great if there were an already-compiled binary in each release so I could use it bare-metal, but the container on ghcr.io is most likely what I’ll be using anyway. Thanks for not publishing only to Docker Hub.
You don’t. It works perfectly fine OOTB. Can’t speak for the Pinecil V2 with Bluetooth and the companion app, but I have the V1 and the software has been stable and bug-free enough that I’ve never even given a thought to updating the firmware on it.