

Screw them! We’ll build our own peertube, with blackjack, and hookers
First you need to understand the difference between Bedrock edition and Java edition. Bedrock is for consoles, phones and Windows; it’s the default version that Microsoft pushes now. It’s not compatible with Java clients or Java servers. So if you’re planning to have the kid play on Switch or something like that, a Java server is not going to work.
That means Bedrock, unless you use the Geyser tool someone else mentioned to allow Bedrock clients to connect to a Java server. I have no experience with that and am not sure how reliably it would actually work, as they are quite different versions of the game. I have no idea how it would handle mods that are not supported by Bedrock clients, for example.
Assuming you’re clear on all that, you have a few options for Java servers. You can run a plain-Jane vanilla server (the official one) fairly easily, but it has some limitations and it’s not the most manageable solution. Modded servers are much more capable and flexible, but can also be a little more complex in some cases. Overall, I’ve found Purpur the easiest and most sustainable choice; at least a few years ago, when I was evaluating options, most people seemed to agree it was the best one. Fabric is another great option, especially if you want to use mods! Fabric has a huge modding ecosystem, second only to Forge.
However, I also need to mention that I’ve got a heavily modded Forge-based server running right now, and I really didn’t find it any more difficult to set up than any of the others, even though people usually complain about Forge being “difficult” somehow. So take that for what it’s worth. I don’t think it matters THAT much which server software you use, unless you have specific requirements around things like mods, spawn protection, and other kinds of configuration that are probably most useful for large, public servers.
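Whichever server jar you pick, launching it looks roughly the same. A minimal sketch, assuming a Java Edition jar named server.jar and a machine with a few GB of RAM to spare (the filename and heap sizes are placeholders, not from anyone’s actual setup):

```shell
# Accept the EULA once (the server writes eula.txt on first run and
# refuses to start until it says true), then launch with explicit heap
# limits; "nogui" skips the graphical console on headless boxes.
echo "eula=true" > eula.txt
java -Xms2G -Xmx4G -jar server.jar nogui
```

Modded servers (Forge, Fabric, Purpur) each ship their own launcher jar or start script, but the memory flags and eula.txt step carry over.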
If you do want to run a bedrock server, it gets a little more complicated as you might have to break some things out of the walled garden. I haven’t had a lot of success with that but I understand it is possible.
Florida is basically the unofficial US capital now, so it would be confusing and ambiguous to have it associated with the traditional forms of unexpected insanity. Now it’s going to be an entirely new kind of unexpected insanity, so Ohio has been selected to represent the old kind of unexpected insanity that Florida used to represent.
Matrix and its implementations like Synapse have a very intimidating architecture (I’d go as far as to call most of the implementations somewhat overengineered), and the documentation ranges from inconsistent to horrific. I ran into this exact situation myself. Fortunately, for this particular step, you’re overthinking it. You can use any random string you want. It doesn’t even have to be random, just as long as it matches what you put in the config file. It’s basically just a temporary admin password.
Matrix was by far the worst thing I’ve ever tried to self-host. It’s a hot mess. Good luck, I think you’re close to the finish line.
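For what it’s worth, if the string in question is Synapse’s registration_shared_secret (which matches the “any string, as long as it matches the config” description), the usual dance looks like this (the config path is an example; yours may differ):

```shell
# Generate a random secret, then put the output in homeserver.yaml as:
#   registration_shared_secret: "<the generated string>"
openssl rand -hex 32

# Synapse's bundled helper then creates the first (admin) account; it
# reads the secret from the config file, so the two must match.
register_new_matrix_user -c /etc/synapse/homeserver.yaml http://localhost:8008
```

Once you have a real admin account you can delete or rotate the secret; it only exists to bootstrap registration.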
While it sounds a bit hacky, I think this is an underrated solution. It’s actually quite a clever way to bypass the whole problem. Physics is your enemy here, not economics.
This is kind of like trying to find an electric motor with the highest efficiency and torque at 1 RPM. While it’s not theoretically impossible, it’s not just a matter of price or design; it’s a matter of asking the equipment to do something it’s simply not good at, and asking it to do that really well. It can’t, certainly not affordably or without significant compromises in other areas. In the case of a motor, you’d be better off letting it spin at its much higher optimal RPM and gearing it down. Even though there will be a little loss in the geartrain, it’s still a much better solution overall, and that’s why essentially every low-speed motor is designed this way.
In the case of an ammeter, it seems totally reasonable to bring it up to a more ideal operating range by adding a constant artificial load. In fact, high-precision/low-range multimeters and oscilloscopes usually do almost exactly the same thing internally with their probes, just in a somewhat more complex way behind the scenes.
The end result is exactly the same.
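As a toy illustration of the artificial-load idea (all numbers here are invented): suppose the meter is only accurate above 1 A, but the load of interest draws tens of milliamps. Put a known constant dummy load in parallel, measure the combined current inside the meter’s happy range, and subtract:

```shell
measured_total=5.048   # amps read off the meter (hypothetical reading)
dummy_load=5.000       # amps drawn by the known artificial load
# The current of interest is just the difference.
awk -v t="$measured_total" -v d="$dummy_load" \
    'BEGIN { printf "load current: %.3f A\n", t - d }'
```

The catch, of course, is that the result is only as good as your knowledge of the dummy load, since any error in it lands directly on the small number you’re after.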
The difference is that you can install from an iso on a computer without an internet connection. The normal iso contains copies of most or all relevant packages; maybe not the very latest and most up-to-date ones, but the bulk is enough to get you started. The net install, as the name suggests, requires an internet connection to download packages for anything beyond the most minimal, bare-bones configuration. The connection will hopefully be nearly as fast as, if not faster than, the iso, and is guaranteed to have the latest updates available, which the iso may not. While such a connection is usually taken for granted nowadays, it is not always available or convenient in some situations and locations, and some hardware may have network problems that are difficult to resolve before a full system is installed, or that require specialized diagnostic tools only available as packages.
In almost all cases, the netinst works great and is a more efficient and sensible way to install. However, if it doesn’t work well in your particular situation, the iso will be more reliable, at the cost of some redundancy that wastes disk space and time.
Things like Windows updates and some large, complex software packages often come with similar “web” and “offline” installers that make the same distinction for the same reasons. The tradeoff is the same, and both options have valid use cases.
To be fair, in the case of something like a Linux ISO, you only need to be a tiny fraction of the target, or you may not need to be the target at all to become collateral damage. You only need to be worth $1 to the attacker if there are 99,999 other people downloading it too, or if there’s one other guy who is worth $99,999; and you don’t need to be worth anything if the person or organization they’re actually targeting is worth $10 million. Obviously there are other challenges involved in attacking a torrent swarm, like the fact that you’re unlikely to have a sole seeder with corrupted checksums, and a naive implementation will almost certainly end up producing a corrupted file instead of a working attack. But for someone with the resources and motivation to plan something like this, it could get dangerous pretty quickly.
Supply chain attacks are increasingly a serious risk, and we need to start hardening things like the checksums we use against attackers, who are realizing that this can be a very effective and relatively cheap way to distribute malware widely.
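On the defensive side, the baseline is at least checking the published hashes, and ideally the signature on the hash file, since a hash alone only protects against corruption, not against an attacker who controls both the image and the checksum list. The mechanics look like this, using a local stand-in file for illustration (with a real distro image you’d also verify the signed checksum file, e.g. with gpg --verify, against the project’s published key):

```shell
# Record a SHA-256 hash for a file, then verify it later. sha256sum
# prints "image.iso: OK" if the file still matches the recorded hash.
echo "pretend this is an ISO" > image.iso
sha256sum image.iso > SHA256SUMS
sha256sum --check SHA256SUMS
```

The important habit is fetching the checksum (and its signature) from a different channel than the image itself, so one compromised mirror or torrent can’t rewrite both.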
I still use Nextcloud for syncing documents and other relatively simple stuff. But as more and more got added, I started getting glacial sync times, heavy CPU usage, and lots of conflicts. For more demanding sync tasks involving huge numbers of files, large file sizes, and rapid changes, I’ve started using Syncthing and am much, much happier with it. Nextcloud sync seems to be a jack of all trades, master of none; Syncthing is a one-trick pony that does that trick very, very well.
The setting you described sounds like a motherboard manufacturer issue. There’s no reason for it not to default to “auto” unless that somehow limits something else they wanted to have on by default. They choose the defaults, and they chose that one, even if it’s stupid. Either that, or you set it somehow previously and didn’t realize or forgot you did.
I feel like you are the one who is confusing a “NAS device” or “NAS appliance” (a device specifically designed and primarily intended to provide NAS services, i.e. its main attribute is large disks, with little design weight given to CPU, RAM, or other components beyond what NAS service requires) with the NAS service itself, which can be provided by any generic device capable of both storage and networking, although often quite poorly.
You are asserting that the term “NAS” in this thread refers exclusively to the former device/appliance; everyone else is assuming the latter. In fact, both are correct, and context suggests the latter. I’m sure, given your behavior in this thread, you will promptly reply that only your interpretation is correct and everyone else is wrong. If you want to assert that, go right ahead and make yourself look foolish.
You can also automate this with autossh which is designed for exactly this kind of persistent tunnel. Although a simple “while” loop might seem like the intuitive way to keep it running, autossh is very reliable and takes care of all the corner cases for you.
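A minimal sketch of that kind of persistent tunnel (host, user, and ports are placeholders):

```shell
# -M 0 disables autossh's legacy monitor port and relies on ssh's own
# keepalives instead; ExitOnForwardFailure makes ssh exit (so autossh
# restarts it) if the port forward can't actually be established.
autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 8080:localhost:80 user@example.com
```

This sets up a reverse tunnel exposing local port 80 on the remote host’s port 8080, and autossh re-establishes it whenever the connection drops, which is exactly the corner-case handling a bare while loop tends to get wrong.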
It is mostly a myth (and a scare tactic invented by copyright trolls and encouraged by overzealous virus scanners) that pirated games are always riddled with viruses. They certainly can be, if you download them from untrustworthy sources. But if you’re familiar with the actual piracy scene, you have to understand that trust is and always will be a huge part of it, and ways to build trust are built into the community; that’s why trust and reputation are valued even higher than the software itself. The names embedded into the torrents, the people and release groups they come from, the sources where they’re distributed: these have meaning to the community, and this is why. Nobody’s going to blow 20 years of reputation to sneak a virus into their keygen.

All those virus scans that say “Virus detected! ALARM! ALARM!” on every keygen you download? If you look at what was actually detected, and dig deep enough through the obfuscated scary-severity wall of text, you’ll find that in almost all cases it’s just a generic, non-specific detection of “tools associated with piracy or hacking” or something along those lines. They all have their own ways of spinning it, but in every case it’s literally detecting the fact that it’s a keygen, and saying “that’s scary! You don’t want pirated illegal software on your computer, right?! Don’t worry, I, your noble antivirus program, will helpfully delete it for you!”
It’s not as scary as you think; they just want you to think it is, because it helps drive people back to paying for their software. It’s classic FUD tactics, and they’re all part of it. Antivirus companies are part of the same racket; they want you paying for their software too.
That’s what LCARS means, it’s the name of the computer console in Star Trek. In the show, it stands for “Library Computer Access and Retrieval System” although it’s often used for stuff other than the library computer too.
Almost like the context matters and the world isn’t entirely made up of black and white binary choices because we’re not robots or computers and discrete logic does not apply to human moral arguments.
Never had a single functional problem with Nextcloud, other than the fact that it’s oppressively slow with the amount of files I’ve shoved into it. Mind you, I also don’t use MySQL/MariaDB, which I consider a garbage-tier DB. Despite Postgres not being the “recommended DB” for Nextcloud, it works perfectly for me. Maybe that’s the difference.
Nah, I wanted to love NixOS, and granted it seems like a perfect fit for my recommendation, but a bunch of things about it rub me the wrong way. It’s just not for me. I’ve always been most comfortable with Debian and that’s what my setup script is designed for. Lots of apt.
I would need to factory reset the whole server for that, which would be … highly inconvenient for me. It took me quite a long time to get everything working, and I don’t wanna lose my configuration.
This is your actual problem, the one you need to solve. Reinstalling your server should be as convenient as installing a basic OS and running a configuration script. It needs to be reproducible and documented, not some mystery black box of subtle configurations you forgot about ages ago. A nice, idempotent configuration script is both convenient and a self-documenting record of every change you’ve ever made to your server.
Once you can do that, adding whatever encryption you want is just a matter of finding the right sequence of commands, testing it (in another docker perhaps) and then running your configuration script to migrate your server into the desired state.
$400 is pretty steep. That is probably more than a lot of the computers this would be plugged into.
I don’t use Arch (shocking, I know), so I can’t help you directly, but I will recommend instead that you invest some effort in learning about the Linux networking stack. It’s very powerful and can be very complicated, but usually the only thing you need to do to get it working is something very simple. Basically all distributions use the Linux kernel networking stack under the hood, usually with only a few user-interface sprinkles on top. Sometimes those can get in your way, but usually they don’t. All the basic tools you need should be accessible through the terminal.
The most basic things you can check are
ip a
which should show a bunch of interfaces; the one you’re particularly interested in is obviously the wired interface. This will tell you whether it’s considered <UP> and whether it has an “inet” address (among other things). If it doesn’t, you need to get the interface configured and brought up somehow, usually via DHCP. NetworkManager is responsible for this in most distributions, and Arch seems to have some information here. If those things look good, the next step is to look at
ip r
which will tell you the routes available. The most important one is the default route; it tells your system where to send traffic that isn’t local, usually an internet gateway, which should’ve been provided by DHCP and is usually your router, but could also be a firewall, the internet modem itself, or something else. The route lists the gateway’s IP and the interface it can be found on. Assuming that looks good, see if you can
ping
the gateway IP. If your packets aren’t getting through (and back), that suggests something is wrong at a lower level: the kernel firewall might be dropping the packets (configuring the kernel firewall is a whole topic in itself), one of the IPs might be invalid or not registered properly on the network, or the physical wiring or the hardware on either end might be faulty or misconfigured. If you can ping the gateway successfully, the next step is to see if you can ping the internet itself by IP.
ping 8.8.8.8
will reach out to one of Google’s DNS servers, which is what I usually use as a quick test. If you get no response, then either your traffic isn’t being forwarded out to the internet, or the internet isn’t able to get responses back to you. Or Google is down, but that’s not very likely. If you’ve gotten this far and 8.8.8.8 is responding, then congratulations, you HAVE internet access! What you might NOT have is DNS service, which is what translates names into IP addresses. A quick test for DNS is simply to
ping google.com
and like before, if that fails, either your DNS is broken or Google is down, which is still not very likely. Hopefully this helps you at least start to find out where things are going wrong and steer your investigation in the right direction. Good luck!