

A simple rm -rf says hello
Idk, seems like it’s sticking out of the back and isn’t symmetrical like the camera bar, so it might wobble or sit unevenly on a table. That’s way more obnoxious to me.
Obviously you need some redundancy in case the script gets corrupted. 5000 repetitions seems reasonable for such a high-quality work
Only true if you don’t know what you’re doing. The only reason any network is safe at all is NAT and the firewalls that come with it.
I don’t have to worry about devices on a local network as far as firewalls go; I can expose anything I want. In fact, I delete iptables at first sight on any new distro install or VM. As long as none of it is port forwarded and everything is behind NAT, it’s all okay. My network is my castle. Thanks technology! Thanks smart people for figuring this out!
NAT is not a security mechanism. If you set up a NAT with an otherwise permissive firewall, your router will happily forward any incoming packets destined for RFC 1918 addresses inside, no questions asked. I use this for a “lab” network that I sometimes want accessible from the bigger LAN - the lab router doesn’t have any rules for dropping incoming packets (only blocks some outgoing traffic), and all I have to do on the main router to get this working is to set a static route to the internal lab network through the lab router’s “external” IP.
And yes, practically it’s a security nightmare to have every computer’s IP accessible from the internet. If you go around configuring firewalls forever you might get it right, but oh boy, one mistake and you’re done for. Instead, consider NAT, the solution to all problems. I’m writing this from behind quadruple NAT right now and it’s honestly fairly easy to manage (I’ve been too lazy to change it), not that I’d necessarily advise more than one layer.
accept established, related; drop incoming. That’s all you need to get the same security as a NAT with a proper firewall. Outgoing connections will get marked and have return traffic allowed, everything incoming without related outgoing traffic gets dropped. Want to “port forward”? Add a rule that allows incoming traffic to a specific IP/port/protocol triplet. Done. Don’t know how to make sure a client stays reachable on a specific address? Give it that specific IP address in addition to the one it autogenerates. This was always possible with IPv4, too, it’s just that the tiny address space made it impractical to use.
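If you want that spelled out, here’s a toy Python sketch of the logic the rule expresses - the addresses, the tracking set, and the allow list are all made up for illustration, and a real firewall does this in the kernel with connection tracking:

```python
# Toy model of "accept established,related; drop incoming".
# Purely illustrative - real firewalls track connections in the kernel.

ALLOWED_INBOUND = {("192.168.1.10", 443, "tcp")}  # the "port forward" rule
outgoing_flows = set()  # flows we've seen leave the network

def note_outgoing(src_ip, src_port, dst_ip, dst_port, proto):
    """Record a flow when a packet leaves, so replies count as established."""
    outgoing_flows.add((src_ip, src_port, dst_ip, dst_port, proto))

def accept_incoming(src_ip, src_port, dst_ip, dst_port, proto):
    """Allow return traffic and explicit exceptions; drop everything else."""
    if (dst_ip, dst_port, src_ip, src_port, proto) in outgoing_flows:
        return True   # established/related: reply to something we initiated
    if (dst_ip, dst_port, proto) in ALLOWED_INBOUND:
        return True   # explicitly allowed, like a port forward
    return False      # default: drop unsolicited incoming packets

# A client connects out; the reply gets back in, an unrelated probe doesn't.
note_outgoing("192.168.1.20", 50000, "93.184.216.34", 443, "tcp")
assert accept_incoming("93.184.216.34", 443, "192.168.1.20", 50000, "tcp")
assert not accept_incoming("203.0.113.7", 12345, "192.168.1.20", 22, "tcp")
```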
How do you get the equivalent of NAT punchthrough (which is unreliable with many NAT implementations) when you want to do a VoIP call without having to bounce all the data through a central server? Simple: tell both clients the other one’s IP and port, have them spam each other with messages for a short while, and eventually a message gets through both firewalls. It’s very similar to NAT punchthrough, except you don’t have to guess how the NATs work, and it connects reliably.
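Something like this toy Python version - the peer’s address is a hard-coded placeholder here; in reality both clients would learn each other’s endpoint from a small signaling server:

```python
import socket
import time

LOCAL_PORT = 40000               # port we bind; the peer has to know it
PEER = ("198.51.100.23", 40001)  # placeholder for the other client

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(0.2)

# Spam the peer for a few seconds: the first outgoing packet opens our
# own firewall state, and eventually one of the peer's packets arrives
# through the hole we've punched.
for _ in range(25):
    sock.sendto(b"punch", PEER)
    try:
        data, addr = sock.recvfrom(1024)
        print(f"got {data!r} from {addr}, hole is open")
        break
    except socket.timeout:
        time.sleep(0.2)  # keep trying until something gets through
```

Both sides run the same loop with the addresses swapped; once each has sent at least one packet, both firewalls treat the flow as established.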
Yikes! That’s a lot to type to hammer in a nail that sticks out (Android). Thanks but no thanks. I’ll find some way to cripple mDNS on the non-compliant device instead.
Not sure why you’d regularly need to type out the whole thing. Also not sure why you picked .local when .lan is also incorrectly used for this purpose and is shorter (and isn’t yet assigned to any conflicting technology)
So are you saying you run some sort of mDNS server or provider (not sure what the right word would be)? Why? How?
The point of mDNS is that devices auto-discover each other on a network without a central authority. The word multicast in multicast DNS is the key. And the reason I use it is because… it just works. There’s no need to configure it; it works like this by default on pretty much every OS. Set the hostname and you’re done, .local now works. You can even bridge it across networks with an mDNS repeater available on many routers.
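On the client side there’s genuinely nothing to do beyond asking the system resolver - a quick sketch, with mydevice standing in for a real hostname on your LAN:

```python
import socket

# .local names resolve through the normal resolver on most OSes
# (Avahi on Linux, Bonjour on macOS, built-in mDNS on Windows).
# "mydevice" is a placeholder for an actual host on the network.
for family, _, _, _, sockaddr in socket.getaddrinfo("mydevice.local", 80):
    print(family.name, sockaddr[0])
```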
Given the ubiquity of certificates everywhere, malicious devices on the local network posing as a different server are not an issue (and it’s not like they couldn’t hijack the IP address in any flat network anyway).
Well don’t build your network around unassigned TLDs.
Also NAT does literally nothing other than being a massive PITA, so… yeah, I don’t think there’s much I can agree with in your rant.
Like, oh no, fully functional point to point connectivity across the internet, how terrible
Edit: .home.arpa is actually designated as a TLD for home networks (RFC 8375), and is what I use for a crappy old tablet that doesn’t support mDNS
I’m not a native English speaker so 🤷‍♂️, but the term disk quota is commonly used to refer to this kind of limit.
The first few generations that were sold with the promise of unlimited photo backups still get that deal - if you find an old Pixel / Pixel XL and use it to upload your photos, they will not count towards your storage. A few more models after that get unlimited uploads in “high” quality, and everything from (I think) the Pixel 5 onward is completely out of luck.
The free storage limit in Google Drive
That’s a reasonable per-core size, and it doesn’t make much sense to add all the cores up if your goal is to fit your data within L2 (like in the article)
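Back-of-envelope with made-up numbers, to make the point concrete:

```python
# Hypothetical CPU: the per-core figure is what a single thread sees.
cores = 8
l2_per_core = 1 * 1024 * 1024   # 1 MiB of L2 per core (example value)
l2_total = cores * l2_per_core  # 8 MiB "total", not usable as one cache

working_set = 3 * 1024 * 1024   # 3 MiB of data for one thread

print(working_set <= l2_per_core)  # False: misses L2 despite the "total"
print(working_set <= l2_total)     # True, but irrelevant to one thread
```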
Please don’t pretend as if open source devs don’t constantly complain about pesky PRs 😅
<i>I</i>'ve <u>seen</u> many <b><u>more</u> complaints</b> about <a href="https://0.0.0.0/random_img.tiff">people</a> constantly <marquee>demanding</marquee> their specific <h1>annoyances</h1> to be fixed without ever <i>submitting <u>a single <b>line of code</b></u></i>. <i>Maintainers</i> are pretty much <b>universally</b> welcoming to code <h2>contributions</h2> <br><br><br><br><br><br>
I soooo hope this does something funky with someone’s Lemmy client
Maybe the management hasn’t decided on the exact promises they’re willing to make? Also there’s two years left before it becomes important, while previously there was always a generation going out of support within a year.
That’s more of a storage thing; RAM does much smaller transfers - for example, a DDR5 DIMM has two independent 32-bit (4-byte) subchannels with a minimum burst of 16 transfers, so it moves 64 bytes in one “operation” (or more). And CPUs don’t waste memory bandwidth by transferring more than absolutely necessary, as memory is often the bottleneck even without writing full pages.
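The arithmetic behind those numbers, for anyone who wants to check:

```python
# DDR5: each DIMM exposes two independent 32-bit subchannels, and the
# minimum burst length is 16 transfers.
subchannel_width_bits = 32
burst_length = 16

bytes_per_burst = subchannel_width_bits // 8 * burst_length
print(bytes_per_burst)  # 64 - conveniently one cache line per burst
```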
The page size is relevant for memory protection (where the CPU will stop the program’s execution and give control back to the operating system if the program tries to do something it’s not allowed to do with that memory) and for virtual memory (which is part of the same mechanism, though the two are theoretically independent concepts). The operating system needs to keep a table describing what kind of access the program has to which memory, and with bigger pages that table can be much smaller (at the cost of wasting space when the program needs only a little bit of memory of a given kind).
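A rough illustration of the table-size tradeoff (region and entry sizes are just example values):

```python
# Mapping the same region with bigger pages needs proportionally
# fewer entries, at the cost of wasting up to a page per allocation.
region = 1 * 1024**3  # 1 GiB of mapped memory (example)
entry_size = 8        # bytes per table entry (typical on 64-bit)

for name, page in (("4 KiB", 4 * 1024), ("2 MiB", 2 * 1024**2)):
    entries = region // page
    print(f"{name} pages: {entries} entries, "
          f"{entries * entry_size // 1024} KiB of table")
```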
Or when you have the audacity to take a picture with it
This is referring to the recent news about Google gaining huge market share. This new drop simply means there was no dramatic change and last month’s data was flawed.
But it’s made by ✨Google✨, obviously it’s worth the superior manufacturing and quality assurance /s
My two cents: the only time I had an issue with Btrfs, it refused to mount until I ran an FS repair tool (and it was fine afterwards, and I knew which files needed to be checked for possible corruption). When I had an issue with ext4, I didn’t know about it until I tried to access an old file and it was 0 bytes - a completely silent corruption I found out about probably months after it actually happened.
Both filesystems failed, but one at least notified me about it, while the second just “pretended” everything was fine while it ate my data.
I don’t think overheating would cause random corruptions (it should throttle down when overheating, and then shut down if the temperature gets too high even when throttled, but there should never be an incorrect result of any computation), and surely the RAM will run at the standard 2133 speed on default settings - OP says they reset the BIOS settings to default between CPU swaps.