  • I’ve been using cryptpad.fr (the “flagship instance” of CryptPad) for years. It’s…fine. Really, it’s fine. I’m not thrilled with the experience, but it is functional and I’m not aware of any viable alternatives that are end-to-end encrypted.

    Its office editors are based on OnlyOffice, which is basically a heavyweight web-first Microsoft Office clone. Set your expectations accordingly.

    No mobile apps, and the web UI is not optimized for mobile. I mean, it works, but does using the desktop MS Office UI on a smartphone sound like fun to you?

    Performance is tolerable but if you’re used to Google Sheets, it’s a big downgrade. Some of this is just the necessary overhead involved in an end-to-end encrypted cloud service. Some of it is because, again, this is a heavyweight desktop UI running in a web browser. It’s functional, but it’s not fast and it’s not pretty.


  • DNS over HTTPS. It performs encrypted DNS lookups via a URL, which enables URL-based customizations that aren’t possible with traditional DNS lookups (e.g. the server could have /ads or /trackers endpoints so you can choose what to block).

    DNS over TLS (DoT) is similar, but it doesn’t use URLs, just IP addresses like generic DNS. Both are encrypted.
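
    For anyone curious what that actually looks like, here’s a minimal sketch of a DoH lookup in Python against Cloudflare’s public JSON endpoint (a real API; the /ads-style filtering paths above are hypothetical and provider-specific):

    ```python
    # Minimal DNS-over-HTTPS lookup via Cloudflare's public JSON API.
    # Needs the third-party "requests" package (pip install requests).
    # Filtering endpoints like /ads or /trackers are hypothetical and
    # provider-specific; this just shows the basic DoH mechanics.
    import requests

    def doh_lookup(name: str, record_type: str = "A") -> list[str]:
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": record_type},
            headers={"accept": "application/dns-json"},
            timeout=5,
        )
        resp.raise_for_status()
        return [answer["data"] for answer in resp.json().get("Answer", [])]

    # An ordinary HTTPS request, so the lookup is encrypted in transit.
    print(doh_lookup("example.com"))
    ```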


  • Honestly, that sounds great.

    My biggest problem with Flatpak is that Flathub has all sorts of weird crap, and depending on your UI it’s not always easy to tell what’s official and what’s just from some rando. I don’t want a repo full of “unverified” packages to be a first-class citizen in my distro.

    Distros can and should curate packages. That’s half the point of a distro.

    And yes, the idea of packaging dependencies in their own isolated container per-app comes with real downsides: I can’t simply patch a library once at the system level.

    I’m running a Fedora derivative and I wasn’t even aware of this option. I’m going to look into it now because it sounds better than Flathub.


  • In my experience, this is more of a problem if you’re running your own mail servers end to end, not so much if you’re using an established email service. My domain’s DNS points at my provider (MX for inbound mail, SPF/DKIM for outbound trust), and my outgoing mail goes through their servers, so in general I’m as trusted as they are. Your mail provider should have instructions on how to set up DNS for verification.
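
    If you want to sanity-check what your provider had you publish, here’s a rough sketch using the third-party dnspython package (example.com is a placeholder; substitute your own domain):

    ```python
    # Rough sketch: inspect the DNS records receiving servers use to decide
    # whether mail from your domain is trusted. Needs "dnspython"
    # (pip install dnspython). "example.com" is a placeholder; a missing
    # record raises dns.resolver.NXDOMAIN or dns.resolver.NoAnswer.
    import dns.resolver

    domain = "example.com"

    # MX: where inbound mail for the domain gets delivered.
    for r in dns.resolver.resolve(domain, "MX"):
        print("MX", r.to_text())

    # Root TXT records usually include the SPF policy ("v=spf1 ...").
    for r in dns.resolver.resolve(domain, "TXT"):
        print("TXT", r.to_text())

    # DMARC policy lives at _dmarc.<domain>; DKIM keys live under a
    # provider-specific selector like <selector>._domainkey.<domain>.
    for r in dns.resolver.resolve("_dmarc." + domain, "TXT"):
        print("DMARC", r.to_text())
    ```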


  • If you’re willing to pay money for it, you can get your own domain for $2-$15 per year, then use it with pretty much any commercial email service. That way you can change email providers without changing your address.

    This is my plan going forward. I’m going to suffer the inconvenience of changing my address, but only one more time, not every time I want to change providers.



  • Yeah. If you want to be on the cutting edge of storage, look for a mobo that has PCIe Gen5 M.2 slots. But really, PCIe Gen4 M.2 drives are still pretty darned fast. You can get some with >7GB/sec transfer rates. Do you need >12GB/sec transfer from disk? Probably not. Is it cooooool? Sure. :)

    This is a popular SSD these days, very good for the price: https://us-store.msi.com/PC-Components/Storage-Devices/Solid-State-Drive/M482-NVMe-M2-2TB-Bulk . If you want something high-end, look for an SSD with DRAM cache. Useful if you’re writing massive amounts of data regularly, like video mastering or something like that, generally overkill otherwise.

    I’ve been on the Ryzen x700 line for a long time now, first the 1700 and now on the 7700. No complaints, they rock. So I’d start by looking at the 9700. 9900 has more cores (and uses significantly more power), 9600 has fewer cores. Single-core performance is basically the same across the board, so it just depends on whether your workload can use a lot of cores or not. The “X3D” chips have additional CPU cache that supposedly improves performance in some workloads (notably in gaming). So if that’s important to you, the 9800X3D is the natural choice.


  • Fortunately, my work is not LLM-related, just simple neural networks, but I don’t know how that might affect best practices for hardware.

    Okay. If this is something you already do on existing machines, you’ll be in a good position to know how much memory you actually need, and then maybe give yourself a little room to grow. My guess would be that you’re not working on massive models, so you’ll probably be fine with a mid-range card.

    At the same time, a lot of AI/ML stuff is becoming mainstream and requires a ton of VRAM to get good performance. If you do any work with graphics, audio, or video, you might find yourself running large models without really thinking about it. There are lots of use cases for speech recognition models, for example, which are quite large. Photoshop already has some medium-sized models for some tasks. Noise reduction for audio can also be quite demanding (if you want to do a really good job).

    As for system RAM…the world of DDR5 is indeed complicated. I don’t think there’s a huge need to go over DDR5-6000, and faster RAM brings compatibility issues with some mobos/CPUs. It’s also usually faster to use two sticks than four, so 2x32GB would generally be better than 4x16GB.

    For GPUs in particular, new gens with more VRAM are on the way, so buying the high-end now might leave you with something that feels obsolete by the time you grow into it. If you spend $750 now and $750 again in 2-3 years, you might end up better off than if you spent $1500 today and waited twice as long to upgrade. Particularly if you are able/willing to sell your old equipment to offset upgrade costs.
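
    On the “how much memory do you actually need” question, the weights-only arithmetic is easy to sanity-check. A rough sketch (real usage adds activations and framework overhead on top, so treat these as floors):

    ```python
    # Back-of-the-envelope VRAM estimate for model weights alone.
    # Real usage is higher (activations, framework overhead, and for
    # LLMs the KV cache), so these numbers are floors, not targets.
    def weights_gb(params_billions: float, bytes_per_param: float) -> float:
        return params_billions * 1e9 * bytes_per_param / 1024**3

    for params in (1, 7, 13):
        print(f"{params}B params: ~{weights_gb(params, 4):.1f} GB at fp32, "
              f"~{weights_gb(params, 2):.1f} GB at fp16")
    ```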


  • VRAM is king for AI workloads. If you’re at all interested in running LLMs, you want to maximize VRAM. The RTX 3090 and 4090 are your options if you want 24GB and CUDA. If you get a 4090, be sure your power supply supports the 12VHPWR connector. Don’t skimp on power. I’m a little out of the loop, but I think you’ll want a PCIe 5.0 PSU. https://www.pcguide.com/gpu/pcie-5-0-psus-explained/

    If you’re not interested in LLMs and you’re sure your machine learning tasks don’t/won’t need that much VRAM, then yes, the 4070 Ti is the sweet spot.

    logicalincrements.com is aimed at gaming, but it’s still a great starting point for any workload. You’ll probably want to go higher on memory and skew more toward multi-core performance compared to gaming builds, IMO. Don’t even think about less than 32GB. A lot of build guides online skimp on RAM because they’re only thinking about gaming.
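
    Whatever card you land on, it’s worth confirming what your framework actually sees. A quick sketch, assuming PyTorch with CUDA support:

    ```python
    # Quick check of the VRAM a CUDA GPU exposes to your framework.
    # Assumes PyTorch built with CUDA support (https://pytorch.org).
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
    else:
        print("No CUDA device visible to PyTorch")
    ```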


  • FYI, GrapheneOS no longer supports the Pixel 4a. The oldest phone they support is the Pixel 6.

    GrapheneOS basically matches Google’s support window, because they consider it critical to include firmware and driver security updates, which they cannot reverse-engineer. Once the hardware is out of support, GrapheneOS drops support as well. Security is their top priority, and they will generally remove something entirely if they cannot implement it securely, rather than ship it half-baked and potentially leave users with a false sense of security.

    LineageOS has a different philosophy, and will continue to release new builds with OS-level security patches beyond the hardware support window for as long as it is technically viable. This is obviously better than running outdated software on outdated hardware; just be aware that there is nothing the LineageOS folks can do about firmware/driver-level security vulnerabilities.

    Both approaches are totally valid, and it’s great that we have the choice! Just wanted to explain that for anyone unfamiliar.


  • But any 50 watt chip will get absolutely destroyed by a 500 watt gpu

    If you are memory-bound (and since OP’s talking about 192GB, it’s pretty safe to assume they are), then it’s hard to make a direct comparison here.

    You’d need 8 high-end consumer GPUs to get 192GB. Not only is that insanely expensive to buy and run, but you couldn’t power it on a standard residential electrical circuit or fit it on any consumer-level motherboard. Even 4 GPUs (which would be great for 70B models) would cost more than a Mac.

    The speed advantage you get from discrete GPUs rapidly disappears as your memory requirements exceed VRAM capacity. Partial offloading to GPU is better than nothing, but if we’re talking about standard PC hardware, it’s not going to be as fast as Apple Silicon for anything that requires a lot of memory.

    This might change in the near future as AMD and Intel catch up to Apple Silicon in memory bandwidth and integrated NPU performance. Then you could sidestep the Apple tax, and perhaps pair one of those chips with a discrete GPU for a meaningful performance boost even with larger models.
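
    To put rough numbers on the memory-bound point above: generating one token means reading (more or less) every weight once, so memory bandwidth divided by model size is a hard ceiling on tokens/sec. A crude sketch, using approximate public bandwidth specs:

    ```python
    # Crude ceiling on generation speed when LLM inference is memory-bound:
    #   tokens/sec <= memory bandwidth / model size in bytes
    # Bandwidth figures are approximate public specs, for illustration only.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
        return bandwidth_gb_s / model_gb

    model_gb = 40  # e.g. a ~70B model quantized to 4-5 bits per weight

    for name, bw_gb_s in [
        ("RTX 4090 (GDDR6X)", 1008),         # but it can't hold 40 GB in 24 GB of VRAM
        ("M2 Ultra (unified memory)", 800),  # the whole model fits in unified memory
        ("Dual-channel DDR5-6000", 96),      # where spilled-to-RAM layers end up
    ]:
        print(f"{name}: <= {max_tokens_per_sec(bw_gb_s, model_gb):.0f} tokens/sec")
    ```

    The 4090 line is theoretical, since a 40GB model doesn’t fit in 24GB of VRAM; once layers spill into system RAM, you’re living at the DDR5 number, which is the whole point.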