• 8 Posts
  • 425 Comments
Joined 2 years ago
Cake day: August 4th, 2023


  • Hey! A truly unpopular opinion that I wholeheartedly agree with.

    But I’ll take it one step further: I unironically believe that if you prefer your chocolate sweetened, you don’t really like chocolate. You like sugar.

    And, yes, I love just sucking on 100% unsweetened baker’s chocolate.

    …except for two things. 1) caffeine, even in the quantities in which it exists in an ounce of baker’s chocolate per day, doesn’t agree with me, so I don’t do any form of chocolate any more (but I often crave unsweetened baker’s chocolate) and 2) I have a severe, violent sensitivity to sugar, so I basically never eat anything even a little sweet.










  • Aside from what everyone else is saying, don’t use dependencies that you don’t have to. Particularly don’t use big “frameworks”. If you use any dependencies, use tiny, focused ones that do one thing. The more code there is underneath what you’re writing, the more likely it will cause problems that you will internalize. I’ve seen it many times. Spring (Java), for instance, will do something not as advertised, and devs will think they’re bad coders because they “can’t write code that works as it’s supposed to.” Avoiding that vicious cycle will make you a better coder in the long run.

    Also, when things aren’t working with your dependencies, do google for fixes, but don’t google too long. If an hour goes by with no progress, read your dependencies’ source code until you understand what’s going wrong and how to fix it.





  • I… doubt it?

    I took the liberty of looking in the developer tools as it failed, and there was a 500 response. The connection to Hulu’s servers was all over HTTPS and I didn’t get any certificate warning, so unless my ISP managed to get Hulu’s private key or found a corrupt certificate authority willing to issue it a fraudulent-but-valid certificate, no ISP should be able to change response codes, whether by man-in-the-middle tampering or by redirecting traffic to a hostile server.

    And given how many people have reported issues, I doubt it’s specific to any particular ISPs.

    Net neutrality being dead is a huge bummer, but I don’t think this can be blamed on that.



  • Does it really do any good for the drive to be encrypted if it doesn’t require a password (or Yubikey or retinal scan or other authentication factor) on boot? If you’re just going to put the plaintext key/password on the same drive but in a partition that’s not encrypted, there’s no point encrypting the drive, right?

    So maybe “it asks for a password on boot” is more of a “works as intended” thing?

    How will I access the encrypted devices after installation? (System Startup) During system startup you will be presented with a passphrase prompt. …

    The quote above is from Fedora documentation here

    This is your root FS that’s encrypted that we’re talking about, correct?

    If you really want an encrypted root with no password on boot and the plaintext decryption password/key on the same drive, there are ways to do it. (It would probably require customizing the initramfs somehow. But it’s Linux, and Linux certainly isn’t going to prevent you from doing such things. It’ll just try to dissuade you.)

    If we’re not talking about a root filesystem, that would likely change some things. If it’s LUKS, I’m pretty sure it wouldn’t particularly matter where on your filesystem the key lives so long as your /etc/crypttab refers to it. I’d say that sort of setup only provides additional security if the encrypted drive is an external drive that you worry could be stolen or physically accessed by an attacker who doesn’t also have physical access to your root filesystem.
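
    For that non-root case, here’s a minimal sketch of what I mean, assuming a LUKS volume on /dev/sdb1 unlocked by a keyfile at /etc/cryptkeys/data.key (both names are made up; adjust to your setup):

    # (run as root) Create a random keyfile and add it as a key slot on the LUKS volume
    mkdir -p /etc/cryptkeys
    dd if=/dev/urandom of=/etc/cryptkeys/data.key bs=512 count=8
    chmod 600 /etc/cryptkeys/data.key
    cryptsetup luksAddKey /dev/sdb1 /etc/cryptkeys/data.key

    # /etc/crypttab - unlock the volume at boot using that keyfile
    # <name>  <device>    <keyfile>                <options>
    data      /dev/sdb1   /etc/cryptkeys/data.key  luks

    # /etc/fstab - mount the mapped device once it's unlocked
    /dev/mapper/data  /mnt/data  ext4  defaults  0 2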

    Also, if you shared what encryption scheme is in use (LUKS, whatever Anaconda set up, etc.), that would probably help as well.

    Edit: Ah. Ok. You gave more info while I was typing the above response. What you want is unlocking via ssh. For sure.


  • TootSweet@lemmy.world to Open Source@lemmy.ml · If we had libre AI

    The GPL family of licenses was designed to cover code specifically. AI engines are code, and in most jurisdictions code is covered by copyright. (Disclaimer: I know a lot less about international intellectual property law than about U.S. intellectual property law, but I’m pretty confident what I’ll say here is at least true of the U.S.) But you don’t really have a functional generative AI system without weights. And it’s not clear that weights are covered by any particular branch of intellectual property in any particular jurisdiction. (And if they are, it’s not clear that the legal entity who trained the engine owns the rights to those weights rather than the rights holders of the materials used as training data.) It’s the weights that would carry any biases or purposefully nefarious output.

    Nothing that isn’t covered by intellectual property can meaningfully be said to be “licensed” at all, under the AGPLv3 or any other license. To speak of something not covered by any intellectual (or non-intellectual, I suppose) property as “licensed” is just kind of nonsensical.

    Like, since Einstein’s General Relativity isn’t covered by any intellectual property, it’s not possible for General Relativity to be “licensed”. Similarly, unless some law is passed making LLM weights covered by, say, copyright law, one can’t speak of those weights being “licensed”.

    By the way, there are several high-profile cases of companies like Meta releasing LLMs that you can run locally and calling them “Open Source” when there’s nothing “Open Source” about them. As in, they don’t distribute the source code of LLaMa at all. That’s exactly the opposite of “Open Source” and the weights aren’t code and can’t really be said to be “Open Source”. More info here.

    Now, all that said, I don’t think there’s actually any inherent benefit to LLMs, AGPLv3 or otherwise, so I don’t have any interest even in AGPLv3 engines. But I’m all for more software being licensed AGPLv3. I just don’t think AGPLv3 is a concept that applies to any portion of LLMs aside from the engine.


  • AIs (well, LLMs, at least) aren’t coded, though. The engine is coded, but then they just throw training data at it until it starts parroting the training data.

    Humans can create scripts around the LLMs: scripts that filter certain stuff out of the training data (though that can involve some pretty tricky natural language processing and can never really account for everything), or scripts that watch responses for certain keywords or whatever and either block the response from reaching the user or try to get the LLM to generate a different, more acceptable answer.

    I think for poisoning to work well, we’d have to be creative, keep shifting our tactics, and otherwise do things in ways that can sneak past the LLMs’ babysitters. It would be a bit of an arms race, but I don’t think it’s as doomed from the start as you seem to think it is.


  • When I bought my current car, I read the privacy policy, and it says that they’ll record anything in the cabin of the car they damned well please and upload it to the mothership (read: the car manufacturer, Subaru).

    For a while, I adopted the practice of repeating disparaging things about Subaru while I drove. I’ve kind of gotten away from the practice lately. What I really ought to do is find and unplug the OnStar MOBO to kill its internet connection. I’ll do that one of these days.

    As for what you’re talking about, I don’t think LLMs (typically?) learn from your interactions with them, right? Like, they take a lot of data, churn it through the training algorithm, and produce a set of weights that are then used with the engine to produce hallucinations. And it’s very possible (probable, actually) that for the next generation of the LLM, they’ll use the prompts you submitted to the previous generation as more training data. So, yeah, what you’re getting at would work, but I don’t think it would take effect until the release of the next major version of the LLM.

    I dunno. I could be wrong about some of my assumptions in that last paragraph, though. Definitely open to correction.


  • Great question!

    So, first off, if I knew what app(s) specifically you have in mind, that’d help me answer better, but in general:

    • First, I’ll say it’s less that they support pacman than that Arch supports them. There may be some exceptions, but it’s usually Arch developers building the pacman packages rather than the author of the software in question. Even when it’s a proprietary application, it’s usually bundled into a pacman package (or at least a PKGBUILD for it lives on AUR, but we’ll get to that) by one of the Arch developers or other volunteers.
    • Your first option for software that isn’t in the official Arch repositories is AUR. It’s for packages that aren’t as officially supported as those in the official repositories. For anything in AUR, you’ll have to build the package yourself (which usually involves compiling, unless it builds the package from an already-compiled binary), but the process is usually pretty straightforward. (Download and extract the tarball from AUR somewhere, cd into the directory, and then makepkg -sf && sudo pacman -U <something>.tar.xz. There’s a short sketch of this workflow right after this list. You can also get some helper scripts that do some of those steps for you for convenience. Definitely worth having the experience of doing it manually a few times first, though, I’d say.) Even if the only way to get the software in question from the publisher is in .deb form, you may still find a package on AUR that will unpack the .deb and repackage the result into an Arch package.
    • You can run the software in an isolated way. There are snap packages and Flatpaks, but… well, there are good reasons why they get a bad rap. Lol. Another option is to build an Ubuntu chroot and install the package in there. The cleanest and most straightforward option for running software isolated like this is probably Docker. (Running graphical apps in Docker can definitely be done, though it is a little tricky.)
    • You can grab the software in question and install it manually somewhere in your home directory. Somewhere like $HOME/install/<softwarename>. This can work even if the software is only available as a .deb file. You can just extract the .deb without installing it with the command ar x <blah>.deb and a tar -xf data.tar.gz and then put the files from within that .deb file where you want them.
    • There are some other options that I’ve never tried (and only learned of just now by googling) that… aren’t recommended. Here’s a link for reference, but to explain the problems those approaches can cause: when you install a package with any particular package manager, the package manager installs dependencies. (Typically those dependencies are shared libraries. Think ".dll"s.) And those dependencies live in specific places. But if, say, you had pacman and apt-get installed on the same system and installed the same dependency (including if that dependency is pulled in automatically as part of installing something else) via both package managers, you’re likely to get one version of the dependency from one package manager and another version from the other. Either one package manager is going to error out (“these files already exist in the filesystem, I think something’s wrong that a human should fix”) or overwrite files, potentially breaking things. (Now, all that said, I know that pacman is actually in the official Ubuntu repositories and can be installed on Ubuntu alongside apt-get. I have to admit I don’t know if, and if so how, it goes about avoiding these problems when both are installed. Maybe it’s not a good idea to install pacman on Ubuntu for exactly this reason. Who knows!)
    • So, there is also the option to write your own PKGBUILD. (A “PKGBUILD” is a script that tells your Arch system how to build a pacman package. They’re what you download from AUR when you want to build/install something from AUR. A couple of reference links.) It does require doing a little Bash programming, but it’s not as hard as it sounds. (And it’s easier than building Ubuntu packages.) I’ll talk in the next bullet about what to do if you truly can’t get the package in question in anything but .deb form. But first, if you can get the source code, you can typically grab a PKGBUILD from the official Arch repositories or from AUR that installs something “similar” to what you’re wanting to install and modify it. (Like, for a simple example, if you’re trying to install something written in C, you can look for a PKGBUILD for something else written in C that would probably have similar dependencies.) If you can’t get the source code but can get a compiled distribution of the software in question, you can still write a PKGBUILD, and you’ll probably be able to find some PKGBUILDs to start from that would be pretty similar to what you need.
    • If you truly can’t get the software in question except in .deb form and you want to write a PKGBUILD for it, that can be done. It just involves writing a PKGBUILD that extracts the files from the .deb file and then packages them back up into an Arch package. I’ve done this before and I have a funny personal story about that. More info here in an answer to another question in this same Lemmy community.
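
    Here’s the manual AUR workflow I mentioned in the second bullet, sketched out with a made-up package name (aur-example) standing in for whatever you’re actually installing:

    # Download and extract the AUR snapshot (the package name here is hypothetical)
    curl -LO https://aur.archlinux.org/cgit/aur.git/snapshot/aur-example.tar.gz
    tar -xzf aur-example.tar.gz
    cd aur-example

    # Always read the PKGBUILD first; it's a script that will run on your machine
    less PKGBUILD

    # Build it; -s installs needed build dependencies, -f rebuilds over an old package
    makepkg -sf

    # Install the resulting package (the extension may be .pkg.tar.zst on newer systems)
    sudo pacman -U aur-example-*.pkg.tar.*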

    Just in case it’s useful to you, I’ll share the PKGBUILD I wrote for converting the Ubuntu kernel into an Arch package. It demonstrates how you’d go about extracting files from a .deb file in order to build them into an Arch package.

    pkgname='linux-ubuntu'
    pkgdesc='The Ubuntu kernel, modules, and headers'
    pkgver='5.15.0'
    _pkgver="$(cut '-d.' -f 1,2 <<< "${pkgver}")"
    _firmware_ver='1.187.29'
    _suffix_ver='20.04.2'
    pkgrel='25'
    arch=('x86_64')
    options=('!strip')
    url='http://ubuntu.com/'
    source=(
        'http://archive.ubuntu.com/ubuntu/pool/main/l/linux-firmware/linux-firmware_'"${_firmware_ver}"'_all.deb'
        'http://archive.ubuntu.com/ubuntu/pool/main/l/linux-hwe-'"${_pkgver}"'/linux-headers-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
        'http://archive.ubuntu.com/ubuntu/pool/main/l/linux-hwe-'"${_pkgver}"'/linux-hwe-'"${_pkgver}"'-headers-'"${pkgver}"'-'"${pkgrel}"'_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_all.deb'
        'http://archive.ubuntu.com/ubuntu/pool/main/l/linux-signed-hwe-'"${_pkgver}"'/linux-image-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
        'http://archive.ubuntu.com/ubuntu/pool/main/l/linux-hwe-'"${_pkgver}"'/linux-modules-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
        'http://archive.ubuntu.com/ubuntu/pool/main/l/linux-hwe-'"${_pkgver}"'/linux-modules-extra-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
        'linux.preset'
    )
    noextract=(
        'linux-firmware_'"${_firmware_ver}"'_all.deb'
        'linux-headers-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
        'linux-hwe-'"${_pkgver}"'-headers-'"${pkgver}"'-'"${pkgrel}"'_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_all.deb'
        'linux-image-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
        'linux-modules-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
        'linux-modules-extra-'"${pkgver}"'-'"${pkgrel}"'-generic_'"${pkgver}"'-'"${pkgrel}"'.'"${pkgrel}"'~'"${_suffix_ver}"'_amd64.deb'
    )
    sha256sums=(
        '22697f12ade7e6d6a2dd9ac956f594a3f5e2697ada3a29916fee465cc83a34a1'
        '595794e8ad28ed130af60e6ec8699313e1935ae70f7530a00b06dff67fb4d40e'
        '22dbdc1895f91d3ad9d4c5b153352f1cc8359291dba6ea1a0e683cc6871b0f58'
        '5705cefab39dd5512bcc515918d09153715c7bb365d6bc29cc9b0580e5723eef'
        '3d207388812e957447162c067fb637b4d06eccb4f303b801e8402046a7d3cf48'
        '2f1214dbb04cb47ce8d096bff969fca9c78c26ec21a395c12922eca43cc18e26'
        '75d7d4b94156b3ba705a72ebbb91e84c8d519acf1faec852a74ade2accc7b0ea'
    )
    
    package() {
        for f in "${noextract[@]}" ; do
            ar x "${f}"
            tar -xf "data.tar.xz" -C "${pkgdir}"
        done
        rm -r "${pkgdir}"'/usr/share'
        rm -r "${pkgdir}"'/usr/lib'
        mv "${pkgdir}"'/lib' "${pkgdir}"'/usr'
        install -Dm644 'linux.preset' "${pkgdir}"'/etc/mkinitcpio.d/linux.preset'
    }
    

    (I omitted the linux.preset file. It’s just in the same directory with the PKGBUILD and it gets bundled into the Arch package. But it’s not really important for what you’re doing unless you’re trying to install a different kernel than the official Arch kernel on an Arch system.)

    The part that extracts the files from the .deb packages is the ar x command and the tar -xf command. The package() function there is what decides exactly what files will be in the Arch package and where. And makepkg builds the package archive after running package().

    That covers all the options for installing software not in the Arch repos that I can think of.


  • Yes I am op.

    Ha! That’s what I get for posting on Lemmy at 2:00 am. Lol.

    So I guess I should just skip anything with a desktop environment like manjaro and just figure out how to install bare arch?

    You can certainly start with a bare Arch install and install a graphical environment on top of that. (Without a graphical environment, you wouldn’t be able to run a full-featured browser like Firefox or Chromium or whatever, for instance. I’d think if you intend to use this system as your daily driver – and I’d recommend you do, for learning’s sake – you’ll probably want a graphical environment.) But, yeah. I’d say Arch isn’t that unapproachable to install without going the Manjaro route or the “archinstall” route.

    With Arch, everything’s just packages. The difference between non-graphical Arch and graphical Arch is just that non-graphical Arch doesn’t have any graphical system packages installed.

    Now, I keep talking about “graphical systems”. There are two ways to go with that. There’s X11, which is mature but a bit dated. And there’s Wayland, which is the new hotness, but support for it is still a bit lacking, so some features like screen grab may not be supported by all programs, and some programs won’t work as straightforwardly on Wayland. (Basically, any time a program grabs an image or video of any portion of the screen of your graphical environment, that uses the “screen grab” API. Wayland does that differently than X11, and a lot of programs haven’t been updated to use Wayland’s way yet.)

    I guess I’d probably lean toward recommending X11 at this point. I personally use a Wayland compositor (Sway, specifically), but I don’t think running Wayland is going to teach you much that X11 won’t, and running Wayland at this point is likely to introduce frustrating wrinkles. If after you have your Linux “sea legs” you want to try switching, that’s always an option as well.

    As for minimal X11 environments, first off, I’d say avoid things that describe themselves as “desktop environments”. They’re likely to hide details from you. Prefer “window managers.” Tiling window managers tend to be more minimal, but if you want to go with a more draggy-droppy, mouse-driven window manager that feels more like what you’re probably used to (but also doesn’t hide details), I’d recommend IceWM.
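
    If you do go the X11-plus-IceWM route, the bare-bones setup is just a few packages plus a one-line ~/.xinitrc; roughly something like this (the package names are from the official Arch repos, but double-check them against the wiki):

    # Install the X server, the startx/xinit tooling, and the IceWM window manager
    sudo pacman -S xorg-server xorg-xinit icewm

    # Tell startx what to launch
    echo 'exec icewm-session' > ~/.xinitrc

    # Start the graphical session from a virtual console
    startx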

    And, finally, as far as a “bare Arch install”, the place to start is the install guide on the Arch Wiki. It goes step-by-step on how to do things. And take the time to understand the commands you’re running as you’re running them. There are a lot of links in the install guide to more in-depth articles. For instance, the “partitioning” section links to an article called “partitions” that goes in depth on what a “partition” even is.

    There’s a lot to learn, but it also pays off. Both in terms of just having the power to do the stuff you want with your own systems and in terms of benefits to your career. And it’s just plain fun!