• 0 Posts
  • 61 Comments
Joined 1 year ago
Cake day: February 10th, 2024

  • I was only talking about high core count and high (relatively speaking) single-core performance. The DeepComputing Framework board is neither. Its JH7110 has only 4 cores and is a rather old processor, which seems like an odd choice for a product releasing in 2025. At least the software support is great, since distros have been working with the VisionFive 2 and Milk-V Mars for years.

    It’s also the only currently-available Framework 13 board with fewer than 6 cores, though core count isn’t remotely comparable between architectures. At this price ($209 for the bare board with 8GB RAM, $799 for a full laptop) I’d prefer to see something at least comparable to the SpacemiT K1, which has 8 cores and vector support, and is on the Banana Pi BPI-F3 (8GB version is $95).


  • I’m only aware of one RISC-V system where I can say the core count is there: the Milk-V Pioneer board and its 64-core SG2042 processor from two years ago. It’s comparable in price to a 64-core ARM Ampere CPU+motherboard (USD$1500 for the board), which seems somewhat reasonable if you set aside the performance of each core. Hopefully the C930 core described in this article leads to more systems that aim for multi-core performance.

    Most RISC-V development boards have only 4 cores or fewer, with just a few popping up in the last year with 8 cores and nothing higher besides the SG2042. The best single-core RISC-V performance so far is on the SiFive P550, but it’s only 4 cores and comes on a development board that costs USD$500 (plus another $150 in tariffs if shipping to the US). You could easily get a 12-core AMD CPU and motherboard combo for less than that.




  • Unfortunately, the DMCA takes an extreme stance when it comes to anti-circumvention. Even personal backup doesn’t have a strong legitimacy case under it, and especially not the tools that enable it.

    Very much related to this: LockpickRCM is a tool whose entire purpose is to extract your own Switch keys for the titles you own, which makes it far more useful for people who want personal backups than for those pirating games. It still got a DMCA takedown two years ago, and though the case never went to court, it’s extremely unlikely any court would have ruled in its favor if it had.


  • For what it’s worth, the “Download & transfer via USB” feature was applying DRM locked to the key of the specific Kindle device you select, giving you a file that’s incompatible with other devices even if they’re Kindles linked to the same Amazon account. For many publishers it also gives files with drastically lower image quality than the Kindle app: about one-fourth to one-third the file size. For a couple of examples, a 368MB KFX manga volume has a 125MB AZW3 file, and an 8.0MB KFX light novel has a 2.2MB AZW3 file. Those smaller AZW3 files are also similar in size to DRMed EPUB files of the same books from other markets like Kobo and Google Play, so I expect it’s a deliberate choice to limit the quality of formats whose DRM is more trivial to strip.

    The best way I’ve found to make personal backups of owned Kindle content is to use a rooted Android device to download everything through the Kindle app, copy the KFX files to a computer, extract the key in a root shell, and then use DeDRM tools on those files with that key.

    A quick and dirty shell command I’ve used for that purpose is grep -Eao 'dsn[0-9a-f]{32}' /data/data/com.amazon.kindle/databases/map_data_storage.db. The key is the 32 hex characters after the dsn prefix.
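
    Putting the pieces together, the extraction step over adb can look something like this (a rough sketch, assuming adbd can run as root on your device; only the database path is the known one from above, the rest is illustrative):

        adb root
        adb pull /data/data/com.amazon.kindle/databases/map_data_storage.db .
        grep -Eao 'dsn[0-9a-f]{32}' map_data_storage.db   # key = the 32 hex chars after "dsn"

    The downloaded KFX files can be pulled over adb the same way before handing the key to the DeDRM tools.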

    Having a rooted Android device in the first place is the biggest hurdle to doing that. This new jailbreak should make it possible to do something similar on e-ink Kindles instead.


  • Don’t assume Qualcomm’s general hostility to user control and freedom is representative of all non-x86 systems.

    This system isn’t like that at all. It’s usable with mainline Linux and mainline U-Boot and has no proprietary driver blobs. Granted, RISC-V has more progress to make on boot image standardization, and this board in particular uses a three-year-old SoC (the JH7110) that predates a lot of the improvements happening in the various intercompatibility-focused RISC-V standards.

    For some of the most recent ARM systems (notably excluding Qualcomm junk), I can write a single installation image for a Linux distro of my choice to a USB drive and then boot that single USB drive through UEFI on several completely different systems by completely different vendors. Ampere, Nvidia, and more. ARM’s SystemReady spec results in exactly the same user-friendly process you’re used to on x86.

    The RISC-V ecosystem isn’t there yet, though the very recent RISC-V BRS (Boot and Runtime Services) spec promises to bring that to near-future hardware. But this DeepComputing board doesn’t have that, and it lacks some other features (vector instructions, RVA22/23, etc.) that are very likely to become minimum requirements for several RISC-V Linux distros in the not-too-distant future.


  • I think the messaging is clear this time: Steam Deck is the de facto flagship SteamOS device that represents the platform, and it already has strong established mindshare, while other options are now available as well. It had a three-year head start that gave it plenty of time to shine, and the handheld form factor still stands out as something the competition (Windows) treats as an afterthought at best, with poor UX.

    The Steam Machines effort tried to position the Alienware Alpha as its focus, but press coverage that lumped in all the other options at the same time confused people. Steam Machines also had awful timing and pricing: the Alienware was outdated hardware whose Windows version had already been out for a year at the same price or lower by the time the SteamOS version released, and the SteamOS version offered absolutely no advantage in pricing, power, features, or UX for most gamers. On top of that, game compatibility was much worse than it is now. All of those factors are different this time.


  • Most of the “Is open source software safe?” section of this post seems to advocate for what’s conventionally called Security Through Obscurity, which is widely considered very ineffective at preventing exploitation and at best a minor hurdle.

    There are a lot of differences between Android and iOS in terms of security, attack surface, and exploitation, but attributing those to open vs. closed source completely misunderstands the subject. Just two of the countless reasons: First, many of the worst vulnerabilities that affect Android devices are in closed-source proprietary Qualcomm firmware. Second, a platform being open in the sense of allowing users to install any application they want (like Windows, and Android to a limited extent) or closed off to prevent installation of unapproved software (iOS, PlayStation, Toyota cars, TiVo, etc.) is completely separate from whether that platform is open-source; GPLv3 has license terms that try to tie the two concepts together, but I deliberately chose examples that don’t use it at all. Also, iOS has public kernel source code.



  • There should have been a simple way to label them for usage that was baked into the standard.

    There is. The USB-IF provides an assortment of logos and guidelines for ports and cables to clearly mark data speed (like “10Gbps”), power output (like “100W” or “5A”), whether a port is used for charging (battery icon), etc. But most manufacturers choose not to actually use them, at least for ports.

    Cables I’ve seen are usually a bit better about labeling. I have some from Anker and Ugreen that say “SS”, “10Gbps”, or “100W”. If a cable doesn’t label its power it’s probably 3A, and if it doesn’t label its data speed it’s usually USB 2.0, though I have seen a couple of cables that support 3.0 and don’t label it.


  • I’ve been using single-disk btrfs for my rootfs on every system for almost a decade. It’s great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (and maybe others) like separate filesystems without them actually being separate partitions.
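
    For anyone who hasn’t set that up before, a minimal sketch of the subvolume layout (the @/@home names are just a common convention, and /dev/sdX1 is a placeholder for your partition):

        mkfs.btrfs /dev/sdX1
        mount /dev/sdX1 /mnt
        btrfs subvolume create /mnt/@          # becomes /
        btrfs subvolume create /mnt/@home      # becomes /home
        umount /mnt
        # then mount each subvolume like its own filesystem, e.g. in fstab:
        # /dev/sdX1  /      btrfs  subvol=@      0 0
        # /dev/sdX1  /home  btrfs  subvol=@home  0 0

    Snapshots can then target / and /home independently even though they share one partition.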

    I had used it for my NAS array too, with btrfs raid1 (on top of LUKS), but migrated that to ZFS a couple of years ago because I wanted more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed stuck in a purgatory of never being fixed, so I moved to raidz1 instead.

    One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space, as long as no single disk is bigger than all the others combined. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn’t do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive would be the same size, and I had to feel confident it would be enough space to last me a long time, since growing it after the fact is a burden.
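
    A quick shell sanity check of that math (my own back-of-the-envelope; btrfs raid1’s halving rule holds as long as the biggest disk doesn’t exceed the sum of the rest):

        disks="12 12 8 8 4"                               # TB per drive
        total=$(( $(echo "$disks" | tr ' ' '+') ))        # 12+12+8+8+4 = 44
        echo "$total TB raw, $(( total / 2 )) TB usable"  # prints: 44 TB raw, 22 TB usable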


  • This argument is even more ridiculous than it seems. During the Copyright Office hearing for this exemption request (back in April), the people arguing on behalf of libraries described the measures they have in place. They don’t just let people download a ROM to use in any emulator they please. It’s not even one of those browser-based emulators where you can pull the ROM data out of your browser cache if you know how. It’s a video stream of an emulator running on a server managed by the library, with more than enough latency to make it very clearly a worse gaming experience.

    It’s far easier to find ROMs of these games elsewhere than it is to contact a librarian and ask for access to a protected collection, so there’d be no reason to redistribute the files even if they were offered, which they aren’t.

    On top of that, this exemption request was explicitly limited to old games that have been long unavailable on the market in any form, which seems like an insane limitation to put on libraries, places that have always held collections of books both new and old.

    All of that is still not enough to sate the US Copyright Office or the ESA, AACS, and DVD CSS; those last three were the organizations that fought against this.



  • Anbernic devices in particular are known to ship with an SD card preloaded with a fairly large game library. I own an RG351M, which did indeed include a cheap card loaded with both the OS and a collection of games by Nintendo, Sega, and many others, plus some strange ROM hacks. I immediately swapped that card out for a higher-quality one with a better CFW and my own files.

    Most other notable names in the emulation handheld space, like Retroid, Ayn, and Ayaneo, expect users to provide their own files instead, which I’d say makes more sense.


  • USB-C video is usually DisplayPort Alt Mode, which uses a completely different data rate and protocol from USB.

    Even using old 2016 hardware, a computer and USB-C cable that both only support 5Gbps USB (such as USB 3.1 Gen 1) can often easily transmit an uncompressed 4K 60Hz video stream over that cable, using about 15.7Gbps of DisplayPort 1.2 bandwidth. It could go far higher than that with DP 2.0.
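
    A rough sanity check on that number (my arithmetic, not a figure from the spec):

        echo $(( 3840 * 2160 * 60 * 24 ))   # 11943936000, ~11.9Gbps of raw pixel data

    Blanking intervals plus DP 1.2’s 8b/10b line coding (25% overhead) push that raw ~11.9Gbps up to a wire rate in the ballpark of the 15.7Gbps cited above, all far beyond what a 5Gbps USB data link could carry.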

    Some less common video-over-USB devices/docks use DisplayLink instead, which is indeed contained within USB packets and bound by the USB data rate, but it uses lossy compression so those uncompressed numbers aren’t directly comparable.


  • For that portable monitor, you should just need a cable with USB-C plugs on both ends that supports USB 3.0+ (could be branded as SuperSpeed, 5Gbps, etc.). Nothing more complicated than that.

    The baseline for a cable with USB-C on both ends should be PD up to 60W (3A) and data transfers at USB 2.0 (480Mbps) speeds.

    Most cables stick with that baseline because it’s enough to charge phones and most people won’t use USB-C cables for anything else. Omitting the extra capabilities lets cables be not only cheaper but also longer and thinner.

    DisplayPort support uses the same extra data pins that USB 3.0 data transfers need, so in terms of cable support the two should be equivalent. There are also higher-power cables rated for 100W or 240W, but there’s no way a portable monitor would need that.


  • The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.

    I feel like that purpose has already been undermined by various changes to copyright law since its inception, such as the DMCA and the lengthening of copyright terms from 14 years to 95. Freedom to remix existing works is an important part of creative expression, and current law stifles it for any original work released within one person’s lifespan. (Even Disney knew this: the animated Pinocchio movie wouldn’t exist if copyright back then could have lasted more than 56 years.)

    Either way, giving bots the ‘right’ to remix things made less than a year ago while depriving humans of the right to release anything too similar to a 94-year-old work seems ridiculous on both ends.