• 2 Posts
  • 127 Comments
Joined 1 year ago
Cake day: December 7th, 2023




  • Ah, I thought this was in regard to AFK tracking when Discord isn’t focused (which this plugin still won’t fix due to the mentioned Wayland restrictions). I didn’t realize that it still didn’t work even when Discord was focused, which is strange.

    I’ve been using Vesktop since screen sharing wasn’t working on Wayland, and it already seems to do what this plugin does, hence my confusion.



  • Primarily I use Arch on my desktop (and, by proxy, my Steam Deck, which runs SteamOS), which is what I’ve landed on after a ton of distro hopping. The idea of Atomic distros catches my eye, but in their present state there are too many steps needed to make deeper changes (installing a kernel module, for example). That said, I quite like SteamOS on my Deck since I know it will always be in a “consistent” state.

    On servers I run a mix of Rocky Linux and Debian.




  • Try resetting your Firefox profile. Sometimes a weird setting can break browsers in spectacular ways.

    This was a big one for me. For the longest time I could not figure out why YouTube wouldn’t play videos over 1080p in Firefox on my PC; it turned out to be some weird setting I had changed in about:config (I sadly cannot recall which one) a long time ago, and since I’d always copied my Firefox profile from machine to machine, that bad setting stuck around. If you want to back your profile up before resetting it, there’s a rough sketch below for finding where it lives.
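
    In case it’s useful, here’s a minimal Python sketch for listing your Firefox profiles so you can back the old one up before resetting anything. It assumes the standard Linux location (~/.mozilla/firefox/profiles.ini); the path differs on other OSes.

      import configparser
      from pathlib import Path

      # Assumed default Firefox profile location on Linux.
      base = Path.home() / ".mozilla" / "firefox"
      cfg = configparser.ConfigParser()
      cfg.read(base / "profiles.ini")

      for section in cfg.sections():
          # Profiles are listed as [Profile0], [Profile1], ... in profiles.ini.
          if not section.startswith("Profile"):
              continue
          name = cfg[section].get("Name", "?")
          path = cfg[section].get("Path", "")
          # IsRelative=1 means the path is relative to ~/.mozilla/firefox.
          is_rel = cfg[section].get("IsRelative", "1") == "1"
          print(name, "->", base / path if is_rel else Path(path))

    Copy the printed directory somewhere safe, then reset the profile (or create a fresh one) knowing you can still pull individual settings or bookmarks back out of the backup.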







  • Hmm, gotcha. I just tried out a fresh copy of text-gen-webui, and it seems like the latest version is borked with ROCm (I get a “CUDA error: invalid device function” error).

    My next recommendation would be LM Studio, which to my knowledge can still expose an OpenAI-compatible API endpoint that SillyTavern can use (there’s a rough example of hitting that endpoint at the bottom of this comment). I’ve used it in the past and didn’t even need to run it within Distrobox (I have all of the ROCm stuff installed locally, but I generally run most of the AI stuff in Distrobox since it tends to require an older version of Python than Arch currently ships). They also seem to have recently started supporting GGUF models via Vulkan, which I assume probably doesn’t require the ROCm stuff to be installed at all.

    Might be worth a shot. I just downloaded the latest version (the UI has definitely changed a bit since I last used it), grabbed a copy of the Gemma model, and ran it; it worked without an issue for me directly on the host.

    The advanced configuration settings no longer seem to mention GPU acceleration directly like they used to; however, I can see it utilizing GPU resources in nvtop, and the generation speed (83 tokens per second in my screenshot) couldn’t possibly have come from the CPU, so it seems to be fine on my side.
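
    For reference, here’s a rough sketch of what talking to that OpenAI-compatible endpoint looks like from Python. It assumes LM Studio’s local server is running on its usual default of http://localhost:1234/v1 (the port is configurable in the server settings) and that the loaded model shows up under /v1/models; SillyTavern just needs to be pointed at the same base URL.

      import requests

      # Assumed default base URL for LM Studio's local server; adjust if you
      # changed the port in the server settings.
      BASE_URL = "http://localhost:1234/v1"

      # Ask which model is currently loaded (standard OpenAI-style /models route).
      models = requests.get(f"{BASE_URL}/models", timeout=10).json()
      model_id = models["data"][0]["id"]

      # Standard OpenAI chat-completions payload; SillyTavern sends the same shape.
      payload = {
          "model": model_id,
          "messages": [{"role": "user", "content": "Say hi in one sentence."}],
          "max_tokens": 64,
          "temperature": 0.7,
      }
      resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=120)
      resp.raise_for_status()
      print(resp.json()["choices"][0]["message"]["content"])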