

Huh? What do you mean “if”? Such a PDF vulnerability literally did happen a few months ago; fixed in Firefox v.126: https://codeanlabs.com/blog/research/cve-2024-4367-arbitrary-js-execution-in-pdf-js/.
There’s no real need for pirate AI when better free alternatives exist.
There are plenty of open-source models, but they very much aren’t better, I’m afraid to say. Even if you have a powerful workstation GPU and can afford to run the serious 70B open-source models at low-bit quantization, you’ll still get results significantly worse than the cutting-edge cloud models. That’s both because the most advanced models are proprietary, and because they are big and would require hundreds of gigabytes of VRAM to run - which you can trivially rent from a cloud service but can’t easily get in your own PC.
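For a rough sense of scale, here’s a back-of-the-envelope sketch (my own arithmetic, weights only, ignoring the KV cache and activations) of how much VRAM a 70B model needs at common quantization levels:

```python
# Weight-only VRAM estimate for a 70B-parameter model at common precisions.
# Real usage is higher once you add the KV cache and activations.
params = 70e9
for name, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.0f} GB just for the weights")
```

Even at 4-bit that’s ~35 GB, already past a single consumer GPU, and fp16 is ~140 GB - and the frontier proprietary models are believed to be far larger still.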
The same goes for image generation - compare results from proprietary services like Midjourney to the ones you can get with local models like SD3.5. I’ve seen some clever hacks in image-generation workflows - for example, using image segmentation to detect a generated image’s face and hands, then running a secondary model over just those regions to clean them up. But AFAIK these are hacks that modern proprietary models don’t need, because they’ve gotten past those problems and just render faces and hands correctly the first time.
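For anyone curious what that kind of two-pass hack looks like, here’s a minimal sketch of the general idea: detect the face region, build a mask, and inpaint just that area with a second diffusion pass. The Haar-cascade detector, the inpainting model name, and the prompt are all illustrative stand-ins, not what any particular workflow actually ships.

```python
# Minimal two-pass sketch: find faces in a generated image, then redraw
# only those regions with an inpainting model. Illustrative, not a recipe.
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

img = cv2.imread("generated.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# A classic Haar cascade stands in for a proper segmentation model here.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# White pixels in the mask mark the regions the second pass may repaint.
mask = np.zeros(gray.shape, dtype=np.uint8)
for (x, y, w, h) in faces:
    cv2.rectangle(mask, (x, y), (x + w, y + h), 255, thickness=-1)

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting")  # any inpainting checkpoint works
result = pipe(
    prompt="a detailed, natural-looking face",
    image=Image.open("generated.png").convert("RGB"),
    mask_image=Image.fromarray(mask),
).images[0]
result.save("fixed.png")
```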
This isn’t to say that running transformers locally is always a bad idea; you can get great results this way - but claiming it’s better than the non-free ones is mostly cope.
Incredibly weird that this thread was up for two days without anyone posting a link to the actual answer to OP’s question, which is g4f.
Damn, I didn’t know things were so bad there in Canada.
Even in the detailed info? If so, that’s weird; probably something along the lines of “the seller messed up the weight, fixed it, but for some insane reason the site doesn’t recalculate the price”.
What happened there? These are presumably calculated automatically, so does the second item have its mass listed as 2 kg?
Security as in cybersecurity, yes. Security as in not getting caught violating government bans, not so much - if you’re in a country where getting repressed by your government is a real possibility, it helps a lot if it isn’t possible to see exactly which sites you visit. Reminder: even over HTTPS, the domain name (like lemmy.world) is normally not encrypted. Encrypted Client Hello can solve this, but it only started seeing common use around a year ago, and more importantly it requires the host to support it.
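If you want to check whether a given site even advertises an ECH config, one way is to look at its DNS HTTPS record, which is where browsers pick the key up from. A minimal sketch using the dnspython package (the domain is just an example):

```python
import dns.resolver  # pip install dnspython

# Look up the HTTPS (SVCB) record and check for an "ech=..." parameter.
# Without it, the browser has no key to encrypt the Client Hello with,
# and the domain name still goes out in plaintext.
answers = dns.resolver.resolve("lemmy.world", "HTTPS")
for rr in answers:
    text = rr.to_text()
    print(text)
    print("ECH config advertised" if "ech=" in text else "no ECH config found")
```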
Note that OpenAI’s original Whisper models are pretty slow; in my experience the distil-whisper project (via a tool like whisperx) is more than 10x faster.
Really? This is the opposite of my experience with (distil-)whisper - I use it to generate subtitles for stuff like podcasts and was stunned at first by how high-quality the results are. I typically use distil-whisper/distil-large-v3, locally. Was it among the models you tried?
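For reference, this is roughly how I’d run that model locally with the Hugging Face transformers pipeline (a minimal sketch; the file name is a placeholder, and a tool like whisperx adds alignment and batching on top of this):

```python
from transformers import pipeline

# Load distil-large-v3 as a standard speech-recognition pipeline.
# Pass device="cuda:0" if you have a GPU; CPU works but is much slower.
asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
)

# Long-form audio needs chunking and timestamps to be usable as subtitles.
result = asr("episode.mp3", chunk_length_s=30, return_timestamps=True)
print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```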
How’s musk related to this one?
My point is just that nobody really thinks it should be a free-for-all.
Don’t make judgements about everybody based on one guy. I’m on an instance that doesn’t defederate lemmygrad or lemmy.ml, so I commonly see utterly insane tankie takes in popular, and of course also in various comments - and yet I don’t want those people to not have a platform. Because I trust just about no one to decide whether my opinions should be censored, and if that means also not censoring the opinions of people who I think are very wrong, I’m willing to take that trade.
What’s so hilarious about it?
I’m very happy Servo exists but if they want, like, a working browser, it’s no wonder they chose Chromium.
For comparison, from a recent Servo blogpost: “Servo can now run Discord well enough to log in and read messages, though you can’t send messages yet. […] We now support enough of XPath to get htmx working.”.
Servo has been in development for 7+ years and it’s still not able to render the modern web. Maybe it never will be, since it’s impossible to build a new web browser.
I use Firefox (and forks) myself but wouldn’t donate to it. It’s like Wikipedia - a great project with a shitty parent company which’ll spend all of your donations on shit projects.
I’m not aware of how exactly blocking works there, but if it’s similar to China and Russia, consider subscribing to a VPN provider that supports stealth proxies (e.g. Shadowsocks or VLESS); that’s harder to block.
There are feature differences, but there’s also a convenience factor: the youtube-dl people for some reason stopped doing releases, so you can’t get a fresh version from PyPI (only by installing from GitHub or their site). yt-dlp is on PyPI, including nightly builds.
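And since it’s on PyPI, it’s trivial to use as a library too, not just as a CLI - a minimal sketch (the URL is a placeholder):

```python
# pip install yt-dlp
from yt_dlp import YoutubeDL

opts = {"format": "bestvideo+bestaudio/best"}  # grab the best available streams
with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```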
It works much better with fzf, but even with just default bash it’s useful.
“I’ve seen it first-hand” isn’t significant evidence because the frequency illusion effect is a thing. If you see dozens of ads a day and ignore them unless you notice them matching something you talked about, you’ll end up thinking ads can track what you talk about whether or not it’s true.
Invidious alone has been working quite badly this year (stopped working for months until inv-sig-helper was invented, etc), but combined with FreeTube it almost always works; can recommend.
Sure, in Firefox itself it wasn’t a severe vulnerability. It’s way worse in standalone PDF readers, though: