• 0 Posts
  • 104 Comments
Joined 2 years ago
Cake day: April 3rd, 2024


  • At the time people welcomed it; Trident really was terrible. However, since then Gecko’s market share has fallen into the single digits on account of Mozilla’s terrible governance. WebKit isn’t exactly a big alternative either (and is often regarded as the new Trident in terms of web standards adherence). Opera used to have Presto but nope, that’s also Chromium now.

    That means we’re now stuck in a situation where an advertising company controls how the web works for 75% of all users. And they’re happily abusing that power.

    I’m rooting for Servo and Ladybird as new entrants into the market but both are small projects trying to challenge a multi-billion dollar industry titan who wants the web to be as complex as possible so that only they and their token competitors can exist.

    We might actually have been better off with Microsoft trying to keep Trident relevant.




  • I don’t know if they get a share or a flat payment for every device that has crap preinstalled. Either way, not doing it would reduce profits and therefore go against the interest of the shareholders, who would then have grounds to sue the CEO for failing to do their job.

    I’m very much unhappy with how that works but it’s a consequence of how publicly traded companies work. Companies that make it their legally binding goal to maximize shareholder gains attract more investors, have more money, and are thus more effective in increasing their market share. Over time they outcompete their rivals until the market is dominated by maximally profitable companies.

    At that point, shit-free products are only available if there is a clear indication that they will generate more profit than shitty products. And the handful of major players will happily collude to make sure only shitty products enter the market, increasing profits for everyone. Welcome to cartelville, population: the three companies that make up 95% of the world market.



  • Would be great but the manufacturer would be at a disadvantage because that bundled bullshit effectively subsidizes the device. So you’d have to either raise prices or accept a lower profit margin.

    Due to the high barrier to entry (e.g. because of patents) it’s unlikely that a privately owned company can make a big market entry, especially across countries. And a public company will be forced by the shareholders to maximize profit, so either you bundle crapware or they fire you as CEO.

    Of course if you look outside the TV market such devices already exist. High-quality digital signage devices can easily be had – for about three times the price of an equivalently-sized TV.





  • Oh, AI can be very useful. Just not the generative stuff that is currently trying to consume all resources of the entire solar system for nebulous potential benefits.

    A good example of AI that just works is document scanning. Get a picture of a document, locate the text, OCR it, figure out which parts of the text correspond to entry fields, and auto-populate those fields (rough sketch at the end of this comment). That works pretty well and can greatly speed up manual data entry. It’s not perfect, but the success rate is pretty good due to the constrained problem space, and even if you have to check all fields and manually correct 10% of them you still save a lot of time.

    An early example of this is the automated parsing of hand-written postal codes. That iteration of the tech has been in productive use since the 90s! (Yes, that’s just OCR but OCR is considered a field of AI.)

    It’s one of those unexciting applications of tech that don’t make major waves but do work.
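
    As a rough sketch of what that scan-to-fields pipeline can look like (this assumes Tesseract via pytesseract; the document fields and regex patterns are made up purely for illustration):

    ```python
    # Hypothetical scan-to-fields sketch: OCR a scanned document, then map
    # the recognized text onto entry fields. Field names/patterns are invented.
    import re
    import pytesseract
    from PIL import Image

    FIELD_PATTERNS = {
        "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*:?\s*(\S+)", re.I),
        "date": re.compile(r"Date\s*:?\s*(\d{4}-\d{2}-\d{2})", re.I),
        "total": re.compile(r"Total\s*:?\s*([\d.,]+)", re.I),
    }

    def extract_fields(image_path: str) -> dict:
        """OCR one scanned page and auto-populate the known entry fields."""
        text = pytesseract.image_to_string(Image.open(image_path))
        fields = {}
        for name, pattern in FIELD_PATTERNS.items():
            match = pattern.search(text)
            # Leave unmatched fields empty so a human can fill or correct them.
            fields[name] = match.group(1) if match else ""
        return fields

    print(extract_fields("scanned_invoice.png"))
    ```

    The human check-and-correct step still applies; the point is that the constrained problem (known fields, known layouts) is what makes the success rate good.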


  • I predict incremental quality increases. Qwen4 will probably be a somewhat better Qwen3 (and a dud if we’re unlucky). I do agree that it’ll probably come out; there’s not enough life left in this AI boom for a Qwen5, though.

    The biggest change will probably come from figuring out where LLM use will actually benefit us. Right now the industry seems to answer that with “everywhere” and concludes that it’s prudent to spend money equivalent to the GDP of an industrial nation on compute-only data centers.

    For example, I expect the use case for coding to be more like “autocomplete a code block based on known patterns” rather than “build a public-facing web application from a prompt”.




  • Except if they then have to run it on their machine and the setup instructions start with setting up a venv. I find that a lot of Python software in the ML realm makes no effort to isolate the end user from the complexities of the platform. At best you get a setup script that may or may not create a working venv without manual intervention, usually the latter (a minimal version of what such a script should do is sketched after this comment). It might be more of a Torch issue than a Python one, but it still means spending a lot of time messing with the Python environment to get things running.

    This may color my perception but the parts of the Python ecosystem I get exposed to as an end user these days feel very hacky. (Not all of it is, though; I remember from my Gentoo days that Portage was rock solid.)
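
    For contrast, here’s a minimal sketch of what I’d expect such a setup script to do before anything else; the .venv path and requirements.txt name are just assumptions, not any particular project’s layout:

    ```python
    # Hypothetical minimal setup script: create an isolated venv and install
    # dependencies into it, so the user never touches the system Python.
    import os
    import subprocess
    import venv

    ENV_DIR = ".venv"

    def main() -> None:
        # Create the virtual environment with pip available inside it.
        venv.create(ENV_DIR, with_pip=True)
        # Use the pip that belongs to the new environment, not the system one.
        bindir = "Scripts" if os.name == "nt" else "bin"
        pip = os.path.join(ENV_DIR, bindir, "pip")
        subprocess.check_call([pip, "install", "-r", "requirements.txt"])
        print("Environment ready; activate it from", os.path.join(ENV_DIR, bindir))

    if __name__ == "__main__":
        main()
    ```

    Even something this small would already cover the common case; picking the right Torch build is where it usually gets hairy in practice.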





  • It’s true that LLMs (and GANs) are taking over a term that covers a lot of other stuff, from fuzzy logic to a fair chunk of computational linguistics.

    If you look at what AI does, however, it’s mostly classification. Whether it’s fitting imprecise measurements into categories or analyzing a waveform to figure out which word it represents regardless of diction and dialect, a lot of AI is just the attempt at classifying hard-to-classify stuff.

    And then someone figured out how to hook that up to a Markov chain generator (LLMs) or run it repeatedly to “recognize” an image in pure noise (GANs). Those are cool little tricks, but not really ones that solve a problem that needed solving. Okay, I’ll grant that GANs make a few things in image retouching more convenient, but they’re also subject to a distressingly large number of failure modes and consume a monstrous amount of resources.

    Plus the whole thing where they’re destroying the concept of photographic and videographic evidence. I dislike that as well.

    I really like AI when used for what it’s good at: taking messy input data and classifying it (toy sketch below). We’re getting some really cool things done that way and some of them even justify the resources we’re spending. But I do agree with you that the vast majority of funding and resources gets spent on the next glorified chatbot in the vague hope that this one will actually generate some kind of profit. (I don’t think any of the companies that are invested in AI still actually believe their products will generate a real benefit for the end user.)
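
    Here’s the toy sketch of what I mean, using scikit-learn on synthetic noisy data; the classifier and parameters are arbitrary, not any specific real-world system:

    ```python
    # Illustrative only: "messy measurements in, categories out".
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for imprecise measurements: noisy, overlapping classes.
    X, y = make_classification(
        n_samples=2000, n_features=20, n_informative=8,
        n_classes=3, flip_y=0.05, random_state=0,
    )
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The actual "AI" step: assign each messy input to the most plausible category.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    ```

    Swap the synthetic features for real ones (audio features, scan pixels, sensor readings) and the overall shape of the problem stays the same.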