• 1 Post
  • 68 Comments
Joined 6 months ago
Cake day: August 30th, 2025


  • Thorry@feddit.org to Programming@programming.dev · We mourn our craft · edited 4 days ago

    Writing code with an LLM is often actually less productive than writing without.

    Sure, for some small tasks it might poop out an answer real quick, and it may look like something good. But it only looks like it; checking whether it actually is good can be pretty hard. It is much harder to read and understand code than it is to write it. And in cases where a single character is the difference between having a security issue and not having one, those mistakes are very hard to spot. People who say they code faster with an LLM just blindly accept the given answer, maybe with a quick glance and some simple testing — not an in-depth code review, which is hard and costs time.

    Then there are all the cases where the LLM messes up and doesn’t give a good answer, even after repeated back and forth. Once the thing is stuck on an incorrect solution, it’s very hard to get it out of there. And it becomes a nightmare once the context window runs out: the tool will say something like “Summarizing conversation”, which means it drops lines from the conversation that it deems superfluous — even if those are critical requirement descriptions.

    There’s also the issue that an LLM simply can’t do a large, complex task. They’ve tried to fix this with agents and planning modes and such, breaking everything down into smaller and smaller parts so each one can be handled. But nothing keeps an overview of the mismatched set of nonsense it produces — something a real coder is expected to handle just fine.

    The models are also always trained on data from a while back, which can be really annoying when working with something like Angular. Angular gets frequent updates, and those usually bring breaking changes, updated best practices, and sometimes entire paradigm shifts. The AI simply doesn’t know what to do with the new version, since it was trained before that version existed, and it will spit out Stack Overflow answers from 2018 — especially the ones with comments saying to never ever do that.

    There’s also so much more to being a good software developer than just writing the code. The LLM can’t do any of those other things; it can only write the code. And by not writing the code ourselves, we are losing an important part of the process. That’s a muscle that needs flexing, or the skill rusts and goes away.

    And now they’ve poisoned the well, flooding the internet with AI slop and destroying it in the process. Website traffic has gone up, but actual human visits have gone down. Good luck training new models on that garbage heap of data. It might be fine for now, but as new versions of stuff get released, the LLMs will get more and more out of date.



  • Good to hear it! I was afraid you’d just gone off the headline and not the contents, since the author works in the AI field and the article is, in my opinion, pro-AI. I apologize; you have obviously done your homework.

    I agree, it’s fucking crazy what this dude says. He’s like: sure, LLMs are flawed to the bone, but you just have to accept that and work around it. Just build a fence around it — and you can even build that fence using AI! I mean, WTF…

    One of the reasons I think the article is pro-AI is lines like this one:

    This is not at all an indictment of AI. AI is extremely useful and you/your company should use it.


  • Have you actually read the article? It isn’t anti-AI; it’s actually very much pro-AI. All it says is that there are a lot of companies duping people at other companies (the ones that use AI) in order to sell their shit.

    He argues that so-called AI security companies sell solutions for problems that are inherent to the technology and thus can never be fixed. By demonstrating a problem and then offering the solution to that problem, they make people think something is actually being fixed. In reality it only fixes that one specific problem and leaves open the almost infinite number of very similar issues.

    His argument is to handle AI security by getting someone who really knows what is what (how one would find that person, or distinguish them from bullshitters, is a mystery to me). Some issues are just part of the deal with AI, so they have to be accepted and managed where possible. Other issues should be handled upstream or downstream, and he argues AI could be applied in those places as well.

    I agree with his argument: it is total bullshit to show the flaws in LLMs and then claim to fix them with expensive software that doesn’t actually solve the core issue (because that is impossible). However, in my experience this kind of thing has more or less always happened. I’m not sure whether it’s happening more now, or whether it’s just easier because the general understanding of AI is so low.



  • This reminds me of a thought experiment where a superhuman AI that runs everything is itself actually run by humans. The humans just have a regular job: they wake up, do their human morning things and go to work. There they do stuff, maybe on a computer or maybe with paper — it doesn’t really matter, as long as the work is mysterious and important. Every day they do pretty much the exact same thing, just like most jobs. Then at the end of the day they go home and live a regular, normal life.

    The idea is that what these humans do is how the AI actually works or “thinks”. The humans don’t know exactly what they are doing, as each task is only a very small part of a greater whole, so they don’t control or influence it directly. The work is done by enough people to make the AI smart and fast enough to be useful. The AI needs the humans in order to work, and in turn the AI runs everything for the humans, so they need each other.







  • I’m pretty sure it isn’t that easy to do what that dude did; it’s a multi-step process. It doesn’t say “This will delete your data, are you sure you want to continue?”, but it also isn’t like he clicked the X in the top right and all of the data was gone. The wording of the function is also pretty clear, and there are a lot of ways to find out what it does. The dude even admits he wanted to know if he could toggle it and still have access to his data — but instead of asking the chatbot beforehand, he just tried it, then cried foul when it actually locked him out.


  • I’m sorry, but WHAT? How do you work on stuff for 2 years and have NO BACKUPS? Like dude, WTF. I have backups of backups, and version histories of everything I do. I have physical backups, cloud backups, off-site backups, you name it. If I put effort into creating something, it’s worth putting in effort to keep it safe.

    This dude was outsourcing his brain to a dumb chat service and lost the ability to think. His brain was so fried he actually tried the feature that said it would lock him out of his data, just to see if he’d actually get locked out.

    Not to mention the dude worked on grants and papers and as a professor, and now the bullshit generator has tainted all of that. Imagine taking out a huge student loan to get a good education, and your professor just outsources it to fucking ChatGPT. I would be hella mad. Not to mention things like grants and early research are often covered by an NDA, and my man just uploaded all of that to some shitty US company.




  • Machine learning has niches it’s useful in (although it has sometimes been the hammer that makes everything look like a nail). Large language models, on the other hand? I haven’t seen them be useful in anything. Any results they generate are questionable at best and a lot of the time just plain wrong. With all the costs and other downsides, I think we should classify them as a failed technology and stop using them. The only reason anybody uses them today is the huge amount of marketing behind them, and the fact that the technology is being given away for almost nothing. If the actual cost were charged, I doubt there would be much interest. We’ve seen all those AI companies struggling to generate revenue: their costs are in the billions while their revenue from AI products is in the millions. It doesn’t add up.


  • And if you look up tech support for this, it’s all unrelated nonsense tips. Even the official sources don’t go much beyond turn it off and on, reset the settings, reinstall the app, or reinstall the OS. While this might “solve” stuff, it doesn’t address the core issue, and the problem may re-occur without a real fix. Why go through all the trouble of wiping and reinstalling when there is often a very focused and simple fix that truly solves the issue? But Microsoft has been making it harder for users to have ownership over their systems for years now. Keep your users dumb, and then you can control them.



  • It seems like the future is pretty chill about privacy. Like in one episode they know down to the millisecond when Riker was abducted from the ship. However, that information only became available when a ranking officer asked for it directly in the course of an investigation. It’s like: sure, we have all this stuff spying on people all the time, but we’re not going to use that data except in an emergency.

    That’s the big difference between that fictional sci-fi future and the future we’re heading for. These data companies don’t even try to hide it. They’re like: yeah, we want your data to sell to our 1094 partners. They have all of the worst features and none of the good ones.