Congratulations, you read the headline.
Learn how to have a conversation
That’s only article-worthy because it is a rare occurrence and an increasingly controversial opinion. And even that maintainer didn’t abandon TS completely (he said that would be “daft”); he just moved to types via JSDoc, which is run through the TS compiler, and to .d.ts files.
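For anyone unfamiliar with that setup, here’s a minimal sketch of what “types via JSDoc” looks like: a plain .js file with no TypeScript syntax that the TS compiler can still type-check (the function is illustrative, not from the maintainer’s project).

```js
// @ts-check
// The directive above asks the TypeScript compiler to type-check this plain JS file.

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

add(1, "2"); // tsc flags this: 'string' is not assignable to 'number'
```

You keep most of the type safety with no compile step for the source itself; the types live in comments and, where needed, in hand-written .d.ts files.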
Well, yes. TypeScript mitigates one big problem with JavaScript (type safety). That’s why it exists. It’s a dumb idea to choose vanilla JS over TS if you’re starting a new project today, IMO.
Whether or not you should use TS as your core language is debatable and situational, but in terms of using TS instead of JS, yeah, that’s a no-brainer.


> violates licenses

Not a problem if you believe all code should be free. I’m being cheeky, but this has nothing to do with code quality, despite being true.
> do the thinking

This argument can be used equally well in favor of AI assistance, and it’s already covered by my previous reply.
> non-deterministic

It’s deterministic when you want it to be: with greedy decoding (temperature zero), the same input produces the same output.
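To spell that out with an illustrative sketch (not any particular product’s API, and assuming fixed weights and a deterministic numeric environment): greedy decoding picks the next token as a pure argmax over the model’s output, so repeated runs on the same input agree.

```ts
// Greedy decoding: always take the highest-scoring token.
// A pure argmax over the logits, so the result is deterministic.
function pickTokenGreedy(logits: number[]): number {
  let best = 0;
  for (let i = 1; i < logits.length; i++) {
    if (logits[i] > logits[best]) best = i;
  }
  return best;
}

// In this sketch, randomness only enters at temperature > 0, where the
// next token is sampled from a distribution instead of chosen by argmax.
```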
> brainstorming

This is not what a “good developer” uses it for.


We have substantially similar opinions, actually. I agree with your points about good developers having a clear grasp of all of their code, the ethical issues around AI (not least of which are licensing issues), skill loss, hardware prices, etc.
However, what I have observed in practice is different from the way you describe LLM use. I have seen irresponsible use, and I have seen what I personally consider to be responsible use. Responsible use involves taking a measured and intentional approach to incorporating LLMs into your workflow. It’s a complex topic with a lot of nuance, like all engineering, but I would be happy to share some details.
Critical review is the key sticking point. Junior developers also write crappy code that requires intense scrutiny, and it’s not impossible (or irresponsible) to use code written by a junior in production, for the same reason. For a “good developer,” many of the quality problems are mitigated by putting roadblocks in place to…
When it comes to making safe and correct changes via LLM specifically, I have now seen plenty of “good developers” in real life who have engineered their workflows to use AI cautiously like this; a sketch of the pattern follows.
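Here is a hypothetical sketch of one such roadblock (every name here is illustrative, not a real tool’s API): each model-proposed edit is surfaced for review, and nothing lands without explicit human approval.

```ts
// Hypothetical shape of a model-proposed change (illustrative only).
interface ProposedEdit {
  file: string;
  diff: string; // unified diff the model wants to apply
}

// Apply edits one at a time, each behind an explicit human decision.
async function applyWithReview(
  edits: ProposedEdit[],
  approve: (edit: ProposedEdit) => Promise<boolean>, // e.g., show the diff, prompt y/n
  write: (edit: ProposedEdit) => Promise<void>,      // actually applies the diff
): Promise<void> {
  for (const edit of edits) {
    if (await approve(edit)) {
      await write(edit);
    }
    // Rejected edits are simply dropped; nothing lands unreviewed.
  }
}
```

The specifics don’t matter; the shape does: the human is the gate on every write.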
Again, though, I share many of your concerns. I just think there’s nuance here and it’s not black-and-white or all-or-nothing.


You’re wrong, whether you figure that out now or later. Using an LLM where you gatekeep every write is something that good developers have started doing. The most senior engineers I work with are the ones who have adopted the most AI into their workflow, and with the most care. There’s a difference between vibe coding and responsible use.


Ah, okay, I understand now. Rocks are nutritious—and whisker pants.


Out of curiosity, would you explain your reply and your immediate parent’s comment for me? “Sez” is a bit old-fashioned but didn’t seem too weird, but then “date of poisoning”: are you implying an LLM wrote that, and that “sez” has something to do with pinpointing some poisoning of the model?


100% agree


I think we mostly agree. And I do agree that “flawed security can be worse than no security at all.” In this case, though, I don’t think it makes security worse; it just doesn’t make it that much better.
But even simple filters can make a significant difference: maybe you remember the early-ish Lemmy debacle of turning off captchas for signups by default, ostensibly because captchas are now completely defeated… which led to thousands and thousands of bot accounts being created pretty much immediately across a bunch of instances, and the feature being turned back on by default.


Both things can be true. And yes, it’s pretty much indisputably better for security.
But you know what would be even better for security? Not allowing any third-party code at all (i.e., no apps).
Obviously that’s too shitty and everyone would move off of that platform. There’s a balance that must be struck between user freedom and the general security of a worldwide network of sensitive devices.
Users should be allowed to do insecure things with their devices as long as they are (1) informed of the risks, (2) prevented from doing those things by accident if they are not informed, and (3) as long as their actions do not threaten the rest of the network.
Side-loading is perfectly reasonable under those conditions.


There is legitimate research on the effects of ingesting methylene blue. Don’t confuse that with pseudoscience. There’s probably plenty of pseudoscience around it, but it’s not (at its core) naturopathy/homeopathy/voodoo.
Just started using Linkwarden, been cool so far.
No, it’s obvious to anyone with a brain. If the commenter seriously thought it might have been a false positive when they read the original comment, they never would have relayed their thought the way they did in their reply, and it is so clearly a reference to the content of the post that to analyze it even that deeply is overkill. To anyone reading this who is a native English speaker: if you think that comment needs a “/s”, you need to work on your reading comprehension. Read things more carefully.


Since always, without a subpoena. Until PRISM, at least.


That’s not the issue I was replying to at all.
> replace jobs wholesale with no oversight or understanding that need a human to curate the output
Yeah, that sucks, and it’s pretty stupid, too, because LLMs are not good replacements for humans in most respects.
> we
Don’t “other” me just because I’m correcting misinformation. I’m not a fan of corporate bullshit either. Misinformation is misinformation, though. If you have a strong opinion about something, then you should know what you’re talking about. LLMs are a nuanced subject, and they are here to stay, for better or worse.


Yep, you’re exactly right. That’s a great way to express it.


This is an increasingly bad take. If you worked in an industry where LLMs are becoming very useful, you would realize that hallucinations are a minor inconvenience at best for the applications they are well suited for, and that the tools are getting better by leaps and bounds, week by week.
edit: Like it or not, it’s true. I use LLMs at work, most of my colleagues do too, and none of us use the output raw. Hallucinations are not an issue when you are actively collaborating with the model and not using it to either “know things for you” or “do the work for you.” Neither of those things is what LLMs are really good at, but that’s what most laypeople use them for, so these criticisms are very obviously short-sighted to those of us who have real-world experience with them in a domain where they work well.
Your point was “some people don’t think it’s a no-brainer,” which I addressed, and then you whipped out that line. I’ve been around long enough to know what that means: that your replies would be inflammatory garbage from then on. Learn how to interact with people online in a civil way and maybe you’ll actually be able to maintain a conversation long enough for it to be constructive.