  • So… I had this realization just today: a lot of the software engineering I grew up with is not something that a lot of people now know how to do.

    When I was growing up, you had to make your own data structures. This was during the time when almost the whole of this chart was C. You had to do linked lists, sometimes you had to do hash tables. If you knew B-trees and graph algorithms, you were a super-fancy person and probably could make the big bucks if you wanted to.
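    For anyone who never had to do it, here is roughly what “make your own hash table” looked like. This is a minimal sketch in Python for readability; back then it would have been C, with malloc’d buckets and the attendant segfaults:

    ```python
    # Minimal separate-chaining hash table: the kind of thing you
    # wrote by hand before every language shipped one built in.
    class HashTable:
        def __init__(self, n_buckets=64):
            self.buckets = [[] for _ in range(n_buckets)]

        def _bucket(self, key):
            # Pick a bucket by hashing the key modulo the table size.
            return self.buckets[hash(key) % len(self.buckets)]

        def set(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)  # overwrite existing key
                    return
            bucket.append((key, value))

        def get(self, key):
            # Linear scan of the (hopefully short) chain.
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)
    ```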

    I don’t think that is what we need to go back to. I like being able to do big things with code super-quickly (including with LLM help) and not have to worry about trivial details or craft my own hash tables for every project. But I also worry a little that, outside of special environments or projects, the scope of what software is able to accomplish is starting to be limited to “what can I accomplish by pasting together some preexisting libraries in a pretty straightforward way.”

    And, because so much of what’s out there is assembled with that approach, we get these teetering mountains of dependencies underneath every single project. I have a strong feeling that there is some kind of mathematics which implies that the number of dependencies attached to the average project is growing exponentially year by year, up to the limit of what people have the patience to put up with in their build process, a limit which keeps increasing as computers get faster.
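    (A toy sketch of why that growth would be exponential, under the crude assumption that every library pulls in k libraries of its own: the transitive dependency count is a geometric series in the depth of the tree.)

    ```python
    # Toy model, not real math: every library depends on k other
    # libraries, each of which depends on k more, down to depth d.
    def transitive_deps(k: int, d: int) -> int:
        # k + k^2 + ... + k^d, a geometric series
        return sum(k**i for i in range(1, d + 1))

    for d in range(1, 6):
        print(d, transitive_deps(3, d))  # 3, 12, 39, 120, 363
    ```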

    I’m not even trying to say whether it’s good or bad, although there are aspects that worry me. I’m just saying it didn’t occur to me (again, literally until earlier today when I had this random realization) how much “being a software engineer” has changed. The idea that most of what someone does can be duplicated by the fairly stupid capabilities of currently available LLMs is a big confirmation of that.


  • Honestly, I think OpenAI messed up by making their service available for free. They were following the normal Silicon Valley model of providing it for free and then figuring out the revenue stream later, often by offering an additional paid tier of questionable value which very few people sign up for. That mostly doesn’t even work when your costs are limited to some not-trivial-but-not-exorbitant web hosting. When your costs are as astronomical as it takes to run an LLM, it’s a really bad idea, one which I think was just born out of imitation.

    If they’d offered GPT-3 as a subscription service that cost $50/month, for use by serious professionals or people with enough cash to spend that on playing around with it, people would have been impressed as hell that it was so cheap. IDK how many people would have signed up, but I can pretty well assure you that they would not be hemorrhaging money like they currently are. Of course, now that they’ve set the expected price point at “free,” there’s no going back.



  • Copilot is awful. It is clearly optimized to be cost-effective while still running thousands of queries a day, firing literally every time you touch the keyboard (which doesn’t mean they aren’t losing money hand over fist on it, just not as much as they otherwise would be).

    Just pay your $20/month to claude.ai and copy and paste code back and forth with it instead. It still can’t really understand, or work on problems above a certain size, but it is at least fairly capable within the limits of what modern LLMs can do. It also has some genuinely nice features, like being able to upload big chunks of code to a project as the “context” for what you’re working on, so that all chats happen within the frame of reference of that context, and it actually works.

    Of course, Anthropic will now hold all your employer’s code, which, depending on what you’re working on and how you feel about your employer, might not be ideal. But that was true anyway.


  • Inb4 “Substack Nazis boo Nazis Nazis Nazis.” That is incorrect. There were only like 3 neo-Nazis, with about 10 followers apiece, and anyway they kicked them all out a while back, because the entire internet was yelling at them.

    I think it’s highly likely that someone who is screaming at you about how Substack is full of Nazis and officially “bad” is either just looking for an opportunity to yell and virtue-signal about how their purity test is more pointlessly pure than yours (while not being up to date on the reality they’re talking about), or else is motivated by not wanting you and me to read Tim Snyder and Robert Reich. Either way, the Nazis are mostly just an excuse.

    I’m not planning to post any Nazi blogs. Everyone can relax.


  • I get your point, but I don’t quite think the dream is that they can have the AI make changes to the world, creating an immersive experience that is generated from first principles and responds realistically to what you’re doing.

    I think the dream is that they can finally get rid of all those artists, level designers, playtesters, and so on, who have been hoovering up salaries that could be getting spent on blowjobs for the executives or something else that’s really productive, and replace it all with a box that has one button that says “make game now.”


  • So I watched this talk, which I thought was quite good:

    https://www.youtube.com/watch?v=Jeb_mSOgrVg

    And I started thinking that it would be cool to put the magic system of a game under the control of an LLM. Pivotal decisions about what’s going to happen at a given point get passed to the LLM in a particular type of markup, and the answer of what happens in the world comes back in the same markup. You can give it guidelines, but skilled players can get around the guidelines or learn how to trick the system. There is also inherent fuzziness to the edges of how things happen, and inherent jank and risk to the system, because you can never be completely sure what is going to happen based on what you thought would happen.
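    I never wrote any of it, but the shape I had in mind was something like this sketch. Python here; ask_llm() is a hypothetical stand-in for whatever model API you’d actually call, and JSON stands in for “a particular type of markup”:

    ```python
    import json

    # Hypothetical stand-in: wire this up to whatever model API you use.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug your model call in here")

    GUIDELINES = (
        "You are the magic system of a fantasy game. Given the world state "
        "and the player's incantation, decide what happens. Reward creativity "
        "and emotional intensity, but stay roughly consistent with your "
        "prior rulings. Reply with JSON only, shaped like: "
        '{"effect": "...", "power": 0-10, "side_effects": ["..."]}'
    )

    def cast_spell(world_state: dict, incantation: str) -> dict:
        prompt = (
            GUIDELINES
            + "\n\nWorld state:\n" + json.dumps(world_state)
            + "\n\nIncantation:\n" + incantation
        )
        reply = ask_llm(prompt)
        try:
            return json.loads(reply)
        except json.JSONDecodeError:
            # The inherent jank: sometimes the model goes off-script,
            # and the game has to treat that as the spell fizzling.
            return {"effect": "fizzle", "power": 0, "side_effects": ["sparks"]}
    ```

    The except branch is where the jank and risk live: sometimes the model goes off-script, and the game just has to roll with it.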

    All the things that make modern AI not a good thing to put into your systems suddenly become advantages, from the perspective of immersing the player in a magic system without it being completely systematized. The magic goes back to being magical. You can literally have someone type out the incantation they’re using, and it may get more powerful if they put a lot of emoting into the box, or they can try something that isn’t anywhere in the system and just “see if it works”… but also, you can’t quite be sure what even the common stuff is going to do, 100% of the time.

    I never got serious enough about the idea to try it out, but I thought it had the potential to be super-cool.



  • I think it just feels more comfortable and “professional” also. I’ve talked with people for whom the jank associated with decentralized services is a total deal-breaker. They started a Mastodon account, but they picked the wrong instance and it got defederated because it had Nazis. Annoyed, they moved to a different instance, and then a few months later that instance’s admin evaporated without warning. Rather than find a third one, they turned their back on the whole endeavor as a hopeless kids-lemonade-stand waste of time.



  • “I’m using an LLM architecture that’s better suited to summarisation meaning it won’t invent false facts like traditional gpts do.”

    What architecture is that? If you had an LLM that didn’t hallucinate, there would surely be papers about the breakthrough.

    “The ai has no more misinformation than a human journalist.”

    And that, dear reader, was when the work of foolishness became something much more sinister.

    Humans, and trust in humans, are important. The internet divorced the human face and the accumulation of trust from the news, which has allowed engineered alternative facts to enter the mainstream consciousness; that might be the single most harmful development in a modern age that has no shortage of them. I am not trying to tell you that your summarizer project is automatically responsible for that. But be cautious about what future you’re constructing.


  • “If u don’t like this please just block the community no need to complain or downvote.”

    Best of luck with that!

    I’m actually not trying to poop on some cool new thing you’re setting up, but I think it is pretty clear at this point that building a system that uses an LLM to produce factual information for people is a recipe for your system getting well-deserved criticism.

    Also, pay your journalists. Anything that takes them out of the equation will at some point lead to X and YouTube being the only sources of news, feeding everybody whatever somebody feels like paying to produce and distribute for “free.”