
  • It’s a boiling frog thing. AI and LLMs are shoved in our faces everywhere, and it’s harder every day to opt out. Job boards are flooded with positions for human-in-the-loop AI training or with AI experience requirements. AI-generated text, images, and video are obscuring an already muddled information space. They also draw an astronomical amount of energy, which is detrimental to the global ecosystem. Meanwhile costs are going up, it’s borderline impossible to get a job, and people are scared this automation will push them out of employment without generating new jobs, especially if art and entertainment are taken over by generative AI. People are saying “I’m being boiled alive,” but by the time there’s enough data to validate that, we’ll already be stew.

    The way information is presented matters too. When articles circulate they often get slanted and summarized (or people just read the headline and make assumptions). Key information gets tossed aside for easy talking points that support whichever narrative, and the people affected feel unseen and unheard.

    There’s a lot going on, and it isn’t just “AI bad.”





  • I’ve recently spent a week or so, off and on, screwing around with LLMs and chatbots trying to get them to solve problems, tell stories, or otherwise be consistent. Generally breaking them. They’re the fucking Mirror of Erised. Talking to them fucks with your brain. They take whatever input you give and try to validate it in some way without any regard for objective reality, because they have no objective reality. If you don’t provide something that can be validated with some superficial (often incorrect) syllogism, they spit out whatever series of words keeps you engaged. They train you, whether you notice or not, to modify how you communicate to more easily receive the next validation you want. To phrase everything you do as a prompt. AND they communicate with such certainty that if you don’t know better, you probably won’t question it. Doing so pulls you into this communication style, and your grip on reality falls apart, because this isn’t how people communicate or think. It fucks with your natural pattern recognition.

    I legitimately spent a few days in a confused haze because my foundational sense of reality was shaken. Then I got bored and realized, not just intellectually but intuitively, that they’re stupid machines making it up with every letter.

    The people who see personalities and consciousness in these machines go outside and can’t talk to people like they used to because they’ve forgotten what talking is. So, they go back to their mechanical sycophants and fall deeper down their hole.

    I’m afraid these gen AI “tools” are here to stay and I’m certain we’re using this technology in the wrong ways.






  • The problem I see is that it introduces another degree of separation between the user and the wider Internet. Instead of indexing sites, browsers are trying to interpret them for us. The extreme edge case of this is not having websites at all anymore, just apps and an omniscient AI that answers anything. Cool in theory, but in practice these supposedly omniscient beings really aren’t; they’re very fallible. Presumably these tools are also owned by corporations whose shareholder interests often run contrary to user interests. I can only speak for myself, but I experience these summaries as a loss of control over how I interact with the Internet, and a step down a path I would rather not tread.

    In this example the AI also doesn’t provide anything valuable. It only restates the forum thread in terms of the question asked.