• 0 Posts
  • 99 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I actually seem to remember that back in ~Wrath of the Lich King (World of Warcraft), Blizzard WASN’T doing this.

    While Blizzard had enough capacity to handle 12+ million people trying to download the update, because they had prepped for it, the internet itself did not, and I want to say Verizon basically got its backbone DDoS’d and taken down.

    Needless to say, Blizzard started breaking up its updates, using CDNs and cache servers, etc., because Verizon had some very choice words (possibly coming from their legal department).




  • Looks like I mixed up two different cases: the cause of one, and the duration of another.

    weev (who is apparently a giant asshole) was the one who got sent to jail in 2010 for accessing a completely public URL that AT&T wished he hadn’t. The EFF took up his case. His sentence was later vacated by another court: so many civil rights lawyers kept joining his team pro bono that the court tossed the case out on a blatant technicality to make the issue go away, so he only served ~2 years.

    As for the CFAA being used to slap people with life sentences, there are too many examples to know which one I was mixing it up with. Aaron Swartz is the classic example.








  • The problem is, who do you define as professionals? I’m a professional software engineer. I argue that there is no responsible way to use AI at the moment: it uses too many resources for far too worthless a result. Everything useful that an AI can do is currently better (and cheaper) to do another way, save perhaps live transcription.

    Do you define Sam Altman as a professional? Because his guidance wants the entire world to give up 10% of worldwide GDP to his company (yes, seriously!). He’s clearly touched in the head, or on drugs. Should we follow his advice?



  • ysjet@lemmy.world to Fuck AI@lemmy.world · On Exceptions · 3 months ago

    These days, they’re usually racks and racks and racks of specialized rackmount servers with all kinds of hardware (hilarious amounts of RAM, networked storage, tensor cores, etc.) stuffed inside, all networked together via fiber optics to run in parallel as one big PC with many CPUs.




  • ysjet@lemmy.world to Fuck AI@lemmy.world · On Exceptions · 3 months ago

    I think you’ve misunderstood what I was saying: I don’t have spreadsheets of statistics on requests for LLM AIs vs non-LLM AIs. What I have is exposure to a significant number of AI users, each running different kinds of AIs, and I see what kind of AI they’re using, for what purposes, and how well it works or doesn’t.

    Generally, LLM-based stuff really only returns ‘useful’ results for language-based statistical analysis, which NLP handles better, faster, and vastly cheaper. For the rest, they really don’t seem to be returning useful results; I typically see a LOT of frustration.

    I’m not about to give any information that could doxx myself, but the reason I see so much of this is because I’m professionally adjacent to some supercomputers. As you can imagine, those tend to be useful for AI research :P




  • ysjet@lemmy.world to Fuck AI@lemmy.world · On Exceptions · 3 months ago

    Again with the conflation. They clearly mean GPTs and LLMs from the context they provide; they just don’t have another name for it, mostly because people like you like to pretend that AI is shit like ChatGPT when it benefits you, and that regular machine learning is AI when it benefits you.

    And no, GPTs are neither needed nor used as a base for most of the useful tech, because anyone with any sense in this industry knows that good models and carefully curated training data get you more accurate, reliable results than large amounts of shit data.


  • ysjet@lemmy.world to Fuck AI@lemmy.world · On Exceptions · 3 months ago

    Using ChatGPT to recall the word ‘verisimilar’ is an absurd waste of time and energy, and in no way justifies the use of AI.

    90% of LLM/GPT use is a waste, or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.