• 6 Posts
  • 367 Comments
Joined 3 years ago
Cake day: August 15th, 2023


  • I am making a slightly different point and have a bias to this perspective: https://www.legis.iowa.gov/docs/publications/SD/19230.pdf

    I am saying that an SSN can be part of a larger validation scheme, not the only key to the castle. Specifically for government sites, SSNs can be linked to IRS data to verify places of last residence. A person generally needs to verify multiple items that are referenced by the SSN before basic authentication can be established and set by the user. (This is part of the full Authentication, Authorization and Access Control triad.)

    An SSN is just a broad level identifier. If you look at many laws around the release of SSNs, the redaction is usually in place to prevent the linking of different documents and other data points.

    If I released my SSN in this chat, I could be fully doxxed in a matter of seconds. It’s mainly because there are many legal systems in place that use an SSN as a primary key, of sorts. (It’s a bit more than that, as SSNs can be duplicated in some circumstances.)

    So saying that, at a high level, an SSN is considered private is absolutely correct. However, it’s so easily referenced and obtained that it really isn’t fully private either.

    If I were to generate a full list of every possible SSN in the US (which I have done, multiple times), that list is effectively useless to anyone who obtains a copy of it. So, by itself, an SSN is effectively public.
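    To make the point concrete, here’s a rough sketch of what "every possible SSN" amounts to. It assumes the standard syntactic rules (area 001–899 excluding 666, group 01–99, serial 0001–9999, no all-zero fields); real issuance rules differ in details, but the order of magnitude is the point:

    ```python
    from itertools import product

    # Assumed syntactic rules for a valid SSN (AAA-GG-SSSS):
    areas = [a for a in range(1, 900) if a != 666]  # 898 valid area numbers
    groups = range(1, 100)                          # 99 valid group numbers
    serials = range(1, 10000)                       # 9999 valid serial numbers

    total = len(areas) * len(groups) * len(serials)
    print(total)  # 898 * 99 * 9999 = 888,931,098 possible SSNs

    def all_ssns():
        """Yield every syntactically valid SSN (a sketch, not issuance reality)."""
        for a, g, s in product(areas, groups, serials):
            yield f"{a:03d}-{g:02d}-{s:04d}"

    print(next(all_ssns()))  # first entry: 001-01-0001
    ```

    Under a billion nine-digit strings: trivially enumerable, which is exactly why a bare SSN with no linked data points tells you nothing.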


  • These findings suggest LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns

    lulzwut? LLMs aren’t internalizing jack shit. If they exhibit a bias, it’s because of how they were trained. A quick theory would be that the interwebs is packed to the brim with stories of “all in” behaviors intermixed with real strategy, fiction or otherwise. I speculate that there are more stories available in forums of people winning doing stupid shit than there are of people losing because of stupid shit.

    They exhibit human bias because they were trained on human data. If I told the LLM to only make strict probability based decisions favoring safety (and it didn’t “forget” context and ignored any kind of “reasoning”), the odds might be in its favor.
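    A “strict probability-based decisions favoring safety” policy is just expected-value maximization. A toy sketch with made-up numbers (mine, not the study’s):

    ```python
    # Toy sketch: a strictly probability-driven policy picks the action with
    # the highest expected value, not the most dramatic one. All numbers here
    # are illustrative assumptions.
    def expected_value(p_win: float, win_amount: float, lose_amount: float) -> float:
        return p_win * win_amount + (1 - p_win) * lose_amount

    stack = 100
    # "All in" on a 40% shot to double up vs. a small safe bet at 55% to win 10.
    all_in = expected_value(0.40, stack, -stack)  # negative EV (about -20)
    safe = expected_value(0.55, 10, -10)          # positive EV (about +1)

    best = max([("all_in", all_in), ("safe", safe)], key=lambda t: t[1])
    print(best[0])  # the probability-driven policy takes the boring bet
    ```

    A model that actually computed this every hand would grind out the positive-EV line; one pattern-matching forum bravado goes all in.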

    Sorry, I will not read the study because of that one sentence in its summary.


  • When I use it, I use it to create single functions that have known inputs and outputs.

    If absolutely needed, I use it to refactor old shitty scripts that need to look better and be used by someone else.

    I always do a line-by-line analysis of what the AI is suggesting.

    Any time I have leveraged AI to build out a full script with all desired functions all at once, I end up deleting most of the generated code. Context and “reasoning” can actually ruin the result I am trying to achieve. (Some models just love to add command line switch handling for no reason. That can fundamentally change how an app is structured, and it’s not always desired.)
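    The “single functions with known inputs and outputs” workflow above can be sketched like this: ask for one small function, then pin it down with the cases you already know before trusting it. (The `slugify` task here is a hypothetical example of mine, not from the original comment.)

    ```python
    # Sketch of the workflow: one small function with known inputs/outputs,
    # verified line by line before it goes anywhere near real code.
    import re

    def slugify(title: str) -> str:
        """Lowercase the title and join alphanumeric runs with hyphens."""
        words = re.findall(r"[a-z0-9]+", title.lower())
        return "-".join(words)

    # The known input/output pairs act as the acceptance check:
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Multiple   Spaces ") == "multiple-spaces"
    assert slugify("Already-Slugged") == "already-slugged"
    print("all known cases pass")
    ```

    Because the contract is fixed up front, any generated implementation either passes the known cases or gets deleted, which keeps the line-by-line review tractable.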


  • Good luck with that, I suppose. Botnets can have thousands, if not hundreds of thousands, of infected hosts that will endlessly scan everything on the interwebs. Many of those infected hosts are behind NATs, and your abuse form would be the equivalent of reporting an entire region for a single scan.

    But hey! Change the world, amirite?