I’ve been using it for a long time now; personally I’ve never heard anyone else use it to describe them. Always thought it was a more appropriate term for how they actually work.
It is more appropriate for LLMs, but not for diffusion models (imagegen). Those are more “throw shit at a wall and refine it a thousand times” (whereas LLMs just grab whatever looks similar to what they want). It’s why generated images usually look normal at a glance but fall apart the moment you pay attention to details: the AI judges the whole image to be close enough to training images matching the prompt, instead of having any intent behind individual parts.
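That “refine it a thousand times” loop can be caricatured in a few lines. This is a toy sketch only, with no neural net: `toy_denoise` and its linear “noise predictor” (just the difference from the target) are stand-ins for what a real diffusion model learns, but the shape of the process is the same: start from pure noise and repeatedly nudge the whole sample toward something plausible.

```python
import random

def toy_denoise(target, steps=1000, seed=0):
    """Toy illustration of diffusion-style refinement: start from pure
    noise and repeatedly nudge the whole 'image' toward the target.
    A real model predicts the noise with a trained network; here the
    'predictor' is simply the difference from the target."""
    rng = random.Random(seed)
    # start from pure random noise
    x = [rng.uniform(-1.0, 1.0) for _ in target]
    for _ in range(steps):
        # each step removes a small fraction of the estimated noise,
        # judging the sample as a whole rather than any part in isolation
        x = [xi + 0.01 * (ti - xi) for xi, ti in zip(x, target)]
    return x

# after ~1000 small steps, the noise has converged near the target
result = toy_denoise([0.5, -0.3, 0.8])
```

Note how no single step “decides” anything about an individual value; every value just drifts with the whole, which is the intuition behind details falling apart under close inspection.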