-------------------------------------------------------------------
Your collective defense against AI-generated spam and content farms
-------------------------------------------------------------------
We made it our mission to prevent the web from becoming a useless and harmful space.
"As we collectively identify and validate slop across the web, Kagi’s SlopStop initiative will help transform those insights into a comprehensive, structured dataset of AI slop – an invaluable resource for training AI models.
Access to the database will be shared soon. Use this form to express your interest if you’d like to receive updates."
Interesting, thank you. From the text here,
especially the part “an invaluable resource for training AI models”, the absence of any community-focused language, and the fact that Kagi, IIRC, is still operating at a loss and looking for ways to become profitable, I fear it will be essentially commercial. But who knows; and even if it is, that’s still not to say it’s all bad.
@Novocirab
Small side note:
Kagi isn’t at a loss; they’re profitable. They said so earlier this year.
@lazycouchpotato
Kagi is already profitable. A good sign, imo.
See: https://blog.kagi.com/what-is-next-for-kagi where they say “We are also thrilled to report that we have achieved profitability” on May 30 of last year.
As for the AI models, I see this as only really being a good thing. Either existing AI models (i.e., LLMs) get trained to produce less sloppy content, so that less of what they generate is inaccurate and damaging to the web’s integrity as a source in the first place; or the dataset is used to train AI models that detect slop, as they’ve said they want to do, in which case low-quality slop becomes easier to block out without even needing human reports for the majority of site blocks.
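To make the second possibility concrete, here is a minimal, purely illustrative sketch of how a labeled slop/not-slop dataset could train a detector. Nothing here reflects Kagi's actual system: the tiny corpus, the labels, and the naive Bayes approach are all invented assumptions for illustration; a real detector would use far more data and a far stronger model.

```python
# Hypothetical sketch: training a trivial naive Bayes "slop" detector
# from a small labeled dataset. All example texts and labels are made up.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs, label in {"slop", "ok"}."""
    counts = {"slop": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(counts, totals, text):
    vocab = set(counts["slop"]) | set(counts["ok"])
    scores = {}
    for label in ("slop", "ok"):
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((counts[label][tok] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Invented mini-dataset standing in for community-reported labels.
examples = [
    ("unlock the ultimate secrets top 10 amazing tricks", "slop"),
    ("in this ultimate guide we reveal amazing secrets", "slop"),
    ("the patch fixes a race condition in the scheduler", "ok"),
    ("benchmarks show a modest regression on large inputs", "ok"),
]
counts, totals = train(examples)
print(classify(counts, totals, "amazing ultimate secrets revealed"))  # prints "slop"
```

The point of the sketch is just the pipeline shape: human reports supply the labels, the labels train a model, and the model can then flag new pages without a human report for each one.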