

I haven’t played since before Wizards started pushing Commander, at that time called EDH.
So you haven’t played in two decades. Hard to take your opinion seriously. You can play for free with their app now.
The AI is learning on its own.
Doesn’t for me, maybe the mobile version is different.
deleted by creator
They killed him because of his strongly worded blog post.
You guys both posted it within a few seconds of each other judging by my app updating the time. Impressive.
All LLMs and Gen AI use data they don’t own. The Pile is all scraped or pirated info, which served as a starting point for most LLMs. Image gen is all scraped from the web. Speech to text and video gen mainly uses YouTube data.
So either you put a price tag on that data, which means only a handful of companies can afford to build these tools (including Meta), or you accept that piracy is the only way for most to acquire this data. And since the use is highly transformative, it isn’t breaching copyright or directly stealing from anyone the way piracy “normally” does.
I’m being pragmatic.
The existence hinges on the rewriting and strengthening of copyright laws by data brokers and other cancerous tech companies. It’s not Meta vs us, but opensource vs Google and Openai.
They are being sued for copyright infringement when it’s clearly highly transformative. The rules are fine as is, Meta isn’t the one trying to change them. I shouldn’t go against my own interests and support frivolous lawsuits that will negatively impact me just because Meta is a boogeyman.
Maybe a bug? Last post was 60 minutes ago.
Don’t give me that slop. No one except the biggest names is getting a dime out of it once OpenAI buys up all the data and kills off their competition. It’s also highly transformative, which used to be perfectly legal.
Copyright laws have been turned into a joke, only protecting big money and their interests.
Meta has open-sourced every single one of their LLMs. They essentially gave birth to the whole open LLM scene.
If they start losing all these lawsuits, the whole scene dies and all those nifty models and their fine-tunes get removed from Hugging Face, to be repackaged and sold back to us with a subscription fee. All the other domestic open source players will shut down too.
The copyright crew aren’t the good guys here, even if it’s spearheaded by Sarah Silverman and Meta has traditionally played the part of the villain.
I don’t consider it immoral regardless tbh
This is valid just on taste alone. The thing was ugly even before Elon started his descent into madness.
Huge pet peeve of mine as well. No one needs my phone number.
The context only mattered because you were talking about the bot missing the euphemism. It doesn’t matter if the bot is invested in the fantasy; that is what it’s supposed to do. It’s up to the user to understand it’s a fantasy and not reality.
Many video games let you do violent things to innocent NPCs. These games are invested in the fantasy, as well as trying to immerse you in it. Although it’s not exactly the same, it’s not up to the game or the chatbot to break character.
LLMs are quickly going to be included in video games, and I would rather not have safeguards (censorship) just because a very small percentage of people with clear mental issues can’t deal with them.
The advertising team will just blame it on something else. They have the numbers showing their ads are being watched, everything else is conjecture.
I don’t really follow. Open source is still a net benefit regardless of the government’s investment in closed source, especially for the consumer.
I agree it’s highly likely there’s a scam going on, but healthy competition will probably force them to actually use some of the funds. If they had a monopoly, it would be easier to hand us a minimum viable product and call it a sound investment. They can’t be too blatant about it, after all.
I think there’s a place for regulation in cases of gross negligence or purposefully training it to output bad behavior.
When it comes to honest mistakes, I don’t really believe in liability. These platforms always carry warnings about not trusting what the AI says.
I like to compare it to users on social media. If someone on Lemmy told you to use peanut butter, they wouldn’t really be at fault, nor would the instance owner.
AI systems don’t present themselves as scientific papers. If you’re taking what random redditors and autocomplete bots say as truth, that’s on you, so to speak.
Those conversations didn’t happen at the same time, from what I gather. These things don’t have infinite context size, and at the rate he seemed to be using it, the conversation probably “resets” every few days.
No actual person would be charged for these kinds of messages in any case, pure exaggeration imo.
It most likely won’t be shared since Israel has a history of letting these attacks happen so their slaughter of innocent civilians can appear justified.