

Poor moderator probably had a foot fetish
This would also effectively ban the use of any research produced by a Chinese national. Any paper that cites the work of Chinese labs (most of them) would be illegal, since this could be interpreted as aiding Chinese AI research.
The most important thing to note is that 80% of the supply is held in a single wallet. The rug-pull potential is high even by crypto standards.
The LLaMA-1 paper acknowledged the use of the books dataset; LibGen isn't mentioned in any of the papers, so this is new info.
And I declare the US to be “South Canada”
My solution:
The outer square lines in the third column/row are the difference between the first two items in that row/column: only outer lines that appear exactly once carry over to the third shape. The center lines seem to be only those center lines that appear in both shapes. Therefore x is 52, since all the outer lines cancel and there are no shared center lines. The rest is fairly simple.
The second derivative of f(x) is 78x + 22, so the answer is f''(52) + 52 = 78(52) + 22 + 52 = 4130.
I’m not completely confident in this solution but it seems to be consistent with the known columns and rows.
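A quick sanity check of the arithmetic above (this only verifies the final sum, not the puzzle logic itself):

```python
x = 52
second_derivative = 78 * x + 22  # f''(x) = 78x + 22
answer = second_derivative + x   # the puzzle adds x itself
print(answer)  # 4130
```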
Tech bros have ruined the prestige of a lot of titles. Software “Engineer”, Systems “Architect”, Data “Scientist”, Computer “Wizard”, etc.
For a 16k context window using q4_k_s quants with llama.cpp, it requires around 32GB. You can get away with less using smaller context windows and lower-accuracy quants, but quality will degrade, and each chain of thought takes a few thousand tokens, so you will lose previous messages quickly.
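For a rough feel of where the context-window memory goes, here is a back-of-the-envelope KV-cache estimate. The model dimensions below are illustrative assumptions (roughly Llama-style 70B-class values with grouped-query attention), not the specs of any particular model:

```python
# Rough KV-cache size estimate for a 16k context window.
# All model dimensions here are assumptions for illustration only.
n_layers = 80        # assumed transformer layer count
n_kv_heads = 8       # assumed KV heads (grouped-query attention)
head_dim = 128       # assumed per-head dimension
context = 16_384     # 16k context window
bytes_per_elem = 2   # fp16 K/V entries

# 2x for the separate K and V tensors per layer
kv_bytes = 2 * n_layers * n_kv_heads * head_dim * context * bytes_per_elem
print(f"KV cache: {kv_bytes / 2**30:.1f} GiB")  # 5.0 GiB under these assumptions
```

The KV cache scales linearly with context length, which is why halving the window frees a meaningful chunk of memory on top of what the quantized weights themselves take.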
Perfect AI boyfriends are the bigger threat to young men
exFAT is still the best format for multiplatform compatibility so it’s good to see that it’s still getting maintained.
Now everyone gets to hand over their IDs to the tech companies.
If everyone has access to the model it becomes much easier to find obfuscation methods and validate them. It becomes an uphill battle. It’s unfortunate but it’s an inherent limitation of most safeguards.
Of course it was political retribution and not the whole unregistered securities and gambling market thing.
More sympathy for squirrels than human beings
Anthropic released an API for the same thing last week.
This is actually pretty smart because it switches the context of the action. Most intermediate users avoid clicking random executables by instinct but this is different enough that it doesn’t immediately trigger that association and response.
All signs point to this being a finetune of GPT-4o with additional chain-of-thought steps before the final answer. It has exactly the same pitfalls as the existing model (the 9.11 > 9.8 tokenization error, failing simple riddles, being unable to assert that the user is wrong, etc.). It's still a transformer and it's still next-token prediction. They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.
The role of biodegradable materials in the next generation of Saw traps
It’s cool but it’s more or less just a party trick.
In their human preference benchmarks it was only chosen over 4o 59% of the time. That's a 15-20x cost increase for a 9-point difference over a coin flip.