

Good point… I guess I’m still curious if there could be a way he gets taken to court for this, but probably not.
Are there laws in the US about such conflicts of interest, or was it basically tradition until this point not to do shit like this?
Had a similar struggle with the German layout, but in the meantime I have moved to the “EURKey” layout. It is built into many distros and available for Windows and Mac. It mimics a US layout while still having all the äüöß (and much more) I could ever need. Though I will say it’s only really worth it if you’re in IT or similar, where you frequently need certain symbols.
As far as I know, the Deepmind paper was actually a challenge of the OpenAI paper, suggesting that large models are undertrained and therefore underperform for the compute they use. They tested a model with 70B params and were able to outperform much larger models while using less compute, simply by training it more. I don’t think there can be any general conclusion about some hard ceiling for LLM performance drawn from this.
However, this does not change the fact that there are areas (ones that rely on correctness) where this kind of model simply cannot replace anything, and trying to do so is a foolish pursuit.
Out of curiosity, forgetting that the US is also pursuing something similar, what motive would Google have to comply with a Japanese ruling? Could Japan just… Ban Google if they don’t comply?
The fact that OpenAI have waved the threat of using the clause implies to me that they’ve defined it relatively loosely… or just that they’re really stupid, which may also be possible.
I did a little bit of looking and couldn’t find a figure on how much OpenAI spends on AGI compared to GenAI research, but in looking, I found this interesting:
https://openai.com/index/scale-the-benefits-of-ai/
Which begins with the following:
We are making progress on our mission to ensure that artificial general intelligence benefits all of humanity. Every week, over 250 million people around the world use ChatGPT to enhance their work, creativity, and learning…
Which seems like a shady placement of two unrelated things next to one another. Makes me wonder if texts like this have the goal of selling GenAI as AGI one day.
I’m actually surprised AGI isn’t better defined in the contract, or that there isn’t a burden of proof so that they can’t lie, but that was definitely on purpose. I really can’t imagine them severing that tie, though; OpenAI simply isn’t financially stable enough, especially in the long term, and I’m sure they know it too.
That repo is much more active than I would have thought 😅 Can’t wait to see this somewhere in production
I will never understand why this place is idolized by so many people…
Maybe there is a language barrier, but I didn’t understand if a legal framework is implied by “ethical restraints”. Can one be taken to court for violating an ethical restraint?