

Lucky you! I need to check what GPUs my university currently has, but sadly my thesis won’t need that kind of horsepower, so I won’t be able to give it a try unless I pay AWS or someone else for it on my own dime.
Nvidia’s new Digits workstation, while expensive from a consumer standpoint, should be a great tool for local inference research. $3,000 for 128GB isn’t a crazy amount for a university or other researcher to spend, especially when you look at the price of the 5090.
In fairness, unless you have about 800GB of VRAM/HBM, you’re not running the real DeepSeek yet. The smaller models are Llama or Qwen models distilled from DeepSeek R1.
I’m really hoping DeepSeek releases smaller models that I can fit on a 16GB GPU and try at home.
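For a rough sense of the numbers, here’s a back-of-envelope sketch (not official figures; the parameter counts and bytes-per-parameter are assumptions that vary with quantization, context length, and serving overhead):

```python
# Back-of-envelope VRAM estimate: weights only, ignoring KV cache,
# activations, and framework overhead (all of which add more).

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory (GB) needed just to hold the model weights."""
    return params_billions * bytes_per_param  # (1e9 params * bytes) / 1e9 bytes-per-GB

# Full DeepSeek R1 is roughly 671B parameters; at 1 byte/param (FP8)
# the weights alone are ~671GB, which is where the ~800GB figure comes
# from once you add cache and serving overhead.
print(f"Full R1: ~{weight_memory_gb(671, 1.0):.0f} GB")

# A 14B distilled model quantized to ~4 bits (0.5 byte/param) is ~7GB,
# which is the kind of thing that fits on a 16GB consumer GPU.
print(f"14B distill @ 4-bit: ~{weight_memory_gb(14, 0.5):.0f} GB")
```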
I did notice this news article that mentions:
New Yorkers who live within the Congestion Relief Zone will not be charged to drive or park around the area. They will only be charged once they leave and cross back into the zone.
I believe OP is talking about NYC’s Congestion Pricing.
Linux is a lot, lot, lot easier to use now than it was in the ’90s.
Mobile phones in the era before smartphones had cameras, email clients, games, music players, and even web browsers. They just weren’t very good at those functions, and their core feature was being a phone for voice calls. Texting was barely a feature on some of them (the first camera phone in the United States, the Sanyo SCP-5300, didn’t have a two-way text messaging client - the user had to go to a website on the phone to send texts, which was inconvenient even on a 1xRTT 3G connection).
The e-ink phone seems closer to a dumbphone than a smartphone, IMO, largely because it lacks access to an app store.
Source: I sold mobile phones before smartphones and during the early smartphone years (BlackBerry and Palm Treo, for example).
Edit: calling it a feature phone instead of a dumb phone might be more accurate.
I hate to break it to you, but if you’re running an LLM based on (for example) Llama, the training data (corpus) that went into it was still large parts of the Internet.
Running the prompts locally doesn’t change the fact that the model was still trained on data that could be considered protected under copyright law.
It’s going to be interesting to see how the law shakes out on this one, because an artist going to an art museum and doing studies of the works there for educational purposes (and let’s say it’s a contemporary art museum, where the works wouldn’t be in the public domain) is likely fair use - and possibly even encouraged, to help artists develop their talents. Musicians practicing (or even performing) other artists’ songs is expected during their development; consider some high school band practicing in a garage, playing someone else’s song to improve their skills.
I know the big difference is that it’s people training versus a machine/LLM training, but that seems to come down not so much to a copyright issue (which it is in an immediate sense) as to the question: “should an algorithm be entitled to the same protections as a person? If not, what if real AI (not just an LLM) is developed? Should those entities be entitled to personhood?”
I think they used to wax the cardboard. Maybe they still do?
How rough was the switch? I can’t imagine changing all my accounts that use my Gmail address, but I’d like to move to something else.