• 0 Posts
• 28 Comments

Joined 2 years ago · Cake day: August 2nd, 2023

  • Yup, this is the real-world take IME. Code should be self-documenting; really the only exception is the “why,” because code already explains the “how,” as you said.

    Now, there are sometimes less-than-ideal environments. At my last job we were doing Scala development, and that language is expressive enough to let you write truly self-documenting code. Python can’t match this, so you need comments at times (in earlier versions of Python, type annotations were specially formatted literal comments; now they’re glorified comments because they look like real annotations but do nothing at runtime, as the sketch below shows).
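
    A minimal sketch of that last point, using nothing but the standard interpreter:

    ```python
    def add(x: int, y: int) -> int:
        return x + y

    # CPython stores annotations but never checks them: this call runs fine
    # and prints "ab". Only an external tool like mypy would flag it.
    print(add("a", "b"))
    ```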


  • Any chance you have an Nvidia card? Nvidia has long been in a worse spot on Linux than AMD, which, interestingly, is the inverse of Windows: a lot of AMD users complain of driver issues on Windows and swap to Nvidia as a result, and the exact opposite happens on Linux.

    Nvidia is getting much better on Linux, though, and Wayland support with explicit sync is coming down the pipeline. With NVK, in a couple of years the Nvidia and AMD experiences on Linux may be very similar.


  • Nevoic@lemm.ee to Technology@lemmy.world · Hello GPT-4o · +8/−3 · 10 months ago

    “They can’t learn anything” is too reductive. Try feeding GPT-4 the specification for a language that didn’t exist at the time of its training, then telling it to program in that language against a library you provide.

    It won’t do well, but neither would a junior developer in raw vim/nano without compiler/linter feedback. It will roughly construct something that looks like the new language it was never trained on. This is something LLMs can in principle do well, so GPT-5/6/etc. will do better, perhaps as well as any professional human programmer.

    Their context windows have increased many times over. We’re no longer operating in the 4k-8k token range but in the 128k-1,024k range. That’s enough context to, from the perspective of an observer, learn an entirely new language and framework and then write something almost usable in it. And 2024 isn’t the end of context-window growth.

    With the right tools (e.g. feeding compiler errors back in and having the LLM reflect on how to fix them), you’d get even more reliability out of just modern-day LLMs. Make that loop reliable enough, and it effectively does what we do when we learn; see the sketch below.
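
    A rough sketch of that feedback loop. Here `llm_complete()` is a hypothetical stand-in for whatever chat-completion API you use, and Python’s own byte-compiler stands in for a real compiler:

    ```python
    import pathlib
    import subprocess
    import sys
    import tempfile

    def compile_errors(source: str) -> str:
        """Byte-compile `source` and return the diagnostics, or "" on success."""
        path = pathlib.Path(tempfile.mkdtemp()) / "candidate.py"
        path.write_text(source)
        result = subprocess.run(
            [sys.executable, "-m", "py_compile", str(path)],
            capture_output=True,
            text=True,
        )
        return result.stderr

    def fix_until_it_compiles(source: str, max_rounds: int = 5) -> str:
        """Feed compiler errors back to the model until the code compiles."""
        for _ in range(max_rounds):
            errors = compile_errors(source)
            if not errors:
                break
            # llm_complete() is hypothetical: any chat-completion call works here.
            source = llm_complete(
                "Fix this program so it compiles.\n"
                f"Compiler output:\n{errors}\n"
                f"Program:\n{source}"
            )
        return source
    ```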

    So much work in programming isn’t novel. You’re rarely making something truly new; you’re piecing together work other people did. Even when you write an entirely new library, you’re using a language someone else designed and libraries other people wrote, in an editor someone else wrote, on an OS someone else wrote. We’re all standing on the shoulders of giants.


  • Nevoic@lemm.ee to Technology@lemmy.world · Hello GPT-4o · +23/−5 · edited · 10 months ago

    18 months ago, ChatGPT didn’t exist and GPT-3.5 wasn’t publicly available.

    At that same point 18 months ago, the iPhone 14 was available. Now we have the iPhone 15.

    People are used to LLMs/AI developing much faster, but you really have to keep in perspective how different this tech was 18 months ago. Comparing LLM and smartphone plateaus is just silly at the moment.

    Yes, they’ve been refining the GPT-4 model for about a year now, but we’ve also gotten major competitors in the space that didn’t exist 12 months ago. We got multimodality that didn’t exist 12 months ago. Sora is mind-bogglingly realistic, and it didn’t exist 12 months ago.

    GPT-5 is just a few months away. If 4->5 is anything like 3->4, my career as a programmer will be over within the next 5 years. GPT-4 already consistently outperforms the college students I help, and it can often match junior developers in reliability (though with far more confidence, which is obviously problematic). I don’t think people realize how big a deal that is.


  • Nevoic@lemm.ee to A Boring Dystopia@lemmy.world · May 13, 1985 · +1/−2 · 10 months ago

    Yeah, that was my point. I can’t believe I didn’t see what my own point was until you cleared it up for me. It wasn’t about how “terrorist” is a loaded word, even though that’s what I said.

    I’m glad you’re here to clear up the difference between what I said and what I meant, otherwise I’d be genuinely lost.

    Keep it coming.



  • Nevoic@lemm.ee to A Boring Dystopia@lemmy.world · May 13, 1985 · +1/−3 · 10 months ago

    Yup, you can also make comparisons to irrelevant things. Not all comparisons are fallacious.

    How the CIA/IDF behave compared to other “terrorist” organizations is relevant to how the word gets applied. I don’t see how the Grand Canyon relates to any point you or I made.




  • Nevoic@lemm.ee to A Boring Dystopia@lemmy.world · May 13, 1985 · +2/−5 · edited · 10 months ago

    Calling this whataboutism is like responding to the claim “people have a biological urge to reproduce” by calling it a naturalistic fallacy.

    You’re using the word in roughly the right ballpark (I did make a comparison, i.e. a “what about”), but not everyone who says “what about X” is committing a fallacy.

    My entire point was that “terrorist” is a loaded word: we only use it to describe one side (the side not in power), even though the technical definition obviously fits organizations in power. Making a comparison to demonstrate my literal only point isn’t fallacious.

    There were Native American terror groups, yet the U.S. government that literally genocided millions of Native Americans isn’t called a terror organization, despite its use of terror and violence to achieve political goals. It’s a word with a clearly problematic pattern of use.


  • Nevoic@lemm.ee to A Boring Dystopia@lemmy.world · May 13, 1985 · +4/−4 · edited · 10 months ago

    This misses the point. If we’re being technical, Hamas/MOVE are obviously terrorist organizations. Trying to convince me that they are isn’t going to change my position, because I already believe it.

    It’s just that insofar as Hamas, MOVE, etc. are terrorist organizations, the CIA/IDF are far larger ones. They inflict terror and use violence for political gain; the only difference is that they’re the ones in power, so they decide who counts as a terrorist.

    That’s the problem with the word. The IDF and Hamas are both violent terror groups that shouldn’t exist, but Hamas only exists as a result of the IDF’s genocidal campaign, and yet we only call Hamas a terror group. It’s deeply problematic.


  • Nevoic@lemm.ee to A Boring Dystopia@lemmy.world · May 13, 1985 · +10/−11 · 10 months ago

    “Terrorist” is just a loaded word. Hamas is a “terrorist organization,” but the state of Israel isn’t.

    Terrorism often boils down to “enacting violence against systems of oppression.” Is the IDF a terrorist organization? What about the DoD? These organizations use violence to perpetuate existing systems of oppression, causing vastly more harm than any domestic “terrorist” organization ever will.

    While these 11 people were being killed by the state for being “terrorists,” the CIA was backing fascists (the Contras) to overthrow democratically elected socialists in Nicaragua. Is the CIA a terrorist organization?




  • Nevoic@lemm.ee to A Boring Dystopia@lemmy.world · Get rid of landlords... · +23/−15 · 11 months ago

    The USSR and China were much more widespread examples of eliminating landlords, and both were vastly successful at it. Both were third-world countries with terrible conditions; both became global superpowers, all the while providing housing at a vastly more reasonable rate.

    China has since regressed on this and is starting to feel housing troubles as landlords destroy the housing market through scalping, but for a long period it was improving.

    The best example is probably the USSR during the 70s: still a country with vastly less wealth than America, only recently developed into a global superpower, yet providing housing to citizens at an average of 5% of their income. America has been operating consistently in the 30-80% range for a long time.


  • Nevoic@lemm.ee to Technology@lemmy.world · *deleted by creator* · +2 · edited · 11 months ago

    825,000 chickens per year in the U.S. are accidentally boiled alive or drowned before their intended slaughter (https://animalclock.org/). This isn’t prevented because prevention mechanisms cost money, i.e. they eat into profits.

    It’s standard practice for male pigs to have their tails and testicles ripped out without pain relief (https://www.vox.com/future-perfect/23817808/pig-farm-investigation-feedback-immunity-feces-intestines; this link also showcases people abusing pigs for fun). Objectifying the animals you kill is a coping mechanism for humans; engaging in that much killing is unnatural and unhealthy, and it leads to vastly higher rates of domestic violence and crime, as it normalizes violence as a solution.

    It’s normal for foxes to have their skin ripped off while they’re still alive. Birds have their beaks ripped off so they can’t kill each other in distress: given the lack of space, they go literally insane, abandon normal social hierarchies, and start simply trying to kill each other (http://www.nationearth.com/).

    I understand that ignorance of how horrible the conditions are is a normal part of how humans justify our atrocities. What always baffles me, though, is that people who appear genuinely concerned about animal welfare can be so absurdly uninformed about the practices they directly support with their purchases, while criticizing practices they have absolutely no influence over on the other side of the planet.



  • Nevoic@lemm.ee to Technology@lemmy.world · Tesla scraps its plan for a $25,000 Model 2 EV · +6/−6 · edited · 11 months ago

    Depends on what you’re looking for. I had a high-paying tech job (layoffs, OP), and I wanted a fun car that accelerates fast but is also a good daily driver. I was in the ~60k price range, so I was looking at things like the Corvette Stingray, but that car makes too many compromises as a daily driver.

    The Model 3 accelerates faster 0-30 and matches it 0-60. Off the line it feels far snappier and more responsive because it’s electric, and the battery gives it a lower center of gravity, so it corners remarkably well for a sedan, more comparable to a sports car than to other sedans.

    Those aren’t normally considerations for people trying to find a good-value commuter car, so you’d just ignore all those advantages. Yet people don’t criticize Corvette owners for not choosing a Hyundai lol

    On the daily-driving front, Tesla wins out massively over other high-performance cars in that price range: charging at home, never going to a gas station, best-in-class driving automation/assistance software, a simple interior with good control-panel software, and one-pedal driving with regen braking.

    If you’re in the 40k price range for a daily commuter, your criteria will be totally different, and I’m not well versed enough in the normal considerations of that price tier and category to speak confidently about the best value. Tesla does, however, at the very least have a niche in the high-performance sedan market.


  • Nevoic@lemm.ee to Technology@lemmy.world · Tesla scraps its plan for a $25,000 Model 2 EV · +9/−9 · edited · 11 months ago

    Sure, fuck Elon, but why do you think FSD is unsafe? Tesla publishes the accident rate, and it’s lower than the national average.

    There are times when it will fuck up; I’ve experienced this. However, there are also times when it sees something I physically can’t, because of blind spots or the car’s pillars.

    Having the car drive while you intervene is statistically safer than the national average. You could argue the inverse is better (you drive and the car intervenes), but I’d argue that system would be far worse, as you’d be relinquishing final say to the computer, and we don’t have a legal system set up for that, regardless of how good the software is (e.g. you’re still responsible as the driver).

    You can call it a marketing term, but in reality it can and does successfully drive point to point with no interventions, normally. The places where it does fuck up are consistent fuck-ups (e.g. bad road markings that convey the wrong thing, which you only know about because you’ve been on that road thousands of times). It’s not human, but it’s far more consistent than a human in both the ways it succeeds and the ways it fails. If you learn these patterns, you can spend more time paying attention to what other drivers are doing and to novel hazards (people, animals, etc.) and less time on trivial things like mechanically staying between two lines or adjusting your speed. Checking your blind spot or glancing to the side isn’t nearly as dangerous, for example, so you can gather more information.


  • Nevoic@lemm.ee to Linux@lemmy.ml · Linux hits 4% on the desktop 🐧 📈 · +8/−2 · edited · 1 year ago

    Linux is a far more reliable operating system at the kernel level, which is why the vast majority of the Internet runs on Linux and why those servers are very stable compared to anyone’s personal computer (no matter the OS). It’s also lighter weight at its core, which is a big plus for servers.

    The finicky part of Linux desktops tends to be interop with proprietary software (e.g. Nvidia drivers) or the desktop environments themselves (GNOME can freeze/crash if you like running bleeding edge before bugs are ironed out). Windows has issues too, however: free software often literally doesn’t run on Windows (requiring WSL, the same way games on Linux require Wine), and the desktop environment is essentially indistinguishable from the base operating system. When the desktop environment crashes on Windows, your system BSODs and restarts with no recourse; on Linux I can ssh into my still-functioning computer and kill my DE, or drop to a TTY and do the same thing (see the sketch below).
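
    A rough sketch of that recovery path, assuming a GNOME session managed by GDM under systemd (the `gdm` unit name is an assumption; substitute your own display manager):

    ```python
    # Run from an SSH session (or a TTY) on the affected machine to restart
    # a frozen desktop session without touching the rest of the system.
    # Needs root; "gdm" is an assumption, swap in sddm/lightdm/etc. as needed.
    import subprocess

    subprocess.run(["systemctl", "restart", "gdm"], check=True)
    ```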

    That might not seem like a big deal to some people (who cares whether you restart with a button press or kill your DE and log back in; they take a similar amount of time), but for someone like me, where reliability is a big concern (as in uptime for the half-dozen services/containers I run for people), it’s great. People watching media off Jellyfin don’t have to stop because of a DE bug, whereas on Windows a BSOD would stop their media (and within the last week we’ve had several BSODs on Windows PCs, due to bugs in things like adaptive sync or sometimes just unknown causes).

    For what it’s worth, I also game exclusively on Linux; vkd3d, DXVK, and Proton are godsends. Some things don’t work, namely games whose developers won’t flip the switch for EAC (e.g. Fortnite), but the games I play have always worked. That will actually change soon: Vanguard is coming to League, and it only works on Windows, probably not even on my last install of Windows at that. I tried W11 when it came out (I’m just curious about new tech), but I had to bypass the TPM check despite having fTPM enabled in the BIOS, and as far as I can tell, at least from people I know with similar builds, that means firmware TPM probably isn’t being picked up by W11, so I’d need to buy a TPM module or drop to W10 to play League. Plus, Vanguard is an intense rootkit with full 24/7 access to your OS, so I probably don’t want it installed anyway, even if it happened to work on Linux. Just going to stick to SoD for now in my free time lol