Linux nerd and consultant. Sci-fi, comedy, and podcast author. Former Katsucon president, former roller derby bouncer. http://punkwalrus.net

  • 0 Posts
  • 31 Comments
Joined 2 years ago
Cake day: June 22nd, 2023



  • “When eventually washed off, the aerogel is handily broken down by soil microbes.”

    I am not going to claim to be an expert on any of this, BUT that wording sounds suspiciously like bullshit. Maybe it’s not, but it’s one of those phrases that sounds like a vitamin company claiming that more B12 has been shown to fix whatever ails you. Or “our plastic is environmentally friendly: 100% recyclable, and breaks down into teeny micro-particles over time, and gets absorbed by the sea life like ordinary sand…”



  • Punkie@lemmy.world to Technology@lemmy.world · *deleted by creator* · 36 points · 7 months ago

    I have had two tech jobs like that, even before COVID, starting in 2016. The first time, it was a company that outgrew their workspace. They put us in ‘rent-an-office’ spaces for a bit, and then my boss started working from home a few days a week. Then he allowed me to. We moved to a new office, but it was always empty in my section. That was fine, too, but the commute was terrible, so I started doing 2 days a week, then once a week, then a few times a month. I rarely saw my other coworkers in person, and nobody said anything aloud.

    The next job started because of COVID, and when they started doing RTO, they also wanted “hot desking” (no assigned seating) and open office plans, and I was not having that. I was not going to work in a “cafeteria”-like setting. So I went to contract work and have worked from home 100% for several years now. Nobody has office space, and we collaborate from all over the world. I get paid very well.

    I hope I never have to go back to an office. I reach retirement age in about 15 years, and I am hoping to make it.



  • The thing is that for a majority of cases, this is all one needs to know about git for their job. Knowing git add, git commit -m "Change text", git push, git branch, and git checkout is most of what a lone programmer does with their code (see the sketch below).

    Where it gets complicated real fast is collaboration on the same branch. Merge conflicts, outdated pulls, “clever shortcuts,” hacks done by programmers who “kind of” know git at an advanced level, those who don’t understand “least surprise,” and those who cut and paste fixes from Stack Exchange or ChatGPT. Plus whoever has admin access to “undo your changes,” so all that work you did and pushed is erased and there’s no record of it anymore. And the egos of programmers who reject any changes you make for weird, esoteric reasons. I had a programmer lead who rejected any and all code with comments “because I like clean code. If it’s not in the git log, it’s not a comment.” And his commit messages were frustratingly vague and brief: “Fixed issue with ssl python libs,” or “Minor bugfixes.”
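    For reference, that solo loop is roughly the following; the branch and file names are made up for illustration:

    ```bash
    # Everyday solo git workflow -- branch and file names are hypothetical.
    git checkout -b fix-readme-typos      # start a branch for the change
    git add README.md                     # stage the edited file
    git commit -m "Change text"           # record it with a message
    git push -u origin fix-readme-typos   # publish the branch to the remote
    git branch                            # list local branches to see where you are
    ```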


  • I got burned by a former admin who, instead of diagnosing why a mail service was failing, dropped an entry into /etc/cron.d in a file named “…” (three dots), which, unless you were careful, you’d never notice in a casual “ls” listing. The cron job ran a similarly named script once every 5 minutes. The script would launch the mail service, but two instances were not allowed to run on the same box, so if the service was already up, nothing happened; this later explained the hundreds of “[program] service is already running” errors in our logs. It ran every 5 minutes because our SolarWinds check would only alert if the service had been down for 5 minutes. The reason the service was crashing was later fixed in a patch, but nobody knew about this little “helper” script for years (a hypothetical reconstruction is sketched at the end of this story).

    Until one day, we had a service failover from primary to backup. Normally, we had two mail servers behind a load balancer, which would send traffic only to the IP that was reporting as up. Before, we would manually disable the other network port, but this time that step was forgotten, so BOTH IPs were listening. We shut down the primary mail service, but after 5 minutes, it came back up. The mail software would sync all the mail from one server to the other (primary to backup, or the reverse, but one way only). With both up, the load balancer just sent traffic to a random one.

    So now both IPs received and sent mail, along with the web interface users could log into. With mail going to both, it created mass confusion, and the mailbox sync was copying from backup to primary. Mail would appear and disappear randomly, and if it disappeared, it was because the backup was syncing to the primary. The sync was slow, and the first people to notice, over the next several days, were the scant IMAP customers. Those customers were always complaining because they had old and cranky systems, and our weekend customer service just told them to wait until Monday. But then more and more POP3 customers started to notice, and after 5 days had passed, we figured out what had happened. And we only did NetBackup runs every week, so thousands of legitimate emails were lost for good across over 3,000 customers. A lot of them were lawyers.

    Oof.
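    To make the trick concrete, here is a hypothetical reconstruction of the hidden cron.d entry; the file name and script path are invented for illustration:

    ```bash
    # /etc/cron.d/...   <- the file is literally named "...", easy to miss
    #                      in a casual "ls" of the directory.
    # Every 5 minutes (just inside the monitoring window), try to start the
    # mail service. If it is already up, the second instance refuses to run
    # and the service logs its "[program] service is already running" error.
    */5 * * * *  root  /usr/local/sbin/....sh
    ```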


  • I hate to admit it, but I use Amazon Prime a lot because:

    1. I cannot drive. Thus, getting to the store is difficult.
    2. I must bring in 3-4 items a week, so yeah, I save on shipping.
    3. Auto-subscriptions save a little.
    4. I have priced a lot of stuff over the years, and while Amazon is not always the best, the convenience is impressive.
    5. They have, multiple times, been incredibly helpful with customer service. Like above and beyond.
    6. COVID and nobody masks around here. I have an autoimmune condition, so it’s important that I not leave unless it’s a medical appointment or similar need.
    7. They just have stuff I can’t find anywhere. Yes, as some have said, caveat emptor, but that’s true for all the stores.

    I also save a shit ton of money. When I used to browse Walmart or Target, I used to buy a lot of shit I didn’t need. I don’t get as distracted with focused buying. I also order from Aliexpress if I can wait 30 days, and I have only been ripped off three times in several years, for a total of maybe $35.

    I’m not saying my way is better, and certainly not that it’s better for you, but it’s been a godsend to the house-bound.



  • Being poor. In college in the 90s, my lead sysadmin couldn’t afford Minix for a system we had, so we tried to compile Linux on it. Three days later, we had still failed and gave up, but this was kernel 0.93 or something, so it had a ways to go. But I learned so much from that experience without paying for a university course or anything.

    Years later, I bought a copy of Red Hat 6 at a Costco. Windows 95/98 was big, but I didn’t know how to pirate it, so I went back to Linux, and it worked great on my “franken-puters” cobbled together from dumpster-dived spare parts. Steep learning curve back then, though. Then I brought it to my workplace, went from UNIX admin to Linux admin, and soon preferred it to Windows. It’s been my daily driver for decades now.

    Am I an evangelist? A little, but I find that “right tool for the right job” is a better approach. Linux is great for almost everything, BUT a comprehensive system like MS Office plus Active Directory simply does not exist in the FOSS space yet; everything there is cobbled together, a kludge still trying to catch up.

    Obsessed? Kinda. I just assembled some Ansible scripts to roll my own distro. Why? To see if I could.



  • Punkie@lemmy.world to Linux@lemmy.ml · *Permanently Deleted* · 27 points · edited · 1 year ago

    I worked at a job with build scripts. Developers would pick what they wanted from drop-down menus on a website, with very few “fill in the blanks.” This would create a template, which was sanity-checked.

    One of the “fill in the blanks” was “home directory of user, if not the default /home/username.” Some people filled it in, some didn’t. A lot of “users” were really apps, with homes like “/opt/appname” or “/var/www/html.” We checked that the directory existed, created it if not, and set permissions. Easy peasy, all automated. Ran this lots of times.

    Then one day, the script failed and borked the whole box. Sometimes the VM was corrupt, so we’d delete the VM and try again; that usually worked. But this time, the build kept failing. The box went down and wasn’t even bootable. This happened several times with this one build. So we mounted the borked drive under a new VM and checked out the logs. Just like the dessert course in Willy Wonka’s chewing gum, it always failed at the last stage: making the /home directories.

    It would create them, then halt, complaining that it could not find bash. We looked for bash on the bad drive, and it was the usual /bin/bash symlink to /usr/bin/bash, so we were truly puzzled. I did a chroot to the drive and NOTHING worked. It just hung. That was the first clue.

    The second was looking through the build script (in bash, which we didn’t write) and checking the steps against the logs. It always died while creating a user named sapadm, the user for the HANA database. Eventually, I checked the config file and noticed it was the only user with the odd home directory “/usr/sap.” Then it hit me: the permissions.

    The script, thinking it was a home directory, did a chmod -R 755 for all directories and a chmod -R 644 for all files! That meant that, while creating that “home,” it made everything under /usr non-executable! Holy shit, no wonder nothing worked! So we commented out that user in the config, ran the build again, and we were good! We created the sapadm user by hand and later fixed the bug in the script (a sketch of the missing sanity check is below).

    SANITIZE YOUR DATA. Or you might turn Violet Beauregarde into a blueberry.
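    The moral as a minimal sketch; the allowed prefixes and variable name are invented, not the real script:

    ```bash
    # Hypothetical sanity check for the template's "home directory" field:
    # reject anything outside the expected prefixes instead of handing it
    # to a recursive chmod.
    HOME_DIR="/usr/sap"   # the odd value from the sapadm entry

    case "$HOME_DIR" in
      /home/*|/opt/*|/var/www/*)
        mkdir -p "$HOME_DIR"
        find "$HOME_DIR" -type d -exec chmod 755 {} +   # directories: 755
        find "$HOME_DIR" -type f -exec chmod 644 {} +   # files: 644
        ;;
      *)
        echo "Refusing to manage permissions on '$HOME_DIR'" >&2
        exit 1
        ;;
    esac
    ```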


  • I have found that it’s like having a junior programmer assistant. It’s great for “write me python code for opening an input file from a command line argument, reading the contents into a key/value dict, then closing the file.” It’s terrible for “write me python code for pulling data into a redis database.”

    I find it’s wrong 50% of the time for certain command-line switches, the Linux file structure, and the aws cli.

    I find it’s terrible for advanced stuff like, “using aws cli and jq, take all volumes in a vpc, and display the volume id, volume size in gb, the instance id it’s attached to, the private IP address of the instance, whether it’s gp3 or gp2, and the vpc id, in comma-separated format, sorted by volume size” (see the sketch below).

    Even worse at, “take all my gp2 volumes and make them gp3.”
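    For the curious, the first half of that request looks roughly like this by hand; it is only a sketch and only covers the volume side, since joining in each instance’s private IP and VPC ID needs a second describe-instances call, which is exactly the multi-step part the bot fumbles:

    ```bash
    # Volume ID, size (GiB), attached instance, and gp2/gp3 type as CSV,
    # sorted by size. Private IP and VPC ID would need a follow-up
    # `aws ec2 describe-instances` lookup keyed on the instance IDs below.
    aws ec2 describe-volumes \
      --query 'Volumes[].{id:VolumeId,gb:Size,type:VolumeType,instance:Attachments[0].InstanceId}' \
      --output json |
      jq -r 'sort_by(.gb) | .[] | [.id, .gb, (.instance // "-"), .type] | @csv'

    # And the gp2 -> gp3 conversion, again only a sketch:
    for vol in $(aws ec2 describe-volumes \
          --filters Name=volume-type,Values=gp2 \
          --query 'Volumes[].VolumeId' --output text); do
      aws ec2 modify-volume --volume-id "$vol" --volume-type gp3
    done
    ```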