• 1 Post
  • 340 Comments
Joined 2 years ago
Cake day: July 5th, 2023




  • I think in terms of the cultural exchange of ideas and the enjoyment of being on the internet, 2005-2015 or so was probably the best. The barrier to entry was lowered to the point where almost anyone could make a meme, post a picture, upload a video, or write a blog post, or even a single-sentence microblog post or forum comment, and it might go viral through the power of word of mouth.

    Then, once there was enough value in going viral, people started gaming virality as a measure of success, so it was no longer a reliable signal of quality.

    But plenty of things are now better. I think maps and directions are better with a smartphone. Access to music and movies is better than ever. It’s nice to be able to seamlessly video chat with friends and family. There’s real utility there, even if you sometimes have to work around things that aren’t ideal.


  • I get your perspective, but I think it’s inaccurate when applied to current consumer behavior. The iPhone’s market share is like 60%. You can’t tell me that 60% is inherently more consumerist than the 40% who are Android users, especially when we’re talking about how Apple users actually tend to keep their phones longer before upgrading to a new phone.

    Especially when we’re talking about the mid-tier, non-flagship model in the lineup, like the non-Pro iPhones.


  • Plenty of people want small but powerful phones. The iPhone Mini line, in the 12 and 13 generations, offered the same features and processing power as the regular-sized iPhone. But they didn’t offer as much as the “Pro” models, which came in both normal and “Max” sizes.

    So if you wanted the latest and greatest in CPU/GPU, camera sensors/lenses, display tech (not necessarily size), you tended to opt for the phone that just happened to be bigger.

    Basically, there’s never been a side by side comparison of the latest tech that actually happens to fit within the size of the first 5 generations of iPhone, versus the standard size of a flagship today.



  • They’ve basically brought the broken ladder of the management track over to the technical track of increasing technical expertise (without necessarily increasing management/administrative responsibilities).

    These days, each new generation of executives doesn’t come up from within the company. There’s no simple path from mail room to executive anymore. Now, you have to leave the company to go get an MBA, then get hired by a consulting firm, then consult with that company as a client, before you’re on track to make senior management there.

    If the technical track is going this way too, then these companies are going to become more brittle, and the current generation of entry-level workers is going to hit a lot more career dead ends. It’s bad for everyone.


  • No, I don’t think you owe an apology. It’s super common terminology, almost to the point where I wouldn’t really even consider it outright wrong to describe it as a SoC. It’s just that the blurred distinction between a single chip and multiple chiplets packaged together is almost impossible for an outsider to sort out without really getting into the published spec sheets for a product (and sometimes it may not even be known then).

    It’s just more technically precise to describe them as SiP, even if SoC functionally means something quite similar (and the language may evolve to the point where the terms are interchangeable in practice).


  • When I plug my phone into the wall, there are chips in the wall charger and on both sides of the cable, because the simple act of charging requires a handshake and an exchange of information notifying the charger, the cable, and the phone what charging modes are supported, and how to ask for more or less power.
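    To make that concrete, here’s a minimal sketch of what such a negotiation might look like (real USB Power Delivery messages are binary packets on the cable’s CC wire; the names and numbers below are illustrative, not the actual protocol):

    ```python
    # Hypothetical charger offers, as (volts, amps) pairs the charger advertises.
    CHARGER_OFFERS = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 2.25)]
    CABLE_MAX_AMPS = 3.0  # reported by the e-marker chip inside the cable

    def pick_contract(offers, cable_max_amps, wanted_watts):
        """Phone-side logic: pick the best offer the cable can safely carry."""
        usable = [(v, min(a, cable_max_amps)) for v, a in offers]
        return max(usable, key=lambda va: min(va[0] * va[1], wanted_watts))

    volts, amps = pick_contract(CHARGER_OFFERS, CABLE_MAX_AMPS, wanted_watts=20.0)
    print(f"request {volts:.0f} V at {amps:.2f} A")  # charger then ACKs and switches rails
    ```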

    Seriously? Am I the only one thinking this could be done with less than 10 chips at most?

    How many chips are in a fully configured desktop computer? There are like dozens on any given motherboard, controlling all the little I/O requirements. Each module of RAM is several chips. If you use expansion cards, each card will have a few chips, too. Meanwhile, the keyboard and the mouse each have a few chips, and the display/monitor has a bunch more.

    I’d be surprised if the typical computer had less than 100 chips.

    Now let’s look at the car functions. A turn signal that blinks, oscillating between on and off? That’s probably a chip. A windshield wiper that can do intermittent wiping at different speeds? Another chip or more. Variable valve timing that’s electronically controlled? Another few chips. Each sensor that detects something, from fuel tank status to engine knocking to air/fuel mixture? Probably another chip. Controllers that combine all this information to determine how to mix the fuel and air, whether to trigger a warning light on the dash, etc.? Probably more chips. What about deployment of airbags, or triggering of the anti-lock braking system? Cruise control requires a few more chips, now that speedometers and odometers are electronic rather than the old analog systems. Smart cruise control and lane detection have even more chips. Hybrid drivetrains that charge or discharge batteries need dozens of chips controlling the flow of power (and the logic of when power should flow in which direction).

    By the time Toyota was in the news in 2011 for potential throttle sticking problems that killed people, it was typical for even economy cars to have something like 30 ECUs controlling different things, with each ECU and its associated sensors requiring multiple chips.

    Some modern perks require even more chips. Automatic lights? High beam dimming? Automatic wipers? Remote start or shutting off the engine at idle?

    And that’s just for driving. FM tuner? Chips. AM tuner? More chips. Bluetooth and Carplay/Android Auto? More chips. Rear view camera, now mandated on all cars? More chips. A built-in GPS or infotainment system? A full blown computer.

    All the little analog control functions that were present in cars in the ’80s are now performed more efficiently on integrated circuits, including analog circuits. Each function will require its own chip. If you’re trying to recreate the exact functionality of a typical car from the 1990s, you’d probably still need a minimum of a few hundred chips to pull it off. And it’s probably smart to segment things so that each module does one thing in a specialized way, isolated from the others, lest an unexpected input on the radio mess up the spark plug timing.
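    As a rough sanity check on that scale (every per-module count below is a made-up guess, just to show the arithmetic):

    ```python
    # Back-of-the-envelope chip tally for recreating a 1990s-era feature set.
    modules = {
        "ECUs (engine, ABS, airbags, ...)": 30 * 4,  # ~30 ECUs, a few chips each
        "body electronics (wipers, signals, locks)": 30,
        "instrument cluster": 12,
        "radio/tuner": 15,
    }
    total = sum(modules.values())
    print(total, "chips, before counting anything modern")  # already in the hundreds
    ```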

    The world is run by chips, and splitting up the functions into multiple computers/controllers, with multiple chips each, is just the easier and more efficient way to do things.


  • Tags interfere with human readability. Open any markdown file with a text editor in plain text and you can basically read the whole thing as it was intended to be read, with the possible exception of tables.

    There’s a time and a place for different things, but I like markdown for human readable source text. HTML might be standardized enough that you can do a lot more with it, but the source file itself generally isn’t as readable.


  • the only option for top performance will be a SoC

    System in a Package (SiP) at least. It might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting-edge memory node at Samsung or whichever foundry is making the memory.

    But with advanced packaging going the way it has over the last decade or so, it’s going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.


  • (the preview fetch is not e2ee afaik)

    Technically, it is, but end-to-end encryption only covers the data between the ends, not what either end chooses to do with it. If one end of the conversation logs the conversation in an insecure way, the conversation itself might technically be encrypted, but its contents can still be learned by someone else. The same goes if one end simply forwards a message to a new party who wasn’t part of the original conversation.

    The link previews happen outside of the conversation, and that action can be seen by parties like the owner of the website, your ISP, and maybe WhatsApp itself (if it’s configured that way; I’m not sure whether it is).
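    Here’s a toy sketch of that distinction (the names and flow are hypothetical, not WhatsApp’s actual implementation):

    ```python
    import re

    # E2EE covers the ciphertext in transit; it says nothing about what the
    # receiving end does next with the plaintext.
    def on_receive(ciphertext, decrypt, http_get):
        text = decrypt(ciphertext)              # inside the encrypted conversation
        for url in re.findall(r"https?://\S+", text):
            http_get(url)                       # outside it: a separate, observable request

    # Toy run with stand-in crypto (a real app would use something like the Signal protocol):
    fetched = []
    on_receive("check https://example.com", decrypt=lambda m: m,
               http_get=lambda u: fetched.append(u))
    print(fetched)  # ['https://example.com'] — this fetch is what outside observers can see
    ```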

    So end-to-end encryption isn’t a panacea. You have to understand how it fits into the broader context of security and threat models.


  • Can humans actually do it, though? Are humans actually capable of driving a car reasonably well using only visual data, or are we actually using an entire suite of sensors in our heads and bodies to understand our speed and orientation, road conditions, and our surroundings? Driving a car by video link is considerably harder than just driving a car normally, from within a car.

    And even so, computers have a long way to go before they catch up with our visual processing. Our visual cortex does a lot of error correction of visual data, using proprioceptive sensors in our heads that silently and seamlessly delete the visual smudges and smears of motion as our heads move. The error correction adjusts quickly to recalibrate things when looking at stuff under water or anything with a different refractive index, or when looking at reflections in a mirror.

    And we maintain that flow of visual data by stabilizing the movement of our eyes to compensate for external motion. Maybe not as well as chickens, but we’re pretty good at it. We recognize faulty sensor data and correct for it by moving our heads around obstructions, silently ignoring something that’s blocking just one eye, or blinking or rubbing our eyes when tears or water make it hard to focus. We also know when not to trust our eyes (in the dark, in fog, when temporarily blinded by lights), and fall back on other methods of understanding the world around us.

    Throw in our sense of balance in our inner ears, our ability to direction find on sounds, and the ability to process vibrations in our seat and tactile feedback on a steering wheel, the proprioception of feeling forces on our body or specific limbs, and we have an entire system that uses much more than visual data to make decisions and model the world around us.

    There’s no reason an artificial system needs to use exactly the same types of sensors that humans or other mammals do. And we have preexisting models and memories of what is or was around us, like when we walk around our own homes in the dark. But my point is that we rely on much more than our eyes, processed through an image-processing system far more complex than the current state of AI vision. Why hold back on using as much sensor data as possible, to build a system with good, reliable data about what’s on the road?
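    For a flavor of what fusing redundant sensors looks like on the artificial side, here’s a minimal complementary-filter sketch (a generic textbook technique, not any particular vehicle’s code):

    ```python
    def fuse_heading(prev_heading, gyro_rate, mag_heading, dt, alpha=0.98):
        """Blend a fast-but-drifting gyro with a slow-but-absolute compass."""
        gyro_estimate = prev_heading + gyro_rate * dt              # smooth, but drifts
        return alpha * gyro_estimate + (1 - alpha) * mag_heading   # drift corrected

    heading = 0.0
    for gyro_rate, mag_heading in [(0.5, 0.01), (0.5, 0.02), (0.5, 0.03)]:  # fake stream
        heading = fuse_heading(heading, gyro_rate, mag_heading, dt=0.02)
    print(round(heading, 4))  # each source covers for the other's weakness
    ```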





  • all the quadratic communication and caching growth it requires.

    I have trouble visualizing and understanding how the Internet works at scale, but I can generally grasp how page-by-page or resource-by-resource requests work. What I struggle to understand is how one could efficiently parse the firehose of activity coming from every user on every instance that your own users follow, at least in user-focused services like Mastodon (or Twitter or Bluesky). With Lemmy, many more people follow the biggest communities with the most activity, so caching naturally scales. But with Twitter-like follows of individual accounts, there’s a long tail: lots of accounts that are each followed by only a few people. The most efficient method is to just ignore the small accounts, but that obviously ends up affecting a large number of users. On the other hand, keeping up with the many small accounts means spending most of your resources on content very few people want to see.
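    As a toy model of why that long tail hurts (all numbers invented for illustration), consider push-style delivery where every post goes to every following instance:

    ```python
    # Deliveries per day if each post is pushed once per following instance.
    def pushes(accounts, posts_per_day, instances_following_each):
        return accounts * posts_per_day * instances_following_each

    head = pushes(accounts=100, posts_per_day=50, instances_following_each=2_000)
    tail = pushes(accounts=1_000_000, posts_per_day=2, instances_following_each=5)

    print(head, tail)  # 10,000,000 each: the tail generates as much traffic as
                       # the head, but almost none of it can be cached or batched
    ```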

    A centralized service has to struggle with this as well, but it might have better control over caching and other on-demand retrieval of lower-demand content, without inadvertently DDoSing someone else’s server.