• 0 Posts
  • 41 Comments
Joined 10 months ago
Cake day: May 29th, 2024

  • When you pick up an apple, do you consent to the pesticides used on them?

    THAT’S the example you choose?

    There are no informed here, only pitchfork wielders.

    Absolutely stunning. You actually unironically do not understand what consent is. You need to take an ethics class.

    I’ll give you the really basic version:

    #1: People are allowed to say no to you for any reason or no reason at all. It doesn’t matter if you think their reasons are invalid or misinformed. No means no.

    #2: A lack of a “no” does not mean “yes”. If a person cannot say “no” to what you are doing because they have no idea you’re doing it in the first place then that, in some ways, is even worse than disregarding a “no”. At least in that case they know something has been done to them.

    That, by the way, is what the “informed” in “informed consent” means. It doesn’t mean “a person needs to know what they’re talking about in order for their ‘no’ to be valid”, like you seem to think it means.

    Doctors used to routinely retain tissue samples for experimentation without informing their patients they were doing this. The reasoning went that this didn’t harm the patient at all, the origin of the tissue was anonymized, the patient wouldn’t understand why tissue samples were needed anyway, and it might save lives. That’s a much better justification than trying to develop a web browser, and yet today that practice is widely considered to be deplorable, almost akin to rape.









    1. You can host a web server on a Raspberry Pi. I don’t know what you’re doing with your setup, but you absolutely do not need hundreds of watts to serve a few hundred KB worth of static webpage or PDF file. This website is powered by a 30-watt solar panel attached to a car battery on some guy’s apartment balcony. As of writing it’s at 71% charge. (A minimal sketch of that kind of server follows this list.)

    2. An Ampere Altra Max CPU has 128 ARM cores (the same architecture a Raspberry Pi uses) with a 250-watt max TDP. That works out to about 2 watts per core. Each of those cores is more than enough to serve a little static webpage on its own, but in reality, since a lot of these sites get fewer than 200 hits per day, the power cost can be amortized over thousands of them, and the individual cores can go to sleep if there’s still not enough work to do. Go ahead and multiply that number by 4 for failover if you want; it’s still not a lot. (Not that the restaurant knows or cares about any of this; all of this would be decided by a team of people at a massive IT company that the restaurant bought webpage hosting from.)
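    For a sense of scale, a handful of lines of standard-library Python is already enough to serve a static site like that (a hypothetical sketch, not the actual setup behind the solar-powered site or any hosting company’s stack):

    ```python
    # Minimal static file server, plenty for a few hundred KB of pages and PDFs
    # on something like a Raspberry Pi. Serves the ./site directory on port 8080.
    from functools import partial
    from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="site")
    ThreadingHTTPServer(("", 8080), handler).serve_forever()
    ```

    In practice you’d probably put a proper server like nginx in front of it, but the point stands: the workload is tiny.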


  • So, keep in mind that single-photon sensors have been around for a while, in the form of avalanche photodiodes and photomultiplier tubes. And avalanche photodiodes are pretty commonly used in LiDAR systems already.

    The ones talked about in the article I linked collect about 50 points per square meter at a horizontal resolution of about 23 cm. Obviously that’s way worse than what’s presented in the phys.org article, but that’s also measuring from 3 km away while covering an area of 700 square km per hour (because these systems are used for wide-area terrain scanning from airplanes). With the way LiDAR works, the system in the phys.org article could be scanning with a very narrow beam to get way more data points per square meter.
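    Just to put those survey numbers in perspective (back-of-the-envelope arithmetic on the figures above, nothing taken from the paper itself):

    ```python
    # Rough point rate implied by the airborne-survey figures quoted above.
    area_per_hour_m2 = 700 * 1_000_000      # 700 square km per hour, in m^2
    points_per_m2 = 50
    points_per_second = area_per_hour_m2 * points_per_m2 / 3600
    print(f"{points_per_second:,.0f} points per second")  # roughly 9.7 million
    ```

    So the airborne system is already handling an enormous raw point rate; the phys.org system presumably concentrates its shots into a much smaller patch, which is how it gets more points per square meter.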

    Now, this doesn’t mean that the system is useless crap or whatever. It could be that the superconducting nanowire sensor they’re using lets them measure the arrival time much more precisely than normal LiDAR systems, which would give them much better depth resolution. Or it could be that the sensor has much less noise (false photon detections) than the commonly used avalanche diodes. I didn’t read the actual paper, and honestly I don’t know enough about LiDAR and photon detectors to really be able to compare those stats.

    But I do know enough to say that the range and single-photon capability of this system aren’t really the special parts of it, if it’s special at all.


  • I think there’s a sort of perfect storm that can happen. Suppose there are two types of YouTube users (I think there are other types too, but for the sake of this discussion we’ll just consider these two groups):

    • Type A watches a lot of niche content, of which there isn’t much on YouTube. The channels they’re subscribed to might only upload once a month to once a year, or less.

    • Type B tends to watch one kind of content, of which there are hundreds of hours from hundreds of different channels. And they tend to watch a lot of it.

    If a Type A person happens to click on a video that Type B people tend to watch, that person’s homepage will then be flooded with more of that type of video, blocking out all of the stuff they’d normally be interested in.
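    A toy sketch of how that flooding can happen (this has nothing to do with YouTube’s real system; the category names, catalog sizes, and scoring rule are all made up for illustration):

    ```python
    # Naive recommender: score each category by (how much of it you've watched)
    # times (how much of it exists on the site), then fill homepage slots
    # proportionally to those scores.
    from collections import Counter

    CATALOG_SIZE = {"niche_hobby": 40, "gaming_compilations": 50_000}

    def homepage(watch_history, slots=10):
        interest = Counter(watch_history)
        scores = {c: interest[c] * CATALOG_SIZE[c] for c in CATALOG_SIZE}
        total = sum(scores.values()) or 1
        return {c: round(slots * s / total) for c, s in scores.items()}

    print(homepage(["niche_hobby"] * 20))
    # {'niche_hobby': 10, 'gaming_compilations': 0}
    print(homepage(["niche_hobby"] * 20 + ["gaming_compilations"]))
    # {'niche_hobby': 0, 'gaming_compilations': 10}
    ```

    One stray click on the huge category and the niche content gets squeezed out entirely, because the score is dominated by how much of that content exists.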

    IMO YouTube’s algorithm has vacillated wildly over the years in terms of quality. At one point in time, if you were a Type A user it didn’t know what to do with you at all, and your homepage would consist exclusively of live streams with 3 viewers and “Family Guy funny moments compilation #39”.



  • Specifically, they are completely incapable of unifying information into a self-consistent model.

    To use an analogy: you see a shadow and know it’s being cast by some object with a definite shape, even if you can’t be sure what that shape is. An LLM sees a shadow, and its idea of what’s casting it is as fuzzy and mutable as the shadow itself.

    Funnily enough, old-school AI from the ’70s, like logic engines, possessed a super-human ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of robots like HAL 9000 and Data from Star Trek comes from.) However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.
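    A toy example of that style of system (purely illustrative; the facts, the rule, and the names are made up, and real 1970s systems were far more sophisticated):

    ```python
    # Tiny forward-chaining "logic engine": keep applying rules until no new
    # facts can be derived. Note that even common-sense knowledge like
    # "water makes things wet" has to be entered by hand.
    facts = {("liquid", "water"), ("touches", "shirt", "water")}

    rules = [
        # If X touches water, then X is wet.
        lambda f: {("wet", fact[1]) for fact in f
                   if fact[0] == "touches" and fact[2] == "water"},
    ]

    changed = True
    while changed:                        # run to a fixed point
        changed = False
        for rule in rules:
            new_facts = rule(facts) - facts
            if new_facts:
                facts |= new_facts
                changed = True

    print(("wet", "shirt") in facts)      # True, but only because we spelled it all out
    ```

    Everything it concludes follows mechanically from the facts it was given, which is why it can’t contradict itself, and also why it knows nothing you didn’t type in.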

    With the rise of machine learning and large artificial neural networks, we solved the problem of dealing with implicit, ambiguous, and paradoxical information, but in the process we completely removed the ability to reason logically.


  • That sounds absolutely fine to me.

    Compared to an NVMe SSD, which is what I have my OS and software installed on, every spinning-disk drive is glacially slow. So it really doesn’t make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.

    In fact I wish tape drives weren’t so expensive because I’m pretty sure I’d rather have one of those.

    If you need high R/W performance and huge capacity at the same time (like for editing gigantic high-resolution videos), you probably want some kind of RAID array.






  • That’s what Google was trying to do, yeah, but IMO they weren’t doing a very good job of it. Really old Google Search was good if you knew how to structure your queries, but then they tried to make it so you could ask plain-English questions instead of having to think about which keywords you were using, and that ruined it. You also weren’t able to run it against your own documents.

    LLMs, on the other hand, are so good at statistical correlation that they’re able to pass the Turing test. They know what words mean in context (inasmuch as they “know” anything) instead of just matching keywords and a short list of synonyms. So there’s reason to believe that if you were able to see which parts of the source text the LLM considered most similar to a query, the results could be pretty good.

    There is also the possibility of running one locally to search your own notes and documents. But like I said, I’m not sure I want to max out my GPU to do a document search.
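    For what it’s worth, the local version is already pretty approachable with off-the-shelf tools. A rough sketch, assuming the sentence-transformers library and its small all-MiniLM-L6-v2 model (which runs fine on a CPU); the notes here are obviously made up:

    ```python
    # Local semantic search over your own notes: embed everything once,
    # then rank notes by cosine similarity to the query embedding.
    from sentence_transformers import SentenceTransformer, util

    notes = [
        "How to repot a fiddle-leaf fig",
        "RAID levels and when to use each one",
        "Backup strategy: tape vs. external HDD",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    note_vecs = model.encode(notes, convert_to_tensor=True)

    query_vec = model.encode("long-term storage options", convert_to_tensor=True)
    scores = util.cos_sim(query_vec, note_vecs)[0]   # one similarity score per note
    best = int(scores.argmax())
    print(notes[best], float(scores[best]))
    ```

    No GPU-melting generation step is involved; the heavy lifting is a one-time embedding pass over the documents.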