• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: August 6th, 2023




  • As a recently-former HPC/supercomputer dork: NFS scales really well. All this talk of encryption etc. is weird; you normally do that at the link layer if you're worried about security between systems. That, plus NFSv4 to cut down on metadata chattiness, and you're good to go. I've tried scaling Ceph and S3 for latency on 100/200G links, and NFS is by far the easiest of them to scale. For a homelab? NFS and call it a day. All the clustered filesystems will make you do a lot more work than just throwing `hard` into your NFS mount options and letting clients block I/O while you reboot, which for home is probably the easiest approach.
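    A minimal sketch of what that can look like, assuming a hypothetical NFSv4 server and export (the hostname and paths here are placeholders, tune the options to taste):

    ```
    # Hypothetical /etc/fstab entry for an NFSv4 mount.
    # "hard" makes the client block and retry I/O indefinitely (e.g. while the server reboots)
    # instead of returning errors; "vers=4.2" selects NFSv4 to reduce metadata chattiness.
    nas.example.lan:/export/data  /mnt/data  nfs  vers=4.2,hard,noatime,_netdev  0  0
    ```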





  • Bingo. The town I went to school in had barely 500 people, even after its school had taken in the kids from two other closed schools, and it's even fewer now. My grade was the largest at 32 kids. There were former "towns" dotted all over from the rush west, wherever train tracks used to run; they're all gone now, just somewhere used for cow shelter in the winter. These towns were simply stops where railroad cars took on water on the route west, and once that wasn't needed the slow march to zero began. My nearest non-family neighbor was over 7 miles away. A lot of the interior USA is really sparsely populated and is essentially returning to its pre-colonial state of mainly giant farmland or grazing pastures.






  • That's just rpm. SuSE Linux 1.0 was never built off the same source or installer that Red Hat Linux was.

    Do you have a historical example where any SUSE distribution used Red Hat-based source? As I said, openSUSE only used the rpm package manager; it never used any other component of a Red Hat-derived install.

    Source: I work there, and I can find zero Red Hat strings in any old source code from that era. The old greybeards took offense at the implication that SUSE was ever based on Red Hat beyond using rpm, which at the time was about all there was for packaging.

    All they did was start to use rpm instead of tar for packaging.



  • Sorry, but your reply suggests otherwise.

    I'm at work; I'm not going to write a thesis on IP allocation.

    The RIRs (currently) never allocate a /64 or a /58; a /48 is (currently) their smallest allocation. For example, of the ~800,000 /32s ARIN has, only ~47k are "fragmented" (smaller than /32) and fewer than 4,000 are /48s. If /32s were the average we'd be fine, but in our infinite wisdom we assign larger subnets (like Comcast's 2601::/20 and 2603:2000::/20).

    Correct, it's all noted here: https://www.iana.org/numbers/allocations/arin/asn/

    Taking into account the RIPE allocations noted above, the closer equivalent to an IPv4 /8 is the ~1.048M /20s available (quick arithmetic below). Yes, that's more than the 256 class-A blocks, but does 1 million really sound like the scale you were talking about? "Enough addresses in IPv6 to address every known atom on earth."
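    To make the counting concrete, here's a back-of-the-envelope sketch (plain Python doing prefix arithmetic only; it isn't tied to any registry data):

    ```python
    # Count /20 prefixes to put the "1.048M /20s" figure in context.

    total_20s = 2 ** 20                                  # /20s in the entire 128-bit IPv6 space
    print(f"/20s in all of IPv6: {total_20s:,}")         # 1,048,576

    print(f"IPv4 /8 (class-A) blocks: {2 ** 8}")         # 256

    # For scale, /20s inside just 2000::/3, the current global unicast pool:
    print(f"/20s inside 2000::/3: {2 ** (20 - 3):,}")    # 131,072
    ```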

    If you're going to conflate the raw size of 2^128 (larger than the number of atoms on Earth) with how a prefix-based assignment scheme actually hands out space, I'm just going to assume this is a bad-faith argument.

    Have a good one; I'm not wasting more time on this. The best projections for "exhausting" our IPv6 allocations put it around 10 million years from now, and I think by then we can change the default CIDR allocations.

    https://samsclass.info/ipv6/exhaustion-2016.htm

    It's old, sure, but not worth arguing further.


  • I'm fully aware of how RIRs allocate IPv6. The smallest allocation is a /64 (that's 65,535 /64s). There are 2^32 /32s available, and a /20 is the minimum allocatable now. These aren't IPv4 /8s; look at it from a /56: there are roughly 10^16 /56 networks, about 17 million times more network ranges than there are IPv4 addresses (quick arithmetic below).
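    A quick sanity check on that ratio (plain Python, prefix math only, not registry data):

    ```python
    # /56 networks in the 128-bit IPv6 space vs. the entire IPv4 address space.
    slash56_count = 2 ** 56        # ~7.2e16 /56 networks
    ipv4_addresses = 2 ** 32       # ~4.3e9 IPv4 addresses

    print(f"/56 networks: {slash56_count:.2e}")
    # 2**24 = 16,777,216, roughly the "17 million times" figure above.
    print(f"ratio to IPv4 addresses: {slash56_count // ipv4_addresses:,}")
    ```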

    /48s are basically PoP-level allocations; few end users will be getting them. In fact, Comcast, which used to give me /48s, is down to /60s now.

    I'll repeat: we aren't running out any time soon, even with the default allocations inside the /3 currently assigned to IPv6.