

It would help if you could recall what steps you did, a link to the instructions you followed, and what you’re currently observing. Otherwise, we’re all just guessing at what might be amiss.
I’ve been reading a lot of Soatok’s blog, so when I see software that claims to be privacy-oriented, my first thought is: secure against what?
And in a refreshing change of pace, CryptPad actually outlines their threat model and how the software features might widen certain threats plus how to avoid those pitfalls. I’m not a security expert, but it’s clear they paid at least some attention to assuring privacy, rather than just paying it lip-service. So we’re off to a good start.
I select hostnames drawn from the ordinal numerals of whatever language I happen to be trying to learn. Recently it was Japanese, so the first host was named “ichiro”, the second “jiro”, and the third “saburo”.
Those are the romanized spellings of the original kanji characters: 一郎, 二郎, and 三郎. These aren’t the ordinal numbers per se (eg first, second, third) but an old way of assigning given names to male children. They literally mean “first son”, “second son”, “third son”.
Previously, I did French ordinal numbers, and the benefit of naming this way is that I can enumerate a countably infinite number of hosts lol
Kubernetes does indeed have a learning curve, but it’s also strangely accommodating for single-node setups, which can then be expanded merely by adding components, rather than tearing the whole thing down and starting again. In that sense, it’s a great learning platform for eventually managing larger or commercial clusters, if only to get experience with the unique challenges inherent to scaling up.
But that might be more of a [email protected] point of view haha
Ah, now I understand your setup. To answer the title question, I’ll have to be a bit verbose with how I think Incus behaves, so that the Docker behavior can be put into context. Bear with me.
br0 has the same MAC as the eth0 interface
This behavior stood out to me, since it’s not a fundamental part of Linux bridging, and it turns out it might be a systemd-specific choice. Creating a bridge is functionally equivalent to creating a software switch, where every port of the switch has its own MAC and every “client” attached to the switch has its own MAC as well. If I had to guess, systemd reuses the MAC so that traffic from the physical interface (eth0) that passes straight through to br0 carries the MAC already seen on the physical network, making traffic flows easier to follow in Wireshark, for example. I personally can’t agree with this design choice, since it obfuscates what Linux is really doing vis-a-vis a software switch. But reusing the MAC here is merely a weird side-effect and doesn’t influence what Incus is doing.
Instead, the reason Incus needs the bridge interface is precisely because a physical interface like eth0 will not automatically forward frames to subordinate interfaces, whereas for a virtual switch that forwarding is the default. To that end, the bridge interface is combined with virtual Ethernet (veth) interfaces – another networking primitive in Linux – one to each container that Incus manages. A veth behaves like a point-to-point network cable, plus the NICs on both ends. That means a veth always consists of a pair of interfaces, where traffic into one end comes out the other, and each interface has its own MAC address. Functionally, this is the networking equivalent of a bidirectional pipe.
By combining a bridge (ie a virtual switch) with veth (ie virtual cables), we have a full Layer 2 network topology that behaves just as if it were a physical bridge with physical cables. Thus, your DHCP server is none the wiser when it sends and receives BOOTP traffic for assigning an IP address. This is the most flexible way of constructing a virtual network within Linux, since it has feature parity with physical networks: no Macvlan, Ipvlan, tunneling, or anything else is needed to make it work. Linux is just operating as a switch, with all the attendant flexibility. This architecture is what Calico – a network framework for Kubernetes – uses to achieve scalable Layer 3 connectivity to containers; by default, Kubernetes does not depend on Layer 2 to function.
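For the curious, here is roughly what that plumbing looks like if you drive the kernel yourself. This is only a minimal sketch assuming the pyroute2 library and root privileges; the interface names br-demo, veth-host, and veth-ct are placeholders I made up, and this is not a claim about how Incus implements it internally.

```python
# Minimal sketch: build a bridge ("virtual switch") and a veth pair
# ("virtual cable"), then plug one end of the cable into the switch.
# Requires the pyroute2 library and root; names are hypothetical.
from pyroute2 import IPRoute

ipr = IPRoute()

# Create the bridge, i.e. the software switch.
ipr.link('add', ifname='br-demo', kind='bridge')

# Create a veth pair: one end stays on the host, the other end would
# normally be moved into a container's network namespace.
ipr.link('add', ifname='veth-host', kind='veth', peer='veth-ct')

# Attach the host end of the veth to the bridge, like plugging a
# cable into a switch port.
br_idx = ipr.link_lookup(ifname='br-demo')[0]
veth_idx = ipr.link_lookup(ifname='veth-host')[0]
ipr.link('set', index=veth_idx, master=br_idx)

# Bring both interfaces up.
ipr.link('set', index=br_idx, state='up')
ipr.link('set', index=veth_idx, state='up')
```

The container end would then be moved into the container’s network namespace; I’ve left that step out to keep the sketch short.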
OK, so we now understand why Incus does things the way it does. For Docker, when using the Macvlan driver, the benefits of the bridge+veth model are not achieved, because Macvlan – although being a feature of Linux networking – is something which is implemented against an individual interface on the host. Compare this to a bridge, which is a standalone concept and thus can exist with or without any interfaces to the host: when Linux is actually used as a switch – like on many home routers – the host itself can choose to have zero interfaces attached to the switch, meaning that traffic flows through the box, rather than to the box as a destination.
So when creating subordinate interfaces using Macvlan, we get most of the same bridging behavior as bridge+veth, but the Macvlan implementation in the kernel means that outbound traffic from a subordinate interface always gets put onto the outbound queue of the parent interface. This makes it impossible for a subordinate interface to exchange traffic with the host itself, by design. Had they chosen to go the extra mile, they would have just reinvented a version of bridge+veth that is excessively niche.
We also need to discuss the behavior of Docker networks. Similar to Kubernetes, containers managed by Docker mandate having IP connectivity (Layer 3). But whereas Kubernetes will not start a container unless an IPAM (IP Address Management) plugin explicitly provides an IP address, Docker’s legacy behavior is to always generate a random IP address from a default range, unless given an IP explicitly. So even though bridge+veth or Macvlan provides Layer 2 connectivity to a DHCP server that could hand out an IP address, Docker is eager to assign an IP itself, just so the container has one from the very start. The distinction between Docker and Kubernetes+Calico is thus one of actual utility: by getting an address from Calico’s IPAM, Kubernetes knows that the address will actually work for networking, because Calico also creates and manages the network. Docker, on the other hand, has no problem assigning an IP without ever checking whether that IP can be used on the network; it’s almost a pro-forma exercise.
I will say this about early Docker: although they led the charge for making containers useful, how they implemented networking was very strange. It bred a whole class of engineers who now have a deep misunderstanding of how real networks operate, which only causes confusion when scaling up to orchestrated container frameworks like Kubernetes that depend on a rigorous understanding of networking and Linux internals. But all the same, Docker was more interested in getting things working without external dependencies like DHCP servers, so there’s some sense in mandating an IP locally, perhaps because they didn’t yet envision that containers would talk to the physical network.
The plugin that you mentioned operates by requesting a DHCP-assigned address for each container, but from within the Docker runtime. Once it obtains that address, it statically assigns it to the container. So from the container’s perspective, it’s just getting an IP assigned to it, unaware that DHCP was involved at all. The plugin is then responsible for renewing that IP periodically. It’s a kludge to satisfy Docker’s networking requirements while still using DHCP-assigned addresses. But Docker just doesn’t play well with Layer 2 physical networks, because otherwise the responsibility for running the DHCP client would fall to the containers, and some containers might not even have a DHCP client to run.
If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know!
Sadly, there just isn’t a really good way to do this within Docker, and it’s not the kernel’s fault. Other container runtimes like containerd – which relies wholly on the standard CNI plugins and thus doesn’t have Docker’s networking footguns – have no problem with containers running their own DHCP client on a bridged network. But for any container manager, handling DHCP assignment without the container’s cooperation always leads to the same kludge Docker ended up with. And that’s probably why no major container manager does it natively; it’s hard to solve.
I do wish there could be something like Incus’ hassle-free solution for Docker or Podman.
Since your containers were able to get their own DHCP addresses from a bridged network in Incus, can you still run the DHCP client on those containers to override Docker’s randomly-assigned local IP address? You’d have to use the bridge network driver in Docker, since you also want host-container traffic to work and we know Macvlan won’t do that. But even this is a delicate solution, since if DHCP fails to assign an address, then your container still has the Docker-assigned address but it won’t be usable on the bridged network.
The best solution I’ve seen for containers on DHCP-assigned networks is to not use DHCP assignment at all. Instead, part of the IP subnet is carved out, a region dedicated only to containers. So in a home IPv4 network like 192.168.69.0/24, the DHCP server would be restricted to assigning only 192.168.69.2 through 192.168.69.127, and Docker would be allowed to allocate the addresses from 192.168.69.128 to 192.168.69.254 however it wants, with a subnet mask of 255.255.255.0. That mask allows containers to speak directly to addresses in the entire 192.168.69.0/24 range, which includes the rest of the network. The other physical hosts do the same, allowing them to connect to containers.
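To make the carve-up concrete, here’s a tiny standard-library-only sketch of the split described above. The specific addresses just mirror the example; on the Docker side, the --subnet and --ip-range options to docker network create are how you would express the same split.

```python
# Sketch of the address carve-up: one physical /24, with the DHCP
# server and Docker each given a non-overlapping slice. Every host
# and container keeps the full /24 mask, so they can all reach each
# other directly without any routing tricks.
import ipaddress

lan = ipaddress.ip_network('192.168.69.0/24')

dhcp_pool = (ipaddress.ip_address('192.168.69.2'),
             ipaddress.ip_address('192.168.69.127'))
docker_pool = (ipaddress.ip_address('192.168.69.128'),
               ipaddress.ip_address('192.168.69.254'))

print(f'LAN: {lan} (mask {lan.netmask})')  # mask 255.255.255.0
print(f'DHCP server assigns:  {dhcp_pool[0]} - {dhcp_pool[1]}')
print(f'Docker may allocate:  {docker_pool[0]} - {docker_pool[1]}')

# Sanity check: both pools live inside the same /24.
assert all(addr in lan for addr in (*dhcp_pool, *docker_pool))
```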
This neatly avoids interacting with the DHCP server, but at the cost of central management, and it splits the allocatable addresses into smaller pools, potentially exhausting one pool while the other still has spare addresses. Yet another reason to adopt IPv6 as the standard for containers, but I digress. For Kubernetes and similar orchestration frameworks, DHCP isn’t even considered, since the orchestrator must have full internal authority to assign addresses through its chosen IPAM plugin.
TL;DR: if your containers are like mini VMs, DHCP assignment is doable. But if they’re pre-packaged appliances, then only sadness results when trying to use DHCP.
I want to make sure I’ve understood your initial configuration correctly, as well as what you’ve tried.
In the original setup, you have eth0 as the interface to the rest of your network, and eth0 obtains a DHCP-assigned address from the DHCP server. Against eth0, you created a bridge interface br0, and your host also obtains a DHCP-assigned address on br0. Then in Incus, you created a Macvlan network against br0, such that each container attached to this network is assigned a random MAC, and all the container Ethernet frames are bridged to br0, which in turn bridges to eth0. In this way, every container can receive its own DHCP-assigned address. Also, each container can send traffic to the br0 IP address, to access services running on the host. Do I have that right?
For your Docker attempt, it looks like you created a Docker network using the Macvlan driver, but it wasn’t clear to me whether the parent interface here was eth0 or br0, or whether you still have br0 at all. When you say “I have MACVLAN working”, can you describe which aspect is working? Unique MAC assignment? Bridged traffic to/from the containers or the network?
I’m not very familiar with Incus, and I’m entirely in the dark about this shoddy plugin you mentioned being needed for DHCP and Macvlan to work. So far as I’m aware, modern Docker Engine uses the CNI plugins when creating networks, so the “-d macvlan” parameter specifies which CNI plugin will load. Since this would all be at Layer 2, I don’t see why a plugin is needed to support DHCP – v4 or v6? – traffic.
And the host cannot contact the container due to the MACVLAN method
Correct, but this is remedied by what’s to follow…
Can I make another bridge device off of br0 and bind to that one host-like?
Yes, this post seems to do exactly that: https://kcore.org/2020/08/18/macvlan-host-access/
I can always put a Docker/podman inside of an Incus container, but I’d like to avoid onioning if possible.
I think you’re right to avoid multiple container management tools, if only because it’s generally unnecessary. Although it kinda looks like Incus is more akin to Proxmox, in that it supports managing VMs and containers, whereas Podman and Docker only manage containers, which is further still distinct from the container runtime (eg CRI-O, containerd, Docker Engine (which uses containerd under the hood)).
Absolutely. An example of a malicious collision would be to request the file with the SHA-1 of 38762cf7f55934b34d179ae6a4c80cadccbb7f0a. But… there’s two of them here.
MD5 is so broken that its former status as a cryptographic hash function has been stripped. And efforts are underway to replace SHA-1 where it’s used, since although it takes some prerequisites to intentionally create a SHA-1 collision today, it’s worth remembering that “attacks always get better, they never get worse”.
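If you want to see it first-hand, the two PDFs published by the SHAttered researchers have different contents but that exact SHA-1 digest. Here’s a quick sketch for checking, assuming you’ve downloaded them under the file names shown (placeholders otherwise):

```python
# Sketch: compute SHA-1 (and a modern alternative) of a file in chunks.
# The two SHAttered PDFs have different contents but the same SHA-1,
# which is exactly why SHA-1 can no longer be trusted for addressing
# content by hash. File names below are placeholders.
import hashlib

def file_digest(path: str, algo: str = 'sha1', chunk_size: int = 1 << 20) -> str:
    h = hashlib.new(algo)
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

for name in ('shattered-1.pdf', 'shattered-2.pdf'):
    # Both should print 38762cf7f55934b34d179ae6a4c80cadccbb7f0a for
    # SHA-1, while their SHA-256 digests differ.
    print(name, file_digest(name, 'sha1'), file_digest(name, 'sha256'))
```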
provide the hash of an arbitrarily large file and retrieve it from the network
I sense an XY Problem scenario. Can you explain what you’re seeking to ultimately build and what requirements you have?
Does the solution need to be distributed? Does the retrieval need to complete ASAP or can wait until data becomes available? What sort of reliability/availability does this need? If only certain hash algorithms can be supported, which ones do you need and why?
I ask this because the answer will be drastically different if you’re building the content distribution system for a small video game versus building the successor to Kim Dotcom’s Mega file-sharing service.
Ctrl Alt Speech: a podcast by TechDirt’s Mike Masnick (who coined the term “Streisand Effect”) about online speech and content regulation, and how it’s not at all a simple nor straightforward task.
Feed: https://feeds.buzzsprout.com/2315966.rss
Soatok’s Dhole Moments: a blog on cryptography and computer security, with in-depth algorithm discussions interspersed with entertaining furry art. SFW. Also find Soatok on Mastodon.
Feed: https://soatok.blog/feed/
Molly White’s Citation Needed newsletter: critiques of cryptocurrency, regulations, policies, and news. Available as a podcast too. Also find Molly White on Mastodon. She also has a site dedicated to cryptocurrency disasters.
I am always deeply enthralled when math and comp-sci unite to yield an elegant result, where my definition of elegance is: efficient + minimal.
I read this and thought it was kind of all over the place. Even the first “falsehood”, that dereferencing null always immediately crashes, is answered with “true for some languages but not others”. Superlatives in CS like “always” and “never” rarely hold, including in this very sentence, and almost certainly not when talking about multiple programming languages.
And on that point, it’s a minor quibble, but while Go’s nil pointers are similar to C null pointers and Rust’s null raw pointers, it’s strange for the title to frame all of these as falsehoods about null pointers.
But then many of the other supposed falsehoods are addressed only for the C language, such as whether a null dereference is UB or not.
- On platforms where the null pointer has address 0, C objects may not be placed at address 0.
I would like to see a ©itation [pun intended] for this being a supposed falsehood, since my understanding is that if an implementation uses 0x0 as the null pointer, then the check for a null pointer is to check if it’s equal to 0x0, which would require that no “thing” in C use that address.
Can’t Python be translated into machine code
Yes, and that’s basically what the CPython interpreter does when you run a Python script. It sometimes even leaves the result lying in your filesystem, with the extension .pyc. This is byte code, in effect the machine code for CPython’s implementation of the Python Virtual Machine (PVM).
and packaged into a binary?
Almost. The .pyc file is meant to run with the appropriate PVM, not for x86 or ARM64, for example. But if you did copy that .pyc to another computer that has a CPython PVM, then you can run that byte code and the Python code should work.
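As a small illustration (the file name hello.py is just a placeholder), you can ask CPython to produce the .pyc explicitly and peek at the byte code:

```python
# Sketch: explicitly compile a script to CPython byte code.
# py_compile.compile() returns the path of the resulting .pyc, which
# normally lands in a __pycache__ directory next to the source file.
import dis
import py_compile

pyc_path = py_compile.compile('hello.py')
print('byte code written to:', pyc_path)

# The byte code itself can be inspected with the dis module:
with open('hello.py') as f:
    dis.dis(compile(f.read(), 'hello.py', 'exec'))
```

One caveat: a .pyc embeds a magic number tied to the CPython version, so “the appropriate PVM” really means a matching version of CPython on the other machine.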
To create an actual x86 or ARM64 binary, you might use a Python compiler like Cython, which compiles to x86 or ARM64 by first translating to C and then compiling that. The result is a very inefficient and slow binary, but it is functional. You probably shouldn’t do this, though.
While I get your point that Python is often not the most appropriate language to write certain parts of an OS, I have to object to the supposed necessity of C. In particular, the bolded claim that an OS not written in C is still going to have C involved.
Such an OS could instead have written its non-native parts using assembly. And while C was intentionally designed to be similar to assembly, it is not synonymous with assembly. OS authors can and do write assembly when even the C language cannot do what they need, and I gave an example of this in my comment.
The primacy of C is not universal, and has a strong dependency on the CPU architecture. Indeed, there’s a history of building machines which are intended for a specific high-level language, with Lisp Machines being one of the most complex – since Lisp still has to be compiled down to some sort of hardware instructions. A modern example would be Java, which defines the programming language as well as the ISA and byte code: embedded Java processors were built, and thus there would have been zero need for C apart from legacy convenience.
As it happens, this is strikingly similar to an interview question I sometimes ask: what parts of a multitasking OS cannot be written wholly in C. As one might expect, the question is intentionally open-ended so as to query a candidate’s understanding of the capabilities and limitations of the C language. Your question asks about Python, but I posit that some OS requirement which a low-level language like C cannot accomplish would be equally intractable for Python.
Cutting straight to the chase, C is insufficient for initializing the stack pointer. Sure, C itself might not technically require a working stack, but a multitasking operating system written in C must have a stack by the time it starts running user code. So most will do that initialization much earlier, so that the OS’s startup functions can utilize the stack.
This is normally done by the bootloader code, which is typically written in assembly and runs when the CPU is taken out of reset, and which then jumps into the OS’s C code. The C functions will allocate local variables on the stack, and everything will work just fine, even rewriting the stack pointer using intrinsics to cause a context switch (although this code is often – but not always – written in assembly too).
The crux of the issue is that the initial value of the stack pointer cannot be set using C code. Some hardware like the Cortex M0 family will initialize the stack pointer register by copying the value from 0x00 in program memory, but that doesn’t change the fact that C cannot set the stack pointer on its own, because invoking a C function may require a working stack in the first place.
In Python, I think it would be much the same: how could Python itself initialize the stack pointer necessary to start running Python code? You would need a hardware mechanism like with the Cortex M0 to overcome this same problem.
The reason the Cortex M0 added that feature is precisely to enable developers to never be forced to write assembly for that architecture. They can if they want to, but the architecture was designed to be developed with C exclusively, including interrupt handlers.
If you have hardware that natively executes Python bytecode, then your OS could work. But for x86 platforms or most other targets, I don’t think an all-Python, no-assembly OS is possible.
I also want to note that in the year 2025, GitHub still does not support IPv6. Folks behind CGNAT in IPv4-starved geos suffer, as does everyone developing for all-IPv6 networks. And it’s not like they can’t do it, seeing as their various subdomains like pages.github.com have working IPv6 already.
I suspect that PG&E’s smart meters might: 1) support an infrared pulse through an LED on the top of the meter, and 2) use a fairly-open protocol for uploading their meter data to the utility, which can be picked up using a Software Defined Radio (SDR).
Open Energy Monitor has a write-up about using the pulse output, where each pulse means a quantity of energy was delivered (eg 1 Watt-hour). So counting 1000 of such pulses would be 1 kWh, and that would be a way to track your energy consumption for any timescale.
What it won’t do is provide instantaneous power (ie kW drawn at this very moment) because the energy must accumulate to the threshold before sending a pulse. For example, a 9 Watt LED bulb that is powered on would only cause a new pulse every 6.7 minutes. But for larger loads, the indication would be very quick; a 5000 W dryer would emit a new pulse after no more than 0.72 seconds.
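Here is that arithmetic as a tiny sketch, assuming each pulse represents 1 Wh as in the write-up:

```python
# Sketch: time between meter pulses for a given steady load,
# assuming each pulse represents 1 Wh of delivered energy.
WH_PER_PULSE = 1.0

def seconds_per_pulse(load_watts: float) -> float:
    # energy per pulse (Wh) divided by power (W) gives hours per pulse
    return WH_PER_PULSE / load_watts * 3600

print(seconds_per_pulse(9))     # ~400 s, about 6.7 minutes for a 9 W bulb
print(seconds_per_pulse(5000))  # 0.72 s for a 5000 W dryer
```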
The other option is decoding the wireless protocol, which people have done using FOSS software. An RTL-SDR receiver is not very expensive, is very popular, and can also be used for other purposes besides monitoring the electric meter. Insofar as USA law is concerned, unencrypted transmissions are fair game to receive and decode. This method also has a wealth of other useful info in the data stream, such as instantaneous wattage in addition to the counter registers.
Unabashed plug for GnuCash. It’s FOSS, double-entry, and capable enough for oddball personal finances or business finance, with all the spreadsheet exporting one might need.
To make sure we’re all on the same page, this proposal involves creating an account with a service provider, then uploading some sort of preexisting, established proof-of-identity (eg passport data page), and then requesting a token against that account. The token is timestamped and non-fungible, so that when the token is presented to an age-restricted website, that website can query the service provider to verify that: 1) the token is still valid, 2) the person associated with the token is at least a certain age.
If I understood that correctly, what you’re describing is an account service combined with an identity service, which could achieve the objectives of a proof-of-age service, but does not minimize the privacy complications. And we already have account services of varying degrees of complexity: Google Accounts, OAuth, etc. Basically any service where you log in qualifies, since the point of logging in is to associate with an account, although one person can have multiple accounts. Passing around tokens isn’t strictly necessary, since you can just ask the user to prove account ownership by signing into their Google Account, for example. An account service need not necessarily verify age, eg signing in to post a comment on a news article.
Compare this with an identity service like ID.me, which provides records on an individual; there cannot be multiple records for the same live person. This type of service is distinct from an account service, though some accounts are necessarily tied to a single identity, such as online banking. But apart from KYC regulations or filing one’s taxes online, an identity service isn’t required for most day-to-day activities, and any additional uses pose identity theft concerns.
Proof-of-age – as I understand it from the Australian legislation – does not necessarily demand an identity service be used to satisfy the law, but the question in this Lemmy thread is whether that’s a distinction without a difference. We don’t want to be checking identities if we don’t have to, for privacy and identity theft reasons.
In short, can a person be uniquely, anonymously age-verified online? I suspect not. Your proposal might be reasonable for an identity service, but does not move us further towards a theoretical privacy-centric proof-of-age validation mechanism. If such a mechanism doesn’t exist, then the Australian legislation would be mandating identity checks for subject websites, which then become targets for the holder of those identity records. This would be bad.
Sadly, this type of scheme suffers from: 1) repudiation, and 2) transferability. An ideal system would be non-repudiable, meaning that when a GUID is used, it is unmistakably an action that could only be undertaken by the age-verified person. But a GUID cannot guarantee that, since it’s easy enough for an adult to start selling their valid GUIDs online to the highest bidder en-masse. And being a simple string, it can easily and confidentially be transferred to the buyer, so that no one but those two would know that the transaction actually took place, or which GUID was passed along.
As a general rule, when complex questions arise which might possibly be solved by encryption, it’s fairly safe to assume that expert cryptographers have already looked at the problem and that no easy or obvious solution exists. That’s not to say that cryptographers must never be questioned, but that the field is complicated enough that incomplete answers abound.
IMO, the other comments have it right: there does not exist a general solution to validate age without also compromising anonymity or revealing one’s identity to someone. And that alone is already a privacy compromise.
You and friend 1 have working setups. Friend 2 can’t seem to get their setup to work. So the problem has to be specific to friend 2’s machine or network.
To start at the very basics: when WG is disabled, what are friend 2’s DNS servers, as listed in “/etc/resolv.conf” (Linux) or in the output of “ipconfig /all” (Windows)? This can be an IPv4 or IPv6 address. Whatever it is, take note of it. Also try to ping it and make sure the ping is successful.
Then have friend 2 enable WG. Now try pinging the same DNS servers again. If this fails, you are one step closer to the problem. If this succeeds, then check to see if WG caused new DNS servers to replace the former ones.
One possibility is that friend 2’s home network also uses 192.168.8.X, and so the machine tries to reach the DNS servers by going through WG. But we need more details before making this conclusion.
You also said friend 2 can ping 9.9.9.9 (aka Quad9), but is this friend using Quad9 as their DNS server? If so, what exactly is observed when you say that “DNS doesn’t resolve”? Is this an error in a browser or the result from running “nslookup” in the command line?
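One quick way to separate “the browser is misbehaving” from “the resolver is broken” is to ask the system resolver directly. A small sketch (the hostnames are arbitrary examples):

```python
# Sketch: test whether the system resolver can look up names at all.
# This uses whatever DNS servers the OS is currently configured with,
# i.e. the ones that WireGuard may have swapped in.
import socket

for host in ('example.com', 'quad9.net'):
    try:
        results = socket.getaddrinfo(host, None)
        addrs = sorted({r[4][0] for r in results})
        print(f'{host}: {addrs}')
    except socket.gaierror as e:
        print(f'{host}: resolution failed ({e})')
```

If this fails while pinging 9.9.9.9 still works, the problem is squarely in DNS rather than general connectivity.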
IPv6 isn’t likely to be directly responsible for DNS resolution failures, but a misconfigured WG tunnel that causes an IPv6 DNS server to be blackholed is one way to create resolution failures. It may also just be a red herring, and the issue may be contained entirely to IPv4. I would not recommend turning off IPv6, because that’s almost always the wrong answer and just sweeps the other problems under the rug.