OSM has a lot more data inside than the website shows - in dense shopping areas you can’t zoom in far enough to see all the POIs, much less business names.
I’ve read before that serving pre-rendered tiles was done to stay accessible to less powerful mobile devices, whose CPUs would be taxed by rendering the native vector data. I view it as a branding disadvantage that OSM appears, from desktops, to have less info than alternatives. But that’s a battle that’s been had many times before; one might as well argue over paper vs plastic.
The main URL points to this:
- `cargo install mollysocket`
- Copy the `mollysocket` executable elsewhere if desired.
- Run `mollysocket` once so that it will emit the default config.
- Take `.config/mollysocket/default-config.toml` and copy it somewhere.
- Replace the `allowed_endpoints` line with `allowed_endpoints = ['*']`. The default 0.0.0.0 config appears to be a bug; this setting controls access to endpoints within the app, not IPs from outside. Leaving the original value causes mollysocket to reject everything.
- Set the `db = './mollysocket.db'` line rather than just having the database land wherever you’re sitting.
- Delete the `mollysocket.db` that was created on first run (even if it’s already where you’re intending to put it). This is just to make sure the web server creates it and has the correct permissions.
- Set the environment:

  ```
  export ROCKET_PORT=8020
  export RUST_LOG=info
  export MOLLY_CONF=/path/to/your/config.toml
  ```

- In your web server, point `/` to your mollysocket server and ROCKET_PORT.
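For the last step, here’s a minimal sketch assuming nginx as the reverse proxy (the hostname is a placeholder, and TLS certificate directives are omitted; any proxy that forwards to ROCKET_PORT should do):

```nginx
# hypothetical nginx server block: forward everything to mollysocket on ROCKET_PORT
server {
    listen 443 ssl;
    server_name push.example.org;          # placeholder; use your mollysocket hostname

    location / {
        proxy_pass http://127.0.0.1:8020;  # ROCKET_PORT from the exports above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```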
you probably already found this, but for others who might be curious:
in the settings, if you change the notification method from websocket to unified push, the UP settings come up, including a server address (which is what they intend you to use) or some air gap mode that i can’t find documented
again not foss so won’t dwell at length — but i use fund manager from beiley software. commercial, but works double-entry and handles more investment complexity than a human could ever need. windows app, i run it under wine on linux and crossover on mac. (i don’t own a windows box — that’s how irreplaceable it was for me.)
so per wikipedia and confirmed at MDN, firefox is the only major browser line not to consider certificate transparency at all. and yet it’s the only one that has given me occasional maddening SSL errors that have blocked site access (not always little sites, it’s happened with amazon).
i don’t understand how firefox can be simultaneously the least picky about certificates and the most likely to spuriously decide they’re invalid.
well i feel stupid now for not doing the obvious. but…
> Blocked Page
> Your organization has blocked access to this page or website.
on the PPA box, this is what it showed me (meanwhile it was attempting to connect to incoming.telemetry.mozilla.org). another symptom of appearing to respect enterprise policies while in fact ignoring them. (as i had mentioned, on this box all of the settings look locked down as they should be, but it’s still attempting to send telemetry.)
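for context, the lockdown i mean is along these lines in a policies.json (a minimal sketch, not our actual file):

```json
{
  "policies": {
    "DisableTelemetry": true,
    "DisableFirefoxStudies": true
  }
}
```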
thanks, i’ll look again. it’s not that i love the idea of being fingerprinted; i just think that five mylar bags, four tin hats and a partridge in a pear tree won’t save me from that. i need my password manager, and once that’s in, enforcing a generic screen is silly - cow’s out of the barn. but not having the arms race against pocket and telemetry would be a big bonus.
i did try that but the never-dark mode blinded me. i understand the reasoning, but absolute anonymity isn’t my own threat model; i’d like to be able to use themes and resize the window
an interesting oddity: on my non-rooted xperia, signal thinks that i don’t have play services and so it falls back to… polling. every five minutes. killing my battery and my logs.
i had to put signal into the restricted battery group, which means no notifications. i anxiously await the new molly, as i already have a unified push environment. it looks like the migration will be a bit delicate.
It exists; it’s called a robots.txt file that the developers can put into place, and then bots like the web archive crawler will ignore the content.
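For reference, a minimal robots.txt at the site root (paths illustrative) telling compliant crawlers to skip content:

```
User-agent: *
Disallow: /private/
```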
the internet archive doesn’t respect robots.txt:
> Over time we have observed that the robots.txt files that are geared toward search engine crawlers do not necessarily serve our archival purposes.
the only way to stay out of the internet archive is to follow the process they created and hope they agree to remove you. or firewall them.
i am not sure it’s a flaw at all. the conditional tag syntax is based on opening_hours, which should be able to express ‘closed at these times until that date’. there are ways to finesse this. but as long as the published guideline is ‘don’t do this’, there’s little point pondering practical solutions.
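for instance (dates invented, and i’d double-check the exact syntax against the wiki), something like:

```
opening_hours = Mo-Fr 09:00-18:00; 2025 Jul 01-2025 Jul 31 off
access:conditional = no @ (2025 Jul 01-2025 Jul 31)
```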
> Our map data is often downloaded and used offline on various devices for several weeks or months. For offline data to be useful, it should at least be expected to remain unchanged in the next few weeks when you map it.
yes, by this blurb, concession for offline users does supersede safety.
i’m an editor active enough to have been granted foundation membership but hadn’t known this rule; it indicates a view of osm as analogous to a paper map rather than a basis for real-time navigation. if mapping a change that lasts less than a few weeks is discouraged, i can’t in good conscience steer my friends away from google maps, since navigation is evidently not a primary use case.
it is common practice in the u.s., at least, to use two nodes for big chain drugstores, where the shop, marked chemist, often has wildly different hours from the pharmacy. they have the same name and much of the same info
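something like this pair, as a sketch (name and hours invented):

```
node A:  shop=chemist       name=Example Drugs   opening_hours=Mo-Su 07:00-23:00
node B:  amenity=pharmacy   name=Example Drugs   opening_hours=Mo-Fr 09:00-18:00; Sa 09:00-13:00
```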
i made the same migration from markor (files in a folder) to logseq. there’s a lot to be gained - always-preview alone is a game changer - but on mobile the visibility of the keyboard can be fiddly. once in a while you’ll feel like you’re in vi, it has such a mind of its own. but i’m not planning to go back
looks great! the catch for me is that my current host doesn’t have docker support. your dependencies don’t look crazy so in theory i could burst it and install directly to the host environment, but at that point i’m giving myself grocy-level headaches.
reading about docker-capable hosts, i was surprised to see them starting at 1GB RAM - i couldn’t run pac-man in that. what would be a reasonable expectation for kitchenowl?
i haven’t tried the docker route - it seems fairly new. it also doesn’t seem like it would fix the issues i ran into. containerization is great for insulating the app from external dependency hell and environmental variation. but the problems i’ve had involve its own code and logic, and corruption of a sqlite database within its own filesystem; wrapping issues like that in a docker container only makes them harder to solve
i’m no expert — consensus sounds like putting disused only on the main tag, and when i’ve encountered this, i haven’t marked anything disused at all. i’ve only looked at the stop/platform to make sure they weren’t in any relation (transit line relations may include the passing way but shouldn’t include the disused stop/platform). and i make sure route_ref isn’t set on the stop/platform. were the stop to be used again, i figure it would have the same ref/stop id and operator, so i don’t remove them. listening for better ideas though
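putting that together, the end state i’d expect looks roughly like this (values invented, tags illustrative):

```
disused:highway=bus_stop     # lifecycle prefix on the main tag only
name=Example Stop            # kept
ref=4711                     # kept, in case the stop comes back
operator=Example Transit     # kept
# node removed from route relations; no route_ref tag on it
```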