a search engine for the gemini network protocol
gemini://gemplex.space/search
have 2 flash/hard/whatever drives: A and B
once a month (or at whatever frequency you can sustain)
back up your data to A, and the next cycle back up your data to B
nothing fancy or technical, just some basic, consistent backups.
If you can do that you’ll likely be fine. There are nearly endless enhancements you can add if you are more technical or can follow technical instructions.
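A minimal sketch of that rotation, assuming the two drives mount at /mnt/backup_a and /mnt/backup_b (the paths and the state file are placeholders, adjust to wherever your drives actually mount):

    # rotate_backup.py - copy DATA_DIR to drive A or B, alternating each run
    import shutil, datetime
    from pathlib import Path

    DATA_DIR = Path.home() / "data"                       # what you want to keep
    DRIVES = [Path("/mnt/backup_a"), Path("/mnt/backup_b")]
    STATE = Path.home() / ".last_backup_drive"            # remembers which drive was used last

    last = STATE.read_text().strip() if STATE.exists() else str(DRIVES[1])
    target = DRIVES[0] if last == str(DRIVES[1]) else DRIVES[1]

    stamp = datetime.date.today().isoformat()
    dest = target / f"backup-{stamp}"
    shutil.copytree(DATA_DIR, dest)                       # plain full copy, nothing fancy
    STATE.write_text(str(target))
    print(f"backed up {DATA_DIR} to {dest}")

Run it by hand or from a monthly cron job; which drive gets used flips automatically each cycle.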
That is just the gateway drug to bootstrapping.
Check out https://github.com/fosslinux/live-bootstrap
if you want the real hard stuff.
They already did: https://www.commanderx16.com/
you probably just want something better.
And that is the problem: building for higher performance requires more advanced lithography, which is expensive and until recently was not even an option for a hobbyist (without taking out a mortgage on their house).
Given the current stagnation, you need only wait about 10 years for that to become a viable option.
rxvt-unicode with tabbedex.
I refuse to use a terminal emulator that needs more than 100MB of RAM to display 80x24 green text on a black background.
checksums at the filesystem level do nothing to protect against memory corruption, which can overwrite everything on your disk with null values and a matching checksum, fail to write anything to disk at all, or do nothing.
But that is the gamble you take every day with every GB of RAM you have.
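A toy illustration of that failure mode (pure Python, no real filesystem involved): if the buffer is corrupted in RAM before the checksum is computed, the checksum matches the corrupted data and every later verification passes.

    import hashlib

    data = bytearray(b"important payload")
    data[:] = b"\x00" * len(data)                 # corruption in RAM, before any checksum exists

    checksum = hashlib.sha256(data).hexdigest()   # checksum is computed over the corrupted buffer
    # both the nulls and their matching checksum now go to disk; a later scrub sees no problem
    assert hashlib.sha256(data).hexdigest() == checksum
    print("scrub says OK, contents are:", bytes(data))

ECC catches the flip before the checksum is taken; the filesystem cannot, because from its point of view the nulls are what you asked it to write.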
the correct answer is Gemini or gopher.
No ECC, absolutely worthless for a NAS if you care about your data.
a git mirror of the content is the gold standard if you can do that.
It is recommended by the permacomputing community
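A minimal sketch of keeping such a mirror, assuming git is installed and the upstream is reachable (the URL and local path are placeholders):

    import subprocess
    from pathlib import Path

    UPSTREAM = "https://example.org/project.git"      # placeholder upstream URL
    MIRROR = Path.home() / "mirrors" / "project.git"  # placeholder local path

    if not MIRROR.exists():
        MIRROR.parent.mkdir(parents=True, exist_ok=True)
        # bare mirror clone: all branches, tags, and refs
        subprocess.run(["git", "clone", "--mirror", UPSTREAM, str(MIRROR)], check=True)
    else:
        # refresh on whatever schedule you can sustain (cron job, once a month, etc.)
        subprocess.run(["git", "remote", "update", "--prune"], cwd=MIRROR, check=True)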
RAID stopped being optimal now that btrfs and ZFS exist.
If you plan on using matching drives, ZFS is recommended (see the sketch after this list).
If you expect mismatched disks, btrfs will work.
If you are most worried about stability get a computer with ECC memory.
If you are most worried about performance, use SSD drives.
If you want a bunch of storage for cheap, use spinning disks (unless you exceed the 100TB capacity range)
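To make the matching vs. mismatched split concrete, a minimal sketch of each setup, expressed as Python subprocess calls so the commands are visible in one place (the device names and the pool name "tank" are placeholders, and both commands destroy whatever is on those devices):

    import subprocess

    MATCHED = ["/dev/sdb", "/dev/sdc"]   # two identical drives -> ZFS mirror
    MIXED = ["/dev/sdd", "/dev/sde"]     # different sizes -> btrfs raid1 copes

    # ZFS: a simple two-way mirror pool
    subprocess.run(["zpool", "create", "tank", "mirror", *MATCHED], check=True)

    # btrfs: raid1 for data and metadata across mismatched devices
    subprocess.run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1", *MIXED], check=True)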
looks interesting, but does it have a download/clone/mirror setup so that it doesn’t become another data graveyard?
The tools are already readily available under FSF approved licenses. https://www.gnu.org/licenses/licenses.en.html
Support the FSF if that is a legitimate concern to you
The tools are already readily available:
Relational Databases
SAT solvers
The missing bit is social action, which no amount of software can solve.
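As a small illustration of just how available the first item is, here is a sketch using Python’s bundled sqlite3 module (the table and rows are made up for illustration):

    import sqlite3

    # in-memory relational database, no install or server needed
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE archive (url TEXT PRIMARY KEY, fetched_at TEXT)")
    con.execute("INSERT INTO archive VALUES (?, ?)", ("gemini://gemplex.space/", "2024-01-01"))
    for row in con.execute("SELECT url, fetched_at FROM archive"):
        print(row)
    con.close()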
So effectively light enough that it could run on a Raspberry Pi 4. Well, that would put you under 10W.
Well the first question is what software you NEED to run, then we can figure out hardware.
Your ZFS backup strategy should be to follow one of the following rulesets:
3-2-1 [3 copies of the data at 2 different locations for the 1 purpose of preserving the data]
4-3-2-1 [4 copies of the data at 3 different locations in 2 different types of media for the 1 purpose of preserving the data]
5-4-3-2-1 [5 copies of the data at 4 different locations across 3 different continents in 2 different types of media for the 1 purpose of preserving the data]
The details of the backup depend more on whether you have a second system to enable ZFS send/receive, or whether you have to transport deltas from ZFS send yourself (a sketch of both follows).
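A minimal sketch of both cases, assuming a dataset named tank/data, an SSH host called backup, and a receiving dataset tank/backup/data (all of these names are placeholders):

    import subprocess, datetime

    DATASET = "tank/data"          # placeholder dataset
    REMOTE = "backup"              # placeholder SSH host for the second system
    snap = f"{DATASET}@{datetime.date.today().isoformat()}"

    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Case 1: a second system is reachable -> stream straight into zfs receive over SSH
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", "tank/backup/data"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()

    # Case 2: no network path -> write the incremental delta to a file you can carry over
    # with open("delta.zfs", "wb") as f:
    #     subprocess.run(["zfs", "send", "-i", f"{DATASET}@previous", snap], stdout=f, check=True)
    # ...then on the other machine: zfs receive tank/backup/data < delta.zfs

Either way, how many copies end up where is what the 3-2-1 style rulesets above are counting.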
heating is not done year-round (365.25 days/year) for the majority of the world’s population.
Hence places which need heating year-round are generally considered an edge case.
Yes: in a scenario where you are in a cold climate and it is always cold outside, thermal energy storage would be an extremely efficient option.
It doesn’t apply to most living humans, but I grant you that special case.
Yes, I did look at your link and noted that all of the sites are near mountain ranges, which I certainly grant you is near (within 100 miles of) most human population centers.
Warzone 2100 (you can download it for free, as it is an old PC game that went GPL)
gets more on the nose by the day