I've deployed a resource of Bible databases, canonical and non-canonical. The readable sets: https://scrollmapper.github.io/ (archive). The canonical databases (SQL, TXT, MD): https://github.com/scrollmapper/bible_databases. The apocryphal databases (SQL, TXT, MD): https://github.com/scrollmapper/bible_databases_deuterocanonical. These were created to make it easy to transmit biblical and traditional Christian texts. Of all the apocryphal books, my favorites are: 1 Enoch, 2 Esdras, and 2 Baruch.
Big Island Hawaii Web Design and Services
Vault Tech calling! Presenting a secondary internet for after the bombs fall… I’ve started a new project on GitHub. For some time now I’ve wanted to create a simple, separate internet, based on nothing more than human-readable text (.txt) files and images. Originally, it was inspired by a sort of technological doomsday prepping: the creation of archives and rudimentary internets that could easily be deployed.
The original creators of the major programming languages and computer systems are still alive today. They’ve had every opportunity to change with the times. They, after all, know innovation. They inspired us. They still do. Let’s look at the websites of the great masters in 2020 and take a lesson: Bjarne Stroustrup (C++) How he centered his picture, I will never know… https://www.stroustrup.com/ (archive) Richard Stallman (GNU General Public License) Regularly updated…
This is a small addition to the previous post, Why I Link to WayBackMachine Instead of Original Site (archived). That post started a discussion on Hacker News (archived), where the Wayback Machine responded with what they consider best practice for linking to archived URLs. The Official Wayback Machine Suggestion for Circumventing Link Rot and Content Drift: We suggest/encourage people link to original URLs but ALSO (as opposed to instead of) provide Wayback Machine URLs so that if/when the original URLs go bad (link rot) the archive URL is available, or to give people a way to compare the content associated with a given URL over time (content drift) BTW, we archive all outlinks from all Wikipedia articles from all Wikipedia sites, in near-real-time… so that we are able to fix them if/when they break.
When linking to a page for reference, it seems better to me to link to the archive of the page rather than to the original site itself. This ensures that after some years have gone by, my article is guaranteed to stay consistent. Due to the changing nature of the web, there is a chance that after some years the link could lead to: a 404 / Not Found (most common); changed or edited content, or entirely replaced content; or content that, due to a rise in popularity, is now shielded, demanding that the user create an account to read the entire article.
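Following the Wayback Machine's own suggestion above (pair an archive URL with the original), here is a minimal sketch of how one might look up the closest archived snapshot programmatically, using the Internet Archive's public Availability API at https://archive.org/wayback/available. The endpoint and the JSON response shape are real; the helper function names are my own.

```python
# Sketch: find the closest Wayback Machine snapshot for a URL via the
# public Availability API. Helper names are illustrative, not an official client.
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build the Availability API request URL for a page."""
    params = {"url": url}
    if timestamp:
        # Optional target time, YYYYMMDDhhmmss; the API returns the closest match.
        params["timestamp"] = timestamp
    return API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(response_json):
    """Extract the archived snapshot URL from an API response, or None."""
    snap = response_json.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

def archive_url(url, timestamp=None):
    """Fetch the closest snapshot URL for `url` (makes a network call)."""
    with urllib.request.urlopen(availability_query(url, timestamp)) as resp:
        return closest_snapshot(json.load(resp))
```

In a post, one could then cite both links, e.g. "original (archived)", so the reference survives link rot while readers can still check the live page for content drift.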