Think of an archive library as a bookshelf, with some books on it (the separate .o files).
Some books may refer you to other books (via unresolved symbols), which may be on the same bookshelf or on a different one.
The Not Rocket Science Rule Of Software Engineering: automatically maintain a repository of code that always passes all the tests
Time passed, that system aged and (as far as I know) went out of service. I became interested in revision control, especially systems that enforced this Not Rocket Science Rule. Surprisingly, only one seemed to do so automatically (Aegis, written by Peter Miller, another charming no-nonsense Australian who is now, sadly, approaching death).
Fantastic post by Jason Crawford (The Roots of Progress)
A major theme of the 19th century was the transition from plant and animal materials to synthetic versions or substitutes mostly from non-organic sources
(Ivory, fertilizer, lighting, smelting, shellac)
There are many other biomaterials we once relied on—rubber, silk, leather and furs, straw, beeswax, wood tar, natural inks and dyes—that have been partially or fully replaced by synthetic or artificial substitutes, especially plastics, that can be derived from mineral sources. They had to be replaced, because the natural sources couldn’t keep up with rapidly increasing demand. The only way to ramp up production—the only way to escape the Malthusian trap and sustain an exponentially increasing population while actually improving everyone’s standard of living—was to find new, more abundant sources of raw materials and new, more efficient processes to create the end products we needed. As you can see from some of these examples, this drive to find substitutes was often conscious and deliberate, motivated by an explicit understanding of the looming resource crisis.
In short, plant and animal materials had become unsustainable.
To my mind, any solution to sustainability that involves reducing consumption or lowering our standard of living is no solution at all. It is giving up and admitting defeat. If running out of a resource means that we have to regress back to earlier technologies, that is a failure—a failure to do what we did in the 19th century and replace unsustainable technologies with new, improved ones that can take humanity to the next level and support orders of magnitude more growth.
free classic literature ebooks
Under general relativity, gravity is not a force. Instead it is a distortion of spacetime. Objects in free-fall move along geodesics (straight lines) in spacetime, as seen in the inertial frame of reference on the right. When standing on Earth we experience a frame of reference that is accelerating upwards, causing objects in free-fall to move along parabolas, as seen in the accelerating frame of reference on the left.
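The "straight lines in spacetime" claim has a standard compact form. As a sketch (symbols are the conventional ones, not taken from the note itself): a free-fall worldline x^mu(tau) satisfies the geodesic equation

```latex
% Geodesic equation: the "straightest possible" worldline in curved spacetime.
% x^\mu(\tau): coordinates along the path, parametrized by proper time \tau;
% \Gamma^\mu_{\alpha\beta}: Christoffel symbols encoding the spacetime distortion.
\[
  \frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^\mu_{\alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0
\]
```

In an inertial frame the Christoffel symbols vanish and this reduces to d²x/dτ² = 0, an ordinary straight line; in a frame accelerating upward the same free-fall path appears as a parabola, matching the two pictures described above.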
It is not safe stagnation and risky growth that we must choose between; rather, it is stagnation that is risky and it is growth that leads to safety.
we might be advanced enough to have developed the means for our destruction, but not advanced enough to care sufficiently about safety. But stagnation does not solve the problem: we would simply stagnate at this high level of risk.
The risk of an existential catastrophe then looks like an inverted U-shape over time:
There is an analog to this in environmental economics, called the “environmental Kuznets curve.” It was theorized that pollution initially rises as countries develop, but, as people grow richer and begin to value a clean environment more, they will work to reduce pollution again. That theory has arguably been vindicated by the path that Western countries have taken with regard to water and air pollution, for example, over the past century.
Carl Sagan was the one who coined the term “time of perils.” Derek Parfit called it the “hinge of history.”
On the other extreme, humanity is extremely fragile. No matter how high a fraction of our resources we dedicate to safety, we cannot prevent an unrecoverable catastrophe. Perhaps weapons of mass destruction are simply too easy to build, and no amount of even totalitarian safety efforts can prevent some lunatic from eventually causing nuclear annihilation. We might indeed be living in this world; this would be the model’s version of Bostrom’s “vulnerable world hypothesis,” Hanson’s “Great Filter,” or the “Doomsday Argument.”
Perhaps, if we followed this argument to the end, we might reach the counterintuitive conclusion that the most effective thing we can do to reduce the risk of an existential catastrophe is not to invest in safety directly or to try to persuade people to be more long-term oriented—but rather to spend money on alleviating poverty, so more people are well-off enough to care about safety.
It’s been 13 years since Yudkowsky published the sequences, and 11 years since he wrote “Rationality is Systematized Winning.”
So where are all the winners?
Immediately after “Rationality is Systematized Winning,” Scott Alexander wrote “Extreme Rationality: It’s Not That Great,” claiming that there is “approximately zero empirical evidence that x-rationality has a large effect on your practical success.”
The primary impacts of reading rationalist blogs are that 1) I have been frequently distracted at work, and 2) my conversations have gotten much worse.
Spin networks are states of quantum geometry in a theory of quantum gravity, discovered by Lee Smolin and Carlo Rovelli, which is the conceptual ancestor of the imaginary physics of Schild’s Ladder.
Cool, but also damning?
"Proposed by Michael Spivak in 1965, as an exercise in Calculus"