Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take:

  • The US is falling apart rapidly (on the scale of years), as evident in US politics departing from sanity and honor, sharp polarization, violent civil unrest, hopeless pandemic responses, ensuing economic catastrophe, one in a thousand Americans dying by infectious disease in 2020, and the abiding popularity of Trump in spite of it all.
  • Western civilization is declining on the scale of half a century, as evidenced by its inability to build things it used to be able to build, and the ceasing of apparent economic acceleration toward a singularity.
  • AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots:
    • ‘Aligned’ AI is necessary for a non-doom outcome, and hard.
    • Arms races worsen things a lot.
    • The order of technologies matters a lot / who gets things first matters a lot, and many groups will develop or do things as a matter of local incentives, with no regard for the larger consequences.
    • Seeing more clearly what’s going on ahead of time helps all efforts, especially in the very unclear and speculative circumstances (e.g. this has a decent chance of replacing subplots here with truer ones, moving large sections of AI-risk effort to better endeavors).
    • The main task is finding levers that can be pulled at all.
    • Bringing in people with energy to pull levers is where it’s at.
  • Institutions could be way better across the board, and these are key to large numbers of people positively interacting, which is critical to the bounty of our times. Improvement could make a big difference to swathes of endeavors, and well-picked improvements would make a difference to endeavors that matter.
  • Most people are suffering or drastically undershooting their potential, for tractable reasons.
  • Most human effort is being wasted on endeavors with no abiding value.
  • If we take anthropic reasoning and our observations about space seriously, we appear very likely to be in a ‘Great Filter’, which appears likely to kill us (and unlikely to be AI).
  • Everyone is going to die, the way things stand.
  • Most of the resources ever available are in space, not subject to property rights, and in danger of being ultimately had by the most effective stuff-grabbers. This could begin fairly soon in historical terms.
  • Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)
  • There are vast quantum worlds that we are not considering in any of our dealings.
  • There is a strong chance that we live in a simulation, making the relevance of each of our actions different from that which we assume.
  • There is reason to think that acausal trade should be a major factor in what we do, long term, and we are not focusing on it much and ill prepared.
  • Expected utility theory is the basis of our best understanding of how best to behave, and there is reason to think that it does not represent what we want: for example, Pascal’s mugging, or the option of destroying the world with all but a one-in-a-trillion chance in exchange for a proportionately greater utopia, etc.
  • Consciousness is a substantial component of what we care about, and we not only don’t understand it, but are frequently convinced that it is impossible to understand satisfactorily. At the same time, we are on the verge of creating things that are very likely conscious, and so being able to affect the set of conscious experiences in the world tremendously. Very little attention is being given to doing this well.
  • We have weapons that could destroy civilization immediately, which are under the control of various not-perfectly-reliable people. We don’t have a strong guarantee of this not going badly.
  • Biotechnology is advancing rapidly, and threatens to put extremely dangerous tools in the hands of personal labs, possibly bringing about a ‘vulnerable world’ scenario.
  • Technology keeps advancing, and we may be in a vulnerable world scenario.
  • The world is utterly full of un-internalized externalities and they are wrecking everything.
  • There are lots of things to do in the world, we can only do a minuscule fraction, and we are hardly systematically evaluating them at all. Meanwhile massive well-intentioned efforts are going into doing things that are probably much less good than they could be.
  • AI is a powerful force for good, and if it doesn’t pose an existential risk, the earlier we make progress on it, the faster we can move to a world of unprecedented awesomeness, health and prosperity.
  • There are risks to the future of humanity (‘existential risks’), and vastly more is at stake in these than in anything else going on (if we also include catastrophic trajectory changes). Meanwhile the world’s thinking about and responsiveness to these risks is incredibly minor, and they are not taken seriously.
  • The world is controlled by governments, and really awesome governance seems to be scarce and terrible governance common. Yet there is probably a lot of academic theorizing on governance institutions to draw on, and a single excellent government based on scalable principles might have influence beyond its own state.
  • The world is hiding, immobilized and wasted by a raging pandemic.

It’s a draft. What should I add? (If, in life, you’ve chosen among ways to improve the world, is there a simple story within which your choices make particular sense?)

The above narratives seem to be extremely focused on a tiny part of narrative-space, and it's actually a fairly good representation of what makes LessWrong a memetic tribe. I will try to give some examples of narratives that are... fundamentally different, from the outside view; or weird and stupid, from the inside view. (I'll also try to do some translation between conceptual frameworks.) Some of these narratives you already know - just look around the political spectrum, and notice what narratives people live in. There are also some narratives I find better than useless:

  1. Karma. Terrible parents will likely have children who can't reach their full potential and can't help the world, and who will themselves go on to become terrible parents. Those who were abused by the powerful will go on abusing their power wherever and whenever they have any. Etc. Your role is to "neutralize the karma", to break the part of the cycle that operates through you: don't become a terrible parent yourself, don't abuse your power, etc., even though you were on the receiving end.
  2. The world is on the verge of collapse because the power of humanity through technology has risen faster than our wisdom to handle it. You have to seek wisdom, not more power.
  3. The world is run by institutions that are run by unconscious people (i.e. people who aren't fully aware of how their contribution as a cog to a complex machine affects the world). Most problems in the world are caused by the ignorant operation of these institutions. You have to elevate people's consciousness to solve this problem.
  4. Humans and humanity are evolving through stages of development (according to something like integral theory). Your role is to reach the higher stages of development in your life, and to help your environment do likewise.
  5. History is just life unfolding. Your job isn't to plan the whole process, just as the job of a single neuron isn't to do the whole computation. The best thing you can do is just to live in alignment with your true self, and let life unfold as it has to, whatever the consequences (just as a neuron doing anything other than firing according to its "programming" is simply adding noise to the system).
  6. Profit (Moloch) has overtaken culture (i.e. the way people's minds are programmed). The purpose of profit (i.e. the utility function of Moloch that can be reconstructed from its actions) isn't human well-being or survival of civilization, so the actions of people (which are a manifestation of culture) won't move the world toward these goals. Your role is to raise awareness, to help reclaim culture from the hands of profit, and to put a human in the driver's seat again (i.e. realign the optimization process by which culture is generated so that the resulting culture is aligned with human values).
  7. Western civilization is at the end of its lifecycle. This civilization has to collapse, to make way for a new one that relates to it in the same way Western civilization relates to fallen Rome. Your role isn't to prevent the collapse, but to start creating the first building blocks which will form the basis for the new civilization.
  8. The world is on the brink of a context switch (i.e. the world will move to a formerly inaccessible region of phase space - or has already done so). Your models of the world are optimized for the current context, and therefore they are going to be useless in the new context (no training data in that region of the phase space). So you can't comprehend the future by trying to think in terms of models; instead you have to reconnect with the process that generated those models. Your role is to be prepared for the context switch, and to try to mess things up as little as possible, though some of that is inevitable.
  9. Reality (i.e. the "linear mapping" you use to project the world's phase space to a lower-dimensional conceptual space through your perception and sensemaking) is an illusion (i.e. has in its kernel everything that actually matters). Your role is to realize that (and after that your role will be clear to you).
  10. The world is too complex for any individual to understand. Your role is to be part of a collective sensemaking through openness and dialog that has the potential to collectively understand the world and provide actionable information. (In other words, there is no narrative simple enough for you to understand but complex enough to tackle the world's challenges.)
  11. The grand narrative you have to live your life by changes depending on your circumstances, just like it depends on where you are whether you have to turn left or right. Your role is to learn to understand and evaluate the utility of narratives, and to increase your capacity to do so.

This list is by no means comprehensive, but this is taking way too much time, so I'll stop now, lest it should become a babble challenge.

I don't understand your post. Why are memetic tribes relevant to the discussion of potential existential risks, which are the basis of the original post? Is your argument that all communities have some sort of shared existential threat that is contradictory to the other existential threats of other communities? It seems to me the point of a rationalist community should be to find the greatest existential threats and focus on finding solutions.

The basis of the original post isn't existential threats, but narratives - ways of organizing the exponential complexity of all the events in the world into a comparatively simple story-like structure.

Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take

Memetic tribes are only tangentially relevant here. I didn't really intend to present any argument, just a set of narratives present in some other communities you probably haven't encountered.

Strongly disagree about the "great filter" point.

Any sane understanding of our prior on how many alien civilizations we should have expected to see is structured (or at least has much of its structure) more or less like the Drake equation: a series of terms, each with more or less prior uncertainty around it, that multiply together to get an outcome. Furthermore, that point is, to some degree, fractal; the terms themselves can be — often and substantially, though not always and completely — understood as the products of sub-terms.

By the Central Limit Theorem, as the number of such terms and sub-terms increases, this prior approaches a log-normal distribution; that is, if you take the inverse (proportional to the amount of work we'd expect to have to do to find the first extraterrestrial civilization), the mean is much higher than the median, dominated by a long upper tail. That point applies not just to the prior, but to the posterior after conditioning on evidence. (In fact, as we come to have less uncertainty about the basic structure of the Drake-type equation — which terms it comprises, even though we may still have substantial uncertainty about the values of those terms — the argument that the posterior must be approximately log-normal only grows stronger than it was for the prior.)
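
To illustrate, here is a quick Monte Carlo sketch; the log-uniform priors are made-up stand-ins for the real Drake-type terms, purely an assumption for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Drake-style product of K independent terms, each
# uncertain over ~4 orders of magnitude (log-uniform). These priors
# are illustrative assumptions only, not estimates of the real terms.
K = 7
log10_n = rng.uniform(-2, 2, size=(1_000_000, K)).sum(axis=1)
n_civs = 10.0 ** log10_n  # approximately log-normal, by the CLT on the logs

# "Work to find the first civilization" scales like 1/N. For a
# log-normal, the mean sits far above the median, in the upper tail.
work = 1.0 / n_civs
print(f"median work: {np.median(work):.3g}")
print(f"mean work:   {np.mean(work):.3g}")  # many orders of magnitude larger
```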

In this situation, given the substantial initial uncertainty about the value of the terms associated with steps that have already happened, the evidence we can draw from the Great Silence about any steps in the future is very, very weak.

As a statistics PhD, experienced professionally with Bayesian inference, my confidence on the above is pretty high. That is, I would be willing to bet on this at basically any odds, as long as the potential payoff was high enough to compensate me for the time it would take to do due diligence on the bet (that is, make sure I wasn't going to get "cider in my ear", as Sky Masterson says). That's not to say that I'd bet strongly against any future "Great Filter"; I'd just bet strongly against the idea that a sufficiently well-informed observer would conclude, post-hoc, that the bullet point above about the "great filter" was at all well-justified based on the evidence implicitly cited.

True, the typical argument for the great silence implying a late filter is weak, because an early filter is not all that a priori implausible. 

However, the OP (Katja Grace) specifically mentioned "anthropic reasoning".

As she previously pointed out, an early filter makes our present existence much less probable than a late filter. So, given our current experience, we should weight the probability of a late filter much higher than the prior would be without anthropic considerations.

Thanks for pointing that out. My arguments above do not apply.

I'm still skeptical. I buy anthropic reasoning as valid in cases where we share an observation across subjects and time (e.g., "we live on a planet orbiting a G2V-type star", "we inhabit a universe that appears to run on quantum mechanics"), but not in cases where each observation is unique (e.g., "it's the year 2021, and there have been about 107,123,456,789 (plus or minus a lot) people like me ever"). I am far less confident of this than I stated for the arguments above, but I'm still reasonably confident, and my expertise does still apply (I've thought about it more than just what you see here).

This could mean you would also have to reject thirding in the famous Sleeping Beauty problem. Which contradicts a straightforward frequentist interpretation of the setup: If the SB experiment was repeated many times, one third of the awakenings would be Monday Heads, so if SB was guessing after awakening "the coin came up heads" she would be right with frequentist probability 1/3.
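
For concreteness, a minimal simulation of that frequentist reading (generic illustrative code, nothing beyond the standard setup):

```python
import random

random.seed(0)
awakenings = []  # one (day, coin) entry per awakening, across many runs
for _ in range(100_000):
    if random.random() < 0.5:          # heads: SB wakes on Monday only
        awakenings.append(("Mon", "heads"))
    else:                              # tails: SB wakes on Monday and Tuesday
        awakenings.append(("Mon", "tails"))
        awakenings.append(("Tue", "tails"))

heads_share = sum(c == "heads" for _, c in awakenings) / len(awakenings)
print(f"share of awakenings where the coin was heads: {heads_share:.3f}")  # ~1/3
```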

Of course there are possible responses to this. My point is just that: rejecting Katja's doomsday argument by rejecting SIA style anthropic reasoning may come with implausible consequences in other areas.

You’re giving us a convenient set of narratives, and asking us to explain our major life decisions in terms of them.

I think a better question is whether any of these narratives, or some combination of them, are the primary conscious reasons for any of our major life decisions. Also, to what degree the narratives as stated accurately and unambiguously match the versions we believe ourselves. Also, which narratives we think are important or irrelevant.

Otherwise, you get a dynamic where the inconvenience of making these distinctions and providing nuance gives the impression that people actually believe this stuff as stated.

You get a few points of “supporting evidence” per respondent, but no “negative evidence”, since you’re not asking for it. It starts to look like every narrative has at least a few smart people taking it really seriously, so we should take them all seriously. As opposed to every theory having the majority of smart people not taking it seriously.

Then of course you’re targeting this question to a forum where we all know there’s a higher proportion of people who DO take these things seriously than in the general population of Smart People, so you’re also cherry picking.

I don’t know what you’re planning on doing with the responses you get, but I hope you’ll take these issues into consideration.

Not saying I endorse these fully, certainly not to the extent of them being the "whole plot" and making other considerations irrelevant, but I think they both contain enough of a kernel of truth to be worth mentioning:


1) While not quite an existential threat, climate change seems poised to cause death and suffering on a considerable, perhaps unprecedented, scale within this century, and will likely also act as a "badness multiplier", making pre-existing issues like disease, political instability and international conflicts worse. Absent technological advances to offset these problems, the destruction of arable land and increasing scarcity of drinking water will likely increase zero-sum competition and make mutually beneficial cooperation more difficult.

2) More speculatively: due to the interconnectedness of the modern world, our increased technological capabilities, and the sheer speed of technological, cultural and political change, the world is becoming more complex in a way that makes it increasingly hard to accurately understand and act rationally in - the "causal graphs" in which people are embedded are becoming both larger and denser (both upstream and downstream of any one actor), and unpredictable, non-linear interaction between distant nodes more often have an outsized effect - black swans are becoming both larger and more common. The central plot is that everybody has lost the plot, and we might not be cognitively equipped to recover it.

Trump remains popular "in spite of it all" because those who despise his supporters refuse to understand why they support him. Why refuse? Equal measures of spite. To better understand, change the word "partner" to "political enemy" in your comment guideline "If you disagree, try getting curious about what your partner is thinking".

You could make that less parochial by rephrasing to something like:

There is a worldwide rise in nationalism and populism and corresponding rejection of globalism, leading to worse leaders. This rise is poorly understood by elites, which lessens hope that this trend is going away soon.

This rise is poorly understood by elites

The incentives for elites seem to prevent understanding. By understanding non-elites (or being known to), you lose your elite status.

A king is allowed to express a certain amount of empathy with starving peasants, and he still remains king. For political elites, expressing empathy with their opponents would probably be career suicide. (Instead, the proper form of "understanding" your opponents is to attack strawmen of them.)

The world is controlled by governments, and really awesome governance seems to be scarce and terrible governance common

Or... liberal democracy has spread, as other systems have failed. But maybe liberal democracy isn't good enough to count as really awesome.

Though I've posted 3 more-or-less-strong disagreements with this list, I don't want to give the impression that I think it has no merit. Most specifically: I strongly agree that "Institutions could be way better across the board", and I've decided to devote much of my spare cognitive and physical resources to gaining a better handle on that question specifically in regards to democracy and voting.

Maybe something about the collapse of sensemaking and the ability of people to build a shared understanding of what's going on, partly due to rapid changes in communications technology transforming the memetic landscape?

what if it's just raw size?

Raw size feels like part of the story, yeah, but my guess is increased communications leading to more rapid selection for memes which are sticky is also a notable factor.

I don't think the universe is obliged to follow any "high level narratives". I'm afraid I don't understand how thinking of events in these terms is helpful.

These narratives are frameworks, or models. There's the famous saying that all models are wrong, but some are useful. Here, the narratives take the complex world and try to simplify it by essentially factoring out "what matters". Insofar as such models are correct or useful, they can aid in decision-making, e.g. for career choice, prioritisation, etc.

Even Less Wrong itself was founded on such a narrative, one developed over many years. Here's EY's current Twitter bio, for instance:

Ours is the era of inadequate AI alignment theory. Any other facts about this era are relatively unimportant, but sometimes I tweet about them anyway.

Similarly, a political science professor or historian might conclude a narrative about trends in Western democracies, or something. And the narrative that "Everyone is going to die, the way things stand." (from aging, if nothing else) is as simple as it is underappreciated by the general public. If we took it remotely seriously, we would use our resources differently.

Finally, another use of the narratives in the OP is to provide a contrast to ubiquitous but wrong narratives, e.g. the doomed middle-class narrative that endless one-upmanship towards neighbors and colleagues will somehow make one happy.

A few other narratives:

If reactor-grade plutonium could be used to make nuclear weapons, there is enough material in the world to make a million nukes, and it is dispersed among many actors.

Only Arctic methane eruption matters, as it could trigger runaway global warming.

Only peak oil matters, and in the next 10 years we will see shortages of it and other raw materials.

Only coronavirus mutations matter, as they could become more deadly.

Only reports about UFOs matter, as they imply that our world model is significantly wrong.

Here's mine: a large portion of the things that matter most in human life, including particularly most of the ways of life we originally evolved for, are swiftly becoming rare luxuries throughout the West, driven primarily by liberalism (which otherwise has produced many positives). Examples:

  1. embeddedness in a small tribe where everyone knows everyone else
  2. the expectation of having a loving mate and healthy family
  3. spiritual connection with a symbolically rich world of mythology (which need not be materially "real" in order to be valuable)
  4. veneration for the ancestors and the mighty dead, with recognition of oneself as a continuation of their being and as indebted to them
  5. a sufficiently simple local reality that it can be modeled, understood, and predicted without information overload
  6. emotional connection with nonhuman organisms, ecosystems, and the land in a web of respectful, honorable give and take
  7. capacity for self-reliance and individual responsibility for survival and flourishing
  8. a clear and unambiguous system of social roles on the basis of age, gender, lineage, etc, which is seen as legitimate by all

The reason I see the loss of these things as a terrible part of the "central plot" is that they are for the most part ignored, yet deeply important, aspects of what it means to be human, which we are in danger of permanently losing even if ALL those other problems are solved. If people forget where we came from, and wholesale let go of the past and traditional values in favor of "progress" for its own sake, I think it will be a net loss regardless of how happy the abhuman things that we become will be. And the evidence is in my favor that these problems are making people miserable - just look at conservatives, who are still trying to hold on to these aspects of being human and who see them threatened from every direction.

Third, separate disagreement: This list states that "vastly more is at stake in [existential risks] than in anything else going on". This seems to reflect a model in which "everything else going on" — including power struggles whose overt stakes are much much lower — does not substantially or predictably causally impact outcomes of existential risk questions. I think I disagree with that model, though my confidence in this is far, far less than for the other two disagreements I've posted.

Almost all of these could have been said 50 years ago with no change, or only minor ones (e.g. Trump to Nixon), and with pretty much the same emphasis. Even those that could not (e.g. the pandemic) could easily be replaced with other things similar in nature and absolute outcome (famine in China, massive limitations on mobility and other freedoms in the Eastern Bloc, etc.).

Even 100 years ago you could make similar cases for most things (except AI, which is a newer concept, though there may have been analogous issues at the time, carrying the same hopes, that I am not aware of).

Yet, here we are, better off than before. Was this the expected outcome?

I think it would have been way less popular to say "Western Civilization is declining on the scale of half a century"; I think they were clearly much better off than in 1920. I think they could have told stories about moral decline, or viewed the West as not rising to the challenge of the Cold War or so on, but they would have been comparing themselves to the last 20-30 years instead of the last 60-100 years.

Separate point: I also strongly disagree with the idea that "there's a strong chance we live in a simulation". Any such simulation must be either:

  • fully-quantum, in which case it would require the simulating hardware to be at least as massive as the simulated matter, and probably orders of magnitude more massive. The log-odds of being inside such a simulation must therefore be negative by at least those orders of magnitude.
  • not-fully-quantum, in which case the quantum branching factor per time interval is many many many orders of magnitude less than that of an unsimulated reality. In this case, the log-odds of being inside such a simulation would be very very very negative.
  • based on some substrate governed by physics whose "computational branching power" is even greater than quantum mechanics, in which case we should anthropically expect to live in that simulator's world and not this simulated one.

Unlike my separate point about the great filter, I can claim no special expertise on this; though both my parents have PhDs in physics, I couldn't even write the Dirac equation without looking it up (though, given a week to work through things, I could probably do a passable job reconstructing Shor's algorithm with nothing more than access to Wikipedia articles on non-quantum FFT). Still, I'm decently confident about this point, too.

As someone who mostly expects to be in a simulation, this is the clearest and most plausible anti-simulation-hypothesis argument I've seen, thanks.

How does it hold up against the point that the universe looks large enough to support a large number of even fully-quantum single-world simulations (with a low-resolution approximation of the rest of reality), even if it costs many orders of magnitude more resources to run them?

Perhaps would-be simulators would tend not to value the extra information from full-quantum simulations enough to build many or even any of them? My guess is that many purposes for simulations would want to explore a bunch of the possibility tree, but depending on how costly very large quantum computers are to mature civilizations maybe they'd just get by with a bunch of low-branching factor simulations instead?

I think both your question and self-response are pertinent. I have nothing to add to either, save a personal intuition that large-scale fully-quantum simulators are probably highly impractical. (I have no particular opinion about partially-quantum simulators — even possibly using quantum subcomponents larger than today's computers — but they wouldn't change the substance of my not-in-a-sim argument.)

hm, that intuition seems plausible.

The other point that comes to mind is that if you have a classical simulation running on a quantum world, maybe that counts as branching for the purposes of where we expect to find ourselves? I'm still somewhat confused about whether exact duplicates 'count', but if they do then maybe the branching factor of the underlying reality carries over to sims running further down the stack?

It seems to me that exact duplicate timelines don't "count", but duplicates that split and/or rejoin do. YMMV.

I don't think the branching factor of the simulation matters, since the weight of each individual branch decreases as the number of branches increases. The Born measure is conserved by branching.

This is certainly a cogent counterargument. Either side of this debate relies on a theory of "measure of consciousness" that is, as far as I can tell, not obviously self-contradictory. We won't work out the details here.

In other words: this is a point on which I think we can respectfully agree to disagree.

Fair, although I do think your theory might be ultimately self-contradictory ;)

Instead of arguing that here, I'll link an identical argument I had somewhere else and let you judge if I was persuasive.

I don't think the point you were arguing against is the same as the one I'm making here, though I understand why you think so.

My understanding of your model is that, simplifying relativistic issues so that "simultaneous" has a single unambiguous meaning, total measure across quantum branches of a simultaneous time slice is preserved; and your argument is that, otherwise, we'd have to assign equal measure to each unique moment of consciousness, which would lead to ridiculous "Boltzmann brain" scenarios. I'd agree that your argument is convincing that different simultaneous branches have different weight according to the rules of QM, but that does not at all imply that total weight across branches is constant across time.

The argument I made there was that we should consider observer-moments to be 'real' according to their Hilbert measure, since that is what we use to predict our own sense-experiences. This does imply that observer-weight will be preserved over time, since unitary evolution preserves the measure (as you say, this also proves it is conserved by splitting into branches, since you can consider that to be projecting onto different subspaces).
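
Spelling that out, a sketch of the standard textbook steps (U is the time-evolution operator and the P_i are branch projectors; the notation is mine):

```latex
% Unitary evolution preserves total Born measure:
\|U\psi\|^{2} = \langle U\psi,\, U\psi\rangle
             = \langle \psi,\, U^{\dagger}U\,\psi\rangle
             = \|\psi\|^{2}.
% Splitting into orthogonal branches (projectors P_i with \sum_i P_i = I
% and P_i P_j = \delta_{ij} P_i) conserves the same total:
\|\psi\|^{2} = \Bigl\|\sum_i P_i\,\psi\Bigr\|^{2} = \sum_i \|P_i\,\psi\|^{2},
% so the branch weights sum to the weight of the parent state.
```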

Even without unitarity, you shouldn't expect the total amount of observer-weight to increase exponentially in time, since that would cause the total amount of observer-weight to diverge, giving undefined predictions.

Our sense-experiences are "unitary" (in some sense which I hope we can agree on without defining rigorously), so of course we use unitary measure to predict them. Branching worlds are not unitary in that sense, so carrying over unitarity from the former to the latter seems an entirely arbitrary assumption.

A finite number (say, the number of particles in the known universe), raised to a finite number (say, the number of Planck time intervals before dark energy tears the universe apart), gives a finite number. No need for divergence. (I think both of those are severe overestimates for the actual possible branching, but they are reasonable as handwavy demonstrations of the existence of finite upper bounds)

Ah, by 'unitary' I mean a unitary operator, that is an operator which preserves the Hilbert measure. It's an axiom of quantum mechanics that time evolution is represented by a unitary operator.

Fair point about the probable finitude of time (but wouldn't it be better if our theory could handle the possibility of infinite time as well?)

Bold claims about objective reality, unbacked by evidence, seemingly not very useful or interesting (though this is subjective), and backed by appeals to tribal values (declaring AI alignment a core issue, assuming blue-tribe neoliberalism as the status quo, etc.).

This seems to go against the kind of thing I assumed could ever make it onto LessWrong sans-covid, yet here it is, heavily upvoted.

Is my take here overly critical ?

[This comment is no longer endorsed by its author]

I think it is. The post is not intended to be a list of things the author believes, but rather a collection of high-level narratives that various people use when thinking about the impact of their decisions on the world. As such, you wouldn't really expect extensive evidence supporting them, since the post isn't trying to claim that they are correct.

Interesting that about half of these "narratives" or "worldviews" are suffixed with "-ism": Malthusianism, Marxism, Georgism, effective altruism, transhumanism. But most of the (newer and less popular) rationalist narratives haven't yet been so named. This would be one heuristic for finding other worldviews.

More generally, if you want people to know and contrast a lot of these worldviews, it'd be useful to name them all in 1-2 words each.

This list is pretty relevant too: http://culturaltribes.org/home

This list seems to largely exclude positive narratives. What about the Steven Pinker/ Optimists narrative that the world is basically getting better all the time?

Perhaps to see the true high level narrative, we should focus on science, technology and prosperity, only considering politics in so far as it changes the direction of their long term trends.

I enjoyed this take https://www.roote.co/wisdom-age

Seems like a number of the items fall under a common theme/area. I wonder whether focusing on them separately is best, rather than seeing them as different contexts/representations of a common underlying source.

Basically, all the bits about failing governments, societies/cultures/institutions seem to be a rejection of the old "Private Vices, Public Virtues" idea and Smith's Invisible Hand metaphor. So perhaps the question is what has changed to make those types of superior outcomes, arising from individual actions that never aimed at such results, no longer as reliable.

Is there a common gear that is now broken or are these all really independent issues?

There is a strong chance that we live in a simulation

Is there a version of simulation theory that is falsifiable?

Uncontrolled population growth in Africa, India, the Middle East, and other developing countries

Narrative – a story that is put above facts, logic, and evidence.