How to assess the accuracy of your belief system [Frameworks]
Extreme views on any topic seem to be becoming more popular. News outlets are getting increasingly polarized, and I can’t remember the last time I read a balanced news report exploring multiple perspectives.
In a world of short attention spans, this kind of shock reporting, or simply giving information consumers exactly what they want to hear, makes sense, especially if the problem you're solving for as a content creator is eyeballs. "Truth" doesn't rank as high on a creator's list of priorities, and I don't think this prioritization is likely to change soon.
This is true not just of politics and religion, domains that have long been a haven for extreme viewpoints. Digital media has enabled propaganda and extreme views to flourish like never before, in topics as diverse as healthcare, ecological conservation and even current news.
While good for marketability and capturing attention, extreme viewpoints are hard to deal with as a truth-seeking consumer, especially on topics one may be unfamiliar with. This article is an attempt to understand how different opinions constitute a spectrum, and the nature of these spectrums. The aim is a roadmap for navigating toward the "truth".
We often see extreme opinions on a given domain as lying on ends of a spectrum.
Furthermore, I am betting I’m not alone in thinking (especially in domains I am unfamiliar with) that the “truth” lies somewhere in the middle of this line. In pictorial form:
Unfortunately, I’ve long suspected that the view above was incorrect. I see two main problems with the view of spectrums as described above.
Problem 1: The two ends of a spectrum in a domain often seem to lead to extremely similar outcomes. For example, consider the domain of the meaning of life/"truth":
History has shown that people on either end of the spectrum above may hold different beliefs, but if either group were in charge of society, the end result would be the same. In the example above, that result is an extremely repressive state where non-believers are killed.
The same is true even for socio-political systems of governance:
We have seen instances of extreme collectivism in Stalin's Russia, where the "rich" (a fluid and arbitrary definition that eventually expanded to just about everyone) were persecuted while a class of bureaucrats and rulers thrived. On the other end of the spectrum, extreme individualism leads to kings or warlords whose benefit trumps everyone else's.
The outcome, on both sides of the spectrum, is a system where the masses are exploited or live in fear & a small minority rule without checks and balances.
It has always been fascinating to me that extreme ends of spectrums often produce identical results. This hints at some structural similarities that are worth investigating.
Problem 2: In certain cases, especially ones where there is a lot of data backing one side, the logic of “truth lies in the middle” starts to fall apart. For example, flat earthers vs scientists. It seems unfair to give flat earthers the same credibility as scientists.
The goal of this article is to create an internally consistent framework that can help resolve the problems above.
This has the added benefit of helping us judge where the "truth" likely lies within an unfamiliar domain of knowledge.
Types of domains
Let’s first categorize domains of knowledge/opinion into three broad categories: truly subjective, questionably objective and truly objective.
An example of truly subjective is any domain where there is likely no "truth" to begin with: for example, "the deliciousness of Chinese food" or "the beauty of a sunset". There is no definite proof that will allow us to say that Chinese food is indeed delicious, or that a sunset truly is beautiful.
These domains exist because people have a lot of opinions on them that are innately felt (vs. logically arrived at). They are not the topic of our discussion, because no one searches for truth in them to begin with. A spectrum in such a domain truly is a line, with the "truth" distributed equally all along it. In other words, there is no "truth".
Next are questionably objective domains. These are probably the most interesting and polarizing topics out there: domains for which a truth may exist, but it is impossible to find out what it is. What's more, all of us are convinced that a truth does exist.
Religion and politics stand out as exceptionally powerful questionably objective domains. Recently, they've been joined by other topics such as human rights, fairness, etc.
Finally, we have truly objective domains: domains for which an answer exists, but may still be difficult to get to. For example, "How many grains of sand are there on Earth?" or "What is the shape of the universe?"
While answers are very difficult to get to, you can at least say with certainty that the domain does have a “truth” that exists in the real world.
In this article we will deal with objective domains — both questionably and truly objective.
Knowing which category the domain you are interested in falls into is very useful in understanding where the likely truth will be.
Frameworks and “truth”
The world continues to be more bizarre than we could imagine. I doubt this will stop before we go extinct.
In the absence of a guaranteed understanding of a domain, humanity's best option is to create frameworks that help APPROXIMATE the "truth".
In fact, ALL knowledge we possess can be said to be based on frameworks that approximate truth. Some of these frameworks are exceptionally reliable (e.g., the theory of gravity) whereas others are less so (e.g., psychology).
But what do we mean by reliable? The best available proxy for reliability, one that scientists often use, is to measure how accurate framework-based predictions are over time. In other words, a framework can be said to be very reliable if it answers "yes" to the following questions:
- Can the framework be used to predict what will happen?
- Are these predictions testable?
- Do the tests show accuracy?
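To make the reliability test concrete, here is a minimal Python sketch. The prediction data is entirely invented, and scoring a framework by the fraction of its testable predictions that turned out accurate is my own simplification, not a standard measure:

```python
# Toy reliability score: the fraction of a framework's testable
# predictions that turned out to be accurate. All data is hypothetical.

def reliability(predictions):
    """predictions: list of (is_testable, was_accurate) boolean pairs."""
    testable = [accurate for is_testable, accurate in predictions if is_testable]
    if not testable:
        return 0.0  # a framework that risks no testable claims earns no reliability
    return sum(testable) / len(testable)

# Gravity-like framework: many testable predictions, nearly all accurate.
gravity = [(True, True)] * 98 + [(True, False)] * 2
# Psychology-like framework: fewer testable predictions, more misses,
# plus untestable claims that don't count either way.
psychology = [(True, True)] * 6 + [(True, False)] * 4 + [(False, False)] * 10

print(reliability(gravity))     # 0.98
print(reliability(psychology))  # 0.6
```

The untestable claims deliberately contribute nothing to the score; they matter later, when we measure how far a framework overreaches.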
For each domain, there is a theoretical but constrained number of accurate predictions that one could make if one knew the truth. This number is a function of the domain itself. In other words, it's a function of the "truth" and not of the means of arriving at the truth.
For example, within the domain of psychology, there is a theoretical number of testable predictions we could make. This constraint is based on human behaviour and not necessarily a specific psychological theory. For example, any framework of psychology cannot be used to predict the height of a child or the color of her hair. If a psychology theory starts to make predictions about the color of hair then we can reasonably assume that the theory is over-reaching.
Frameworks: a visual representation
We start our discussion of spectrums in objective domains not with a line, but a cone instead. I propose that we assign a cone for each area of knowledge (from religion to geometry). Allow me to explain what the cone represents.
The top of the cone (apex) represents perfect understanding of the domain.
Next, let’s focus on the spine of the cone. This is the line along which different frameworks that approximate reality lie.
Moving down along the spine of the cone, we encounter circles of increasing area — each associated with a unique framework.
Each circle represents the difference between (a) the set of total predictions made by the framework and (b) the set of accurate & testable predictions that could be made (if the truth were known).
i.e., Size of circle = | total # of predictions implied (framework-led) − testable & accurate predictions possible (reality-led) |
As you move along the spine, the size of each circle continues to increase. This means that either:
1. the number of predictions made by the framework is significantly greater than the accurate & testable predictions that could be made in the domain
2. the framework is not making any claims at all, even when certain testable claims could be made
One can argue that increasing circles make sense: a framework that does either of the two above (makes predictions it has no business making, OR doesn't make predictions that could be made) is likely to be further away from the apex / "true understanding".
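One way to formalize the circle, an assumption of mine rather than anything rigorous, is to treat the framework's claims and the domain's testable-and-accurate claims as sets. The circle is then overreach plus underreach: the symmetric difference between the two sets. A toy sketch with invented claim sets:

```python
# Sketch of the circle-size idea: circle = overreach + underreach,
# i.e. the symmetric difference between what a framework predicts and
# what could accurately be predicted in the domain. Sets are invented.

def circle_size(framework_claims, domain_truths):
    overreach = framework_claims - domain_truths   # claims it has no business making
    underreach = domain_truths - framework_claims  # claims it fails to make
    return len(overreach) + len(underreach)

domain_truths = {"t1", "t2", "t3", "t4"}       # testable & accurate claims possible

overreaching = {"t1", "t2", "u1", "u2", "u3"}  # also makes untestable claims u1..u3
silent = {"t1"}                                # makes almost no claims at all

print(circle_size(overreaching, domain_truths))  # 5  (3 extra + 2 missing)
print(circle_size(silent, domain_truths))        # 3  (0 extra + 3 missing)
```

Both failure modes from the list above inflate the circle: the overreaching framework pays for its untestable claims, and the silent one pays for the testable claims it never makes.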
In our example, it is also possible for two different frameworks to lead to similar sized circles.
This could happen due to two reasons:
1. The frameworks are structurally similar and differ only in labelling. For example, consider a Christian fundamentalist and an Islamist fundamentalist. They may have different frameworks, but their predictions are quite similar to each other.
2. Another way different frameworks could lead to the same circle size: Framework A makes a lot more untestable claims than the testable claims possible -> big circle. Framework B makes no claims at all, even when testable claims are possible -> big (negative) circle.
From frameworks to spectrums
I have talked a lot about the nature of cones because I believe it is the primary determinant of the spectrum in each domain of knowledge.
Consider this: differing opinions in an objective domain are only possible if there are competing frameworks approximating reality. An alternative way to view a spectrum would be as a catalogue of frameworks.
In other words, a spectrum can be said to be a catalogue of the circles associated with the different frameworks in a domain.
The right end of the spectrum is driven by an increasingly better understanding of the domain. As our understanding improves, newer frameworks are born, ones whose associated circles get smaller and smaller. This means the claims made by a framework start overlapping strongly with the testable & accurate claims possible in the domain.
This definition of spectrum suggests that rather than "truth lies in the middle", it is more likely that "truth is always right" (my attempt at a catchphrase), sitting with the frameworks that make the fewest questionable predictions.
Let’s examine how this definition of spectrum can help us navigate questionably objective and truly objective domains.
Questionably objective domains
As mentioned, these are domains where any understanding of the “truth” will always lie far outside humanity’s abilities.
Religion or political beliefs are a good example of this type of domain.
Spectrums in these domains don’t look like this:
But rather like this:
This way of thinking about spectrums leads to some interesting take-aways:
1. It helps resolve why seemingly different approaches sometimes lead to the same outcomes. This is most likely because the different frameworks share a similar set of untestable predictions amongst themselves.
In our example, the Christian and Islamist fundamentalists lie at the same end of the spectrum, as their circles are likely to be very similar.
2. Atheists, whom many may consider the opposite of religious fundamentalists, probably sit somewhere in the middle of the spectrum; they do not define its end. This is because they, too, make definitive predictions about the nature of reality (i.e., that no God exists).
The second point in particular is important enough to repeat: in questionably objective domains, where the "truth" will always be out of reach and accurate, testable predictions a pipe dream, the right end of the spectrum ("truth") will be defined by a lack of opinions/predictions rather than by the opinions themselves.
This is why Agnosticism defines the end of the spectrum.
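The agnosticism point can be illustrated with the same toy circle-size model, again with invented claim sets: in a questionably objective domain no testable & accurate predictions are possible, so the "truth set" is empty and a framework's circle is simply the number of claims it makes. The framework making no claims ends up rightmost:

```python
# Toy model of a questionably objective domain: the set of testable &
# accurate claims is empty, so circle size = number of claims made.
# Framework names and claim counts are purely illustrative.

def circle_size(claims, truths):
    return len(claims ^ truths)  # symmetric difference of the two sets

truths = set()  # nothing is testable in this domain

catalogue = {
    "fundamentalist A": {"u1", "u2", "u3", "u4"},  # many untestable claims
    "fundamentalist B": {"u5", "u6", "u7", "u8"},  # different labels, same shape
    "atheism":          {"u9"},                    # one definitive claim: no God
    "agnosticism":      set(),                     # makes no claims at all
}

# Sort left-to-right: biggest circles first, smallest (the "truth" end) last.
spectrum = sorted(catalogue, key=lambda f: circle_size(catalogue[f], truths),
                  reverse=True)
print(spectrum)
```

Note that the two fundamentalists tie for the left end (their circles are the same size despite different labels), while agnosticism, with an empty claim set, defines the right end.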
BUT I don’t think a focus on "truth" is a priority in these domains. Rather, we must acknowledge that our opinions are in service to an ideal we consider (perhaps irrationally) to be worth being irrational about.
Truly objective domains
In domains where there likely is a true answer (even if it’s out of our reach), the mantra “truth is always right” can help us understand which framework is likely to be more correct. You can even use this diagram:
The left end is dominated by frameworks that do one of two things:
1. make more predictions than are theoretically testable in the domain
2. make fewer testable predictions than are theoretically possible
Whereas the right side (the "truth" side) seeks balance and makes only the testable predictions that are theoretically possible.
This is not to say that the framework that represents the smallest circle of untestable predictions is the final truth. Just that the likelihood of it being a good approximation of reality is higher.
Let’s revisit the flat-earth debate to see where the truth might lie. Here is what I believe the illustration above would look like:
The irony that my own framework (above) leads to a set of untestable or inaccurate predictions is not lost on me.
Furthermore, it is also unavoidable. The real evolution of this theory will be determined by whether other frameworks emerge with smaller circles associated with them.