Friday, October 6, 2017

The subjective nature of truth

As a scientist, I've spent a lot of time thinking about how to convey scientific facts to those outside the scientific community. As a concerned U.S. citizen over the last nine months, I've spent time trying to figure out what the political facts of this world are and how we come to trust them. As an educator, I'm particularly interested in how we teach our children to distinguish true facts from false ones.

I think I've learned two simple concepts about fact-finding that I want to share. First, as humans, we don't trust facts as being true because the evidence is credible, but rather because we believe the source of the evidence is credible. Second, our default mode as humans is to be extremely subjective about how we process information about the world.

I remember being taught about this subject in school. We learned how to differentiate facts from opinions. This is an important skill to develop, and although some people still lack it, the bigger problem is that this skill on its own is insufficient in our modern world.

Take the OED definition of "fact": "A thing that is known or proved to be true." In other words, a fact is by nature true. The definition from Wikipedia is perhaps a little better: "something that is postulated to have occurred or to be correct." In other words, Wikipedia does not assume a fact is true, but recognizes that it is expected to be true.

Here's the problem, though: what do we call something that has the sentence structure of a fact but is not true? For example, take these statements:

A) Climate Scientists agree that climate change is happening.
B) Climate Scientists disagree about whether climate change is happening.

What we tend to say (depending on our worldview) is that A is a fact, but B is not. This can be confusing, though, because both have exactly the same structure; neither is an opinion. But because being a fact is tied by definition to being true, only one of these can be a fact. What happens, then, when two different organizations (say, the scientific community and the Heartland Institute) make differing claims about which one is "a fact"?

The disagreement clearly comes from whether the statement is true or not, rather than from whether the statement is structured like a fact or an opinion. And many spreaders of false information have learned that it's quite easy to structure false information as a fact, and to invent false evidence that on a superficial level meets the criteria needed to support the fact. This clearly means our fact-vs.-opinion skills are insufficient to help us in this situation. I think it might be more useful to declare both of these statements facts, but one a true fact and one a false fact. Of course, this technique has been adopted to try to distinguish between "real news" and "fake news", to little success: both sides simply claim that theirs is the "real news" and the other's is fake. It doesn't get at the heart of the problem: which statement is true? Which authority do you trust?

This is a problem that people have studied. Take the study described in this article, which actually uses the above statements. The researchers found that many people could not identify the first statement as the true fact. They then implemented an "intervention" in which the identity and motivations of the organizations behind the statements were shared with the participants. Specifically, they told people that "Some politically-motivated groups use misleading tactics to try to convince the public that there is a lot of disagreement among scientists." After receiving this new knowledge, lo and behold, the participants were more likely to prefer the first statement, the one that is a true fact. The recommendation from the study: give participants information about the sources behind the statements, so that they can determine whom to trust, and therefore which statement is true.

But let's break down what happened here. In order to get participants to accept a certain fact as true, they were given a set of statements, a mix of facts and opinions, about the people saying the facts. It was only on the basis of this further information about the spreaders of the facts that they could judge the original information as true or false. The article called this approach "inoculation", but we typically call this approach "propaganda." Which raises the question: how do we verify that the propaganda itself was trustworthy? What if false facts are given about the spreader to convince someone to trust and accept the original fact as true?

Here's another example, an NYT article describing a study about the perceived income disparity between whites and blacks. This study found that people believed the racial income gap was smaller than it really is, and that this misperception was worst amongst wealthy white people. In other words, the study participants perceived false facts to be true. The study additionally found that asking participants to think about situations in which racial discrimination still exists today led to more accurate answers on the survey. Which, of course, is again an intervention that solves the situation through a creative form of propaganda. This solution is perhaps even worse in this case, because it is quite easy to imagine the opposite propaganda: "Think about situations today in which racial equality has been achieved. Now take this survey." It's pretty easy to imagine what kind of answers such a prompt would likely lead to. It's also curious to note that the second prompt recalls information that is no less accurate or truthful than the first.

The concerning thing about both of these examples is that propaganda is not a viable solution for spreading truth. Propaganda can be, and is, used just as effectively by false-sayers as by truth-sayers. There is no special guarantee that a truth-sayer will have more convincing propaganda than a false-sayer. In fact, the most convincing propaganda is often the one that makes a compelling emotional appeal, not the one that is more true or trustworthy. Truth, unfortunately, has little to do with creating trust, but trust has everything to do with accepting truth. Thus the two solutions suggested in these articles are insufficient; we must do better.

This is really an issue of trust. As humans, we trust people that we know and tend not to trust strangers. There's a heartwarming story of a climate change activist teacher, Mr. Sutter, who goes to Ohio to reach those who need his knowledge the most. I highly recommend reading the entire piece, but it focuses a lot on one student whom he had trouble reaching for most of the year because she plainly did not trust him. He was an outsider, a stranger, offering information that disagreed with information shared by those she did trust. Only once he gained her trust, and found a lasting connection to an issue that touched her values, was she convinced.

It's a lovely story, and I applaud the tireless work of this teacher and all those teachers who share his passion and energy. But as a sustainable, long-term solution, it suffers from the same problem as the two examples above. A false-sayer can achieve the same objectives by making connections with people who accept true facts, and convincing them that false facts are true because someone they trust says so. In fact, on this issue of climate change, there are teachers who are just as good as Mr. Sutter at connecting with impressionable minds, and just as committed as he is to spreading the opposite message about climate change.

All of the examples above can be described by a broader principle of psychological priming: giving people information or a stimulus that affects how they perceive and act in a situation. For a well-studied example, priming girls with a "stereotype threat" before they take a math test can greatly affect their scores relative to boys'. You can give girls the exact same math test, but either a) remind them of their gender, or b) remind them of other attributes they have that are generally associated with high test scores, like attending an elite college. As you might expect at this point, a) causes lower performance on the test and b) causes performance nearly equal to that of boys. In this case, presenting, or reminding, someone of a fact affects their performance on a task. It's analogous to our earlier examples: facts tangential to an issue affect someone's performance on, or viewpoint of, that issue. It's interesting to note that the truthfulness of the priming statement often has little to do with the success of the intervention; it's more about tapping into something that resonates with cultural stereotypes and someone's beliefs about themselves, which is one reason why false propaganda can be just as effective as true propaganda. This effect is not particular to girls and math tests; it has also been demonstrated with women and chess, white men and math tests, women and science, etc.

The fact of the matter is that as humans, we are by default prone to making snap judgments of situations based on information that we send through highly subjective mental processing. I suppose there might be an interesting evolutionary psychology argument to make about why the human species has evolved this capacity. But whatever the cause, this is our default: to subjectively accept or reject new information based on whether it fits with our worldview or comes from someone we trust. And if we want a sustainable solution for helping humans become better at distinguishing true facts, we need to work to override this default. And that is hard work.

In the solutions to the fake news problem that I've encountered, I don't see this skill being developed. Recently, three games were developed to teach people about fake news. Those of you who know me well know that I greatly appreciate an effective learning game. But to me these games just don't pass that bar, because they focus on the wrong part of the problem.

Take the one of the three that I found most enjoyable: Factitious, developed by the American University JoLT lab. In this game, you are given a headline, a snippet of a news article, and the article's source, and then asked to determine whether the article is real news or fake news. Here's the problem, though: most headlines are stated like facts, and many of the snippets offer evidence for those facts. But a headline being stated like a fact doesn't help you determine whether the fact is true or false, and the evidence supporting the headline doesn't help you determine whether the evidence itself is true. An argument built on false assumptions can be logically sound while reaching a false conclusion. In today's world, evaluating only the logic and not the assumptions is not an effective way to judge news as real or fake.

[Image: Sample view from Factitious]
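To make that structural point concrete, here's a minimal sketch of what a round of this kind of game boils down to, written in Python. To be clear, this is my own hypothetical reconstruction, not the actual Factitious code or data; the `Article` class, `play_round` function, and the sample item are all invented for illustration.

```python
# A hypothetical sketch of a Factitious-style round, not the real game's code.

from dataclasses import dataclass

@dataclass
class Article:
    headline: str   # stated like a fact
    snippet: str    # offers evidence for the headline
    source: str     # revealed only if the player asks
    is_real: bool   # the answer key, known only to the game

def play_round(article: Article) -> bool:
    """Show the surface features of an article and ask for a verdict."""
    print(article.headline)
    print(article.snippet)
    if input("Reveal source? (y/n): ").strip().lower() == "y":
        print("Source:", article.source)
    guess_real = input("Real or fake? (r/f): ").strip().lower() == "r"
    correct = guess_real == article.is_real
    print("Correct!" if correct else "Sorry, wrong.")
    return correct

# An invented sample item, reusing statement B from earlier in this post.
play_round(Article(
    headline="Climate scientists disagree about whether climate change is happening.",
    snippet="A report claims significant dissent among researchers.",
    source="Example Daily News",
    is_real=False,
))
```

Notice what the player never gets: any independent way to check whether the evidence itself is true. The only inputs available for the verdict are the statement's form and the source's reputation, while the truth value sits in an answer key the game keeps to itself. That is exactly the problem.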

This leaves two possibilities in the game. First, you can determine whether the information presented fits with your worldview. But this, of course, is a highly subjective process that only lets you accept confirming evidence; if your worldview is inaccurate, then your conclusion about the fact's truthfulness will be inaccurate too. Second, you can evaluate the trustworthiness of the source. But as stated above, this is a subjective process, and whether you trust a source or not is also determined by propaganda, or facts and opinions about the source itself that would need to be investigated in turn.

It's alarming to me how much of Factitious's in-game feedback comes down to "well, this headline came from this source, and that source is trustworthy, therefore you should trust this headline as real." This is essentially an "argument from authority," which is one of the more prominent logical fallacies, and it's no long-term arbiter of truth. I'd much rather see a game that emphasizes critical thinking than one that emphasizes making arguments from authority to determine truth.



The argument-from-authority problem is not specific to newspapers, though. It explains part of the role that social media has played in this problem. Until recently, we had to turn to big media as a source of authority for our news. But with the rise of social media, we can have "news" articles that are retweeted or liked millions of times. It's natural for us to take these 1 million likes as 1 million tiny stamps of approval declaring the trustworthiness of the source. I mean, how could something that's not true be forwarded by this many people? Thus, 1 million likes creates an argument from authority that often feels more compelling than the authority commanded by one measly news outlet run by a handful of people. Meaning, if we only teach people to recognize true facts by trusting authorities, we're in for increasingly hard times as the world becomes more digitally connected. And quite frankly, calling for tech companies to censor "false" content is a terrible solution, as it just sets up yet another authority (the tech company) that must be trusted to filter in only the "real" news. As Jessica Lessin put it in a counter article: "I simply don’t trust Facebook, or any one company, with the responsibility for determining what is true."

The real solution here has to be better education in scientific thinking, or critical reasoning, or one of the other names this skill commonly goes by. It has to involve overriding our default modes of subjective information processing. Unfortunately, knowing what needs to be done is different from knowing how to do it. If you have ideas, I'd love to hear them.
