As soon as Winston had dealt with each of the messages, he clipped his speakwritten corrections to the appropriate copy of ‘The Times’ and pushed them into the pneumatic tube. Then, with a movement which was as nearly as possible unconscious, he crumpled up the original message and any notes that he himself had made and dropped them into the memory hole to be devoured by the flames.
— George Orwell, 1984.
While there is a widespread belief that Wikipedia is an authoritative source of information about a topic, there is also a widespread belief that Wikipedia cannot be trusted. This contradiction is due to a fault in Wikipedia’s structure that cannot be fixed.
I have often commented that Wikipedia is a search engine for facts, but in reality it is a search engine for approved facts. As builders of an encyclopedia, Wikipedia editors are caught between the raw data about a topic, which may be true or false, and their own vision of the truth. But how are facts on Wikipedia determined to be true?
The truth is out there
Let’s say a new research paper claims that global warming is less than expected. A Wikipedia editor adds a sentence about the research, with a link to the paper, to the “Global Warming” page. Immediately, the information is removed by other editors. The editor adds it again and it is removed again. These are the first shots in an “edit war”. When the editor adds the information back into the page he is given a warning and told to go to the article’s talk page, where editors who dislike the conclusions of the research question its insertion. This process is how Wikipedia was built in the first place: someone added facts, someone else challenged them and added facts of their own in response. However, once a page is established this system breaks down.
Ministry of Truth
The person who added the information wonders why it is being removed when, in his opinion, the page is supposed to be authoritative about global warming. Editors who don’t like the conclusions of the report use every method at their disposal to justify its removal: they say the researchers are not credible, or that the data is flawed. They invoke any and all Wikipedia rules to deny the inclusion of the sentence on the page, a process known as “Wikilawyering”. The talk page grows to many thousands of lines. The original editor embarks on a crash course in Wikipedia rules and acronyms like NPOV, RS and OR, and finds both the rules and their application arbitrary and inconsistent. All articles are supposed to have a neutral point of view, he cries. He appeals to an administrator on the site and is told he has to “find consensus,” but he already knows the consensus is against him.
After many hours of discussion, the editor withdraws. It’s too much trouble for one sentence; too much trouble for one report. He wonders just how many other people like him have given up trying to add opposing views to Wikipedia, and he wonders about the implications when millions are denied access to opposing views.
Down the Memory Hole
This situation is not unusual; in fact, I would say it is the norm on most Wikipedia pages. I have personally been involved in at least two similar situations (details later), and I am sure every Wikipedia editor has a similar story to tell.
The important point here is not that the sentence was removed from Wikipedia, but that the sentence is not seen while the discussion is happening. Casual readers of the page, who outnumber editors by at least 100 to one, will never know that the report existed. Editors will claim that this is the way Wikipedia is supposed to work, and that this process keeps false information off of the page. But that is a circular argument when they are the ones determining what is true and what is false. It’s also a great disservice to the readers, and feeds their suspicions that the information they are reading is unreliable.
To remedy this problem we need to separate data from its analysis, and stop editors, especially admins, from acting as judge, jury and executioner when it comes to content.
Newslines: Separation of information and truth
Newslines grew out of WeCheck, my first attempt to make a collaborative fact-checking site. I realized that to properly analyze the facts of a situation I needed to have as much data available as possible, and that the conclusions could, and probably would, change as more data came through. To help me understand how events had unfolded I created the first newsline, about the 2012 Benghazi Attack. It quickly became the No. 1 news timeline about the attacks.
There is no need to determine the truth of an event on a newsline. As long as it has been reported in the news somewhere, true or false, it can go in. Even National Enquirer stories are allowed (if marked as such). Users can then discuss the truth of each item in the comments.
In the near future Newslines will revive collaborative fact-checking by adding a sub-page to each newsline specifically for the fact-checking issues within it. By creating a space dedicated to fact checks, we allow the determination of “truth” to rest solely on the merits of the facts presented, not on arbitrary rules that allow administrators to browbeat newcomers with procedural bullshit.
Newslines also separates the writer of a post from its editor. Every post is approved by a randomly assigned editor who has no vested interest in the topic. The editor has a simple task: does the post conform to our style guide, and is it newsworthy? The truth can be discussed later.
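The core of this workflow, picking a random reviewer with no stake in the topic, can be sketched in a few lines of Python. This is only an illustration of the idea, not Newslines’ actual code; the function name and data shapes are my own assumptions.

```python
import random

def assign_reviewer(post_topic, editors, contributions):
    """Pick a random editor who has no prior involvement in the topic.

    `contributions` maps each editor's name to the set of topics they
    have written or edited. Any editor who has touched `post_topic`
    is excluded, so the reviewer has no vested interest in it.
    (All names and structures here are hypothetical.)
    """
    eligible = [e for e in editors
                if post_topic not in contributions.get(e, set())]
    if not eligible:
        raise ValueError("no uninvolved editor available")
    return random.choice(eligible)

# Example: alice has already edited the topic, so she is excluded
# and the reviewer is always drawn from the uninvolved editors.
contributions = {"alice": {"Global Warming"}, "bob": set(), "carol": set()}
reviewer = assign_reviewer("Global Warming",
                           ["alice", "bob", "carol"], contributions)
assert reviewer in {"bob", "carol"}
```

The point of the random draw is the same as the prose above: the person judging style and newsworthiness is never the person arguing about the topic.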
Together we hope these changes will give readers the confidence that news does not disappear down the memory hole.
We have launched the Newslines rewards system. Earn $1 for each approved post made on Newslines. Paid daily. Click here to find out more.