With Facebook on the block for spreading misinformation, how long until the internet can weed out lies itself? Peter Cochrane theorises
There was a time when ‘the truth' would last a long time!
The earth is flat. No, it's a sphere. No, it's flat. No, it's a sphere... Whoops, it's an oblate spheroid... Hmm, a dynamic oscillatory oblate spheroid.
This one took thousands of years to settle down, but ‘a dynamic oscillatory oblate spheroid' is the best picture that all our scientific instrumentation and the observations made to date can give us. And so it is for scientific truths - they are only as good as our models and experimental verification, and often a closer look with better equipment changes our perspective and our models.
But that is what science is about: achieving the most accurate picture and understanding of the universe in which we live - and so these ‘truths' tend to be transitory!
Well, science is only one class of truth, and there are many more. For example, conventional truths set by man-made rules of law: incest is illegal. Grammar: ‘i' before ‘e' except after ‘c'. Mathematics: the inverse of zero is infinity, or it may be indeterminate. Then we have political, managerial and agency truths: this politician or manager said this or that; a government enacted this or that.
Then there are belief systems and doctrinal truths: There is only one God, and he created heaven and earth.
Other belief systems embrace UFOs, ghosts, vampires, the supernatural, communication with the dead and more.
And so it goes on. We live with a vast range of truths.
The Truth Engine
So, in the age of the internet and an exponential explosion of information, how do we know something is true? We don't!
But it looks as though we might get our machines to cough up the best ‘quantification' of truth in the form of a probability statement. In principle, this ought to be very easy! Just take a stated fact or accepted truth and test it ‘pro-con' by counting the postings for and against it.
A straightforward statistical analysis and so very simple - right? Wrong! The nature of information creation today sees plagiarism on a global scale. A single news report may spawn hundreds of variants of many different flavours and biases and, worse, a concatenation of errors!
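To make the problem concrete, here is a minimal sketch of that naive pro-con tally in Python. The posting records and their 'stance' labels are hypothetical stand-ins for whatever a real system would extract; the point is that a verbatim copy counts as fresh evidence and skews the score.

```python
# Naive approach: tally postings for and against a claim and report the
# "pro" fraction as a crude probability of truth. Hypothetical data fields:
# 'source' and 'stance'. Copied reports are counted twice - the flaw the
# article describes.

from collections import Counter

def naive_truth_score(postings):
    """postings: list of dicts with a 'stance' key of 'pro' or 'con'."""
    counts = Counter(p["stance"] for p in postings)
    total = counts["pro"] + counts["con"]
    return counts["pro"] / total if total else 0.5  # 0.5 = no evidence either way

postings = [
    {"source": "outlet-a", "stance": "pro"},
    {"source": "outlet-b", "stance": "pro"},   # near-verbatim copy of outlet-a
    {"source": "outlet-c", "stance": "con"},
]
print(naive_truth_score(postings))  # ~0.67, inflated by the copied report
```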
In the face of such complexity, how might a "Truth Engine" be constructed? Several universities and Google are on the case, and I suspect IBM are in the fray too!
The general line of attack goes something like this: take a stated truth published on the internet, seek out every related item, and then construct a derivation tree based on the date of posting, authorship, organisation and textual content. Apply filters based on evident copying, author and organisation credentials and credibility, reference materials and quality of analysis. Filtering out ‘the noise' can easily see over 20 million apparently original postings reduced to under three million credible sources.
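A rough sketch of that filtering stage, under assumed data fields (date, org, text, credibility): near-verbatim copies are collapsed onto the earliest posting - a crude stand-in for a full derivation tree - and low-credibility sources are dropped before any statistics are run. The similarity test and thresholds here are illustrative, not any particular system's method.

```python
# Collapse evident copies onto their earliest posting and keep only
# sources that pass a credibility filter (author/organisation credentials).

from difflib import SequenceMatcher

def is_copy(a, b, threshold=0.9):
    """Crude textual-similarity test; real systems would use stronger measures."""
    return SequenceMatcher(None, a["text"].lower(), b["text"].lower()).ratio() >= threshold

def credible_originals(postings, min_credibility=0.6):
    originals = []
    for p in sorted(postings, key=lambda p: p["date"]):   # earliest first
        if any(is_copy(p, o) for o in originals):
            continue                                       # evident copy: don't count it again
        if p["credibility"] >= min_credibility:            # credibility filter
            originals.append(p)
    return originals

postings = [
    {"date": "2015-03-01", "org": "wire-service", "text": "Claim X is true, say researchers.", "credibility": 0.9},
    {"date": "2015-03-02", "org": "blog", "text": "claim x is true, say researchers.", "credibility": 0.3},
    {"date": "2015-03-03", "org": "newspaper", "text": "Independent analysis disputes claim X.", "credibility": 0.8},
]
print(len(credible_originals(postings)))  # 2: the copy is collapsed, the rest kept
```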
Now it is feasible to apply a meaningful statistical analysis to reveal a ‘mean opinion' with a ‘normal(ish) distribution' and a standard deviation that says there is an x per cent confidence that this particular truth is correct. If that x is over 99 per cent, we can assume great credence, but if it is less than 70 per cent, we should be very wary in making our judgement call!
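As a worked sketch of that statistical step, suppose each credible source has been scored in [0, 1] for how strongly it supports the claim (the scores below are made up). One reasonable reading of the confidence figure is the normal-approximation probability that the mean support sits above 0.5, i.e. that the balance of credible sources backs the claim.

```python
# Confidence that the mean support exceeds 0.5, using a normal approximation
# of the sampling distribution of the mean.

from math import erf, sqrt
from statistics import mean, stdev

def truth_confidence(scores):
    m, s, n = mean(scores), stdev(scores), len(scores)
    z = (m - 0.5) / (s / sqrt(n))        # distance of the mean from "undecided"
    return 0.5 * (1 + erf(z / sqrt(2)))  # one-sided normal probability

scores = [0.9, 0.8, 0.85, 0.4, 0.95, 0.7]   # hypothetical per-source support scores
print(f"{truth_confidence(scores):.1%} confidence")
# Roughly 99.9% here: great credence by the thresholds above; under 70% would warrant caution.
```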
Rumour has it that Google will deploy ‘truth engine technology' in its search engine so that we get the ‘best picture' on any search. I can't wait - what a life saver! However, there are still many algorithmic obstacles to overcome, and more refinement is definitely required.
And watch out - the process could introduce a tendency to become a strange attractor for falsehoods and errors that persist for far longer than they should. Any truth test has to be dynamic and continuous; truth is not static, and it has to be tested in light of new evidence non-stop!
Source: computing.co.uk