Could this method be applied toward distilling truth from a knowledge graph generated from a corpus of both real & fake news? Or from a corpus of inconsistently-reliable witness statements about any given phenomenon?
I'm not an expert on this by any means, but I don't think so. The original paper assumes bounds on the number of "lies", and the original context (described in the abstract) is a question/answer game, so it is likely posed within a very constrained framework.
Fundamentally, there is no way to extract knowledge about objective reality from a corpus of contradictory text of unknown quality. The contradictions can yield clues if you are willing to make assumptions, but even statements backed by unanimous agreement can be 100% false.
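To make the contrast concrete: this is a toy sketch (not the paper's actual algorithm, which I haven't read closely) of the Rényi–Ulam flavor of liar search. When the total number of false answers is bounded by some known k, you can repeat every yes/no question 2k + 1 times and take a majority vote; at most k of those repetitions can be lies, so the majority is always truthful and ordinary binary search still recovers the secret. No such bound exists for a news corpus, which is the point of the comment above.

```python
# Toy illustration: binary search that tolerates at most k lies in total.
# Strategy: ask each comparison question 2k + 1 times and take the majority.
# Since at most k answers across the whole game can be false, at least
# k + 1 of any 2k + 1 repetitions are truthful, so the majority is correct.

def majority_query(answer, k):
    """Ask the same yes/no question 2k + 1 times; return the majority answer."""
    votes = sum(answer() for _ in range(2 * k + 1))
    return votes > k

def find_secret(secret, n, k):
    """Find `secret` in [0, n) despite an answerer who lies at most k times."""
    lies_left = [k]          # shared budget: total lies across the whole game
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2

        def answer():
            truth = secret < mid
            if lies_left[0] > 0:     # adversary lies whenever it still can
                lies_left[0] -= 1
                return not truth
            return truth

        if majority_query(answer, k):
            hi = mid                 # majority says secret < mid
        else:
            lo = mid
    return lo
```

The repetition trick is wasteful (the actual literature on the problem uses cleverer question strategies), but it shows why a hard bound on the lies is what makes the game winnable at all: drop the bound, and no amount of questioning distinguishes a consistent liar from a truth-teller.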
Fair, thanks for that.