A Consolidated Open Knowledge Representation for Multiple Texts


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Please contact us if you find a broken link here.

Authors Vered Shwartz, Gabriel Stanovsky, Ido Dagan, Eugenio Martinez Camara, Meni Adler, Ori Shapira, Iryna Gurevych, Rachel Wities, Shyam Upadhyay, Dan Roth
Journal/Conference Name WS 2017
Paper Category
Paper Abstract We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner. We do so by consolidating OIE extractions using entity and predicate coreference, while modeling information containment between coreferring elements via lexical entailment. We suggest that generating OKR structures can be a useful step in the NLP pipeline, to give semantic applications an easy handle on consolidated information across multiple texts.
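The consolidation step the abstract describes can be illustrated with a toy sketch: group OIE tuples whose arguments corefer, so that multiple predicate mentions attach to one consolidated argument pair. This is not the authors' implementation; the `aliases` table is a hand-made stand-in for learned entity coreference, and lexical entailment between predicates is omitted.

```python
from collections import defaultdict

# Toy OIE extractions (subject, predicate, object) from two texts.
extractions = [
    ("Barack Obama", "visited", "Paris"),
    ("Obama", "arrived in", "the French capital"),
]

# Hypothetical coreference table mapping mentions to a canonical entity
# (a real OKR system would learn these clusters from the texts).
aliases = {
    "Obama": "Barack Obama",
    "the French capital": "Paris",
}

def canonical(mention: str) -> str:
    """Resolve a mention to its canonical entity, if known."""
    return aliases.get(mention, mention)

# Consolidate: collect all predicate mentions that hold between the
# same (canonical) argument pair across the input texts.
okr = defaultdict(set)
for subj, pred, obj in extractions:
    okr[(canonical(subj), canonical(obj))].add(pred)

for args, preds in sorted(okr.items()):
    print(args, sorted(preds))
```

Under these assumptions, both extractions collapse into a single consolidated proposition with two coreferring predicate mentions, which is the "easy handle on consolidated information across multiple texts" the abstract aims for.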
Date of publication 2017
Code Programming Language Python
Comment

Copyright Researcher 2022