What is it about?
The proliferation of knowledge graphs and recent advances in Artificial Intelligence have raised great expectations related to the combination of symbolic and distributional semantics in cognitive tasks. This is particularly the case for knowledge-based approaches to natural language processing. Engineered by humans, such knowledge graphs are frequently well curated and of high quality, but building them can also be labor-intensive, brittle, or biased. The work reported in this paper aims to address these limitations by bringing together bottom-up, corpus-based knowledge and top-down, structured knowledge graphs, capturing the semantics of both words and concepts from large document corpora as embeddings in a joint space. To evaluate our results, we perform the largest and most comprehensive empirical study around this topic that we are aware of.
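The joint-space idea is easy to prototype: if each word in the corpus has been disambiguated against the knowledge graph, both the surface lemma and its concept identifier can be fed to a standard embedding learner, so that words and concepts land in one shared vector space. The sketch below illustrates this with gensim's word2vec on a toy corpus; the `lem_`/`syn_` prefixes and the tiny example sentences are illustrative assumptions, and the paper's actual pipeline (based on Swivel over large sense-disambiguated corpora) is more elaborate than this simplified setup.

```python
# Minimal sketch of joint word-concept embeddings, NOT the paper's
# Swivel-based pipeline. Each lemma in a word-sense-disambiguated
# corpus is interleaved with the knowledge-graph concept it was
# linked to, so lemmas and concepts share one embedding space.
# The "lem_"/"syn_" prefixes and the toy corpus are assumptions
# made for illustration only.
from gensim.models import Word2Vec

sentences = [
    ["lem_bank", "syn_financial_institution",
     "lem_issue", "syn_provide", "lem_loan", "syn_loan"],
    ["lem_bank", "syn_river_bank", "lem_erode", "syn_erosion"],
    ["lem_loan", "syn_loan", "lem_interest", "syn_interest_rate"],
]

model = Word2Vec(
    sentences,
    vector_size=50,  # dimensionality of the joint space
    window=5,
    min_count=1,
    epochs=50,
    seed=1,
    workers=1,       # single worker for reproducibility
)

# Lemmas and concepts were trained together, so they can be
# compared directly in the joint space.
print(model.wv.similarity("lem_loan", "syn_loan"))
print(model.wv.most_similar("syn_financial_institution", topn=3))
```

On a real corpus this co-training is what lets the embeddings tie corpus-derived distributional evidence to the explicit concepts of the knowledge graph, which is the hybrid representation the paper studies.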
Why is it important?
This paper advances fundamental questions related to the combination of knowledge graphs and neural representations in the form of word and concept embeddings. Such questions include:
- How can neural methods extend previously captured knowledge, explicitly represented as knowledge graphs, in cost-efficient and practical ways?
- What are the main building blocks and techniques enabling such a hybrid approach to NLP?
- How can structured and neural representations be seamlessly integrated?
- How can the quality of the resulting hybrid representations be inspected and evaluated?
- How does this impact the performance of NLP tasks, the processing of other data modalities, such as visual data, and their interplay?
Read the Original
This page is a summary of: Vecsigrafo: Corpus-based word-concept embeddings, Semantic Web, September 2019, IOS Press. DOI: 10.3233/sw-190361.