What is it about?

Based on the RoBERTa (Robustly Optimized BERT Pretraining Approach) model, we introduce weight vectors, absolute and relative position information, and contextual information of feature words to build a universal text classification model. A Bayesian optimization algorithm helps the model find optimal hyperparameters while reducing computational cost.
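
As a rough illustration of how a feature-word weight vector can be fused with the RoBERTa output, here is a minimal PyTorch sketch, assuming the Hugging Face transformers library; the class name, feature dimension, and single linear head are our own illustrative choices, not details taken from the paper.

```python
# A minimal sketch of the fusion step, assuming PyTorch and Hugging Face
# transformers. The class name, feature dimension, and classifier head are
# illustrative choices, not details from the paper.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class FeatureWeightedClassifier(nn.Module):
    def __init__(self, feature_dim: int, num_labels: int):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        hidden = self.roberta.config.hidden_size  # 768 for roberta-base
        # The classifier sees RoBERTa's sentence vector concatenated with
        # the feature-word weight vector, so feature words interact with
        # the encoder output directly.
        self.classifier = nn.Linear(hidden + feature_dim, num_labels)

    def forward(self, input_ids, attention_mask, feature_weights):
        out = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        sent_vec = out.last_hidden_state[:, 0]  # <s> token as sentence summary
        fused = torch.cat([sent_vec, feature_weights], dim=-1)
        return self.classifier(fused)

# Hypothetical usage with dummy feature-word weights:
tok = RobertaTokenizer.from_pretrained("roberta-base")
enc = tok("What factors influence citation impact?", return_tensors="pt")
weights = torch.rand(1, 10)  # one weight per tracked feature word (illustrative)
model = FeatureWeightedClassifier(feature_dim=10, num_labels=2)
logits = model(enc["input_ids"], enc["attention_mask"], weights)
```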


Why is it important?

1. To enhance the semantic and contextual representation of feature words, weight vectors of feature words for research question sentences are constructed based on syntactic structure features.
2. To capture and learn the semantics and contextual information of corpus features, and to achieve direct interaction between feature words and the model, the feature-word weight vector is concatenated with the output of the RoBERTa model (as sketched above).
3. To obtain optimal hyperparameters, a hyperparameter loss function is constructed and minimized with a Bayesian optimization algorithm (see the sketch after this list).
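
The following sketch shows the general shape of such a Bayesian hyperparameter search, using scikit-optimize's Gaussian-process minimizer as a stand-in; the paper's own hyperparameter loss function is not reproduced here, and the search space and placeholder objective are illustrative only.

```python
# A hedged sketch of Bayesian hyperparameter search with scikit-optimize's
# Gaussian-process minimizer. The search space and the placeholder objective
# are illustrative, not the paper's hyperparameter loss function.
from skopt import gp_minimize
from skopt.space import Real, Integer

space = [
    Real(1e-6, 1e-4, prior="log-uniform", name="learning_rate"),
    Integer(8, 64, name="batch_size"),
    Real(0.0, 0.5, name="dropout"),
]

def objective(params):
    lr, batch_size, dropout = params
    # Placeholder: in practice this would train the classifier with these
    # hyperparameters and return its validation loss. The expression below
    # only keeps the sketch runnable end to end.
    return (lr - 2e-5) ** 2 * 1e8 + abs(dropout - 0.1) + 0.001 * abs(batch_size - 32)

# Each call fits a Gaussian process to the past (hyperparameters, loss) pairs
# and proposes the next point to evaluate, so far fewer training runs are
# needed than with grid or random search.
result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best hyperparameters:", result.x, "lowest loss:", result.fun)
```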

Perspectives

Writing this article was a great pleasure, as its co-authors are people with whom I have had long-standing collaborations.

Meng Wang
National Science and Technology Library

Read the Original

This page is a summary of: Leveraging Weight Vectors of Feature Words for Research Question Identification in Scientific Articles, December 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3677389.3702521.
You can read the full text via the DOI above.


