21 June 2024 · PhoBERT: 0.931 / 0.931; MaxEnt (paper): 87.9 / 87.9. We haven't tuned the model but still get a better result than the one reported in the UIT-VSFC paper. To tune the model, …

Model's architecture is based on PhoBERT. • Outperformed the most recent research paper on Vietnamese text summarization on the same dataset, with ROUGE-1, ROUGE-2 and ROUGE-L scores of 0.61, 0.30 and …
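The snippet above reports ROUGE-1/2/L scores for the summarization model. As a reminder of what those numbers measure, here is a minimal ROUGE-N sketch in plain Python (not the official scorer, and the tokenization here is naive whitespace splitting):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """ROUGE-N F1: n-gram overlap between a candidate and a reference summary."""
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram match count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n("the cat sat", "the cat sat on the mat")` gives precision 1.0, recall 0.5, and so an F1 of 2/3. ROUGE-L (the third score quoted) instead uses the longest common subsequence rather than fixed-width n-grams.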
PhoBERT: Pre-trained language models for Vietnamese
This paper proposes several transformer-based approaches for Reliable Intelligence Identification on Vietnamese social network sites at the VLSP 2024 evaluation campaign. We exploit both of …

In this paper, we conduct a quantitative and qualitative study of incentivized review services by infiltrating an underground incentivized review service geared towards Amazon.com. On a dataset of 1,600 products seeking incentivized reviews, we first demonstrate the ineffectiveness of off-the-shelf fake review detection as well as …
GitHub - VinAIResearch/PhoBERT: PhoBERT: Pre-trained language mod…
12 July 2024 · In this paper, we propose PhoBERT-based convolutional neural networks (CNNs) for text classification. The output of the contextualized embeddings of PhoBERT is …

7 April 2024 · In this paper, we present the first manually annotated COVID-19 domain-specific dataset for Vietnamese. In particular, our dataset is annotated for the named …

PhoBERT (from VinAI Research) released with the paper PhoBERT: Pre-trained language models for Vietnamese by Dat Quoc Nguyen and Anh Tuan Nguyen. PLBart (from UCLA NLP) released with the paper Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
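The PhoBERT-CNN snippet above describes feeding PhoBERT's contextualized embeddings into a convolutional classifier. A minimal NumPy sketch of that pattern (multi-width 1-D convolutions plus max-over-time pooling, i.e. the standard TextCNN head) is below; the random arrays are stand-ins for PhoBERT outputs and learned filter weights, and all names here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def text_cnn_features(embeddings, filters):
    """Max-over-time pooled features from 1-D convolutions over token embeddings.

    embeddings: (seq_len, dim) contextualized token vectors.
    filters:    list of weight tensors, each (width, dim, n_filters).
    """
    seq_len, _ = embeddings.shape
    pooled = []
    for W in filters:
        width, _, _ = W.shape
        # Convolve each window of `width` tokens with the filter bank.
        conv = np.stack([
            np.tensordot(embeddings[i:i + width], W, axes=([0, 1], [0, 1]))
            for i in range(seq_len - width + 1)
        ])  # -> (positions, n_filters)
        # ReLU, then max-pool over positions.
        pooled.append(np.maximum(conv, 0).max(axis=0))
    return np.concatenate(pooled)

# Stand-in for PhoBERT's output: 20 tokens x 768-dim embeddings.
emb = rng.normal(size=(20, 768))
# Hypothetical filter banks of widths 2, 3, 4 with 32 filters each.
filters = [rng.normal(scale=0.02, size=(w, 768, 32)) for w in (2, 3, 4)]
feats = text_cnn_features(emb, filters)
print(feats.shape)  # (96,) - this vector would feed a softmax classifier head
```

In the real model the embeddings come from a forward pass through PhoBERT and the filter weights are trained jointly with the classifier; the shape bookkeeping is the same.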