CovTiNet: Covid text identification network using attention-based positional embedding feature fusion

Document Type

Journal Article

Publication Title

Neural Computing and Applications

Publisher

Springer

School

Security Research Institute

RAS ID

56484

Comments

Hossain, M. R., Hoque, M. M., Siddique, N., & Sarker, I. H. (2023). CovTiNet: Covid text identification network using attention-based positional embedding feature fusion. Neural Computing and Applications, 35, 13503-13527.

https://doi.org/10.1007/s00521-023-08442-y

Abstract

Covid text identification (CTI) is a crucial research concern in natural language processing (NLP). Social and electronic media alike are adding a large volume of Covid-affiliated text to the World Wide Web, owing to effortless access to the Internet and electronic gadgets during the Covid outbreak. Most of these texts are uninformative and contain misinformation, disinformation and malinformation, creating an infodemic. Covid text identification is therefore essential for controlling societal distrust and panic. Very little Covid-related research (such as on Covid disinformation, misinformation and fake news) has been reported even in high-resource languages (e.g. English), and CTI in low-resource languages (like Bengali) remains at a preliminary stage to date. Automatic CTI in Bengali text is challenging due to the lack of benchmark corpora, complex linguistic constructs, extensive verb inflections and the scarcity of NLP tools, while manual processing of Bengali Covid texts is arduous and costly because of their messy, unstructured form. This research proposes a deep learning-based network (CovTiNet) to identify Covid text in Bengali. CovTiNet incorporates attention-based positional embedding feature fusion for text-to-feature representation and an attention-based CNN for Covid text identification. Experimental results show that the proposed CovTiNet achieved the highest accuracy of 96.61 ± 0.001% on the developed dataset (BCovC) compared with other methods and baselines (i.e. BERT-M, IndicBERT, ELECTRA-Bengali, DistilBERT-M, BiLSTM, DCNN, CNN, LSTM, VDCNN and ACNN).
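For readers who want a concrete picture of the architecture the abstract names, the following is a minimal PyTorch sketch, not the authors' released implementation: an attention-based positional embedding feature fusion that turns token ids into position-aware features, followed by an attention-based CNN classifier. All layer sizes, the linear attention-scoring form, and the multiplicative fusion rule are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePositionalFusion(nn.Module):
    """Fuses token embeddings with learned positional embeddings via attention weights."""
    def __init__(self, vocab_size, max_len, dim):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.pos = nn.Embedding(max_len, dim)
        self.attn = nn.Linear(dim, 1)  # per-position score (assumed form)

    def forward(self, ids):  # ids: (batch, seq)
        positions = torch.arange(ids.size(1), device=ids.device)
        x = self.tok(ids) + self.pos(positions)   # position-aware token features
        w = torch.softmax(self.attn(x), dim=1)    # attention over sequence positions
        return x * w                              # attention-weighted fused features

class AttentionCNNClassifier(nn.Module):
    """Convolutions over the fused features, attention pooling, class logits."""
    def __init__(self, dim, n_filters=64, kernel=3, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(dim, n_filters, kernel, padding=kernel // 2)
        self.attn = nn.Linear(n_filters, 1)
        self.out = nn.Linear(n_filters, n_classes)

    def forward(self, x):  # x: (batch, seq, dim)
        h = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (batch, seq, filters)
        w = torch.softmax(self.attn(h), dim=1)
        pooled = (h * w).sum(dim=1)  # attention pooling over the sequence
        return self.out(pooled)

# Usage: score a batch of (already tokenised) texts as Covid vs non-Covid.
fusion = AttentivePositionalFusion(vocab_size=30000, max_len=128, dim=100)
clf = AttentionCNNClassifier(dim=100)
ids = torch.randint(0, 30000, (4, 128))  # dummy token ids
logits = clf(fusion(ids))                # shape (4, 2)

The vocabulary size, sequence length and embedding dimension above are placeholders; the paper should be consulted for the actual hyperparameters and fusion details.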

DOI

10.1007/s00521-023-08442-y

Access Rights

Free to read
