UoR at SemEval-2021 Task 7: Utilizing pre-trained DistilBERT model and multi-scale CNN for humor detection

2021.semeval-1.166.pdf - Published Version (162kB)
Available under license: Creative Commons Attribution

SemEval2021_Task7.pdf - Accepted Version (116kB)
Restricted to Repository staff only

Liu, Z., Haines, C. and Liang, H. (2021) UoR at SemEval-2021 Task 7: Utilizing pre-trained DistilBERT model and multi-scale CNN for humor detection. In: SemEval-2021, 5-6 August 2021, Bangkok.

Abstract/Summary

Humor detection is an interesting but difficult task in NLP. Humor is often not obvious in text: it may be embedded in context, hidden behind the literal meaning of a phrase, or require prior knowledge to understand. We explored different shallow and deep methods to build a humor detection classifier for Task 7-1a. Models such as logistic regression, LSTM, MLP and CNN were used, and pre-trained models such as DistilBERT were introduced to generate accurate vector representations of the text. We focused on applying a multi-scale strategy in modelling and compared the resulting models. Our best model, DistilBERT + multi-scale CNN, uses CNN kernels of different sizes to extract features at multiple scales. This method achieved a 93.7% F1-score and 92.1% accuracy on the test set.
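For readers unfamiliar with the architecture named in the abstract, the sketch below shows one way a multi-scale CNN head can sit on top of DistilBERT token embeddings, using PyTorch and the Hugging Face transformers library. The kernel sizes, filter count and pooling choice are illustrative assumptions, not the authors' reported hyperparameters.

    # Minimal sketch of the DistilBERT + multi-scale CNN idea, assuming
    # PyTorch and Hugging Face transformers. Hyperparameters here are
    # illustrative, not the values reported in the paper.
    import torch
    import torch.nn as nn
    from transformers import DistilBertModel, DistilBertTokenizerFast

    class MultiScaleCNNClassifier(nn.Module):
        def __init__(self, kernel_sizes=(2, 3, 4), num_filters=100):
            super().__init__()
            self.encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")
            hidden = self.encoder.config.dim  # 768 for distilbert-base
            # One 1-D convolution per kernel size: each captures n-gram-like
            # features at a different scale over the token embeddings.
            self.convs = nn.ModuleList(
                [nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes]
            )
            self.classifier = nn.Linear(num_filters * len(kernel_sizes), 2)

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d
            x = out.last_hidden_state.transpose(1, 2)
            # Max-pool each scale over the sequence, then concatenate.
            pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
            return self.classifier(torch.cat(pooled, dim=1))

    tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
    model = MultiScaleCNNClassifier()
    batch = tokenizer(["Why did the chicken cross the road?"],
                      return_tensors="pt", padding=True, truncation=True)
    logits = model(batch["input_ids"], batch["attention_mask"])  # (1, 2): humor vs. non-humor

Concatenating the max-pooled outputs of several kernel widths is what gives the "multiple scales of features" the abstract refers to: small kernels respond to short local patterns, larger kernels to longer phrase-level ones.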

Item Type Conference or Workshop Item (Paper)
URI https://reading-clone.eprints-hosting.org/id/eprint/97213
Refereed Yes
Divisions Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science

