NLP · NLU · NLG · Summarization · Sentiment analysis · NER · POS · NMT · QA · Text categorization · Semantic parsing
GOTO PyTorch!
- [2013/01] Efficient Estimation of Word Representations in Vector Space
- [2014/12] Dependency-Based Word Embeddings
- [2015/07] Neural Machine Translation of Rare Words with Subword Units
- [2014/07] GloVe: Global Vectors for Word Representation : GloVe
- [2016/06] Siamese CBOW: Optimizing Word Embeddings for Sentence Representations : Siamese CBOW
- [2016/07] Enriching Word Vectors with Subword Information : fastText
- [2014/09] Sequence to Sequence Learning with Neural Networks : seq2seq
- [2017/07] Attention Is All You Need : Transformer (a minimal attention sketch follows this list)
- [2017/08] Learned in Translation: Contextualized Word Vectors : CoVe
- [2018/01] Universal Language Model Fine-tuning for Text Classification : ULMFiT
- [2018/02] Deep contextualized word representations : ELMo
- [2018/06] Improving Language Understanding by Generative Pre-Training : GPT-1
- [2018/10] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding : BERT
- [2019/02] Language Models are Unsupervised Multitask Learners : GPT-2
- [2019/04] Language Models with Transformers
- [2019/01] Cross-lingual Language Model Pretraining : XLM
- [2019/01] Multi-Task Deep Neural Networks for Natural Language Understanding : MT-DNN
- [2019/01] Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context : Transformer-XL
- [2019/06] XLNet: Generalized Autoregressive Pretraining for Language Understanding : XLNet
- [2019/09] Fine-Tuning Language Models from Human Preferences
- [2019/01] BioBERT: a pre-trained biomedical language representation model for biomedical text mining : BioBERT
- [2019/03] SciBERT: A Pretrained Language Model for Scientific Text : SciBERT
- [2019/04] ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission : ClinicalBERT
- [2019/06] HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization : HIBERT
- [2019/07] SpanBERT: Improving Pre-training by Representing and Predicting Spans : SpanBERT
- [2019/08] Pre-Training with Whole Word Masking for Chinese BERT
- [2019/07] R-Transformer: Recurrent Neural Network Enhanced Transformer : R-Transformer
- [2019/09] FreeLB: Enhanced Adversarial Training for Natural Language Understanding : FreeLB
- [2019/09] Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
- [2019/10] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer : T5
- [2018/07] Subword-level Word Vector Representations for Korean
- [2019/08] Zero-shot Word Sense Disambiguation using Sense Definition Embeddings
- [2019/06] Bridging the Gap between Training and Inference for Neural Machine Translation
- [2019/06] Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts
- [2019/07] A Simple Theoretical Model of Importance for Summarization
- [2019/05] Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems
- [2019/07] We Need to Talk about Standard Splits
- [2019/07] ERNIE 2.0: A Continual Pre-training Framework for Language Understanding : ERNIE 2.0
- [2019/07] Multi-Task Deep Neural Networks for Natural Language Understanding : MT-DNN
- [2019/05] SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems : SuperGLUE
- [2020/01] Towards a Human-like Open-Domain Chatbot + Google AI Blog
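Since the repo points to PyTorch, here is a minimal sketch of the scaled dot-product attention described in "Attention Is All You Need" (the Transformer entry above). The function name, tensor shapes, and masking convention are illustrative assumptions, not code taken from any of the listed papers.

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    # similarity scores between queries and keys, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)      # (batch, heads, q_len, k_len)
    if mask is not None:
        # positions where mask == 0 receive ~zero attention weight
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v, weights

# Toy usage with made-up shapes: batch=2, heads=4, seq_len=5, d_k=16
q = torch.randn(2, 4, 5, 16)
k = torch.randn(2, 4, 5, 16)
v = torch.randn(2, 4, 5, 16)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape)   # torch.Size([2, 4, 5, 16])
```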
Mathematics | Machine Learning |
---|---|
Mathematics for Machine Learning | Pattern Recognition and Machine Learning |