
KLUE
The Korean Language Understanding Evaluation (KLUE) benchmark is a collection of datasets for evaluating the natural language understanding capabilities of Korean language models.
License: cc-by-sa-4.0
Size: 100k<n<1M
Tasks: Fill-Mask, Token Classification, Question Answering
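As a minimal sketch of how the benchmark might be loaded with the Hugging Face `datasets` library: the task config names below are assumptions drawn from KLUE's published task list, not guaranteed by this card, so verify them against the dataset files before use.

```python
# Hypothetical loader for KLUE tasks via the `datasets` library.
# The config identifiers are assumed from KLUE's eight tasks and may differ
# from the actual dataset repository; check the card for exact names.
KLUE_TASKS = ["ynat", "sts", "nli", "ner", "re", "dp", "mrc", "wos"]

def load_klue_task(task: str):
    """Return the named KLUE task as a DatasetDict (downloads on first call)."""
    if task not in KLUE_TASKS:
        raise ValueError(f"unknown KLUE task: {task!r}")
    from datasets import load_dataset  # lazy import; requires `pip install datasets`
    return load_dataset("klue", task)

if __name__ == "__main__":
    # Machine reading comprehension ("mrc") covers the Question Answering tag above.
    ds = load_klue_task("mrc")
    print(ds)
```

Each task loads as its own configuration, which matches how multi-task benchmarks are typically published on the Hub.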