KLUE: Korean Language Understanding Evaluation
Paper: [KLUE: Korean Language Understanding Evaluation](https://arxiv.org/abs/2105.09680)
This model is a fine-tuned version of klue/bert-base on the KLUE dataset. It achieves the evaluation-set results shown in the training table below.

KLUE BERT base is a BERT model pre-trained on Korean text. Its developers built it as part of the Korean Language Understanding Evaluation (KLUE) benchmark.
```python
from transformers import AutoModel, AutoTokenizer

# Load the pre-trained KLUE BERT base model and its matching tokenizer
model = AutoModel.from_pretrained("klue/bert-base")
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
```
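As a minimal usage sketch, the loaded model can encode Korean text into contextual embeddings. The sample sentence and the `[CLS]` pooling choice below are illustrative assumptions, not part of the original card:

```python
import torch

# Tokenize a sample Korean sentence ("The weather is nice today.")
inputs = tokenizer("오늘 날씨가 좋네요.", return_tensors="pt")

# Forward pass without gradient tracking
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, hidden_size);
# take the [CLS] token's vector as a sentence representation
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # torch.Size([1, 768])
```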
More information needed
The following results were recorded on the evaluation set at the end of each training epoch:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.0638 | 1.0 | 2626 | 0.0807 | 0.8623 | 0.8702 | 0.8662 | 0.9747 |
| 0.0402 | 2.0 | 5252 | 0.0780 | 0.8756 | 0.8896 | 0.8825 | 0.9770 |
| 0.0250 | 3.0 | 7878 | 0.0843 | 0.8839 | 0.8967 | 0.8902 | 0.9781 |
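The precision, recall, F1, and accuracy columns suggest a token-classification fine-tune (e.g., named-entity recognition), for which entity-level metrics are commonly computed with the `seqeval` package. A hedged sketch follows; the task assumption and the label sequences are illustrative, not taken from the card:

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Illustrative gold and predicted label sequences (IOB2 tags), one list per sentence
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "B-LOC"]]

# Entity-level precision/recall/F1 and token-level accuracy,
# matching the columns reported in the table above
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
```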
Base model: [klue/bert-base](https://huggingface.co/klue/bert-base)