Semantic Text Analyser: a BERT-like language model for formal language understanding
SeTABERTa is a new multilingual language model pretrained from scratch on several Open Access text repositories: EU legislation, research articles, EU public documents, and US patents. Two thirds of the training data are in English; the remaining third covers the EU24 languages. The model was trained on the JRC Big Data Platform and can be fine-tuned for other downstream tasks (a fine-tuning sketch is given below).
DAUDARAVICIUS Vidas; FRIIS-CHRISTENSEN Anders
2024-03-15
European Commission
JRC137020
https://publications.jrc.ec.europa.eu/repository/handle/JRC137020
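
The record states that the model can be fine-tuned for other tasks. The sketch below is a minimal illustration of what that could look like with the Hugging Face Transformers library, assuming the SeTABERTa checkpoint is available (or has been converted) in that format; the local path `./setaberta`, the toy sentences, and the binary labels are hypothetical placeholders and not part of the official distribution.

```python
# Hypothetical fine-tuning sketch: assumes a SeTABERTa checkpoint saved locally
# in Hugging Face Transformers format. The path "./setaberta", the toy texts,
# and the binary labels are illustrative placeholders, not part of the release.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_PATH = "./setaberta"  # assumed local checkpoint directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
# A BERT-like encoder can be given a fresh classification head for fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH, num_labels=2)

# Toy labelled examples of formal text (legislation vs. patent, for instance).
texts = [
    "This Regulation shall enter into force on the twentieth day following its publication.",
    "The claimed invention relates to a semiconductor memory device.",
]
labels = [0, 1]

encodings = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")


class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenised texts and labels so the Trainer can iterate over them."""

    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: tensor[idx] for key, tensor in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


training_args = TrainingArguments(
    output_dir="setaberta-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)
trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(encodings, labels))
trainer.train()
```

For other downstream tasks, such as token classification or masked-language-model evaluation, the corresponding `Auto*` model classes could be substituted for the sequence-classification head in the same way.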