LiquidAI/LFM2-350M trained on Natural Questions pairs
This is a sentence-transformers model finetuned from LiquidAI/LFM2-350M on the natural-questions dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: LiquidAI/LFM2-350M
- Maximum Sequence Length: 128000 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: natural-questions
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128000, 'do_lower_case': False, 'architecture': 'LFM2Model'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
)
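The Pooling module uses last-token pooling (pooling_mode_lasttoken: True): each text is represented by the hidden state of its final non-padding token, a common choice for decoder-style backbones such as LFM2. A minimal sketch of the idea, illustrative rather than the library's exact implementation:

import torch

def last_token_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 1024); attention_mask: (batch, seq_len)
    # Select the hidden state of the last non-padding token in each sequence.
    last_indices = attention_mask.sum(dim=1) - 1
    batch_indices = torch.arange(token_embeddings.size(0), device=token_embeddings.device)
    return token_embeddings[batch_indices, last_indices]  # (batch, 1024)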
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer

# Load the finetuned model from the Hugging Face Hub
model = SentenceTransformer("tomaarsen/LFM2-350M-nq-prompts")

# Encode a query and a small corpus of documents
queries = [
    "where does the last name francisco come from",
]
documents = [
    'Francisco Francisco is the Spanish and Portuguese form of the masculine given name Franciscus (corresponding to English Francis).',
    'Book of Esther The Book of Esther, also known in Hebrew as "the Scroll" (Megillah), is a book in the third section (Ketuvim, "Writings") of the Jewish Tanakh (the Hebrew Bible) and in the Christian Old Testament. It is one of the five Scrolls (Megillot) in the Hebrew Bible. It relates the story of a Hebrew woman in Persia, born as Hadassah but known as Esther, who becomes queen of Persia and thwarts a genocide of her people. The story forms the core of the Jewish festival of Purim, during which it is read aloud twice: once in the evening and again the following morning. The books of Esther and Song of Songs are the only books in the Hebrew Bible that do not explicitly mention God.[2]',
    'Times Square Times Square is a major commercial intersection, tourist destination, entertainment center and neighborhood in the Midtown Manhattan section of New York City at the junction of Broadway and Seventh Avenue. It stretches from West 42nd to West 47th Streets.[1] Brightly adorned with billboards and advertisements, Times Square is sometimes referred to as "The Crossroads of the World",[2] "The Center of the Universe",[3] "the heart of The Great White Way",[4][5][6] and the "heart of the world".[7] One of the world\'s busiest pedestrian areas,[8] it is also the hub of the Broadway Theater District[9] and a major center of the world\'s entertainment industry.[10] Times Square is one of the world\'s most visited tourist attractions, drawing an estimated 50 million visitors annually.[11] Approximately 330,000 people pass through Times Square daily,[12] many of them tourists,[13] while over 460,000 pedestrians walk through Times Square on its busiest days.[7]',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# (1, 1024) (3, 1024)

# Compute cosine similarities between the query and each document
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
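Because the model was trained with prompts (see Training Hyperparameters below), encode_query and encode_document automatically apply the stored "query: " and "document: " prefixes before embedding. Roughly equivalent explicit calls with the generic API:

# Equivalent to encode_query / encode_document, with the prompts spelled out
query_embeddings = model.encode(queries, prompt="query: ")
document_embeddings = model.encode(documents, prompt="document: ")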
Evaluation
Metrics
Information Retrieval
| Metric              | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:--------------------|------------:|-------------:|-------:|
| cosine_accuracy@1   | 0.28        | 0.4          | 0.48   |
| cosine_accuracy@3   | 0.46        | 0.5          | 0.68   |
| cosine_accuracy@5   | 0.64        | 0.58         | 0.78   |
| cosine_accuracy@10  | 0.74        | 0.68         | 0.82   |
| cosine_precision@1  | 0.28        | 0.4          | 0.48   |
| cosine_precision@3  | 0.1533      | 0.36         | 0.2267 |
| cosine_precision@5  | 0.128       | 0.324        | 0.156  |
| cosine_precision@10 | 0.074       | 0.266        | 0.086  |
| cosine_recall@1     | 0.28        | 0.023        | 0.47   |
| cosine_recall@3     | 0.46        | 0.0616       | 0.64   |
| cosine_recall@5     | 0.64        | 0.0975       | 0.72   |
| cosine_recall@10    | 0.74        | 0.133        | 0.78   |
| cosine_ndcg@10      | 0.4909      | 0.3236       | 0.6322 |
| cosine_mrr@10       | 0.4131      | 0.4758       | 0.5984 |
| cosine_map@100      | 0.4235      | 0.1254       | 0.5838 |
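For reference, the headline metric cosine_ndcg@10 rewards placing relevant documents near the top of the ranking. In its standard binary-relevance form, with rel_i the relevance of the document at rank i and IDCG@10 the DCG@10 of an ideal ordering:

$$\mathrm{NDCG@10} = \frac{\mathrm{DCG@10}}{\mathrm{IDCG@10}}, \qquad \mathrm{DCG@10} = \sum_{i=1}^{10} \frac{rel_i}{\log_2(i + 1)}$$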
Nano BEIR
- Dataset: NanoBEIR_mean
- Evaluated with NanoBEIREvaluator with these parameters:

{
    "dataset_names": [
        "msmarco",
        "nfcorpus",
        "nq"
    ],
    "query_prompts": {
        "msmarco": "query: ",
        "nfcorpus": "query: ",
        "nq": "query: "
    },
    "corpus_prompts": {
        "msmarco": "document: ",
        "nfcorpus": "document: ",
        "nq": "document: "
    }
}
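The mean results below can be reproduced with the same evaluator. A minimal sketch (NanoBEIREvaluator fetches the small Nano BEIR subsets from the Hugging Face Hub):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("tomaarsen/LFM2-350M-nq-prompts")
evaluator = NanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    query_prompts={"msmarco": "query: ", "nfcorpus": "query: ", "nq": "query: "},
    corpus_prompts={"msmarco": "document: ", "nfcorpus": "document: ", "nq": "document: "},
)
results = evaluator(model)
print(results["NanoBEIR_mean_cosine_ndcg@10"])  # the headline mean NDCG@10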
| Metric              | Value  |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.3867 |
| cosine_accuracy@3   | 0.5467 |
| cosine_accuracy@5   | 0.6667 |
| cosine_accuracy@10  | 0.7467 |
| cosine_precision@1  | 0.3867 |
| cosine_precision@3  | 0.2467 |
| cosine_precision@5  | 0.2027 |
| cosine_precision@10 | 0.142  |
| cosine_recall@1     | 0.2577 |
| cosine_recall@3     | 0.3872 |
| cosine_recall@5     | 0.4858 |
| cosine_recall@10    | 0.551  |
| cosine_ndcg@10      | 0.4822 |
| cosine_mrr@10       | 0.4958 |
| cosine_map@100      | 0.3776 |
Training Details
Training Dataset
natural-questions
Evaluation Dataset
natural-questions
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- prompts: {'query': 'query: ', 'answer': 'document: '}
- batch_sampler: no_duplicates
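Put together, these non-default hyperparameters correspond to a training script along the following lines. This is a hedged sketch: the dataset id sentence-transformers/natural-questions, the held-out split, the output path, and the module construction are assumptions based on this card, and the loss is the CachedMultipleNegativesRankingLoss cited below.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    models,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Assumption: LFM2 backbone with last-token pooling, matching the architecture above
transformer = models.Transformer("LiquidAI/LFM2-350M")
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="lasttoken")
model = SentenceTransformer(modules=[transformer, pooling])

# Assumption: the (query, answer) pairs of sentence-transformers/natural-questions,
# with a small held-out split for the step-wise evaluation
dataset = load_dataset("sentence-transformers/natural-questions", split="train")
dataset = dataset.train_test_split(test_size=1_000, seed=12)

# In-batch negatives loss with gradient caching, which enables the large batch size
loss = CachedMultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="models/LFM2-350M-nq-prompts",  # hypothetical path
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
    prompts={"query": "query: ", "answer": "document: "},  # per-column prompts
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()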
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: {'query': 'query: ', 'answer': 'document: '}
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------|-----:|--------------:|----------------:|---------------------------:|----------------------------:|----------------------:|-----------------------------:|
| -1 | -1 | - | - | 0.0086 | 0.0233 | 0.0063 | 0.0128 |
| 0.0026 | 1 | 4.6189 | - | - | - | - | - |
| 0.0129 | 5 | 4.1284 | - | - | - | - | - |
| 0.0258 | 10 | 3.6638 | - | - | - | - | - |
| 0.0387 | 15 | 2.3118 | - | - | - | - | - |
| 0.0515 | 20 | 1.0986 | - | - | - | - | - |
| 0.0644 | 25 | 0.5063 | - | - | - | - | - |
| 0.0773 | 30 | 0.2891 | - | - | - | - | - |
| 0.0902 | 35 | 0.2138 | - | - | - | - | - |
| 0.1031 | 40 | 0.1967 | - | - | - | - | - |
| 0.1160 | 45 | 0.1745 | - | - | - | - | - |
| 0.1289 | 50 | 0.1479 | 0.1425 | 0.4927 | 0.3162 | 0.5375 | 0.4488 |
| 0.1418 | 55 | 0.1257 | - | - | - | - | - |
| 0.1546 | 60 | 0.1215 | - | - | - | - | - |
| 0.1675 | 65 | 0.1475 | - | - | - | - | - |
| 0.1804 | 70 | 0.1066 | - | - | - | - | - |
| 0.1933 | 75 | 0.1056 | - | - | - | - | - |
| 0.2062 | 80 | 0.1181 | - | - | - | - | - |
| 0.2191 | 85 | 0.118 | - | - | - | - | - |
| 0.2320 | 90 | 0.1031 | - | - | - | - | - |
| 0.2448 | 95 | 0.0775 | - | - | - | - | - |
| 0.2577 | 100 | 0.0906 | 0.1009 | 0.4791 | 0.3151 | 0.6007 | 0.4650 |
| 0.2706 | 105 | 0.0921 | - | - | - | - | - |
| 0.2835 | 110 | 0.1105 | - | - | - | - | - |
| 0.2964 | 115 | 0.0906 | - | - | - | - | - |
| 0.3093 | 120 | 0.1002 | - | - | - | - | - |
| 0.3222 | 125 | 0.0952 | - | - | - | - | - |
| 0.3351 | 130 | 0.0652 | - | - | - | - | - |
| 0.3479 | 135 | 0.079 | - | - | - | - | - |
| 0.3608 | 140 | 0.0951 | - | - | - | - | - |
| 0.3737 | 145 | 0.0918 | - | - | - | - | - |
| 0.3866 | 150 | 0.065 | 0.0772 | 0.5115 | 0.3070 | 0.6105 | 0.4763 |
| 0.3995 | 155 | 0.1065 | - | - | - | - | - |
| 0.4124 | 160 | 0.0871 | - | - | - | - | - |
| 0.4253 | 165 | 0.0623 | - | - | - | - | - |
| 0.4381 | 170 | 0.0771 | - | - | - | - | - |
| 0.4510 | 175 | 0.0795 | - | - | - | - | - |
| 0.4639 | 180 | 0.0814 | - | - | - | - | - |
| 0.4768 | 185 | 0.0794 | - | - | - | - | - |
| 0.4897 | 190 | 0.0744 | - | - | - | - | - |
| 0.5026 | 195 | 0.0612 | - | - | - | - | - |
| 0.5155 | 200 | 0.0684 | 0.0692 | 0.4818 | 0.3173 | 0.6161 | 0.4717 |
| 0.5284 | 205 | 0.0635 | - | - | - | - | - |
| 0.5412 | 210 | 0.0768 | - | - | - | - | - |
| 0.5541 | 215 | 0.0544 | - | - | - | - | - |
| 0.5670 | 220 | 0.0654 | - | - | - | - | - |
| 0.5799 | 225 | 0.0729 | - | - | - | - | - |
| 0.5928 | 230 | 0.0923 | - | - | - | - | - |
| 0.6057 | 235 | 0.0763 | - | - | - | - | - |
| 0.6186 | 240 | 0.0687 | - | - | - | - | - |
| 0.6314 | 245 | 0.0657 | - | - | - | - | - |
| 0.6443 | 250 | 0.0708 | 0.0643 | 0.4843 | 0.3152 | 0.6023 | 0.4673 |
| 0.6572 | 255 | 0.0555 | - | - | - | - | - |
| 0.6701 | 260 | 0.0792 | - | - | - | - | - |
| 0.6830 | 265 | 0.0681 | - | - | - | - | - |
| 0.6959 | 270 | 0.0855 | - | - | - | - | - |
| 0.7088 | 275 | 0.0788 | - | - | - | - | - |
| 0.7216 | 280 | 0.0631 | - | - | - | - | - |
| 0.7345 | 285 | 0.0676 | - | - | - | - | - |
| 0.7474 | 290 | 0.0536 | - | - | - | - | - |
| 0.7603 | 295 | 0.0814 | - | - | - | - | - |
| 0.7732 | 300 | 0.062 | 0.0606 | 0.4630 | 0.3235 | 0.6256 | 0.4707 |
| 0.7861 | 305 | 0.0777 | - | - | - | - | - |
| 0.7990 | 310 | 0.0801 | - | - | - | - | - |
| 0.8119 | 315 | 0.0566 | - | - | - | - | - |
| 0.8247 | 320 | 0.0711 | - | - | - | - | - |
| 0.8376 | 325 | 0.0643 | - | - | - | - | - |
| 0.8505 | 330 | 0.0422 | - | - | - | - | - |
| 0.8634 | 335 | 0.0614 | - | - | - | - | - |
| 0.8763 | 340 | 0.06 | - | - | - | - | - |
| 0.8892 | 345 | 0.0584 | - | - | - | - | - |
| 0.9021 | 350 | 0.0457 | 0.0583 | 0.4952 | 0.3214 | 0.6268 | 0.4811 |
| 0.9149 | 355 | 0.0838 | - | - | - | - | - |
| 0.9278 | 360 | 0.0657 | - | - | - | - | - |
| 0.9407 | 365 | 0.0658 | - | - | - | - | - |
| 0.9536 | 370 | 0.0757 | - | - | - | - | - |
| 0.9665 | 375 | 0.0603 | - | - | - | - | - |
| 0.9794 | 380 | 0.0647 | - | - | - | - | - |
| 0.9923 | 385 | 0.0575 | - | - | - | - | - |
| -1 | -1 | - | - | 0.4909 | 0.3236 | 0.6322 | 0.4822 |
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 1.043 kWh
- Carbon Emitted: 0.405 kg of CO2
- Hours Used: 3.425 hours
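The CodeCarbon figures above come from an emissions tracker wrapped around the training run; the standard pattern is roughly:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training ...
emissions = tracker.stop()  # estimated emissions in kg of CO2-equivalent
print(f"{emissions:.3f} kg CO2eq")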
Training Hardware
- On Cloud: No
- GPU Model: 1 x NVIDIA GeForce RTX 3090
- CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
- RAM Size: 31.78 GB
Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.1.0.dev0
- Transformers: 4.53.0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
CachedMultipleNegativesRankingLoss
@misc{gao2021scaling,
    title = {Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author = {Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year = {2021},
    eprint = {2101.06983},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}