
Example
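The snippet below loads the model with transformers, prompts it for an 8-bit adder design, and pulls the generated Verilog out of the response with a regular expression.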

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import re

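# load the model (bfloat16, automatic device placement) and its tokenizer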
model_name = "TabCanNotTab/SALV-Qwen2.5-Coder-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

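# natural-language design spec for the Verilog module we want generated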
prompt = """
Please act as a professional verilog designer.

Implement a module of an 8-bit adder with multiple bit-level adders in combinational logic. 

Module name:  
    adder_8bit               
Input ports:
    a[7:0]: 8-bit input operand A.
    b[7:0]: 8-bit input operand B.
    cin: Carry-in input.
Output ports:
    sum[7:0]: 8-bit output representing the sum of A and B.
    cout: Carry-out output.

Implementation:
The module utilizes a series of bit-level adders (full adders) to perform the addition operation.

Give me the complete code.
"""

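# wrap the prompt in the chat format expected by Qwen2.5 instruct models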
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

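# apply the chat template, tokenize, and move the inputs to the model's device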
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)

# inference
outputs = model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.5,
    top_p=0.95
)

# get response text
input_length = model_inputs.input_ids.shape[1]
generated_tokens = outputs[0][input_length:]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)

# get code text
pattern = r"```verilog\s*(.*?)\s*```"
matches = re.findall(pattern, response, re.DOTALL)
if matches:
    code = matches[-1]
    print(code)
else:
    print("No Verilog code found in the response!")
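
For reference, a successful response should end with a fenced verilog block, and the code extracted by the regex might look roughly like the ripple-carry sketch below. This is an illustrative hand-written example of the requested adder_8bit spec, not an actual output of the model:

module adder_8bit(
    input  [7:0] a,
    input  [7:0] b,
    input        cin,
    output [7:0] sum,
    output       cout
);
    // ripple-carry chain built from bit-level full adders
    wire [8:0] c;
    assign c[0] = cin;

    genvar i;
    generate
        for (i = 0; i < 8; i = i + 1) begin : fa
            assign sum[i] = a[i] ^ b[i] ^ c[i];
            assign c[i+1] = (a[i] & b[i]) | (a[i] & c[i]) | (b[i] & c[i]);
        end
    endgenerate

    assign cout = c[8];
endmodule
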
Safetensors · Model size: 8B params · Tensor type: BF16