---
license: openrail
inference:
  parameters:
    temperature: 0.7
    max_length: 24
datasets:
- Ateeqq/Title-Keywords-SEO
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- text-generation-inference
widget:
- text: >-
    generate title: Importance, Dataset, AI
  example_title: Example 1
- text: >-
    generate title: Amazon, Product, Business
  example_title: Example 2
- text: >-
    generate title: History, Computer, Software
  example_title: Example 3
---
# Generate Title using Keywords
Title Generator is an online tool that helps you create compelling titles for your content. Enter specific keywords or a brief description of the content, and it returns title suggestions that make the content more appealing.
Developed by https://exnrt.com
- Fine Tuned: T5-Base
- Parameters: 223M
- Train Dataset Length: 10,000
- Validation Dataset Length: 2,000
- Batch Size: 1
- Epochs: 2
- Train Loss: 1.6578
- Validation Loss: 1.8115
You can also use a `t5-small` version (77M params), available in the [mini](https://huggingface.co/Ateeqq/keywords-title-generator/tree/main/mini) folder.
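If you prefer the lighter checkpoint, it can be loaded with the standard `subfolder` argument of `from_pretrained`. This is a sketch that assumes the `mini` folder holds a complete checkpoint (tokenizer and model files), as the link above suggests:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def load_mini(device="cpu"):
    # "mini" is the subfolder linked above; subfolder is a standard
    # from_pretrained keyword for loading from a repo subdirectory.
    tokenizer = AutoTokenizer.from_pretrained("Ateeqq/keywords-title-generator", subfolder="mini")
    model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/keywords-title-generator", subfolder="mini").to(device)
    return tokenizer, model
```

Pass `token='your_token'` to both calls as well if your environment requires authentication.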
## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Fall back to CPU when no GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("Ateeqq/keywords-title-generator", token='your_token')
model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/keywords-title-generator", token='your_token').to(device)

def generate_title(keywords):
    input_ids = tokenizer(keywords, return_tensors="pt", padding="longest", truncation=True, max_length=24).input_ids.to(device)
    # Diverse beam search: 5 groups of 1 beam each, returning 5 candidates.
    outputs = model.generate(
        input_ids,
        num_beams=5,
        num_beam_groups=5,
        num_return_sequences=5,
        repetition_penalty=10.0,
        diversity_penalty=3.0,
        no_repeat_ngram_size=2,
        temperature=0.7,
        max_length=24
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

keywords = 'model, Fine-tuning, Machine Learning'
print(generate_title(keywords))
```
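The widget examples in the metadata above prepend a `generate title: ` task prefix to the keywords. Assuming the model was fine-tuned with that prefix (an inference from those examples, not stated elsewhere), a small helper keeps prompts consistent before tokenizing:

```python
def build_prompt(keywords: str) -> str:
    # "generate title: " matches the task prefix in the widget examples;
    # whether the model expects it at inference time is an assumption.
    return f"generate title: {keywords}"

print(build_prompt("History, Computer, Software"))
# generate title: History, Computer, Software
```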
### Output:
```
['How to Fine-tune Your Machine Learning Model for Better Performance',
'Fine-tuning your Machine Learning model with a simple technique',
'Using fine tuning to fine-tune your machine learning model',
'Machine Learning: Fine-tuning your model to fit the needs of machine learning',
'The Art of Fine-Tuning Your Machine Learning Model']
```
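Because `num_return_sequences=5`, each call returns five diverse candidates. A simple post-processing step can pick one; selecting the shortest, shown here on the output above, is just one illustrative heuristic (keyword coverage or a reranker would also work):

```python
candidates = [
    'How to Fine-tune Your Machine Learning Model for Better Performance',
    'Fine-tuning your Machine Learning model with a simple technique',
    'Using fine tuning to fine-tune your machine learning model',
    'Machine Learning: Fine-tuning your model to fit the needs of machine learning',
    'The Art of Fine-Tuning Your Machine Learning Model',
]

# Pick the most concise candidate by character count.
shortest = min(candidates, key=len)
print(shortest)  # The Art of Fine-Tuning Your Machine Learning Model
```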
### Disclaimer:
This model is provided under a non-exclusive, non-transferable license. This means you cannot freely redistribute it or sell the model itself; however, you can use the model for commercial purposes.