slug (string, 15 chars) | content (list, 1-129 items) | rawContent (string, 1-2k chars) | author (dict) | attachments (list, 0-49 items) | mentions (list, 0-49 items) | reactions (list, 0-12 items) | publishedAt (string, 24 chars) | updatedAt (string, 24 chars) | commentators (list, 0-47 items) | url (string, 25-46 chars) | totalUniqueImpressions (int64, 1-41.5k) | numComments (int64, 0-621) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
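A minimal sketch of how one might load and iterate records with this schema using the Hugging Face `datasets` library; the repository id `username/hf-posts` is a placeholder, not the actual dataset name, and the final consistency check is an assumption about how `content` relates to `rawContent` in the rows shown below.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual dataset name.
ds = load_dataset("username/hf-posts", split="train")

# Each record mirrors the columns above: a structured `content` list,
# the flattened `rawContent` string, author metadata, reactions, and counters.
post = ds[0]
print(post["slug"], post["publishedAt"], post["totalUniqueImpressions"])

# `content` is a list of typed segments (text, resource, link, mention, new_line, ...);
# concatenating each segment's `raw` field reassembles the plain post text.
plain_text = "".join(segment["raw"] for segment in post["content"] if segment["raw"])
print(plain_text == post["rawContent"])  # appears to hold for the rows shown here
```
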
175081092992561 | [
{
"type": "text",
"value": "Just updated ",
"raw": "Just updated ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/MohamedRashad/timm-leaderboard",
"resource": {
"type": "space",
"id": "MohamedRashad/timm-leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/MohamedRashad/timm-leaderboard",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " with fuzzy search for people who want to search for a certian vision model",
"raw": " with fuzzy search for people who want to search for a certian vision model",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Just updated https://huggingface.co/spaces/MohamedRashad/timm-leaderboard with fuzzy search for people who want to search for a certian vision model | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1628885133347-6116d0584ef9fdfbf45dc4d9.jpeg",
"fullname": "Mohamed Rashad",
"name": "MohamedRashad",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 140,
"isFollowing": false
} | [] | [] | [] | 2024-07-01T16:34:56.000Z | 2024-07-01T16:34:56.349Z | [] | /posts/MohamedRashad/175081092992561 | 806 | 0 |
756732380261553 | [
{
"type": "text",
"value": "Search Hugging Face datasets by column names with a new experimental API! This API allows you to:",
"raw": "Search Hugging Face datasets by column names with a new experimental API! This API allows you to:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Search for question-answering datasets that include context",
"raw": "- Search for question-answering datasets that include context",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Find alpaca-style datasets",
"raw": "- Find alpaca-style datasets",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Locate DPO datasets",
"raw": "- Locate DPO datasets",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Try it out here: ",
"raw": "Try it out here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/librarian-bots/dataset-column-search-api",
"resource": {
"type": "space",
"id": "librarian-bots/dataset-column-search-api",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/librarian-bots/dataset-column-search-api",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", or explore real-world applications in this notebook: ",
"raw": ", or explore real-world applications in this notebook: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/librarian-bots/dataset-column-search-api/blob/main/dataset_search_client_notebook.ipynb",
"resource": {
"type": "space",
"id": "librarian-bots/dataset-column-search-api",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/librarian-bots/dataset-column-search-api/blob/main/dataset_search_client_notebook.ipynb",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Search Hugging Face datasets by column names with a new experimental API! This API allows you to:
- Search for question-answering datasets that include context
- Find alpaca-style datasets
- Locate DPO datasets
Try it out here: https://huggingface.co/spaces/librarian-bots/dataset-column-search-api, or explore real-world applications in this notebook: https://huggingface.co/spaces/librarian-bots/dataset-column-search-api/blob/main/dataset_search_client_notebook.ipynb | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 404,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"clem",
"osanseviero",
"merterbak",
"John6666",
"louisbrulenaudet"
],
"count": 5
},
{
"reaction": "👍",
"users": [
"fffiloni"
],
"count": 1
}
] | 2024-07-01T16:33:50.000Z | 2024-07-01T16:33:50.207Z | [] | /posts/davanstrien/756732380261553 | 2,096 | 0 |
701707924042367 | [
{
"type": "text",
"value": "Real-time DEtection Transformer (RT-DETR) landed in transformers 🤩 with Apache 2.0 license 😍",
"raw": "Real-time DEtection Transformer (RT-DETR) landed in transformers 🤩 with Apache 2.0 license 😍",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔖 models: ",
"raw": "🔖 models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/PekingU",
"resource": null,
"url": null,
"href": "https://huggingface.co/PekingU",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔖 demo: ",
"raw": "🔖 demo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/merve/RT-DETR-tracking-coco",
"resource": {
"type": "space",
"id": "merve/RT-DETR-tracking-coco",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/merve/RT-DETR-tracking-coco",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📝 paper: ",
"raw": "📝 paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2304.08069",
"resource": {
"type": "paper",
"id": "2304.08069",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2304.08069",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "DETRs Beat YOLOs on Real-time Object Detection (2304.08069)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📖 notebook: ",
"raw": "📖 notebook: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/merveenoyan/example_notebooks/blob/main/RT_DETR_Notebook.ipynb",
"resource": null,
"url": null,
"href": "https://github.com/merveenoyan/example_notebooks/blob/main/RT_DETR_Notebook.ipynb",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "YOLO models are known to be super fast for real-time computer vision, but they have a downside with being volatile to NMS 🥲",
"raw": "YOLO models are known to be super fast for real-time computer vision, but they have a downside with being volatile to NMS 🥲",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Transformer-based models on the other hand are computationally not as efficient 🥲",
"raw": "Transformer-based models on the other hand are computationally not as efficient 🥲",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Isn't there something in between? Enter RT-DETR!",
"raw": "Isn't there something in between? Enter RT-DETR!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The authors combined CNN backbone, multi-stage hybrid decoder (combining convs and attn) with a transformer decoder. In the paper, authors also claim one can adjust speed by changing decoder layers without retraining altogether. ",
"raw": "The authors combined CNN backbone, multi-stage hybrid decoder (combining convs and attn) with a transformer decoder. In the paper, authors also claim one can adjust speed by changing decoder layers without retraining altogether. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The authors find out that the model performs better in terms of speed and accuracy compared to the previous state-of-the-art. 🤩",
"raw": "The authors find out that the model performs better in terms of speed and accuracy compared to the previous state-of-the-art. 🤩",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Real-time DEtection Transformer (RT-DETR) landed in transformers 🤩 with Apache 2.0 license 😍
🔖 models: https://huggingface.co/PekingU
🔖 demo: https://huggingface.co/spaces/merve/RT-DETR-tracking-coco
📝 paper: https://huggingface.co/papers/2304.08069
📖 notebook: https://github.com/merveenoyan/example_notebooks/blob/main/RT_DETR_Notebook.ipynb
YOLO models are known to be super fast for real-time computer vision, but they have a downside with being volatile to NMS 🥲
Transformer-based models on the other hand are computationally not as efficient 🥲
Isn't there something in between? Enter RT-DETR!
The authors combined CNN backbone, multi-stage hybrid decoder (combining convs and attn) with a transformer decoder. In the paper, authors also claim one can adjust speed by changing decoder layers without retraining altogether.
The authors find out that the model performs better in terms of speed and accuracy compared to the previous state-of-the-art. 🤩
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/YxCWX0s_aPy5exTVxOMTB.jpeg"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"merterbak",
"hllj",
"clem",
"osanseviero",
"multimodalart",
"not-lain",
"netynet",
"sudzdpn",
"louisbrulenaudet",
"mathieu-chauvet",
"Kabil007",
"Rbrq"
],
"count": 12
},
{
"reaction": "👍",
"users": [
"talkative",
"Tom-Neverwinter",
"ooaykac",
"GPT007"
],
"count": 4
}
] | 2024-07-01T15:20:56.000Z | 2024-07-01T15:20:56.270Z | [] | /posts/merve/701707924042367 | 4,992 | 0 |
736464349744598 | [
{
"type": "text",
"value": "**How I train a LoRA: m3lt style training overview**",
"raw": "**How I train a LoRA: m3lt style training overview**",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've just written an article that takes a step by step approach to outlining the method that I used to train the 'm3lt' lora, a blended style model. ",
"raw": "I've just written an article that takes a step by step approach to outlining the method that I used to train the 'm3lt' lora, a blended style model. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've used the LoRA Ease trainer by ",
"raw": "I've used the LoRA Ease trainer by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@multimodalart",
"resource": null,
"url": null,
"href": null,
"user": "multimodalart",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " :D",
"raw": " :D",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/alvdansen/training-lora-m3lt",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/alvdansen/training-lora-m3lt",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/multimodalart/lora-ease",
"resource": {
"type": "space",
"id": "multimodalart/lora-ease",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/multimodalart/lora-ease",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | **How I train a LoRA: m3lt style training overview**
I've just written an article that takes a step by step approach to outlining the method that I used to train the 'm3lt' lora, a blended style model.
I've used the LoRA Ease trainer by @multimodalart :D
https://huggingface.co/blog/alvdansen/training-lora-m3lt
https://huggingface.co/spaces/multimodalart/lora-ease | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/1U0NMk0JzEOHJ-SSGQLIH.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/4w6SowJY_q1nEy_43i14u.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/RZgoHZpNkfo7k_4W0Rz1U.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649143001781-624bebf604abc7ebb01789af.jpeg",
"fullname": "Apolinário from multimodal AI art",
"name": "multimodalart",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 3149
}
] | [
{
"reaction": "❤️",
"users": [
"ijohn07",
"clem",
"blender66cat",
"not-lain",
"thedarktrumpet",
"ashemvets",
"glpx",
"IkeaMan",
"stinkyyy"
],
"count": 9
},
{
"reaction": "🚀",
"users": [
"victor",
"multimodalart",
"rishiguin",
"clem",
"not-lain",
"louisbrulenaudet"
],
"count": 6
},
{
"reaction": "🔥",
"users": [
"victor",
"multimodalart",
"clem",
"not-lain"
],
"count": 4
}
] | 2024-07-01T14:13:37.000Z | 2024-09-18T17:21:32.617Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2578,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6639d768d48a3da6e4c9fbe1/4u6zXlZkiu8YQbnZuP4hr.jpeg",
"fullname": "Kim",
"name": "KKKKKIIIIIIIMMMMM",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/181782227fdd2ae04a504af7c79a19bc.svg",
"fullname": "Maiya",
"name": "DualChimerra",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/alvdansen/736464349744598 | 3,090 | 5 |
215969412331682 | [
{
"type": "text",
"value": "5,000 new repos (models, datasets, spaces) are created EVERY DAY on HF now. The community is amazing!",
"raw": "5,000 new repos (models, datasets, spaces) are created EVERY DAY on HF now. The community is amazing!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 5,000 new repos (models, datasets, spaces) are created EVERY DAY on HF now. The community is amazing! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1734,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"umuthopeyildirim",
"osanseviero",
"victor",
"tail-call",
"paulml",
"jeremy-london",
"brunatrevelin",
"Clausss",
"ZeroWw",
"xi0v",
"yuriachermann",
"merterbak",
"nroggendorff",
"sted97",
"Jaromir",
"not-lain",
"netynet",
"Heresynetwork",
"SixOpen",
"tarob0ba",
"3thn",
"digiplay",
"prithivMLmods",
"GPT007",
"Nymbo"
],
"count": 25
},
{
"reaction": "🚀",
"users": [
"victor",
"ngxson",
"Clausss",
"ZeroWw",
"xi0v",
"nold",
"yuriachermann",
"nroggendorff",
"InferenceIllusionist",
"sted97",
"danielus",
"not-lain",
"netynet",
"Heresynetwork",
"moock",
"3thn",
"kramp",
"GPT007",
"nampdn-ai",
"IAmTheCollector"
],
"count": 20
},
{
"reaction": "🤗",
"users": [
"digiplay",
"louisbrulenaudet",
"GPT007",
"emirhanbilgic"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"digiplay",
"GPT007"
],
"count": 2
},
{
"reaction": "🤝",
"users": [
"digiplay",
"GPT007"
],
"count": 2
},
{
"reaction": "😎",
"users": [
"GPT007"
],
"count": 1
}
] | 2024-07-01T12:40:49.000Z | 2024-07-01T12:40:49.355Z | [] | /posts/clem/215969412331682 | 5,769 | 0 |
799935689571130 | [
{
"type": "text",
"value": "🚨Exciting news for the Multilingual Synthetic Data Community!🚨",
"raw": "🚨Exciting news for the Multilingual Synthetic Data Community!🚨",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I’ve taken inspiration from the MAGPIE paper on Llama-3-8B-instruct and extended its capabilities. Here’s what’s new!",
"raw": "I’ve taken inspiration from the MAGPIE paper on Llama-3-8B-instruct and extended its capabilities. Here’s what’s new!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🗞 The MAGPIE paper showcased that if you use the instruction-tuned version (",
"raw": "🗞 The MAGPIE paper showcased that if you use the instruction-tuned version (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "inline_code",
"value": null,
"raw": "`Llama-3-8B-instruct`",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "Llama-3-8B-instruct",
"label": null
},
{
"type": "text",
"value": ") to generate synthetic instructions and then fine-tune the base version (",
"raw": ") to generate synthetic instructions and then fine-tune the base version (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "inline_code",
"value": null,
"raw": "`Llama-3-8B`",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "Llama-3-8B",
"label": null
},
{
"type": "text",
"value": ") on this dataset, you can improve even the it-tuned version",
"raw": ") on this dataset, you can improve even the it-tuned version",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🤔 While reading a script by Sebastian Raschka, PhD, I wondered: Could these advancements be replicated in other languages? Specifically, could they benefit non-English datasets?",
"raw": "🤔 While reading a script by Sebastian Raschka, PhD, I wondered: Could these advancements be replicated in other languages? Specifically, could they benefit non-English datasets?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🎉 And the answer is YES! At least for Spanish. I've successfully adapted the techniques for Spanish, proving the model's flexibility and multilingual capabilities.",
"raw": "🎉 And the answer is YES! At least for Spanish. I've successfully adapted the techniques for Spanish, proving the model's flexibility and multilingual capabilities.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "👩💻 To make this accessible, I created a basic script (heavily inspired by the Sebastian Raschka one) that allows you to generate similar datasets using ",
"raw": "👩💻 To make this accessible, I created a basic script (heavily inspired by the Sebastian Raschka one) that allows you to generate similar datasets using ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "inline_code",
"value": null,
"raw": "`ollama`",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "ollama",
"label": null
},
{
"type": "text",
"value": " models (initially phi and llama3) automatically and upload it to the Hugging Face Hub!",
"raw": " models (initially phi and llama3) automatically and upload it to the Hugging Face Hub!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "[Script](",
"raw": "[Script](",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://gist.github.com/mrm8488/4650a5e3cc45523798a527a3446eb312",
"resource": null,
"url": null,
"href": "https://gist.github.com/mrm8488/4650a5e3cc45523798a527a3446eb312",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔍 Explore the datasets 📚 generated using our new script! ",
"raw": "🔍 Explore the datasets 📚 generated using our new script! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- [Llama-3-8B](",
"raw": "- [Llama-3-8B](",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/datasets/mrm8488/dataset_llama3_5000_samples_es_4231_filtered",
"resource": null,
"url": null,
"href": "https://huggingface.co/datasets/mrm8488/dataset_llama3_5000_samples_es_4231_filtered",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- [Phi-3-medium](",
"raw": "- [Phi-3-medium](",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/datasets/mrm8488/dataset_phi3-medium_5000_samples_es_3906_filtered",
"resource": null,
"url": null,
"href": "https://huggingface.co/datasets/mrm8488/dataset_phi3-medium_5000_samples_es_3906_filtered",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- [Phi-3-mini](",
"raw": "- [Phi-3-mini](",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/datasets/mrm8488/dataset_phi3_5000_samples_es_3282_filtered",
"resource": null,
"url": null,
"href": "https://huggingface.co/datasets/mrm8488/dataset_phi3_5000_samples_es_3282_filtered",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Note: These datasets have basic filtering. Apply additional quality filters before using them to fine-tune large language models.",
"raw": "Note: These datasets have basic filtering. Apply additional quality filters before using them to fine-tune large language models.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Inspiration and base script:",
"raw": "Inspiration and base script:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/rasbt/LLMs-from-scratch/blob/main/ch07/05_dataset-generation/llama3-ollama.ipynb",
"resource": null,
"url": null,
"href": "https://github.com/rasbt/LLMs-from-scratch/blob/main/ch07/05_dataset-generation/llama3-ollama.ipynb",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.linkedin.com/feed/update/urn:li:activity:7210982019751661568/",
"resource": null,
"url": null,
"href": "https://www.linkedin.com/feed/update/urn:li:activity:7210982019751661568/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🚨Exciting news for the Multilingual Synthetic Data Community!🚨
I’ve taken inspiration from the MAGPIE paper on Llama-3-8B-instruct and extended its capabilities. Here’s what’s new!
🗞 The MAGPIE paper showcased that if you use the instruction-tuned version (`Llama-3-8B-instruct`) to generate synthetic instructions and then fine-tune the base version (`Llama-3-8B`) on this dataset, you can improve even the it-tuned version
🤔 While reading a script by Sebastian Raschka, PhD, I wondered: Could these advancements be replicated in other languages? Specifically, could they benefit non-English datasets?
🎉 And the answer is YES! At least for Spanish. I've successfully adapted the techniques for Spanish, proving the model's flexibility and multilingual capabilities.
👩💻 To make this accessible, I created a basic script (heavily inspired by the Sebastian Raschka one) that allows you to generate similar datasets using `ollama` models (initially phi and llama3) automatically and upload it to the Hugging Face Hub!
[Script](https://gist.github.com/mrm8488/4650a5e3cc45523798a527a3446eb312)
🔍 Explore the datasets 📚 generated using our new script!
- [Llama-3-8B](https://huggingface.co/datasets/mrm8488/dataset_llama3_5000_samples_es_4231_filtered)
- [Phi-3-medium](https://huggingface.co/datasets/mrm8488/dataset_phi3-medium_5000_samples_es_3906_filtered)
- [Phi-3-mini](https://huggingface.co/datasets/mrm8488/dataset_phi3_5000_samples_es_3282_filtered)
Note: These datasets have basic filtering. Apply additional quality filters before using them to fine-tune large language models.
Inspiration and base script:
https://github.com/rasbt/LLMs-from-scratch/blob/main/ch07/05_dataset-generation/llama3-ollama.ipynb
https://www.linkedin.com/feed/update/urn:li:activity:7210982019751661568/
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5e4318d616b09a31220980d6/24rMJ_vPh3gW9ZEmj64xr.png",
"fullname": "Manuel Romero",
"name": "mrm8488",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 2176,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"Davipar",
"osanseviero",
"apol",
"anakin87",
"davanstrien",
"victor",
"ucsahin",
"floschne",
"GPT007",
"vikas",
"Marvin73",
"pavaldeveloper",
"mrm8488",
"Taylor658",
"adorkin"
],
"count": 15
}
] | 2024-07-01T12:19:29.000Z | 2024-11-09T16:20:26.480Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2846,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5e4318d616b09a31220980d6/24rMJ_vPh3gW9ZEmj64xr.png",
"fullname": "Manuel Romero",
"name": "mrm8488",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 2176,
"isFollowing": false
},
{
"avatarUrl": "/avatars/53ebfcab852efd849a848a26dc65751c.svg",
"fullname": "elsatch",
"name": "elsatch",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 404,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fc6879e1c5ee87b1164876d/Tjnm_lv0Bq0gPbFOTDH6E.jpeg",
"fullname": "Huu Nguyen",
"name": "huu-ontocord",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 41,
"isFollowing": false
}
] | /posts/mrm8488/799935689571130 | 4,298 | 7 |
328570892675734 | [
{
"type": "text",
"value": "I have finished writing a blogpost about building an image-based retrieval system, This is one of the first-ever approaches to building such a pipeline using only open-source ",
"raw": "I have finished writing a blogpost about building an image-based retrieval system, This is one of the first-ever approaches to building such a pipeline using only open-source ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "models/libraries",
"raw": "models/libraries",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 🤗",
"raw": " 🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "You can checkout the blogpost in ",
"raw": "You can checkout the blogpost in ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/not-lain/image-retriever",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/not-lain/image-retriever",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " and the associated space at ",
"raw": " and the associated space at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/not-lain/image-retriever",
"resource": {
"type": "space",
"id": "not-lain/image-retriever",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/not-lain/image-retriever",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " .",
"raw": " .",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✨ If you want to request another blog post consider letting me know down below or you can reach out to me through any of my social media ",
"raw": "✨ If you want to request another blog post consider letting me know down below or you can reach out to me through any of my social media ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📖 Happy reading !",
"raw": "📖 Happy reading !",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I have finished writing a blogpost about building an image-based retrieval system, This is one of the first-ever approaches to building such a pipeline using only open-source models/libraries 🤗
You can checkout the blogpost in https://huggingface.co/blog/not-lain/image-retriever and the associated space at https://huggingface.co/spaces/not-lain/image-retriever .
✨ If you want to request another blog post consider letting me know down below or you can reach out to me through any of my social media
📖 Happy reading !
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/BRKGVgk_dJO34ZOi3Slb_.jpeg",
"fullname": "Lain",
"name": "not-lain",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 919,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/3SiOaP88eF6U9BIONFwFt.mp4"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/AqRy5202vEe2CtJnXSCqP.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"tail-call",
"YaTharThShaRma999",
"umutphp",
"John6666",
"MrOvkill",
"sted97",
"kleberbaum",
"vincentweisser",
"BigOne"
],
"count": 9
}
] | 2024-07-01T12:06:45.000Z | 2024-07-01T12:58:45.828Z | [] | /posts/not-lain/328570892675734 | 2,652 | 0 |
519364102923542 | [
{
"type": "text",
"value": "Yet another rewarding week in Open Source AI:",
"raw": "Yet another rewarding week in Open Source AI:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1. Google dropped Gemma 27B & 9B - The best open (commercially permissive) LLM out there, according to LYMSYS.",
"raw": "1. Google dropped Gemma 27B & 9B - The best open (commercially permissive) LLM out there, according to LYMSYS.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/google/gemma-2-release-667d6600fd5220e7b967f315",
"resource": {
"type": "collection",
"id": "google/gemma-2-release-667d6600fd5220e7b967f315",
"discussionNum": null
},
"url": "https://huggingface.co/collections/google/gemma-2-release-667d6600fd5220e7b967f315",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2. Mars5 TTS - Text to Speech with insane prosodies control & voice cloning.",
"raw": "2. Mars5 TTS - Text to Speech with insane prosodies control & voice cloning.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/CAMB-AI/MARS5-TTS",
"resource": {
"type": "model",
"id": "CAMB-AI/MARS5-TTS",
"discussionNum": null
},
"url": "https://huggingface.co/CAMB-AI/MARS5-TTS",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3. Meta shipped LLM Compiler - beats GPT 4 on code optimisation and compiler reasoning.",
"raw": "3. Meta shipped LLM Compiler - beats GPT 4 on code optimisation and compiler reasoning.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/facebook/llm-compiler-667c5b05557fe99a9edd25cb",
"resource": {
"type": "collection",
"id": "facebook/llm-compiler-667c5b05557fe99a9edd25cb",
"discussionNum": null
},
"url": "https://huggingface.co/collections/facebook/llm-compiler-667c5b05557fe99a9edd25cb",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "4. Arcee-Spark - Qwen2 7B (w/ merging) fine-tuned further to beat GPT 3.5 on MT Bench.",
"raw": "4. Arcee-Spark - Qwen2 7B (w/ merging) fine-tuned further to beat GPT 3.5 on MT Bench.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/arcee-ai/Arcee-Spark",
"resource": {
"type": "model",
"id": "arcee-ai/Arcee-Spark",
"discussionNum": null
},
"url": "https://huggingface.co/arcee-ai/Arcee-Spark",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "5. Gemini Nano out in the wild in Chrome - On device LLM with just 2 lines of code (fully offline)",
"raw": "5. Gemini Nano out in the wild in Chrome - On device LLM with just 2 lines of code (fully offline)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "6. Fal released a fully Open Source GAN based Super-Resolution model (with second version already cooking)",
"raw": "6. Fal released a fully Open Source GAN based Super-Resolution model (with second version already cooking)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/fal/AuraSR",
"resource": {
"type": "model",
"id": "fal/AuraSR",
"discussionNum": null
},
"url": "https://huggingface.co/fal/AuraSR",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "7. NYU release Cambrian 1 - Vision Multimodal LLM that beats pretty much all other closed source competition 8-34B model size",
"raw": "7. NYU release Cambrian 1 - Vision Multimodal LLM that beats pretty much all other closed source competition 8-34B model size",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/nyu-visionx",
"resource": null,
"url": null,
"href": "https://huggingface.co/nyu-visionx",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "And.. much more like Open LLM Leaderboard got a major update, LYMSYS released Chat Vision Arena, OpenAI released a paper on CriticGPT!",
"raw": "And.. much more like Open LLM Leaderboard got a major update, LYMSYS released Chat Vision Arena, OpenAI released a paper on CriticGPT!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "What a lovely week, can’t wait for the next to see what the community is up to! Put it down in comments if I missed something 🔥",
"raw": "What a lovely week, can’t wait for the next to see what the community is up to! Put it down in comments if I missed something 🔥",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Yet another rewarding week in Open Source AI:
1. Google dropped Gemma 27B & 9B - The best open (commercially permissive) LLM out there, according to LYMSYS.
https://huggingface.co/collections/google/gemma-2-release-667d6600fd5220e7b967f315
2. Mars5 TTS - Text to Speech with insane prosodies control & voice cloning.
https://huggingface.co/CAMB-AI/MARS5-TTS
3. Meta shipped LLM Compiler - beats GPT 4 on code optimisation and compiler reasoning.
https://huggingface.co/collections/facebook/llm-compiler-667c5b05557fe99a9edd25cb
4. Arcee-Spark - Qwen2 7B (w/ merging) fine-tuned further to beat GPT 3.5 on MT Bench.
https://huggingface.co/arcee-ai/Arcee-Spark
5. Gemini Nano out in the wild in Chrome - On device LLM with just 2 lines of code (fully offline)
6. Fal released a fully Open Source GAN based Super-Resolution model (with second version already cooking)
https://huggingface.co/fal/AuraSR
7. NYU release Cambrian 1 - Vision Multimodal LLM that beats pretty much all other closed source competition 8-34B model size
https://huggingface.co/nyu-visionx
And.. much more like Open LLM Leaderboard got a major update, LYMSYS released Chat Vision Arena, OpenAI released a paper on CriticGPT!
What a lovely week, can’t wait for the next to see what the community is up to! Put it down in comments if I missed something 🔥 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655385361868-61b85ce86eb1f2c5e6233736.jpeg",
"fullname": "Vaibhav Srivastav",
"name": "reach-vb",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 439,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"julien-c",
"netynet",
"m-ric",
"jeffboudier",
"SixOpen",
"hsnugavin",
"brandonisntafraid",
"pcuenq",
"louisbrulenaudet",
"victor",
"polinaeterna",
"apolloparty",
"clem",
"tomaarsen",
"ogimgio",
"qnixsynapse",
"linoyts",
"John6666",
"Nelathan",
"ruben-wleon",
"ddiddi",
"Joseph717171"
],
"count": 22
},
{
"reaction": "👍",
"users": [
"dashfunnydashdash",
"ddiddi",
"Joseph717171",
"bradshimmin"
],
"count": 4
}
] | 2024-07-01T08:37:06.000Z | 2024-07-01T16:16:12.171Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
}
] | /posts/reach-vb/519364102923542 | 5,256 | 1 |
355530853393852 | [
{
"type": "text",
"value": "The paper is now available at ICML 2024. This paper introduces a foundational approach to analyze deep reinforcement learning decision making. Truly excited to share these results! ",
"raw": "The paper is now available at ICML 2024. This paper introduces a foundational approach to analyze deep reinforcement learning decision making. Truly excited to share these results! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://openreview.net/pdf?id=s9RKqT7jVM",
"resource": null,
"url": null,
"href": "https://openreview.net/pdf?id=s9RKqT7jVM",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "HF: ",
"raw": "HF: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2406.16979",
"resource": {
"type": "paper",
"id": "2406.16979",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2406.16979",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Understanding and Diagnosing Deep Reinforcement Learning (2406.16979)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Twitter: ",
"raw": "Twitter: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://x.com/EzgiKorkmazAI/status/1798990328390996002",
"resource": null,
"url": null,
"href": "https://x.com/EzgiKorkmazAI/status/1798990328390996002",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | The paper is now available at ICML 2024. This paper introduces a foundational approach to analyze deep reinforcement learning decision making. Truly excited to share these results!
Paper: https://openreview.net/pdf?id=s9RKqT7jVM
HF: https://huggingface.co/papers/2406.16979
Twitter: https://x.com/EzgiKorkmazAI/status/1798990328390996002 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/667c1a5acb6800a191024eb9/AqL8mQZsZjpZKi9FxtkIH.png",
"fullname": "Ezgi Korkmaz",
"name": "ezgikorkmaz",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 32,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/667c1a5acb6800a191024eb9/_CcQ0OnkjMiJd5ZhDLNTa.png"
}
] | [] | [
{
"reaction": "🚀",
"users": [
"jonahbc",
"merterbak"
],
"count": 2
},
{
"reaction": "👀",
"users": [
"TroglodyteDerivations"
],
"count": 1
}
] | 2024-07-01T08:24:55.000Z | 2024-07-01T08:24:55.381Z | [] | /posts/ezgikorkmaz/355530853393852 | 1,308 | 0 |
192985344026510 | [
{
"type": "text",
"value": "Announcing the creation of the \"HF for Legal\" organization, an open-source community dedicated to demystifying language models for legal professionals 🤗",
"raw": "Announcing the creation of the \"HF for Legal\" organization, an open-source community dedicated to demystifying language models for legal professionals 🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Whether you're a practicing attorney, a legal scholar, or a technologist interested in legal applications of AI, HF for Legal may be your hub for exploration, learning, and free innovation ⚗️",
"raw": "Whether you're a practicing attorney, a legal scholar, or a technologist interested in legal applications of AI, HF for Legal may be your hub for exploration, learning, and free innovation ⚗️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "On the occasion of this launch, you'll be able to find several notebooks I've been developing over the last few months for TSDAE pre-training of embedding models, the generation of indexes for semantic search, based on the formidable work of ",
"raw": "On the occasion of this launch, you'll be able to find several notebooks I've been developing over the last few months for TSDAE pre-training of embedding models, the generation of indexes for semantic search, based on the formidable work of ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@tomaarsen",
"resource": null,
"url": null,
"href": null,
"user": "tomaarsen",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " and ",
"raw": " and ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@nreimers",
"resource": null,
"url": null,
"href": null,
"user": "nreimers",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", adapted to the field of French law, or the addition of information retrieval tasks to the MTEB.",
"raw": ", adapted to the field of French law, or the addition of information retrieval tasks to the MTEB.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Join us in our mission to make AI more accessible and understandable for the legal world, ensuring that the power of language models can be harnessed effectively and ethically.",
"raw": "Join us in our mission to make AI more accessible and understandable for the legal world, ensuring that the power of language models can be harnessed effectively and ethically.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Link to the org: ",
"raw": "Link to the org: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/HFforLegal",
"resource": null,
"url": null,
"href": "https://huggingface.co/HFforLegal",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Special thanks to ",
"raw": "Special thanks to ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@clem",
"resource": null,
"url": null,
"href": null,
"user": "clem",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " for encouraging me to start this organization. Let's hope we can bring together all the enthusiasts who work in this field.",
"raw": " for encouraging me to start this organization. Let's hope we can bring together all the enthusiasts who work in this field.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Let's code and share together! 🚀🔗",
"raw": "Let's code and share together! 🚀🔗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Announcing the creation of the "HF for Legal" organization, an open-source community dedicated to demystifying language models for legal professionals 🤗
Whether you're a practicing attorney, a legal scholar, or a technologist interested in legal applications of AI, HF for Legal may be your hub for exploration, learning, and free innovation ⚗️
On the occasion of this launch, you'll be able to find several notebooks I've been developing over the last few months for TSDAE pre-training of embedding models, the generation of indexes for semantic search, based on the formidable work of @tomaarsen and @nreimers, adapted to the field of French law, or the addition of information retrieval tasks to the MTEB.
Join us in our mission to make AI more accessible and understandable for the legal world, ensuring that the power of language models can be harnessed effectively and ethically.
Link to the org: https://huggingface.co/HFforLegal
Special thanks to @clem for encouraging me to start this organization. Let's hope we can bring together all the enthusiasts who work in this field.
Let's code and share together! 🚀🔗 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6459fa0f5b3111fbe83286e1/UhCa7JNbtTjC6dgOjZtH0.jpeg",
"fullname": "Louis Brulé Naudet",
"name": "louisbrulenaudet",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 176,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6459fa0f5b3111fbe83286e1/M2C22f9PhTnPv9TbmNTRm.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1734
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1596792577829-5eff4688ff69163f6f59e66c.jpeg",
"fullname": "Nils Reimers",
"name": "nreimers",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 78
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png",
"fullname": "Tom Aarsen",
"name": "tomaarsen",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 1045
}
] | [
{
"reaction": "🔥",
"users": [
"tomaarsen",
"clem",
"not-lain",
"netynet",
"GFA-D2",
"ucalyptus",
"Joseph717171",
"Elizezen",
"jambracken",
"vincentweisser",
"ruben-wleon",
"TuringsSolutions",
"nroggendorff"
],
"count": 13
},
{
"reaction": "❤️",
"users": [
"Joseph717171",
"TuringsSolutions"
],
"count": 2
},
{
"reaction": "🤗",
"users": [
"Joseph717171",
"nroggendorff"
],
"count": 2
}
] | 2024-07-01T07:56:18.000Z | 2024-07-01T07:56:18.525Z | [] | /posts/louisbrulenaudet/192985344026510 | 2,932 | 0 |
701696175282841 | [
{
"type": "text",
"value": "How to alter the behavior of a Language Model without fine-tuning or prompting? Say hello to 🎤 yo-Llama 🦙! ",
"raw": "How to alter the behavior of a Language Model without fine-tuning or prompting? Say hello to 🎤 yo-Llama 🦙! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model ",
"raw": "Model ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/anakin87/yo-Llama-3-8B-Instruct",
"resource": {
"type": "model",
"id": "anakin87/yo-Llama-3-8B-Instruct",
"discussionNum": null
},
"url": "https://huggingface.co/anakin87/yo-Llama-3-8B-Instruct",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This experiment steers Llama-3-8B-Instruct to respond in a rap style.",
"raw": "This experiment steers Llama-3-8B-Instruct to respond in a rap style.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "How? Amplifying the rap direction in the activation space. 😎",
"raw": "How? Amplifying the rap direction in the activation space. 😎",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "𝐖𝐡𝐚𝐭 𝐬𝐩𝐚𝐫𝐤𝐞𝐝 𝐭𝐡𝐢𝐬 𝐢𝐝𝐞𝐚?",
"raw": "𝐖𝐡𝐚𝐭 𝐬𝐩𝐚𝐫𝐤𝐞𝐝 𝐭𝐡𝐢𝐬 𝐢𝐝𝐞𝐚?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Lately, I got interested in mechanistic interpretability of LLMs.",
"raw": "Lately, I got interested in mechanistic interpretability of LLMs.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "💡 A recent paper, \"Refusal in Language Models Is Mediated by a Single Direction,\" showed how to find the refusal direction in the activation space of Chat Language Models and either erase or amplify it.",
"raw": "💡 A recent paper, \"Refusal in Language Models Is Mediated by a Single Direction,\" showed how to find the refusal direction in the activation space of Chat Language Models and either erase or amplify it.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "A clever jailbreak method for open weights models.",
"raw": "A clever jailbreak method for open weights models.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Then, ",
"raw": "Then, ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@failspy",
"resource": null,
"url": null,
"href": null,
"user": "failspy",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " took it a step further by modifying the models to amplify different traits, such as making a model seem grumpy or irritable.",
"raw": " took it a step further by modifying the models to amplify different traits, such as making a model seem grumpy or irritable.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "𝐇𝐨𝐰 𝐝𝐢𝐝 𝐈 𝐜𝐫𝐞𝐚𝐭𝐞 𝐲𝐨-𝐋𝐥𝐚𝐦𝐚?",
"raw": "𝐇𝐨𝐰 𝐝𝐢𝐝 𝐈 𝐜𝐫𝐞𝐚𝐭𝐞 𝐲𝐨-𝐋𝐥𝐚𝐦𝐚?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "(📓 notebook in the HF repository, heavily inspired by Failspy's work)",
"raw": "(📓 notebook in the HF repository, heavily inspired by Failspy's work)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1️⃣ Load the Llama-3-8B-Instruct model.",
"raw": "1️⃣ Load the Llama-3-8B-Instruct model.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2️⃣ Load 1024 examples from Alpaca (instruction dataset).",
"raw": "2️⃣ Load 1024 examples from Alpaca (instruction dataset).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3️⃣ Prepare a system prompt to make the original model act like a rapper.",
"raw": "3️⃣ Prepare a system prompt to make the original model act like a rapper.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "4️⃣ Run inference on the examples, with and without the system prompt, and cache the activations.",
"raw": "4️⃣ Run inference on the examples, with and without the system prompt, and cache the activations.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "5️⃣ Compute the rap feature directions (one for each layer) from the activations.",
"raw": "5️⃣ Compute the rap feature directions (one for each layer) from the activations.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "6️⃣ Apply the feature directions one by one, checking the results on some examples.",
"raw": "6️⃣ Apply the feature directions one by one, checking the results on some examples.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "7️⃣ Pick the best-performing feature direction.",
"raw": "7️⃣ Pick the best-performing feature direction.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "8️⃣ Apply this feature direction and voilà!",
"raw": "8️⃣ Apply this feature direction and voilà!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "yo-Llama-3-8B-Instruct is born! 🥳🎶",
"raw": "yo-Llama-3-8B-Instruct is born! 🥳🎶",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This was a fun experiment. ",
"raw": "This was a fun experiment. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📚 Resources",
"raw": "📚 Resources",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Refusal in Language Models Is Mediated by a Single Direction - ",
"raw": "Refusal in Language Models Is Mediated by a Single Direction - ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2406.11717",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2406.11717",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Uncensor any LLM with abliteration: great practical blog post by ",
"raw": "Uncensor any LLM with abliteration: great practical blog post by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@mlabonne",
"resource": null,
"url": null,
"href": null,
"user": "mlabonne",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/mlabonne/abliteration",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/mlabonne/abliteration",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Practical materials by ",
"raw": "Practical materials by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@failspy",
"resource": null,
"url": null,
"href": null,
"user": "failspy",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- abliterator library ",
"raw": "- abliterator library ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/FailSpy/abliterator",
"resource": null,
"url": null,
"href": "https://github.com/FailSpy/abliterator",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Llama-MopeyMule-3-8B-Instruct model (+ notebook) ",
"raw": "- Llama-MopeyMule-3-8B-Instruct model (+ notebook) ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule",
"resource": {
"type": "model",
"id": "failspy/Llama-3-8B-Instruct-MopeyMule",
"discussionNum": null
},
"url": "https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | How to alter the behavior of a Language Model without fine-tuning or prompting? Say hello to 🎤 yo-Llama 🦙!
Model https://huggingface.co/anakin87/yo-Llama-3-8B-Instruct
This experiment steers Llama-3-8B-Instruct to respond in a rap style.
How? Amplifying the rap direction in the activation space. 😎
𝐖𝐡𝐚𝐭 𝐬𝐩𝐚𝐫𝐤𝐞𝐝 𝐭𝐡𝐢𝐬 𝐢𝐝𝐞𝐚?
Lately, I got interested in mechanistic interpretability of LLMs.
💡 A recent paper, "Refusal in Language Models Is Mediated by a Single Direction," showed how to find the refusal direction in the activation space of Chat Language Models and either erase or amplify it.
A clever jailbreak method for open weights models.
Then, @failspy took it a step further by modifying the models to amplify different traits, such as making a model seem grumpy or irritable.
𝐇𝐨𝐰 𝐝𝐢𝐝 𝐈 𝐜𝐫𝐞𝐚𝐭𝐞 𝐲𝐨-𝐋𝐥𝐚𝐦𝐚?
(📓 notebook in the HF repository, heavily inspired by Failspy's work)
1️⃣ Load the Llama-3-8B-Instruct model.
2️⃣ Load 1024 examples from Alpaca (instruction dataset).
3️⃣ Prepare a system prompt to make the original model act like a rapper.
4️⃣ Run inference on the examples, with and without the system prompt, and cache the activations.
5️⃣ Compute the rap feature directions (one for each layer) from the activations.
6️⃣ Apply the feature directions one by one, checking the results on some examples.
7️⃣ Pick the best-performing feature direction.
8️⃣ Apply this feature direction and voilà!
yo-Llama-3-8B-Instruct is born! 🥳🎶
This was a fun experiment.
📚 Resources
Refusal in Language Models Is Mediated by a Single Direction - https://arxiv.org/abs/2406.11717
Uncensor any LLM with abliteration: great practical blog post by @mlabonne https://huggingface.co/blog/mlabonne/abliteration
Practical materials by @failspy
- abliterator library https://github.com/FailSpy/abliterator
- Llama-MopeyMule-3-8B-Instruct model (+ notebook) https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626505d493e0b04d75710566/9rfJc9ORXU9J5a42Ev3v6.png",
"fullname": "Stefano Fiorucci",
"name": "anakin87",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 66,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626505d493e0b04d75710566/LZVb3o2aEg8ZKj4vBLnKE.gif"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6617589592abaae4ecc0a272/kz5CJg8gfQTnXchGQe-NV.png",
"fullname": "fs",
"name": "failspy",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 387
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b8e2ba285851687028d395/JtUGAwVh_4cDEsjNcfpye.png",
"fullname": "Maxime Labonne",
"name": "mlabonne",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3452
}
] | [
{
"reaction": "❤️",
"users": [
"mlabonne",
"diwank",
"YaTharThShaRma999",
"Korakoe"
],
"count": 4
}
] | 2024-07-01T07:40:13.000Z | 2024-07-01T07:42:22.882Z | [] | /posts/anakin87/701696175282841 | 1,034 | 0 |
756717091740072 | [
{
"type": "text",
"value": "🚀 Llama-3-ELYZA-JP-8B",
"raw": "🚀 Llama-3-ELYZA-JP-8B",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "ELYZA, Inc. has developed two large language models (LLMs) for Japanese called \"Llama-3-ELYZA-JP-70B\" with 70 billion parameters and \"Llama-3-ELYZA-JP-8B\" with 8 billion parameters, based on Meta's \"Llama 3\" series. These models have been fine-tuned through additional pre-training and post-training to improve Japanese language capabilities significantly.",
"raw": "ELYZA, Inc. has developed two large language models (LLMs) for Japanese called \"Llama-3-ELYZA-JP-70B\" with 70 billion parameters and \"Llama-3-ELYZA-JP-8B\" with 8 billion parameters, based on Meta's \"Llama 3\" series. These models have been fine-tuned through additional pre-training and post-training to improve Japanese language capabilities significantly.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Key Points:",
"raw": "Key Points:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Performance: ",
"raw": "Performance: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Llama-3-ELYZA-JP-70B surpasses global models such as GPT-4, Claude 3 Sonnet, and Gemini 1.5 Flash.",
"raw": "- Llama-3-ELYZA-JP-70B surpasses global models such as GPT-4, Claude 3 Sonnet, and Gemini 1.5 Flash.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Llama-3-ELYZA-JP-8B matches models like GPT-3.5 Turbo and Claude 3 Haiku despite having fewer parameters.",
"raw": "- Llama-3-ELYZA-JP-8B matches models like GPT-3.5 Turbo and Claude 3 Haiku despite having fewer parameters.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Availability:",
"raw": "Availability:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- The 8B model is available on Hugging Face Hub and can be used for both research and commercial purposes under the Llama 3 Community License.",
"raw": "- The 8B model is available on Hugging Face Hub and can be used for both research and commercial purposes under the Llama 3 Community License.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Methodology:",
"raw": "Methodology:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ELYZA enhanced the Japanese performance of the Llama 3 models through additional training with high-quality Japanese corpora and Instruction Tuning with proprietary datasets.",
"raw": "- ELYZA enhanced the Japanese performance of the Llama 3 models through additional training with high-quality Japanese corpora and Instruction Tuning with proprietary datasets.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Benchmarks:",
"raw": "Benchmarks:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Evaluations using ELYZA Tasks 100 and Japanese MT-Bench showed significant improvements in Japanese language generation.",
"raw": "- Evaluations using ELYZA Tasks 100 and Japanese MT-Bench showed significant improvements in Japanese language generation.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Inference Speed:",
"raw": "Inference Speed:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- To address inference speed issues due to model size, ELYZA implemented Speculative Decoding, which achieved up to 1.6 times faster inference for the 70B model.",
"raw": "- To address inference speed issues due to model size, ELYZA implemented Speculative Decoding, which achieved up to 1.6 times faster inference for the 70B model.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Overall, ELYZA's models demonstrate state-of-the-art performance in Japanese language tasks and are optimized for both efficiency and effectiveness.",
"raw": "Overall, ELYZA's models demonstrate state-of-the-art performance in Japanese language tasks and are optimized for both efficiency and effectiveness.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model URL:",
"raw": "Model URL:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B",
"resource": {
"type": "model",
"id": "elyza/Llama-3-ELYZA-JP-8B",
"discussionNum": null
},
"url": "https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-AWQ",
"resource": {
"type": "model",
"id": "elyza/Llama-3-ELYZA-JP-8B-AWQ",
"discussionNum": null
},
"url": "https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-AWQ",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-GGUF",
"resource": {
"type": "model",
"id": "elyza/Llama-3-ELYZA-JP-8B-GGUF",
"discussionNum": null
},
"url": "https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-GGUF",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Blog post (in Japanese):",
"raw": "Blog post (in Japanese):",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://note.com/elyza/n/n360b6084fdbd",
"resource": null,
"url": null,
"href": "https://note.com/elyza/n/n360b6084fdbd",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🚀 Llama-3-ELYZA-JP-8B
ELYZA, Inc. has developed two large language models (LLMs) for Japanese called "Llama-3-ELYZA-JP-70B" with 70 billion parameters and "Llama-3-ELYZA-JP-8B" with 8 billion parameters, based on Meta's "Llama 3" series. These models have been fine-tuned through additional pre-training and post-training to improve Japanese language capabilities significantly.
Key Points:
Performance:
- Llama-3-ELYZA-JP-70B surpasses global models such as GPT-4, Claude 3 Sonnet, and Gemini 1.5 Flash.
- Llama-3-ELYZA-JP-8B matches models like GPT-3.5 Turbo and Claude 3 Haiku despite having fewer parameters.
Availability:
- The 8B model is available on Hugging Face Hub and can be used for both research and commercial purposes under the Llama 3 Community License.
Methodology:
- ELYZA enhanced the Japanese performance of the Llama 3 models through additional training with high-quality Japanese corpora and Instruction Tuning with proprietary datasets.
Benchmarks:
- Evaluations using ELYZA Tasks 100 and Japanese MT-Bench showed significant improvements in Japanese language generation.
Inference Speed:
- To address inference speed issues due to model size, ELYZA implemented Speculative Decoding, which achieved up to 1.6 times faster inference for the 70B model.
Overall, ELYZA's models demonstrate state-of-the-art performance in Japanese language tasks and are optimized for both efficiency and effectiveness.
Model URL:
- https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B
- https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-AWQ
- https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-GGUF
Blog post (in Japanese):
https://note.com/elyza/n/n360b6084fdbd | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61a32e422172c41f121589d2/8jExNd-9fenpqw_Z1rvL6.jpeg",
"fullname": "Kaito Sugimoto",
"name": "kaisugi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 12,
"isFollowing": false
} | [] | [] | [] | 2024-07-01T04:03:13.000Z | 2024-07-01T04:05:07.428Z | [] | /posts/kaisugi/756717091740072 | 729 | 0 |
611371069193643 | [
{
"type": "text",
"value": "📢 I've tested google/Gemma-2-9b-it in Target Sentiment Analysis (TSA), in zero-shot learning mode on RuSentNE-2023 dataset with texts translated into English (🇺🇸).",
"raw": "📢 I've tested google/Gemma-2-9b-it in Target Sentiment Analysis (TSA), in zero-shot learning mode on RuSentNE-2023 dataset with texts translated into English (🇺🇸).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔎 Findings: The key contribution with the most recent Gemma-2 release is reasoning alignment between different langauges. This is basically the first model under 10B category which shows equal results in English and non-english texts. In the case of texts in English it performs similar to LLaMa-3-8B / Mistal-7B",
"raw": "🔎 Findings: The key contribution with the most recent Gemma-2 release is reasoning alignment between different langauges. This is basically the first model under 10B category which shows equal results in English and non-english texts. In the case of texts in English it performs similar to LLaMa-3-8B / Mistal-7B",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/google/gemma-2-9b-it",
"resource": {
"type": "model",
"id": "google/gemma-2-9b-it",
"discussionNum": null
},
"url": "https://huggingface.co/google/gemma-2-9b-it",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Benchmark: ",
"raw": "Benchmark: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/nicolay-r/RuSentNE-LLM-Benchmark",
"resource": null,
"url": null,
"href": "https://github.com/nicolay-r/RuSentNE-LLM-Benchmark",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 📢 I've tested google/Gemma-2-9b-it in Target Sentiment Analysis (TSA), in zero-shot learning mode on RuSentNE-2023 dataset with texts translated into English (🇺🇸).
🔎 Findings: The key contribution of the most recent Gemma-2 release is reasoning alignment between different languages. This is basically the first model under the 10B category which shows equal results in English and non-English texts. In the case of texts in English, it performs similarly to LLaMa-3-8B / Mistral-7B
Model: https://huggingface.co/google/gemma-2-9b-it
Benchmark: https://github.com/nicolay-r/RuSentNE-LLM-Benchmark | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64e62d11d27a8292c3637f86/aptDeBHpCJxcREj6KPLN1.jpeg",
"fullname": "Nicolay Rusnachenko",
"name": "nicolay-r",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 49,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64e62d11d27a8292c3637f86/GdSXOk3hPMJ0AbaxgE1yV.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64e62d11d27a8292c3637f86/Zf3gc6DD7wkxIpMMF_6uI.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"victor"
],
"count": 1
}
] | 2024-06-30T21:14:32.000Z | 2024-06-30T21:15:11.434Z | [] | /posts/nicolay-r/611371069193643 | 712 | 0 |
370398505829604 | [
{
"type": "text",
"value": "📅 Remember when at the beginning of the year ",
"raw": "📅 Remember when at the beginning of the year ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Google",
"resource": null,
"url": null,
"href": null,
"user": "Google",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " gave an update on knowledge distillation! Introducing a way of learning from Self-Generated Mistakes?",
"raw": " gave an update on knowledge distillation! Introducing a way of learning from Self-Generated Mistakes?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📊 Resulting in significant improvements across tasks:",
"raw": "📊 Resulting in significant improvements across tasks:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- 📄 2.1x in summarization",
"raw": "- 📄 2.1x in summarization",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- 🌐 1.7x in translation ",
"raw": "- 🌐 1.7x in translation ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- 🧠 1.9x in reasoning tasks",
"raw": "- 🧠 1.9x in reasoning tasks",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🚀 Well, it looks like Google wasn't messing around! According to the Gemma 2 tech report, knowledge distillation was used to pre-train the 9B model, while the 27B model was pre-trained from scratch.",
"raw": "🚀 Well, it looks like Google wasn't messing around! According to the Gemma 2 tech report, knowledge distillation was used to pre-train the 9B model, while the 27B model was pre-trained from scratch.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📈 For post-training, the Gemma 2 team generated completions from a stronger teacher model (unspecified in the report, but presumably Gemini Ultra), and then trained the student models on this synthetic data with SFT. This is quite common as seen in many open models, such as Zephyr and OpenHermes.",
"raw": "📈 For post-training, the Gemma 2 team generated completions from a stronger teacher model (unspecified in the report, but presumably Gemini Ultra), and then trained the student models on this synthetic data with SFT. This is quite common as seen in many open models, such as Zephyr and OpenHermes.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🤔 Sounds too good to be true? These models suffer from distribution mismatch between output sequences seen during training and those generated by the student during inference.",
"raw": "🤔 Sounds too good to be true? These models suffer from distribution mismatch between output sequences seen during training and those generated by the student during inference.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📰 This is where the January 2024 paper \"On-Policy Distillation of Language Models\" comes in...",
"raw": "📰 This is where the January 2024 paper \"On-Policy Distillation of Language Models\" comes in...",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔍 Gemma 2 team used “on-policy distillation,” where the student generates completions from the SFT prompts. These completions are then used to compute the KL divergence between the teacher’s and student’s logits. By minimizing the KL divergence throughout training, the student learns to model the behavior of the teacher accurately while also minimizing the train-inference mismatch.",
"raw": "🔍 Gemma 2 team used “on-policy distillation,” where the student generates completions from the SFT prompts. These completions are then used to compute the KL divergence between the teacher’s and student’s logits. By minimizing the KL divergence throughout training, the student learns to model the behavior of the teacher accurately while also minimizing the train-inference mismatch.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📚 Gem🔹 of a blog by ",
"raw": "📚 Gem🔹 of a blog by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@huggingface",
"resource": null,
"url": null,
"href": null,
"user": "huggingface",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " uncovering everything Gemma 2: ",
"raw": " uncovering everything Gemma 2: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/gemma2#knowledge-distillation",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/gemma2#knowledge-distillation",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📄 On-Policy Distillation of Language Models: ",
"raw": "📄 On-Policy Distillation of Language Models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2306.13649",
"resource": {
"type": "paper",
"id": "2306.13649",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2306.13649",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "On-Policy Distillation of Language Models: Learning from Self-Generated\n Mistakes (2306.13649)"
}
] | 📅 Remember when at the beginning of the year @Google gave an update on knowledge distillation, introducing a way of learning from Self-Generated Mistakes?
📊 Resulting in significant improvements across tasks:
- 📄 2.1x in summarization
- 🌐 1.7x in translation
- 🧠 1.9x in reasoning tasks
🚀 Well, it looks like Google wasn't messing around! According to the Gemma 2 tech report, knowledge distillation was used to pre-train the 9B model, while the 27B model was pre-trained from scratch.
📈 For post-training, the Gemma 2 team generated completions from a stronger teacher model (unspecified in the report, but presumably Gemini Ultra), and then trained the student models on this synthetic data with SFT. This is quite common as seen in many open models, such as Zephyr and OpenHermes.
🤔 Sounds too good to be true? These models suffer from distribution mismatch between output sequences seen during training and those generated by the student during inference.
📰 This is where the January 2024 paper "On-Policy Distillation of Language Models" comes in...
🔍 Gemma 2 team used “on-policy distillation,” where the student generates completions from the SFT prompts. These completions are then used to compute the KL divergence between the teacher’s and student’s logits. By minimizing the KL divergence throughout training, the student learns to model the behavior of the teacher accurately while also minimizing the train-inference mismatch.
📚 Gem🔹 of a blog by @huggingface uncovering everything Gemma 2: https://huggingface.co/blog/gemma2#knowledge-distillation
📄 On-Policy Distillation of Language Models: https://huggingface.co/papers/2306.13649 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/e5TcnWXzw0V2EUjg0yGKG.webp"
}
] | [] | [] | 2024-06-30T21:06:13.000Z | 2024-06-30T21:06:13.881Z | [] | /posts/singhsidhukuldeep/370398505829604 | 506 | 0 |
188611564555373 | [
{
"type": "text",
"value": "LLM-Assisted Patching of Polyfill Supply Chain Attack",
"raw": "LLM-Assisted Patching of Polyfill Supply Chain Attack",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "A recent supply chain attack on polyfill.io affected over 100,000 websites (see ",
"raw": "A recent supply chain attack on polyfill.io affected over 100,000 websites (see ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.patched.codes/blog/patching-the-polyfill-supply-chain-attack",
"resource": null,
"url": null,
"href": "https://www.patched.codes/blog/patching-the-polyfill-supply-chain-attack",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "). To address this issue, we show how developers can leverage Large Language Models (LLMs) for efficient vulnerability patching:",
"raw": "). To address this issue, we show how developers can leverage Large Language Models (LLMs) for efficient vulnerability patching:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1. Automated Detection: Using Semgrep rules (see ",
"raw": "1. Automated Detection: Using Semgrep rules (see ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://semgrep.dev/playground/r/KxUvD7w/asankhaya_personal_org.polyfill-compromise-copy",
"resource": null,
"url": null,
"href": "https://semgrep.dev/playground/r/KxUvD7w/asankhaya_personal_org.polyfill-compromise-copy",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") to identify vulnerable code.",
"raw": ") to identify vulnerable code.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2. LLM-Powered Patching: Utilizing Patchwork (",
"raw": "2. LLM-Powered Patching: Utilizing Patchwork (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/patched-codes/patchwork",
"resource": null,
"url": null,
"href": "https://github.com/patched-codes/patchwork",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "), an open-source solution that employs LLMs to automatically fix vulnerabilities.",
"raw": "), an open-source solution that employs LLMs to automatically fix vulnerabilities.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3. Custom Workflows: The \"Fixpolyfill\" patchflow (",
"raw": "3. Custom Workflows: The \"Fixpolyfill\" patchflow (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/patched-codes/patchwork-configs/tree/main/patchflows/Fixpolyfill",
"resource": null,
"url": null,
"href": "https://github.com/patched-codes/patchwork-configs/tree/main/patchflows/Fixpolyfill",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") , tailored for this specific attack, can be easily run across multiple repositories.",
"raw": ") , tailored for this specific attack, can be easily run across multiple repositories.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "4. Scalable Solutions: Options to scan and patch entire GitHub/GitLab organizations, with automated pull request generation.",
"raw": "4. Scalable Solutions: Options to scan and patch entire GitHub/GitLab organizations, with automated pull request generation.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "5. Rapid Response: LLM-assisted patching enables swift action to minimize damage from supply chain attacks.",
"raw": "5. Rapid Response: LLM-assisted patching enables swift action to minimize damage from supply chain attacks.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This approach demonstrates how LLMs can be effectively used to quickly respond to and remediate widespread security vulnerabilities in code.",
"raw": "This approach demonstrates how LLMs can be effectively used to quickly respond to and remediate widespread security vulnerabilities in code.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | LLM-Assisted Patching of Polyfill Supply Chain Attack
A recent supply chain attack on polyfill.io affected over 100,000 websites (see https://www.patched.codes/blog/patching-the-polyfill-supply-chain-attack). To address this issue, we show how developers can leverage Large Language Models (LLMs) for efficient vulnerability patching:
1. Automated Detection: Using Semgrep rules (see https://semgrep.dev/playground/r/KxUvD7w/asankhaya_personal_org.polyfill-compromise-copy) to identify vulnerable code.
2. LLM-Powered Patching: Utilizing Patchwork (https://github.com/patched-codes/patchwork), an open-source solution that employs LLMs to automatically fix vulnerabilities.
3. Custom Workflows: The "Fixpolyfill" patchflow (https://github.com/patched-codes/patchwork-configs/tree/main/patchflows/Fixpolyfill) , tailored for this specific attack, can be easily run across multiple repositories.
4. Scalable Solutions: Options to scan and patch entire GitHub/GitLab organizations, with automated pull request generation.
5. Rapid Response: LLM-assisted patching enables swift action to minimize damage from supply chain attacks.
This approach demonstrates how LLMs can be effectively used to quickly respond to and remediate widespread security vulnerabilities in code. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677134945205-62f32eab52ad88c930bb3f3b.png",
"fullname": "Asankhaya Sharma",
"name": "codelion",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 46,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/62f32eab52ad88c930bb3f3b/j9Jr3JuX0qTCp1qVBGOEr.png"
}
] | [] | [
{
"reaction": "🤯",
"users": [
"codelion",
"netynet"
],
"count": 2
},
{
"reaction": "👍",
"users": [
"Priyankvadaliya",
"codelion"
],
"count": 2
},
{
"reaction": "👀",
"users": [
"codelion"
],
"count": 1
},
{
"reaction": "🔥",
"users": [
"codelion"
],
"count": 1
}
] | 2024-06-30T12:13:06.000Z | 2024-06-30T12:13:06.499Z | [] | /posts/codelion/188611564555373 | 2,226 | 0 |
394252378846641 | [
{
"type": "text",
"value": "New LoRA Model!",
"raw": "New LoRA Model!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I trained this model on a new spot I'm really excited to share (soon!) ",
"raw": "I trained this model on a new spot I'm really excited to share (soon!) ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This Monday I will be posting my first beginning to end blog showing the tool I've used, dataset, captioning techniques, and parameters to finetune this LoRA.",
"raw": "This Monday I will be posting my first beginning to end blog showing the tool I've used, dataset, captioning techniques, and parameters to finetune this LoRA.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For now, check out the model in the link below.",
"raw": "For now, check out the model in the link below.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/alvdansen/m3lt",
"resource": {
"type": "model",
"id": "alvdansen/m3lt",
"discussionNum": null
},
"url": "https://huggingface.co/alvdansen/m3lt",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | New LoRA Model!
I trained this model on a new spot I'm really excited to share (soon!)
This Monday I will be posting my first beginning to end blog showing the tool I've used, dataset, captioning techniques, and parameters to finetune this LoRA.
For now, check out the model in the link below.
https://huggingface.co/alvdansen/m3lt | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/8nBz5_dZrYVH4E4ilbLER.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/GH61hOM5EFH9GvZeFgbq5.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/390NIDPGQ6MG5ZIPJAaJG.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/8kr0TXiioLqqCHb2eSNHB.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/dGhxd2G-LIoj6qIG8L0tJ.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/AVWs-06t8GQnHhearyN-U.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/8qL5a27KK5UCl7UubJQd0.jpeg"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"YaTharThShaRma999",
"multimodalart",
"not-lain",
"netynet",
"Limbicnation",
"nehalecky",
"alvdansen",
"9voltfan2009",
"HyperBlaze",
"seasnake",
"HarshitJoshi",
"adarshxs",
"AnkitAI",
"techdonCooper",
"afpro",
"victor",
"yuriachermann",
"louisbrulenaudet",
"beratcmn",
"ashemvets"
],
"count": 20
},
{
"reaction": "👍",
"users": [
"shamy777",
"victor",
"fffiloni"
],
"count": 3
},
{
"reaction": "❤️",
"users": [
"ijohn07",
"stinkyyy"
],
"count": 2
}
] | 2024-06-29T22:09:34.000Z | 2024-07-01T06:49:15.927Z | [
{
"avatarUrl": "/avatars/8473d30b909208e7dd5828620bcb4ce1.svg",
"fullname": "Wallow",
"name": "Viktor1233",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
},
{
"avatarUrl": "/avatars/79ea59365e351861711176b81708ed73.svg",
"fullname": "serveryang",
"name": "serveryang",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/alvdansen/394252378846641 | 5,730 | 5 |
807156614095198 | [
{
"type": "text",
"value": "How to Use SwarmUI & Stable Diffusion 3 on Cloud Services Kaggle (free), Massed Compute & RunPod : ",
"raw": "How to Use SwarmUI & Stable Diffusion 3 on Cloud Services Kaggle (free), Massed Compute & RunPod : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/XFUZof6Skkw",
"resource": null,
"url": null,
"href": "https://youtu.be/XFUZof6Skkw",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Tutorial link : ",
"raw": "Tutorial link : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/XFUZof6Skkw",
"resource": null,
"url": null,
"href": "https://youtu.be/XFUZof6Skkw",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It has manually written captions / subtitles and also video chapters.",
"raw": "It has manually written captions / subtitles and also video chapters.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If you are a GPU poor this is the video you need",
"raw": "If you are a GPU poor this is the video you need",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In this video, I demonstrate how to install and use #SwarmUI on cloud services. If you lack a powerful GPU or wish to harness more GPU power, this video is essential. You'll learn how to install and utilize SwarmUI, one of the most powerful Generative AI interfaces, on Massed Compute, RunPod, and Kaggle (which offers free dual T4 GPU access for 30 hours weekly). This tutorial will enable you to use SwarmUI on cloud GPU providers as easily and efficiently as on your local PC. Moreover, I will show how to use Stable Diffusion 3 (#SD3) on cloud. SwarmUI uses #ComfyUI backend.",
"raw": "In this video, I demonstrate how to install and use #SwarmUI on cloud services. If you lack a powerful GPU or wish to harness more GPU power, this video is essential. You'll learn how to install and utilize SwarmUI, one of the most powerful Generative AI interfaces, on Massed Compute, RunPod, and Kaggle (which offers free dual T4 GPU access for 30 hours weekly). This tutorial will enable you to use SwarmUI on cloud GPU providers as easily and efficiently as on your local PC. Moreover, I will show how to use Stable Diffusion 3 (#SD3) on cloud. SwarmUI uses #ComfyUI backend.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ ",
"raw": "🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.patreon.com/posts/stableswarmui-3-106135985",
"resource": null,
"url": null,
"href": "https://www.patreon.com/posts/stableswarmui-3-106135985",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔗 Windows Tutorial for Learn How to Use SwarmUI ➡️ ",
"raw": "🔗 Windows Tutorial for Learn How to Use SwarmUI ➡️ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/HKX8_F1Er_w",
"resource": null,
"url": null,
"href": "https://youtu.be/HKX8_F1Er_w",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔗 How to download models very fast to Massed Compute, RunPod and Kaggle and how to upload models or files to Hugging Face very fast tutorial ➡️ ",
"raw": "🔗 How to download models very fast to Massed Compute, RunPod and Kaggle and how to upload models or files to Hugging Face very fast tutorial ➡️ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/X5WVZ0NMaTg",
"resource": null,
"url": null,
"href": "https://youtu.be/X5WVZ0NMaTg",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔗 SECourses Discord ➡️ ",
"raw": "🔗 SECourses Discord ➡️ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://discord.com/servers/software-engineering-courses-secourses-772774097734074388",
"resource": null,
"url": null,
"href": "https://discord.com/servers/software-engineering-courses-secourses-772774097734074388",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔗 Stable Diffusion GitHub Repo (Please Star, Fork and Watch) ➡️ ",
"raw": "🔗 Stable Diffusion GitHub Repo (Please Star, Fork and Watch) ➡️ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/FurkanGozukara/Stable-Diffusion",
"resource": null,
"url": null,
"href": "https://github.com/FurkanGozukara/Stable-Diffusion",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Coupon Code for Massed Compute : SECourses",
"raw": "Coupon Code for Massed Compute : SECourses",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Coupon works on Alt Config RTX A6000 and also RTX A6000 GPUs",
"raw": "Coupon works on Alt Config RTX A6000 and also RTX A6000 GPUs",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | How to Use SwarmUI & Stable Diffusion 3 on Cloud Services Kaggle (free), Massed Compute & RunPod : https://youtu.be/XFUZof6Skkw
Tutorial link : https://youtu.be/XFUZof6Skkw
It has manually written captions / subtitles and also video chapters.
If you are a GPU poor this is the video you need
In this video, I demonstrate how to install and use #SwarmUI on cloud services. If you lack a powerful GPU or wish to harness more GPU power, this video is essential. You'll learn how to install and utilize SwarmUI, one of the most powerful Generative AI interfaces, on Massed Compute, RunPod, and Kaggle (which offers free dual T4 GPU access for 30 hours weekly). This tutorial will enable you to use SwarmUI on cloud GPU providers as easily and efficiently as on your local PC. Moreover, I will show how to use Stable Diffusion 3 (#SD3) on cloud. SwarmUI uses #ComfyUI backend.
🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ https://www.patreon.com/posts/stableswarmui-3-106135985
🔗 Windows Tutorial for Learn How to Use SwarmUI ➡️ https://youtu.be/HKX8_F1Er_w
🔗 How to download models very fast to Massed Compute, RunPod and Kaggle and how to upload models or files to Hugging Face very fast tutorial ➡️ https://youtu.be/X5WVZ0NMaTg
🔗 SECourses Discord ➡️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
🔗 Stable Diffusion GitHub Repo (Please Star, Fork and Watch) ➡️ https://github.com/FurkanGozukara/Stable-Diffusion
Coupon Code for Massed Compute : SECourses
Coupon works on Alt Config RTX A6000 and also RTX A6000 GPUs
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/Hdn_e91CWCWCbZ5NM-yJE.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/OdNz9P50htXhKs5jLpP-T.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/lax7r8Ggr4GLPFJWIkQro.png"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"alvdansen"
],
"count": 1
}
] | 2024-06-29T18:15:12.000Z | 2024-06-29T18:15:12.268Z | [] | /posts/MonsterMMORPG/807156614095198 | 1,133 | 0 |
946914898178824 | [
{
"type": "text",
"value": "Hello!",
"raw": "Hello!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've been playing with Claude, and we decided to tackle a real thorn in my side.",
"raw": "I've been playing with Claude, and we decided to tackle a real thorn in my side.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "\"The Truthiness Model\" - Analyze arbitrary input text for \"truthiness\", or likelihood of containing true information according to seed text.",
"raw": "\"The Truthiness Model\" - Analyze arbitrary input text for \"truthiness\", or likelihood of containing true information according to seed text.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "P.S. Yes, v1 was broken. I saw the loss rate going down and go excited. Anyway, it just needed some data and a rollback, me and Claude got WAY too carried away trying to tack on features.",
"raw": "P.S. Yes, v1 was broken. I saw the loss rate going down and go excited. Anyway, it just needed some data and a rollback, me and Claude got WAY too carried away trying to tack on features.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Anyway, fixed now, and working! :D",
"raw": "Anyway, fixed now, and working! :D",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "http://samuelmeyerscode.serveblog.net/?p=49",
"resource": null,
"url": null,
"href": "http://samuelmeyerscode.serveblog.net/?p=49",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hello!
I've been playing with Claude, and we decided to tackle a real thorn in my side.
"The Truthiness Model" - Analyze arbitrary input text for "truthiness", or likelihood of containing true information according to seed text.
P.S. Yes, v1 was broken. I saw the loss rate going down and got excited. Anyway, it just needed some data and a rollback; me and Claude got WAY too carried away trying to tack on features.
Anyway, fixed now, and working! :D
http://samuelmeyerscode.serveblog.net/?p=49 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-06-29T15:56:59.000Z | 2024-06-30T09:04:18.689Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6316fb937b0ee0136e5f1220/poHBoJ7QAF_s2CCaosdvQ.jpeg",
"fullname": "Firstname Lastname",
"name": "takeraparterer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
},
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
}
] | /posts/MrOvkill/946914898178824 | 845 | 8 |
613909959828143 | [
{
"type": "text",
"value": "🤗 Hi HF Community!",
"raw": "🤗 Hi HF Community!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🧬 As you may now, Evolutionary Scale recently released ",
"raw": "🧬 As you may now, Evolutionary Scale recently released ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/EvolutionaryScale/esm3-sm-open-v1",
"resource": {
"type": "model",
"id": "EvolutionaryScale/esm3-sm-open-v1",
"discussionNum": null
},
"url": "https://huggingface.co/EvolutionaryScale/esm3-sm-open-v1",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " model here on the Hub, \"a frontier generative model for biology, able to jointly reason across three fundamental biological properties of proteins: sequence, structure, and function\" - as it is described on the dedicated GitHub page.",
"raw": " model here on the Hub, \"a frontier generative model for biology, able to jointly reason across three fundamental biological properties of proteins: sequence, structure, and function\" - as it is described on the dedicated GitHub page.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "⚡ If you are curious about it and you want to try it out, you can do it with a space I built, ",
"raw": "⚡ If you are curious about it and you want to try it out, you can do it with a space I built, ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/proteins-with-esm",
"resource": {
"type": "space",
"id": "as-cle-bert/proteins-with-esm",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/proteins-with-esm",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Hope this helps with your research!🚀",
"raw": "Hope this helps with your research!🚀",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🤗 Hi HF Community!
🧬 As you may know, Evolutionary Scale recently released https://huggingface.co/EvolutionaryScale/esm3-sm-open-v1 model here on the Hub, "a frontier generative model for biology, able to jointly reason across three fundamental biological properties of proteins: sequence, structure, and function" - as it is described on the dedicated GitHub page.
⚡ If you are curious about it and you want to try it out, you can do it with a space I built, https://huggingface.co/spaces/as-cle-bert/proteins-with-esm
Hope this helps with your research!🚀 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"introvoyz041",
"monsoon-nlp",
"prithivMLmods",
"not-lain",
"BrotherMichaels",
"arslan2012",
"Zhofang"
],
"count": 7
},
{
"reaction": "👀",
"users": [
"not-lain",
"BrotherMichaels",
"Zhofang"
],
"count": 3
}
] | 2024-06-28T23:02:31.000Z | 2024-06-28T23:50:43.212Z | [] | /posts/as-cle-bert/613909959828143 | 3,628 | 0 |
767070883111998 | [
{
"type": "text",
"value": "We all agree ",
"raw": "We all agree ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/google/gemma-2-27b-it",
"resource": {
"type": "model",
"id": "google/gemma-2-27b-it",
"discussionNum": null
},
"url": "https://huggingface.co/google/gemma-2-27b-it",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " is trash, right?",
"raw": " is trash, right?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | We all agree https://huggingface.co/google/gemma-2-27b-it is trash, right? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"hoangkien1703",
"imshan",
"KingNish",
"danielus",
"SharryOG",
"AIIAR",
"cnmoro",
"MarinaraSpaghetti",
"netynet",
"GPT007",
"jiweiwuita"
],
"count": 11
}
] | 2024-06-28T19:34:55.000Z | 2024-06-30T23:59:20.882Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3b2d6b5cd4b5d7f01864f/-yQzbKYaAL9TEIoZh1VaX.png",
"fullname": "Suigintou",
"name": "Shinku",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630920925a5c889aaedc7f33/w00N19M21l2FXe6ZasSYc.jpeg",
"fullname": "Kristaller486",
"name": "kristaller486",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/663ff22b57090e5742421243/ZjbgNoTK8zrfVFc0A2y9M.png",
"fullname": "Devin J. Dawson",
"name": "unclemusclez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 6,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/AXnwP_G2WkJ0gkBepd_t7.png",
"fullname": "Marc Kovka",
"name": "GPT007",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
},
{
"avatarUrl": "/avatars/df932694e10a6ef0e8d78aebc2ce253c.svg",
"fullname": "Peter Beks",
"name": "Kwissbeats",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/634262af8d8089ebaefd410e/pr6KcEebXTo5V2XAlpQNw.png",
"fullname": "Fizz 🏳️⚧️",
"name": "Fizzarolli",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 47,
"isFollowing": false
},
{
"avatarUrl": "/avatars/c82779fdf94f80cdb5020504f83c818b.svg",
"fullname": "Yatharth Sharma",
"name": "YaTharThShaRma999",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 13,
"isFollowing": false
}
] | /posts/nroggendorff/767070883111998 | 2,969 | 16 |
970469303672405 | [
{
"type": "text",
"value": "📢 I've tested the most recent google/Gemma-2-9b-it in Sentiment Analyis and the obtained results makes me shocked! 🤯 It end up becoming a king 👑 that showcase top-1 across all the models and categories in Target Sentiment Analysis (TSA) on non-english texts (🇷🇺). ",
"raw": "📢 I've tested the most recent google/Gemma-2-9b-it in Sentiment Analyis and the obtained results makes me shocked! 🤯 It end up becoming a king 👑 that showcase top-1 across all the models and categories in Target Sentiment Analysis (TSA) on non-english texts (🇷🇺). ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "That's impressive to say the least, it surpassed all the other models benchmarked before and within categories of 100B and below by F1(PN) and nearly touched GPT-4 by F1(PN0). Google research team did a great job! 👏 ",
"raw": "That's impressive to say the least, it surpassed all the other models benchmarked before and within categories of 100B and below by F1(PN) and nearly touched GPT-4 by F1(PN0). Google research team did a great job! 👏 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/google/gemma-2-9b-it",
"resource": {
"type": "model",
"id": "google/gemma-2-9b-it",
"discussionNum": null
},
"url": "https://huggingface.co/google/gemma-2-9b-it",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Benchmark: ",
"raw": "Benchmark: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/nicolay-r/RuSentNE-LLM-Benchmark",
"resource": null,
"url": null,
"href": "https://github.com/nicolay-r/RuSentNE-LLM-Benchmark",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 📢 I've tested the most recent google/Gemma-2-9b-it in Sentiment Analysis and the obtained results left me shocked! 🤯 It ended up becoming a king 👑 that showcases top-1 across all the models and categories in Target Sentiment Analysis (TSA) on non-English texts (🇷🇺). 
That's impressive to say the least; it surpassed all the other models benchmarked before and within categories of 100B and below by F1(PN), and nearly touched GPT-4 by F1(PN0). The Google research team did a great job! 👏 
Model: https://huggingface.co/google/gemma-2-9b-it
Benchmark: https://github.com/nicolay-r/RuSentNE-LLM-Benchmark
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64e62d11d27a8292c3637f86/aptDeBHpCJxcREj6KPLN1.jpeg",
"fullname": "Nicolay Rusnachenko",
"name": "nicolay-r",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 49,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64e62d11d27a8292c3637f86/wR8zEQLW3F-fU_QqAoQsT.png"
}
] | [] | [] | 2024-06-28T19:29:20.000Z | 2024-06-28T19:49:26.262Z | [] | /posts/nicolay-r/970469303672405 | 873 | 0 |
951391433852192 | [
{
"type": "text",
"value": "Hi Huggingfacers! ",
"raw": "Hi Huggingfacers! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Thrilled to introduce Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini can also achieve 49.5% higher throughput than AdamW on Llama2-7B pre-training.",
"raw": "Thrilled to introduce Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini can also achieve 49.5% higher throughput than AdamW on Llama2-7B pre-training.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The design of Adam-mini is inspired by certain Hessian structures we observed on Transformers. ",
"raw": "The design of Adam-mini is inspired by certain Hessian structures we observed on Transformers. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Feel free to try it out! Try switching to Adam-mini with the same hyperparams of AdamW, it would work with only half memory. Hope Adam-mini can help save time, cost, and energy in your tasks! ",
"raw": "Feel free to try it out! Try switching to Adam-mini with the same hyperparams of AdamW, it would work with only half memory. Hope Adam-mini can help save time, cost, and energy in your tasks! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper: \"Adam-mini: Use Fewer Learning Rates To Gain More\" ",
"raw": "Paper: \"Adam-mini: Use Fewer Learning Rates To Gain More\" ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2406.16793",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2406.16793",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Code: ",
"raw": "Code: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/zyushun/Adam-mini",
"resource": null,
"url": null,
"href": "https://github.com/zyushun/Adam-mini",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi Huggingfacers!
Thrilled to introduce Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini can also achieve 49.5% higher throughput than AdamW on Llama2-7B pre-training.
The design of Adam-mini is inspired by certain Hessian structures we observed on Transformers.
Feel free to try it out! Try switching to Adam-mini with the same hyperparams as AdamW; it would work with only half the memory. Hope Adam-mini can help save time, cost, and energy in your tasks! 
Paper: "Adam-mini: Use Fewer Learning Rates To Gain More" https://arxiv.org/abs/2406.16793
Code: https://github.com/zyushun/Adam-mini
| {
"avatarUrl": "/avatars/df7167db084ab6f10b7aed2423d82a83.svg",
"fullname": "yushun zhang",
"name": "yushun0410",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 12,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6548c940e70ffa3c07554102/4h_t55fbRxY_w28bL69kk.png"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"yushun0410",
"osanseviero",
"DmitryRyumin",
"louisbrulenaudet",
"kristaller486",
"Ramikan-BR",
"not-lain",
"badayvedat",
"wjmcat",
"FAU57",
"davide221",
"gogo8232",
"cnmoro",
"dillfrescott",
"netynet",
"adamelliotfields",
"cloudyu"
],
"count": 17
},
{
"reaction": "🚀",
"users": [
"yushun0410",
"sted97",
"AdemB",
"jakemannix",
"Ramikan-BR",
"Jaward",
"wjmcat",
"secretmoon",
"dillfrescott"
],
"count": 9
},
{
"reaction": "👍",
"users": [
"yushun0410",
"osanseviero",
"Rishav2000",
"Ramikan-BR",
"dashfunnydashdash",
"dillfrescott",
"meeks342"
],
"count": 7
},
{
"reaction": "❤️",
"users": [
"AdemB",
"Ramikan-BR",
"ttkciar",
"SicariusSicariiStuff",
"dillfrescott"
],
"count": 5
},
{
"reaction": "👀",
"users": [
"Ramikan-BR",
"dillfrescott"
],
"count": 2
},
{
"reaction": "🧠",
"users": [
"Ramikan-BR",
"dillfrescott"
],
"count": 2
}
] | 2024-06-28T08:52:44.000Z | 2024-06-28T13:32:37.771Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/644cb09a22d211df644a0a6c/v0EHypMU4X3Oxxf3cao_O.png",
"fullname": "Júlio César",
"name": "Ramikan-BR",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 10,
"isFollowing": false
}
] | /posts/yushun0410/951391433852192 | 4,608 | 1 |
439561083586408 | [
{
"type": "text",
"value": "I started Friday with decentralized AI using Gemma-2, and it all works without blockchain. This is what I did: ",
"raw": "I started Friday with decentralized AI using Gemma-2, and it all works without blockchain. This is what I did: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 1. Pinned Gemma-2 9B in the Interplanetary Filesystem IPFS with the LoRA fine-tuning adapters.",
"raw": " 1. Pinned Gemma-2 9B in the Interplanetary Filesystem IPFS with the LoRA fine-tuning adapters.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 2. Set up a llama-ipfs server to fetch and cache the model and adapters on the fly and inference locally.",
"raw": " 2. Set up a llama-ipfs server to fetch and cache the model and adapters on the fly and inference locally.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Now, I can use my on device AI platform across:",
"raw": "Now, I can use my on device AI platform across:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " • All my macOS automation workflows",
"raw": " • All my macOS automation workflows",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " • All my browsers",
"raw": " • All my browsers",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " • My Copilot++ in VSCode",
"raw": " • My Copilot++ in VSCode",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " • My Open Apple Intelligence (OAI, not to be confused with the other closed OAI owned by a nonprofit foundation and BigTech)",
"raw": " • My Open Apple Intelligence (OAI, not to be confused with the other closed OAI owned by a nonprofit foundation and BigTech)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The llama-ipfs server’s RPC support lets me decentralize inferencing across all my devices, supercharging computing and energy efficiency.",
"raw": "The llama-ipfs server’s RPC support lets me decentralize inferencing across all my devices, supercharging computing and energy efficiency.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Make sure you own your AI. AI in the cloud is not aligned with you, it’s aligned with the company that owns it.",
"raw": "Make sure you own your AI. AI in the cloud is not aligned with you, it’s aligned with the company that owns it.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I started Friday with decentralized AI using Gemma-2, and it all works without blockchain. This is what I did:
 1. Pinned Gemma-2 9B in the Interplanetary Filesystem (IPFS) with the LoRA fine-tuning adapters.
2. Set up a llama-ipfs server to fetch and cache the model and adapters on the fly and inference locally.
Now, I can use my on device AI platform across:
• All my macOS automation workflows
• All my browsers
• My Copilot++ in VSCode
• My Open Apple Intelligence (OAI, not to be confused with the other closed OAI owned by a nonprofit foundation and BigTech)
The llama-ipfs server’s RPC support lets me decentralize inferencing across all my devices, supercharging computing and energy efficiency.
Make sure you own your AI. AI in the cloud is not aligned with you, it’s aligned with the company that owns it. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f731c7d36951307fcca6bf/DMd5-Pt7YHC0agbAQ1xUc.png",
"fullname": "Mitko Vasilev",
"name": "mitkox",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 117,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63f731c7d36951307fcca6bf/Mfv6ZjGhDwMEO-JpEZtOu.mp4"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"Csplk",
"not-lain",
"HalloZoolee",
"AxelVirtual"
],
"count": 4
},
{
"reaction": "🤯",
"users": [
"Nerius",
"HalloZoolee"
],
"count": 2
},
{
"reaction": "❤️",
"users": [
"not-lain"
],
"count": 1
}
] | 2024-06-28T06:58:38.000Z | 2024-06-29T19:12:15.364Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
},
{
"avatarUrl": "/avatars/45b6f3e0a07a1abfd4900153bafda6dc.svg",
"fullname": "Fernando M Lemos",
"name": "fmlemos",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/mitkox/439561083586408 | 2,398 | 2 |
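Editor's sketch (related to the post above): the llama-ipfs server mentioned is the poster's own setup and is not reproduced here; this minimal Python sketch only illustrates the pin-and-fetch step against a local IPFS (Kubo) daemon's HTTP RPC API. The daemon address, CID, and output filename are placeholders/assumptions, not values from the post.

# Minimal sketch: pin a GGUF model by CID on a local IPFS (Kubo) node and
# fetch it to disk before pointing a local llama.cpp-style server at it.
# Assumes a Kubo daemon on the default RPC port; MODEL_CID is a placeholder.
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"   # default Kubo RPC endpoint (assumption)
MODEL_CID = "bafy...placeholder"            # hypothetical CID of the GGUF file

def pin(cid: str) -> None:
    # Ask the local node to pin the content so it stays cached locally.
    r = requests.post(f"{IPFS_API}/pin/add", params={"arg": cid}, timeout=600)
    r.raise_for_status()

def fetch(cid: str, out_path: str) -> None:
    # Stream the pinned file out of IPFS to local disk.
    with requests.post(f"{IPFS_API}/cat", params={"arg": cid}, stream=True, timeout=600) as r:
        r.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)

if __name__ == "__main__":
    pin(MODEL_CID)
    fetch(MODEL_CID, "gemma-2-9b.gguf")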
583183010107602 | [
{
"type": "text",
"value": "Uploaded two basic SLERP merges of ",
"raw": "Uploaded two basic SLERP merges of ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO",
"resource": {
"type": "model",
"id": "princeton-nlp/Llama-3-Instruct-8B-SimPO",
"discussionNum": null
},
"url": "https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " and ",
"raw": " and ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3",
"resource": {
"type": "model",
"id": "UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3",
"discussionNum": null
},
"url": "https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", alternating the choice of base model, for people to test out and potentially use as merge fuel. (Personally, I am drawn to intelligent and attentive models, hence the experimentation.)",
"raw": ", alternating the choice of base model, for people to test out and potentially use as merge fuel. (Personally, I am drawn to intelligent and attentive models, hence the experimentation.)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge",
"resource": {
"type": "model",
"id": "grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge",
"discussionNum": null
},
"url": "https://huggingface.co/grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge",
"resource": {
"type": "model",
"id": "grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge",
"discussionNum": null
},
"url": "https://huggingface.co/grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Uploaded two basic SLERP merges of https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO and https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3, alternating the choice of base model, for people to test out and potentially use as merge fuel. (Personally, I am drawn to intelligent and attentive models, hence the experimentation.)
https://huggingface.co/grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
https://huggingface.co/grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65c992424936ab38ecf706b0/aq7vuHFPO1S93fwJk0Cuq.jpeg",
"fullname": "Jim Lai",
"name": "grimjim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 163,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"fengjj123",
"osanseviero",
"John6666",
"Joseph717171"
],
"count": 4
},
{
"reaction": "❤️",
"users": [
"s3nh"
],
"count": 1
}
] | 2024-06-28T02:13:08.000Z | 2024-06-28T02:28:11.688Z | [] | /posts/grimjim/583183010107602 | 2,680 | 0 |
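Editor's sketch (related to the merges above): a minimal spherical linear interpolation (SLERP) over two checkpoints' weights. This is not the author's exact mergekit recipe; the interpolation factor t = 0.5 is an assumption, and real merges typically vary t per layer. Loading two 8B models this way needs roughly 35 GB of RAM.

# Minimal SLERP weight-merge sketch between the two Llama-3-8B fine-tunes above.
import torch
from transformers import AutoModelForCausalLM

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    # Interpolate along the great circle between the two flattened weight vectors.
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_norm, b_norm), -1.0, 1.0))
    if omega.abs() < 1e-6:
        merged = (1.0 - t) * a_flat + t * b_flat   # nearly parallel: plain lerp
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

base = AutoModelForCausalLM.from_pretrained("princeton-nlp/Llama-3-Instruct-8B-SimPO", torch_dtype=torch.bfloat16)
other = AutoModelForCausalLM.from_pretrained("UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3", torch_dtype=torch.bfloat16)

other_state = other.state_dict()
merged_state = {name: slerp(tensor, other_state[name], t=0.5)
                for name, tensor in base.state_dict().items()}

base.load_state_dict(merged_state)
base.save_pretrained("llama-3-8b-simpo-sppo-slerp")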
581079116072653 | [
{
"type": "text",
"value": "Florence-2, the new vision foundation model by Microsoft, can now run 100% locally in your browser on WebGPU, thanks to Transformers.js! 🤗🤯",
"raw": "Florence-2, the new vision foundation model by Microsoft, can now run 100% locally in your browser on WebGPU, thanks to Transformers.js! 🤗🤯",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It supports tasks like image captioning, optical character recognition, object detection, and many more! 😍 WOW!",
"raw": "It supports tasks like image captioning, optical character recognition, object detection, and many more! 😍 WOW!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Demo: ",
"raw": "- Demo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Xenova/florence2-webgpu",
"resource": {
"type": "space",
"id": "Xenova/florence2-webgpu",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Xenova/florence2-webgpu",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Models: ",
"raw": "- Models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/models?library=transformers.js&other=florence2",
"resource": null,
"url": null,
"href": "https://huggingface.co/models?library=transformers.js&other=florence2",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Source code: ",
"raw": "- Source code: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/xenova/transformers.js/tree/v3/examples/florence2-webgpu",
"resource": null,
"url": null,
"href": "https://github.com/xenova/transformers.js/tree/v3/examples/florence2-webgpu",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Florence-2, the new vision foundation model by Microsoft, can now run 100% locally in your browser on WebGPU, thanks to Transformers.js! 🤗🤯
It supports tasks like image captioning, optical character recognition, object detection, and many more! 😍 WOW!
- Demo: https://huggingface.co/spaces/Xenova/florence2-webgpu
- Models: https://huggingface.co/models?library=transformers.js&other=florence2
- Source code: https://github.com/xenova/transformers.js/tree/v3/examples/florence2-webgpu | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b253b7ac5ecaae3d1efe0c/hwiQ0uvz3t-L5a-NtBIO6.png",
"fullname": "Joshua",
"name": "Xenova",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 3736,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/GLzXDQxuhCDMlABvdvg8X.mp4"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"gokaygokay",
"John6666",
"badayvedat",
"bmorphism",
"Hoangthanh",
"sebastianking",
"fengjj123",
"louisbrulenaudet",
"Sylvestre",
"osanseviero",
"sted97",
"Ramikan-BR",
"sudhanshu456",
"lakkeo",
"prithivMLmods",
"JoPmt",
"vladholubiev",
"ralphilius",
"sbrandeis",
"NERDDISCO"
],
"count": 20
},
{
"reaction": "🚀",
"users": [
"fengjj123",
"John6666",
"osanseviero",
"Ramikan-BR",
"omaryshchenko",
"lamhieu"
],
"count": 6
},
{
"reaction": "👀",
"users": [
"fengjj123",
"John6666",
"osanseviero",
"Ramikan-BR"
],
"count": 4
},
{
"reaction": "❤️",
"users": [
"Ramikan-BR",
"ZeroWw",
"onlineservices"
],
"count": 3
},
{
"reaction": "🧠",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "🤝",
"users": [
"surfhb"
],
"count": 1
}
] | 2024-06-27T21:09:58.000Z | 2024-06-28T09:34:39.213Z | [] | /posts/Xenova/581079116072653 | 5,941 | 0 |
356451438291173 | [
{
"type": "text",
"value": "Hi HuggingFacers!🤗",
"raw": "Hi HuggingFacers!🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "💥 If you are Bioinformaticians or Biologists, you may be familiar with BLAST, a search algorithm that allows researchers to identify the group of organisms (species, taxa...) from which DNA/Protein sequences come.",
"raw": "💥 If you are Bioinformaticians or Biologists, you may be familiar with BLAST, a search algorithm that allows researchers to identify the group of organisms (species, taxa...) from which DNA/Protein sequences come.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🥱 You may also be familiar with the difficulties to interpret long and multi-parametric results coming out from BLAST searches: here's where we can operate with LLMs, summarizing the outputs and/or replying to queries about them!",
"raw": "🥱 You may also be familiar with the difficulties to interpret long and multi-parametric results coming out from BLAST searches: here's where we can operate with LLMs, summarizing the outputs and/or replying to queries about them!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🧬 You can now run BLAST for 16S rRNA bacterial sequences here on HF, summarizing and/or asking questions about the results, or make sense of your online BLAST searches uploading description tables, using the last space I built: ",
"raw": "🧬 You can now run BLAST for 16S rRNA bacterial sequences here on HF, summarizing and/or asking questions about the results, or make sense of your online BLAST searches uploading description tables, using the last space I built: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/BLAST-SummarAIzer",
"resource": {
"type": "space",
"id": "as-cle-bert/BLAST-SummarAIzer",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/BLAST-SummarAIzer",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Have fun and may this be helpful to your research!💻",
"raw": "Have fun and may this be helpful to your research!💻",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi HuggingFacers!🤗
💥 If you are Bioinformaticians or Biologists, you may be familiar with BLAST, a search algorithm that allows researchers to identify the group of organisms (species, taxa...) from which DNA/Protein sequences come.
🥱 You may also be familiar with the difficulty of interpreting the long, multi-parametric results that come out of BLAST searches: here's where LLMs come in, summarizing the outputs and/or replying to queries about them!
🧬 You can now run BLAST for 16S rRNA bacterial sequences here on HF, summarizing and/or asking questions about the results, or make sense of your online BLAST searches by uploading description tables, using the latest Space I built: https://huggingface.co/spaces/as-cle-bert/BLAST-SummarAIzer
Have fun and may this be helpful to your research!💻 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"iryneko571",
"osanseviero",
"Bettina34",
"introvoyz041",
"louisbrulenaudet"
],
"count": 5
}
] | 2024-06-27T16:48:30.000Z | 2024-06-27T16:48:50.560Z | [] | /posts/as-cle-bert/356451438291173 | 2,008 | 0 |
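Editor's sketch (related to the Space above): the underlying idea, run a remote BLAST query, then hand the long report to an LLM for a plain-language summary. This is not the Space's actual implementation; the example 16S sequence and the model ID are placeholders, and the Inference API call assumes a recent huggingface_hub client.

# Minimal sketch: BLAST a sequence remotely, then summarize the report with an LLM.
from Bio.Blast import NCBIWWW
from huggingface_hub import InferenceClient

def run_blast(sequence: str) -> str:
    # blastn against the nucleotide database; returns the XML report as text.
    handle = NCBIWWW.qblast("blastn", "nt", sequence)
    return handle.read()

def summarize(report: str) -> str:
    client = InferenceClient()  # uses the HF token from your environment
    messages = [
        {"role": "system", "content": "You are a bioinformatics assistant. Summarize BLAST reports for biologists."},
        {"role": "user", "content": report[:20000]},  # keep the prompt within context limits
    ]
    out = client.chat_completion(messages=messages, model="HuggingFaceH4/zephyr-7b-beta", max_tokens=512)
    return out.choices[0].message.content

if __name__ == "__main__":
    seq = "AGAGTTTGATCCTGGCTCAG..."  # placeholder 16S rRNA fragment
    print(summarize(run_blast(seq)))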
256938902785571 | [
{
"type": "text",
"value": "Mixture of Agents now in MLC/LMStudio/Ollama",
"raw": "Mixture of Agents now in MLC/LMStudio/Ollama",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've been a bit obsessed with the recent MoA paper and its implementation. I've noticed a HUGE upgrade in the final output and it seems to really be a great way to harness the power of a team of different LLMs. The downside is that it can be a bit slow to generate responses with the bigger models (but worth it if you want to wait). I wanted to get faster results so I made an MLC version and it actually works out great! Much quicker and the responses definitely are better than compared to just running one.",
"raw": "I've been a bit obsessed with the recent MoA paper and its implementation. I've noticed a HUGE upgrade in the final output and it seems to really be a great way to harness the power of a team of different LLMs. The downside is that it can be a bit slow to generate responses with the bigger models (but worth it if you want to wait). I wanted to get faster results so I made an MLC version and it actually works out great! Much quicker and the responses definitely are better than compared to just running one.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I'm going to keep working on seeing how it can be further integrated (API endpoints, RAG, synthetic data generation, etc) and will share the stuff that I can get to work decently enough :)",
"raw": "I'm going to keep working on seeing how it can be further integrated (API endpoints, RAG, synthetic data generation, etc) and will share the stuff that I can get to work decently enough :)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/severian42/MoA-MLC-Chat",
"resource": null,
"url": null,
"href": "https://github.com/severian42/MoA-MLC-Chat",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/severian42/MoA-Ollama-Chat",
"resource": null,
"url": null,
"href": "https://github.com/severian42/MoA-Ollama-Chat",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/severian42/MoA-LMStudio-Chat",
"resource": null,
"url": null,
"href": "https://github.com/severian42/MoA-LMStudio-Chat",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Mixture of Agents now in MLC/LMStudio/Ollama
I've been a bit obsessed with the recent MoA paper and its implementation. I've noticed a HUGE upgrade in the final output, and it seems to really be a great way to harness the power of a team of different LLMs. The downside is that it can be a bit slow to generate responses with the bigger models (but worth it if you want to wait). I wanted to get faster results, so I made an MLC version and it actually works out great! Much quicker, and the responses are definitely better than just running a single model.
I'm going to keep working on seeing how it can be further integrated (API endpoints, RAG, synthetic data generation, etc) and will share the stuff that I can get to work decently enough :)
https://github.com/severian42/MoA-MLC-Chat
https://github.com/severian42/MoA-Ollama-Chat
https://github.com/severian42/MoA-LMStudio-Chat | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64740cf7485a7c8e1bd51ac9/CXZCJm2x4ToT83pEIYyQR.png",
"fullname": "Beckett Dillon",
"name": "Severian",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 175,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"Joseph717171",
"irfanfadhullah",
"Olafangensan"
],
"count": 3
},
{
"reaction": "🚀",
"users": [
"Joseph717171"
],
"count": 1
},
{
"reaction": "❤️",
"users": [
"Joseph717171"
],
"count": 1
},
{
"reaction": "🧠",
"users": [
"Joseph717171"
],
"count": 1
},
{
"reaction": "🤗",
"users": [
"Joseph717171"
],
"count": 1
}
] | 2024-06-27T16:00:46.000Z | 2024-06-27T16:00:46.219Z | [] | /posts/Severian/256938902785571 | 1,169 | 0 |
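Editor's sketch (related to the MoA post above): the basic Mixture-of-Agents layering, several "proposer" models answer independently and an "aggregator" model synthesizes the final reply. This is not the code from the linked repos; the endpoint URL and model names are assumptions, and any OpenAI-compatible local server (such as the ones LM Studio or Ollama expose) should work.

# Minimal Mixture-of-Agents sketch against an OpenAI-compatible local endpoint.
import requests

API_URL = "http://localhost:11434/v1/chat/completions"   # placeholder local endpoint
PROPOSERS = ["llama3", "mistral", "gemma2"]               # placeholder local model names
AGGREGATOR = "llama3"

def complete(model: str, messages: list[dict]) -> str:
    r = requests.post(API_URL, json={"model": model, "messages": messages}, timeout=300)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def mixture_of_agents(question: str) -> str:
    # Layer 1: independent drafts from each proposer model.
    drafts = [complete(m, [{"role": "user", "content": question}]) for m in PROPOSERS]
    # Layer 2: the aggregator sees all drafts and writes the final answer.
    synthesis_prompt = (
        "You are given several candidate answers to the same question. "
        "Combine their strengths into one accurate, well-written answer.\n\n"
        + "\n\n".join(f"Candidate {i + 1}:\n{d}" for i, d in enumerate(drafts))
        + f"\n\nQuestion: {question}"
    )
    return complete(AGGREGATOR, [{"role": "user", "content": synthesis_prompt}])

if __name__ == "__main__":
    print(mixture_of_agents("Explain why the sky is blue in two sentences."))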
139983484226395 | [
{
"type": "text",
"value": "Hello!",
"raw": "Hello!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.youtube.com/watch?v=6NyDkpfNfUs",
"resource": null,
"url": null,
"href": "https://www.youtube.com/watch?v=6NyDkpfNfUs",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I had some feedback recently, that perhaps it would be beneficial to expand upon the fallacy dataset. I took this deeply to heart, and exploded it 10x.",
"raw": "I had some feedback recently, that perhaps it would be beneficial to expand upon the fallacy dataset. I took this deeply to heart, and exploded it 10x.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/MrOvkill/fallacies-fallacy-base",
"resource": {
"type": "dataset",
"id": "MrOvkill/fallacies-fallacy-base",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/MrOvkill/fallacies-fallacy-base",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Produced synthetically with *ALL* the Gemini models on Vertex AI.",
"raw": "Produced synthetically with *ALL* the Gemini models on Vertex AI.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "*phew* This was a rush. I can promise over 8 it might have been like 16 of straight prompt/copy/paste/fix/re-splice/fix/prompt again/chug caffeine/repeat, but we got there! Thanks for egging me on, all! I appreciate being driven to work! So much better than boredom! 🤗",
"raw": "*phew* This was a rush. I can promise over 8 it might have been like 16 of straight prompt/copy/paste/fix/re-splice/fix/prompt again/chug caffeine/repeat, but we got there! Thanks for egging me on, all! I appreciate being driven to work! So much better than boredom! 🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Have fun!",
"raw": "Have fun!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hello!
https://www.youtube.com/watch?v=6NyDkpfNfUs
I had some feedback recently, that perhaps it would be beneficial to expand upon the fallacy dataset. I took this deeply to heart, and exploded it 10x.
https://huggingface.co/datasets/MrOvkill/fallacies-fallacy-base
Produced synthetically with *ALL* the Gemini models on Vertex AI.
*phew* This was a rush. I can promise over 8 hours (it might have been more like 16) of straight prompt/copy/paste/fix/re-splice/fix/prompt again/chug caffeine/repeat, but we got there! Thanks for egging me on, all! I appreciate being driven to work! So much better than boredom! 🤗
Have fun!
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"LeroyDyer"
],
"count": 1
}
] | 2024-06-27T15:23:23.000Z | 2024-06-27T18:40:37.716Z | [] | /posts/MrOvkill/139983484226395 | 645 | 0 |
891674447263162 | [
{
"type": "text",
"value": "🚀🎭🌟 New Research Alert - Portrait4D-v2 (Avatars Collection)! 🌟🎭🚀",
"raw": "🚀🎭🌟 New Research Alert - Portrait4D-v2 (Avatars Collection)! 🌟🎭🚀",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📄 Title: Portrait4D-v2: Pseudo Multi-View Data Creates Better 4D Head Synthesizer 🔝",
"raw": "📄 Title: Portrait4D-v2: Pseudo Multi-View Data Creates Better 4D Head Synthesizer 🔝",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📝 Description: Portrait4D-v2 is a novel method for one-shot 4D head avatar synthesis using pseudo multi-view videos and a vision transformer backbone, achieving superior performance without relying on 3DMM reconstruction.",
"raw": "📝 Description: Portrait4D-v2 is a novel method for one-shot 4D head avatar synthesis using pseudo multi-view videos and a vision transformer backbone, achieving superior performance without relying on 3DMM reconstruction.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "👥 Authors: Yu Deng, Duomin Wang, and Baoyuan Wang",
"raw": "👥 Authors: Yu Deng, Duomin Wang, and Baoyuan Wang",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2403.13570",
"resource": {
"type": "paper",
"id": "2403.13570",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2403.13570",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Portrait4D-v2: Pseudo Multi-View Data Creates Better 4D Head Synthesizer (2403.13570)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🌐 GitHub Page: ",
"raw": "🌐 GitHub Page: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://yudeng.github.io/Portrait4D-v2/",
"resource": null,
"url": null,
"href": "https://yudeng.github.io/Portrait4D-v2/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📁 Repository: ",
"raw": "📁 Repository: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/YuDeng/Portrait-4D",
"resource": null,
"url": null,
"href": "https://github.com/YuDeng/Portrait-4D",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📺 Video: ",
"raw": "📺 Video: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.youtube.com/watch?v=5YJY6-wcOJo",
"resource": null,
"url": null,
"href": "https://www.youtube.com/watch?v=5YJY6-wcOJo",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🚀 CVPR-2023-24-Papers: ",
"raw": "🚀 CVPR-2023-24-Papers: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/DmitryRyumin/CVPR-2023-24-Papers",
"resource": null,
"url": null,
"href": "https://github.com/DmitryRyumin/CVPR-2023-24-Papers",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📚 More Papers: more cutting-edge research presented at other conferences in the ",
"raw": "📚 More Papers: more cutting-edge research presented at other conferences in the ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers",
"resource": {
"type": "space",
"id": "DmitryRyumin/NewEraAI-Papers",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " curated by ",
"raw": " curated by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@DmitryRyumin",
"resource": null,
"url": null,
"href": null,
"user": "DmitryRyumin",
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🚀 Added to the Avatars Collection: ",
"raw": "🚀 Added to the Avatars Collection: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36",
"resource": {
"type": "collection",
"id": "DmitryRyumin/avatars-65df37cdf81fec13d4dbac36",
"discussionNum": null
},
"url": "https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔍 Keywords: Portrait4D #4DAvatar #HeadSynthesis #3DModeling #TechInnovation #DeepLearning #ComputerGraphics #ComputerVision #Innovation",
"raw": "🔍 Keywords: Portrait4D #4DAvatar #HeadSynthesis #3DModeling #TechInnovation #DeepLearning #ComputerGraphics #ComputerVision #Innovation",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🚀🎭🌟 New Research Alert - Portrait4D-v2 (Avatars Collection)! 🌟🎭🚀
📄 Title: Portrait4D-v2: Pseudo Multi-View Data Creates Better 4D Head Synthesizer 🔝
📝 Description: Portrait4D-v2 is a novel method for one-shot 4D head avatar synthesis using pseudo multi-view videos and a vision transformer backbone, achieving superior performance without relying on 3DMM reconstruction.
👥 Authors: Yu Deng, Duomin Wang, and Baoyuan Wang
📄 Paper: https://huggingface.co/papers/2403.13570
🌐 GitHub Page: https://yudeng.github.io/Portrait4D-v2/
📁 Repository: https://github.com/YuDeng/Portrait-4D
📺 Video: https://www.youtube.com/watch?v=5YJY6-wcOJo
🚀 CVPR-2023-24-Papers: https://github.com/DmitryRyumin/CVPR-2023-24-Papers
📚 More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin
🚀 Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36
🔍 Keywords: Portrait4D #4DAvatar #HeadSynthesis #3DModeling #TechInnovation #DeepLearning #ComputerGraphics #ComputerVision #Innovation | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg",
"fullname": "Dmitry Ryumin",
"name": "DmitryRyumin",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 374,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/-kmKcnYBCYb1ClUou9DZV.gif"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/CzRDU7CXrxsjy2x8px68u.gif"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/Dn4UWAzb2aLuazLv-8UAh.gif"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/hx_WxhoGQqSjajHEgOEJA.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/DvBCgHa3zymisw0xJnHmQ.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/z8hI43qoYL_LC0KqepoLW.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/9NbtJK0wWaB48hxBL9fwn.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/9JAvnbBS5do34NcFxiWTb.png"
},
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/1SC6fsQtPnp1VL9a2FhOm.mp4"
},
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/QQ8aboCLPnCOkDlT44XXs.mp4"
},
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/6xAHnbFuKtQrG95mvCQxw.mp4"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg",
"fullname": "Dmitry Ryumin",
"name": "DmitryRyumin",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 374
}
] | [
{
"reaction": "🔥",
"users": [
"DmitryRyumin",
"prithivMLmods",
"ucalyptus",
"attashe",
"Hoangthanh",
"sebastianking",
"fffiloni",
"kavikumarkoneti",
"julien-c",
"IsmaelMousa",
"John6666",
"nananie143",
"m-conrad-202",
"Ck773",
"3thn",
"seanmiranda",
"ameerazam08",
"victor",
"Ramikan-BR",
"Martins6",
"louisbrulenaudet"
],
"count": 21
},
{
"reaction": "👍",
"users": [
"raonigabriel",
"Hoangthanh",
"julien-c",
"Joze539",
"nananie143",
"3thn",
"seanmiranda",
"JNolet",
"loupzeur",
"victor",
"Ramikan-BR",
"Martins6",
"OjciecTadeusz"
],
"count": 13
},
{
"reaction": "🚀",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "👀",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "❤️",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-06-27T12:53:37.000Z | 2024-06-28T15:08:30.496Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg",
"fullname": "Julien Chaumond",
"name": "julien-c",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1568,
"isFollowing": false
}
] | /posts/DmitryRyumin/891674447263162 | 3,589 | 1 |
389133085535668 | [
{
"type": "text",
"value": "𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐞𝐫𝐬 𝐀𝐠𝐞𝐧𝐭𝐬 𝐫𝐞𝐚𝐜𝐡𝐞𝐬 𝐭𝐡𝐞 𝐭𝐨𝐩 𝐨𝐟 𝐆𝐀𝐈𝐀 𝐥𝐞𝐚𝐝𝐞𝐫𝐛𝐨𝐚𝐫𝐝! 🥳",
"raw": "𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐞𝐫𝐬 𝐀𝐠𝐞𝐧𝐭𝐬 𝐫𝐞𝐚𝐜𝐡𝐞𝐬 𝐭𝐡𝐞 𝐭𝐨𝐩 𝐨𝐟 𝐆𝐀𝐈𝐀 𝐥𝐞𝐚𝐝𝐞𝐫𝐛𝐨𝐚𝐫𝐝! 🥳",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We've been improving Transformers Agents a lot lately.",
"raw": "We've been improving Transformers Agents a lot lately.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "So with ",
"raw": "So with ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@sergeipetrov",
"resource": null,
"url": null,
"href": null,
"user": "sergeipetrov",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " we set out to prove that it's the best agent framework out there.",
"raw": " we set out to prove that it's the best agent framework out there.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "To prove this, we went to beat the 𝗚𝗔𝗜𝗔 𝗹𝗲𝗮𝗱𝗲𝗿𝗯𝗼𝗮𝗿𝗱, the most comprehensive benchmark out there for evaluating LLM agents.",
"raw": "To prove this, we went to beat the 𝗚𝗔𝗜𝗔 𝗹𝗲𝗮𝗱𝗲𝗿𝗯𝗼𝗮𝗿𝗱, the most comprehensive benchmark out there for evaluating LLM agents.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Its questions make you explore different flavours of pain:",
"raw": "Its questions make you explore different flavours of pain:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🛠️ 𝗥𝗲𝗾𝘂𝗶𝗿𝗲 𝘂𝘀𝗶𝗻𝗴 𝘁𝗼𝗼𝗹𝘀, at least a web browser",
"raw": "🛠️ 𝗥𝗲𝗾𝘂𝗶𝗿𝗲 𝘂𝘀𝗶𝗻𝗴 𝘁𝗼𝗼𝗹𝘀, at least a web browser",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔢 𝗥𝗶𝗴𝗼𝗿𝗼𝘂𝘀 𝗹𝗼𝗴𝗶𝗰, many questions having strong math aspects",
"raw": "🔢 𝗥𝗶𝗴𝗼𝗿𝗼𝘂𝘀 𝗹𝗼𝗴𝗶𝗰, many questions having strong math aspects",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🖼️ 𝗠𝘂𝗹𝘁𝗶𝗺𝗼𝗱𝗮𝗹, the agent had to handle all file types: 🔊, 🖼️, 🎬...",
"raw": "🖼️ 𝗠𝘂𝗹𝘁𝗶𝗺𝗼𝗱𝗮𝗹, the agent had to handle all file types: 🔊, 🖼️, 🎬...",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "👣 𝗠𝘂𝗹𝘁𝗶-𝘀𝘁𝗲𝗽, with many questions requiring over 10 steps to be solved.",
"raw": "👣 𝗠𝘂𝗹𝘁𝗶-𝘀𝘁𝗲𝗽, with many questions requiring over 10 steps to be solved.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Some Level 3 questions are crazy hard 😳",
"raw": "Some Level 3 questions are crazy hard 😳",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> \"In NASA’s Astronomy Picture of the Day on 2006 January 21, two astronauts are visible, with one appearing much smaller than the other. As of August 2023, out of the astronauts in the NASA Astronaut Group that the smaller astronaut was a member of, which one spent the least time in space, and how many minutes did he spend in space, rounded to the nearest minute?\"",
"raw": "> \"In NASA’s Astronomy Picture of the Day on 2006 January 21, two astronauts are visible, with one appearing much smaller than the other. As of August 2023, out of the astronauts in the NASA Astronaut Group that the smaller astronaut was a member of, which one spent the least time in space, and how many minutes did he spend in space, rounded to the nearest minute?\"",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "(𝘯𝘰 𝘧𝘪𝘭𝘦 𝘢𝘵𝘵𝘢𝘤𝘩𝘦𝘥 𝘰𝘧 𝘤𝘰𝘶𝘳𝘴𝘦, 𝘵𝘩𝘦 𝘢𝘨𝘦𝘯𝘵 𝘩𝘢𝘴 𝘵𝘰 𝘧𝘪𝘯𝘥 𝘢𝘭𝘭 𝘵𝘩𝘦 𝘪𝘯𝘧𝘰)",
"raw": "(𝘯𝘰 𝘧𝘪𝘭𝘦 𝘢𝘵𝘵𝘢𝘤𝘩𝘦𝘥 𝘰𝘧 𝘤𝘰𝘶𝘳𝘴𝘦, 𝘵𝘩𝘦 𝘢𝘨𝘦𝘯𝘵 𝘩𝘢𝘴 𝘵𝘰 𝘧𝘪𝘯𝘥 𝘢𝘭𝘭 𝘵𝘩𝘦 𝘪𝘯𝘧𝘰)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "➡️ We used Transformers Agents' React Code Agent, that writes its actions in code. We created a new planning component that we'll incorporate in the framework. More info soon in a blog post!",
"raw": "➡️ We used Transformers Agents' React Code Agent, that writes its actions in code. We created a new planning component that we'll incorporate in the framework. More info soon in a blog post!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "𝐑𝐞𝐬𝐮𝐥𝐭𝐬:",
"raw": "𝐑𝐞𝐬𝐮𝐥𝐭𝐬:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🚀 Our submission scores #2 overall on the test set and #1 on the validation set. On both sets we're the leading submission based on a public framework, beating Microsoft's Autogen.",
"raw": "🚀 Our submission scores #2 overall on the test set and #1 on the validation set. On both sets we're the leading submission based on a public framework, beating Microsoft's Autogen.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🥇 On both sets we are #1 on the hardest Level 3 questions, reaching nearly 20%.",
"raw": "🥇 On both sets we are #1 on the hardest Level 3 questions, reaching nearly 20%.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "𝙂𝙤 𝙘𝙝𝙚𝙘𝙠 𝙤𝙪𝙩 𝙩𝙝𝙚 𝙡𝙚𝙖𝙙𝙚𝙧𝙗𝙤𝙖𝙧𝙙 👉 ",
"raw": "𝙂𝙤 𝙘𝙝𝙚𝙘𝙠 𝙤𝙪𝙩 𝙩𝙝𝙚 𝙡𝙚𝙖𝙙𝙚𝙧𝙗𝙤𝙖𝙧𝙙 👉 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/gaia-benchmark/leaderboard",
"resource": {
"type": "space",
"id": "gaia-benchmark/leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/gaia-benchmark/leaderboard",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐞𝐫𝐬 𝐀𝐠𝐞𝐧𝐭𝐬 𝐫𝐞𝐚𝐜𝐡𝐞𝐬 𝐭𝐡𝐞 𝐭𝐨𝐩 𝐨𝐟 𝐆𝐀𝐈𝐀 𝐥𝐞𝐚𝐝𝐞𝐫𝐛𝐨𝐚𝐫𝐝! 🥳
We've been improving Transformers Agents a lot lately.
So with @sergeipetrov we set out to prove that it's the best agent framework out there.
To prove this, we went to beat the 𝗚𝗔𝗜𝗔 𝗹𝗲𝗮𝗱𝗲𝗿𝗯𝗼𝗮𝗿𝗱, the most comprehensive benchmark out there for evaluating LLM agents.
Its questions make you explore different flavours of pain:
🛠️ 𝗥𝗲𝗾𝘂𝗶𝗿𝗲 𝘂𝘀𝗶𝗻𝗴 𝘁𝗼𝗼𝗹𝘀, at least a web browser
🔢 𝗥𝗶𝗴𝗼𝗿𝗼𝘂𝘀 𝗹𝗼𝗴𝗶𝗰, many questions having strong math aspects
🖼️ 𝗠𝘂𝗹𝘁𝗶𝗺𝗼𝗱𝗮𝗹, the agent had to handle all file types: 🔊, 🖼️, 🎬...
👣 𝗠𝘂𝗹𝘁𝗶-𝘀𝘁𝗲𝗽, with many questions requiring over 10 steps to be solved.
Some Level 3 questions are crazy hard 😳
> "In NASA’s Astronomy Picture of the Day on 2006 January 21, two astronauts are visible, with one appearing much smaller than the other. As of August 2023, out of the astronauts in the NASA Astronaut Group that the smaller astronaut was a member of, which one spent the least time in space, and how many minutes did he spend in space, rounded to the nearest minute?"
(𝘯𝘰 𝘧𝘪𝘭𝘦 𝘢𝘵𝘵𝘢𝘤𝘩𝘦𝘥 𝘰𝘧 𝘤𝘰𝘶𝘳𝘴𝘦, 𝘵𝘩𝘦 𝘢𝘨𝘦𝘯𝘵 𝘩𝘢𝘴 𝘵𝘰 𝘧𝘪𝘯𝘥 𝘢𝘭𝘭 𝘵𝘩𝘦 𝘪𝘯𝘧𝘰)
➡️ We used Transformers Agents' React Code Agent, which writes its actions in code. We created a new planning component that we'll incorporate into the framework. More info soon in a blog post!
𝐑𝐞𝐬𝐮𝐥𝐭𝐬:
🚀 Our submission scores #2 overall on the test set and #1 on the validation set. On both sets we're the leading submission based on a public framework, beating Microsoft's Autogen.
🥇 On both sets we are #1 on the hardest Level 3 questions, reaching nearly 20%.
𝙂𝙤 𝙘𝙝𝙚𝙘𝙠 𝙤𝙪𝙩 𝙩𝙝𝙚 𝙡𝙚𝙖𝙙𝙚𝙧𝙗𝙤𝙖𝙧𝙙 👉 https://huggingface.co/spaces/gaia-benchmark/leaderboard | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/D78gS9F1gE6mwdbpyzT5K.jpeg",
"fullname": "Sergei Petrov",
"name": "sergeipetrov",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 42
}
] | [
{
"reaction": "🔥",
"users": [
"osanseviero",
"dangmn",
"sidbin"
],
"count": 3
}
] | 2024-06-27T12:42:26.000Z | 2024-07-09T14:08:43.592Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64a3efa56866210ffc6f83f1/WXRnceaov5ciVhCVJ4KRC.jpeg",
"fullname": "Siddharth",
"name": "sidbin",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476,
"isFollowing": false
}
] | /posts/m-ric/389133085535668 | 774 | 2 |
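Editor's sketch (related to the GAIA post above): running a ReAct-style code agent with Transformers Agents. The import path and constructor arguments reflect the transformers.agents API around the time of the post and may have changed since (the framework later evolved into smolagents); treat them as assumptions rather than a reference. The planning component described in the post is not sketched here.

# Minimal sketch: a ReAct code agent that plans step by step and writes each
# action as a Python snippet which is then executed.
from transformers.agents import ReactCodeAgent

agent = ReactCodeAgent(tools=[])  # no extra tools; default LLM engine (assumption)

answer = agent.run(
    "How many seconds would it take for a leopard at full speed to run through Pont des Arts?"
)
print(answer)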
153695739796256 | [
{
"type": "text",
"value": "We are happy to introduce MedIT SUN 1B, a downscaled version of the MedIT SUN 2.5B Llama 3.2 variant.",
"raw": "We are happy to introduce MedIT SUN 1B, a downscaled version of the MedIT SUN 2.5B Llama 3.2 variant.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Give it a try!",
"raw": "Give it a try!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/meditsolutions/Llama-3.2-SUN-1B-chat",
"resource": {
"type": "model",
"id": "meditsolutions/Llama-3.2-SUN-1B-chat",
"discussionNum": null
},
"url": "https://huggingface.co/meditsolutions/Llama-3.2-SUN-1B-chat",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | We are happy to introduce MedIT SUN 1B, a downscaled version of the MedIT SUN 2.5B Llama 3.2 variant.
Give it a try!
https://huggingface.co/meditsolutions/Llama-3.2-SUN-1B-chat | {
"avatarUrl": "/avatars/4fe71fbf6a7aa19380e38345b9de9d04.svg",
"fullname": "Mariusz Kurman",
"name": "mkurman",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"John6666"
],
"count": 1
}
] | 2024-11-03T20:34:28.000Z | 2024-11-03T20:34:37.873Z | [] | /posts/mkurman/153695739796256 | 676 | 0 |
353004317978723 | [
{
"type": "text",
"value": "Do you guys want to see my training code for ",
"raw": "Do you guys want to see my training code for ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/nroggendorff/smallama",
"resource": {
"type": "model",
"id": "nroggendorff/smallama",
"discussionNum": null
},
"url": "https://huggingface.co/nroggendorff/smallama",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ?",
"raw": " ?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Do you guys want to see my training code for https://huggingface.co/nroggendorff/smallama ? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
} | [] | [] | [
{
"reaction": "😎",
"users": [
"John6666"
],
"count": 1
}
] | 2024-11-03T20:04:09.000Z | 2024-11-04T13:54:09.503Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6640bbd0220cfa8cbfdce080/wiAHUu5ewawyipNs0YFBR.png",
"fullname": "John Smith",
"name": "John6666",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 384,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
}
] | /posts/nroggendorff/353004317978723 | 598 | 3 |
941309448600578 | [
{
"type": "text",
"value": "Imagine being able to talk directly to your API connection. \"I have a field in the CRM named Customer_ID that needs to map to a field in the ERP named ERP_Customer_ID.\" Imagine being able to give your API connections both a brain and swarm of agents as a body to execute any task or function. This isn't science fiction, this is the revolutionary power of Liquid API. A product 10 years in the making!",
"raw": "Imagine being able to talk directly to your API connection. \"I have a field in the CRM named Customer_ID that needs to map to a field in the ERP named ERP_Customer_ID.\" Imagine being able to give your API connections both a brain and swarm of agents as a body to execute any task or function. This isn't science fiction, this is the revolutionary power of Liquid API. A product 10 years in the making!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/cHI_k1Dkdr4",
"resource": null,
"url": null,
"href": "https://youtu.be/cHI_k1Dkdr4",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Imagine being able to talk directly to your API connection. "I have a field in the CRM named Customer_ID that needs to map to a field in the ERP named ERP_Customer_ID." Imagine being able to give your API connections both a brain and a swarm of agents as a body to execute any task or function. This isn't science fiction; this is the revolutionary power of Liquid API. A product 10 years in the making!
https://youtu.be/cHI_k1Dkdr4
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64274b69ba6cef0a6ebb0fd6/6FvfD9bfR9oHm10LvOgRH.jpeg"
}
] | [] | [
{
"reaction": "👀",
"users": [
"John6666",
"HamedEmine"
],
"count": 2
}
] | 2024-11-03T18:50:51.000Z | 2024-11-04T18:20:45.252Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1653051419389-62878fdc70af5d9106e3e892.png",
"fullname": "K S",
"name": "MultiTrickFox",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 5,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
}
] | /posts/TuringsSolutions/941309448600578 | 566 | 2 |
555798537911917 | [
{
"type": "text",
"value": "New Mann-E model just released:",
"raw": "New Mann-E model just released:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/mann-e/mann-e_flux",
"resource": {
"type": "model",
"id": "mann-e/mann-e_flux",
"discussionNum": null
},
"url": "https://huggingface.co/mann-e/mann-e_flux",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I will be glad if you test it!",
"raw": "I will be glad if you test it!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | New Mann-E model just released:
https://huggingface.co/mann-e/mann-e_flux
I will be glad if you test it! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/637251142f98dcc049b349de/kkRLjyaO55_nFrTNWRZFQ.jpeg",
"fullname": "Haghiri",
"name": "Muhammadreza",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 27,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"John6666"
],
"count": 1
}
] | 2024-11-03T16:44:25.000Z | 2024-11-03T16:44:25.016Z | [] | /posts/Muhammadreza/555798537911917 | 495 | 0 |
342387295885636 | [
{
"type": "text",
"value": "hi everyone,",
"raw": "hi everyone,",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "i have trained a Qwen 14b model on a smaller dataset, but its now very tricky because i have got nowhere to use it via inference (the paid for inference on hf costs quite a lot), does anyone know of anywhere where i can deploy my model and use it via api for a reasonable cost, or ideally none. thanks",
"raw": "i have trained a Qwen 14b model on a smaller dataset, but its now very tricky because i have got nowhere to use it via inference (the paid for inference on hf costs quite a lot), does anyone know of anywhere where i can deploy my model and use it via api for a reasonable cost, or ideally none. thanks",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi everyone,
I have trained a Qwen 14B model on a smaller dataset, but it's now very tricky because I have nowhere to use it via inference (the paid inference on HF costs quite a lot). Does anyone know of anywhere I can deploy my model and use it via API for a reasonable cost, or ideally none? Thanks | {
"avatarUrl": "/avatars/7be1913712fdd1ffe75967ed19007720.svg",
"fullname": "stock mining",
"name": "automatedstockminingorg",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 9,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"John6666",
"robertomachorro",
"hakutaku",
"victor"
],
"count": 4
}
] | 2024-11-03T08:10:19.000Z | 2024-11-04T12:14:05.992Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6640bbd0220cfa8cbfdce080/wiAHUu5ewawyipNs0YFBR.png",
"fullname": "John Smith",
"name": "John6666",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 384,
"isFollowing": false
},
{
"avatarUrl": "/avatars/1c2788196f8786f8fc259e60403a64f5.svg",
"fullname": "Jelle De Loecker",
"name": "skerit",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
},
{
"avatarUrl": "/avatars/c0d7fc43144c8ec3ca2aac1cef0d6f98.svg",
"fullname": "Jack Smith",
"name": "hakutaku",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6374bb2119c264fe6fb3153c/sE9OAyFexJkGoWea_8Oy_.png",
"fullname": "Nyaribari Reuben",
"name": "foscraft",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "/avatars/5565505abdd4ab3dbc958c9e63ba12ff.svg",
"fullname": "Simoes",
"name": "joaomsimoes",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/automatedstockminingorg/342387295885636 | 2,331 | 6 |
350161263239420 | [
{
"type": "text",
"value": "LLaMA-O1: Open Large Reasoning Model Frameworks For Training, Inference and Evaluation With PyTorch and HuggingFace",
"raw": "LLaMA-O1: Open Large Reasoning Model Frameworks For Training, Inference and Evaluation With PyTorch and HuggingFace",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Large Reasoning Models powered by Monte Carlo Tree Search (MCTS), Self-Play Reinforcement Learning, PPO, AlphaGo Zero's dua policy paradigm and Large Language Models! ",
"raw": "Large Reasoning Models powered by Monte Carlo Tree Search (MCTS), Self-Play Reinforcement Learning, PPO, AlphaGo Zero's dua policy paradigm and Large Language Models! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/SimpleBerry/LLaMA-O1/",
"resource": null,
"url": null,
"href": "https://github.com/SimpleBerry/LLaMA-O1/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "What will happen when you compound MCTS ❤ LLM ❤ Self-Play ❤RLHF?",
"raw": "What will happen when you compound MCTS ❤ LLM ❤ Self-Play ❤RLHF?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Just a little bite of strawberry!🍓",
"raw": "Just a little bite of strawberry!🍓",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Past related works:",
"raw": "Past related works:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2410.02884",
"resource": {
"type": "paper",
"id": "2410.02884",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2410.02884",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level\n Mathematical Reasoning (2410.02884)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2406.07394",
"resource": {
"type": "paper",
"id": "2406.07394",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2406.07394",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo\n Tree Self-refine with LLaMa-3 8B (2406.07394)"
}
] | LLaMA-O1: Open Large Reasoning Model Frameworks For Training, Inference and Evaluation With PyTorch and HuggingFace
Large Reasoning Models powered by Monte Carlo Tree Search (MCTS), Self-Play Reinforcement Learning, PPO, AlphaGo Zero's dual policy paradigm and Large Language Models!
https://github.com/SimpleBerry/LLaMA-O1/
What will happen when you compound MCTS ❤ LLM ❤ Self-Play ❤ RLHF?
Just a little bite of strawberry!🍓
Past related works:
https://huggingface.co/papers/2410.02884
https://huggingface.co/papers/2406.07394 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64bce15bafd1e46c5504ad38/bQFX1iFbXEBXcQvUNL811.png",
"fullname": "Di Zhang",
"name": "qq8933",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 106,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64bce15bafd1e46c5504ad38/mrGEDFPp9QC7jZ7cOXBVH.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64bce15bafd1e46c5504ad38/b859bDNIaOVTFjif1f7cU.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"iwancobain",
"John6666",
"zsqzz",
"jwu323",
"ALYTV",
"Azzedde",
"Svngoku",
"timmylai",
"nbroad",
"csabakecskemeti",
"Syzygianinfern0",
"sekkit",
"flozi00",
"qftop",
"victor",
"ajibawa-2023",
"seyf1elislam",
"KvrParaskevi",
"dingo-actual",
"createtheimaginable",
"ai-everyday"
],
"count": 21
},
{
"reaction": "🔥",
"users": [
"prithivMLmods",
"createtheimaginable",
"jwu323"
],
"count": 3
}
] | 2024-11-03T02:03:27.000Z | 2024-11-05T05:20:37.552Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/MNuArctG3OwNdey9j44Os.jpeg",
"fullname": "Paraskevi Kivroglou",
"name": "KvrParaskevi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 6,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64bce15bafd1e46c5504ad38/bQFX1iFbXEBXcQvUNL811.png",
"fullname": "Di Zhang",
"name": "qq8933",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 106,
"isFollowing": false
}
] | /posts/qq8933/350161263239420 | 5,382 | 2 |
757664580544837 | [
{
"type": "text",
"value": "Hi there HuggingFacers!🤗",
"raw": "Hi there HuggingFacers!🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Are you working with Streamlit on Spaces and struggling with authentication and user management?🧐",
"raw": "Are you working with Streamlit on Spaces and struggling with authentication and user management?🧐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Well, you can check out my last community article (",
"raw": "Well, you can check out my last community article (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/as-cle-bert/streamlit-supabase-auth-ui",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/as-cle-bert/streamlit-supabase-auth-ui",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") on a new python package I've been working on, that connects Supabase to Streamlit UI, in order to create a seamless authentication for your seamless Streamlit apps!🚀",
"raw": ") on a new python package I've been working on, that connects Supabase to Streamlit UI, in order to create a seamless authentication for your seamless Streamlit apps!🚀",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "You can find a demo of it on Spaces: ",
"raw": "You can find a demo of it on Spaces: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/streamlit-supabase-auth-ui",
"resource": {
"type": "space",
"id": "as-cle-bert/streamlit-supabase-auth-ui",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/streamlit-supabase-auth-ui",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Have fun!🍕",
"raw": "Have fun!🍕",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi there HuggingFacers!🤗
Are you working with Streamlit on Spaces and struggling with authentication and user management?🧐
Well, you can check out my last community article (https://huggingface.co/blog/as-cle-bert/streamlit-supabase-auth-ui) on a new Python package I've been working on that connects Supabase to the Streamlit UI, in order to create seamless authentication for your Streamlit apps!🚀
You can find a demo of it on Spaces: https://huggingface.co/spaces/as-cle-bert/streamlit-supabase-auth-ui
Have fun!🍕 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65e330e7edc2f7306e252448/R2Nu4rNbJB-lBe7wQaQoZ.png"
}
] | [] | [
{
"reaction": "👀",
"users": [
"John6666"
],
"count": 1
}
] | 2024-11-03T01:53:16.000Z | 2024-11-03T01:53:16.219Z | [] | /posts/as-cle-bert/757664580544837 | 712 | 0 |
624245127298035 | [
{
"type": "text",
"value": "OmniGen 1-Click Automatic Installers for Windows, RunPod and Massed Compute",
"raw": "OmniGen 1-Click Automatic Installers for Windows, RunPod and Massed Compute",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "OmniGen is a unified image generation model that can generate a wide range of images from multi-modal prompts. It is designed to be simple, flexible, and easy to use",
"raw": "OmniGen is a unified image generation model that can generate a wide range of images from multi-modal prompts. It is designed to be simple, flexible, and easy to use",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Installers are here : ",
"raw": "Installers are here : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.patreon.com/posts/omnigen-1-click-115233922",
"resource": null,
"url": null,
"href": "https://www.patreon.com/posts/omnigen-1-click-115233922",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Look attached images to understand what capabilities it has. It is simply amazing so many features.",
"raw": "Look attached images to understand what capabilities it has. It is simply amazing so many features.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "What is OmniGen : ",
"raw": "What is OmniGen : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/VectorSpaceLab/OmniGen",
"resource": null,
"url": null,
"href": "https://github.com/VectorSpaceLab/OmniGen",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Windows Requirements",
"raw": "Windows Requirements",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Python 3.10.11, CUDA 12.4, Git, FFMPEG, cuDNN 9.x, C++ Tools",
"raw": "Python 3.10.11, CUDA 12.4, Git, FFMPEG, cuDNN 9.x, C++ Tools",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "A tutorial that shows how to install all above : ",
"raw": "A tutorial that shows how to install all above : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/DrhUHnYfwC0",
"resource": null,
"url": null,
"href": "https://youtu.be/DrhUHnYfwC0",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "How To Install & Use",
"raw": "How To Install & Use",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "After installing requirements by following above tutorial, double-click Windows_Install.bat and install",
"raw": "After installing requirements by following above tutorial, double-click Windows_Install.bat and install",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "After that use Windows_Start.bat to start the app",
"raw": "After that use Windows_Start.bat to start the app",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "When offload_model is enabled (checked) on the Gradio interface, it uses 5.4 GB VRAM, 2x slower",
"raw": "When offload_model is enabled (checked) on the Gradio interface, it uses 5.4 GB VRAM, 2x slower",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "When offload_model is not used (not checked) it uses 12.2 GB VRAM",
"raw": "When offload_model is not used (not checked) it uses 12.2 GB VRAM",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "When separate_cfg_infer is not checked, and offload_model is not checked, it uses 18.7 GB VRAM",
"raw": "When separate_cfg_infer is not checked, and offload_model is not checked, it uses 18.7 GB VRAM",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "To install on RunPod and Massed Compute please follow Massed_Compute_Instructions_READ.txt and Runpod_Instructions_READ.txt",
"raw": "To install on RunPod and Massed Compute please follow Massed_Compute_Instructions_READ.txt and Runpod_Instructions_READ.txt",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Look at the examples on the Gradio interface closely to understand how to use",
"raw": "Look at the examples on the Gradio interface closely to understand how to use",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | OmniGen 1-Click Automatic Installers for Windows, RunPod and Massed Compute
OmniGen is a unified image generation model that can generate a wide range of images from multi-modal prompts. It is designed to be simple, flexible, and easy to use
Installers are here : https://www.patreon.com/posts/omnigen-1-click-115233922
Look at the attached images to understand what capabilities it has. It is simply amazing, with so many features.
What is OmniGen : https://github.com/VectorSpaceLab/OmniGen
Windows Requirements
Python 3.10.11, CUDA 12.4, Git, FFMPEG, cuDNN 9.x, C++ Tools
A tutorial that shows how to install all above : https://youtu.be/DrhUHnYfwC0
How To Install & Use
After installing requirements by following above tutorial, double-click Windows_Install.bat and install
After that use Windows_Start.bat to start the app
When offload_model is enabled (checked) on the Gradio interface, it uses 5.4 GB VRAM, 2x slower
When offload_model is not used (not checked) it uses 12.2 GB VRAM
When separate_cfg_infer is not checked, and offload_model is not checked, it uses 18.7 GB VRAM
To install on RunPod and Massed Compute please follow Massed_Compute_Instructions_READ.txt and Runpod_Instructions_READ.txt
Look closely at the examples on the Gradio interface to understand how to use it | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/9jJqr0eQ__GVOAVi6RK-v.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/ObPa3ED-koiopiILWpCEL.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/bqfftCZBIM1oPiuZAJizx.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/ECkZO8sSE7xkMItnEwMKP.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/9xNfZcE64otFihHNYcIia.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/wXc7MB8y_1NoO3Eyi9Nul.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/DUeqB5h292pOsPZLXDYPV.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/DE4guYhS5R_sIqXeRWB2N.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/9o08lMKsIUL_hHpMv5dd8.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/bchgqPK4FuDGtChyRO-km.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/R19gM1TBiUXx25TaEPE6P.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/a6fJyqKWzktM7XItQi6YS.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/ypb4Gpon3fPC9h6hyizIw.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/TyvnlVHiHfquSt-Ocnxov.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/TpdHw9jlDnr7Oda4qocka.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/j_of6o5GVgtbI99D1qyUF.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/_mEcBVoCzFeu-5BKK22yY.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/VPAbn4venQ3CzTz8pW0m_.jpeg"
}
] | [] | [
{
"reaction": "🤯",
"users": [
"MonsterMMORPG",
"CYGDEN",
"salemseidmohamed",
"Sethblocks",
"clem"
],
"count": 5
},
{
"reaction": "👀",
"users": [
"MonsterMMORPG",
"John6666",
"CYGDEN",
"clem"
],
"count": 4
},
{
"reaction": "❤️",
"users": [
"MonsterMMORPG",
"CYGDEN",
"Vitorvolk",
"clem"
],
"count": 4
},
{
"reaction": "🔥",
"users": [
"MonsterMMORPG",
"CYGDEN"
],
"count": 2
},
{
"reaction": "🤗",
"users": [
"MonsterMMORPG",
"CYGDEN"
],
"count": 2
},
{
"reaction": "😎",
"users": [
"MonsterMMORPG",
"CYGDEN"
],
"count": 2
},
{
"reaction": "➕",
"users": [
"MonsterMMORPG",
"CYGDEN"
],
"count": 2
},
{
"reaction": "🧠",
"users": [
"MonsterMMORPG",
"CYGDEN"
],
"count": 2
},
{
"reaction": "👍",
"users": [
"MonsterMMORPG",
"CYGDEN"
],
"count": 2
},
{
"reaction": "🚀",
"users": [
"MonsterMMORPG"
],
"count": 1
},
{
"reaction": "🤝",
"users": [
"MonsterMMORPG"
],
"count": 1
}
] | 2024-11-03T00:48:46.000Z | 2024-11-03T00:48:46.811Z | [] | /posts/MonsterMMORPG/624245127298035 | 2,460 | 0 |
982010938778742 | [
{
"type": "text",
"value": "Smol models ftw! AMD released AMD OLMo 1B - beats OpenELM, tiny llama on MT Bench, Alpaca Eval - Apache 2.0 licensed 🔥",
"raw": "Smol models ftw! AMD released AMD OLMo 1B - beats OpenELM, tiny llama on MT Bench, Alpaca Eval - Apache 2.0 licensed 🔥",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Trained with 1.3 trillion (dolma 1.7) tokens on 16 nodes, each with 4 MI250 GPUs",
"raw": "> Trained with 1.3 trillion (dolma 1.7) tokens on 16 nodes, each with 4 MI250 GPUs",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Three checkpoints:",
"raw": "> Three checkpoints:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- AMD OLMo 1B: Pre-trained model",
"raw": "- AMD OLMo 1B: Pre-trained model",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- AMD OLMo 1B SFT: Supervised fine-tuned on Tulu V2, OpenHermes-2.5, WebInstructSub, and Code-Feedback datasets",
"raw": "- AMD OLMo 1B SFT: Supervised fine-tuned on Tulu V2, OpenHermes-2.5, WebInstructSub, and Code-Feedback datasets",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- AMD OLMo 1B SFT DPO: Aligned with human preferences using Direct Preference Optimization (DPO) on UltraFeedback dataset",
"raw": "- AMD OLMo 1B SFT DPO: Aligned with human preferences using Direct Preference Optimization (DPO) on UltraFeedback dataset",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Key Insights: ",
"raw": "Key Insights: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Pre-trained with less than half the tokens of OLMo-1B",
"raw": "> Pre-trained with less than half the tokens of OLMo-1B",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Post-training steps include two-phase SFT and DPO alignment",
"raw": "> Post-training steps include two-phase SFT and DPO alignment",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Data for SFT:",
"raw": "> Data for SFT:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Phase 1: Tulu V2",
"raw": "- Phase 1: Tulu V2",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Phase 2: OpenHermes-2.5, WebInstructSub, and Code-Feedback",
"raw": "- Phase 2: OpenHermes-2.5, WebInstructSub, and Code-Feedback",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Model checkpoints on the Hub & Integrated with Transformers ⚡️",
"raw": "> Model checkpoints on the Hub & Integrated with Transformers ⚡️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Congratulations & kudos to AMD on a brilliant smol model release! 🤗",
"raw": "Congratulations & kudos to AMD on a brilliant smol model release! 🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/amd/amd-olmo-6723e7d04a49116d8ec95070",
"resource": {
"type": "collection",
"id": "amd/amd-olmo-6723e7d04a49116d8ec95070",
"discussionNum": null
},
"url": "https://huggingface.co/collections/amd/amd-olmo-6723e7d04a49116d8ec95070",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Smol models ftw! AMD released AMD OLMo 1B - beats OpenELM, TinyLlama on MT Bench, Alpaca Eval - Apache 2.0 licensed 🔥
> Trained with 1.3 trillion (dolma 1.7) tokens on 16 nodes, each with 4 MI250 GPUs
> Three checkpoints:
- AMD OLMo 1B: Pre-trained model
- AMD OLMo 1B SFT: Supervised fine-tuned on Tulu V2, OpenHermes-2.5, WebInstructSub, and Code-Feedback datasets
- AMD OLMo 1B SFT DPO: Aligned with human preferences using Direct Preference Optimization (DPO) on UltraFeedback dataset
Key Insights:
> Pre-trained with less than half the tokens of OLMo-1B
> Post-training steps include two-phase SFT and DPO alignment
> Data for SFT:
- Phase 1: Tulu V2
- Phase 2: OpenHermes-2.5, WebInstructSub, and Code-Feedback
> Model checkpoints on the Hub & Integrated with Transformers ⚡️
Congratulations & kudos to AMD on a brilliant smol model release! 🤗
https://huggingface.co/collections/amd/amd-olmo-6723e7d04a49116d8ec95070 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655385361868-61b85ce86eb1f2c5e6233736.jpeg",
"fullname": "Vaibhav Srivastav",
"name": "reach-vb",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 439,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61b85ce86eb1f2c5e6233736/ElAtLjRyGDjUarACUyqlP.jpeg"
}
] | [] | [
{
"reaction": "🚀",
"users": [
"AtAndDev",
"John6666",
"CYGDEN",
"louisbrulenaudet",
"kimleang123",
"Dolfini",
"Joseph717171",
"not-lain",
"KvrParaskevi"
],
"count": 9
},
{
"reaction": "🔥",
"users": [
"AtAndDev",
"CYGDEN",
"Joseph717171",
"rizky-gumelar",
"not-lain"
],
"count": 5
}
] | 2024-11-02T17:40:04.000Z | 2024-11-02T17:40:19.088Z | [] | /posts/reach-vb/982010938778742 | 2,942 | 0 |
380906232971746 | [
{
"type": "text",
"value": "🧠 𝗖𝗟𝗘𝗔𝗥: 𝗳𝗶𝗿𝘀𝘁 𝗺𝘂𝗹𝘁𝗶𝗺𝗼𝗱𝗮𝗹 𝗯𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝘁𝗼 𝗺𝗮𝗸𝗲 𝗺𝗼𝗱𝗲𝗹𝘀 𝗳𝗼𝗿𝗴𝗲𝘁 𝘄𝗵𝗮𝘁 𝘄𝗲 𝘄𝗮𝗻𝘁 𝘁𝗵𝗲𝗺 𝘁𝗼 𝗳𝗼𝗿𝗴𝗲𝘁",
"raw": "🧠 𝗖𝗟𝗘𝗔𝗥: 𝗳𝗶𝗿𝘀𝘁 𝗺𝘂𝗹𝘁𝗶𝗺𝗼𝗱𝗮𝗹 𝗯𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝘁𝗼 𝗺𝗮𝗸𝗲 𝗺𝗼𝗱𝗲𝗹𝘀 𝗳𝗼𝗿𝗴𝗲𝘁 𝘄𝗵𝗮𝘁 𝘄𝗲 𝘄𝗮𝗻𝘁 𝘁𝗵𝗲𝗺 𝘁𝗼 𝗳𝗼𝗿𝗴𝗲𝘁",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "With privacy concerns rising, we sometimes need our models to \"forget\" specific information - like a person's data - while keeping everything else intact. Researchers just released CLEAR, the first benchmark to test how well this works with both text and images.",
"raw": "With privacy concerns rising, we sometimes need our models to \"forget\" specific information - like a person's data - while keeping everything else intact. Researchers just released CLEAR, the first benchmark to test how well this works with both text and images.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "❌ Bad news: Current methods either fail to truly forget or end up forgetting way too much. It's like trying to remove a single ingredient from a baked cake!",
"raw": "❌ Bad news: Current methods either fail to truly forget or end up forgetting way too much. It's like trying to remove a single ingredient from a baked cake!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✨ But there's hope: Adding simple mathematical constraints (L1 regularization) during the forgetting process significantly improves results.",
"raw": "✨ But there's hope: Adding simple mathematical constraints (L1 regularization) during the forgetting process significantly improves results.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🎯 Key insights:",
"raw": "🎯 Key insights:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✅ The benchmark tests forgetting on 200 fictional personas",
"raw": "✅ The benchmark tests forgetting on 200 fictional personas",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "‣ 3,770 visual Q&A pairs",
"raw": "‣ 3,770 visual Q&A pairs",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "‣ 4,000 textual Q&A pairs",
"raw": "‣ 4,000 textual Q&A pairs",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "‣ Additional real-world tests",
"raw": "‣ Additional real-world tests",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🛑 Most current forgetting methods don't work well with both text and images",
"raw": "🛑 Most current forgetting methods don't work well with both text and images",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "‣ They either remember what they should forget",
"raw": "‣ They either remember what they should forget",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "‣ Or they forget too much unrelated information",
"raw": "‣ Or they forget too much unrelated information",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✨ Simple mathematical constraints work surprisingly well",
"raw": "✨ Simple mathematical constraints work surprisingly well",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "‣ L1 regularization prevents excessive forgetting",
"raw": "‣ L1 regularization prevents excessive forgetting",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "‣ Works especially well with the LLMU method",
"raw": "‣ Works especially well with the LLMU method",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "👉 Read the full paper here: ",
"raw": "👉 Read the full paper here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2410.18057",
"resource": {
"type": "paper",
"id": "2410.18057",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2410.18057",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "CLEAR: Character Unlearning in Textual and Visual Modalities (2410.18057)"
}
] | 🧠 𝗖𝗟𝗘𝗔𝗥: 𝗳𝗶𝗿𝘀𝘁 𝗺𝘂𝗹𝘁𝗶𝗺𝗼𝗱𝗮𝗹 𝗯𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝘁𝗼 𝗺𝗮𝗸𝗲 𝗺𝗼𝗱𝗲𝗹𝘀 𝗳𝗼𝗿𝗴𝗲𝘁 𝘄𝗵𝗮𝘁 𝘄𝗲 𝘄𝗮𝗻𝘁 𝘁𝗵𝗲𝗺 𝘁𝗼 𝗳𝗼𝗿𝗴𝗲𝘁
With privacy concerns rising, we sometimes need our models to "forget" specific information - like a person's data - while keeping everything else intact. Researchers just released CLEAR, the first benchmark to test how well this works with both text and images.
❌ Bad news: Current methods either fail to truly forget or end up forgetting way too much. It's like trying to remove a single ingredient from a baked cake!
✨ But there's hope: Adding simple mathematical constraints (L1 regularization) during the forgetting process significantly improves results.
🎯 Key insights:
✅ The benchmark tests forgetting on 200 fictional personas
‣ 3,770 visual Q&A pairs
‣ 4,000 textual Q&A pairs
‣ Additional real-world tests
🛑 Most current forgetting methods don't work well with both text and images
‣ They either remember what they should forget
‣ Or they forget too much unrelated information
✨ Simple mathematical constraints work surprisingly well
‣ L1 regularization prevents excessive forgetting
‣ Works especially well with the LLMU method
👉 Read the full paper here: https://huggingface.co/papers/2410.18057 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63d10d4e8eaa4831005e92b5/j7qbmAMixW9v9FdqonSWK.png"
}
] | [] | [
{
"reaction": "👀",
"users": [
"John6666",
"CYGDEN",
"xpgx1"
],
"count": 3
}
] | 2024-11-02T14:52:11.000Z | 2024-11-02T14:52:11.476Z | [] | /posts/m-ric/380906232971746 | 1,533 | 0 |
995227900466142 | [
{
"type": "text",
"value": "I am delighted to announce the publication of my LegalKit, a French labeled dataset built for legal ML training 🤗",
"raw": "I am delighted to announce the publication of my LegalKit, a French labeled dataset built for legal ML training 🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This dataset comprises multiple query-document pairs (+50k) curated for training sentence embedding models within the domain of French law.",
"raw": "This dataset comprises multiple query-document pairs (+50k) curated for training sentence embedding models within the domain of French law.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The labeling process follows a systematic approach to ensure consistency and relevance:",
"raw": "The labeling process follows a systematic approach to ensure consistency and relevance:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Initial Query Generation: Three instances of the LLaMA-3-70B model independently generate three different queries based on the same document.",
"raw": "- Initial Query Generation: Three instances of the LLaMA-3-70B model independently generate three different queries based on the same document.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Selection of Optimal Query: A fourth instance of the LLaMA-3-70B model, using a dedicated selection prompt, evaluates the generated queries and selects the most suitable one.",
"raw": "- Selection of Optimal Query: A fourth instance of the LLaMA-3-70B model, using a dedicated selection prompt, evaluates the generated queries and selects the most suitable one.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Final Label Assignment: The chosen query is used to label the document, aiming to ensure that the label accurately reflects the content and context of the original text.",
"raw": "- Final Label Assignment: The chosen query is used to label the document, aiming to ensure that the label accurately reflects the content and context of the original text.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Dataset: ",
"raw": "Dataset: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/louisbrulenaudet/legalkit",
"resource": {
"type": "dataset",
"id": "louisbrulenaudet/legalkit",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/louisbrulenaudet/legalkit",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Stay tuned for further updates and release information 🔥",
"raw": "Stay tuned for further updates and release information 🔥",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@clem",
"resource": null,
"url": null,
"href": null,
"user": "clem",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", if we can create an \"HF for Legal\" organization, similar to what exists for journalists, I am available!",
"raw": ", if we can create an \"HF for Legal\" organization, similar to what exists for journalists, I am available!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Note : My special thanks to ",
"raw": "Note : My special thanks to ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@alvdansen",
"resource": null,
"url": null,
"href": null,
"user": "alvdansen",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " for their illustration models ❤️",
"raw": " for their illustration models ❤️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I am delighted to announce the publication of my LegalKit, a French labeled dataset built for legal ML training 🤗
This dataset comprises multiple query-document pairs (+50k) curated for training sentence embedding models within the domain of French law.
The labeling process follows a systematic approach to ensure consistency and relevance:
- Initial Query Generation: Three instances of the LLaMA-3-70B model independently generate three different queries based on the same document.
- Selection of Optimal Query: A fourth instance of the LLaMA-3-70B model, using a dedicated selection prompt, evaluates the generated queries and selects the most suitable one.
- Final Label Assignment: The chosen query is used to label the document, aiming to ensure that the label accurately reflects the content and context of the original text.
Dataset: https://huggingface.co/datasets/louisbrulenaudet/legalkit
Stay tuned for further updates and release information 🔥
@clem, if we can create an "HF for Legal" organization, similar to what exists for journalists, I am available!
Note: My special thanks to @alvdansen for their illustration models ❤️ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6459fa0f5b3111fbe83286e1/UhCa7JNbtTjC6dgOjZtH0.jpeg",
"fullname": "Louis Brulé Naudet",
"name": "louisbrulenaudet",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 176,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6459fa0f5b3111fbe83286e1/750l2imQS8uYhBN97rAUw.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1734
}
] | [
{
"reaction": "👍",
"users": [
"fffiloni",
"ZeroWw",
"Ramikan-BR",
"ssatz",
"davanstrien",
"clem",
"osanseviero",
"baconnier"
],
"count": 8
},
{
"reaction": "❤️",
"users": [
"Ramikan-BR",
"clem",
"umseeker",
"baconnier",
"louisbrulenaudet",
"LeroyDyer",
"sbrandeis"
],
"count": 7
},
{
"reaction": "🚀",
"users": [
"KvrParaskevi",
"Ramikan-BR",
"antoinejeannot"
],
"count": 3
},
{
"reaction": "👀",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "🧠",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-06-27T06:26:01.000Z | 2024-06-27T15:52:19.621Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1734,
"isFollowing": false
}
] | /posts/louisbrulenaudet/995227900466142 | 3,190 | 2 |
871817872658842 | [
{
"type": "text",
"value": "ESM3 is up and running in a Space if you'd like to try it out: ",
"raw": "ESM3 is up and running in a Space if you'd like to try it out: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/colbyford/esm3",
"resource": {
"type": "space",
"id": "colbyford/esm3",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/colbyford/esm3",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "(You'll need to make a HF token and accept their license agreement to use the Space app.)",
"raw": "(You'll need to make a HF token and accept their license agreement to use the Space app.)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@AmelieSchreiber",
"resource": null,
"url": null,
"href": null,
"user": "AmelieSchreiber",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ESM3 is up and running in a Space if you'd like to try it out: https://huggingface.co/spaces/colbyford/esm3
(You'll need to make an HF token and accept their license agreement to use the Space app.)
@AmelieSchreiber | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1673970109381-noauth.jpeg",
"fullname": "Colby T. Ford",
"name": "colbyford",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 9,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64191ec8d459c9e7fbb0236b/7BeTgySZzmFCaVpntaYgP.jpeg",
"fullname": "Amelie Schreiber",
"name": "AmelieSchreiber",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 716
}
] | [
{
"reaction": "🔥",
"users": [
"osanseviero"
],
"count": 1
}
] | 2024-06-26T23:52:55.000Z | 2024-06-26T23:57:37.208Z | [] | /posts/colbyford/871817872658842 | 829 | 0 |
262476258387680 | [
{
"type": "text",
"value": "We've open-sourced the code and models for Self-Play Preference Optimization (SPPO)! 🚀🚀🚀",
"raw": "We've open-sourced the code and models for Self-Play Preference Optimization (SPPO)! 🚀🚀🚀",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🤗paper: ",
"raw": "🤗paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2405.00675",
"resource": {
"type": "paper",
"id": "2405.00675",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2405.00675",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Self-Play Preference Optimization for Language Model Alignment (2405.00675)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ⭐ code: ",
"raw": " ⭐ code: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/uclaml/SPPO",
"resource": null,
"url": null,
"href": "https://github.com/uclaml/SPPO",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🤗models: ",
"raw": "🤗models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/UCLA-AGI/sppo-6635fdd844f2b2e4a94d0b9a",
"resource": {
"type": "collection",
"id": "UCLA-AGI/sppo-6635fdd844f2b2e4a94d0b9a",
"discussionNum": null
},
"url": "https://huggingface.co/collections/UCLA-AGI/sppo-6635fdd844f2b2e4a94d0b9a",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | We've open-sourced the code and models for Self-Play Preference Optimization (SPPO)! 🚀🚀🚀
🤗paper: https://huggingface.co/papers/2405.00675
⭐ code: https://github.com/uclaml/SPPO
🤗models: https://huggingface.co/collections/UCLA-AGI/sppo-6635fdd844f2b2e4a94d0b9a | {
"avatarUrl": "/avatars/06cc76feebba0cc80ebb8f4ff86f6d9b.svg",
"fullname": "Quanquan Gu",
"name": "thughost",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 22,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"jakemannix",
"Tonic",
"AdinaY"
],
"count": 3
}
] | 2024-06-26T23:00:28.000Z | 2024-06-26T23:00:28.095Z | [] | /posts/thughost/262476258387680 | 691 | 0 |
549104611630792 | [
{
"type": "text",
"value": "🤗 Hello from the Project Fluently team!",
"raw": "🤗 Hello from the Project Fluently team!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🥏 We are ready to announce a new series of Supple Diffusion models, these are new generation diffusion models (about 1-2 weeks left before release).",
"raw": "🥏 We are ready to announce a new series of Supple Diffusion models, these are new generation diffusion models (about 1-2 weeks left before release).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🦾 The new series aims to take diffusion models to the next level, with performance and versatility as the main goal.",
"raw": "🦾 The new series aims to take diffusion models to the next level, with performance and versatility as the main goal.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🧐 How will our models be better than others? Firstly, we worked on the CLIP models, now they understand your requests better, it will become easier to process. Secondly, we trained the models with high quality, even better than all our previous ones. Thirdly, you won’t have to keep 20 models on your disk; only 4-6 will be enough.",
"raw": "🧐 How will our models be better than others? Firstly, we worked on the CLIP models, now they understand your requests better, it will become easier to process. Secondly, we trained the models with high quality, even better than all our previous ones. Thirdly, you won’t have to keep 20 models on your disk; only 4-6 will be enough.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🗺️ Roadmap:",
"raw": "🗺️ Roadmap:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1. Create Supple Diffusion Small",
"raw": "1. Create Supple Diffusion Small",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2. Creating Supple Diffusion Medium",
"raw": "2. Creating Supple Diffusion Medium",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3. Create Supple Diffusion Large",
"raw": "3. Create Supple Diffusion Large",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🎆 Our models are universal for realism, and for cartoons, and for anime, and for caricatures.",
"raw": "🎆 Our models are universal for realism, and for cartoons, and for anime, and for caricatures.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "💖 The project really needs your support and your recommendations and reviews, please do not hesitate to write comments under this post, thank you!",
"raw": "💖 The project really needs your support and your recommendations and reviews, please do not hesitate to write comments under this post, thank you!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🖼️ Below are demo images made with the pre-release version of Supple Diffusion Small.",
"raw": "🖼️ Below are demo images made with the pre-release version of Supple Diffusion Small.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🤗 Hello from the Project Fluently team!
🥏 We are ready to announce a new series of Supple Diffusion models: a new generation of diffusion models (about 1-2 weeks left before release).
🦾 The new series aims to take diffusion models to the next level, with performance and versatility as the main goals.
🧐 How will our models be better than others? Firstly, we worked on the CLIP models: they now understand your requests better, so prompts will be easier to process. Secondly, we trained the models to an even higher quality than all our previous ones. Thirdly, you won’t have to keep 20 models on your disk; only 4-6 will be enough.
🗺️ Roadmap:
1. Create Supple Diffusion Small
2. Create Supple Diffusion Medium
3. Create Supple Diffusion Large
🎆 Our models are universal: they work for realism, cartoons, anime, and caricatures.
💖 The project really needs your support, recommendations, and reviews; please do not hesitate to write comments under this post. Thank you!
🖼️ Below are demo images made with the pre-release version of Supple Diffusion Small. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/o-5N9QyjHgmSMk69e3O55.png",
"fullname": "Evgeniy Hristoforu",
"name": "ehristoforu",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 235,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65a3d8d58448f47df24c041a/8L9s39OGbhkqrsA8hcFR5.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65a3d8d58448f47df24c041a/cQ89j5AtcS_kmwGMd3_ZU.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65a3d8d58448f47df24c041a/43DACjU2bxi4jT43gLdwu.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65a3d8d58448f47df24c041a/G74s08b634T1oW49iV-cp.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65a3d8d58448f47df24c041a/cUVgU5L6SN32GSDEu1JZe.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"ehristoforu",
"YaTharThShaRma999",
"rossaai",
"Pocholo",
"testin1234",
"pshriv16",
"Tanayeb",
"GPT007",
"rajsinghparihar",
"Arakinas",
"HDiffusion",
"weiyuhsuan",
"ganslmeier",
"danielus",
"Winnougan",
"GooGooPanda",
"dreamdrop-art",
"Nymbo",
"snakeying"
],
"count": 19
},
{
"reaction": "🤗",
"users": [
"ehristoforu",
"YaTharThShaRma999",
"rossaai",
"Tanayeb",
"louisbrulenaudet",
"GPT007",
"Winnougan",
"sudzdpn",
"dreamdrop-art",
"Nymbo"
],
"count": 10
},
{
"reaction": "👍",
"users": [
"ehristoforu",
"YaTharThShaRma999",
"rossaai",
"Tanayeb",
"ManniX-ITA",
"GPT007",
"Winnougan",
"too-ai",
"dreamdrop-art",
"Nymbo"
],
"count": 10
},
{
"reaction": "🚀",
"users": [
"ehristoforu",
"YaTharThShaRma999",
"rossaai",
"NHLOCAL",
"Tanayeb",
"GPT007",
"Winnougan",
"dreamdrop-art",
"Nymbo"
],
"count": 9
},
{
"reaction": "🔥",
"users": [
"ehristoforu",
"YaTharThShaRma999",
"rossaai",
"Tanayeb",
"GPT007",
"fengjj123",
"Winnougan",
"dreamdrop-art",
"Nymbo"
],
"count": 9
},
{
"reaction": "🤯",
"users": [
"Winnougan",
"GPT007",
"dreamdrop-art"
],
"count": 3
},
{
"reaction": "😎",
"users": [
"Winnougan",
"GPT007",
"dreamdrop-art"
],
"count": 3
},
{
"reaction": "🤝",
"users": [
"Winnougan",
"GPT007",
"dreamdrop-art"
],
"count": 3
},
{
"reaction": "🧠",
"users": [
"Winnougan",
"GPT007",
"dreamdrop-art"
],
"count": 3
},
{
"reaction": "👀",
"users": [
"Winnougan",
"GPT007",
"dreamdrop-art"
],
"count": 3
},
{
"reaction": "➕",
"users": [
"GPT007",
"dreamdrop-art"
],
"count": 2
},
{
"reaction": "😔",
"users": [
"GPT007",
"dreamdrop-art"
],
"count": 2
}
] | 2024-06-26T22:11:59.000Z | 2024-07-11T10:06:49.629Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643c7e91b409fef15e0bd11b/BrV0Rw92iJ7DQ54P9FUwa.jpeg",
"fullname": "Tran Thanh Luan",
"name": "toilaluan",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/o-5N9QyjHgmSMk69e3O55.png",
"fullname": "Evgeniy Hristoforu",
"name": "ehristoforu",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 235,
"isFollowing": false
},
{
"avatarUrl": "/avatars/42b0ba14867ab1c3f17a523239c914b7.svg",
"fullname": "Damien Maya",
"name": "GooGooPanda",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "/avatars/7d3329c7153f331d055800c7c2f4825d.svg",
"fullname": "Landon",
"name": "Arakinas",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/ehristoforu/549104611630792 | 5,999 | 4 |
431470406865174 | [
{
"type": "text",
"value": "Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse",
"raw": "Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I don't agree with some of the assertions made here, but it is an interesting paper and a good overview. ",
"raw": "I don't agree with some of the assertions made here, but it is an interesting paper and a good overview. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2401.13142",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2401.13142",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse
I don't agree with some of the assertions made here, but it is an interesting paper and a good overview.
https://arxiv.org/abs/2401.13142 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg",
"fullname": "Knut Jägersberg",
"name": "KnutJaegersberg",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 238,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"Priyankvadaliya"
],
"count": 1
},
{
"reaction": "👍",
"users": [
"Tonic"
],
"count": 1
}
] | 2024-06-26T22:09:36.000Z | 2024-06-26T22:09:36.682Z | [] | /posts/KnutJaegersberg/431470406865174 | 637 | 0 |
443773081472466 | [
{
"type": "text",
"value": "Per popular request, I'm working on a beginning to end LoRA training workflow blog for a style.",
"raw": "Per popular request, I'm working on a beginning to end LoRA training workflow blog for a style.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It will focus on dataset curation through training on a pre-determined style to give a better insight on my process.",
"raw": "It will focus on dataset curation through training on a pre-determined style to give a better insight on my process.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Curious what are some questions you might have that I can try to answer in it?",
"raw": "Curious what are some questions you might have that I can try to answer in it?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Per popular request, I'm working on a beginning to end LoRA training workflow blog for a style.
It will focus on dataset curation through training on a pre-determined style to give a better insight on my process.
I'm curious: what questions might you have that I can try to answer in it? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🚀",
"users": [
"YaTharThShaRma999",
"Noeloise",
"louisbrulenaudet",
"fffiloni",
"eljanmahammadli",
"attashe",
"not-lain",
"Imtiazsyed",
"nicholaskao"
],
"count": 9
}
] | 2024-06-26T17:38:26.000Z | 2024-06-26T17:38:26.059Z | [] | /posts/alvdansen/443773081472466 | 2,502 | 0 |
885739588271537 | [
{
"type": "text",
"value": "I'm developing a tool to simplify finding datasets suitable for specific tasks or libraries. Although it's still a work in progress, I've compiled a collection of datasets that likely support DPO: ",
"raw": "I'm developing a tool to simplify finding datasets suitable for specific tasks or libraries. Although it's still a work in progress, I've compiled a collection of datasets that likely support DPO: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/davanstrien/probably-dpo-datasets-667c409a557fe99a9ed39f0b",
"resource": {
"type": "collection",
"id": "davanstrien/probably-dpo-datasets-667c409a557fe99a9ed39f0b",
"discussionNum": null
},
"url": "https://huggingface.co/collections/davanstrien/probably-dpo-datasets-667c409a557fe99a9ed39f0b",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I'm developing a tool to simplify finding datasets suitable for specific tasks or libraries. Although it's still a work in progress, I've compiled a collection of datasets that likely support DPO: https://huggingface.co/collections/davanstrien/probably-dpo-datasets-667c409a557fe99a9ed39f0b | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 404,
"isFollowing": false
} | [] | [] | [] | 2024-06-26T16:32:25.000Z | 2024-06-26T16:32:25.329Z | [] | /posts/davanstrien/885739588271537 | 773 | 0 |
808551450380205 | [
{
"type": "text",
"value": "It is time for some Aura.",
"raw": "It is time for some Aura.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "First in our series of fully open sourced / commercially available models by ",
"raw": "First in our series of fully open sourced / commercially available models by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@fal-ai",
"resource": null,
"url": null,
"href": null,
"user": "fal-ai",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ": AuraSR - a 600M parameter upscaler based on GigaGAN.",
"raw": ": AuraSR - a 600M parameter upscaler based on GigaGAN.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Blog: ",
"raw": "Blog: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://blog.fal.ai/introducing-aurasr-an-open-reproduction-of-the-gigagan-upscaler-2/",
"resource": null,
"url": null,
"href": "https://blog.fal.ai/introducing-aurasr-an-open-reproduction-of-the-gigagan-upscaler-2/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "HF: ",
"raw": "HF: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/fal-ai/AuraSR",
"resource": null,
"url": null,
"href": "https://huggingface.co/fal-ai/AuraSR",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Code: ",
"raw": "Code: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/fal-ai/aura-sr",
"resource": null,
"url": null,
"href": "https://github.com/fal-ai/aura-sr",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Playground: ",
"raw": "Playground: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://fal.ai/models/fal-ai/aura-sr/playground",
"resource": null,
"url": null,
"href": "https://fal.ai/models/fal-ai/aura-sr/playground",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "What other models would you like to see open-sourced and commercially available? :)",
"raw": "What other models would you like to see open-sourced and commercially available? :)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | It is time for some Aura.
First in our series of fully open sourced / commercially available models by @fal-ai: AuraSR - a 600M parameter upscaler based on GigaGAN.
Blog: https://blog.fal.ai/introducing-aurasr-an-open-reproduction-of-the-gigagan-upscaler-2/
HF: https://huggingface.co/fal-ai/AuraSR
Code: https://github.com/fal-ai/aura-sr
Playground: https://fal.ai/models/fal-ai/aura-sr/playground
What other models would you like to see open-sourced and commercially available? :)
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6380ebb8471a4550ff255c62/-5tqR0SqLU53cOsXA-4ON.jpeg",
"fullname": "Batuhan",
"name": "isidentical",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 80,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"YaTharThShaRma999",
"louisbrulenaudet",
"John6666",
"netynet",
"badayvedat"
],
"count": 5
}
] | 2024-06-26T16:21:01.000Z | 2024-06-26T16:21:01.729Z | [] | /posts/isidentical/808551450380205 | 1,509 | 0 |
468604971409979 | [
{
"type": "text",
"value": "Look at that 👀 ",
"raw": "Look at that 👀 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Actual benchmarks have become too easy for recent models, much like grading high school students on middle school problems makes little sense. So the team worked on a new version of the Open LLM Leaderboard with new benchmarks.",
"raw": "Actual benchmarks have become too easy for recent models, much like grading high school students on middle school problems makes little sense. So the team worked on a new version of the Open LLM Leaderboard with new benchmarks.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Stellar work from ",
"raw": "Stellar work from ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@clefourrier",
"resource": null,
"url": null,
"href": null,
"user": "clefourrier",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@SaylorTwift",
"resource": null,
"url": null,
"href": null,
"user": "SaylorTwift",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " and the team!",
"raw": " and the team!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "👉 Read the blog post: ",
"raw": "👉 Read the blog post: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/open-llm-leaderboard/blog",
"resource": {
"type": "space",
"id": "open-llm-leaderboard/blog",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/open-llm-leaderboard/blog",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "👉 Explore the leaderboard: ",
"raw": "👉 Explore the leaderboard: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard",
"resource": {
"type": "space",
"id": "open-llm-leaderboard/open_llm_leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Look at that 👀
Current benchmarks have become too easy for recent models: evaluating them this way makes as little sense as grading high school students on middle school problems. So the team worked on a new version of the Open LLM Leaderboard with new benchmarks.
Stellar work from @clefourrier @SaylorTwift and the team!
👉 Read the blog post: https://huggingface.co/spaces/open-llm-leaderboard/blog
👉 Explore the leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/KyjafUc-W05rOwAHOWHU5.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png",
"fullname": "Clémentine Fourrier",
"name": "clefourrier",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 450
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678663263366-63e0eea7af523c37e5a77966.jpeg",
"fullname": "Nathan Habib",
"name": "SaylorTwift",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 96
}
] | [
{
"reaction": "❤️",
"users": [
"anakin87",
"clefourrier",
"SaylorTwift",
"dillfrescott",
"louisbrulenaudet"
],
"count": 5
},
{
"reaction": "👍",
"users": [
"Norod78",
"dillfrescott",
"den0620"
],
"count": 3
}
] | 2024-06-26T14:50:18.000Z | 2024-06-27T09:32:35.264Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6215ce9abfcb3893344dd0a2/0srkKGjBNRDKnlMxNrsmn.jpeg",
"fullname": "Cross",
"name": "dillfrescott",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 45,
"isFollowing": false
}
] | /posts/fdaudens/468604971409979 | 1,868 | 1 |
427727576111455 | [
{
"type": "text",
"value": "🌌 Creating adventures with local LLMs",
"raw": "🌌 Creating adventures with local LLMs",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "What if 🤔... Homer Simpson met Spider-Man and they went on a quest for donuts? 🍩",
"raw": "What if 🤔... Homer Simpson met Spider-Man and they went on a quest for donuts? 🍩",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Or if Fred Astaire and Corporal Hicks teamed up to fight xenomorphs? 👾",
"raw": "Or if Fred Astaire and Corporal Hicks teamed up to fight xenomorphs? 👾",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In the words of Karpathy, LLMs are dream machines...",
"raw": "In the words of Karpathy, LLMs are dream machines...",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "they seem specially made to simulate these wild scenarios!",
"raw": "they seem specially made to simulate these wild scenarios!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "𝐄𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐢𝐧𝐠 𝐰𝐢𝐭𝐡 𝐭𝐡𝐢𝐬 𝐢𝐝𝐞𝐚 👇",
"raw": "𝐄𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐢𝐧𝐠 𝐰𝐢𝐭𝐡 𝐭𝐡𝐢𝐬 𝐢𝐝𝐞𝐚 👇",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Nous Research / ",
"raw": "Nous Research / ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@teknium",
"resource": null,
"url": null,
"href": null,
"user": "teknium",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " recently released ",
"raw": " recently released ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/NousResearch/CharacterCodex",
"resource": {
"type": "dataset",
"id": "NousResearch/CharacterCodex",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/NousResearch/CharacterCodex",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ":",
"raw": ":",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "a massive dataset with information on 16k characters, both fictional and real.",
"raw": "a massive dataset with information on 16k characters, both fictional and real.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I couldn't wait to play it...",
"raw": "I couldn't wait to play it...",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "After a few attempts, I found that combining the information in this dataset with a good model (like ",
"raw": "After a few attempts, I found that combining the information in this dataset with a good model (like ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct",
"resource": {
"type": "model",
"id": "meta-llama/Meta-Llama-3-8B-Instruct",
"discussionNum": null
},
"url": "https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") opens the doors to a myriad of chat adventures.",
"raw": ") opens the doors to a myriad of chat adventures.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🛠️ Stack:",
"raw": "🛠️ Stack:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔹Haystack for orchestration 🏗️",
"raw": "🔹Haystack for orchestration 🏗️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔹llamafile 🦙🗂️ to run our model locally.",
"raw": "🔹llamafile 🦙🗂️ to run our model locally.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📓 Check out the notebook: ",
"raw": "📓 Check out the notebook: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://t.ly/y6jrZ",
"resource": null,
"url": null,
"href": "https://t.ly/y6jrZ",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "(includes a bonus 🕵️ Mystery Character Quiz)",
"raw": "(includes a bonus 🕵️ Mystery Character Quiz)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🌌 Creating adventures with local LLMs
What if 🤔... Homer Simpson met Spider-Man and they went on a quest for donuts? 🍩
Or if Fred Astaire and Corporal Hicks teamed up to fight xenomorphs? 👾
In the words of Karpathy, LLMs are dream machines...
they seem specially made to simulate these wild scenarios!
𝐄𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐢𝐧𝐠 𝐰𝐢𝐭𝐡 𝐭𝐡𝐢𝐬 𝐢𝐝𝐞𝐚 👇
Nous Research / @teknium recently released https://huggingface.co/datasets/NousResearch/CharacterCodex:
a massive dataset with information on 16k characters, both fictional and real.
I couldn't wait to play with it...
After a few attempts, I found that combining the information in this dataset with a good model (like https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) opens the doors to a myriad of chat adventures.
🛠️ Stack:
🔹Haystack for orchestration 🏗️
🔹llamafile 🦙🗂️ to run our model locally.
📓 Check out the notebook: https://t.ly/y6jrZ
(includes a bonus 🕵️ Mystery Character Quiz) | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626505d493e0b04d75710566/9rfJc9ORXU9J5a42Ev3v6.png",
"fullname": "Stefano Fiorucci",
"name": "anakin87",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 66,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626505d493e0b04d75710566/-Aw8IeW60pGI8XA-HYt1d.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317aade83d8d2fd903192d9/erOwgMXc_CZih3uMoyTAp.jpeg",
"fullname": "Teknium",
"name": "teknium",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4248
}
] | [
{
"reaction": "🔥",
"users": [
"not-lain",
"nickprock",
"jakemannix"
],
"count": 3
},
{
"reaction": "🤗",
"users": [
"not-lain",
"nickprock"
],
"count": 2
},
{
"reaction": "👍",
"users": [
"lenML"
],
"count": 1
}
] | 2024-06-26T13:26:05.000Z | 2024-06-26T18:56:04.721Z | [] | /posts/anakin87/427727576111455 | 1,649 | 0 |
330638841278815 | [
{
"type": "text",
"value": "LLaVA 1.6 - 34 billion parameters - loaded as 8-bit precision caption. Uses around 37 GB VRAM on Massed Compute with our installers (31 cents per hour with SECourses coupon). A new tutorial for this will be made hopefully soon. ",
"raw": "LLaVA 1.6 - 34 billion parameters - loaded as 8-bit precision caption. Uses around 37 GB VRAM on Massed Compute with our installers (31 cents per hour with SECourses coupon). A new tutorial for this will be made hopefully soon. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | LLaVA 1.6 - 34 billion parameters - loaded in 8-bit precision for captioning. Uses around 37 GB VRAM on Massed Compute with our installers (31 cents per hour with SECourses coupon). A new tutorial for this will hopefully be made soon. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/aA-oPlfMJIUyxmt8fSqK7.png"
}
] | [] | [] | 2024-06-26T13:09:00.000Z | 2024-07-24T15:35:20.716Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/3MksIHj3UmwawD47lj4I6.jpeg",
"fullname": "dc.Sanchez",
"name": "SanchezKang",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
}
] | /posts/MonsterMMORPG/330638841278815 | 492 | 4 |
168277709510356 | [
{
"type": "text",
"value": "Just shipped: introduction to vision language models (aka image-text-to-text) ",
"raw": "Just shipped: introduction to vision language models (aka image-text-to-text) ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/tasks/image-text-to-text",
"resource": null,
"url": null,
"href": "https://huggingface.co/tasks/image-text-to-text",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Learn about more machine learning tasks at ",
"raw": "Learn about more machine learning tasks at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/tasks",
"resource": null,
"url": null,
"href": "https://huggingface.co/tasks",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Just shipped: introduction to vision language models (aka image-text-to-text) https://huggingface.co/tasks/image-text-to-text
Learn about more machine learning tasks at https://huggingface.co/tasks | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/tn5lR1vngXyp39z1BPPxO.mp4"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"dblasko",
"gokaygokay",
"John6666",
"ucsahin",
"damerajee",
"hewliyang"
],
"count": 6
},
{
"reaction": "🔥",
"users": [
"alkzar90",
"ucyang",
"AnkitAI"
],
"count": 3
},
{
"reaction": "👍",
"users": [
"KnutJaegersberg"
],
"count": 1
}
] | 2024-06-26T10:33:31.000Z | 2024-06-26T10:33:31.695Z | [] | /posts/merve/168277709510356 | 3,274 | 0 |
532272630959058 | [
{
"type": "text",
"value": "Hi everyone! ",
"raw": "Hi everyone! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I'm Alex, I'm 16, I've been an internship at Hugging Face for a little over a week and I've already learned a lot about using and prompting LLM models. With ",
"raw": "I'm Alex, I'm 16, I've been an internship at Hugging Face for a little over a week and I've already learned a lot about using and prompting LLM models. With ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@victor",
"resource": null,
"url": null,
"href": null,
"user": "victor",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " as tutor I've just finished a space that analyzes your feelings by prompting an LLM chat model. The aim is to extend it so that it can categorize hugging face posts. ",
"raw": " as tutor I've just finished a space that analyzes your feelings by prompting an LLM chat model. The aim is to extend it so that it can categorize hugging face posts. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/alex-abb/LLM_Feeling_Analyzer",
"resource": {
"type": "space",
"id": "alex-abb/LLM_Feeling_Analyzer",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/alex-abb/LLM_Feeling_Analyzer",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi everyone!
I'm Alex, I'm 16, I've been doing an internship at Hugging Face for a little over a week and I've already learned a lot about using and prompting LLM models. With @victor as tutor I've just finished a space that analyzes your feelings by prompting an LLM chat model. The aim is to extend it so that it can categorize hugging face posts. 
https://huggingface.co/spaces/alex-abb/LLM_Feeling_Analyzer
| {
"avatarUrl": "/avatars/b18e200ef5a79e4035e55249e1523c4a.svg",
"fullname": "alexandre ababou",
"name": "alex-abb",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2578
}
] | [
{
"reaction": "🔥",
"users": [
"jgitsolutions",
"dblasko",
"victor",
"nefnef",
"Linaqruf",
"Wauplin",
"clem",
"thomwolf",
"mfuntowicz",
"HugoLaurencon",
"Narsil",
"davidberenstein1957",
"joaogante",
"ArthurZ",
"SixOpen",
"not-lain",
"Sylvestre",
"meganariley",
"davanstrien",
"pierrci",
"osanseviero",
"DavidGF",
"adamelliotfields",
"louisbrulenaudet",
"nataliaElv",
"lvwerra",
"azzr-duke",
"tengomucho",
"gabrielmbmb",
"julien-c",
"eustlb",
"sayakpaul",
"reach-vb",
"RoacherM",
"agenkit",
"Rexhaif",
"mishig",
"Cuiunbo",
"VishvaNCS",
"sted97",
"kramp",
"jcudit",
"MKSH",
"pcuenq"
],
"count": 44
},
{
"reaction": "👍",
"users": [
"Norod78",
"clem",
"thomwolf",
"OzzyGT",
"mfuntowicz",
"davidberenstein1957",
"not-lain",
"osanseviero",
"varuy322",
"reach-vb",
"mishig",
"frederikai",
"pcuenq"
],
"count": 13
},
{
"reaction": "🚀",
"users": [
"davidberenstein1957",
"osanseviero",
"reach-vb",
"agenkit",
"mishig",
"pcuenq"
],
"count": 6
},
{
"reaction": "😎",
"users": [
"davidberenstein1957",
"ArthurZ",
"eustlb",
"reach-vb",
"pcuenq"
],
"count": 5
}
] | 2024-06-26T09:29:18.000Z | 2024-06-26T19:32:32.406Z | [
{
"avatarUrl": "/avatars/b3eb8db1a2b7b2bb9499657b49cef353.svg",
"fullname": "Stefan D",
"name": "shtefarn",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "/avatars/b18e200ef5a79e4035e55249e1523c4a.svg",
"fullname": "alexandre ababou",
"name": "alex-abb",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/BRKGVgk_dJO34ZOi3Slb_.jpeg",
"fullname": "Lain",
"name": "not-lain",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 919,
"isFollowing": false
}
] | /posts/alex-abb/532272630959058 | 4,760 | 4 |
492429353786944 | [
{
"type": "text",
"value": "Hello!",
"raw": "Hello!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've been in the lab playing with various data formats today, and jammed out with some plain text to produce a nice list of fallacies and their solutions from Wikipedia's List of fallacies as JSONL for data processing.",
"raw": "I've been in the lab playing with various data formats today, and jammed out with some plain text to produce a nice list of fallacies and their solutions from Wikipedia's List of fallacies as JSONL for data processing.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Had some bumps along the way, but me and Gemini 1.5 Pro got there in the end. I must really learn to work with Gemini 1.5 Flash more effectively in future.",
"raw": "Had some bumps along the way, but me and Gemini 1.5 Pro got there in the end. I must really learn to work with Gemini 1.5 Flash more effectively in future.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/MrOvkill/fallacies-list-wikipedia",
"resource": {
"type": "dataset",
"id": "MrOvkill/fallacies-list-wikipedia",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/MrOvkill/fallacies-list-wikipedia",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Enjoy!",
"raw": "Enjoy!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```\n-<3\n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "-<3",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hello!
I've been in the lab playing with various data formats today, and jammed out with some plain text to produce a nice list of fallacies and their solutions from Wikipedia's List of fallacies as JSONL for data processing.
Had some bumps along the way, but Gemini 1.5 Pro and I got there in the end. I must really learn to work with Gemini 1.5 Flash more effectively in future.
https://huggingface.co/datasets/MrOvkill/fallacies-list-wikipedia
Enjoy!
```
-<3
```
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"s3nh",
"Ramikan-BR",
"deepan2k5"
],
"count": 3
}
] | 2024-06-26T04:04:50.000Z | 2024-06-26T04:04:50.191Z | [] | /posts/MrOvkill/492429353786944 | 1,606 | 0 |
605584658592909 | [
{
"type": "text",
"value": "Looking to move away from OpenAI's closed-source models? ",
"raw": "Looking to move away from OpenAI's closed-source models? ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We've made switching from to open-source as easy as possible with a drop-in replacement OpenAI’s chat completion endpoint. You can specify any model you'd like in just a few lines of code. We call it the OpenAI Switch Kit.",
"raw": "We've made switching from to open-source as easy as possible with a drop-in replacement OpenAI’s chat completion endpoint. You can specify any model you'd like in just a few lines of code. We call it the OpenAI Switch Kit.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out the docs here: ",
"raw": "Check out the docs here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://postgresml.org/docs/guides/opensourceai",
"resource": null,
"url": null,
"href": "https://postgresml.org/docs/guides/opensourceai",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Or the blog post here: ",
"raw": "Or the blog post here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://postgresml.org/blog/introducing-the-openai-switch-kit-move-from-closed-to-open-source-ai-in-minutes",
"resource": null,
"url": null,
"href": "https://postgresml.org/blog/introducing-the-openai-switch-kit-move-from-closed-to-open-source-ai-in-minutes",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Happy building!",
"raw": "Happy building!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Looking to move away from OpenAI's closed-source models?
We've made switching from closed to open-source as easy as possible with a drop-in replacement for OpenAI’s chat completion endpoint. You can specify any model you'd like in just a few lines of code. We call it the OpenAI Switch Kit.
Check out the docs here: https://postgresml.org/docs/guides/opensourceai
Or the blog post here: https://postgresml.org/blog/introducing-the-openai-switch-kit-move-from-closed-to-open-source-ai-in-minutes
Happy building! | {
"avatarUrl": "/avatars/20c6cabac91c84388340016f1ba6494d.svg",
"fullname": "Cassandra Stumer",
"name": "cassandrapgml",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/665f71f996682d0a22327754/FRpLaWwuJqN37eelzbxAt.png"
}
] | [] | [] | 2024-06-25T21:23:49.000Z | 2024-06-25T21:23:49.487Z | [] | /posts/cassandrapgml/605584658592909 | 729 | 0 |
114782460115385 | [
{
"type": "text",
"value": "Recently, we open-sourced YaFSDP, Yandex’s tool for efficient distributed training of LLMs.",
"raw": "Recently, we open-sourced YaFSDP, Yandex’s tool for efficient distributed training of LLMs.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Here are some of the key ideas used in YaFSDP to provide speedup and memory savings over FSDP:",
"raw": "Here are some of the key ideas used in YaFSDP to provide speedup and memory savings over FSDP:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "• Allocate and utilize just two buffers throughout the transformer for all collected weights to circumvent the torch memory allocator;",
"raw": "• Allocate and utilize just two buffers throughout the transformer for all collected weights to circumvent the torch memory allocator;",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "• Gather small normalization layers at the beginning of the iteration and average the gradients only at the end;",
"raw": "• Gather small normalization layers at the beginning of the iteration and average the gradients only at the end;",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "• Move gradient division to the very end of the backward pass.",
"raw": "• Move gradient division to the very end of the backward pass.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "To learn more about how YaFSDP works, check out our latest blog post: ",
"raw": "To learn more about how YaFSDP works, check out our latest blog post: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://medium.com/yandex/yafsdp-a-tool-for-faster-llm-training-and-optimized-gpu-utilization-is-no-632b7539f5b3",
"resource": null,
"url": null,
"href": "https://medium.com/yandex/yafsdp-a-tool-for-faster-llm-training-and-optimized-gpu-utilization-is-no-632b7539f5b3",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Recently, we open-sourced YaFSDP, Yandex’s tool for efficient distributed training of LLMs.
Here are some of the key ideas used in YaFSDP to provide speedup and memory savings over FSDP:
• Allocate and utilize just two buffers throughout the transformer for all collected weights to circumvent the torch memory allocator;
• Gather small normalization layers at the beginning of the iteration and average the gradients only at the end;
• Move gradient division to the very end of the backward pass.
To learn more about how YaFSDP works, check out our latest blog post: https://medium.com/yandex/yafsdp-a-tool-for-faster-llm-training-and-optimized-gpu-utilization-is-no-632b7539f5b3 | {
"avatarUrl": "/avatars/aa880f8154840d70c7ec7d75373b8a30.svg",
"fullname": "Ruslan Vasilev",
"name": "artnitolog",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 15,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"Omshirsat",
"kristaller486",
"itsdaria",
"GPT007",
"umutphp",
"Joseph717171",
"MaziyarPanahi",
"den0620",
"Rexhaif",
"ucyang"
],
"count": 10
},
{
"reaction": "❤️",
"users": [
"Joseph717171",
"MaziyarPanahi",
"Rexhaif"
],
"count": 3
},
{
"reaction": "🔥",
"users": [
"Joseph717171",
"MaziyarPanahi",
"Rexhaif"
],
"count": 3
},
{
"reaction": "🚀",
"users": [
"Joseph717171"
],
"count": 1
},
{
"reaction": "🤗",
"users": [
"Joseph717171"
],
"count": 1
},
{
"reaction": "🤝",
"users": [
"Joseph717171"
],
"count": 1
}
] | 2024-06-25T18:10:44.000Z | 2024-06-25T18:10:44.962Z | [] | /posts/artnitolog/114782460115385 | 2,499 | 0 |
846357156531571 | [
{
"type": "text",
"value": "Hello beautiful people.",
"raw": "Hello beautiful people.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I wanted to thank everyone that read my blogpost and I am glad to share that we have achieved 11000 readers 🥳",
"raw": "I wanted to thank everyone that read my blogpost and I am glad to share that we have achieved 11000 readers 🥳",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I couldn't have done this without you, so once again thanks a lot everyone for the support 💖",
"raw": "I couldn't have done this without you, so once again thanks a lot everyone for the support 💖",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If you haven't already you can read my blog post at: ",
"raw": "If you haven't already you can read my blog post at: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/not-lain/rag-chatbot-using-llama3",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/not-lain/rag-chatbot-using-llama3",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hello beautiful people.
I wanted to thank everyone that read my blogpost and I am glad to share that we have achieved 11000 readers 🥳
I couldn't have done this without you, so once again thanks a lot everyone for the support 💖
If you haven't already you can read my blog post at: https://huggingface.co/blog/not-lain/rag-chatbot-using-llama3 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/BRKGVgk_dJO34ZOi3Slb_.jpeg",
"fullname": "Lain",
"name": "not-lain",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 919,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/DtpOWjMB9IlNK45cpB0VQ.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"YaTharThShaRma999",
"KvrParaskevi",
"ijohn07",
"mohamed-khalil"
],
"count": 4
}
] | 2024-06-25T17:14:13.000Z | 2024-06-25T17:23:33.731Z | [] | /posts/not-lain/846357156531571 | 1,430 | 0 |
138732505334784 | [
{
"type": "text",
"value": "🌍 Cohere for AI has announced that this July and August, it is inviting researchers from around the world to join Expedition Aya, a global initiative focused on launching projects using multilingual tools like Aya 23 and Aya 101. 🌐",
"raw": "🌍 Cohere for AI has announced that this July and August, it is inviting researchers from around the world to join Expedition Aya, a global initiative focused on launching projects using multilingual tools like Aya 23 and Aya 101. 🌐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Participants can start by joining the Aya server, where all organization will take place. They can share ideas and connect with others on Discord and the signup sheet. Various events will be hosted to help people find potential team members. 🤝",
"raw": "Participants can start by joining the Aya server, where all organization will take place. They can share ideas and connect with others on Discord and the signup sheet. Various events will be hosted to help people find potential team members. 🤝",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "To support the projects, Cohere API credits will be issued. 💰",
"raw": "To support the projects, Cohere API credits will be issued. 💰",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Over the course of six weeks, weekly check-in calls are also planned to help teams stay on track and receive support with using Aya. 🖥️",
"raw": "Over the course of six weeks, weekly check-in calls are also planned to help teams stay on track and receive support with using Aya. 🖥️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The expedition will wrap up at the end of August with a closing event to showcase everyone’s work and plan next steps. Participants who complete the expedition will also receive some Expedition Aya swag. 🎉",
"raw": "The expedition will wrap up at the end of August with a closing event to showcase everyone’s work and plan next steps. Participants who complete the expedition will also receive some Expedition Aya swag. 🎉",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Links:",
"raw": "Links:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Join the Aya Discord: ",
"raw": "Join the Aya Discord: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://discord.com/invite/q9QRYkjpwk",
"resource": null,
"url": null,
"href": "https://discord.com/invite/q9QRYkjpwk",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Visit the Expedition Aya Minisite: ",
"raw": "Visit the Expedition Aya Minisite: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://sites.google.com/cohere.com/expedition-aya/home",
"resource": null,
"url": null,
"href": "https://sites.google.com/cohere.com/expedition-aya/home",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🌍 Cohere for AI has announced that this July and August, it is inviting researchers from around the world to join Expedition Aya, a global initiative focused on launching projects using multilingual tools like Aya 23 and Aya 101. 🌐
Participants can start by joining the Aya server, where all organization will take place. They can share ideas and connect with others on Discord and the signup sheet. Various events will be hosted to help people find potential team members. 🤝
To support the projects, Cohere API credits will be issued. 💰
Over the course of six weeks, weekly check-in calls are also planned to help teams stay on track and receive support with using Aya. 🖥️
The expedition will wrap up at the end of August with a closing event to showcase everyone’s work and plan next steps. Participants who complete the expedition will also receive some Expedition Aya swag. 🎉
Links:
Join the Aya Discord: https://discord.com/invite/q9QRYkjpwk
Visit the Expedition Aya Minisite: https://sites.google.com/cohere.com/expedition-aya/home
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/GXN8mEmaq3rfITRrw7GeZ.jpeg",
"fullname": "atayloraerospace",
"name": "Taylor658",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 74,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/641b754d1911d3be6745cce9/XsyWDzNV2D6zrhki1aMgo.jpeg"
}
] | [] | [
{
"reaction": "🚀",
"users": [
"monsoon-nlp",
"louisbrulenaudet"
],
"count": 2
},
{
"reaction": "🔥",
"users": [
"takarajordan"
],
"count": 1
}
] | 2024-06-25T16:45:53.000Z | 2024-07-09T10:32:25.131Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/aqVOJmgtsBbB6BFeLpL7h.jpeg",
"fullname": "Jordan Legg",
"name": "takarajordan",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/Taylor658/138732505334784 | 694 | 1 |
784239590755279 | [
{
"type": "text",
"value": "Florence-2 has a great capability of detecting various objects in a zero-shot setting with the task prompt \"<OD>\". However, if you want to detect specific objects that the base model is not able to in its current form, you can easily finetune it for this particular task. Below I show how to finetune the model to detect tables in a given image, but a similar process can be applied to detect any objects. Thanks to ",
"raw": "Florence-2 has a great capability of detecting various objects in a zero-shot setting with the task prompt \"<OD>\". However, if you want to detect specific objects that the base model is not able to in its current form, you can easily finetune it for this particular task. Below I show how to finetune the model to detect tables in a given image, but a similar process can be applied to detect any objects. Thanks to ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@andito",
"resource": null,
"url": null,
"href": null,
"user": "andito",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@merve",
"resource": null,
"url": null,
"href": null,
"user": "merve",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", and ",
"raw": ", and ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@SkalskiP",
"resource": null,
"url": null,
"href": null,
"user": "SkalskiP",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " for sharing the fix for finetuning the Florence-2 model. Please also check their great blog post at ",
"raw": " for sharing the fix for finetuning the Florence-2 model. Please also check their great blog post at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/finetune-florence2",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/finetune-florence2",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ". ",
"raw": ". ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Colab notebook: ",
"raw": "Colab notebook: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://colab.research.google.com/drive/1Y8GVjwzBIgfmfD3ZypDX5H1JA_VG0YDL?usp=sharing",
"resource": null,
"url": null,
"href": "https://colab.research.google.com/drive/1Y8GVjwzBIgfmfD3ZypDX5H1JA_VG0YDL?usp=sharing",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Finetuned model: ",
"raw": "Finetuned model: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/ucsahin/Florence-2-large-TableDetection",
"resource": {
"type": "model",
"id": "ucsahin/Florence-2-large-TableDetection",
"discussionNum": null
},
"url": "https://huggingface.co/ucsahin/Florence-2-large-TableDetection",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Florence-2 has a great capability of detecting various objects in a zero-shot setting with the task prompt "<OD>". However, if you want to detect specific objects that the base model is not able to detect in its current form, you can easily finetune it for this particular task. Below I show how to finetune the model to detect tables in a given image, but a similar process can be applied to detect any objects. Thanks to @andito, @merve, and @SkalskiP for sharing the fix for finetuning the Florence-2 model. Please also check their great blog post at https://huggingface.co/blog/finetune-florence2.
Colab notebook: https://colab.research.google.com/drive/1Y8GVjwzBIgfmfD3ZypDX5H1JA_VG0YDL?usp=sharing
Finetuned model: https://huggingface.co/ucsahin/Florence-2-large-TableDetection | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64dcd996e3e44e8000cdd9cb/jLrOsWrKrKYx0NhVkJaF0.jpeg",
"fullname": "Umitcan Sahin",
"name": "ucsahin",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 45,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d66b494bbd0d92b641cdbb/6-7dm7B-JxcoS1QlCPdMN.jpeg",
"fullname": "Andres Marafioti",
"name": "andito",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 53
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f84d4d85dbbb185d2e9a53/Mlc0XjAgQR2cuhGNchz07.jpeg",
"fullname": "Piotr Skalski",
"name": "SkalskiP",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 2299
}
] | [
{
"reaction": "🚀",
"users": [
"andito",
"qubvel-hf",
"John6666",
"Metin",
"osanseviero",
"sosoai",
"kevinjeswani",
"danelcsb",
"dblasko",
"maddosaientisuto",
"radames"
],
"count": 11
},
{
"reaction": "🔥",
"users": [
"alperiox",
"radames"
],
"count": 2
}
] | 2024-06-25T15:43:36.000Z | 2024-07-23T13:51:44.479Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6579e0eaa9e58aec614e9d97/zklEVBvTRHoIvVjuVLRom.jpeg",
"fullname": "Sangbum Choi",
"name": "danelcsb",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64dcd996e3e44e8000cdd9cb/jLrOsWrKrKYx0NhVkJaF0.jpeg",
"fullname": "Umitcan Sahin",
"name": "ucsahin",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 45,
"isFollowing": false
},
{
"avatarUrl": "/avatars/ee643b69c74bd7ae5ad59df4d05ac38d.svg",
"fullname": "Abdul Hanan Ch",
"name": "maddosaientisuto",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/ucsahin/784239590755279 | 3,556 | 5 |
807406794863793 | [
{
"type": "text",
"value": "I'm decentralizing my AI end2end, from the AI model distribution to on device AI inferencing. llama-ipfs - llama.cpp integrated with Interplanetary File System for distributing peer2peer and loading AI models without the need for cloud storage or AI model Hub.",
"raw": "I'm decentralizing my AI end2end, from the AI model distribution to on device AI inferencing. llama-ipfs - llama.cpp integrated with Interplanetary File System for distributing peer2peer and loading AI models without the need for cloud storage or AI model Hub.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "llama.cpp now supports decentralized inferencing with RPC, allowing the distribution of workload across all home devices. This functionality can be enhanced with a P2P ad-hoc VPN, enabling the extension of distributed inferencing to any device on any network.",
"raw": "llama.cpp now supports decentralized inferencing with RPC, allowing the distribution of workload across all home devices. This functionality can be enhanced with a P2P ad-hoc VPN, enabling the extension of distributed inferencing to any device on any network.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Imagine an open-source AI that's as decentralized as a potluck dinner - everyone brings something to the table, and there's ZERO need for blockchain. It's like a digital fortress, with security and privacy baked right in, not to mention a dollop of integrity and trust. This could be the secret sauce for an enterprise AI platform, complete with an integrated IT policy. It might just be the cherry on top for the next generation of Apple Intelligence and Copilot+ PCs.",
"raw": "Imagine an open-source AI that's as decentralized as a potluck dinner - everyone brings something to the table, and there's ZERO need for blockchain. It's like a digital fortress, with security and privacy baked right in, not to mention a dollop of integrity and trust. This could be the secret sauce for an enterprise AI platform, complete with an integrated IT policy. It might just be the cherry on top for the next generation of Apple Intelligence and Copilot+ PCs.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.",
"raw": "Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I'm decentralizing my AI end2end, from the AI model distribution to on device AI inferencing. llama-ipfs - llama.cpp integrated with Interplanetary File System for distributing peer2peer and loading AI models without the need for cloud storage or AI model Hub.
llama.cpp now supports decentralized inferencing with RPC, allowing the distribution of workload across all home devices. This functionality can be enhanced with a P2P ad-hoc VPN, enabling the extension of distributed inferencing to any device on any network.
Imagine an open-source AI that's as decentralized as a potluck dinner - everyone brings something to the table, and there's ZERO need for blockchain. It's like a digital fortress, with security and privacy baked right in, not to mention a dollop of integrity and trust. This could be the secret sauce for an enterprise AI platform, complete with an integrated IT policy. It might just be the cherry on top for the next generation of Apple Intelligence and Copilot+ PCs.
Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f731c7d36951307fcca6bf/DMd5-Pt7YHC0agbAQ1xUc.png",
"fullname": "Mitko Vasilev",
"name": "mitkox",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 117,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"rvpierre",
"Ramikan-BR",
"graphicaldot",
"brainhome",
"sa8",
"pegak"
],
"count": 6
},
{
"reaction": "👍",
"users": [
"Nerius"
],
"count": 1
}
] | 2024-06-25T14:10:12.000Z | 2024-06-25T14:10:12.030Z | [] | /posts/mitkox/807406794863793 | 2,097 | 0 |
697001195364732 | [
{
"type": "text",
"value": "NEW #DecentralizeAI Writing Contest, by InternetComputer.org and HackerNoon.com! 😜 ",
"raw": "NEW #DecentralizeAI Writing Contest, by InternetComputer.org and HackerNoon.com! 😜 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.contests.hackernoon.com/decentralize-ai-writing-contest",
"resource": null,
"url": null,
"href": "https://www.contests.hackernoon.com/decentralize-ai-writing-contest",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 🤪",
"raw": " 🤪",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "\"Not going to beat centralized AI with more centralized AI.\" - Emad Mostaque",
"raw": "\"Not going to beat centralized AI with more centralized AI.\" - Emad Mostaque",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "To enter, submit a blog post with the #decentralize-ai tag on HackerNoon.",
"raw": "To enter, submit a blog post with the #decentralize-ai tag on HackerNoon.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | NEW #DecentralizeAI Writing Contest, by InternetComputer.org and HackerNoon.com! 😜 https://www.contests.hackernoon.com/decentralize-ai-writing-contest 🤪
"Not going to beat centralized AI with more centralized AI." - Emad Mostaque
To enter, submit a blog post with the #decentralize-ai tag on HackerNoon. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64862a25cf5ad5e1f0482ef2/61qPUtw9jIl7zpPYmi0VW.jpeg",
"fullname": "David Smooke",
"name": "Smooke",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 43,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64862a25cf5ad5e1f0482ef2/exJ53dftLPQJw5m1f0JeA.png"
}
] | [] | [
{
"reaction": "🚀",
"users": [
"victor"
],
"count": 1
}
] | 2024-06-25T13:25:30.000Z | 2024-06-25T13:25:30.425Z | [] | /posts/Smooke/697001195364732 | 507 | 0 |
617599851628763 | [
{
"type": "text",
"value": "Use GPT-4o + GPT-4-Turbo-Preview + GPT-3.5-Turbo + BingAI",
"raw": "Use GPT-4o + GPT-4-Turbo-Preview + GPT-3.5-Turbo + BingAI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/NiansuhAI/Copilot",
"resource": null,
"url": null,
"href": "https://huggingface.co/spaces/NiansuhAI/Copilot",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Use GPT-4o + GPT-4-Turbo-Preview + GPT-3.5-Turbo + BingAI
https://huggingface.co/spaces/NiansuhAI/Copilot | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64cba00d710645aa7b04f281/a_-LPwd4wqRyi8sJ1QxjI.jpeg",
"fullname": "Husnain",
"name": "Niansuh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 64,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"Niansuh",
"British-Rat",
"John6666",
"ayush-thakur02"
],
"count": 4
},
{
"reaction": "🚀",
"users": [
"Niansuh",
"alvis44"
],
"count": 2
}
] | 2024-06-25T09:29:26.000Z | 2024-06-26T15:50:27.001Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/BRKGVgk_dJO34ZOi3Slb_.jpeg",
"fullname": "Lain",
"name": "not-lain",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 919,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64cba00d710645aa7b04f281/a_-LPwd4wqRyi8sJ1QxjI.jpeg",
"fullname": "Husnain",
"name": "Niansuh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 64,
"isFollowing": false
}
] | /posts/Niansuh/617599851628763 | 1,689 | 2 |
504772386075668 | [
{
"type": "text",
"value": "Were you aware that we have a dedicated guide on different prompting mechanisms to improve the image generation quality? 🧨",
"raw": "Were you aware that we have a dedicated guide on different prompting mechanisms to improve the image generation quality? 🧨",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Takes you through simple prompt engineering, prompt weighting, prompt enhancement using GPT-2, and more.",
"raw": "Takes you through simple prompt engineering, prompt weighting, prompt enhancement using GPT-2, and more.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out the guide here 🦯",
"raw": "Check out the guide here 🦯",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/docs/diffusers/main/en/using-diffusers/weighted_prompts",
"resource": null,
"url": null,
"href": "https://huggingface.co/docs/diffusers/main/en/using-diffusers/weighted_prompts",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Were you aware that we have a dedicated guide on different prompting mechanisms to improve the image generation quality? 🧨
Takes you through simple prompt engineering, prompt weighting, prompt enhancement using GPT-2, and more.
Check out the guide here 🦯
https://huggingface.co/docs/diffusers/main/en/using-diffusers/weighted_prompts | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649681653581-5f7fbd813e94f16a85448745.jpeg",
"fullname": "Sayak Paul",
"name": "sayakpaul",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 446,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f7fbd813e94f16a85448745/12dTczAyppx9K4if1B-0s.png"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"ashikurrahman",
"GPT007",
"linoyts",
"louisbrulenaudet",
"radames",
"chuangxinlezhi",
"mynkchaudhry"
],
"count": 7
},
{
"reaction": "🤯",
"users": [
"chuangxinlezhi"
],
"count": 1
}
] | 2024-06-25T09:08:27.000Z | 2024-06-25T17:42:42.253Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6640bbd0220cfa8cbfdce080/wiAHUu5ewawyipNs0YFBR.png",
"fullname": "John Smith",
"name": "John6666",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 384,
"isFollowing": false
}
] | /posts/sayakpaul/504772386075668 | 2,189 | 1 |
383558582332469 | [
{
"type": "text",
"value": "I've noticed some people are still downloading ",
"raw": "I've noticed some people are still downloading ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/neph1/sd-seer-griffin-3b",
"resource": {
"type": "model",
"id": "neph1/sd-seer-griffin-3b",
"discussionNum": null
},
"url": "https://huggingface.co/neph1/sd-seer-griffin-3b",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Should I make an update based on a more modern architecture? (griffin-3b is llama (1!))",
"raw": "Should I make an update based on a more modern architecture? (griffin-3b is llama (1!))",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I've noticed some people are still downloading https://huggingface.co/neph1/sd-seer-griffin-3b
Should I make an update based on a more modern architecture? (griffin-3b is llama (1!)) | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/653cd3049107029eb004f968/Y4XphXmk8P51GlIi6u9cd.png",
"fullname": "Rickard Edén",
"name": "neph1",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 13,
"isFollowing": false
} | [] | [] | [] | 2024-06-25T08:57:41.000Z | 2024-07-22T12:15:14.331Z | [
{
"avatarUrl": "/avatars/25883cb9e3e9411428a4b09a4e769fbb.svg",
"fullname": "gfhgfhgfh",
"name": "ffghgfh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/neph1/383558582332469 | 592 | 1 |
676034649939962 | [
{
"type": "text",
"value": "📢 Interested in #LLM safety? ",
"raw": "📢 Interested in #LLM safety? ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We have just uploaded a new version of ALERT 🚨 on ArXiv with novel insights into the weaknesses and vulnerabilities of LLMs! 👀 ",
"raw": "We have just uploaded a new version of ALERT 🚨 on ArXiv with novel insights into the weaknesses and vulnerabilities of LLMs! 👀 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2404.08676",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2404.08676",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For a summary of the paper, read this blog post: ",
"raw": "For a summary of the paper, read this blog post: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/sted97/alert",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/sted97/alert",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 🤗",
"raw": " 🤗",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 📢 Interested in #LLM safety?
We have just uploaded a new version of ALERT 🚨 on ArXiv with novel insights into the weaknesses and vulnerabilities of LLMs! 👀 https://arxiv.org/abs/2404.08676
For a summary of the paper, read this blog post: https://huggingface.co/blog/sted97/alert 🤗 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b85aa99ba538c73a7dc78b/gWxtQAvOYn7cXgE_nAy0p.jpeg",
"fullname": "Simone Tedeschi",
"name": "sted97",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 30,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"PereLluis13"
],
"count": 1
}
] | 2024-06-25T08:15:29.000Z | 2024-06-25T08:15:29.270Z | [] | /posts/sted97/676034649939962 | 454 | 0 |
355818467715515 | [
{
"type": "text",
"value": "📣Thrilled to make public our recent work ENVISIONS !!!",
"raw": "📣Thrilled to make public our recent work ENVISIONS !!!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Without human annotations !",
"raw": "- Without human annotations !",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Without Distilling Strong LLMs !",
"raw": "- Without Distilling Strong LLMs !",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Self-improve LLMs in the environment",
"raw": "- Self-improve LLMs in the environment",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Amazing performances on agentic and reasoning tasks",
"raw": "- Amazing performances on agentic and reasoning tasks",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Insightful analysis on \"why\" questions",
"raw": "- Insightful analysis on \"why\" questions",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📝 Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models",
"raw": "📝 Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📎 Repo: ",
"raw": "📎 Repo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/xufangzhi/ENVISIONS",
"resource": null,
"url": null,
"href": "https://github.com/xufangzhi/ENVISIONS",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 📣Thrilled to make public our recent work ENVISIONS !!!
- Without human annotations !
- Without Distilling Strong LLMs !
- Self-improve LLMs in the environment
- Amazing performances on agentic and reasoning tasks
- Insightful analysis on "why" questions
📝 Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models
📎 Repo: https://github.com/xufangzhi/ENVISIONS | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656d73ed0bbc114fe6449704/gpteBU9GmKSHRVkRBUHld.png",
"fullname": "Symbol-LLM",
"name": "Symbol-LLM",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"Symbol-LLM",
"GPT007",
"John6666"
],
"count": 3
},
{
"reaction": "🚀",
"users": [
"Symbol-LLM"
],
"count": 1
}
] | 2024-06-25T05:17:34.000Z | 2024-06-25T07:58:42.528Z | [
{
"avatarUrl": "/avatars/c5dd35ecf6b895f01b20fba7aa75124d.svg",
"fullname": "Jeffery",
"name": "sjfhsajkf",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/Symbol-LLM/355818467715515 | 1,776 | 1 |
210297200279279 | [
{
"type": "text",
"value": "📣Thrilled to make public our recent work ENVISIONS !!!",
"raw": "📣Thrilled to make public our recent work ENVISIONS !!!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Without human annotations !",
"raw": "- Without human annotations !",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Without Distilling Strong LLMs !",
"raw": "- Without Distilling Strong LLMs !",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Self-improve LLMs in the environment",
"raw": "- Self-improve LLMs in the environment",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Amazing performances on agentic and reasoning tasks",
"raw": "- Amazing performances on agentic and reasoning tasks",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Insightful analysis on \"why\" questions",
"raw": "- Insightful analysis on \"why\" questions",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📝 Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models",
"raw": "📝 Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "📎 Repo: ",
"raw": "📎 Repo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/xufangzhi/ENVISIONS",
"resource": null,
"url": null,
"href": "https://github.com/xufangzhi/ENVISIONS",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 📣Thrilled to make public our recent work ENVISIONS !!!
- Without human annotations !
- Without Distilling Strong LLMs !
- Self-improve LLMs in the environment
- Amazing performances on agentic and reasoning tasks
- Insightful analysis on "why" questions
📝 Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models
📎 Repo: https://github.com/xufangzhi/ENVISIONS
| {
"avatarUrl": "/avatars/68ad6babbd6b1bc85a7dcb1a7888ac53.svg",
"fullname": "Fangzhi Xu",
"name": "xufangzhi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"xufangzhi"
],
"count": 1
},
{
"reaction": "🚀",
"users": [
"xufangzhi"
],
"count": 1
}
] | 2024-06-25T05:13:21.000Z | 2024-06-25T05:13:21.917Z | [] | /posts/xufangzhi/210297200279279 | 463 | 0 |
154642657500810 | [
{
"type": "text",
"value": "🚀 KARAKURI LM 8x7B Instruct v0.1",
"raw": "🚀 KARAKURI LM 8x7B Instruct v0.1",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "KARAKURI Inc. has publicly released \"KARAKURI LM 8x7B Instruct v0.1\", the first domestic Large Language Model (LLM) in Japan to support Function calling and Retrieval-Augmented Generation (RAG). This AI agent can handle tasks across various applications autonomously, significantly reducing implementation costs compared to traditional models. ",
"raw": "KARAKURI Inc. has publicly released \"KARAKURI LM 8x7B Instruct v0.1\", the first domestic Large Language Model (LLM) in Japan to support Function calling and Retrieval-Augmented Generation (RAG). This AI agent can handle tasks across various applications autonomously, significantly reducing implementation costs compared to traditional models. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model Features:",
"raw": "Model Features:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Capable of autonomously choosing optimal documents and databases for various tasks.",
"raw": "- Capable of autonomously choosing optimal documents and databases for various tasks.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Applied extensively in customer support for automating responses and processes, analyzing Voice of Customer (VoC), and predicting optimal outreach timings.",
"raw": "- Applied extensively in customer support for automating responses and processes, analyzing Voice of Customer (VoC), and predicting optimal outreach timings.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model URL:",
"raw": "Model URL:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-instruct-v0.1",
"resource": {
"type": "model",
"id": "karakuri-ai/karakuri-lm-8x7b-instruct-v0.1",
"discussionNum": null
},
"url": "https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-instruct-v0.1",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Detailed press release (in Japanese):",
"raw": "Detailed press release (in Japanese):",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://karakuri.ai/seminar/news/karakuri-lm-8x7b-instruct-v0-1/",
"resource": null,
"url": null,
"href": "https://karakuri.ai/seminar/news/karakuri-lm-8x7b-instruct-v0-1/",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🚀 KARAKURI LM 8x7B Instruct v0.1
KARAKURI Inc. has publicly released "KARAKURI LM 8x7B Instruct v0.1", the first domestic Large Language Model (LLM) in Japan to support Function calling and Retrieval-Augmented Generation (RAG). This AI agent can handle tasks across various applications autonomously, significantly reducing implementation costs compared to traditional models.
Model Features:
- Capable of autonomously choosing optimal documents and databases for various tasks.
- Applied extensively in customer support for automating responses and processes, analyzing Voice of Customer (VoC), and predicting optimal outreach timings.
Model URL:
https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-instruct-v0.1
Detailed press release (in Japanese):
https://karakuri.ai/seminar/news/karakuri-lm-8x7b-instruct-v0-1/ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61a32e422172c41f121589d2/8jExNd-9fenpqw_Z1rvL6.jpeg",
"fullname": "Kaito Sugimoto",
"name": "kaisugi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 12,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"ynakashima"
],
"count": 1
}
] | 2024-06-25T04:51:03.000Z | 2024-06-25T04:51:03.474Z | [] | /posts/kaisugi/154642657500810 | 610 | 0 |
630121342404584 | [
{
"type": "text",
"value": "Doraemon AI your future friend",
"raw": "Doraemon AI your future friend",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://hf.co/chat/assistant/667a0d8482b5bcd065dd882f",
"resource": null,
"url": null,
"href": "https://hf.co/chat/assistant/667a0d8482b5bcd065dd882f",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Doraemon AI your future friend
https://hf.co/chat/assistant/667a0d8482b5bcd065dd882f
| {
"avatarUrl": "/avatars/d773a7dd9b706759131fc482ab71ced7.svg",
"fullname": "[email protected]",
"name": "Taf2023",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 8,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64841af2295256340e4b9f88/E12XQ7t5IQfgP7fp4kxnC.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64841af2295256340e4b9f88/5tJMbB_93yfHOSfpX82sk.jpeg"
}
] | [] | [
{
"reaction": "👀",
"users": [
"GPT007",
"Ievan",
"edwixx"
],
"count": 3
}
] | 2024-06-25T00:54:34.000Z | 2024-06-25T00:54:34.765Z | [] | /posts/Taf2023/630121342404584 | 1,531 | 0 |
515229433183214 | [
{
"type": "text",
"value": "𝗝𝘂𝗱𝗴𝗶𝗻𝗴 𝘁𝗵𝗲 𝗝𝘂𝗱𝗴𝗲𝘀: 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗻𝗴 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 𝗶𝗻 𝗟𝗟𝗠𝘀-𝗮𝘀-𝗝𝘂𝗱𝗴𝗲𝘀",
"raw": "𝗝𝘂𝗱𝗴𝗶𝗻𝗴 𝘁𝗵𝗲 𝗝𝘂𝗱𝗴𝗲𝘀: 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗻𝗴 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 𝗶𝗻 𝗟𝗟𝗠𝘀-𝗮𝘀-𝗝𝘂𝗱𝗴𝗲𝘀",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2406.12624",
"resource": {
"type": "paper",
"id": "2406.12624",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2406.12624",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Judging the Judges: Evaluating Alignment and Vulnerabilities in\n LLMs-as-Judges (2406.12624)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "𝐂𝐚𝐧 𝐋𝐋𝐌𝐬 𝐬𝐞𝐫𝐯𝐞 𝐚𝐬 𝐫𝐞𝐥𝐢𝐚𝐛𝐥𝐞 𝐣𝐮𝐝𝐠𝐞𝐬 ⚖️?",
"raw": "𝐂𝐚𝐧 𝐋𝐋𝐌𝐬 𝐬𝐞𝐫𝐯𝐞 𝐚𝐬 𝐫𝐞𝐥𝐢𝐚𝐛𝐥𝐞 𝐣𝐮𝐝𝐠𝐞𝐬 ⚖️?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We aim to identify the right metrics for evaluating Judge LLMs and understand their sensitivities to prompt guidelines, engineering, and specificity. With this paper, we want to raise caution ⚠️ to blindly using LLMs as human proxy.",
"raw": "We aim to identify the right metrics for evaluating Judge LLMs and understand their sensitivities to prompt guidelines, engineering, and specificity. With this paper, we want to raise caution ⚠️ to blindly using LLMs as human proxy.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Blog - ",
"raw": "Blog - ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/singh96aman/judgingthejudges",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/singh96aman/judgingthejudges",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Arxiv - ",
"raw": "Arxiv - ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2406.12624",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2406.12624",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Tweet - ",
"raw": "Tweet - ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://x.com/iamsingh96aman/status/1804148173008703509",
"resource": null,
"url": null,
"href": "https://x.com/iamsingh96aman/status/1804148173008703509",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@singh96aman",
"resource": null,
"url": null,
"href": null,
"user": "singh96aman",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@kartik727",
"resource": null,
"url": null,
"href": null,
"user": "kartik727",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Srinik-1",
"resource": null,
"url": null,
"href": null,
"user": "Srinik-1",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@sankaranv",
"resource": null,
"url": null,
"href": null,
"user": "sankaranv",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@dieuwkehupkes",
"resource": null,
"url": null,
"href": null,
"user": "dieuwkehupkes",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 𝗝𝘂𝗱𝗴𝗶𝗻𝗴 𝘁𝗵𝗲 𝗝𝘂𝗱𝗴𝗲𝘀: 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗻𝗴 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 𝗶𝗻 𝗟𝗟𝗠𝘀-𝗮𝘀-𝗝𝘂𝗱𝗴𝗲𝘀
https://huggingface.co/papers/2406.12624
𝐂𝐚𝐧 𝐋𝐋𝐌𝐬 𝐬𝐞𝐫𝐯𝐞 𝐚𝐬 𝐫𝐞𝐥𝐢𝐚𝐛𝐥𝐞 𝐣𝐮𝐝𝐠𝐞𝐬 ⚖️?
We aim to identify the right metrics for evaluating Judge LLMs and understand their sensitivities to prompt guidelines, engineering, and specificity. With this paper, we want to raise caution ⚠️ about blindly using LLMs as a human proxy.
Blog - https://huggingface.co/blog/singh96aman/judgingthejudges
Arxiv - https://arxiv.org/abs/2406.12624
Tweet - https://x.com/iamsingh96aman/status/1804148173008703509
@singh96aman @kartik727 @Srinik-1 @sankaranv @dieuwkehupkes | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b491f3103617b0a5af6b4b/4RqcGDxdF8Ny9cqyVuwYg.jpeg",
"fullname": "Aman Singh Thakur",
"name": "singh96aman",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 6,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/WZV9IGJtwDCbshA98gkrQ.png"
}
] | [
{
"avatarUrl": "/avatars/78c49f863eda2eac82b18261e794fb0f.svg",
"fullname": "Dieuwke Hupkes",
"name": "dieuwkehupkes",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1
},
{
"avatarUrl": "/avatars/43c7bbac7569e3d63017831f71ecd762.svg",
"fullname": "Kartik Choudhary",
"name": "kartik727",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1
},
{
"avatarUrl": "/avatars/979a72589441400a9dc25b1008de7def.svg",
"fullname": "Sankaran Vaidyanathan",
"name": "sankaranv",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b491f3103617b0a5af6b4b/4RqcGDxdF8Ny9cqyVuwYg.jpeg",
"fullname": "Aman Singh Thakur",
"name": "singh96aman",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 6
},
{
"avatarUrl": "/avatars/5aa58d7cc9da903399210b9f1f54dd68.svg",
"fullname": "Venkat Srinik Ramayapally",
"name": "Srinik-1",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null
}
] | [
{
"reaction": "🔥",
"users": [
"singh96aman",
"louisbrulenaudet",
"ljvmiranda921",
"Choms",
"Tom-Neverwinter"
],
"count": 5
},
{
"reaction": "🚀",
"users": [
"RuhiJ",
"ljvmiranda921",
"Tom-Neverwinter"
],
"count": 3
}
] | 2024-06-24T22:17:58.000Z | 2024-06-24T22:40:43.018Z | [] | /posts/singh96aman/515229433183214 | 2,086 | 0 |
357091156426242 | [
{
"type": "mention",
"value": null,
"raw": "@osanseviero",
"resource": null,
"url": null,
"href": null,
"user": "osanseviero",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " your move",
"raw": " your move",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | @osanseviero your move | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/659f000b83abded48e190901/_nLytYtZ5nHbRw6icztTv.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2846
}
] | [] | 2024-06-24T19:50:30.000Z | 2024-06-25T11:28:55.616Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg",
"fullname": "Julien Chaumond",
"name": "julien-c",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1568,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2846,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/YCLHM-kwF9--v7I8YT9TH.jpeg",
"fullname": "jordan issiah levi joseph charles foots",
"name": "MrFoots",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/BRKGVgk_dJO34ZOi3Slb_.jpeg",
"fullname": "Lain",
"name": "not-lain",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 919,
"isFollowing": false
}
] | /posts/nroggendorff/357091156426242 | 530 | 6 |
158194207799625 | [
{
"type": "text",
"value": "How to generate LLM embeddings with open source models from Hugging Face 🤗 in PostgresML. ",
"raw": "How to generate LLM embeddings with open source models from Hugging Face 🤗 in PostgresML. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This article is the first in a multipart series that will show you how to build a post-modern semantic search and recommendation engine. ",
"raw": "This article is the first in a multipart series that will show you how to build a post-modern semantic search and recommendation engine. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "➡️ ",
"raw": "➡️ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://postgresml.org/blog/generating-llm-embeddings-with-open-source-models-in-postgresml",
"resource": null,
"url": null,
"href": "https://postgresml.org/blog/generating-llm-embeddings-with-open-source-models-in-postgresml",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "PostgresML is a backend for your AI app that unifies LLMs w/ vector memory + embedding generation + reranking & pruning models — all in a single process for better performance. ",
"raw": "PostgresML is a backend for your AI app that unifies LLMs w/ vector memory + embedding generation + reranking & pruning models — all in a single process for better performance. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We're always looking for ways to make PostgresML better — let us know what you think!",
"raw": "We're always looking for ways to make PostgresML better — let us know what you think!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | How to generate LLM embeddings with open source models from Hugging Face 🤗 in PostgresML.
This article is the first in a multipart series that will show you how to build a post-modern semantic search and recommendation engine.
➡️ https://postgresml.org/blog/generating-llm-embeddings-with-open-source-models-in-postgresml
PostgresML is a backend for your AI app that unifies LLMs w/ vector memory + embedding generation + reranking & pruning models — all in a single process for better performance.
We're always looking for ways to make PostgresML better — let us know what you think! | {
"avatarUrl": "/avatars/20c6cabac91c84388340016f1ba6494d.svg",
"fullname": "Cassandra Stumer",
"name": "cassandrapgml",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🚀",
"users": [
"monsoon-nlp"
],
"count": 1
}
] | 2024-06-24T18:25:21.000Z | 2024-06-24T18:26:14.423Z | [] | /posts/cassandrapgml/158194207799625 | 439 | 0 |
986989202846565 | [
{
"type": "text",
"value": "Fine-tune Florence-2 on any task 🔥",
"raw": "Fine-tune Florence-2 on any task 🔥",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Today we release a notebook and a walkthrough blog on fine-tuning Florence-2 on DocVQA dataset ",
"raw": "Today we release a notebook and a walkthrough blog on fine-tuning Florence-2 on DocVQA dataset ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@andito",
"resource": null,
"url": null,
"href": null,
"user": "andito",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@SkalskiP",
"resource": null,
"url": null,
"href": null,
"user": "SkalskiP",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Blog: ",
"raw": "Blog: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 📕",
"raw": " 📕",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Notebook: ",
"raw": "Notebook: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://colab.research.google.com/drive/1hKDrJ5AH_o7I95PtZ9__VlCTNAo1Gjpf?usp=sharing",
"resource": null,
"url": null,
"href": "https://colab.research.google.com/drive/1hKDrJ5AH_o7I95PtZ9__VlCTNAo1Gjpf?usp=sharing",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 📖",
"raw": " 📖",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Florence-2 is a great vision-language model thanks to it's massive dataset and small size!",
"raw": "Florence-2 is a great vision-language model thanks to it's massive dataset and small size!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This model requires conditioning through task prefixes and it's not as generalist, requiring fine-tuning on a new task, such as DocVQA 📝",
"raw": "This model requires conditioning through task prefixes and it's not as generalist, requiring fine-tuning on a new task, such as DocVQA 📝",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We have fine-tuned the model on A100 (and one can also use a smaller GPU with smaller batch size) and saw that model picks up new tasks 🥹",
"raw": "We have fine-tuned the model on A100 (and one can also use a smaller GPU with smaller batch size) and saw that model picks up new tasks 🥹",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "See below how it looks like before and after FT 🤩",
"raw": "See below how it looks like before and after FT 🤩",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Play with the demo here ",
"raw": "Play with the demo here ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/andito/Florence-2-DocVQA",
"resource": {
"type": "space",
"id": "andito/Florence-2-DocVQA",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/andito/Florence-2-DocVQA",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 🏄♀️",
"raw": " 🏄♀️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Fine-tune Florence-2 on any task 🔥
Today we release a notebook and a walkthrough blog on fine-tuning Florence-2 on the DocVQA dataset @andito @SkalskiP 
Blog: https://huggingface.co/blog 📕
Notebook: https://colab.research.google.com/drive/1hKDrJ5AH_o7I95PtZ9__VlCTNAo1Gjpf?usp=sharing 📖
Florence-2 is a great vision-language model thanks to its massive dataset and small size!
This model requires conditioning through task prefixes and it's not as generalist, so it requires fine-tuning on a new task, such as DocVQA 📝
We have fine-tuned the model on an A100 (and one can also use a smaller GPU with a smaller batch size) and saw that the model picks up new tasks 🥹
See below how it looks before and after FT 🤩
Play with the demo here https://huggingface.co/spaces/andito/Florence-2-DocVQA 🏄♀️ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/LPGYSsRAKwZgv8v2C7W8z.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d66b494bbd0d92b641cdbb/6-7dm7B-JxcoS1QlCPdMN.jpeg",
"fullname": "Andres Marafioti",
"name": "andito",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 53
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f84d4d85dbbb185d2e9a53/Mlc0XjAgQR2cuhGNchz07.jpeg",
"fullname": "Piotr Skalski",
"name": "SkalskiP",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 2299
}
] | [
{
"reaction": "🤗",
"users": [
"omaryshchenko",
"pcuenq",
"MrDragonFox",
"SixOpen",
"Taylor658",
"KvrParaskevi",
"jxtngx",
"fdaudens",
"gokaygokay",
"osanseviero",
"John6666",
"nicolay-r",
"IsmaelMousa",
"emanuelevivoli",
"JoPmt",
"CelineLind",
"pandora-s",
"IlyasMoutawwakil",
"damerajee",
"bharatxoni",
"louisbrulenaudet",
"ucsahin",
"victor",
"not-lain",
"LeonceNsh",
"andito",
"fcakyon"
],
"count": 27
},
{
"reaction": "🔥",
"users": [
"not-lain",
"catyung",
"John6666",
"hoangphu7122002ai",
"Nadav314",
"uurcelikk",
"fcakyon"
],
"count": 7
}
] | 2024-06-24T15:51:35.000Z | 2024-06-24T15:51:35.488Z | [] | /posts/merve/986989202846565 | 6,000 | 0 |
423633326718867 | [
{
"type": "text",
"value": "🎉 We are thrilled to share our work on model merging. We proposed a new approach, Della-merging, which combines expert models from various domains into a single, versatile model. Della employs a magnitude-based sampling approach to eliminate redundant delta parameters, reducing interference when merging homologous models (those fine-tuned from the same backbone).",
"raw": "🎉 We are thrilled to share our work on model merging. We proposed a new approach, Della-merging, which combines expert models from various domains into a single, versatile model. Della employs a magnitude-based sampling approach to eliminate redundant delta parameters, reducing interference when merging homologous models (those fine-tuned from the same backbone).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Della outperforms existing homologous model merging techniques such as DARE and TIES. Across three expert models (LM, Math, Code) and their corresponding benchmark datasets (AlpacaEval, GSM8K, MBPP), Della achieves an improvement of 3.6 points over TIES and 1.2 points over DARE.",
"raw": "Della outperforms existing homologous model merging techniques such as DARE and TIES. Across three expert models (LM, Math, Code) and their corresponding benchmark datasets (AlpacaEval, GSM8K, MBPP), Della achieves an improvement of 3.6 points over TIES and 1.2 points over DARE.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2406.11617",
"resource": {
"type": "paper",
"id": "2406.11617",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2406.11617",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "DELLA-Merging: Reducing Interference in Model Merging through\n Magnitude-Based Sampling (2406.11617)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Github: ",
"raw": "Github: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/declare-lab/della",
"resource": null,
"url": null,
"href": "https://github.com/declare-lab/della",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@soujanyaporia",
"resource": null,
"url": null,
"href": null,
"user": "soujanyaporia",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Tej3",
"resource": null,
"url": null,
"href": null,
"user": "Tej3",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🎉 We are thrilled to share our work on model merging. We proposed a new approach, Della-merging, which combines expert models from various domains into a single, versatile model. Della employs a magnitude-based sampling approach to eliminate redundant delta parameters, reducing interference when merging homologous models (those fine-tuned from the same backbone).
Della outperforms existing homologous model merging techniques such as DARE and TIES. Across three expert models (LM, Math, Code) and their corresponding benchmark datasets (AlpacaEval, GSM8K, MBPP), Della achieves an improvement of 3.6 points over TIES and 1.2 points over DARE.
Paper: https://huggingface.co/papers/2406.11617
Github: https://github.com/declare-lab/della
@soujanyaporia @Tej3 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f278507e923d665e616271b/tWFuswXOTXtvMdL8zSrr_.png",
"fullname": "Rishabh Bhardwaj",
"name": "RishabhBhardwaj",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 17,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626b626405fe1cb65725aca1/aa-Lata46I3fXOmMetvXH.jpeg",
"fullname": "Soujanya Poria",
"name": "soujanyaporia",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 6
},
{
"avatarUrl": "/avatars/48312867ac2c05d589869972f90881b9.svg",
"fullname": "Pala Tej Deep",
"name": "Tej3",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4
}
] | [
{
"reaction": "🤯",
"users": [
"YaTharThShaRma999",
"John6666",
"soujanyaporia",
"RishabhBhardwaj"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"dillfrescott",
"QizhiPei"
],
"count": 2
},
{
"reaction": "👀",
"users": [
"louisbrulenaudet"
],
"count": 1
}
] | 2024-06-24T14:23:02.000Z | 2024-06-25T21:07:09.186Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
},
{
"avatarUrl": "/avatars/48312867ac2c05d589869972f90881b9.svg",
"fullname": "Pala Tej Deep",
"name": "Tej3",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
}
] | /posts/RishabhBhardwaj/423633326718867 | 2,422 | 3 |
341903216222465 | [
{
"type": "text",
"value": "Hello!",
"raw": "Hello!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Fixed Moondream 2 Multi-Interrogation, ( Use ZeroGPU correctly, Sam. *doink* ) ",
"raw": "Fixed Moondream 2 Multi-Interrogation, ( Use ZeroGPU correctly, Sam. *doink* ) ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Located here:",
"raw": "Located here:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/MrOvkill/moondream-2-multi-interrogation",
"resource": {
"type": "space",
"id": "MrOvkill/moondream-2-multi-interrogation",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/MrOvkill/moondream-2-multi-interrogation",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Also, uploaded pdox-reversed to include some new fields, my bad for not putting the Paradox name in from the start. All good now.",
"raw": "Also, uploaded pdox-reversed to include some new fields, my bad for not putting the Paradox name in from the start. All good now.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/MrOvkill/pdox-reversed",
"resource": {
"type": "dataset",
"id": "MrOvkill/pdox-reversed",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/MrOvkill/pdox-reversed",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hello!
Fixed Moondream 2 Multi-Interrogation, ( Use ZeroGPU correctly, Sam. *doink* )
Located here:
https://huggingface.co/spaces/MrOvkill/moondream-2-multi-interrogation
Also, uploaded pdox-reversed to include some new fields, my bad for not putting the Paradox name in from the start. All good now.
https://huggingface.co/datasets/MrOvkill/pdox-reversed | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"osanseviero",
"Ramikan-BR",
"dillfrescott",
"kaltre"
],
"count": 4
}
] | 2024-06-24T13:24:36.000Z | 2024-06-24T13:24:36.332Z | [] | /posts/MrOvkill/341903216222465 | 1,098 | 0 |
985624558872301 | [
{
"type": "mention",
"value": null,
"raw": "@dwancin",
"resource": null,
"url": null,
"href": null,
"user": "dwancin",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " Can you please reset your toggle component's space? It's stuck for some reason. Happy to help",
"raw": " Can you please reset your toggle component's space? It's stuck for some reason. Happy to help",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/dwancin/gradio_toggle",
"resource": {
"type": "space",
"id": "dwancin/gradio_toggle",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/dwancin/gradio_toggle",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | @dwancin Can you please reset your toggle component's space? It's stuck for some reason. Happy to help
https://huggingface.co/spaces/dwancin/gradio_toggle | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654278567459-626a9bfa03e2e2796f24ca11.jpeg",
"fullname": "Freddy Boulton",
"name": "freddyaboulton",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 155,
"isFollowing": false
} | [] | [] | [] | 2024-06-24T09:54:45.000Z | 2024-06-24T09:54:45.698Z | [] | /posts/freddyaboulton/985624558872301 | 1,287 | 0 |
798841828414409 | [
{
"type": "text",
"value": "A few new styles added as SDXL LoRA:",
"raw": "A few new styles added as SDXL LoRA:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Midsommar Cartoon",
"raw": "Midsommar Cartoon",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "A playful cartoon style featuring bold colors and a retro aesthetic. Personal favorite at the moment.",
"raw": "A playful cartoon style featuring bold colors and a retro aesthetic. Personal favorite at the moment.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/alvdansen/midsommarcartoon",
"resource": {
"type": "model",
"id": "alvdansen/midsommarcartoon",
"discussionNum": null
},
"url": "https://huggingface.co/alvdansen/midsommarcartoon",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "---",
"raw": "---",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Wood Block XL",
"raw": "Wood Block XL",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've started training public domain styles to create some interesting datasets. In this case I found a group of images taken from really beautiful and colorful Japanese Blockprints. ",
"raw": "I've started training public domain styles to create some interesting datasets. In this case I found a group of images taken from really beautiful and colorful Japanese Blockprints. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/alvdansen/wood-block-xl",
"resource": {
"type": "model",
"id": "alvdansen/wood-block-xl",
"discussionNum": null
},
"url": "https://huggingface.co/alvdansen/wood-block-xl",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "--",
"raw": "--",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Dimension W",
"raw": "Dimension W",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For this model I did actually end up working on an SD 1.5 model as well as an SDXL. I prefer the SDXL version, and I am still looking for parameters I am really happy with for SD 1.5. That said, both have their merits. I trained this with the short film I am working on in mind.",
"raw": "For this model I did actually end up working on an SD 1.5 model as well as an SDXL. I prefer the SDXL version, and I am still looking for parameters I am really happy with for SD 1.5. That said, both have their merits. I trained this with the short film I am working on in mind.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/alvdansen/dimension-w",
"resource": {
"type": "model",
"id": "alvdansen/dimension-w",
"discussionNum": null
},
"url": "https://huggingface.co/alvdansen/dimension-w",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/alvdansen/dimension-w-sd15",
"resource": {
"type": "model",
"id": "alvdansen/dimension-w-sd15",
"discussionNum": null
},
"url": "https://huggingface.co/alvdansen/dimension-w-sd15",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | A few new styles added as SDXL LoRA:
Midsommar Cartoon
A playful cartoon style featuring bold colors and a retro aesthetic. Personal favorite at the moment.
https://huggingface.co/alvdansen/midsommarcartoon
---
Wood Block XL
I've started training public-domain styles to create some interesting datasets. In this case, I found a group of images taken from really beautiful and colorful Japanese woodblock prints.
https://huggingface.co/alvdansen/wood-block-xl
--
Dimension W
For this model I did actually end up working on an SD 1.5 model as well as an SDXL one. I prefer the SDXL version, and I am still looking for parameters I am really happy with for SD 1.5. That said, both have their merits. I trained this with the short film I am working on in mind.
https://huggingface.co/alvdansen/dimension-w
https://huggingface.co/alvdansen/dimension-w-sd15
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/N9eAopiefimHDYRMc62Xo.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/rCywSMdNeGV31BI0Ioy0S.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/VMp9VXt4JNeDQrXnkmm0E.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/rLp6dm0Aw5h_E24KDQoDq.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/tlJV08OtHr1vEt5LzQT24.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/6LKAivWxR6HVJLa-jTqvv.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"kristaller486",
"Brukolakos",
"Ramikan-BR",
"YaTharThShaRma999"
],
"count": 4
},
{
"reaction": "❤️",
"users": [
"louisbrulenaudet",
"Noeloise",
"IkeaMan"
],
"count": 3
},
{
"reaction": "🔥",
"users": [
"YaTharThShaRma999"
],
"count": 1
}
] | 2024-06-24T07:30:18.000Z | 2024-06-24T07:30:18.715Z | [] | /posts/alvdansen/798841828414409 | 2,447 | 0 |
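For readers who want to try the LoRAs listed in the post above, a minimal diffusers sketch looks roughly like this; the base model choice, dtype, prompt, and step count are illustrative assumptions rather than the author's settings, and any of the linked adapters can be swapped in via its repo id.

```python
# Rough sketch: load an SDXL base pipeline and apply one of the style LoRAs above.
# Base model, dtype, prompt and step count are illustrative choices, not the author's.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap in "alvdansen/midsommarcartoon" or "alvdansen/dimension-w" to try the other styles.
pipe.load_lora_weights("alvdansen/wood-block-xl")

image = pipe(
    "a fishing village at dusk, woodblock print style",
    num_inference_steps=30,
).images[0]
image.save("woodblock.png")
```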
876260401440477 | [
{
"type": "text",
"value": "Last week, Intel's new Xeon CPUs, Sapphire Rapids (SPR), landed on Inference Endpoints and I think they got the potential to reduce the cost of your RAG pipelines 💸",
"raw": "Last week, Intel's new Xeon CPUs, Sapphire Rapids (SPR), landed on Inference Endpoints and I think they got the potential to reduce the cost of your RAG pipelines 💸",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Why ? Because they come with Intel® AMX support, which is a set of instructions that support and accelerate BF16 and INT8 matrix multiplications on CPU ⚡",
"raw": "Why ? Because they come with Intel® AMX support, which is a set of instructions that support and accelerate BF16 and INT8 matrix multiplications on CPU ⚡",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I went ahead and built a Space to showcase how to efficiently deploy embedding models on SPR for both Retrieving and Ranking documents, with Haystack compatible components: ",
"raw": "I went ahead and built a Space to showcase how to efficiently deploy embedding models on SPR for both Retrieving and Ranking documents, with Haystack compatible components: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/optimum-intel/haystack-e2e",
"resource": null,
"url": null,
"href": "https://huggingface.co/spaces/optimum-intel/haystack-e2e",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Here's how it works:",
"raw": "Here's how it works:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Document Store: A FAISS document store containing the seven-wonders dataset, embedded, indexed and stored on the Space's persistent storage to avoid unnecessary re-computation of embeddings.",
"raw": "- Document Store: A FAISS document store containing the seven-wonders dataset, embedded, indexed and stored on the Space's persistent storage to avoid unnecessary re-computation of embeddings.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Retriever: It embeds the query at runtime and retrieves from the dataset N documents that are most semantically similar to the query's embedding.",
"raw": "- Retriever: It embeds the query at runtime and retrieves from the dataset N documents that are most semantically similar to the query's embedding.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We use the small variant of the BGE family here because we want a model that's fast to run on the entire dataset and has a small embedding space for fast similarity search. Specifically we use an INT8 quantized bge-small-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance. ",
"raw": "We use the small variant of the BGE family here because we want a model that's fast to run on the entire dataset and has a small embedding space for fast similarity search. Specifically we use an INT8 quantized bge-small-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Ranker: It re-embeds the retrieved documents at runtime and re-ranks them based on semantic similarity to the query's embedding. We use the large variant of the BGE family here because it's optimized for accuracy allowing us to filter the most relevant k documents that we'll use in the LLM prompt. Specifically we use an INT8 quantized bge-large-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance. ",
"raw": "- Ranker: It re-embeds the retrieved documents at runtime and re-ranks them based on semantic similarity to the query's embedding. We use the large variant of the BGE family here because it's optimized for accuracy allowing us to filter the most relevant k documents that we'll use in the LLM prompt. Specifically we use an INT8 quantized bge-large-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Space: ",
"raw": "Space: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/optimum-intel/haystack-e2e",
"resource": null,
"url": null,
"href": "https://huggingface.co/spaces/optimum-intel/haystack-e2e",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Retriever IE: ",
"raw": "Retriever IE: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/optimum-intel/fastrag-retriever",
"resource": {
"type": "model",
"id": "optimum-intel/fastrag-retriever",
"discussionNum": null
},
"url": "https://huggingface.co/optimum-intel/fastrag-retriever",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Ranker IE: ",
"raw": "Ranker IE: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/optimum-intel/fastrag-ranker",
"resource": {
"type": "model",
"id": "optimum-intel/fastrag-ranker",
"discussionNum": null
},
"url": "https://huggingface.co/optimum-intel/fastrag-ranker",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Last week, Intel's new Xeon CPUs, Sapphire Rapids (SPR), landed on Inference Endpoints and I think they have the potential to reduce the cost of your RAG pipelines 💸
Why? Because they come with Intel® AMX support, which is a set of instructions that support and accelerate BF16 and INT8 matrix multiplications on CPU ⚡
I went ahead and built a Space to showcase how to efficiently deploy embedding models on SPR for both Retrieving and Ranking documents, with Haystack compatible components: https://huggingface.co/spaces/optimum-intel/haystack-e2e
Here's how it works:
- Document Store: A FAISS document store containing the seven-wonders dataset, embedded, indexed and stored on the Space's persistent storage to avoid unnecessary re-computation of embeddings.
- Retriever: It embeds the query at runtime and retrieves from the dataset N documents that are most semantically similar to the query's embedding.
We use the small variant of the BGE family here because we want a model that's fast to run on the entire dataset and has a small embedding space for fast similarity search. Specifically we use an INT8 quantized bge-small-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance.
- Ranker: It re-embeds the retrieved documents at runtime and re-ranks them based on semantic similarity to the query's embedding. We use the large variant of the BGE family here because it's optimized for accuracy allowing us to filter the most relevant k documents that we'll use in the LLM prompt. Specifically we use an INT8 quantized bge-large-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance.
Space: https://huggingface.co/spaces/optimum-intel/haystack-e2e
Retriever IE: https://huggingface.co/optimum-intel/fastrag-retriever
Ranker IE: https://huggingface.co/optimum-intel/fastrag-ranker | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1642598610696-noauth.jpeg",
"fullname": "Ilyas Moutawwakil",
"name": "IlyasMoutawwakil",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 67,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🧠",
"users": [
"davidberenstein1957",
"louisbrulenaudet",
"IlyasMoutawwakil",
"anakin87",
"mfuntowicz",
"lunarflu",
"not-lain",
"nbroad"
],
"count": 8
},
{
"reaction": "🚀",
"users": [
"IlyasMoutawwakil",
"mfuntowicz",
"lunarflu",
"not-lain",
"jeffboudier",
"nbroad"
],
"count": 6
},
{
"reaction": "👍",
"users": [
"whitebill",
"jtatman",
"rjohnn",
"jlzhou"
],
"count": 4
}
] | 2024-06-24T06:55:08.000Z | 2024-06-24T06:55:08.721Z | [] | /posts/IlyasMoutawwakil/876260401440477 | 3,919 | 0 |
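The retrieve-then-re-rank flow described in the post above can be sketched with plain sentence-transformers run locally; this is an illustrative approximation rather than the Space's actual code (the real pipeline wires these models up as Haystack components on INT8 Inference Endpoints), and the toy documents, query, and top-N value are assumptions.

```python
# Illustrative sketch of the two-stage setup: retrieve with a small embedding model,
# then re-embed and re-rank the candidates with a larger, more accurate one.
from sentence_transformers import SentenceTransformer, util

docs = [
    "The Great Pyramid of Giza is the oldest of the Seven Wonders.",
    "The Hanging Gardens of Babylon may never have existed.",
    "The Lighthouse of Alexandria guided ships into the harbour.",
]

retriever = SentenceTransformer("BAAI/bge-small-en-v1.5")  # fast, small embedding space
ranker = SentenceTransformer("BAAI/bge-large-en-v1.5")     # slower, optimized for accuracy

query = "Which wonder helped ships navigate?"

# Stage 1: retrieve the top-N candidates with the small model.
doc_emb = retriever.encode(docs, normalize_embeddings=True)
q_emb = retriever.encode(query, normalize_embeddings=True)
scores = util.cos_sim(q_emb, doc_emb)[0]
top_n = scores.topk(k=min(2, len(docs))).indices.tolist()

# Stage 2: re-embed only the candidates with the large model and keep the best k.
candidates = [docs[i] for i in top_n]
cand_emb = ranker.encode(candidates, normalize_embeddings=True)
q_emb_large = ranker.encode(query, normalize_embeddings=True)
rerank_scores = util.cos_sim(q_emb_large, cand_emb)[0]
print(candidates[int(rerank_scores.argmax())])
```

The design point the post makes carries over directly: the cheap model touches the whole corpus, while the expensive model only ever sees the handful of retrieved candidates.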
242181868210458 | [
{
"type": "text",
"value": "So far I've implemented more accurate 👌 assessment of LLMs reasoning capabilities in Target Sentiment Analysis (zero-shot mode). With that, recalculated tables of the related benchmark 📊 also has better separation into categories, with the following 🏆 top 🏆 performing models: ",
"raw": "So far I've implemented more accurate 👌 assessment of LLMs reasoning capabilities in Target Sentiment Analysis (zero-shot mode). With that, recalculated tables of the related benchmark 📊 also has better separation into categories, with the following 🏆 top 🏆 performing models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🟩 1. Proprietary models (🏆 GPT-4 🇺🇸 / GPT-3.5-0613 🇷🇺 )",
"raw": "🟩 1. Proprietary models (🏆 GPT-4 🇺🇸 / GPT-3.5-0613 🇷🇺 )",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🟥 2. Open and < 100B (🏆 LLaMA-3-70B)",
"raw": "🟥 2. Open and < 100B (🏆 LLaMA-3-70B)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🟧 3. Open and < 10B (🏆LLaMA-3-8B-Instruct 🇺🇸 / Qwen-2-7B-Instruct 🇷🇺)",
"raw": "🟧 3. Open and < 10B (🏆LLaMA-3-8B-Instruct 🇺🇸 / Qwen-2-7B-Instruct 🇷🇺)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🟨 4. Open and less 1B (🏆Flan-T5-large 🇺🇸 / Qwen2-0.5B-Instruct 🇷🇺)",
"raw": "🟨 4. Open and less 1B (🏆Flan-T5-large 🇺🇸 / Qwen2-0.5B-Instruct 🇷🇺)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Benchmark: ",
"raw": "Benchmark: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/nicolay-r/RuSentNE-LLM-Benchmark",
"resource": null,
"url": null,
"href": "https://github.com/nicolay-r/RuSentNE-LLM-Benchmark",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | So far I've implemented a more accurate 👌 assessment of LLMs' reasoning capabilities in Target Sentiment Analysis (zero-shot mode). With that, the recalculated tables of the related benchmark 📊 also have better separation into categories, with the following 🏆 top 🏆 performing models:
🟩 1. Proprietary models (🏆 GPT-4 🇺🇸 / GPT-3.5-0613 🇷🇺 )
🟥 2. Open and < 100B (🏆 LLaMA-3-70B)
🟧 3. Open and < 10B (🏆LLaMA-3-8B-Instruct 🇺🇸 / Qwen-2-7B-Instruct 🇷🇺)
🟨 4. Open and < 1B (🏆Flan-T5-large 🇺🇸 / Qwen2-0.5B-Instruct 🇷🇺)
Benchmark: https://github.com/nicolay-r/RuSentNE-LLM-Benchmark | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64e62d11d27a8292c3637f86/aptDeBHpCJxcREj6KPLN1.jpeg",
"fullname": "Nicolay Rusnachenko",
"name": "nicolay-r",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 49,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64e62d11d27a8292c3637f86/IEuMy5deAhvnNDF7QgoRZ.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64e62d11d27a8292c3637f86/hT2FygThP22BOBslSpSlJ.png"
}
] | [] | [] | 2024-06-23T22:27:52.000Z | 2024-06-23T22:28:15.780Z | [] | /posts/nicolay-r/242181868210458 | 679 | 0 |
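To make the zero-shot setup above concrete, here is an illustrative prompt-and-parse sketch with one of the small models from the table. The prompt wording, label set, and output parsing are assumptions rather than the benchmark's actual protocol (see the linked repo for the real evaluation code), and it assumes a recent transformers version with chat-format pipeline support.

```python
# Rough sketch of zero-shot target sentiment analysis with an instruct model.
# Prompt and labels are illustrative, not the benchmark's protocol.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2-0.5B-Instruct")

sentence = "The new policy was praised by analysts but criticised by the unions."
target = "the unions"

messages = [
    {
        "role": "user",
        "content": (
            f"Sentence: {sentence}\n"
            f"What is the sentiment expressed towards '{target}'? "
            "Answer with one word: positive, negative, or neutral."
        ),
    }
]

out = generator(messages, max_new_tokens=8, do_sample=False)
# With chat-format input, the pipeline returns the conversation with the
# assistant's reply appended as the last message (format may vary by version).
print(out[0]["generated_text"][-1]["content"])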
504603469361099 | [
{
"type": "text",
"value": "I'm decentralizing my AI. I'll be using Radicle for decentralized Git and IPFS for distributing AI models.",
"raw": "I'm decentralizing my AI. I'll be using Radicle for decentralized Git and IPFS for distributing AI models.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I believe there is a significant opportunity to democratize open AI development moving forward. I appreciate that Radicle is open-source, prioritizes local operations, functions offline, seeds data peer-to-peer from my node, is programmable, and incorporates built-in security features.",
"raw": "I believe there is a significant opportunity to democratize open AI development moving forward. I appreciate that Radicle is open-source, prioritizes local operations, functions offline, seeds data peer-to-peer from my node, is programmable, and incorporates built-in security features.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "IPFS is great decentralized data storage, and I have already begun seeding SLMs and LoRa adapters. Tomorrow will add my collection of LLMs, VLMs, etc models and datasets I'm actively using. I have 10Gbps fiber optics at home so my node has enough bandwidth. ",
"raw": "IPFS is great decentralized data storage, and I have already begun seeding SLMs and LoRa adapters. Tomorrow will add my collection of LLMs, VLMs, etc models and datasets I'm actively using. I have 10Gbps fiber optics at home so my node has enough bandwidth. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.",
"raw": "Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I'm decentralizing my AI. I'll be using Radicle for decentralized Git and IPFS for distributing AI models.
I believe there is a significant opportunity to democratize open AI development moving forward. I appreciate that Radicle is open-source, prioritizes local operations, functions offline, seeds data peer-to-peer from my node, is programmable, and incorporates built-in security features.
IPFS is a great decentralized data store, and I have already begun seeding SLMs and LoRA adapters. Tomorrow I will add my collection of LLMs, VLMs, and other models and datasets I'm actively using. I have 10 Gbps fiber optics at home, so my node has enough bandwidth.
Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f731c7d36951307fcca6bf/DMd5-Pt7YHC0agbAQ1xUc.png",
"fullname": "Mitko Vasilev",
"name": "mitkox",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 117,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63f731c7d36951307fcca6bf/_Bm8tsXGgJcPCEO2olkaf.png"
}
] | [] | [] | 2024-06-23T19:45:28.000Z | 2024-06-24T11:55:32.971Z | [
{
"avatarUrl": "/avatars/9a369763a73278cddcf2abcae594865d.svg",
"fullname": "Dhruv Diddi",
"name": "ddiddi",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
}
] | /posts/mitkox/504603469361099 | 656 | 2 |
172866169462273 | [
{
"type": "text",
"value": "Hey everyone!",
"raw": "Hey everyone!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I'm excited to share a new demo for my ChartInstruct model from our ACL 2024 paper. It excels at various chart understanding tasks like QA, captioning, open-ended QA, fact checking and more!",
"raw": "I'm excited to share a new demo for my ChartInstruct model from our ACL 2024 paper. It excels at various chart understanding tasks like QA, captioning, open-ended QA, fact checking and more!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Thanks to Hugging Face's ZeroGPU program, the demo runs smoothly even with the model's 7B parameters!",
"raw": "Thanks to Hugging Face's ZeroGPU program, the demo runs smoothly even with the model's 7B parameters!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check it out and enjoy!",
"raw": "Check it out and enjoy!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Demo: ",
"raw": "Demo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/ahmed-masry/ChartInstruct-LLama2",
"resource": {
"type": "space",
"id": "ahmed-masry/ChartInstruct-LLama2",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/ahmed-masry/ChartInstruct-LLama2",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/ahmed-masry/ChartInstruct-LLama2",
"resource": {
"type": "model",
"id": "ahmed-masry/ChartInstruct-LLama2",
"discussionNum": null
},
"url": "https://huggingface.co/ahmed-masry/ChartInstruct-LLama2",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2403.09028",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2403.09028",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hey everyone!
I'm excited to share a new demo for my ChartInstruct model from our ACL 2024 paper. It excels at various chart understanding tasks like QA, captioning, open-ended QA, fact checking and more!
Thanks to Hugging Face's ZeroGPU program, the demo runs smoothly even with the model's 7B parameters!
Check it out and enjoy!
Demo: https://huggingface.co/spaces/ahmed-masry/ChartInstruct-LLama2
Model: https://huggingface.co/ahmed-masry/ChartInstruct-LLama2
Paper: https://arxiv.org/abs/2403.09028 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63efd75a5c2ceb16fc6e98fc/qoA4LKuLTEr7hx90i90UK.jpeg",
"fullname": "Ahmed Masry",
"name": "ahmed-masry",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 42,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"osanseviero",
"qgyd2021",
"shivanshdhar"
],
"count": 3
}
] | 2024-06-23T15:44:44.000Z | 2024-06-23T15:44:44.529Z | [] | /posts/ahmed-masry/172866169462273 | 3,399 | 0 |
927750006725150 | [
{
"type": "text",
"value": "Hello!",
"raw": "Hello!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've made a little evaluation dataset for LLMs that require advanced and convoluted logical reasoning. It's composed of 81 unique paradoxes, with admittedly a couple in the same category ( absolutes. ) It's available here: ",
"raw": "I've made a little evaluation dataset for LLMs that require advanced and convoluted logical reasoning. It's composed of 81 unique paradoxes, with admittedly a couple in the same category ( absolutes. ) It's available here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/MrOvkill/pdox",
"resource": {
"type": "dataset",
"id": "MrOvkill/pdox",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/MrOvkill/pdox",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "**Update**: I have upgraded the dataset to v3, ( don't worry about v2, it can be forgotten... ) and placed in a separate repo here: ",
"raw": "**Update**: I have upgraded the dataset to v3, ( don't worry about v2, it can be forgotten... ) and placed in a separate repo here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/MrOvkill/pdox-reversed",
"resource": {
"type": "dataset",
"id": "MrOvkill/pdox-reversed",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/MrOvkill/pdox-reversed",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Enjoy & Have fun!",
"raw": "Enjoy & Have fun!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "inline_code",
"value": null,
"raw": "`-<3`",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "-<3",
"label": null
}
] | Hello!
I've made a little evaluation dataset for LLMs that demands advanced and convoluted logical reasoning. It's composed of 81 unique paradoxes, with admittedly a couple in the same category (absolutes). It's available here: https://huggingface.co/datasets/MrOvkill/pdox
**Update**: I have upgraded the dataset to v3 (don't worry about v2, it can be forgotten...) and placed it in a separate repo here:
https://huggingface.co/datasets/MrOvkill/pdox-reversed
Enjoy & Have fun!
`-<3` | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"YaTharThShaRma999",
"SaylorTwift",
"osanseviero",
"Ramikan-BR",
"nbroad"
],
"count": 5
},
{
"reaction": "👀",
"users": [
"Ramikan-BR",
"louisbrulenaudet",
"John6666"
],
"count": 3
},
{
"reaction": "🚀",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "❤️",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "🧠",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-06-23T13:05:20.000Z | 2024-06-27T15:26:29.818Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/644cb09a22d211df644a0a6c/v0EHypMU4X3Oxxf3cao_O.png",
"fullname": "Júlio César",
"name": "Ramikan-BR",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 10,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
}
] | /posts/MrOvkill/927750006725150 | 3,328 | 12 |
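A minimal sketch for pulling the paradox set above into an evaluation loop with the datasets library; the split name and the idea of iterating row dicts are assumptions — check the repo's dataset viewer or ds.column_names for the actual schema before relying on specific fields.

```python
# Minimal sketch: load the paradox dataset and inspect a few rows.
# Split and column names are assumptions; verify against the repo first.
from datasets import load_dataset

pdox = load_dataset("MrOvkill/pdox", split="train")
print(len(pdox), pdox.column_names)

for row in pdox.select(range(3)):
    print(row)  # feed each paradox to the model under test here
```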
461231991332860 | [
{
"type": "text",
"value": "Zero to Hero Stable Diffusion 3 Tutorial with Amazing SwarmUI SD Web UI that Utilizes ComfyUI",
"raw": "Zero to Hero Stable Diffusion 3 Tutorial with Amazing SwarmUI SD Web UI that Utilizes ComfyUI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/HKX8_F1Er_w",
"resource": null,
"url": null,
"href": "https://youtu.be/HKX8_F1Er_w",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Do not skip any part of this tutorial to master how to use Stable Diffusion 3 (SD3) with the most advanced generative AI open source APP SwarmUI. Automatic1111 SD Web UI or Fooocus are not supporting the #SD3 yet. Therefore, I am starting to make tutorials for SwarmUI as well. #StableSwarmUI is officially developed by the StabilityAI and your mind will be blown after you watch this tutorial and learn its amazing features. StableSwarmUI uses #ComfyUI as the back end thus it has all the good features of ComfyUI and it brings you easy to use features of Automatic1111 #StableDiffusion Web UI with them. I really liked SwarmUI and planning to do more tutorials for it.",
"raw": "Do not skip any part of this tutorial to master how to use Stable Diffusion 3 (SD3) with the most advanced generative AI open source APP SwarmUI. Automatic1111 SD Web UI or Fooocus are not supporting the #SD3 yet. Therefore, I am starting to make tutorials for SwarmUI as well. #StableSwarmUI is officially developed by the StabilityAI and your mind will be blown after you watch this tutorial and learn its amazing features. StableSwarmUI uses #ComfyUI as the back end thus it has all the good features of ComfyUI and it brings you easy to use features of Automatic1111 #StableDiffusion Web UI with them. I really liked SwarmUI and planning to do more tutorials for it.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ ",
"raw": "🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.patreon.com/posts/stableswarmui-3-106135985",
"resource": null,
"url": null,
"href": "https://www.patreon.com/posts/stableswarmui-3-106135985",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "0:00 Introduction to the Stable Diffusion 3 (SD3) and SwarmUI and what is in the tutorial",
"raw": "0:00 Introduction to the Stable Diffusion 3 (SD3) and SwarmUI and what is in the tutorial",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "4:12 Architecture and features of SD3",
"raw": "4:12 Architecture and features of SD3",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "5:05 What each different model files of Stable Diffusion 3 means",
"raw": "5:05 What each different model files of Stable Diffusion 3 means",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "6:26 How to download and install SwarmUI on Windows for SD3 and all other Stable Diffusion models",
"raw": "6:26 How to download and install SwarmUI on Windows for SD3 and all other Stable Diffusion models",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "8:42 What kind of folder path you should use when installing SwarmUI",
"raw": "8:42 What kind of folder path you should use when installing SwarmUI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "10:28 If you get installation error how to notice and fix it",
"raw": "10:28 If you get installation error how to notice and fix it",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "11:49 Installation has been completed and now how to start using SwarmUI",
"raw": "11:49 Installation has been completed and now how to start using SwarmUI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "12:29 Which settings I change before start using SwarmUI and how to change your theme like dark, white, gray",
"raw": "12:29 Which settings I change before start using SwarmUI and how to change your theme like dark, white, gray",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "12:56 How to make SwarmUI save generated images as PNG",
"raw": "12:56 How to make SwarmUI save generated images as PNG",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "13:08 How to find description of each settings and configuration",
"raw": "13:08 How to find description of each settings and configuration",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "13:28 How to download SD3 model and start using on Windows",
"raw": "13:28 How to download SD3 model and start using on Windows",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "13:38 How to use model downloader utility of SwarmUI",
"raw": "13:38 How to use model downloader utility of SwarmUI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "14:17 How to set models folder paths and link your existing models folders in SwarmUI",
"raw": "14:17 How to set models folder paths and link your existing models folders in SwarmUI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "14:35 Explanation of Root folder path in SwarmUI",
"raw": "14:35 Explanation of Root folder path in SwarmUI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "14:52 VAE of SD3 do we need to download?",
"raw": "14:52 VAE of SD3 do we need to download?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Zero to Hero Stable Diffusion 3 Tutorial with Amazing SwarmUI SD Web UI that Utilizes ComfyUI
https://youtu.be/HKX8_F1Er_w
Do not skip any part of this tutorial if you want to master how to use Stable Diffusion 3 (SD3) with SwarmUI, the most advanced open-source generative AI app. Automatic1111 SD Web UI and Fooocus do not support #SD3 yet. Therefore, I am starting to make tutorials for SwarmUI as well. #StableSwarmUI is officially developed by StabilityAI, and your mind will be blown after you watch this tutorial and learn its amazing features. StableSwarmUI uses #ComfyUI as the back end, so it has all the good features of ComfyUI and brings the easy-to-use features of the Automatic1111 #StableDiffusion Web UI with them. I really like SwarmUI and am planning to do more tutorials for it.
🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ https://www.patreon.com/posts/stableswarmui-3-106135985
0:00 Introduction to the Stable Diffusion 3 (SD3) and SwarmUI and what is in the tutorial
4:12 Architecture and features of SD3
5:05 What each different model files of Stable Diffusion 3 means
6:26 How to download and install SwarmUI on Windows for SD3 and all other Stable Diffusion models
8:42 What kind of folder path you should use when installing SwarmUI
10:28 If you get installation error how to notice and fix it
11:49 Installation has been completed and now how to start using SwarmUI
12:29 Which settings I change before start using SwarmUI and how to change your theme like dark, white, gray
12:56 How to make SwarmUI save generated images as PNG
13:08 How to find description of each settings and configuration
13:28 How to download SD3 model and start using on Windows
13:38 How to use model downloader utility of SwarmUI
14:17 How to set models folder paths and link your existing models folders in SwarmUI
14:35 Explanation of Root folder path in SwarmUI
14:52 VAE of SD3 do we need to download? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/Yifqu3JeZwKM8DvoOlXdp.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/AGfMPv9BIMhuq13kTfs5R.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/sc8xqWH5oIVNe6DfYQAiu.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/kkbbhcIXs3ELkBylqFPLt.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/tvSJLxyEfHnvOph0TaVHm.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/_QjFQHM2-W4KPXrW9ohP8.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/Pd4B4wlpxk39q50tlKAc0.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"MonsterMMORPG",
"arun5k1095",
"ssergorp",
"faisalbsl21",
"Ramikan-BR",
"alielfilali01",
"ucsahin",
"Xeven7",
"UfukHurriyetoglu",
"Ougrid-D",
"clem",
"ihsass"
],
"count": 12
},
{
"reaction": "🚀",
"users": [
"MonsterMMORPG",
"prithivMLmods",
"Jaward",
"Ramikan-BR",
"clem"
],
"count": 5
},
{
"reaction": "👀",
"users": [
"MonsterMMORPG",
"Ramikan-BR",
"louisbrulenaudet"
],
"count": 3
},
{
"reaction": "👍",
"users": [
"MonsterMMORPG",
"Ramikan-BR",
"Ougrid-D"
],
"count": 3
},
{
"reaction": "🔥",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "🤗",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "😎",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "➕",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "🧠",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "🤝",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "🤯",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
}
] | 2024-06-22T14:38:51.000Z | 2024-06-22T14:38:51.110Z | [] | /posts/MonsterMMORPG/461231991332860 | 6,958 | 0 |
445166361535551 | [
{
"type": "text",
"value": "I've made an on device AI comparison between open source, Apple Intelligence, and Microsoft Copilot+ PC. This OS and applications level integration will bring GenAI to everyone, be it consumers or businesses, over the next year. ",
"raw": "I've made an on device AI comparison between open source, Apple Intelligence, and Microsoft Copilot+ PC. This OS and applications level integration will bring GenAI to everyone, be it consumers or businesses, over the next year. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Communities and BigTech hold divergent visions regarding the problems they aim to solve, ways to lock in users and enterprises, as well as their commercialization and GTM strategies.",
"raw": "Communities and BigTech hold divergent visions regarding the problems they aim to solve, ways to lock in users and enterprises, as well as their commercialization and GTM strategies.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I'm aware that this table has the potential to expand into an epic 30-page saga during an in-depth analysis, but hey, it's a beginning. Do you think I should throw in a few more comparisons? I'm all ears for your thoughts and critiques!",
"raw": "I'm aware that this table has the potential to expand into an epic 30-page saga during an in-depth analysis, but hey, it's a beginning. Do you think I should throw in a few more comparisons? I'm all ears for your thoughts and critiques!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it",
"raw": "Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I've made an on-device AI comparison between open source, Apple Intelligence, and Microsoft Copilot+ PC. This OS- and application-level integration will bring GenAI to everyone, be it consumers or businesses, over the next year.
Communities and BigTech hold divergent visions regarding the problems they aim to solve, ways to lock in users and enterprises, as well as their commercialization and GTM strategies.
I'm aware that this table has the potential to expand into an epic 30-page saga during an in-depth analysis, but hey, it's a beginning. Do you think I should throw in a few more comparisons? I'm all ears for your thoughts and critiques!
Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f731c7d36951307fcca6bf/DMd5-Pt7YHC0agbAQ1xUc.png",
"fullname": "Mitko Vasilev",
"name": "mitkox",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 117,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63f731c7d36951307fcca6bf/2J2u_ftO271_mUEpjcY10.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"Ramikan-BR",
"not-lain",
"ucsahin",
"fabiodr",
"shaising",
"Narsil",
"SidharthanRajendran",
"Nymbo"
],
"count": 8
},
{
"reaction": "😎",
"users": [
"YaTharThShaRma999",
"Ramikan-BR",
"hugbean",
"gregoutlaw",
"not-lain",
"Nymbo"
],
"count": 6
},
{
"reaction": "👀",
"users": [
"Ramikan-BR",
"not-lain",
"Nymbo"
],
"count": 3
}
] | 2024-06-22T13:22:23.000Z | 2024-06-25T12:58:42.367Z | [
{
"avatarUrl": "/avatars/250fa1367af184832fe56ecf8a44ea2f.svg",
"fullname": "Sidharthan Rajendran",
"name": "SidharthanRajendran",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/mitkox/445166361535551 | 3,404 | 1 |
857502992845439 | [
{
"type": "text",
"value": "What is your favorite part of our Diffusers integration of Stable Diffusion 3? ",
"raw": "What is your favorite part of our Diffusers integration of Stable Diffusion 3? ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "My personal favorite is the ability to run it on a variety of different GPUs with minimal code changes. ",
"raw": "My personal favorite is the ability to run it on a variety of different GPUs with minimal code changes. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Learn more about them here:",
"raw": "Learn more about them here:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/sd3",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/sd3",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | What is your favorite part of our Diffusers integration of Stable Diffusion 3?
My personal favorite is the ability to run it on a variety of different GPUs with minimal code changes.
Learn more about them here:
https://huggingface.co/blog/sd3 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649681653581-5f7fbd813e94f16a85448745.jpeg",
"fullname": "Sayak Paul",
"name": "sayakpaul",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 446,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f7fbd813e94f16a85448745/yEe5emwYuqmEjKjYpq23p.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"prithivMLmods",
"KingNish",
"osanseviero",
"jasstionzyf",
"Ramikan-BR",
"radames",
"GPT007"
],
"count": 7
}
] | 2024-06-22T03:46:48.000Z | 2024-06-22T03:46:48.619Z | [] | /posts/sayakpaul/857502992845439 | 3,113 | 0 |
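The "minimal code changes across GPUs" point in the post above is easiest to see in a short diffusers sketch. The pipeline class and model id follow the linked SD3 blog post; the prompt, step count, and guidance scale are arbitrary assumptions, and the only line that changes on a small-VRAM card is the offload call.

```python
# Minimal SD3 sketch with diffusers; on GPUs with limited VRAM, keep
# enable_model_cpu_offload() -- on larger GPUs, use pipe.to("cuda") instead.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # or: pipe.to("cuda") if memory allows

image = pipe(
    "a photo of a cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3.png")
```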
729661224905079 | [
{
"type": "text",
"value": "Updated the Journalists on 🤗 community page:",
"raw": "Updated the Journalists on 🤗 community page:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- new text-to-speech tools collection ",
"raw": "- new text-to-speech tools collection ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/JournalistsonHF/text-to-speech-6675c4dccdaa11e86928a15b",
"resource": {
"type": "collection",
"id": "JournalistsonHF/text-to-speech-6675c4dccdaa11e86928a15b",
"discussionNum": null
},
"url": "https://huggingface.co/collections/JournalistsonHF/text-to-speech-6675c4dccdaa11e86928a15b",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- additional leaderboards in the eval collection: ",
"raw": "- additional leaderboards in the eval collection: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/TTS-AGI/TTS-Arena",
"resource": {
"type": "space",
"id": "TTS-AGI/TTS-Arena",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/TTS-AGI/TTS-Arena",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " and ",
"raw": " and ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/dylanebert/3d-arena",
"resource": {
"type": "space",
"id": "dylanebert/3d-arena",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/dylanebert/3d-arena",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- new tools in the Text-Analysis collection: ",
"raw": "- new tools in the Text-Analysis collection: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/gokaygokay/Florence-2",
"resource": {
"type": "space",
"id": "gokaygokay/Florence-2",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/gokaygokay/Florence-2",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/pdf2dataset/pdf2dataset",
"resource": {
"type": "space",
"id": "pdf2dataset/pdf2dataset",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/pdf2dataset/pdf2dataset",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/cvachet/pdf-chatbot",
"resource": {
"type": "space",
"id": "cvachet/pdf-chatbot",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/cvachet/pdf-chatbot",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Xenova/realtime-whisper-webgpu",
"resource": {
"type": "space",
"id": "Xenova/realtime-whisper-webgpu",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Xenova/realtime-whisper-webgpu",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " in the Transcription collection",
"raw": " in the Transcription collection",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/radames/flash-sd3-taesd3",
"resource": {
"type": "space",
"id": "radames/flash-sd3-taesd3",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/radames/flash-sd3-taesd3",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " in the Image Tools collection",
"raw": " in the Image Tools collection",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Last but not least, ",
"raw": "- Last but not least, ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/okaris/omni-zero",
"resource": {
"type": "space",
"id": "okaris/omni-zero",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/okaris/omni-zero",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " in the fun collection for zero-shot stylized portrait creation",
"raw": " in the fun collection for zero-shot stylized portrait creation",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Is there any tool you would like to see added?",
"raw": "Is there any tool you would like to see added?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Find all the curated tools here: ",
"raw": "Find all the curated tools here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/collections/JournalistsonHF/",
"resource": null,
"url": null,
"href": "https://huggingface.co/collections/JournalistsonHF/",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Updated the Journalists on 🤗 community page:
- new text-to-speech tools collection https://huggingface.co/collections/JournalistsonHF/text-to-speech-6675c4dccdaa11e86928a15b
- additional leaderboards in the eval collection: https://huggingface.co/spaces/TTS-AGI/TTS-Arena and https://huggingface.co/spaces/dylanebert/3d-arena
- new tools in the Text-Analysis collection: https://huggingface.co/spaces/gokaygokay/Florence-2, https://huggingface.co/spaces/pdf2dataset/pdf2dataset, https://huggingface.co/spaces/cvachet/pdf-chatbot
- https://huggingface.co/spaces/Xenova/realtime-whisper-webgpu in the Transcription collection
- https://huggingface.co/spaces/radames/flash-sd3-taesd3 in the Image Tools collection
- Last but not least, https://huggingface.co/spaces/okaris/omni-zero in the fun collection for zero-shot stylized portrait creation
Is there any tool you would like to see added?
Find all the curated tools here: https://huggingface.co/collections/JournalistsonHF/ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"YaTharThShaRma999",
"xi0v",
"prithivMLmods",
"Taylor658",
"iamrobotbear",
"Ramikan-BR",
"YannisTevissen"
],
"count": 7
},
{
"reaction": "❤️",
"users": [
"cedpsam",
"osanseviero",
"Javedalam",
"Ramikan-BR"
],
"count": 4
},
{
"reaction": "🚀",
"users": [
"Ramikan-BR",
"fffiloni"
],
"count": 2
},
{
"reaction": "👍",
"users": [
"Ramikan-BR",
"YannisTevissen"
],
"count": 2
},
{
"reaction": "👀",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-06-21T18:40:40.000Z | 2024-06-21T18:40:40.278Z | [] | /posts/fdaudens/729661224905079 | 3,374 | 0 |
991766553836950 | [
{
"type": "text",
"value": "Finally, a good handwriting recognition tool? ",
"raw": "Finally, a good handwriting recognition tool? ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I'm impressed by Microsoft's latest vision model, Florence-2 ",
"raw": "I'm impressed by Microsoft's latest vision model, Florence-2 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/microsoft/Florence-2-large",
"resource": {
"type": "model",
"id": "microsoft/Florence-2-large",
"discussionNum": null
},
"url": "https://huggingface.co/microsoft/Florence-2-large",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The results are really good, boasting a remarkably low error rate, as you can see with this letter from George W. Bush to Bill Clinton!",
"raw": "The results are really good, boasting a remarkably low error rate, as you can see with this letter from George W. Bush to Bill Clinton!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🚀🔒 What’s even better? You can run it locally on your device, ensuring your data stays 100% safe. ",
"raw": "🚀🔒 What’s even better? You can run it locally on your device, ensuring your data stays 100% safe. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "👉 Try it out here: ",
"raw": "👉 Try it out here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/gokaygokay/Florence-2",
"resource": {
"type": "space",
"id": "gokaygokay/Florence-2",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/gokaygokay/Florence-2",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Finally, a good handwriting recognition tool?
I'm impressed by Microsoft's latest vision model, Florence-2 https://huggingface.co/microsoft/Florence-2-large
The results are really good, boasting a remarkably low error rate, as you can see with this letter from George W. Bush to Bill Clinton!
🚀🔒 What’s even better? You can run it locally on your device, ensuring your data stays 100% safe.
👉 Try it out here: https://huggingface.co/spaces/gokaygokay/Florence-2 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/q9BMV2It4IV2ZpCONVcGa.mp4"
}
] | [] | [
{
"reaction": "👍",
"users": [
"gruhit-patel",
"ben11997",
"ehottl",
"John6666",
"Utochi",
"aizazkhowaja",
"Ramikan-BR",
"SaylorTwift"
],
"count": 8
}
] | 2024-06-21T16:22:31.000Z | 2024-06-21T23:21:38.775Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
}
] | /posts/fdaudens/991766553836950 | 2,585 | 1 |
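For readers who want to run the Florence-2 transcription above locally rather than through the Space, here is a rough sketch of the usage pattern documented on the model card (the `<OCR>` task token, beam search, and post-processing call). The image filename and generation settings are illustrative assumptions.

```python
# Hedged sketch: local Florence-2 OCR following the model card's documented pattern.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Florence-2 ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("handwritten_letter.jpg")  # assumed local scan
prompt = "<OCR>"  # task token for plain-text transcription

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# post_process_generation returns a dict keyed by the task token, e.g. {"<OCR>": "..."}.
result = processor.post_process_generation(
    generated_text, task="<OCR>", image_size=(image.width, image.height)
)
print(result["<OCR>"])
```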
263282152101643 | [
{
"type": "mention",
"value": null,
"raw": "@Be-Bo",
"resource": null,
"url": null,
"href": null,
"user": "Be-Bo",
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Dear Mr. Bahaa Shamoon Atia,",
"raw": "Dear Mr. Bahaa Shamoon Atia,",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "My name is Krischan Schoeninger, and I am very impressed with your Llama 3-70B Chatbot that you have made available on Hugging Face. I have been trying to use both your chatbot and the model from Hugging Face via API for a project, and I have found that your model produces significantly better results.",
"raw": "My name is Krischan Schoeninger, and I am very impressed with your Llama 3-70B Chatbot that you have made available on Hugging Face. I have been trying to use both your chatbot and the model from Hugging Face via API for a project, and I have found that your model produces significantly better results.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Could you please let me know what changes or optimizations you have made to your model that make it so powerful? Additionally, I am very interested in learning how I can host such a model myself. Could you assist me with this?",
"raw": "Could you please let me know what changes or optimizations you have made to your model that make it so powerful? Additionally, I am very interested in learning how I can host such a model myself. Could you assist me with this?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I would greatly appreciate your feedback.",
"raw": "I would greatly appreciate your feedback.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Best regards,",
"raw": "Best regards,",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Krischan Schoeninger",
"raw": "Krischan Schoeninger",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | @Be-Bo
Dear Mr. Bahaa Shamoon Atia,
My name is Krischan Schoeninger, and I am very impressed with your Llama 3-70B Chatbot that you have made available on Hugging Face. I have been trying to use both your chatbot and the model from Hugging Face via API for a project, and I have found that your model produces significantly better results.
Could you please let me know what changes or optimizations you have made to your model that make it so powerful? Additionally, I am very interested in learning how I can host such a model myself. Could you assist me with this?
I would greatly appreciate your feedback.
Best regards,
Krischan Schoeninger
| {
"avatarUrl": "/avatars/cc343c5f59e05c2a061f8474c6dad278.svg",
"fullname": "Krischan Schoeninger",
"name": "Smoke666",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
} | [] | [
{
"avatarUrl": "/avatars/625b9697c6b2e3c30e97524a0eae6c8b.svg",
"fullname": "Bahaa Shamoon Atia",
"name": "Be-Bo",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 7
}
] | [] | 2024-06-21T15:46:14.000Z | 2024-06-21T22:36:43.977Z | [
{
"avatarUrl": "/avatars/cc343c5f59e05c2a061f8474c6dad278.svg",
"fullname": "Krischan Schoeninger",
"name": "Smoke666",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641a05a7f5e9c66105fec9b2/INWVc96PQFO0ZPciozg7w.jpeg",
"fullname": "Artur Lauche",
"name": "Artples",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
},
{
"avatarUrl": "/avatars/c82779fdf94f80cdb5020504f83c818b.svg",
"fullname": "Yatharth Sharma",
"name": "YaTharThShaRma999",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 13,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61bf84c8ca59d6d196a1b4e8/L_NvUwlMYcye9X35z6f7e.jpeg",
"fullname": "Amir Hossein Kargaran",
"name": "kargaranamir",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 36,
"isFollowing": false
}
] | /posts/Smoke666/263282152101643 | 660 | 5 |
813446846184364 | [
{
"type": "text",
"value": "Several methods/models have recently been shared to generate synthetic data from minimal or no initial seeds, essentially creating data directly from raw text.",
"raw": "Several methods/models have recently been shared to generate synthetic data from minimal or no initial seeds, essentially creating data directly from raw text.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "IMO, these approaches that rely on smaller models for synthetic data generation are quite valuable for scaling up synthetic data and democratizing access to creating domain-specific synthetic datasets. ",
"raw": "IMO, these approaches that rely on smaller models for synthetic data generation are quite valuable for scaling up synthetic data and democratizing access to creating domain-specific synthetic datasets. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've compiled a collection of Gradio demos showcasing some of these methods here: ",
"raw": "I've compiled a collection of Gradio demos showcasing some of these methods here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/davanstrien/synthetic-data-generation-demos-667573f248b97360ff3668a5",
"resource": {
"type": "collection",
"id": "davanstrien/synthetic-data-generation-demos-667573f248b97360ff3668a5",
"discussionNum": null
},
"url": "https://huggingface.co/collections/davanstrien/synthetic-data-generation-demos-667573f248b97360ff3668a5",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Several methods/models have recently been shared to generate synthetic data from minimal or no initial seeds, essentially creating data directly from raw text.
IMO, these approaches that rely on smaller models for synthetic data generation are quite valuable for scaling up synthetic data and democratizing access to creating domain-specific synthetic datasets.
I've compiled a collection of Gradio demos showcasing some of these methods here: https://huggingface.co/collections/davanstrien/synthetic-data-generation-demos-667573f248b97360ff3668a5 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 404,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"instruction-pretrain",
"Clausss",
"Taylor658",
"nihalnayak",
"osanseviero",
"nanyy1025"
],
"count": 6
}
] | 2024-06-21T13:32:58.000Z | 2024-06-25T09:38:29.081Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/GXN8mEmaq3rfITRrw7GeZ.jpeg",
"fullname": "atayloraerospace",
"name": "Taylor658",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 74,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60d0e7ff0c9ba111563b81d7/MudCH3WtpBIQGuodc7GND.jpeg",
"fullname": "Nihal Nayak",
"name": "nihalnayak",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 404,
"isFollowing": false
}
] | /posts/davanstrien/813446846184364 | 2,312 | 5 |
181714694113305 | [
{
"type": "text",
"value": "EPFL and Apple (at ",
"raw": "EPFL and Apple (at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@EPFL-VILAB",
"resource": null,
"url": null,
"href": null,
"user": "EPFL-VILAB",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") just released 4M-21: single any-to-any model that can do anything from text-to-image generation to generating depth masks! 🙀",
"raw": ") just released 4M-21: single any-to-any model that can do anything from text-to-image generation to generating depth masks! 🙀",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "4M is a multimodal training framework introduced by Apple and EPFL.",
"raw": "4M is a multimodal training framework introduced by Apple and EPFL.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Resulting model takes image and text and output image and text 🤩",
"raw": "Resulting model takes image and text and output image and text 🤩",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Models: ",
"raw": "Models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/EPFL-VILAB/4m-models-660193abe3faf4b4d98a2742",
"resource": {
"type": "collection",
"id": "EPFL-VILAB/4m-models-660193abe3faf4b4d98a2742",
"discussionNum": null
},
"url": "https://huggingface.co/collections/EPFL-VILAB/4m-models-660193abe3faf4b4d98a2742",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Demo: ",
"raw": "Demo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/EPFL-VILAB/4M",
"resource": {
"type": "space",
"id": "EPFL-VILAB/4M",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/EPFL-VILAB/4M",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2406.09406",
"resource": {
"type": "paper",
"id": "2406.09406",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2406.09406",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities (2406.09406)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This model consists of transformer encoder and decoder, where the key to multimodality lies in input and output data:",
"raw": "This model consists of transformer encoder and decoder, where the key to multimodality lies in input and output data:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "input and output tokens are decoded to generate bounding boxes, generated image's pixels, captions and more!",
"raw": "input and output tokens are decoded to generate bounding boxes, generated image's pixels, captions and more!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This model also learnt to generate canny maps, SAM edges and other things for steerable text-to-image generation 🖼️",
"raw": "This model also learnt to generate canny maps, SAM edges and other things for steerable text-to-image generation 🖼️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The authors only added image-to-all capabilities for the demo, but you can try to use this model for text-to-image generation as well ☺️",
"raw": "The authors only added image-to-all capabilities for the demo, but you can try to use this model for text-to-image generation as well ☺️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | EPFL and Apple (at @EPFL-VILAB) just released 4M-21: a single any-to-any model that can do anything from text-to-image generation to generating depth masks! 🙀
4M is a multimodal training framework introduced by Apple and EPFL.
The resulting model takes image and text as input and outputs image and text 🤩
Models: https://huggingface.co/collections/EPFL-VILAB/4m-models-660193abe3faf4b4d98a2742
Demo: https://huggingface.co/spaces/EPFL-VILAB/4M
Paper: https://huggingface.co/papers/2406.09406
This model consists of a transformer encoder and decoder, where the key to multimodality lies in the input and output data:
input and output tokens are decoded into bounding boxes, image pixels, captions, and more!
This model also learnt to generate canny maps, SAM edges and other things for steerable text-to-image generation 🖼️
The authors only added image-to-all capabilities for the demo, but you can try to use this model for text-to-image generation as well ☺️
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/2HyJV8kCBZPp7EX7z3Dzc.png"
}
] | [] | [
{
"reaction": "🤯",
"users": [
"Tonic",
"YaTharThShaRma999",
"radames",
"KingNish",
"LeroyDyer",
"Jebadiah",
"Ramikan-BR",
"louisbrulenaudet"
],
"count": 8
},
{
"reaction": "👀",
"users": [
"Tonic",
"YaTharThShaRma999",
"Taylor658",
"radames",
"LeroyDyer",
"Jebadiah",
"Ukdah",
"Ramikan-BR"
],
"count": 8
},
{
"reaction": "👍",
"users": [
"Tonic",
"YaTharThShaRma999",
"LeroyDyer",
"Jebadiah",
"Norod78",
"Ramikan-BR"
],
"count": 6
},
{
"reaction": "😎",
"users": [
"Tonic",
"YaTharThShaRma999",
"LeroyDyer",
"Jebadiah",
"Ramikan-BR"
],
"count": 5
},
{
"reaction": "🧠",
"users": [
"Tonic",
"YaTharThShaRma999",
"Jebadiah",
"Ramikan-BR"
],
"count": 4
},
{
"reaction": "🤝",
"users": [
"Ukdah",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "🔥",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-06-21T13:11:37.000Z | 2024-06-21T13:11:37.163Z | [] | /posts/merve/181714694113305 | 3,558 | 0 |
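The post above suggests trying the model beyond the demo's image-to-all defaults. Without assuming anything about the 4M library's own API, one low-risk way to experiment programmatically is to query the hosted Space and let it report its endpoints; the sketch below only lists what the Space exposes rather than guessing input formats.

```python
# Hedged sketch: discover the hosted 4M demo's endpoints instead of assuming them.
from gradio_client import Client

client = Client("EPFL-VILAB/4M")  # Space id taken from the post above
client.view_api()  # prints each named endpoint with its input/output signature
```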
746271131430293 | [
{
"type": "text",
"value": "The new Claude Sonnet 3.5 model from Anthropic AI has been getting good reviews on since last night. It is quite good at coding related tasks. We tried it on the Static Analysis Eval benchmark (",
"raw": "The new Claude Sonnet 3.5 model from Anthropic AI has been getting good reviews on since last night. It is quite good at coding related tasks. We tried it on the Static Analysis Eval benchmark (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/patched-codes/static-analysis-eval",
"resource": {
"type": "dataset",
"id": "patched-codes/static-analysis-eval",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/patched-codes/static-analysis-eval",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") which measures the ability of a LLM to fix vulnerabilities. The model scores 59.21% which is good but not better than other frontier models (like GPT-4, Gemini-1.5 and LLama-3).",
"raw": ") which measures the ability of a LLM to fix vulnerabilities. The model scores 59.21% which is good but not better than other frontier models (like GPT-4, Gemini-1.5 and LLama-3).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | The new Claude Sonnet 3.5 model from Anthropic AI has been getting good reviews since last night. It is quite good at coding-related tasks. We tried it on the Static Analysis Eval benchmark (https://huggingface.co/datasets/patched-codes/static-analysis-eval) which measures the ability of an LLM to fix vulnerabilities. The model scores 59.21%, which is good but not better than other frontier models (like GPT-4, Gemini-1.5 and Llama-3). | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677134945205-62f32eab52ad88c930bb3f3b.png",
"fullname": "Asankhaya Sharma",
"name": "codelion",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 46,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"codelion",
"John6666",
"sbarman25",
"lemon-mint",
"den0620",
"sroecker"
],
"count": 6
},
{
"reaction": "🧠",
"users": [
"codelion",
"phanerozoic",
"pduf",
"xi0v"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"codelion",
"joseEjmendez"
],
"count": 2
},
{
"reaction": "🚀",
"users": [
"codelion"
],
"count": 1
}
] | 2024-06-21T11:20:27.000Z | 2024-11-11T08:11:38.913Z | [
{
"avatarUrl": "/avatars/066a250e5f73c4f37c9436e60aa8c861.svg",
"fullname": "Minoru Tanaka ",
"name": "Minorutanaka14052005",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1671634255137-noauth.png",
"fullname": "diego lopoez",
"name": "D1360VR",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/8ee8dc056adbf361e79f577ce801e163.svg",
"fullname": "JacketJacket",
"name": "Jacket1119",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/0fbd78ac044ee1fa6248ab1119021956.svg",
"fullname": "Fuji Santoso",
"name": "FujiAILearner",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/cbd018187d5909d2173357f46bff030f.svg",
"fullname": "امیر مرادی",
"name": "Ettdghh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/2b3abd4d9045b68fc213946950bdf4cd.svg",
"fullname": "Rich'Art Dely ",
"name": "Richartvrai",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/f3fc9a8fa17c76dfc4d5c3d7a89a6956.svg",
"fullname": "MEDDEB",
"name": "NOBODY204",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/codelion/746271131430293 | 7,219 | 11 |
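To look at the benchmark behind the 59.21% figure, the referenced dataset can be loaded straight from the Hub. The sketch below deliberately only inspects split names and columns, since the post does not describe the dataset's schema or scoring harness and those should not be guessed.

```python
# Hedged sketch: inspect the Static Analysis Eval benchmark referenced above.
from datasets import load_dataset

ds = load_dataset("patched-codes/static-analysis-eval")  # returns a DatasetDict
for split_name, split in ds.items():
    print(split_name, len(split), split.column_names)
```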
857171391930523 | [
{
"type": "text",
"value": "Hey all!",
"raw": "Hey all!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Here I take a somewhat strong stance and am petitioning to revisit the default training parameters on the Diffusers LoRA page.",
"raw": "Here I take a somewhat strong stance and am petitioning to revisit the default training parameters on the Diffusers LoRA page.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In my opinion and after observing and testing may training pipelines shared by startups and resources, I have found that many of them exhibit the same types of issues. Upon discussing with some of these founders and creators, the common theme has been working backwards from the Diffusers LoRA page.",
"raw": "In my opinion and after observing and testing may training pipelines shared by startups and resources, I have found that many of them exhibit the same types of issues. Upon discussing with some of these founders and creators, the common theme has been working backwards from the Diffusers LoRA page.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In this article, I explain why the defaults in the Diffuser LoRA code produce some positive results, which can be initially misleading, and a suggestion on how that could be improved.",
"raw": "In this article, I explain why the defaults in the Diffuser LoRA code produce some positive results, which can be initially misleading, and a suggestion on how that could be improved.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/alvdansen/revisit-diffusers-default-params",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/alvdansen/revisit-diffusers-default-params",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hey all!
Here I take a somewhat strong stance and am petitioning to revisit the default training parameters on the Diffusers LoRA page.
In my opinion and after observing and testing many training pipelines shared by startups and resources, I have found that many of them exhibit the same types of issues. Upon discussing with some of these founders and creators, the common theme has been working backwards from the Diffusers LoRA page.
In this article, I explain why the defaults in the Diffusers LoRA code produce some positive results, which can be initially misleading, and offer a suggestion on how that could be improved.
https://huggingface.co/blog/alvdansen/revisit-diffusers-default-params | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
} | [] | [] | [] | 2024-06-21T09:45:59.000Z | 2024-06-21T17:53:22.249Z | [
{
"avatarUrl": "/avatars/5f967031612cba5c89ebd3ede1aee392.svg",
"fullname": "Cedric Roux",
"name": "djcedr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493,
"isFollowing": false
}
] | /posts/alvdansen/857171391930523 | 972 | 4 |
965061553506061 | [
{
"type": "text",
"value": "We’re thrilled to share our latest technical paper on the multi-task GLiNER model. Our research dives into the following exciting and forward-thinking topics:",
"raw": "We’re thrilled to share our latest technical paper on the multi-task GLiNER model. Our research dives into the following exciting and forward-thinking topics:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔍 Zero-shot NER & Information Extraction: We demonstrate that with diverse and ample data, paired with the right architecture, encoders can achieve impressive results across various extraction tasks;",
"raw": "🔍 Zero-shot NER & Information Extraction: We demonstrate that with diverse and ample data, paired with the right architecture, encoders can achieve impressive results across various extraction tasks;",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🛠️ Synthetic Data Generation: Leveraging open labelling by LLMs like Llama, we generated high-quality training data. Our student model even outperformed the teacher model, highlighting the potential of this approach.",
"raw": "🛠️ Synthetic Data Generation: Leveraging open labelling by LLMs like Llama, we generated high-quality training data. Our student model even outperformed the teacher model, highlighting the potential of this approach.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🤖 Self-Learning: Our model showed consistent improvements in performance without labelled data, achieving up to a 12% increase in F1 score for initially challenging topics. This ability to learn and improve autonomously is a very perspective direction of future research!",
"raw": "🤖 Self-Learning: Our model showed consistent improvements in performance without labelled data, achieving up to a 12% increase in F1 score for initially challenging topics. This ability to learn and improve autonomously is a very perspective direction of future research!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2406.12925",
"resource": {
"type": "paper",
"id": "2406.12925",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2406.12925",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "GLiNER multi-task: Generalist Lightweight Model for Various Information\n Extraction Tasks (2406.12925)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/knowledgator/gliner-multitask-large-v0.5",
"resource": {
"type": "model",
"id": "knowledgator/gliner-multitask-large-v0.5",
"discussionNum": null
},
"url": "https://huggingface.co/knowledgator/gliner-multitask-large-v0.5",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/knowledgator/GLiNER_HandyLab",
"resource": {
"type": "space",
"id": "knowledgator/GLiNER_HandyLab",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/knowledgator/GLiNER_HandyLab",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```\n#!pip install gliner -U\n\nfrom gliner import GLiNER\n\nmodel = GLiNER.from_pretrained(\"knowledgator/gliner-multitask-large-v0.5\")\n\ntext = \"\"\"\nMicrosoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800. \n\"\"\"\n\nlabels = [\"founder\", \"computer\", \"software\", \"position\", \"date\"]\n\nentities = model.predict_entities(text, labels)\n\nfor entity in entities:\n print(entity[\"text\"], \"=>\", entity[\"label\"])\n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "#!pip install gliner -U\n\nfrom gliner import GLiNER\n\nmodel = GLiNER.from_pretrained(\"knowledgator/gliner-multitask-large-v0.5\")\n\ntext = \"\"\"\nMicrosoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800. \n\"\"\"\n\nlabels = [\"founder\", \"computer\", \"software\", \"position\", \"date\"]\n\nentities = model.predict_entities(text, labels)\n\nfor entity in entities:\n print(entity[\"text\"], \"=>\", entity[\"label\"])",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | We’re thrilled to share our latest technical paper on the multi-task GLiNER model. Our research dives into the following exciting and forward-thinking topics:
🔍 Zero-shot NER & Information Extraction: We demonstrate that with diverse and ample data, paired with the right architecture, encoders can achieve impressive results across various extraction tasks;
🛠️ Synthetic Data Generation: Leveraging open labelling by LLMs like Llama, we generated high-quality training data. Our student model even outperformed the teacher model, highlighting the potential of this approach.
🤖 Self-Learning: Our model showed consistent improvements in performance without labelled data, achieving up to a 12% increase in F1 score for initially challenging topics. This ability to learn and improve autonomously is a very promising direction for future research!
https://huggingface.co/papers/2406.12925
https://huggingface.co/knowledgator/gliner-multitask-large-v0.5
https://huggingface.co/spaces/knowledgator/GLiNER_HandyLab
```
#!pip install gliner -U
from gliner import GLiNER
model = GLiNER.from_pretrained("knowledgator/gliner-multitask-large-v0.5")
text = """
Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800.
"""
labels = ["founder", "computer", "software", "position", "date"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1658166666371-noauth.png",
"fullname": "Stepanov",
"name": "Ihor",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 15,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"YaTharThShaRma999"
],
"count": 1
}
] | 2024-06-21T08:53:27.000Z | 2024-06-21T08:54:47.439Z | [] | /posts/Ihor/965061553506061 | 594 | 0 |
795270205684056 | [
{
"type": "text",
"value": "I'm about to start storing files in my TFLOPS count..",
"raw": "I'm about to start storing files in my TFLOPS count..",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I'm about to start storing files in my TFLOPS count.. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"mazenandmariah",
"YaTharThShaRma999",
"lunarflu",
"not-lain"
],
"count": 4
}
] | 2024-06-20T19:14:37.000Z | 2024-06-21T16:48:31.799Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/634262af8d8089ebaefd410e/pr6KcEebXTo5V2XAlpQNw.png",
"fullname": "Fizz 🏳️⚧️",
"name": "Fizzarolli",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 47,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652ff5ee7aab9cfb619400bf/cIdrUic40uXoRbAylFiM8.png",
"fullname": "Samuel L Meyers",
"name": "MrOvkill",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
}
] | /posts/nroggendorff/795270205684056 | 2,479 | 38 |
379857675340435 | [
{
"type": "text",
"value": "I am excited to share Synthetic Data Workshop, a Space that aims to simplify creating synthetic datasets! ",
"raw": "I am excited to share Synthetic Data Workshop, a Space that aims to simplify creating synthetic datasets! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✅ Pre-configured environment",
"raw": "✅ Pre-configured environment",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✅ Ready-to-use notebooks",
"raw": "✅ Ready-to-use notebooks",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✅ No local GPU needed",
"raw": "✅ No local GPU needed",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "You can try the Space here: ",
"raw": "You can try the Space here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/davanstrien/synthetic-data-workshop",
"resource": {
"type": "space",
"id": "davanstrien/synthetic-data-workshop",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/davanstrien/synthetic-data-workshop",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I also wrote a blog post going into more detail about the motivations for the Space: ",
"raw": "I also wrote a blog post going into more detail about the motivations for the Space: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/davanstrien/synthetic-data-workshop",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/davanstrien/synthetic-data-workshop",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I am excited to share Synthetic Data Workshop, a Space that aims to simplify creating synthetic datasets!
✅ Pre-configured environment
✅ Ready-to-use notebooks
✅ No local GPU needed
You can try the Space here: https://huggingface.co/spaces/davanstrien/synthetic-data-workshop
I also wrote a blog post going into more detail about the motivations for the Space: https://huggingface.co/blog/davanstrien/synthetic-data-workshop | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 404,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"osanseviero",
"xi0v",
"yjernite"
],
"count": 3
}
] | 2024-06-20T15:47:03.000Z | 2024-06-21T06:22:12.802Z | [] | /posts/davanstrien/379857675340435 | 2,019 | 1 |
730848617257487 | [
{
"type": "text",
"value": "🔍 A recently published technical report introduces MINT-1T, a dataset that will considerably expand open-source multimodal data. It features one trillion text tokens and three billion images and is scheduled for release in July 2024.",
"raw": "🔍 A recently published technical report introduces MINT-1T, a dataset that will considerably expand open-source multimodal data. It features one trillion text tokens and three billion images and is scheduled for release in July 2024.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Researcher Affiliation: ",
"raw": "Researcher Affiliation: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "University of Washington",
"raw": "University of Washington",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Salesforce Research",
"raw": "Salesforce Research",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Stanford University",
"raw": "Stanford University",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "University of Texas at Austin",
"raw": "University of Texas at Austin",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "University of California, Berkeley",
"raw": "University of California, Berkeley",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper:",
"raw": "Paper:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens",
"raw": "MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/pdf/2406.11271v1.pdf",
"resource": null,
"url": null,
"href": "https://arxiv.org/pdf/2406.11271v1.pdf",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "GitHub:",
"raw": "GitHub:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/mlfoundations/MINT-1T",
"resource": null,
"url": null,
"href": "https://github.com/mlfoundations/MINT-1T",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Highlights:",
"raw": "Highlights:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "MINT-1T Dataset: Largest open-source multimodal interleaved dataset with 1 trillion text tokens & 3 billion images. 📊🖼️",
"raw": "MINT-1T Dataset: Largest open-source multimodal interleaved dataset with 1 trillion text tokens & 3 billion images. 📊🖼️",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Diverse Sources: Incorporates data from HTML, PDFs, and ArXiv documents. 📄📚",
"raw": "Diverse Sources: Incorporates data from HTML, PDFs, and ArXiv documents. 📄📚",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Open Source: Dataset and code will be released at ",
"raw": "Open Source: Dataset and code will be released at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/mlfoundations/MINT-1T",
"resource": null,
"url": null,
"href": "https://github.com/mlfoundations/MINT-1T",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ". 🌐🔓",
"raw": ". 🌐🔓",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Broader Domain Representation: Uses diverse data sources for balanced domain representation. 🌍📚",
"raw": "Broader Domain Representation: Uses diverse data sources for balanced domain representation. 🌍📚",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Performance in Multimodal Tasks: The dataset’s scale and diversity should enhance multimodal task performance. 🤖💡",
"raw": "Performance in Multimodal Tasks: The dataset’s scale and diversity should enhance multimodal task performance. 🤖💡",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Datasheet Information:",
"raw": "Datasheet Information:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Motivation: Addresses the gap in large-scale open-source multimodal datasets. 🌐📊",
"raw": "Motivation: Addresses the gap in large-scale open-source multimodal datasets. 🌐📊",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Composition: 927.6 million documents, including HTML, PDF, and ArXiv sources. 📄📚",
"raw": "Composition: 927.6 million documents, including HTML, PDF, and ArXiv sources. 📄📚",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Collection Process: Gathered from CommonCrawl WARC and WAT dumps, with rigorous filtering. 🗂️🔍",
"raw": "Collection Process: Gathered from CommonCrawl WARC and WAT dumps, with rigorous filtering. 🗂️🔍",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Preprocessing/Cleaning: Removal of low-quality text, duplicates and anonymization of sensitive information. 🧹🔒",
"raw": "Preprocessing/Cleaning: Removal of low-quality text, duplicates and anonymization of sensitive information. 🧹🔒",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Ethical Considerations: Measures to ensure privacy and avoid bias. ⚖️🔏",
"raw": "Ethical Considerations: Measures to ensure privacy and avoid bias. ⚖️🔏",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Uses: Training multimodal models, generating interleaved image-text sequences, and building retrieval systems. 🤖📖",
"raw": "Uses: Training multimodal models, generating interleaved image-text sequences, and building retrieval systems. 🤖📖",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | 🔍 A recently published technical report introduces MINT-1T, a dataset that will considerably expand open-source multimodal data. It features one trillion text tokens and three billion images and is scheduled for release in July 2024.
Researcher Affiliation:
University of Washington
Salesforce Research
Stanford University
University of Texas at Austin
University of California, Berkeley
Paper:
MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
https://arxiv.org/pdf/2406.11271v1.pdf
GitHub:
https://github.com/mlfoundations/MINT-1T
Highlights:
MINT-1T Dataset: Largest open-source multimodal interleaved dataset with 1 trillion text tokens & 3 billion images. 📊🖼️
Diverse Sources: Incorporates data from HTML, PDFs, and ArXiv documents. 📄📚
Open Source: Dataset and code will be released at https://github.com/mlfoundations/MINT-1T. 🌐🔓
Broader Domain Representation: Uses diverse data sources for balanced domain representation. 🌍📚
Performance in Multimodal Tasks: The dataset’s scale and diversity should enhance multimodal task performance. 🤖💡
Datasheet Information:
Motivation: Addresses the gap in large-scale open-source multimodal datasets. 🌐📊
Composition: 927.6 million documents, including HTML, PDF, and ArXiv sources. 📄📚
Collection Process: Gathered from CommonCrawl WARC and WAT dumps, with rigorous filtering. 🗂️🔍
Preprocessing/Cleaning: Removal of low-quality text and duplicates, and anonymization of sensitive information. 🧹🔒
Ethical Considerations: Measures to ensure privacy and avoid bias. ⚖️🔏
Uses: Training multimodal models, generating interleaved image-text sequences, and building retrieval systems. 🤖📖
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/GXN8mEmaq3rfITRrw7GeZ.jpeg",
"fullname": "atayloraerospace",
"name": "Taylor658",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 74,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/641b754d1911d3be6745cce9/meW29kMUUadt1nmS79PJN.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/641b754d1911d3be6745cce9/np_DuLHU3dz1wYpDenXwj.png"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"YaTharThShaRma999",
"anas-awadalla"
],
"count": 2
}
] | 2024-06-20T15:19:49.000Z | 2024-06-20T15:31:17.817Z | [] | /posts/Taylor658/730848617257487 | 935 | 0 |
597673567444043 | [
{
"type": "text",
"value": "Towards a Dynamic 2.0 Model of Generative Intelligence",
"raw": "Towards a Dynamic 2.0 Model of Generative Intelligence",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://empereur-pirate.medium.com/towards-a-dynamic-2-0-model-of-generative-intelligence-6128d64fb523",
"resource": null,
"url": null,
"href": "https://empereur-pirate.medium.com/towards-a-dynamic-2-0-model-of-generative-intelligence-6128d64fb523",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This article explores the limitations of current AI language models, which do not allow user contributions and restrict freedom of expression through corporate censorship. It argues for a participatory approach to AI development, emphasizing the need for open and transparent models that integrate user feedback and contributions. The article highlights the importance of network neutrality and the potential of AI to enhance creativity and collective intelligence. It calls for political and institutional support to foster innovation in AI, ensuring that these technologies respect fundamental rights and promote qualitative performance through inclusivity and transparency.",
"raw": "This article explores the limitations of current AI language models, which do not allow user contributions and restrict freedom of expression through corporate censorship. It argues for a participatory approach to AI development, emphasizing the need for open and transparent models that integrate user feedback and contributions. The article highlights the importance of network neutrality and the potential of AI to enhance creativity and collective intelligence. It calls for political and institutional support to foster innovation in AI, ensuring that these technologies respect fundamental rights and promote qualitative performance through inclusivity and transparency.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Towards a Dynamic 2.0 Model of Generative Intelligence
https://empereur-pirate.medium.com/towards-a-dynamic-2-0-model-of-generative-intelligence-6128d64fb523
This article explores the limitations of current AI language models, which do not allow user contributions and restrict freedom of expression through corporate censorship. It argues for a participatory approach to AI development, emphasizing the need for open and transparent models that integrate user feedback and contributions. The article highlights the importance of network neutrality and the potential of AI to enhance creativity and collective intelligence. It calls for political and institutional support to foster innovation in AI, ensuring that these technologies respect fundamental rights and promote qualitative performance through inclusivity and transparency. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678038324479-noauth.jpeg",
"fullname": "Empereur Pirate",
"name": "Empereur-Pirate",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🔥",
"users": [
"Empereur-Pirate",
"osanseviero"
],
"count": 2
}
] | 2024-06-20T15:01:48.000Z | 2024-06-20T15:01:48.531Z | [] | /posts/Empereur-Pirate/597673567444043 | 1,458 | 0 |
422173022604565 | [
{
"type": "text",
"value": "Florence-2 is a new vision foundation model capable of a wide variety of tasks 🤯 ",
"raw": "Florence-2 is a new vision foundation model capable of a wide variety of tasks 🤯 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Demo 👉🏻 ",
"raw": "Demo 👉🏻 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/gokaygokay/Florence-2",
"resource": {
"type": "space",
"id": "gokaygokay/Florence-2",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/gokaygokay/Florence-2",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Collection 👉🏻 ",
"raw": "Collection 👉🏻 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/microsoft/florence-6669f44df0d87d9c3bfb76de",
"resource": {
"type": "collection",
"id": "microsoft/florence-6669f44df0d87d9c3bfb76de",
"discussionNum": null
},
"url": "https://huggingface.co/collections/microsoft/florence-6669f44df0d87d9c3bfb76de",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This model can handle tasks that vary from OCR to semantic segmentation. ",
"raw": "This model can handle tasks that vary from OCR to semantic segmentation. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The difference from previous models is that the authors have compiled a dataset consisting of 126M images with 5.4B annotations labelled with their own data engine pseudolabelled by smaller specialized models and APIs. ",
"raw": "The difference from previous models is that the authors have compiled a dataset consisting of 126M images with 5.4B annotations labelled with their own data engine pseudolabelled by smaller specialized models and APIs. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The model has a similar architecture to previous models: an image encoder and a multimodality encoder with a text decoder. The authors have compiled the multitask dataset with prompts for each task. ",
"raw": "The model has a similar architecture to previous models: an image encoder and a multimodality encoder with a text decoder. The authors have compiled the multitask dataset with prompts for each task. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "You can also fine-tune this model on any task of choice. The authors also released different results on downstream tasks and reported their results when un/freezing the vision encoder 🤓📉 ",
"raw": "You can also fine-tune this model on any task of choice. The authors also released different results on downstream tasks and reported their results when un/freezing the vision encoder 🤓📉 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "They have released fine-tuned models too, you can find them in the collection above 🤗 ",
"raw": "They have released fine-tuned models too, you can find them in the collection above 🤗 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Florence-2 is a new vision foundation model capable of a wide variety of tasks 🤯
Demo 👉🏻 https://huggingface.co/spaces/gokaygokay/Florence-2
Collection 👉🏻 https://huggingface.co/collections/microsoft/florence-6669f44df0d87d9c3bfb76de
This model can handle tasks ranging from OCR to semantic segmentation. 
The difference from previous models is that the authors have compiled a dataset consisting of 126M images with 5.4B annotations, labelled with their own data engine and pseudolabelled by smaller specialized models and APIs. 
The model has a similar architecture to previous models: an image encoder and a multimodality encoder with a text decoder. The authors have compiled the multitask dataset with prompts for each task.
You can also fine-tune this model on any task of your choice. The authors also released results on different downstream tasks and reported performance when freezing/unfreezing the vision encoder 🤓📉 
They have released fine-tuned models too, you can find them in the collection above 🤗 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/cKwqeIUVmdMM9GDeVu7z2.png"
}
] | [] | [
{
"reaction": "🔥",
"users": [
"osanseviero",
"afondiel",
"victor",
"gokaygokay",
"John6666",
"leduckhai",
"louisbrulenaudet",
"KingNish",
"Linksplayer",
"jlzhou"
],
"count": 10
},
{
"reaction": "👀",
"users": [
"osanseviero",
"Taylor658",
"victor",
"gokaygokay",
"John6666",
"omaryshchenko"
],
"count": 6
},
{
"reaction": "❤️",
"users": [
"anakin87",
"KnutJaegersberg"
],
"count": 2
}
] | 2024-06-20T12:49:10.000Z | 2024-06-22T10:18:39.766Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
},
{
"avatarUrl": "/avatars/9fbbb8eaaa7b19752b336cf228d4679e.svg",
"fullname": "lucasjin",
"name": "lucasjin",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 8,
"isFollowing": false
},
{
"avatarUrl": "/avatars/a69b6f3e1f5e04937fe9500ca9b31af7.svg",
"fullname": "pollies",
"name": "polles",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/merve/422173022604565 | 4,320 | 3 |
106001158662393 | [
{
"type": "text",
"value": "Check out AutoRound, SOTA LLM quantization algorithm across 2-4 bits without adding any inference overhead to any model",
"raw": "Check out AutoRound, SOTA LLM quantization algorithm across 2-4 bits without adding any inference overhead to any model",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "paper: ",
"raw": "paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2309.05516",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2309.05516",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "github: ",
"raw": "github: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/intel/auto-round",
"resource": null,
"url": null,
"href": "https://github.com/intel/auto-round",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "lowbits leaderboard: ",
"raw": "lowbits leaderboard: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/Intel/low-bit-leaderboard",
"resource": null,
"url": null,
"href": "https://huggingface.co/spaces/Intel/low-bit-leaderboard",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Check out AutoRound, a SOTA LLM quantization algorithm across 2-4 bits that adds no inference overhead to any model
paper: https://arxiv.org/abs/2309.05516
github: https://github.com/intel/auto-round
lowbits leaderboard: https://huggingface.co/spaces/Intel/low-bit-leaderboard
| {
"avatarUrl": "/avatars/2e88e32ac0d45a1c624026e497eb00b3.svg",
"fullname": "wenhua cheng",
"name": "wenhuach",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/634772780a9fa1ac49994510/eVfopROZjga1VmgywgayB.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/634772780a9fa1ac49994510/8XuToowlRbkpwtniEza_a.png"
}
] | [] | [] | 2024-06-20T09:38:56.000Z | 2024-06-20T09:38:56.679Z | [] | /posts/wenhuach/106001158662393 | 530 | 0 |
317212739158608 | [
{
"type": "text",
"value": "With the most recent workshop on Semantic Evaluation as a part of NAACL-2024, this year delighted to contribute with 🧪 on Chain-of-Thought fine-tuning concepts to push forward LLMs reasoning capabilities in: ",
"raw": "With the most recent workshop on Semantic Evaluation as a part of NAACL-2024, this year delighted to contribute with 🧪 on Chain-of-Thought fine-tuning concepts to push forward LLMs reasoning capabilities in: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🧪 1. Reading Comprehension of Numerals in texts 🇨🇳 ",
"raw": "🧪 1. Reading Comprehension of Numerals in texts 🇨🇳 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "⭐ ",
"raw": "⭐ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/GavinZhao19/SemEval24-NumAnalysis-CN",
"resource": null,
"url": null,
"href": "https://github.com/GavinZhao19/SemEval24-NumAnalysis-CN",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔒 ",
"raw": "🔒 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/GavinZhao23/NumAnalysis-Chatglm3-6B",
"resource": null,
"url": null,
"href": "https://huggingface.co/GavinZhao23/NumAnalysis-Chatglm3-6B",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🧪 2. Extracting Emotion-Causes using Reasoning Revision (RR)",
"raw": "🧪 2. Extracting Emotion-Causes using Reasoning Revision (RR)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "⭐ ",
"raw": "⭐ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/nicolay-r/THOR-ECAC",
"resource": null,
"url": null,
"href": "https://github.com/nicolay-r/THOR-ECAC",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔓 ",
"raw": "🔓 ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/nicolay-r/flan-t5-emotion-cause-thor-base",
"resource": {
"type": "model",
"id": "nicolay-r/flan-t5-emotion-cause-thor-base",
"discussionNum": null
},
"url": "https://huggingface.co/nicolay-r/flan-t5-emotion-cause-thor-base",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2404.03361",
"resource": {
"type": "paper",
"id": "2404.03361",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2404.03361",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "nicolay-r at SemEval-2024 Task 3: Using Flan-T5 for Reasoning Emotion\n Cause in Conversations with Chain-of-Thought on Emotion States (2404.03361)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "🔑 In short, there are three major takeaways:",
"raw": "🔑 In short, there are three major takeaways:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✅ 1. The scale of the backboned LLM for SFT matters (>1.1B is preferable)",
"raw": "✅ 1. The scale of the backboned LLM for SFT matters (>1.1B is preferable)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✅ 2. The language of the input data matters in LLM reasoning capabilities: transfering data in English and picking English-based LLM is crucial for the most cases!",
"raw": "✅ 2. The language of the input data matters in LLM reasoning capabilities: transfering data in English and picking English-based LLM is crucial for the most cases!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "✅ 3. CoT and RR takes more time ⏳ for inferring and fine-tuning, proportionally to abount of steps in chain / amount of revisions in reasoning 🧠",
"raw": "✅ 3. CoT and RR takes more time ⏳ for inferring and fine-tuning, proportionally to abount of steps in chain / amount of revisions in reasoning 🧠",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | With the most recent workshop on Semantic Evaluation as a part of NAACL-2024, this year I was delighted to contribute 🧪 on Chain-of-Thought fine-tuning concepts to push forward LLMs' reasoning capabilities in: 
🧪 1. Reading Comprehension of Numerals in texts 🇨🇳
⭐ https://github.com/GavinZhao19/SemEval24-NumAnalysis-CN
🔒 https://huggingface.co/GavinZhao23/NumAnalysis-Chatglm3-6B
🧪 2. Extracting Emotion-Causes using Reasoning Revision (RR)
⭐ https://github.com/nicolay-r/THOR-ECAC
🔓 https://huggingface.co/nicolay-r/flan-t5-emotion-cause-thor-base
https://huggingface.co/papers/2404.03361
🔑 In short, there are three major takeaways:
✅ 1. The scale of the backbone LLM for SFT matters (>1.1B is preferable)
✅ 2. The language of the input data matters for LLM reasoning capabilities: transferring data into English and picking an English-based LLM is crucial in most cases!
✅ 3. CoT and RR take more time ⏳ for inference and fine-tuning, proportional to the amount of steps in the chain / amount of revisions in reasoning 🧠 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64e62d11d27a8292c3637f86/aptDeBHpCJxcREj6KPLN1.jpeg",
"fullname": "Nicolay Rusnachenko",
"name": "nicolay-r",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 49,
"isFollowing": false
} | [] | [] | [] | 2024-06-20T09:18:29.000Z | 2024-06-20T09:19:29.195Z | [] | /posts/nicolay-r/317212739158608 | 451 | 0 |