slug
stringlengths 15
15
| content
listlengths 1
129
| rawContent
stringlengths 1
2k
| author
dict | attachments
listlengths 0
49
| mentions
listlengths 0
49
| reactions
listlengths 0
12
| publishedAt
stringlengths 24
24
| updatedAt
stringlengths 24
24
| commentators
listlengths 0
47
| url
stringlengths 25
46
| totalUniqueImpressions
int64 1
41.5k
| numComments
int64 0
621
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
304603661954264 | [
{
"type": "text",
"value": "Free LLM/RAG course at ",
"raw": "Free LLM/RAG course at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://mltblog.com/48GebAG",
"resource": null,
"url": null,
"href": "https://mltblog.com/48GebAG",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " - learn how to build custom architectures from scratch, and earn an LLM certification, all free. ",
"raw": " - learn how to build custom architectures from scratch, and earn an LLM certification, all free. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The GenAItechLab Fellowship program allows participants to work on state-of-the-art, enterprise-grade projects, entirely for free, at their own pace, at home or in their workplace. The goal is to help you test, enhance, and further implement applications that outperform solutions offered by AI startups or organizations such as Google or OpenAI.",
"raw": "The GenAItechLab Fellowship program allows participants to work on state-of-the-art, enterprise-grade projects, entirely for free, at their own pace, at home or in their workplace. The goal is to help you test, enhance, and further implement applications that outperform solutions offered by AI startups or organizations such as Google or OpenAI.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "You will learn how to quickly build faster and lighter systems that deliver better results based on sound evaluation metrics, with a focus on case studies and best practices. Not the least, you will learn modern methods here to stay, designed by world-class expert and investor, Dr. Vincent Granville, founder of GenAItechLab.com.",
"raw": "You will learn how to quickly build faster and lighter systems that deliver better results based on sound evaluation metrics, with a focus on case studies and best practices. Not the least, you will learn modern methods here to stay, designed by world-class expert and investor, Dr. Vincent Granville, founder of GenAItechLab.com.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Free LLM/RAG course at https://mltblog.com/48GebAG - learn how to build custom architectures from scratch, and earn an LLM certification, all free.
The GenAItechLab Fellowship program allows participants to work on state-of-the-art, enterprise-grade projects, entirely for free, at their own pace, at home or in their workplace. The goal is to help you test, enhance, and further implement applications that outperform solutions offered by AI startups or organizations such as Google or OpenAI.
You will learn how to quickly build faster and lighter systems that deliver better results based on sound evaluation metrics, with a focus on case studies and best practices. Not the least, you will learn modern methods here to stay, designed by world-class expert and investor, Dr. Vincent Granville, founder of GenAItechLab.com. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/669c89e98f2dbc203f9e74ab/higvnXEHeo_Ig2bgTpn47.png",
"fullname": "Vincent Granville",
"name": "vincentg64",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 17,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/669c89e98f2dbc203f9e74ab/4zbSKFOCHNZ7E9WuqEjuo.png"
}
] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"redturk",
"AGenchev",
"Saripudin",
"prithivMLmods",
"Chars",
"Tonic"
],
"count": 6
},
{
"reaction": "๐",
"users": [
"cho2024",
"Tonic"
],
"count": 2
},
{
"reaction": "๐ค",
"users": [
"Tonic"
],
"count": 1
}
] | 2024-07-31T18:41:41.000Z | 2024-07-31T18:41:41.432Z | [] | /posts/vincentg64/304603661954264 | 1,775 | 0 |
742986348801279 | [
{
"type": "text",
"value": "โค๏ธโ๐ฅย Just released version 2.0 of Argilla!",
"raw": "โค๏ธโ๐ฅย Just released version 2.0 of Argilla!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This small revolution includes:",
"raw": "This small revolution includes:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ย You can now integrate with the Hugging Face Hub and get started in under five minutes.",
"raw": "๐ย You can now integrate with the Hugging Face Hub and get started in under five minutes.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ชย A single ",
"raw": "๐ชย A single ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "inline_code",
"value": null,
"raw": "`Dataset`",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "Dataset",
"label": null
},
{
"type": "text",
"value": " class is now designed to handle multiple tasks.",
"raw": " class is now designed to handle multiple tasks.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐งย Itโs 100 times simpler to configure your dataset now with the new SDK!",
"raw": "๐งย Itโs 100 times simpler to configure your dataset now with the new SDK!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ย The documentation has been revamped to be cleaner and more user-friendly.",
"raw": "๐ย The documentation has been revamped to be cleaner and more user-friendly.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ย A new feature automates splitting annotation tasks among a team.",
"raw": "๐ย A new feature automates splitting annotation tasks among a team.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โ๏ธย The layout has been made more flexible to accommodate many use cases.",
"raw": "โ๏ธย The layout has been made more flexible to accommodate many use cases.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out the release highlights for more details: ",
"raw": "Check out the release highlights for more details: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/argilla-io/argilla/releases/tag/v2.0.0",
"resource": null,
"url": null,
"href": "https://github.com/argilla-io/argilla/releases/tag/v2.0.0",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | โค๏ธโ๐ฅย Just released version 2.0 of Argilla!
This small revolution includes:
๐ย You can now integrate with the Hugging Face Hub and get started in under five minutes.
๐ชย A single `Dataset` class is now designed to handle multiple tasks.
๐งย Itโs 100 times simpler to configure your dataset now with the new SDK!
๐ย The documentation has been revamped to be cleaner and more user-friendly.
๐ย A new feature automates splitting annotation tasks among a team.
โ๏ธย The layout has been made more flexible to accommodate many use cases.
Check out the release highlights for more details: https://github.com/argilla-io/argilla/releases/tag/v2.0.0 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e27f0f1f963b8f20f4a10d/n9KcVAzZDfymP9j_jpTRc.jpeg",
"fullname": "Ame Vi",
"name": "Ameeeee",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 26,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"nea-glossa",
"gabrielmbmb",
"anakin87",
"jfcalvo",
"kgdrathan",
"Hev832",
"Leiyre",
"sdiazlor",
"davidberenstein1957",
"philipp-zettl",
"burtenshaw",
"victor",
"KingNish",
"linoyts",
"cschroeder",
"darknoon",
"Nymbo",
"louisbrulenaudet",
"osanseviero",
"Tonic"
],
"count": 20
},
{
"reaction": "๐ค",
"users": [
"jfcalvo",
"Leiyre",
"sdiazlor",
"davidberenstein1957",
"burtenshaw",
"Nymbo",
"osanseviero",
"Tonic"
],
"count": 8
},
{
"reaction": "๐",
"users": [
"LeroyDyer",
"Nymbo",
"Tonic"
],
"count": 3
}
] | 2024-07-31T15:58:53.000Z | 2024-08-03T07:52:43.049Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3bb1cd0d8c2c2169f0b88/eT2TS0IlQbZtz-F_zHLz9.jpeg",
"fullname": "Joseph Pollack",
"name": "Tonic",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 310,
"isFollowing": false
}
] | /posts/Ameeeee/742986348801279 | 3,520 | 1 |
755951686477118 | [
{
"type": "text",
"value": "๐๐น๐ฎ๐บ๐ฎ-๐ฏ.๐ญ ๐บ๐ผ๐ฑ๐ฒ๐น๐ ๐ณ๐ถ๐ป๐ฎ๐น๐น๐ ๐ด๐ฒ๐ ๐๐ต๐ฒ๐ถ๐ฟ ๐๐ต๐ฎ๐๐ฏ๐ผ๐ ๐๐ฟ๐ฒ๐ป๐ฎ ๐ฟ๐ฎ๐ป๐ธ๐ถ๐ป๐ด ๐๏ธ",
"raw": "๐๐น๐ฎ๐บ๐ฎ-๐ฏ.๐ญ ๐บ๐ผ๐ฑ๐ฒ๐น๐ ๐ณ๐ถ๐ป๐ฎ๐น๐น๐ ๐ด๐ฒ๐ ๐๐ต๐ฒ๐ถ๐ฟ ๐๐ต๐ฎ๐๐ฏ๐ผ๐ ๐๐ฟ๐ฒ๐ป๐ฎ ๐ฟ๐ฎ๐ป๐ธ๐ถ๐ป๐ด ๐๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Given the impressive benchmarks published my Meta for their Llama-3.1 models, I was curious to see how these models would compare to top proprietary models on Chatbot Arena.",
"raw": "Given the impressive benchmarks published my Meta for their Llama-3.1 models, I was curious to see how these models would compare to top proprietary models on Chatbot Arena.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Now we've got the results! LMSys released the ELO derived from thousands of user votes for the new models, and here are the rankings:",
"raw": "Now we've got the results! LMSys released the ELO derived from thousands of user votes for the new models, and here are the rankings:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ฅ 405B Model ranks 5th overall, in front of GPT-4-turbo! But behind GPT-4o, Claude-3.5 Sonnet and Gemini-advanced.",
"raw": "๐ฅ 405B Model ranks 5th overall, in front of GPT-4-turbo! But behind GPT-4o, Claude-3.5 Sonnet and Gemini-advanced.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ 70B Model climbs up to 9th rank ! From 1206 โก๏ธ 1244.",
"raw": "๐ 70B Model climbs up to 9th rank ! From 1206 โก๏ธ 1244.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ 8B Model improves from 1152 โก๏ธ 1170.",
"raw": "๐ 8B Model improves from 1152 โก๏ธ 1170.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โ
This confirms that Llama-3.1 is a good contender for any task: any of its 3 model size is much cheaper to run than equivalent proprietary models!",
"raw": "โ
This confirms that Llama-3.1 is a good contender for any task: any of its 3 model size is much cheaper to run than equivalent proprietary models!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For instance, here are the inference prices for the top models;",
"raw": "For instance, here are the inference prices for the top models;",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โค GPT-4-Turbo inference price from OpenAI: $5/M input tokens, $15/M output tokens",
"raw": "โค GPT-4-Turbo inference price from OpenAI: $5/M input tokens, $15/M output tokens",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โค Llama-3.1-405B from HF API (for testing only): 3$/M for input or output tokens (Source linked in the first comment)",
"raw": "โค Llama-3.1-405B from HF API (for testing only): 3$/M for input or output tokens (Source linked in the first comment)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โค Llama-3.1-405B from HF API (for testing only): free โจ",
"raw": "โค Llama-3.1-405B from HF API (for testing only): free โจ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Get a head start on the HF API (resource by ",
"raw": "Get a head start on the HF API (resource by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@andrewrreed",
"resource": null,
"url": null,
"href": null,
"user": "andrewrreed",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") ๐ ",
"raw": ") ๐ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/learn/cookbook/enterprise_hub_serverless_inference_api",
"resource": null,
"url": null,
"href": "https://huggingface.co/learn/cookbook/enterprise_hub_serverless_inference_api",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐๐น๐ฎ๐บ๐ฎ-๐ฏ.๐ญ ๐บ๐ผ๐ฑ๐ฒ๐น๐ ๐ณ๐ถ๐ป๐ฎ๐น๐น๐ ๐ด๐ฒ๐ ๐๐ต๐ฒ๐ถ๐ฟ ๐๐ต๐ฎ๐๐ฏ๐ผ๐ ๐๐ฟ๐ฒ๐ป๐ฎ ๐ฟ๐ฎ๐ป๐ธ๐ถ๐ป๐ด ๐๏ธ
Given the impressive benchmarks published my Meta for their Llama-3.1 models, I was curious to see how these models would compare to top proprietary models on Chatbot Arena.
Now we've got the results! LMSys released the ELO derived from thousands of user votes for the new models, and here are the rankings:
๐ฅ 405B Model ranks 5th overall, in front of GPT-4-turbo! But behind GPT-4o, Claude-3.5 Sonnet and Gemini-advanced.
๐ 70B Model climbs up to 9th rank ! From 1206 โก๏ธ 1244.
๐ 8B Model improves from 1152 โก๏ธ 1170.
โ
This confirms that Llama-3.1 is a good contender for any task: any of its 3 model size is much cheaper to run than equivalent proprietary models!
For instance, here are the inference prices for the top models;
โค GPT-4-Turbo inference price from OpenAI: $5/M input tokens, $15/M output tokens
โค Llama-3.1-405B from HF API (for testing only): 3$/M for input or output tokens (Source linked in the first comment)
โค Llama-3.1-405B from HF API (for testing only): free โจ
Get a head start on the HF API (resource by @andrewrreed) ๐ https://huggingface.co/learn/cookbook/enterprise_hub_serverless_inference_api | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63d10d4e8eaa4831005e92b5/jBzqAenqeYJRHOZ2YK-pY.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61d375fd733d3a83ecd1bba9/oIXwvvs1-HaCnJXMCZgkc.jpeg",
"fullname": "Andrew Reed",
"name": "andrewrreed",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 103
}
] | [
{
"reaction": "๐ฅ",
"users": [
"Stoneking89",
"louisbrulenaudet",
"Tonic",
"andrewrreed"
],
"count": 4
}
] | 2024-07-31T15:30:37.000Z | 2024-07-31T15:48:56.738Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6569216f9c96f1a47bf45788/mCLqmAs4dOjKdxNQVAp1w.png",
"fullname": "Sica Rius",
"name": "SicariusSicariiStuff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 132,
"isFollowing": false
}
] | /posts/m-ric/755951686477118 | 1,103 | 1 |
397656500149493 | [
{
"type": "text",
"value": "๐ I created some shiny new Argilla datasets to go along with the 2.0 release! ",
"raw": "๐ I created some shiny new Argilla datasets to go along with the 2.0 release! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```python\nimport argilla as rg \n\nds = rg.Dataset.from_hub(\n \"argilla/multi-modal-vlm-visit-bench\"\n) \n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": "python",
"code": "import argilla as rg \n\nds = rg.Dataset.from_hub(\n \"argilla/multi-modal-vlm-visit-bench\"\n) ",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/argilla/argilla-v20-compatible-datasets-66a8e670f351acac61a0421c",
"resource": {
"type": "collection",
"id": "argilla/argilla-v20-compatible-datasets-66a8e670f351acac61a0421c",
"discussionNum": null
},
"url": "https://huggingface.co/collections/argilla/argilla-v20-compatible-datasets-66a8e670f351acac61a0421c",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ I created some shiny new Argilla datasets to go along with the 2.0 release!
```python
import argilla as rg
ds = rg.Dataset.from_hub(
"argilla/multi-modal-vlm-visit-bench"
)
```
https://huggingface.co/collections/argilla/argilla-v20-compatible-datasets-66a8e670f351acac61a0421c | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"Ameeeee",
"clem",
"burtenshaw",
"Norod78",
"gabrielmbmb",
"not-lain",
"Blane187",
"mynkchaudhry",
"louisbrulenaudet",
"Tonic",
"davidberenstein1957"
],
"count": 11
}
] | 2024-07-31T14:45:45.000Z | 2024-07-31T14:48:54.553Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
}
] | /posts/davidberenstein1957/397656500149493 | 2,195 | 2 |
584786234994856 | [
{
"type": "text",
"value": "๐ฅ Today, Writer dropped Palmyra-Med-70b and Palmyra-Fin-70b, two new domain-specific models that are setting a new standard for medical and financial model performance.",
"raw": "๐ฅ Today, Writer dropped Palmyra-Med-70b and Palmyra-Fin-70b, two new domain-specific models that are setting a new standard for medical and financial model performance.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "TL;DR",
"raw": "TL;DR",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Palmyra-Med-70b",
"raw": "Palmyra-Med-70b",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ข 8k and 32k versions available",
"raw": "๐ข 8k and 32k versions available",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ MMLU performance of ~86%, outperforming other top models",
"raw": "๐ MMLU performance of ~86%, outperforming other top models",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐จโโ๏ธ Great for diagnosing, planning treatments, medical research, insurance coding and billing",
"raw": "๐จโโ๏ธ Great for diagnosing, planning treatments, medical research, insurance coding and billing",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Open-model license for non-commercial use cases",
"raw": "๐ Open-model license for non-commercial use cases",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ค Available on Hugging Face: ",
"raw": "๐ค Available on Hugging Face: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Writer/Palmyra-Med-70B",
"resource": {
"type": "model",
"id": "Writer/Palmyra-Med-70B",
"discussionNum": null
},
"url": "https://huggingface.co/Writer/Palmyra-Med-70B",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐พ Live on NVIDIA NIM: ",
"raw": "๐พ Live on NVIDIA NIM: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://build.nvidia.com/writer/palmyra-med-70b",
"resource": null,
"url": null,
"href": "https://build.nvidia.com/writer/palmyra-med-70b",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Palmyra-Fin-70b",
"raw": "Palmyra-Fin-70b",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Passed the CFA Level III exam with a 73% score โ the first model to do so",
"raw": "๐ Passed the CFA Level III exam with a 73% score โ the first model to do so",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ธ Skilled at complex tasks like investment research, financial analysis, and sentiment analysis",
"raw": "๐ธ Skilled at complex tasks like investment research, financial analysis, and sentiment analysis",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Outperformed other top models on a long-fin-eval test of real-world use cases",
"raw": "๐ Outperformed other top models on a long-fin-eval test of real-world use cases",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Open-model license for non-commercial use cases",
"raw": "๐ Open-model license for non-commercial use cases",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ค Available on Hugging Face: ",
"raw": "๐ค Available on Hugging Face: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Writer/Palmyra-Fin-70B-32K",
"resource": {
"type": "model",
"id": "Writer/Palmyra-Fin-70B-32K",
"discussionNum": null
},
"url": "https://huggingface.co/Writer/Palmyra-Fin-70B-32K",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐พ Live on NVIDIA NIM: ",
"raw": "๐พ Live on NVIDIA NIM: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://build.nvidia.com/writer/palmyra-fin-70b-32k",
"resource": null,
"url": null,
"href": "https://build.nvidia.com/writer/palmyra-fin-70b-32k",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Try them out and let us know what you think!",
"raw": "Try them out and let us know what you think!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ฅ Today, Writer dropped Palmyra-Med-70b and Palmyra-Fin-70b, two new domain-specific models that are setting a new standard for medical and financial model performance.
TL;DR
Palmyra-Med-70b
๐ข 8k and 32k versions available
๐ MMLU performance of ~86%, outperforming other top models
๐จโโ๏ธ Great for diagnosing, planning treatments, medical research, insurance coding and billing
๐ Open-model license for non-commercial use cases
๐ค Available on Hugging Face: https://huggingface.co/Writer/Palmyra-Med-70B
๐พ Live on NVIDIA NIM: https://build.nvidia.com/writer/palmyra-med-70b
Palmyra-Fin-70b
๐ Passed the CFA Level III exam with a 73% score โ the first model to do so
๐ธ Skilled at complex tasks like investment research, financial analysis, and sentiment analysis
๐ Outperformed other top models on a long-fin-eval test of real-world use cases
๐ Open-model license for non-commercial use cases
๐ค Available on Hugging Face: https://huggingface.co/Writer/Palmyra-Fin-70B-32K
๐พ Live on NVIDIA NIM: https://build.nvidia.com/writer/palmyra-fin-70b-32k
Try them out and let us know what you think! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/666d1e4e2b9e45273912c14a/FffOTN2hceaGWWoGqnJZW.jpeg",
"fullname": "Sam Julien",
"name": "samjulien",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"wassemgtk",
"starlingfeather",
"Xhehab",
"tomaarsen",
"gihong99",
"Neithrik",
"clem",
"kiranr",
"louisbrulenaudet"
],
"count": 9
}
] | 2024-07-31T14:24:18.000Z | 2024-08-02T15:06:01.906Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem ๐ค",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1734,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/666d1e4e2b9e45273912c14a/FffOTN2hceaGWWoGqnJZW.jpeg",
"fullname": "Sam Julien",
"name": "samjulien",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
}
] | /posts/samjulien/584786234994856 | 1,892 | 2 |
199757608126776 | [
{
"type": "text",
"value": "Excited to announce the release of our high-quality Llama-3.1 8B 4-bit HQQ/calibrated quantized model! Achieving an impressive 99.3% relative performance to FP16, it also delivers the fastest inference speed for transformers. ",
"raw": "Excited to announce the release of our high-quality Llama-3.1 8B 4-bit HQQ/calibrated quantized model! Achieving an impressive 99.3% relative performance to FP16, it also delivers the fastest inference speed for transformers. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/mobiuslabsgmbh/Llama-3.1-8b-instruct_4bitgs64_hqq_calib",
"resource": {
"type": "model",
"id": "mobiuslabsgmbh/Llama-3.1-8b-instruct_4bitgs64_hqq_calib",
"discussionNum": null
},
"url": "https://huggingface.co/mobiuslabsgmbh/Llama-3.1-8b-instruct_4bitgs64_hqq_calib",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Excited to announce the release of our high-quality Llama-3.1 8B 4-bit HQQ/calibrated quantized model! Achieving an impressive 99.3% relative performance to FP16, it also delivers the fastest inference speed for transformers.
https://huggingface.co/mobiuslabsgmbh/Llama-3.1-8b-instruct_4bitgs64_hqq_calib | {
"avatarUrl": "/avatars/32cba53eb49ee49654751568a762ee0a.svg",
"fullname": "Shaij",
"name": "appoose",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 14,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"mobicham",
"victor",
"CountOfMonteCrypto",
"AtAndDev",
"nbroad",
"zsolx2",
"osanseviero",
"gabrielmbmb",
"Tonic"
],
"count": 9
}
] | 2024-07-31T13:18:19.000Z | 2024-07-31T15:50:27.655Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6569216f9c96f1a47bf45788/mCLqmAs4dOjKdxNQVAp1w.png",
"fullname": "Sica Rius",
"name": "SicariusSicariiStuff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 132,
"isFollowing": false
}
] | /posts/appoose/199757608126776 | 1,787 | 1 |
693098912127415 | [
{
"type": "text",
"value": "Hey everyone ๐ค!",
"raw": "Hey everyone ๐ค!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out this cool new space from Finegrain: ",
"raw": "Check out this cool new space from Finegrain: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/finegrain/finegrain-object-eraser",
"resource": {
"type": "space",
"id": "finegrain/finegrain-object-eraser",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/finegrain/finegrain-object-eraser",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Under the hoods, it's a pipeline of models (currently exposed via an API) that allows you to easily erase any object from your image just by naming it or selecting it! Not only will the object disappear, but so will its effects on the scene, like shadows and reflections. Built on top of Refiners, our micro-framework for simple foundation model adaptation (feel free to star it on GitHub if you like it: ",
"raw": "Under the hoods, it's a pipeline of models (currently exposed via an API) that allows you to easily erase any object from your image just by naming it or selecting it! Not only will the object disappear, but so will its effects on the scene, like shadows and reflections. Built on top of Refiners, our micro-framework for simple foundation model adaptation (feel free to star it on GitHub if you like it: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/finegrain-ai/refiners",
"resource": null,
"url": null,
"href": "https://github.com/finegrain-ai/refiners",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hey everyone ๐ค!
Check out this cool new space from Finegrain: https://huggingface.co/spaces/finegrain/finegrain-object-eraser
Under the hoods, it's a pipeline of models (currently exposed via an API) that allows you to easily erase any object from your image just by naming it or selecting it! Not only will the object disappear, but so will its effects on the scene, like shadows and reflections. Built on top of Refiners, our micro-framework for simple foundation model adaptation (feel free to star it on GitHub if you like it: https://github.com/finegrain-ai/refiners) | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669043420538-6364f1784f773b7e4cede70c.jpeg",
"fullname": "Laureฮทt Fainsin",
"name": "1aurent",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 79,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6364f1784f773b7e4cede70c/cfyUq9m-8aNkFOKzmFmzf.mp4"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"1aurent",
"deltheil",
"limiteinductive",
"Metin",
"efecelik",
"afkfatih",
"dnsbrl",
"erkhem-gantulga",
"alangunning",
"Hatman",
"DiamanteAmarelo"
],
"count": 11
},
{
"reaction": "๐",
"users": [
"catwell",
"limiteinductive",
"1aurent",
"DiamanteAmarelo"
],
"count": 4
},
{
"reaction": "๐",
"users": [
"prithivMLmods",
"DiamanteAmarelo"
],
"count": 2
},
{
"reaction": "๐ค",
"users": [
"Tonic",
"DiamanteAmarelo"
],
"count": 2
}
] | 2024-07-31T13:04:56.000Z | 2024-07-31T19:59:19.385Z | [
{
"avatarUrl": "/avatars/582c67dded3b9c8fcec87891bac4dbe7.svg",
"fullname": "Arca yi",
"name": "arcayi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669043420538-6364f1784f773b7e4cede70c.jpeg",
"fullname": "Laureฮทt Fainsin",
"name": "1aurent",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 79,
"isFollowing": false
}
] | /posts/1aurent/693098912127415 | 2,564 | 2 |
460911864437039 | [
{
"type": "text",
"value": "Journalists, this is a must-read for your career evolution. I just read ",
"raw": "Journalists, this is a must-read for your career evolution. I just read ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@ndiakopoulos",
"resource": null,
"url": null,
"href": null,
"user": "ndiakopoulos",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " on \"The Impact of Generative AI on Journalistic Labor\". Here are my 5 takeaways:",
"raw": " on \"The Impact of Generative AI on Journalistic Labor\". Here are my 5 takeaways:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ LLMs could make 83% of reporter tasks and 76% of editor tasks way more efficient. \"Itโs important to emphasize that these figures are fundamentally about augmentation rather than automation\".",
"raw": "๐ LLMs could make 83% of reporter tasks and 76% of editor tasks way more efficient. \"Itโs important to emphasize that these figures are fundamentally about augmentation rather than automation\".",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐งฉ Four emerging job clusters to consider: AI-doers, AI-users, AI-strategizers, and AI-reporters. Where do you fit?",
"raw": "๐งฉ Four emerging job clusters to consider: AI-doers, AI-users, AI-strategizers, and AI-reporters. Where do you fit?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ US newsrooms \"will need people (1) who have the skills to use the current generation of LLMs and (2) who can develop the bespoke software to unlock their full potential, particularly if building new in-house tools\". Get ahead of the curve.",
"raw": "๐ US newsrooms \"will need people (1) who have the skills to use the current generation of LLMs and (2) who can develop the bespoke software to unlock their full potential, particularly if building new in-house tools\". Get ahead of the curve.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Key takeaway: upskill in AI. It's not just about using tools, but understanding how to integrate them into your workflow.",
"raw": "๐ Key takeaway: upskill in AI. It's not just about using tools, but understanding how to integrate them into your workflow.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ News orgs \"would be wise to accelerate hiring and invest in upskilling their existing workforce\". Take advantage or seek out learning opportunities.",
"raw": "๐ News orgs \"would be wise to accelerate hiring and invest in upskilling their existing workforce\". Take advantage or seek out learning opportunities.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Journalists: What AI skills are you planning to develop? How might this reshape your role?",
"raw": "Journalists: What AI skills are you planning to develop? How might this reshape your role?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Read the full blog post here: ",
"raw": "๐ Read the full blog post here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://generative-ai-newsroom.com/the-impact-of-generative-ai-on-journalistic-labor-e87a6c333245",
"resource": null,
"url": null,
"href": "https://generative-ai-newsroom.com/the-impact-of-generative-ai-on-journalistic-labor-e87a6c333245",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " It's worth your time if you're thinking about the future of journalism & your career.",
"raw": " It's worth your time if you're thinking about the future of journalism & your career.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "#AIinJournalism #CareerEvolution #FutureofNews",
"raw": "#AIinJournalism #CareerEvolution #FutureofNews",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Journalists, this is a must-read for your career evolution. I just read @ndiakopoulos on "The Impact of Generative AI on Journalistic Labor". Here are my 5 takeaways:
๐ LLMs could make 83% of reporter tasks and 76% of editor tasks way more efficient. "Itโs important to emphasize that these figures are fundamentally about augmentation rather than automation".
๐งฉ Four emerging job clusters to consider: AI-doers, AI-users, AI-strategizers, and AI-reporters. Where do you fit?
๐ US newsrooms "will need people (1) who have the skills to use the current generation of LLMs and (2) who can develop the bespoke software to unlock their full potential, particularly if building new in-house tools". Get ahead of the curve.
๐ Key takeaway: upskill in AI. It's not just about using tools, but understanding how to integrate them into your workflow.
๐ News orgs "would be wise to accelerate hiring and invest in upskilling their existing workforce". Take advantage or seek out learning opportunities.
Journalists: What AI skills are you planning to develop? How might this reshape your role?
๐ Read the full blog post here: https://generative-ai-newsroom.com/the-impact-of-generative-ai-on-journalistic-labor-e87a6c333245 It's worth your time if you're thinking about the future of journalism & your career.
#AIinJournalism #CareerEvolution #FutureofNews | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/jteMIRzk954YKs6Mhcwcr.webp"
}
] | [
{
"avatarUrl": "/avatars/134290a59822bff8cada0c48111de464.svg",
"fullname": "Nick",
"name": "ndiakopoulos",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1
}
] | [
{
"reaction": "๐",
"users": [
"louisbrulenaudet"
],
"count": 1
}
] | 2024-07-31T12:45:57.000Z | 2024-07-31T12:45:57.613Z | [] | /posts/fdaudens/460911864437039 | 465 | 0 |
328474988002537 | [
{
"type": "text",
"value": "They should make a thing like google colab but you can have unlimited free access to a whole datacenter that would be cool. like if you agree",
"raw": "They should make a thing like google colab but you can have unlimited free access to a whole datacenter that would be cool. like if you agree",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | They should make a thing like google colab but you can have unlimited free access to a whole datacenter that would be cool. like if you agree | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6316fb937b0ee0136e5f1220/poHBoJ7QAF_s2CCaosdvQ.jpeg",
"fullname": "Firstname Lastname",
"name": "takeraparterer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
} | [] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"takeraparterer",
"hawaee",
"jnh-ordbogen",
"holooo",
"ashercn97",
"Blane187"
],
"count": 6
},
{
"reaction": "โ",
"users": [
"VOiceOwl",
"thethinkmachine",
"takeraparterer"
],
"count": 3
}
] | 2024-07-31T08:41:42.000Z | 2024-08-05T20:48:13.236Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63de560a15266dd945f209ca/PeZf3IF-x7Qh8OcnKH12R.png",
"fullname": "MrDragonFox",
"name": "MrDragonFox",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 11,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6316fb937b0ee0136e5f1220/poHBoJ7QAF_s2CCaosdvQ.jpeg",
"fullname": "Firstname Lastname",
"name": "takeraparterer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3bb1cd0d8c2c2169f0b88/eT2TS0IlQbZtz-F_zHLz9.jpeg",
"fullname": "Joseph Pollack",
"name": "Tonic",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 310,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
}
] | /posts/takeraparterer/328474988002537 | 2,052 | 5 |
238095044062491 | [
{
"type": "text",
"value": "๐ง๐ต๐ฒ ๐ต๐๐ด๐ฒ ๐ฐ๐ผ๐๐ ๐ผ๐ณ ๐ฟ๐ฒ๐๐ฒ๐ฎ๐ฟ๐ฐ๐ต ๐ผ๐ป ๐ณ๐ฟ๐ผ๐ป๐๐ถ๐ฒ๐ฟ ๐๐๐ ๐ ๐ธ",
"raw": "๐ง๐ต๐ฒ ๐ต๐๐ด๐ฒ ๐ฐ๐ผ๐๐ ๐ผ๐ณ ๐ฟ๐ฒ๐๐ฒ๐ฎ๐ฟ๐ฐ๐ต ๐ผ๐ป ๐ณ๐ฟ๐ผ๐ป๐๐ถ๐ฒ๐ฟ ๐๐๐ ๐ ๐ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Google DeepMind recently released a great paper that shows optimal hyperparameters to train across different regimes: Scaling Exponents Across Parameterizations and Optimizers, with data from 10,000 training runs.",
"raw": "Google DeepMind recently released a great paper that shows optimal hyperparameters to train across different regimes: Scaling Exponents Across Parameterizations and Optimizers, with data from 10,000 training runs.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "One engineer decided to quantify the price of such a large-scale experiment.",
"raw": "One engineer decided to quantify the price of such a large-scale experiment.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ฌ And the bill is hefty: ~13M USD ",
"raw": "๐ฌ And the bill is hefty: ~13M USD ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This exact number is to take with a grain of salt because many approximations were necessary to get the final result.",
"raw": "This exact number is to take with a grain of salt because many approximations were necessary to get the final result.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โ๏ธ But still this ballpark means that for this sole experiment, the price is way over what most startups or research labs could afford.",
"raw": "โ๏ธ But still this ballpark means that for this sole experiment, the price is way over what most startups or research labs could afford.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This means that open-sourcing research is more important than ever, to put everyone in the ecosystem on a roughly equal footing. Don't let OpenAI run first, they'll keep everything for themselves!",
"raw": "This means that open-sourcing research is more important than ever, to put everyone in the ecosystem on a roughly equal footing. Don't let OpenAI run first, they'll keep everything for themselves!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Read the full post that quantifies the paper's cost ๐ ",
"raw": "Read the full post that quantifies the paper's cost ๐ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://152334h.github.io/blog/scaling-exponents/",
"resource": null,
"url": null,
"href": "https://152334h.github.io/blog/scaling-exponents/",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ง๐ต๐ฒ ๐ต๐๐ด๐ฒ ๐ฐ๐ผ๐๐ ๐ผ๐ณ ๐ฟ๐ฒ๐๐ฒ๐ฎ๐ฟ๐ฐ๐ต ๐ผ๐ป ๐ณ๐ฟ๐ผ๐ป๐๐ถ๐ฒ๐ฟ ๐๐๐ ๐ ๐ธ
Google DeepMind recently released a great paper that shows optimal hyperparameters to train across different regimes: Scaling Exponents Across Parameterizations and Optimizers, with data from 10,000 training runs.
One engineer decided to quantify the price of such a large-scale experiment.
๐ฌ And the bill is hefty: ~13M USD
This exact number is to take with a grain of salt because many approximations were necessary to get the final result.
โ๏ธ But still this ballpark means that for this sole experiment, the price is way over what most startups or research labs could afford.
This means that open-sourcing research is more important than ever, to put everyone in the ecosystem on a roughly equal footing. Don't let OpenAI run first, they'll keep everything for themselves!
Read the full post that quantifies the paper's cost ๐ https://152334h.github.io/blog/scaling-exponents/ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63d10d4e8eaa4831005e92b5/ycHLDaDRYPEfxP50yTJV8.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"nbroad",
"MicPie",
"louisbrulenaudet",
"adamo1139",
"victor"
],
"count": 5
},
{
"reaction": "๐ฅ",
"users": [
"pduf"
],
"count": 1
}
] | 2024-07-30T15:08:55.000Z | 2024-07-31T06:31:41.589Z | [
{
"avatarUrl": "/avatars/7411dc98c012221f8e4c3641c8702640.svg",
"fullname": "juliawhites",
"name": "juliawhites",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/m-ric/238095044062491 | 2,251 | 1 |
675597604860526 | [
{
"type": "text",
"value": "neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8",
"raw": "neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Requant of the big llama, using 20% less memory ",
"raw": "Requant of the big llama, using 20% less memory ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8",
"resource": {
"type": "model",
"id": "neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8",
"discussionNum": null
},
"url": "https://huggingface.co/neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8
Requant of the big llama, using 20% less memory
https://huggingface.co/neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg",
"fullname": "Knut Jรคgersberg",
"name": "KnutJaegersberg",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 238,
"isFollowing": false
} | [] | [] | [] | 2024-07-30T15:03:52.000Z | 2024-07-30T15:03:52.836Z | [] | /posts/KnutJaegersberg/675597604860526 | 922 | 0 |
772170903237816 | [
{
"type": "text",
"value": "With larger and larger diffusion transformers coming up, it's becoming increasingly important to have some good quantization tools for them.",
"raw": "With larger and larger diffusion transformers coming up, it's becoming increasingly important to have some good quantization tools for them.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We present our findings from a series of experiments on quantizing different diffusion pipelines based on diffusion transformers. ",
"raw": "We present our findings from a series of experiments on quantizing different diffusion pipelines based on diffusion transformers. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We demonstrate excellent memory savings with a bit of sacrifice on inference latency which is expected to improve in the coming days. ",
"raw": "We demonstrate excellent memory savings with a bit of sacrifice on inference latency which is expected to improve in the coming days. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Diffusers ๐ค Quanto โค๏ธ",
"raw": "Diffusers ๐ค Quanto โค๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This was a juicy collaboration between ",
"raw": "This was a juicy collaboration between ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@dacorvo",
"resource": null,
"url": null,
"href": null,
"user": "dacorvo",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " and myself. ",
"raw": " and myself. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out the post to learn all about it",
"raw": "Check out the post to learn all about it",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/quanto-diffusers",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/quanto-diffusers",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | With larger and larger diffusion transformers coming up, it's becoming increasingly important to have some good quantization tools for them.
We present our findings from a series of experiments on quantizing different diffusion pipelines based on diffusion transformers.
We demonstrate excellent memory savings with a bit of sacrifice on inference latency which is expected to improve in the coming days.
Diffusers ๐ค Quanto โค๏ธ
This was a juicy collaboration between @dacorvo and myself.
Check out the post to learn all about it
https://huggingface.co/blog/quanto-diffusers | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649681653581-5f7fbd813e94f16a85448745.jpeg",
"fullname": "Sayak Paul",
"name": "sayakpaul",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 446,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f7fbd813e94f16a85448745/eAQbBBt41KngCcHYhT0iv.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647995564be04c76ce4547b3/KpP0yuQMsqb-z6N9h4Ykg.jpeg",
"fullname": "David Corvoysier",
"name": "dacorvo",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 37
}
] | [
{
"reaction": "๐",
"users": [
"John6666",
"victor",
"thliang01",
"TuringsSolutions",
"YaTharThShaRma999",
"not-lain",
"louisbrulenaudet",
"Hev832"
],
"count": 8
},
{
"reaction": "๐ฅ",
"users": [
"YaTharThShaRma999",
"Bruhn",
"not-lain",
"EtienneDosSantos",
"rbgo"
],
"count": 5
}
] | 2024-07-30T04:26:59.000Z | 2024-07-31T18:11:01.286Z | [
{
"avatarUrl": "/avatars/52a153d04d325469e1be69bce610ebe5.svg",
"fullname": "ecyht2",
"name": "ecyht2",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6569216f9c96f1a47bf45788/mCLqmAs4dOjKdxNQVAp1w.png",
"fullname": "Sica Rius",
"name": "SicariusSicariiStuff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 132,
"isFollowing": false
}
] | /posts/sayakpaul/772170903237816 | 3,779 | 3 |
522286906747249 | [
{
"type": "text",
"value": "Kling AI Video is FINALLY Public (All Countries), Free to Use and MIND BLOWING - Full Tutorial > ",
"raw": "Kling AI Video is FINALLY Public (All Countries), Free to Use and MIND BLOWING - Full Tutorial > ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/zcpqAxYV1_w",
"resource": null,
"url": null,
"href": "https://youtu.be/zcpqAxYV1_w",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "You probably seen those mind blowing AI made videos. And the day has arrived. The famous Kling AI is now worldwide available for free. In this tutorial video I will show you how to register for free with just email to Kling AI and use its mind blowing text to video animation, image to video animation and text to image, and image to image capabilities. This video will show you non-cherry pick results so you will know the actual quality and capability of the model unlike those extremely cherry pick example demos. Still, #KlingAI is the only #AI model that competes with OpenAI's #SORA and it is real to use.",
"raw": "You probably seen those mind blowing AI made videos. And the day has arrived. The famous Kling AI is now worldwide available for free. In this tutorial video I will show you how to register for free with just email to Kling AI and use its mind blowing text to video animation, image to video animation and text to image, and image to image capabilities. This video will show you non-cherry pick results so you will know the actual quality and capability of the model unlike those extremely cherry pick example demos. Still, #KlingAI is the only #AI model that competes with OpenAI's #SORA and it is real to use.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Kling AI Official Website โคต๏ธ",
"raw": "๐ Kling AI Official Website โคต๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โถ๏ธ ",
"raw": "โถ๏ธ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.klingai.com/",
"resource": null,
"url": null,
"href": "https://www.klingai.com/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ SECourses Discord Channel to Get Full Support โคต๏ธ",
"raw": "๐ SECourses Discord Channel to Get Full Support โคต๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โถ๏ธ ",
"raw": "โถ๏ธ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://discord.com/servers/software-engineering-courses-secourses-772774097734074388",
"resource": null,
"url": null,
"href": "https://discord.com/servers/software-engineering-courses-secourses-772774097734074388",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Our GitHub Repository โคต๏ธ",
"raw": "๐ Our GitHub Repository โคต๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โถ๏ธ ",
"raw": "โถ๏ธ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/FurkanGozukara/Stable-Diffusion",
"resource": null,
"url": null,
"href": "https://github.com/FurkanGozukara/Stable-Diffusion",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Our Reddit โคต๏ธ",
"raw": "๐ Our Reddit โคต๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โถ๏ธ ",
"raw": "โถ๏ธ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.reddit.com/r/SECourses/",
"resource": null,
"url": null,
"href": "https://www.reddit.com/r/SECourses/",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Kling AI Video is FINALLY Public (All Countries), Free to Use and MIND BLOWING - Full Tutorial > https://youtu.be/zcpqAxYV1_w
You probably seen those mind blowing AI made videos. And the day has arrived. The famous Kling AI is now worldwide available for free. In this tutorial video I will show you how to register for free with just email to Kling AI and use its mind blowing text to video animation, image to video animation and text to image, and image to image capabilities. This video will show you non-cherry pick results so you will know the actual quality and capability of the model unlike those extremely cherry pick example demos. Still, #KlingAI is the only #AI model that competes with OpenAI's #SORA and it is real to use.
๐ Kling AI Official Website โคต๏ธ
โถ๏ธ https://www.klingai.com/
๐ SECourses Discord Channel to Get Full Support โคต๏ธ
โถ๏ธ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
๐ Our GitHub Repository โคต๏ธ
โถ๏ธ https://github.com/FurkanGozukara/Stable-Diffusion
๐ Our Reddit โคต๏ธ
โถ๏ธ https://www.reddit.com/r/SECourses/ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gรถzรผkara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/iPhRhARt4OkHXk5di2889.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"MonsterMMORPG",
"ZeroWw",
"GPT007",
"amosgyamfi",
"onlinework",
"Ripp218",
"NeoTokyoUnicorn",
"John6666",
"petresil",
"rreed",
"cctuan"
],
"count": 11
},
{
"reaction": "โค๏ธ",
"users": [
"MonsterMMORPG",
"Ripp218",
"GPT007",
"efecelik"
],
"count": 4
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"GPT007",
"danielus"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"GPT007",
"louisbrulenaudet"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"GPT007",
"abezzarg"
],
"count": 3
},
{
"reaction": "๐ค",
"users": [
"MonsterMMORPG",
"GPT007"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"GPT007"
],
"count": 2
},
{
"reaction": "โ",
"users": [
"MonsterMMORPG",
"GPT007"
],
"count": 2
},
{
"reaction": "๐ค",
"users": [
"MonsterMMORPG",
"GPT007"
],
"count": 2
},
{
"reaction": "๐คฏ",
"users": [
"MonsterMMORPG",
"GPT007"
],
"count": 2
},
{
"reaction": "๐ง ",
"users": [
"MonsterMMORPG",
"GPT007"
],
"count": 2
}
] | 2024-07-30T00:53:00.000Z | 2024-10-29T09:09:01.719Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64a43122e3cf200cbf8a03b3/xilplM2M8Sjn3jLGyvwma.jpeg",
"fullname": "Byte",
"name": "CyberNative",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 19,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gรถzรผkara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/oAbpDk8u836RDLtQCczTg.png",
"fullname": " ",
"name": "NeoTokyoUnicorn",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/VU_XmyTsoSN9IKYCnGuxI.jpeg",
"fullname": "nguyen",
"name": "toanhihi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/MonsterMMORPG/522286906747249 | 5,153 | 5 |
551816601997074 | [
{
"type": "text",
"value": "Looking for a combination of speed and quality? Look no further! I've created a space that merges Open WebUI's excellent interface and features with the lightning-fast performance of the Groq API. Experience top-tier models in no time. Try it out for free here:",
"raw": "Looking for a combination of speed and quality? Look no further! I've created a space that merges Open WebUI's excellent interface and features with the lightning-fast performance of the Groq API. Experience top-tier models in no time. Try it out for free here:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/L-AI/groq-chat",
"resource": {
"type": "space",
"id": "L-AI/groq-chat",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/L-AI/groq-chat",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "\"A big thank you to Groq for providing their fantastic API at no cost!\"",
"raw": "\"A big thank you to Groq for providing their fantastic API at no cost!\"",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Looking for a combination of speed and quality? Look no further! I've created a space that merges Open WebUI's excellent interface and features with the lightning-fast performance of the Groq API. Experience top-tier models in no time. Try it out for free here:
https://huggingface.co/spaces/L-AI/groq-chat
"A big thank you to Groq for providing their fantastic API at no cost!" | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641a05a7f5e9c66105fec9b2/INWVc96PQFO0ZPciozg7w.jpeg",
"fullname": "Artur Lauche",
"name": "Artples",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"Artples",
"amosgyamfi",
"Ripp218",
"snakeying",
"ygmpkk",
"efecelik"
],
"count": 6
},
{
"reaction": "โค๏ธ",
"users": [
"ijohn07"
],
"count": 1
}
] | 2024-07-29T16:33:35.000Z | 2024-07-29T16:33:35.989Z | [] | /posts/Artples/551816601997074 | 2,403 | 0 |
968846619112965 | [
{
"type": "mention",
"value": null,
"raw": "@MaartenGr",
"resource": null,
"url": null,
"href": null,
"user": "MaartenGr",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " nice post ",
"raw": " nice post ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-quantization",
"resource": null,
"url": null,
"href": "https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-quantization",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " (\"A Visual Guide to Quantization\")",
"raw": " (\"A Visual Guide to Quantization\")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Would it make sense for you to publish it here too?",
"raw": "Would it make sense for you to publish it here too?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | @MaartenGr nice post https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-quantization ("A Visual Guide to Quantization")
Would it make sense for you to publish it here too? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a76b174e24361791fe822d/inEvYwrd4z0xvRQN3ikdE.jpeg",
"fullname": "Sylvain Lesage",
"name": "severo",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 128,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62ea1ac3cc08a09aa6d3ec95/_74xXYEYLLjNVJ9zQucfn.jpeg",
"fullname": "Maarten Grootendorst",
"name": "MaartenGr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 22
}
] | [] | 2024-07-29T15:09:26.000Z | 2024-07-30T07:38:52.530Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62ea1ac3cc08a09aa6d3ec95/_74xXYEYLLjNVJ9zQucfn.jpeg",
"fullname": "Maarten Grootendorst",
"name": "MaartenGr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 22,
"isFollowing": false
}
] | /posts/severo/968846619112965 | 1,035 | 1 |
403721444206150 | [
{
"type": "text",
"value": "I am looking for anotation tool/software for video segmentations, landmarking and feature-detection. ",
"raw": "I am looking for anotation tool/software for video segmentations, landmarking and feature-detection. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I had bumped into xlabelAnything, however I could not run it on my machine.",
"raw": "I had bumped into xlabelAnything, however I could not run it on my machine.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " Any recommendations ?",
"raw": " Any recommendations ?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I am looking for anotation tool/software for video segmentations, landmarking and feature-detection.
I had bumped into xlabelAnything, however I could not run it on my machine.
Any recommendations ?
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ca45c90609f1def7e2775a/mlxL5CKq0z9obRKKG_P-o.png",
"fullname": "Saugat Kafley",
"name": "Saugatkafley",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
} | [] | [] | [] | 2024-07-29T13:05:00.000Z | 2024-07-30T04:19:37.516Z | [
{
"avatarUrl": "/avatars/e7266a995f9f36f25aa46cb0ce517e43.svg",
"fullname": "whitesmaverick",
"name": "whitesmaverick",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/Saugatkafley/403721444206150 | 888 | 1 |
819761092372121 | [
{
"type": "text",
"value": "Hey ! I'm working on a 100% synthetic Dataset Hub here (you can search for any kind of datasets an the app invents them). The link is here: ",
"raw": "Hey ! I'm working on a 100% synthetic Dataset Hub here (you can search for any kind of datasets an the app invents them). The link is here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub",
"resource": {
"type": "space",
"id": "infinite-dataset-hub/infinite-dataset-hub",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Question for the Community:",
"raw": "Question for the Community:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Which models should I use to generate images and audio samples for those datasets ? ๐ค",
"raw": "Which models should I use to generate images and audio samples for those datasets ? ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hey ! I'm working on a 100% synthetic Dataset Hub here (you can search for any kind of datasets an the app invents them). The link is here: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub
Question for the Community:
Which models should I use to generate images and audio samples for those datasets ? ๐ค | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594214747713-5e9ecfc04957053f60648a3e.png",
"fullname": "Quentin Lhoest",
"name": "lhoestq",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 193,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"victor",
"ashercn97",
"cfahlgren1",
"John6666",
"Hev832",
"Nymbo",
"efecelik",
"gabrielmbmb",
"drlordbasil",
"KingNish",
"GPT007",
"osanseviero",
"Jank-com"
],
"count": 13
},
{
"reaction": "๐ฅ",
"users": [
"awacke1",
"Jank-com"
],
"count": 2
}
] | 2024-07-29T09:46:33.000Z | 2024-07-31T11:21:01.159Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594214747713-5e9ecfc04957053f60648a3e.png",
"fullname": "Quentin Lhoest",
"name": "lhoestq",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 193,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/AXnwP_G2WkJ0gkBepd_t7.png",
"fullname": "Marc Kovka",
"name": "GPT007",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
}
] | /posts/lhoestq/819761092372121 | 3,901 | 4 |
446225760997052 | [
{
"type": "text",
"value": "Just made this!",
"raw": "Just made this!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/takeraparterer/Charformer",
"resource": null,
"url": null,
"href": "https://github.com/takeraparterer/Charformer",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Just made this!
https://github.com/takeraparterer/Charformer | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6316fb937b0ee0136e5f1220/poHBoJ7QAF_s2CCaosdvQ.jpeg",
"fullname": "Firstname Lastname",
"name": "takeraparterer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"takeraparterer",
"victor",
"Hev832",
"John6666"
],
"count": 4
},
{
"reaction": "โค๏ธ",
"users": [
"takeraparterer",
"stefan-it",
"notlober"
],
"count": 3
},
{
"reaction": "๐ฅ",
"users": [
"takeraparterer",
"efecelik"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"takeraparterer"
],
"count": 1
},
{
"reaction": "๐",
"users": [
"TuringsSolutions"
],
"count": 1
}
] | 2024-07-29T06:57:58.000Z | 2024-08-05T22:15:53.957Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6316fb937b0ee0136e5f1220/poHBoJ7QAF_s2CCaosdvQ.jpeg",
"fullname": "Firstname Lastname",
"name": "takeraparterer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
}
] | /posts/takeraparterer/446225760997052 | 3,213 | 20 |
762807238717942 | [
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/nevmenandr/w2v-chess",
"resource": {
"type": "model",
"id": "nevmenandr/w2v-chess",
"discussionNum": null
},
"url": "https://huggingface.co/nevmenandr/w2v-chess",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```python\nimport gensim\nfrom sklearn.decomposition import PCA\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nmodel = gensim.models.Word2Vec.load('white_moves.model')\ndict_moves = model.wv.vocab\ndict_moves_appr = {}\nfor k in dict_moves:\n if not k.startswith('->'):\n continue\n dict_moves_appr[k] = dict_moves[k]\nX = model[model.wv.vocab]\npca = PCA(n_components=2)\nresult = pca.fit_transform(X)\nfig, ax = plt.subplots()\nax.plot(Y[:, 0], Y[:, 1], 'o')\nax.set_title('White moves')\nlab = list(dict_moves_appr)\nfor i, lb in enumerate(lab):\n plt.annotate(lb, xy=(Y[i, 0], Y[i, 1]))\nplt.show()\n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": "python",
"code": "import gensim\nfrom sklearn.decomposition import PCA\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nmodel = gensim.models.Word2Vec.load('white_moves.model')\ndict_moves = model.wv.vocab\ndict_moves_appr = {}\nfor k in dict_moves:\n if not k.startswith('->'):\n continue\n dict_moves_appr[k] = dict_moves[k]\nX = model[model.wv.vocab]\npca = PCA(n_components=2)\nresult = pca.fit_transform(X)\nfig, ax = plt.subplots()\nax.plot(Y[:, 0], Y[:, 1], 'o')\nax.set_title('White moves')\nlab = list(dict_moves_appr)\nfor i, lb in enumerate(lab):\n plt.annotate(lb, xy=(Y[i, 0], Y[i, 1]))\nplt.show()",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "biblically accurate angel",
"raw": "biblically accurate angel",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | https://huggingface.co/nevmenandr/w2v-chess
```python
import gensim
from sklearn.decomposition import PCA
import matplotlib
import matplotlib.pyplot as plt
model = gensim.models.Word2Vec.load('white_moves.model')
dict_moves = model.wv.vocab
dict_moves_appr = {}
for k in dict_moves:
if not k.startswith('->'):
continue
dict_moves_appr[k] = dict_moves[k]
X = model[model.wv.vocab]
pca = PCA(n_components=2)
result = pca.fit_transform(X)
fig, ax = plt.subplots()
ax.plot(Y[:, 0], Y[:, 1], 'o')
ax.set_title('White moves')
lab = list(dict_moves_appr)
for i, lb in enumerate(lab):
plt.annotate(lb, xy=(Y[i, 0], Y[i, 1]))
plt.show()
```
biblically accurate angel | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/664b25abfebdc298f724bdf0/pPb3xhDAOV_aLS642kPIW.png",
"fullname": "Boris Orekhov",
"name": "nevmenandr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/664b25abfebdc298f724bdf0/uoDlrCGW46TlqDsTSx6bd.png"
}
] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"Daemontatox",
"victor",
"efecelik",
"Hev832"
],
"count": 4
},
{
"reaction": "๐",
"users": [
"ZeroWw"
],
"count": 1
}
] | 2024-07-28T22:41:34.000Z | 2024-07-28T22:41:34.324Z | [] | /posts/nevmenandr/762807238717942 | 2,607 | 0 |
987006720529829 | [
{
"type": "text",
"value": "Just published Image Captioning Editor Gradio APP - Edit Your Captions Super Easy Including Batch Editing - For Windows, RunPod and Massed Compute. Developed by me. Have lots of amazing features that all you need. Let me know if missing any features. ",
"raw": "Just published Image Captioning Editor Gradio APP - Edit Your Captions Super Easy Including Batch Editing - For Windows, RunPod and Massed Compute. Developed by me. Have lots of amazing features that all you need. Let me know if missing any features. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Gradio is amazing to develop such amazing apps in short time. Used Claude 3.5 to develop it :)",
"raw": "Gradio is amazing to develop such amazing apps in short time. Used Claude 3.5 to develop it :)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Scripts are available here : ",
"raw": "Scripts are available here : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.patreon.com/posts/108992085",
"resource": null,
"url": null,
"href": "https://www.patreon.com/posts/108992085",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Just published Image Captioning Editor Gradio APP - Edit Your Captions Super Easy Including Batch Editing - For Windows, RunPod and Massed Compute. Developed by me. Have lots of amazing features that all you need. Let me know if missing any features.
Gradio is amazing to develop such amazing apps in short time. Used Claude 3.5 to develop it :)
Scripts are available here : https://www.patreon.com/posts/108992085 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gรถzรผkara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/yONXAKKAn9qxV-Z3HMOlO.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/ADmHeb4drcICWk8TuvVql.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/FVxS1novhdMVEiCD3Lyeo.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/nRVe1fs6sKjTYUr-g0NRK.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"victor",
"MonsterMMORPG",
"leeloolee",
"efecelik"
],
"count": 4
},
{
"reaction": "๐ฅ",
"users": [
"MonsterMMORPG",
"leeloolee",
"jijifujiji123"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "โค๏ธ",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "๐ค",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "โ",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "๐ง ",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "๐ค",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
},
{
"reaction": "๐คฏ",
"users": [
"MonsterMMORPG",
"leeloolee"
],
"count": 2
}
] | 2024-07-28T22:00:30.000Z | 2024-07-28T22:00:30.266Z | [] | /posts/MonsterMMORPG/987006720529829 | 1,817 | 0 |
161498281853215 | [
{
"type": "text",
"value": "lamagenius",
"raw": "lamagenius",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Smart free open source AI powered search engine",
"raw": "Smart free open source AI powered search engine",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://hf.co/chat/assistant/66a5fc9f02b1826ba4cabd72",
"resource": null,
"url": null,
"href": "https://hf.co/chat/assistant/66a5fc9f02b1826ba4cabd72",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | lamagenius
Smart free open source AI powered search engine
https://hf.co/chat/assistant/66a5fc9f02b1826ba4cabd72 | {
"avatarUrl": "/avatars/d773a7dd9b706759131fc482ab71ced7.svg",
"fullname": "[email protected]",
"name": "Taf2023",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 8,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64841af2295256340e4b9f88/cIVa7foXfVw0Wq25REzmQ.png"
}
] | [] | [] | 2024-07-28T08:24:06.000Z | 2024-07-28T18:18:42.236Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/BpUrt2nGdAwq8lnLJgMlr.jpeg",
"fullname": "Edo Erpani",
"name": "edoerpino7",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
}
] | /posts/Taf2023/161498281853215 | 904 | 2 |
681081193566580 | [
{
"type": "text",
"value": "Hi HF Community!๐ค",
"raw": "Hi HF Community!๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In the past days, OpenAI announced their search engine, SearchGPT: today, I'm glad to introduce you SearchPhi, an AI-powered and open-source web search tool that aims to reproduce similar features to SearchGPT, built upon ",
"raw": "In the past days, OpenAI announced their search engine, SearchGPT: today, I'm glad to introduce you SearchPhi, an AI-powered and open-source web search tool that aims to reproduce similar features to SearchGPT, built upon ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct",
"resource": {
"type": "model",
"id": "microsoft/Phi-3-mini-4k-instruct",
"discussionNum": null
},
"url": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", llama.cpp๐ฆ and Streamlit. ",
"raw": ", llama.cpp๐ฆ and Streamlit. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Although not as capable as SearchGPT, SearchPhi v0.0-beta.0 is a first step toward a fully functional and multimodal search engine :)",
"raw": "Although not as capable as SearchGPT, SearchPhi v0.0-beta.0 is a first step toward a fully functional and multimodal search engine :)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If you want to know more, head over to the GitHub repository (",
"raw": "If you want to know more, head over to the GitHub repository (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/AstraBert/SearchPhi",
"resource": null,
"url": null,
"href": "https://github.com/AstraBert/SearchPhi",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") and, to test it out, use this HF space: ",
"raw": ") and, to test it out, use this HF space: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/SearchPhi",
"resource": {
"type": "space",
"id": "as-cle-bert/SearchPhi",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/SearchPhi",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Have fun!๐ฑ",
"raw": "Have fun!๐ฑ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi HF Community!๐ค
In the past days, OpenAI announced their search engine, SearchGPT: today, I'm glad to introduce you SearchPhi, an AI-powered and open-source web search tool that aims to reproduce similar features to SearchGPT, built upon https://huggingface.co/microsoft/Phi-3-mini-4k-instruct, llama.cpp๐ฆ and Streamlit.
Although not as capable as SearchGPT, SearchPhi v0.0-beta.0 is a first step toward a fully functional and multimodal search engine :)
If you want to know more, head over to the GitHub repository (https://github.com/AstraBert/SearchPhi) and, to test it out, use this HF space: https://huggingface.co/spaces/as-cle-bert/SearchPhi
Have fun!๐ฑ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65e330e7edc2f7306e252448/Bkz9Rkw27Ij_t86z7yMxc.png"
}
] | [] | [
{
"reaction": "๐คฏ",
"users": [
"yasmineee",
"TuringsSolutions",
"John6666",
"SizorCloud",
"louisbrulenaudet",
"316usman",
"victor",
"Ramikan-BR",
"leeloolee",
"Hev832",
"kramp",
"Ripp218",
"osanseviero"
],
"count": 13
},
{
"reaction": "๐ฅ",
"users": [
"adorkin",
"Nischalj10",
"Ramikan-BR",
"ucsahin",
"ugurcanvurgun",
"Goodiro",
"Ripp218"
],
"count": 7
},
{
"reaction": "๐",
"users": [
"RakshitAralimatti",
"316usman",
"NHLOCAL",
"Ramikan-BR",
"Ripp218"
],
"count": 5
},
{
"reaction": "๐",
"users": [
"Ramikan-BR",
"mlabonne",
"Goodiro",
"Ripp218"
],
"count": 4
},
{
"reaction": "โค๏ธ",
"users": [
"Ramikan-BR",
"mlabonne",
"efecelik",
"Ripp218"
],
"count": 4
},
{
"reaction": "๐ง ",
"users": [
"Ramikan-BR",
"Ripp218"
],
"count": 2
}
] | 2024-07-28T06:54:26.000Z | 2024-07-28T06:54:26.641Z | [] | /posts/as-cle-bert/681081193566580 | 5,043 | 0 |
203981221653529 | [
{
"type": "text",
"value": "Datasets are down, I offer a solution",
"raw": "Datasets are down, I offer a solution",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```\ngit lfs install\n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "git lfs install",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```\ngit clone https://huggingface.co/datasets/{dataset/id}\n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "git clone https://huggingface.co/datasets/{dataset/id}",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"id\")\n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": "python",
"code": "from datasets import load_dataset\n\ndataset = load_dataset(\"id\")",
"label": null
}
] | Datasets are down, I offer a solution
```
git lfs install
```
```
git clone https://huggingface.co/datasets/{dataset/id}
```
```python
from datasets import load_dataset
dataset = load_dataset("id")
``` | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"Tonic",
"John6666",
"LeroyDyer",
"Hev832",
"GPT007",
"Nymbo",
"TuringsSolutions",
"louisbrulenaudet",
"sugatoray",
"not-lain",
"sunnyg",
"Blane187"
],
"count": 12
},
{
"reaction": "๐",
"users": [
"ZennyKenny"
],
"count": 1
}
] | 2024-07-27T17:46:14.000Z | 2024-07-27T17:46:14.924Z | [] | /posts/nroggendorff/203981221653529 | 4,080 | 0 |
868685803350724 | [
{
"type": "text",
"value": "โ๏ธ Find reusable synthetic data pipeline code and corresponding datasets on the ",
"raw": "โ๏ธ Find reusable synthetic data pipeline code and corresponding datasets on the ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@huggingface",
"resource": null,
"url": null,
"href": null,
"user": "huggingface",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " Hub.",
"raw": " Hub.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Find your pipline and use ",
"raw": "Find your pipline and use ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "inline_code",
"value": null,
"raw": "`$ distilabel pipeline run --config \"hugging_face_dataset_url/pipeline.yaml\"`",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "$ distilabel pipeline run --config \"hugging_face_dataset_url/pipeline.yaml\"",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Some components I used",
"raw": "Some components I used",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Embedded dataset viewer ",
"raw": "- Embedded dataset viewer ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/docs/hub/main/en/datasets-viewer-embed",
"resource": null,
"url": null,
"href": "https://huggingface.co/docs/hub/main/en/datasets-viewer-embed",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Hugging Face fsspec ",
"raw": "- Hugging Face fsspec ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/docs/huggingface_hub/main/en/guides/hf_file_system",
"resource": null,
"url": null,
"href": "https://huggingface.co/docs/huggingface_hub/main/en/guides/hf_file_system",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- distilabel ",
"raw": "- distilabel ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://distilabel.argilla.io/latest/",
"resource": null,
"url": null,
"href": "https://distilabel.argilla.io/latest/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Gradio leaderboard by Freddy Boulton ",
"raw": "- Gradio leaderboard by Freddy Boulton ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/freddyaboulton/gradio_leaderboard",
"resource": {
"type": "space",
"id": "freddyaboulton/gradio_leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/freddyaboulton/gradio_leaderboard",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Gradio modal by Ali Abid",
"raw": "- Gradio modal by Ali Abid",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Space: ",
"raw": "Space: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/davidberenstein1957/distilabel-synthetic-data-pipeline-explorer",
"resource": {
"type": "space",
"id": "davidberenstein1957/distilabel-synthetic-data-pipeline-explorer",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/davidberenstein1957/distilabel-synthetic-data-pipeline-explorer",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | โ๏ธ Find reusable synthetic data pipeline code and corresponding datasets on the @huggingface Hub.
Find your pipline and use `$ distilabel pipeline run --config "hugging_face_dataset_url/pipeline.yaml"`
Some components I used
- Embedded dataset viewer https://huggingface.co/docs/hub/main/en/datasets-viewer-embed
- Hugging Face fsspec https://huggingface.co/docs/huggingface_hub/main/en/guides/hf_file_system
- distilabel https://distilabel.argilla.io/latest/
- Gradio leaderboard by Freddy Boulton https://huggingface.co/spaces/freddyaboulton/gradio_leaderboard
- Gradio modal by Ali Abid
Space: https://huggingface.co/spaces/davidberenstein1957/distilabel-synthetic-data-pipeline-explorer
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/634ff41ff32062e9eb7b06a3/I7XNzXnrA7isGPN6o5AN2.png"
}
] | [] | [
{
"reaction": "๐ค",
"users": [
"guerrat",
"John6666",
"mlabonne",
"sugatoray",
"MaziyarPanahi",
"louisbrulenaudet",
"dvilasuero",
"davidberenstein1957"
],
"count": 8
}
] | 2024-07-27T08:31:55.000Z | 2024-07-29T09:40:40.794Z | [] | /posts/davidberenstein1957/868685803350724 | 2,397 | 1 |
726196603155745 | [
{
"type": "text",
"value": "Hello, HuggingFace๐ค community ๐,",
"raw": "Hello, HuggingFace๐ค community ๐,",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "All the amazing people quantising LLMs to AWQ and GPTQ ๐ง๐ค",
"raw": "All the amazing people quantising LLMs to AWQ and GPTQ ๐ง๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Can you please mention the perplexity you achieved ๐ OR any other metric to measure the quantisation qualitatively? ๐",
"raw": "Can you please mention the perplexity you achieved ๐ OR any other metric to measure the quantisation qualitatively? ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The GGUF community follows this really well! ๐",
"raw": "The GGUF community follows this really well! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "And if it is not too much to ask, the script used for quantisation would be amazing! ๐",
"raw": "And if it is not too much to ask, the script used for quantisation would be amazing! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Thanks for the quants for the GPU poor! ๐ป",
"raw": "Thanks for the quants for the GPU poor! ๐ป",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hello, HuggingFace๐ค community ๐,
All the amazing people quantising LLMs to AWQ and GPTQ ๐ง๐ค
Can you please mention the perplexity you achieved ๐ OR any other metric to measure the quantisation qualitatively? ๐
The GGUF community follows this really well! ๐
And if it is not too much to ask, the script used for quantisation would be amazing! ๐
Thanks for the quants for the GPU poor! ๐ป | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"MichaelFried",
"John6666",
"Duttones",
"Nelathan"
],
"count": 4
}
] | 2024-07-27T06:44:41.000Z | 2024-07-29T00:34:02.001Z | [
{
"avatarUrl": "/avatars/ab4dd498bbc0d5931f733b5a364fa765.svg",
"fullname": "Vitor Lima",
"name": "Duttones",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/singhsidhukuldeep/726196603155745 | 2,346 | 1 |
851286980273412 | [
{
"type": "text",
"value": "Exciting news!",
"raw": "Exciting news!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "After a long wait, Ikari and me finally made a new release of our last model on NeverSleep repo: Lumimaid-v0.2",
"raw": "After a long wait, Ikari and me finally made a new release of our last model on NeverSleep repo: Lumimaid-v0.2",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This model can be used in different size, from the small Llama-3.1-8B to the gigantic Mistral-Large-123B, finetuned by us.",
"raw": "This model can be used in different size, from the small Llama-3.1-8B to the gigantic Mistral-Large-123B, finetuned by us.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Try them now!",
"raw": "Try them now!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B",
"resource": {
"type": "model",
"id": "NeverSleep/Lumimaid-v0.2-8B",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B",
"resource": {
"type": "model",
"id": "NeverSleep/Lumimaid-v0.2-12B",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B",
"resource": {
"type": "model",
"id": "NeverSleep/Lumimaid-v0.2-70B",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-123B",
"resource": {
"type": "model",
"id": "NeverSleep/Lumimaid-v0.2-123B",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/Lumimaid-v0.2-123B",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "All the datasets we used will be added and credit will be given!",
"raw": "All the datasets we used will be added and credit will be given!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For the quant, we wait for fix to be applied (",
"raw": "For the quant, we wait for fix to be applied (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/ggerganov/llama.cpp/pull/8676",
"resource": null,
"url": null,
"href": "https://github.com/ggerganov/llama.cpp/pull/8676",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Hope you will enjoy them!",
"raw": "Hope you will enjoy them!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Exciting news!
After a long wait, Ikari and me finally made a new release of our last model on NeverSleep repo: Lumimaid-v0.2
This model can be used in different size, from the small Llama-3.1-8B to the gigantic Mistral-Large-123B, finetuned by us.
Try them now!
- https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B
- https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B
- https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B
- https://huggingface.co/NeverSleep/Lumimaid-v0.2-123B
All the datasets we used will be added and credit will be given!
For the quant, we wait for fix to be applied (https://github.com/ggerganov/llama.cpp/pull/8676)
Hope you will enjoy them! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3283,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/EB0RKSi-ahvgBgOLL3xjH.png"
}
] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"antiven0m",
"Chief-Inspector",
"dillfrescott",
"flflow",
"p1xelsr",
"IkariDev",
"Ainonake",
"den0620",
"Nelathan",
"John6666",
"s0lu",
"UniversalLove333",
"isr431",
"MarinaraSpaghetti",
"IAmTheCollector",
"Lustrous",
"rs75",
"EloyOn",
"xxx777xxxASD",
"kccccccccccccccccccc",
"himmeow",
"Hev832",
"Lewdiculous",
"AtAndDev",
"louisbrulenaudet",
"victor",
"DazzlingXeno",
"fearlessdots",
"Tonight223",
"Jackdiy",
"Ashlinggg",
"Morenosolo"
],
"count": 32
},
{
"reaction": "๐ค",
"users": [
"IkariDev",
"p1xelsr",
"UniversalLove333",
"AtAndDev",
"MarinaraSpaghetti"
],
"count": 5
},
{
"reaction": "๐ฅ",
"users": [
"unbreaker88",
"AtAndDev",
"MarinaraSpaghetti",
"Morenosolo"
],
"count": 4
},
{
"reaction": "๐ค",
"users": [
"Szarka",
"ar08"
],
"count": 2
}
] | 2024-07-26T18:10:57.000Z | 2024-09-10T09:40:24.374Z | [
{
"avatarUrl": "/avatars/80eb489f00cf499ab4d87ff349102222.svg",
"fullname": "No Name",
"name": "Ainonake",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3283,
"isFollowing": false
},
{
"avatarUrl": "/avatars/2342ea65fd7291a0e26b0642c67ca476.svg",
"fullname": "Jandk",
"name": "Jackdiy",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/Undi95/851286980273412 | 11,601 | 4 |
328466287104951 | [
{
"type": "text",
"value": "I can't resist an opportunity to update an old baseline. Read a new article on my latest look at improving MobileNet-V1 and EfficientNet-B0 baselines.",
"raw": "I can't resist an opportunity to update an old baseline. Read a new article on my latest look at improving MobileNet-V1 and EfficientNet-B0 baselines.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/rwightman/mobilenet-baselines",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/rwightman/mobilenet-baselines",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/timm/mobilenetv1_100.ra4_e3600_r224_in1k",
"resource": {
"type": "model",
"id": "timm/mobilenetv1_100.ra4_e3600_r224_in1k",
"discussionNum": null
},
"url": "https://huggingface.co/timm/mobilenetv1_100.ra4_e3600_r224_in1k",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/timm/efficientnet_b0.ra4_e3600_r224_in1k",
"resource": {
"type": "model",
"id": "timm/efficientnet_b0.ra4_e3600_r224_in1k",
"discussionNum": null
},
"url": "https://huggingface.co/timm/efficientnet_b0.ra4_e3600_r224_in1k",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I can't resist an opportunity to update an old baseline. Read a new article on my latest look at improving MobileNet-V1 and EfficientNet-B0 baselines.
https://huggingface.co/blog/rwightman/mobilenet-baselines
https://huggingface.co/timm/mobilenetv1_100.ra4_e3600_r224_in1k
https://huggingface.co/timm/efficientnet_b0.ra4_e3600_r224_in1k | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1667002643224-604a5184dca2c7ac7508b849.jpeg",
"fullname": "Ross Wightman",
"name": "rwightman",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 214,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ง ",
"users": [
"efecelik",
"mrdbourke",
"1aurent"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"kylewascher"
],
"count": 1
}
] | 2024-07-26T16:33:36.000Z | 2024-07-26T16:50:49.372Z | [] | /posts/rwightman/328466287104951 | 1,986 | 0 |
119699878001376 | [
{
"type": "text",
"value": "At Hugging Face we have an open-source Cookbook with many applied AI recipes ๐",
"raw": "At Hugging Face we have an open-source Cookbook with many applied AI recipes ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Here are some of the latest recipes contributed โฅฅ",
"raw": "Here are some of the latest recipes contributed โฅฅ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- \"Information Extraction with Haystack and NuExtract\": Use Haystack and transformers to build structured data extraction pipelines using LLMs by ",
"raw": "- \"Information Extraction with Haystack and NuExtract\": Use Haystack and transformers to build structured data extraction pipelines using LLMs by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@anakin87",
"resource": null,
"url": null,
"href": null,
"user": "anakin87",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/learn/cookbook/en/information_extraction_haystack_nuextract",
"resource": null,
"url": null,
"href": "https://huggingface.co/learn/cookbook/en/information_extraction_haystack_nuextract",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- \"Build RAG with Hugging Face and Milvus\": Learn how to use Milvus with sentence transformers to build RAG pipelines ",
"raw": "- \"Build RAG with Hugging Face and Milvus\": Learn how to use Milvus with sentence transformers to build RAG pipelines ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/learn/cookbook/rag_with_hf_and_milvus",
"resource": null,
"url": null,
"href": "https://huggingface.co/learn/cookbook/rag_with_hf_and_milvus",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- \"Code Search with Vector Embeddings and Qdrant\": Search a codebase by building a retrieval pipeline using Qdrant and sentence transformers ",
"raw": "- \"Code Search with Vector Embeddings and Qdrant\": Search a codebase by building a retrieval pipeline using Qdrant and sentence transformers ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/learn/cookbook/code_search",
"resource": null,
"url": null,
"href": "https://huggingface.co/learn/cookbook/code_search",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Data analyst agent: get your dataโs insights in the blink of an eye โจ: great recipe by our own ",
"raw": "- Data analyst agent: get your dataโs insights in the blink of an eye โจ: great recipe by our own ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@m-ric",
"resource": null,
"url": null,
"href": null,
"user": "m-ric",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " showing how to build an agent that can do data analysis! ๐ฑ ",
"raw": " showing how to build an agent that can do data analysis! ๐ฑ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/learn/cookbook/agent_data_analyst",
"resource": null,
"url": null,
"href": "https://huggingface.co/learn/cookbook/agent_data_analyst",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | At Hugging Face we have an open-source Cookbook with many applied AI recipes ๐
Here are some of the latest recipes contributed โฅฅ
- "Information Extraction with Haystack and NuExtract": Use Haystack and transformers to build structured data extraction pipelines using LLMs by @anakin87 https://huggingface.co/learn/cookbook/en/information_extraction_haystack_nuextract
- "Build RAG with Hugging Face and Milvus": Learn how to use Milvus with sentence transformers to build RAG pipelines https://huggingface.co/learn/cookbook/rag_with_hf_and_milvus
- "Code Search with Vector Embeddings and Qdrant": Search a codebase by building a retrieval pipeline using Qdrant and sentence transformers https://huggingface.co/learn/cookbook/code_search
- Data analyst agent: get your dataโs insights in the blink of an eye โจ: great recipe by our own @m-ric showing how to build an agent that can do data analysis! ๐ฑ https://huggingface.co/learn/cookbook/agent_data_analyst
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/Y6HLRk3hYoLsf0tLmUSPC.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626505d493e0b04d75710566/9rfJc9ORXU9J5a42Ev3v6.png",
"fullname": "Stefano Fiorucci",
"name": "anakin87",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 66
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476
}
] | [
{
"reaction": "โค๏ธ",
"users": [
"anakin87",
"AtAndDev",
"Ismail-707",
"not-lain",
"prithivMLmods",
"ijohn07",
"ziroo",
"louisbrulenaudet"
],
"count": 8
},
{
"reaction": "๐",
"users": [
"anakin87",
"lazarustda",
"AtAndDev",
"not-lain",
"efecelik"
],
"count": 5
},
{
"reaction": "๐",
"users": [
"anakin87",
"AtAndDev",
"not-lain"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"belyakoff",
"AtAndDev",
"not-lain"
],
"count": 3
},
{
"reaction": "โ",
"users": [
"titan087"
],
"count": 1
}
] | 2024-07-26T15:42:03.000Z | 2024-07-26T15:42:03.653Z | [] | /posts/merve/119699878001376 | 3,655 | 0 |
454864775422313 | [
{
"type": "text",
"value": "Barefoot developer experiment: AI-powered app creation with my poor coding skills ๐ง ๐ป",
"raw": "Barefoot developer experiment: AI-powered app creation with my poor coding skills ๐ง ๐ป",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I recently discussed the \"barefoot developer\" concept - using AI to build apps for specific needs without coding expertise. Decided to put it to the test. ๐ฌ",
"raw": "I recently discussed the \"barefoot developer\" concept - using AI to build apps for specific needs without coding expertise. Decided to put it to the test. ๐ฌ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐
โฒ๏ธ The challenge: Create a menu bar Pomodoro app for my computer to boost my focus. Previous attempt? Messy. ",
"raw": "๐
โฒ๏ธ The challenge: Create a menu bar Pomodoro app for my computer to boost my focus. Previous attempt? Messy. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐คฏ The twist: I've never coded in Swift. ",
"raw": "๐คฏ The twist: I've never coded in Swift. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โก๏ธ The result: 30 minutes. No joke. Elegant, functional, and shareable. ",
"raw": "โก๏ธ The result: 30 minutes. No joke. Elegant, functional, and shareable. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Want to try it yourself? Grab the open source code and app here: ",
"raw": "๐ Want to try it yourself? Grab the open source code and app here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Code: ",
"raw": "- Code: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/fdaudens/pomodoro2",
"resource": null,
"url": null,
"href": "https://github.com/fdaudens/pomodoro2",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- App: ",
"raw": "- App: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/fdaudens/pomodoro2/releases/tag/v1.0.0",
"resource": null,
"url": null,
"href": "https://github.com/fdaudens/pomodoro2/releases/tag/v1.0.0",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Key takeaways:",
"raw": "Key takeaways:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ AI-assisted development is evolving rapidly ",
"raw": "๐ AI-assisted development is evolving rapidly ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐งฉ Domain expertise + AI tools can yield impressive results ",
"raw": "๐งฉ Domain expertise + AI tools can yield impressive results ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ This approach democratizes app creation ",
"raw": "๐ This approach democratizes app creation ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ค What's your take on AI-powered development? Have you experimented with it? ",
"raw": "๐ค What's your take on AI-powered development? Have you experimented with it? ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "#AIinDevelopment #BarefootDeveloper #OpenSource",
"raw": "#AIinDevelopment #BarefootDeveloper #OpenSource",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Barefoot developer experiment: AI-powered app creation with my poor coding skills ๐ง ๐ป
I recently discussed the "barefoot developer" concept - using AI to build apps for specific needs without coding expertise. Decided to put it to the test. ๐ฌ
๐
โฒ๏ธ The challenge: Create a menu bar Pomodoro app for my computer to boost my focus. Previous attempt? Messy.
๐คฏ The twist: I've never coded in Swift.
โก๏ธ The result: 30 minutes. No joke. Elegant, functional, and shareable.
๐ Want to try it yourself? Grab the open source code and app here:
- Code: https://github.com/fdaudens/pomodoro2
- App: https://github.com/fdaudens/pomodoro2/releases/tag/v1.0.0
Key takeaways:
๐ AI-assisted development is evolving rapidly
๐งฉ Domain expertise + AI tools can yield impressive results
๐ This approach democratizes app creation
๐ค What's your take on AI-powered development? Have you experimented with it?
#AIinDevelopment #BarefootDeveloper #OpenSource | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [] | [] | [] | 2024-07-26T14:46:10.000Z | 2024-07-26T14:46:10.510Z | [] | /posts/fdaudens/454864775422313 | 610 | 0 |
161395148988600 | [
{
"type": "text",
"value": "Hi All, ",
"raw": "Hi All, ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In my latest blog post, I created a comprehensive guide on LLM Benchmarking. ",
"raw": "In my latest blog post, I created a comprehensive guide on LLM Benchmarking. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โ 20+ key benchmarks, from MMLU to TruthfulQA",
"raw": "โ 20+ key benchmarks, from MMLU to TruthfulQA",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โ How each benchmark assesses different LLM capabilities",
"raw": "โ How each benchmark assesses different LLM capabilities",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โ Why benchmarking matters for real-world AI applications",
"raw": "โ Why benchmarking matters for real-world AI applications",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โ Future trends in AI evaluation",
"raw": "โ Future trends in AI evaluation",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Read the blog here: ",
"raw": "Read the blog here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://wp.me/p7Qix-wO",
"resource": null,
"url": null,
"href": "https://wp.me/p7Qix-wO",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Please let me know your thoughts, suggestions, and comments. ",
"raw": "Please let me know your thoughts, suggestions, and comments. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi All,
In my latest blog post, I created a comprehensive guide on LLM Benchmarking.
โ 20+ key benchmarks, from MMLU to TruthfulQA
โ How each benchmark assesses different LLM capabilities
โ Why benchmarking matters for real-world AI applications
โ Future trends in AI evaluation
Read the blog here: https://wp.me/p7Qix-wO
Please let me know your thoughts, suggestions, and comments.
| {
"avatarUrl": "/avatars/808e9d7ac99837fe79169d0b8d49c366.svg",
"fullname": "Ajith V Prabhakar",
"name": "ajithprabhakar",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
} | [] | [] | [] | 2024-07-26T14:14:08.000Z | 2024-07-26T14:15:11.110Z | [] | /posts/ajithprabhakar/161395148988600 | 514 | 0 |
431447743586428 | [
{
"type": "text",
"value": "Try to find a better int4 algorithm for LLAMA3.1? For the 8B model, AutoRound boasts an average improvement across 10 zero-shot tasks, scoring 63.93 versus 63.15 (AWQ). Notably, on the MMLU task, it achieved 66.72 compared to 65.25, and on the ARC-C task, it scored 52.13 against 50.94. For further details and comparisons, visit the leaderboard at ",
"raw": "Try to find a better int4 algorithm for LLAMA3.1? For the 8B model, AutoRound boasts an average improvement across 10 zero-shot tasks, scoring 63.93 versus 63.15 (AWQ). Notably, on the MMLU task, it achieved 66.72 compared to 65.25, and on the ARC-C task, it scored 52.13 against 50.94. For further details and comparisons, visit the leaderboard at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard",
"resource": {
"type": "space",
"id": "Intel/low_bit_open_llm_leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ".",
"raw": ".",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Try to find a better int4 algorithm for LLAMA3.1? For the 8B model, AutoRound boasts an average improvement across 10 zero-shot tasks, scoring 63.93 versus 63.15 (AWQ). Notably, on the MMLU task, it achieved 66.72 compared to 65.25, and on the ARC-C task, it scored 52.13 against 50.94. For further details and comparisons, visit the leaderboard at https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard. | {
"avatarUrl": "/avatars/2e88e32ac0d45a1c624026e497eb00b3.svg",
"fullname": "wenhua cheng",
"name": "wenhuach",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
} | [] | [] | [] | 2024-07-26T08:47:10.000Z | 2024-07-26T08:47:50.532Z | [] | /posts/wenhuach/431447743586428 | 632 | 0 |
472528729644134 | [
{
"type": "text",
"value": "When ",
"raw": "When ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@MistralAI",
"resource": null,
"url": null,
"href": null,
"user": "MistralAI",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " drops a blog post labelled \"Large Enough,\" it's going to get serious! ๐๐ก",
"raw": " drops a blog post labelled \"Large Enough,\" it's going to get serious! ๐๐ก",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Mistral-Large-Instruct-2407, just call it Mistral-Large2, is a 123B parameters Instruct model with 128k context ๐๐",
"raw": "- Mistral-Large-Instruct-2407, just call it Mistral-Large2, is a 123B parameters Instruct model with 128k context ๐๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Multilingual in 11 languages; English ๐ฌ๐ง, French ๐ซ๐ท, German ๐ฉ๐ช, Spanish ๐ช๐ธ, Italian ๐ฎ๐น, Chinese ๐จ๐ณ, Japanese ๐ฏ๐ต, Korean ๐ฐ๐ท, Portuguese ๐ต๐น, Dutch ๐ณ๐ฑ, and Polish ๐ต๐ฑ. ๐ฃ๏ธ๐บ๏ธ",
"raw": "- Multilingual in 11 languages; English ๐ฌ๐ง, French ๐ซ๐ท, German ๐ฉ๐ช, Spanish ๐ช๐ธ, Italian ๐ฎ๐น, Chinese ๐จ๐ณ, Japanese ๐ฏ๐ต, Korean ๐ฐ๐ท, Portuguese ๐ต๐น, Dutch ๐ณ๐ฑ, and Polish ๐ต๐ฑ. ๐ฃ๏ธ๐บ๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Also highly focused on programming, trained on 80+ coding languages such as Python, Java, C, C++, Javascript, bash ๐ป๐ง",
"raw": "- Also highly focused on programming, trained on 80+ coding languages such as Python, Java, C, C++, Javascript, bash ๐ป๐ง",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Supports native function calling and structured output. ๐ ๏ธ๐",
"raw": "- Supports native function calling and structured output. ๐ ๏ธ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Released under Mistral Research License (Non-Commercial License, Research only๐) ",
"raw": "- Released under Mistral Research License (Non-Commercial License, Research only๐) ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Open weights only๐, no data or code released ๐๐",
"raw": "- Open weights only๐, no data or code released ๐๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Definitely firing shots at ",
"raw": "Definitely firing shots at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Meta",
"resource": null,
"url": null,
"href": null,
"user": "Meta",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " Llama3.1: ๐ฏ๐ฅ",
"raw": " Llama3.1: ๐ฏ๐ฅ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "MMLU - 84.0% (ML2) vs 79.3% (L3.1-70B) vs 85.2% (L3.1-405B)",
"raw": "MMLU - 84.0% (ML2) vs 79.3% (L3.1-70B) vs 85.2% (L3.1-405B)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "GSM8K - 93% (ML2) vs 95.5% (L3.1-70B-Ins) vs 96.8% (L3.1-405B-Ins)",
"raw": "GSM8K - 93% (ML2) vs 95.5% (L3.1-70B-Ins) vs 96.8% (L3.1-405B-Ins)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Also, it's kinda chunky! ๐ฆ๐ช",
"raw": "Also, it's kinda chunky! ๐ฆ๐ช",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "fp16/ bf16 - ~250GB VRAM",
"raw": "fp16/ bf16 - ~250GB VRAM",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "fp8/ int8 - ~125GB VRAM",
"raw": "fp8/ int8 - ~125GB VRAM",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "int4 - ~60GB VRAM",
"raw": "int4 - ~60GB VRAM",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I tried quantising it to AWQ and GPTQ, but couldn't with 30GB V-RAM. โ๐ฅ๏ธ",
"raw": "I tried quantising it to AWQ and GPTQ, but couldn't with 30GB V-RAM. โ๐ฅ๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Also calling out AWQ and GPTQ on not supporting multi-GPU quantisation! ๐ฅ๏ธโก",
"raw": "Also calling out AWQ and GPTQ on not supporting multi-GPU quantisation! ๐ฅ๏ธโก",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "God sent ",
"raw": "God sent ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@casperhansen",
"resource": null,
"url": null,
"href": null,
"user": "casperhansen",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " has posted AWQ quantised INT4 model (68.68 GB) with the perplexity of 2.889: ",
"raw": " has posted AWQ quantised INT4 model (68.68 GB) with the perplexity of 2.889: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/casperhansen/mistral-large-instruct-2407-awq",
"resource": {
"type": "model",
"id": "casperhansen/mistral-large-instruct-2407-awq",
"discussionNum": null
},
"url": "https://huggingface.co/casperhansen/mistral-large-instruct-2407-awq",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ๐ฅ๐",
"raw": " ๐ฅ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Looks like open AI is going to beat OpenAI! ๐๐ค",
"raw": "Looks like open AI is going to beat OpenAI! ๐๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Blog post: ",
"raw": "Blog post: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://mistral.ai/news/mistral-large-2407/",
"resource": null,
"url": null,
"href": "https://mistral.ai/news/mistral-large-2407/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Models: ",
"raw": "Models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/mistralai/Mistral-Large-Instruct-2407",
"resource": {
"type": "model",
"id": "mistralai/Mistral-Large-Instruct-2407",
"discussionNum": null
},
"url": "https://huggingface.co/mistralai/Mistral-Large-Instruct-2407",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | When @MistralAI drops a blog post labelled "Large Enough," it's going to get serious! ๐๐ก
- Mistral-Large-Instruct-2407, just call it Mistral-Large2, is a 123B parameters Instruct model with 128k context ๐๐
- Multilingual in 11 languages; English ๐ฌ๐ง, French ๐ซ๐ท, German ๐ฉ๐ช, Spanish ๐ช๐ธ, Italian ๐ฎ๐น, Chinese ๐จ๐ณ, Japanese ๐ฏ๐ต, Korean ๐ฐ๐ท, Portuguese ๐ต๐น, Dutch ๐ณ๐ฑ, and Polish ๐ต๐ฑ. ๐ฃ๏ธ๐บ๏ธ
- Also highly focused on programming, trained on 80+ coding languages such as Python, Java, C, C++, Javascript, bash ๐ป๐ง
- Supports native function calling and structured output. ๐ ๏ธ๐
- Released under Mistral Research License (Non-Commercial License, Research only๐)
- Open weights only๐, no data or code released ๐๐
Definitely firing shots at @Meta Llama3.1: ๐ฏ๐ฅ
MMLU - 84.0% (ML2) vs 79.3% (L3.1-70B) vs 85.2% (L3.1-405B)
GSM8K - 93% (ML2) vs 95.5% (L3.1-70B-Ins) vs 96.8% (L3.1-405B-Ins)
Also, it's kinda chunky! ๐ฆ๐ช
fp16/ bf16 - ~250GB VRAM
fp8/ int8 - ~125GB VRAM
int4 - ~60GB VRAM
I tried quantising it to AWQ and GPTQ, but couldn't with 30GB V-RAM. โ๐ฅ๏ธ
Also calling out AWQ and GPTQ on not supporting multi-GPU quantisation! ๐ฅ๏ธโก
God sent @casperhansen has posted AWQ quantised INT4 model (68.68 GB) with the perplexity of 2.889: https://huggingface.co/casperhansen/mistral-large-instruct-2407-awq ๐ฅ๐
Looks like open AI is going to beat OpenAI! ๐๐ค
Blog post: https://mistral.ai/news/mistral-large-2407/
Models: https://huggingface.co/mistralai/Mistral-Large-Instruct-2407 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/kEfvW733vanVsdxyiJp4z.png"
}
] | [
{
"avatarUrl": "/avatars/f0c9185347d53a1f7393346ae3e58635.svg",
"fullname": "casperhansen",
"name": "casperhansen",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 66
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61e8c67cee1e1440121f0240/9sb__WsO5mwmdHHa6xKNc.jpeg",
"fullname": "Meta World Peace",
"name": "Meta",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5
}
] | [] | 2024-07-26T06:43:53.000Z | 2024-07-26T06:43:53.282Z | [] | /posts/singhsidhukuldeep/472528729644134 | 653 | 0 |
475207808661590 | [
{
"type": "text",
"value": "Custom Enterprise LLM/RAG with Real-Time Fine-Tuning ",
"raw": "Custom Enterprise LLM/RAG with Real-Time Fine-Tuning ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://mltblog.com/3WcTS9C",
"resource": null,
"url": null,
"href": "https://mltblog.com/3WcTS9C",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " -- Just released!",
"raw": " -- Just released!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This article features an application of xLLM to extract information from a corporate corpus, using prompts referred to as โqueriesโ. The goal is to serve the business user โ typically an employee of the company or someone allowed access โ with condensed, relevant pieces of information including links, examples, PDFs, tables, charts, definitions and so on, to professional queries.",
"raw": "This article features an application of xLLM to extract information from a corporate corpus, using prompts referred to as โqueriesโ. The goal is to serve the business user โ typically an employee of the company or someone allowed access โ with condensed, relevant pieces of information including links, examples, PDFs, tables, charts, definitions and so on, to professional queries.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "My custom sub-LLM designed from scratch does not rely on any Python library or API, and performs better than search tools available on the market, in terms of speed and results relevancy. It offers the user the ability to fine-tune parameters in real time, and can detect user intent to deliver appropriate output. The good performance comes from the quality of the well-structured input sources, combined with smart crawling to retrieve the embedded knowledge graph and integrate it into the backend tables. Traditional tools rely mostly on tokens, embeddings, billions of parameters and frontend tricks such as prompt engineering to fix backend issues.",
"raw": "My custom sub-LLM designed from scratch does not rely on any Python library or API, and performs better than search tools available on the market, in terms of speed and results relevancy. It offers the user the ability to fine-tune parameters in real time, and can detect user intent to deliver appropriate output. The good performance comes from the quality of the well-structured input sources, combined with smart crawling to retrieve the embedded knowledge graph and integrate it into the backend tables. Traditional tools rely mostly on tokens, embeddings, billions of parameters and frontend tricks such as prompt engineering to fix backend issues.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "To the contrary, my approach focuses on building a solid backend foundational architecture from the ground up. Tokens and embeddings are not the most important components, by a long shot. Cosine similarity and dot products are replaced by pointwise mutual information. There is no neural network, no training, and a small number of explainable parameters, easy to fine-tune.",
"raw": "To the contrary, my approach focuses on building a solid backend foundational architecture from the ground up. Tokens and embeddings are not the most important components, by a long shot. Cosine similarity and dot products are replaced by pointwise mutual information. There is no neural network, no training, and a small number of explainable parameters, easy to fine-tune.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Read more, access the code and data, at ",
"raw": "Read more, access the code and data, at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://mltblog.com/3WcTS9C",
"resource": null,
"url": null,
"href": "https://mltblog.com/3WcTS9C",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Custom Enterprise LLM/RAG with Real-Time Fine-Tuning https://mltblog.com/3WcTS9C -- Just released!
This article features an application of xLLM to extract information from a corporate corpus, using prompts referred to as โqueriesโ. The goal is to serve the business user โ typically an employee of the company or someone allowed access โ with condensed, relevant pieces of information including links, examples, PDFs, tables, charts, definitions and so on, to professional queries.
My custom sub-LLM designed from scratch does not rely on any Python library or API, and performs better than search tools available on the market, in terms of speed and results relevancy. It offers the user the ability to fine-tune parameters in real time, and can detect user intent to deliver appropriate output. The good performance comes from the quality of the well-structured input sources, combined with smart crawling to retrieve the embedded knowledge graph and integrate it into the backend tables. Traditional tools rely mostly on tokens, embeddings, billions of parameters and frontend tricks such as prompt engineering to fix backend issues.
To the contrary, my approach focuses on building a solid backend foundational architecture from the ground up. Tokens and embeddings are not the most important components, by a long shot. Cosine similarity and dot products are replaced by pointwise mutual information. There is no neural network, no training, and a small number of explainable parameters, easy to fine-tune.
Read more, access the code and data, at https://mltblog.com/3WcTS9C | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/669c89e98f2dbc203f9e74ab/higvnXEHeo_Ig2bgTpn47.png",
"fullname": "Vincent Granville",
"name": "vincentg64",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 17,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ง ",
"users": [
"efecelik"
],
"count": 1
}
] | 2024-07-26T00:24:16.000Z | 2024-07-26T00:24:16.325Z | [] | /posts/vincentg64/475207808661590 | 571 | 0 |
336846441665373 | [
{
"type": "text",
"value": "Aspen Institute's wake-up call for journalism: Embrace AI or risk obsolescence ๐ฐ๐ค",
"raw": "Aspen Institute's wake-up call for journalism: Embrace AI or risk obsolescence ๐ฐ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "\"Every new technology comes with risksโit's how the media industry responds that determines how (or whether) news providers can prevail.\" โ ",
"raw": "\"Every new technology comes with risksโit's how the media industry responds that determines how (or whether) news providers can prevail.\" โ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Vivian Schiller",
"raw": "Vivian Schiller",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "ย ",
"raw": "ย ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Bonus: In need of ideas for AI projects? The report is a goldmine of real-world experiments. Here are some lesser-known innovations:",
"raw": "Bonus: In need of ideas for AI projects? The report is a goldmine of real-world experiments. Here are some lesser-known innovations:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Spotting patterns: Semafor uses chatbots to assess newsroom performance",
"raw": "๐ Spotting patterns: Semafor uses chatbots to assess newsroom performance",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Extending reach: Politico summarizes state and federal legislation with AI",
"raw": "๐ Extending reach: Politico summarizes state and federal legislation with AI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐๏ธ Transformation: Washington Post uses AI-generated voices for newsletter narration",
"raw": "๐๏ธ Transformation: Washington Post uses AI-generated voices for newsletter narration",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Comprehensive coverage: Richland Source covers 10,000 Ohio high school sports games yearly with AI",
"raw": "๐ Comprehensive coverage: Richland Source covers 10,000 Ohio high school sports games yearly with AI",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Summarization: Gannett adds AI-generated bullet points to stories",
"raw": "๐ Summarization: Gannett adds AI-generated bullet points to stories",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Translation: Finnish broadcaster Yle built an AI tool to reach Ukrainian immigrants",
"raw": "๐ Translation: Finnish broadcaster Yle built an AI tool to reach Ukrainian immigrants",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ฅ Internal info: AP's Merlin tool pinpoints key video moments",
"raw": "๐ฅ Internal info: AP's Merlin tool pinpoints key video moments",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ณ๏ธ Civic engagement: Spotlight PA's AI assistant answers election questions",
"raw": "๐ณ๏ธ Civic engagement: Spotlight PA's AI assistant answers election questions",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ฃ๏ธ Personalization: Baltimore Times customizes health news with AI voice readers",
"raw": "๐ฃ๏ธ Personalization: Baltimore Times customizes health news with AI voice readers",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ฏ Targeted ads: NYT's AI tool aligns content with advertisers' focus",
"raw": "๐ฏ Targeted ads: NYT's AI tool aligns content with advertisers' focus",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ผ Conversion: WSJ uses ML to boost subscription renewals",
"raw": "๐ผ Conversion: WSJ uses ML to boost subscription renewals",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "A must-read for all in media: ",
"raw": "A must-read for all in media: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://aspendigital.org/wp-content/uploads/2024/07/Aspen-Digital_Here-Come-the-Robots_July-2024.pdf",
"resource": null,
"url": null,
"href": "https://aspendigital.org/wp-content/uploads/2024/07/Aspen-Digital_Here-Come-the-Robots_July-2024.pdf",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "#AIinJournalism #MediaInnovation #FutureofNews",
"raw": "#AIinJournalism #MediaInnovation #FutureofNews",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Aspen Institute's wake-up call for journalism: Embrace AI or risk obsolescence ๐ฐ๐ค
"Every new technology comes with risksโit's how the media industry responds that determines how (or whether) news providers can prevail." โ
Vivian Schiller
ย
Bonus: In need of ideas for AI projects? The report is a goldmine of real-world experiments. Here are some lesser-known innovations:
๐ Spotting patterns: Semafor uses chatbots to assess newsroom performance
๐ Extending reach: Politico summarizes state and federal legislation with AI
๐๏ธ Transformation: Washington Post uses AI-generated voices for newsletter narration
๐ Comprehensive coverage: Richland Source covers 10,000 Ohio high school sports games yearly with AI
๐ Summarization: Gannett adds AI-generated bullet points to stories
๐ Translation: Finnish broadcaster Yle built an AI tool to reach Ukrainian immigrants
๐ฅ Internal info: AP's Merlin tool pinpoints key video moments
๐ณ๏ธ Civic engagement: Spotlight PA's AI assistant answers election questions
๐ฃ๏ธ Personalization: Baltimore Times customizes health news with AI voice readers
๐ฏ Targeted ads: NYT's AI tool aligns content with advertisers' focus
๐ผ Conversion: WSJ uses ML to boost subscription renewals
A must-read for all in media: https://aspendigital.org/wp-content/uploads/2024/07/Aspen-Digital_Here-Come-the-Robots_July-2024.pdf
#AIinJournalism #MediaInnovation #FutureofNews | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [] | [] | [] | 2024-07-25T20:01:46.000Z | 2024-07-25T20:01:46.720Z | [] | /posts/fdaudens/336846441665373 | 593 | 0 |
852983376807703 | [
{
"type": "text",
"value": "๐ Hello from Project Fluently Team!",
"raw": "๐ Hello from Project Fluently Team!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โจ Finally we can give you some details about Supple Diffusion. We worked on it for a long time and we have little left, we apologize that we had to increase the work time.",
"raw": "โจ Finally we can give you some details about Supple Diffusion. We worked on it for a long time and we have little left, we apologize that we had to increase the work time.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ ๏ธ Some technical information. The first version will be the Small version (there will also be Medium, Large, Huge, possibly Tiny), it will be based on the SD1 architecture, that is, one text encoder, U-net, VAE. Now about each component, the first is a text encoder, it will be a CLIP model (perhaps not CLIP-L-path14), CLIP was specially retrained by us in order to achieve the universality of the model in understanding completely different styles and to simplify the prompt as much as possible. Next, we did U-net, U-net in a rather complicated way, first we trained different parts (types) of data with different U-nets, then we carried out merging using different methods, then we trained DPO and SPO using methods, and then we looked at the remaining shortcomings and further trained model, details will come later. We left VAE the same as in SD1 architecture.",
"raw": "๐ ๏ธ Some technical information. The first version will be the Small version (there will also be Medium, Large, Huge, possibly Tiny), it will be based on the SD1 architecture, that is, one text encoder, U-net, VAE. Now about each component, the first is a text encoder, it will be a CLIP model (perhaps not CLIP-L-path14), CLIP was specially retrained by us in order to achieve the universality of the model in understanding completely different styles and to simplify the prompt as much as possible. Next, we did U-net, U-net in a rather complicated way, first we trained different parts (types) of data with different U-nets, then we carried out merging using different methods, then we trained DPO and SPO using methods, and then we looked at the remaining shortcomings and further trained model, details will come later. We left VAE the same as in SD1 architecture.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Compatibility. Another goal of the Supple model series is full compatibility with Auto1111 and ComfyUI already at the release stage, the model is fully supported by these interfaces and the diffusers library and does not require adaptation, your usual Sampling methods are also compatible, such as DPM++ 2M Karras, DPM++ SDE and others.",
"raw": "๐ Compatibility. Another goal of the Supple model series is full compatibility with Auto1111 and ComfyUI already at the release stage, the model is fully supported by these interfaces and the diffusers library and does not require adaptation, your usual Sampling methods are also compatible, such as DPM++ 2M Karras, DPM++ SDE and others.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ง Today, without demo images (there wasnโt much time), final work is underway on the model and we are already preparing to develop the Medium version, the release of the Small version will most likely be in mid-August or earlier.",
"raw": "๐ง Today, without demo images (there wasnโt much time), final work is underway on the model and we are already preparing to develop the Medium version, the release of the Small version will most likely be in mid-August or earlier.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ป Feel free to ask your questions in the comments below the post, we will be happy to answer them, have a nice day!",
"raw": "๐ป Feel free to ask your questions in the comments below the post, we will be happy to answer them, have a nice day!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ Hello from Project Fluently Team!
โจ Finally we can give you some details about Supple Diffusion. We worked on it for a long time and we have little left, we apologize that we had to increase the work time.
๐ ๏ธ Some technical information. The first version will be the Small version (there will also be Medium, Large, Huge, possibly Tiny), it will be based on the SD1 architecture, that is, one text encoder, U-net, VAE. Now about each component, the first is a text encoder, it will be a CLIP model (perhaps not CLIP-L-path14), CLIP was specially retrained by us in order to achieve the universality of the model in understanding completely different styles and to simplify the prompt as much as possible. Next, we did U-net, U-net in a rather complicated way, first we trained different parts (types) of data with different U-nets, then we carried out merging using different methods, then we trained DPO and SPO using methods, and then we looked at the remaining shortcomings and further trained model, details will come later. We left VAE the same as in SD1 architecture.
๐ Compatibility. Another goal of the Supple model series is full compatibility with Auto1111 and ComfyUI already at the release stage, the model is fully supported by these interfaces and the diffusers library and does not require adaptation, your usual Sampling methods are also compatible, such as DPM++ 2M Karras, DPM++ SDE and others.
๐ง Today, without demo images (there wasnโt much time), final work is underway on the model and we are already preparing to develop the Medium version, the release of the Small version will most likely be in mid-August or earlier.
๐ป Feel free to ask your questions in the comments below the post, we will be happy to answer them, have a nice day! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/o-5N9QyjHgmSMk69e3O55.png",
"fullname": "Evgeniy Hristoforu",
"name": "ehristoforu",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 235,
"isFollowing": false
} | [] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"Nymbo",
"WaveCut",
"YaTharThShaRma999",
"not-lain",
"comarproject",
"dreamdrop-art",
"GPT007",
"louisbrulenaudet",
"osanseviero",
"ehristoforu"
],
"count": 10
},
{
"reaction": "๐",
"users": [
"ehristoforu",
"Nymbo",
"YaTharThShaRma999",
"John6666",
"dreamdrop-art",
"1aurent",
"GPT007"
],
"count": 7
},
{
"reaction": "๐",
"users": [
"ehristoforu",
"Nymbo",
"John6666",
"YaTharThShaRma999",
"dreamdrop-art",
"GPT007"
],
"count": 6
},
{
"reaction": "๐ฅ",
"users": [
"ehristoforu",
"Nymbo",
"YaTharThShaRma999",
"dreamdrop-art",
"GPT007"
],
"count": 5
},
{
"reaction": "๐",
"users": [
"ehristoforu",
"Nymbo",
"YaTharThShaRma999",
"dreamdrop-art",
"GPT007"
],
"count": 5
},
{
"reaction": "๐",
"users": [
"ehristoforu",
"Nymbo",
"YaTharThShaRma999",
"dreamdrop-art",
"GPT007"
],
"count": 5
},
{
"reaction": "๐ค",
"users": [
"ehristoforu",
"Nymbo",
"diegotluz",
"dreamdrop-art",
"GPT007"
],
"count": 5
},
{
"reaction": "๐คฏ",
"users": [
"ehristoforu",
"Nymbo",
"dreamdrop-art",
"GPT007"
],
"count": 4
},
{
"reaction": "๐ค",
"users": [
"ehristoforu",
"Nymbo",
"dreamdrop-art",
"GPT007"
],
"count": 4
},
{
"reaction": "๐ง ",
"users": [
"ehristoforu",
"Nymbo",
"dreamdrop-art",
"GPT007"
],
"count": 4
},
{
"reaction": "โ",
"users": [
"ehristoforu",
"Nymbo",
"dreamdrop-art",
"GPT007"
],
"count": 4
}
] | 2024-07-25T19:22:16.000Z | 2024-08-13T04:33:15.316Z | [
{
"avatarUrl": "/avatars/9937e8edd311e4c5c9610c16ef2a6df6.svg",
"fullname": "Jonas Kander",
"name": "Degstek",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/ehristoforu/852983376807703 | 3,542 | 1 |
610480712922619 | [
{
"type": "text",
"value": "๐๐ด๐ฒ๐ป๐๐ถ๐ฐ ๐๐ฎ๐๐ฎ ๐ฎ๐ป๐ฎ๐น๐๐๐: ๐ฑ๐ฟ๐ผ๐ฝ ๐๐ผ๐๐ฟ ๐ฑ๐ฎ๐๐ฎ ๐ณ๐ถ๐น๐ฒ, ๐น๐ฒ๐ ๐๐ต๐ฒ ๐๐๐ ๐ฑ๐ผ ๐๐ต๐ฒ ๐ฎ๐ป๐ฎ๐น๐๐๐ถ๐ ๐โ๏ธ",
"raw": "๐๐ด๐ฒ๐ป๐๐ถ๐ฐ ๐๐ฎ๐๐ฎ ๐ฎ๐ป๐ฎ๐น๐๐๐: ๐ฑ๐ฟ๐ผ๐ฝ ๐๐ผ๐๐ฟ ๐ฑ๐ฎ๐๐ฎ ๐ณ๐ถ๐น๐ฒ, ๐น๐ฒ๐ ๐๐ต๐ฒ ๐๐๐ ๐ฑ๐ผ ๐๐ต๐ฒ ๐ฎ๐ป๐ฎ๐น๐๐๐ถ๐ ๐โ๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Need to make quick exploratory data analysis? โก๏ธ Get help from an agent.",
"raw": "Need to make quick exploratory data analysis? โก๏ธ Get help from an agent.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I was impressed by Llama-3.1's capacity to derive insights from data. Given a csv file, it makes quick work of exploratory data analysis and can derive interesting insights.",
"raw": "I was impressed by Llama-3.1's capacity to derive insights from data. Given a csv file, it makes quick work of exploratory data analysis and can derive interesting insights.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "On the data from the Kaggle titanic challenge, that records which passengers survived the Titanic wreckage, it was able by itself to derive interesting trends like \"passengers that paid higher fares were more likely to survive\" or \"survival rate was much higher for women than men\".",
"raw": "On the data from the Kaggle titanic challenge, that records which passengers survived the Titanic wreckage, it was able by itself to derive interesting trends like \"passengers that paid higher fares were more likely to survive\" or \"survival rate was much higher for women than men\".",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The cookbook even lets the agent built its own submission to the challenge, and it ranks under 3,000 out of 17,000 submissions: ๐ not bad at all!",
"raw": "The cookbook even lets the agent built its own submission to the challenge, and it ranks under 3,000 out of 17,000 submissions: ๐ not bad at all!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Try it for yourself in this Space demo ๐ ",
"raw": "Try it for yourself in this Space demo ๐ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/m-ric/agent-data-analyst",
"resource": {
"type": "space",
"id": "m-ric/agent-data-analyst",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/m-ric/agent-data-analyst",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐๐ด๐ฒ๐ป๐๐ถ๐ฐ ๐๐ฎ๐๐ฎ ๐ฎ๐ป๐ฎ๐น๐๐๐: ๐ฑ๐ฟ๐ผ๐ฝ ๐๐ผ๐๐ฟ ๐ฑ๐ฎ๐๐ฎ ๐ณ๐ถ๐น๐ฒ, ๐น๐ฒ๐ ๐๐ต๐ฒ ๐๐๐ ๐ฑ๐ผ ๐๐ต๐ฒ ๐ฎ๐ป๐ฎ๐น๐๐๐ถ๐ ๐โ๏ธ
Need to make quick exploratory data analysis? โก๏ธ Get help from an agent.
I was impressed by Llama-3.1's capacity to derive insights from data. Given a csv file, it makes quick work of exploratory data analysis and can derive interesting insights.
On the data from the Kaggle titanic challenge, that records which passengers survived the Titanic wreckage, it was able by itself to derive interesting trends like "passengers that paid higher fares were more likely to survive" or "survival rate was much higher for women than men".
The cookbook even lets the agent built its own submission to the challenge, and it ranks under 3,000 out of 17,000 submissions: ๐ not bad at all!
Try it for yourself in this Space demo ๐ https://huggingface.co/spaces/m-ric/agent-data-analyst | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"ucsahin",
"osanseviero",
"Taylor658",
"introvoyz041",
"ranajoy98",
"sergiopaniego",
"rjmehta",
"nbroad",
"John6666",
"Nelathan",
"jzou1995",
"asoria"
],
"count": 12
},
{
"reaction": "๐",
"users": [
"louisbrulenaudet"
],
"count": 1
}
] | 2024-07-25T16:57:12.000Z | 2024-07-30T03:48:40.769Z | [
{
"avatarUrl": "/avatars/d709f3dfd1c532c6eb28c98b76bed7af.svg",
"fullname": "Michael Booth",
"name": "mjboothaus",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/2377b762baac9a3b6659e7bc238a6027.svg",
"fullname": "Davor Kondic",
"name": "dkondic",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/m-ric/610480712922619 | 2,265 | 2 |
602595245870297 | [
{
"type": "text",
"value": "In principle, it's possible to \"abliterate\" refusals in any Llama 3.1 8B models via application of a LoRA, using only mergekit.",
"raw": "In principle, it's possible to \"abliterate\" refusals in any Llama 3.1 8B models via application of a LoRA, using only mergekit.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Proof of concept below:",
"raw": "Proof of concept below:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"resource": {
"type": "model",
"id": "grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"discussionNum": null
},
"url": "https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | In principle, it's possible to "abliterate" refusals in any Llama 3.1 8B models via application of a LoRA, using only mergekit.
Proof of concept below:
https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65c992424936ab38ecf706b0/aq7vuHFPO1S93fwJk0Cuq.jpeg",
"fullname": "Jim Lai",
"name": "grimjim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 163,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"John6666",
"sfdsf435",
"Meggido",
"darkstorm2150",
"Metin",
"IAmTheCollector",
"ucsahin",
"efecelik"
],
"count": 8
},
{
"reaction": "๐",
"users": [
"ZeroWw"
],
"count": 1
},
{
"reaction": "โค๏ธ",
"users": [
"ijohn07"
],
"count": 1
}
] | 2024-07-25T16:53:54.000Z | 2024-07-26T15:52:48.766Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65c992424936ab38ecf706b0/aq7vuHFPO1S93fwJk0Cuq.jpeg",
"fullname": "Jim Lai",
"name": "grimjim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 163,
"isFollowing": false
},
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
}
] | /posts/grimjim/602595245870297 | 2,283 | 5 |
422078687475543 | [
{
"type": "text",
"value": "Made a demo for all my Brazil XL LoRA models so far. Use it for free at ",
"raw": "Made a demo for all my Brazil XL LoRA models so far. Use it for free at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/lucianosb/brazilxl-demo",
"resource": {
"type": "space",
"id": "lucianosb/brazilxl-demo",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/lucianosb/brazilxl-demo",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Brazil XL is an initiative that brings better representations of Brazilian culture to Stable Diffusion. I started this when I noticed some keywords would not generate the desired subject on any base model, so I trained my own models and I'm sharing them with the HF community.",
"raw": "Brazil XL is an initiative that brings better representations of Brazilian culture to Stable Diffusion. I started this when I noticed some keywords would not generate the desired subject on any base model, so I trained my own models and I'm sharing them with the HF community.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I'll keep updating the space as new models get trained on the following months.",
"raw": "I'll keep updating the space as new models get trained on the following months.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Made a demo for all my Brazil XL LoRA models so far. Use it for free at https://huggingface.co/spaces/lucianosb/brazilxl-demo
Brazil XL is an initiative that brings better representations of Brazilian culture to Stable Diffusion. I started this when I noticed some keywords would not generate the desired subject on any base model, so I trained my own models and I'm sharing them with the HF community.
I'll keep updating the space as new models get trained on the following months. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1663599585288-noauth.png",
"fullname": "Luciano Santa Brรญgida",
"name": "lucianosb",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"fdaudens",
"John6666"
],
"count": 2
}
] | 2024-07-25T11:38:21.000Z | 2024-07-25T11:38:21.544Z | [] | /posts/lucianosb/422078687475543 | 1,294 | 0 |
660573758994799 | [
{
"type": "text",
"value": "We have recently merged Video-LLaVA to transformers! ๐ค๐๏ธ",
"raw": "We have recently merged Video-LLaVA to transformers! ๐ค๐๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "What makes this model different?",
"raw": "What makes this model different?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Demo: ",
"raw": "Demo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/llava-hf/video-llava",
"resource": {
"type": "space",
"id": "llava-hf/video-llava",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/llava-hf/video-llava",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf",
"resource": {
"type": "model",
"id": "LanguageBind/Video-LLaVA-7B-hf",
"discussionNum": null
},
"url": "https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Compared to other models that take image and video input and either project them separately or downsampling video and projecting selected frames, Video-LLaVA is converting images and videos to unified representation and project them using a shared projection layer.",
"raw": "Compared to other models that take image and video input and either project them separately or downsampling video and projecting selected frames, Video-LLaVA is converting images and videos to unified representation and project them using a shared projection layer.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It uses Vicuna 1.5 as the language model and LanguageBind's own encoders that's based on OpenCLIP, these encoders project the modalities to an unified representation before passing to projection layer.",
"raw": "It uses Vicuna 1.5 as the language model and LanguageBind's own encoders that's based on OpenCLIP, these encoders project the modalities to an unified representation before passing to projection layer.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I feel like one of the coolest features of this model is the joint understanding which is also introduced recently with many models",
"raw": "I feel like one of the coolest features of this model is the joint understanding which is also introduced recently with many models",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It's a relatively older model but ahead of it's time and works very well! Which means, e.g. you can pass model an image of a cat and a video of a cat and ask questions like whether the cat in the image exists in video or not ๐คฉ",
"raw": "It's a relatively older model but ahead of it's time and works very well! Which means, e.g. you can pass model an image of a cat and a video of a cat and ask questions like whether the cat in the image exists in video or not ๐คฉ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | We have recently merged Video-LLaVA to transformers! ๐ค๐๏ธ
What makes this model different?
Demo: https://huggingface.co/spaces/llava-hf/video-llava
Model: https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf
Compared to other models that take image and video input and either project them separately or downsampling video and projecting selected frames, Video-LLaVA is converting images and videos to unified representation and project them using a shared projection layer.
It uses Vicuna 1.5 as the language model and LanguageBind's own encoders that's based on OpenCLIP, these encoders project the modalities to an unified representation before passing to projection layer.
I feel like one of the coolest features of this model is the joint understanding which is also introduced recently with many models
It's a relatively older model but ahead of it's time and works very well! Which means, e.g. you can pass model an image of a cat and a video of a cat and ask questions like whether the cat in the image exists in video or not ๐คฉ
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/H6V5yN2cymnsO4GKr6A6R.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"adorkin",
"orkut",
"osanseviero",
"Andyrasika",
"SizorCloud"
],
"count": 5
},
{
"reaction": "๐",
"users": [
"prithivMLmods",
"Andyrasika",
"NHLOCAL"
],
"count": 3
},
{
"reaction": "โค๏ธ",
"users": [
"Tonic"
],
"count": 1
}
] | 2024-07-25T11:03:44.000Z | 2024-07-25T11:03:44.465Z | [] | /posts/merve/660573758994799 | 2,267 | 0 |
794863533092905 | [
{
"type": "text",
"value": "The Meta Llama-3.1 model series can be used for distilling and fine-tuning but this requires annotated preference data so I created a Human Feedback Collector based on Gradio that directly logs data to the Hugging Face Hub. ",
"raw": "The Meta Llama-3.1 model series can be used for distilling and fine-tuning but this requires annotated preference data so I created a Human Feedback Collector based on Gradio that directly logs data to the Hugging Face Hub. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Model meta-llama/Meta-Llama-3.1-8B-Instruct",
"raw": "- Model meta-llama/Meta-Llama-3.1-8B-Instruct",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Data SFT, KTO and DPO data",
"raw": "- Data SFT, KTO and DPO data",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Runs on free Zero GPUs in Hugging Face Spaces",
"raw": "- Runs on free Zero GPUs in Hugging Face Spaces",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Might need some human curation in Argilla",
"raw": "- Might need some human curation in Argilla",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Or provide some AI feedback with distilabel",
"raw": "- Or provide some AI feedback with distilabel",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/collections/davidberenstein1957/chatinterface-llm-human-feedback-collectors-66a22859c9e703d2af7500c1",
"resource": null,
"url": null,
"href": "https://huggingface.co/collections/davidberenstein1957/chatinterface-llm-human-feedback-collectors-66a22859c9e703d2af7500c1",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | The Meta Llama-3.1 model series can be used for distilling and fine-tuning but this requires annotated preference data so I created a Human Feedback Collector based on Gradio that directly logs data to the Hugging Face Hub.
- Model meta-llama/Meta-Llama-3.1-8B-Instruct
- Data SFT, KTO and DPO data
- Runs on free Zero GPUs in Hugging Face Spaces
- Might need some human curation in Argilla
- Or provide some AI feedback with distilabel
https://huggingface.co/collections/davidberenstein1957/chatinterface-llm-human-feedback-collectors-66a22859c9e703d2af7500c1 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"fdaudens",
"davidberenstein1957",
"osanseviero",
"Dihelson",
"Nelathan",
"Ramikan-BR"
],
"count": 6
},
{
"reaction": "๐",
"users": [
"davidberenstein1957",
"osanseviero",
"NickyNicky",
"Dihelson",
"danielus",
"Ramikan-BR"
],
"count": 6
},
{
"reaction": "โค๏ธ",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-07-25T10:28:22.000Z | 2024-07-25T10:28:22.239Z | [] | /posts/davidberenstein1957/794863533092905 | 1,409 | 0 |
191397440129338 | [
{
"type": "text",
"value": "Bellman Swedish finetune based on llama3.1 8b is now available:",
"raw": "Bellman Swedish finetune based on llama3.1 8b is now available:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/neph1/llama-3.1-instruct-bellman-8b-swedish",
"resource": {
"type": "model",
"id": "neph1/llama-3.1-instruct-bellman-8b-swedish",
"discussionNum": null
},
"url": "https://huggingface.co/neph1/llama-3.1-instruct-bellman-8b-swedish",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "More quants and fp16 are coming. Working out some issues with colab..",
"raw": "More quants and fp16 are coming. Working out some issues with colab..",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Bellman Swedish finetune based on llama3.1 8b is now available:
https://huggingface.co/neph1/llama-3.1-instruct-bellman-8b-swedish
More quants and fp16 are coming. Working out some issues with colab.. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/653cd3049107029eb004f968/Y4XphXmk8P51GlIi6u9cd.png",
"fullname": "Rickard Edรฉn",
"name": "neph1",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 13,
"isFollowing": false
} | [] | [] | [] | 2024-07-25T06:53:31.000Z | 2024-07-25T06:53:31.693Z | [] | /posts/neph1/191397440129338 | 725 | 0 |
750213477785569 | [
{
"type": "text",
"value": "Super Exciting New Paper By Meta๐ค๐ง ๐",
"raw": "Super Exciting New Paper By Meta๐ค๐ง ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Discrete Flow Matching:",
"raw": "Discrete Flow Matching:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Introduces a new framework/algorithm for generating text/code without having to predict auto-regressively or one โwordโ at a time as traditional GPT models do. It generates all parts of the text/code at once.",
"raw": "Introduces a new framework/algorithm for generating text/code without having to predict auto-regressively or one โwordโ at a time as traditional GPT models do. It generates all parts of the text/code at once.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The algorithm does this by slowly transforming random noise (source) into meaningful text (data). It learns how to transform samples along a path created between source and target using a \"probability velocity\" that describes how probabilities change over time. During generation, DFM starts with a random sample and iteratively updates it using this learned velocity, gradually transforming it into a sample from the target distribution. This allows for non-autoregressive generation.",
"raw": "The algorithm does this by slowly transforming random noise (source) into meaningful text (data). It learns how to transform samples along a path created between source and target using a \"probability velocity\" that describes how probabilities change over time. During generation, DFM starts with a random sample and iteratively updates it using this learned velocity, gradually transforming it into a sample from the target distribution. This allows for non-autoregressive generation.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "They were able to scale models of up to 1.7B parameters achieving impressive scores on HumanEval and MBPP for coding, significantly closing the gap between autoregressive models and discrete flow models.",
"raw": "They were able to scale models of up to 1.7B parameters achieving impressive scores on HumanEval and MBPP for coding, significantly closing the gap between autoregressive models and discrete flow models.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Though in its infancy, it sure does hold a promising future as leading research scientists argue non-autoregressive methods yield better reasoning.",
"raw": "Though in its infancy, it sure does hold a promising future as leading research scientists argue non-autoregressive methods yield better reasoning.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Super Exciting New Paper By Meta๐ค๐ง ๐
Discrete Flow Matching:
Introduces a new framework/algorithm for generating text/code without having to predict auto-regressively or one โwordโ at a time as traditional GPT models do. It generates all parts of the text/code at once.
The algorithm does this by slowly transforming random noise (source) into meaningful text (data). It learns how to transform samples along a path created between source and target using a "probability velocity" that describes how probabilities change over time. During generation, DFM starts with a random sample and iteratively updates it using this learned velocity, gradually transforming it into a sample from the target distribution. This allows for non-autoregressive generation.
They were able to scale models of up to 1.7B parameters achieving impressive scores on HumanEval and MBPP for coding, significantly closing the gap between autoregressive models and discrete flow models.
Though in its infancy, it sure does hold a promising future as leading research scientists argue non-autoregressive methods yield better reasoning. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6438a9027de34e8ea7e4b257/vib8QSd1AWMr_bR9ig_xJ.jpeg",
"fullname": "Jaward Sesay",
"name": "Jaward",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 189,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/rc-5f46W9hFkvqIQS9s_x.qt"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/J6qwFBFaK5ntJwRfitlPR.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/fI4nSh1rcp24Jc-70Yira.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/jqR_WEwmMPRdSfl65Yv19.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/B7rXyixBfUoZrjeGikjGc.jpeg"
}
] | [] | [
{
"reaction": "๐",
"users": [
"eek",
"takeraparterer",
"victor",
"Xdotnet",
"fatimarizwan",
"ZeroWw",
"Kkordik"
],
"count": 7
},
{
"reaction": "๐",
"users": [
"takeraparterer",
"fatimarizwan",
"ZeroWw"
],
"count": 3
},
{
"reaction": "๐ง ",
"users": [
"fatimarizwan",
"den0620"
],
"count": 2
}
] | 2024-07-25T06:36:56.000Z | 2024-07-25T06:36:56.835Z | [] | /posts/Jaward/750213477785569 | 1,715 | 0 |
909191721549783 | [
{
"type": "text",
"value": "Yet another post hailing how good Meta Llama 3.1 is? ๐ค I guess not!",
"raw": "Yet another post hailing how good Meta Llama 3.1 is? ๐ค I guess not!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "While Llama 3.1 is truly impressive, especially 405B (which gives GPT-4o a run for its money! ๐ช)",
"raw": "While Llama 3.1 is truly impressive, especially 405B (which gives GPT-4o a run for its money! ๐ช)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I was surprised to see that on the Open LLM Leaderboard, Llama 3.1 70B was not able to dethrone the current king Qwen2-72B! ๐",
"raw": "I was surprised to see that on the Open LLM Leaderboard, Llama 3.1 70B was not able to dethrone the current king Qwen2-72B! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Not only that, for a few benchmarks like MATH Lvl 5, it was completely lagging behind Qwen2-72B! ๐",
"raw": "Not only that, for a few benchmarks like MATH Lvl 5, it was completely lagging behind Qwen2-72B! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Also, the benchmarks are completely off compared to the official numbers from Meta! ๐คฏ",
"raw": "Also, the benchmarks are completely off compared to the official numbers from Meta! ๐คฏ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Based on the responses, I still believe Llama 3.1 will perform better than Qwen2 on LMSYS Chatbot Arena. ๐ค But it still lags behind on too many benchmarks! ๐โโ๏ธ",
"raw": "Based on the responses, I still believe Llama 3.1 will perform better than Qwen2 on LMSYS Chatbot Arena. ๐ค But it still lags behind on too many benchmarks! ๐โโ๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Open LLM Leaderboard: ",
"raw": "Open LLM Leaderboard: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard",
"resource": {
"type": "space",
"id": "open-llm-leaderboard/open_llm_leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ๐",
"raw": " ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Hopefully, this is just an Open LLM Leaderboard error! ",
"raw": "Hopefully, this is just an Open LLM Leaderboard error! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@open-llm-leaderboard",
"resource": null,
"url": null,
"href": null,
"user": "open-llm-leaderboard",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " SOS! ๐จ",
"raw": " SOS! ๐จ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Yet another post hailing how good Meta Llama 3.1 is? ๐ค I guess not!
While Llama 3.1 is truly impressive, especially 405B (which gives GPT-4o a run for its money! ๐ช)
I was surprised to see that on the Open LLM Leaderboard, Llama 3.1 70B was not able to dethrone the current king Qwen2-72B! ๐
Not only that, for a few benchmarks like MATH Lvl 5, it was completely lagging behind Qwen2-72B! ๐
Also, the benchmarks are completely off compared to the official numbers from Meta! ๐คฏ
Based on the responses, I still believe Llama 3.1 will perform better than Qwen2 on LMSYS Chatbot Arena. ๐ค But it still lags behind on too many benchmarks! ๐โโ๏ธ
Open LLM Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard ๐
Hopefully, this is just an Open LLM Leaderboard error! @open-llm-leaderboard SOS! ๐จ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/ytL9OZf0VHAgxKkqM3OK5.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/2w1x6af1zUzChaJfPxDg5.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"eek",
"osanseviero",
"dillfrescott",
"quyettv",
"d8rt8v"
],
"count": 5
},
{
"reaction": "๐",
"users": [
"Ruaruaruabick",
"Winnougan"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"SicariusSicariiStuff"
],
"count": 1
}
] | 2024-07-25T06:20:26.000Z | 2024-07-26T17:47:27.707Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64bffde3acf6b898b369355a/htPoH4DFNmbMrlbHgBFpz.png",
"fullname": "nlpguy",
"name": "nlpguy",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 7,
"isFollowing": false
},
{
"avatarUrl": "/avatars/d92c459b18a4aa5642e5c4bd3b8e3fe4.svg",
"fullname": "Mendonca",
"name": "Dihelson",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6569216f9c96f1a47bf45788/mCLqmAs4dOjKdxNQVAp1w.png",
"fullname": "Sica Rius",
"name": "SicariusSicariiStuff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 132,
"isFollowing": false
},
{
"avatarUrl": "/avatars/3e9a9c3349e282d72e558febd93d6f12.svg",
"fullname": "quyettv",
"name": "quyettv",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "/avatars/981ae724a0e86c2afbf7fe59d640f39e.svg",
"fullname": "none",
"name": "none344",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/singhsidhukuldeep/909191721549783 | 1,553 | 8 |
143199321478313 | [
{
"type": "text",
"value": "๐ Introducing the Model Drops Tracker! ๐ต๏ธโโ๏ธ",
"raw": "๐ Introducing the Model Drops Tracker! ๐ต๏ธโโ๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Feeling overwhelmed by the AI model release frenzy? ๐คฏ You're not alone!",
"raw": "Feeling overwhelmed by the AI model release frenzy? ๐คฏ You're not alone!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I built this simple tool to help us all keep up:",
"raw": "I built this simple tool to help us all keep up:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Filter recent models from the ๐ค Hub",
"raw": "- Filter recent models from the ๐ค Hub",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Set minimum likes threshold",
"raw": "- Set minimum likes threshold",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Choose how recent you want to go",
"raw": "- Choose how recent you want to go",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Try it out and let me know what you think: ",
"raw": "Try it out and let me know what you think: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/fdaudens/Model-Drops-Tracker",
"resource": {
"type": "space",
"id": "fdaudens/Model-Drops-Tracker",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/fdaudens/Model-Drops-Tracker",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Any features you'd like to see added?",
"raw": "Any features you'd like to see added?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "#AIModels",
"raw": "#AIModels",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ Introducing the Model Drops Tracker! ๐ต๏ธโโ๏ธ
Feeling overwhelmed by the AI model release frenzy? ๐คฏ You're not alone!
I built this simple tool to help us all keep up:
- Filter recent models from the ๐ค Hub
- Set minimum likes threshold
- Choose how recent you want to go
Try it out and let me know what you think: https://huggingface.co/spaces/fdaudens/Model-Drops-Tracker
Any features you'd like to see added?
#AIModels | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/3UslU7TB7CHobIwSadfUa.mp4"
}
] | [] | [
{
"reaction": "๐",
"users": [
"davidberenstein1957",
"ajibawa-2023",
"jdzw2014",
"lucianosb",
"ecyht2",
"jgitsolutions",
"John6666",
"LewSypher",
"Nymbo"
],
"count": 9
},
{
"reaction": "๐",
"users": [
"adorkin"
],
"count": 1
}
] | 2024-07-24T19:36:33.000Z | 2024-07-26T18:33:31.204Z | [
{
"avatarUrl": "/avatars/52a153d04d325469e1be69bce610ebe5.svg",
"fullname": "ecyht2",
"name": "ecyht2",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648a8d2d4ea19a8097e1c0d7/PKDiLe0WCwzNVWgvLvqr7.jpeg",
"fullname": "Henry Holloway",
"name": "henryholloway",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/fdaudens/143199321478313 | 2,286 | 3 |
735630595311197 | [
{
"type": "text",
"value": "๐๐๐ป๐๐๐ฎ๐ป-๐๐ฎ๐ฟ๐ด๐ฒ ๐ท๐๐๐ ๐ฟ๐ฒ๐น๐ฒ๐ฎ๐๐ฒ๐ฑ ๐ฏ๐ ๐ง๐ฒ๐ป๐ฐ๐ฒ๐ป๐: ๐๐ฎ๐ฟ๐ด๐ฒ๐๐ ๐ฒ๐๐ฒ๐ฟ ๐ผ๐ฝ๐ฒ๐ป ๐ ๐ผ๐ ๐๐๐ , ๐ผ๐ป๐น๐ ๐ฑ๐ฎ๐ ๐ฎ๐ฐ๐๐ถ๐๐ฒ ๐ฝ๐ฎ๐ฟ๐ฎ๐บ๐ฒ๐๐ฒ๐ฟ๐ ๐ฏ๐๐ ๐ฏ๐ฒ๐ฎ๐๐ ๐๐๐ฎ๐ ๐ ๐ฏ.๐ญ-๐ฐ๐ฌ๐ฑ๐ ๐ผ๐ป ๐บ๐ผ๐๐ ๐ฎ๐ฐ๐ฎ๐ฑ๐ฒ๐บ๐ถ๐ฐ ๐ฏ๐ฒ๐ป๐ฐ๐ต๐บ๐ฎ๐ฟ๐ธ๐ ๐",
"raw": "๐๐๐ป๐๐๐ฎ๐ป-๐๐ฎ๐ฟ๐ด๐ฒ ๐ท๐๐๐ ๐ฟ๐ฒ๐น๐ฒ๐ฎ๐๐ฒ๐ฑ ๐ฏ๐ ๐ง๐ฒ๐ป๐ฐ๐ฒ๐ป๐: ๐๐ฎ๐ฟ๐ด๐ฒ๐๐ ๐ฒ๐๐ฒ๐ฟ ๐ผ๐ฝ๐ฒ๐ป ๐ ๐ผ๐ ๐๐๐ , ๐ผ๐ป๐น๐ ๐ฑ๐ฎ๐ ๐ฎ๐ฐ๐๐ถ๐๐ฒ ๐ฝ๐ฎ๐ฟ๐ฎ๐บ๐ฒ๐๐ฒ๐ฟ๐ ๐ฏ๐๐ ๐ฏ๐ฒ๐ฎ๐๐ ๐๐๐ฎ๐ ๐ ๐ฏ.๐ญ-๐ฐ๐ฌ๐ฑ๐ ๐ผ๐ป ๐บ๐ผ๐๐ ๐ฎ๐ฐ๐ฎ๐ฑ๐ฒ๐บ๐ถ๐ฐ ๐ฏ๐ฒ๐ป๐ฐ๐ต๐บ๐ฎ๐ฟ๐ธ๐ ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โก Mixture of Experts (MoE) architecture: 389 B parameters in total, but only 52B are activated for any input",
"raw": "โก Mixture of Experts (MoE) architecture: 389 B parameters in total, but only 52B are activated for any input",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐งช Trained on 7T tokens, including 1.5T tokens of synthetic data",
"raw": "๐งช Trained on 7T tokens, including 1.5T tokens of synthetic data",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐๏ธ Architecture : Novel \"recycle routing\" prevents token dropping when experts are overrloaded",
"raw": "๐๏ธ Architecture : Novel \"recycle routing\" prevents token dropping when experts are overrloaded",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Great benchmark results: Surpasses Llama-3-405B-Instruct in most benchmarks although it has 8x fewer active parameters",
"raw": "๐ Great benchmark results: Surpasses Llama-3-405B-Instruct in most benchmarks although it has 8x fewer active parameters",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โฃ Impressive perf on MATH: 77.4",
"raw": "โฃ Impressive perf on MATH: 77.4",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ย Large context length: up to 256K tokens",
"raw": "๐ย Large context length: up to 256K tokens",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ License:",
"raw": "๐ License:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โฃ Commercial use allowed, except if your products have >100M monthly active users",
"raw": "โฃ Commercial use allowed, except if your products have >100M monthly active users",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โฃ No access in the EU",
"raw": "โฃ No access in the EU",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐คย Model weights available on HF!",
"raw": "๐คย Model weights available on HF!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Read the full paper here ๐ย ",
"raw": "Read the full paper here ๐ย ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2411.02265",
"resource": {
"type": "paper",
"id": "2411.02265",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2411.02265",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated\n Parameters by Tencent (2411.02265)"
}
] | ๐๐๐ป๐๐๐ฎ๐ป-๐๐ฎ๐ฟ๐ด๐ฒ ๐ท๐๐๐ ๐ฟ๐ฒ๐น๐ฒ๐ฎ๐๐ฒ๐ฑ ๐ฏ๐ ๐ง๐ฒ๐ป๐ฐ๐ฒ๐ป๐: ๐๐ฎ๐ฟ๐ด๐ฒ๐๐ ๐ฒ๐๐ฒ๐ฟ ๐ผ๐ฝ๐ฒ๐ป ๐ ๐ผ๐ ๐๐๐ , ๐ผ๐ป๐น๐ ๐ฑ๐ฎ๐ ๐ฎ๐ฐ๐๐ถ๐๐ฒ ๐ฝ๐ฎ๐ฟ๐ฎ๐บ๐ฒ๐๐ฒ๐ฟ๐ ๐ฏ๐๐ ๐ฏ๐ฒ๐ฎ๐๐ ๐๐๐ฎ๐ ๐ ๐ฏ.๐ญ-๐ฐ๐ฌ๐ฑ๐ ๐ผ๐ป ๐บ๐ผ๐๐ ๐ฎ๐ฐ๐ฎ๐ฑ๐ฒ๐บ๐ถ๐ฐ ๐ฏ๐ฒ๐ป๐ฐ๐ต๐บ๐ฎ๐ฟ๐ธ๐ ๐
โก Mixture of Experts (MoE) architecture: 389 B parameters in total, but only 52B are activated for any input
๐งช Trained on 7T tokens, including 1.5T tokens of synthetic data
๐๏ธ Architecture : Novel "recycle routing" prevents token dropping when experts are overrloaded
๐ Great benchmark results: Surpasses Llama-3-405B-Instruct in most benchmarks although it has 8x fewer active parameters
โฃ Impressive perf on MATH: 77.4
๐ย Large context length: up to 256K tokens
๐ License:
โฃ Commercial use allowed, except if your products have >100M monthly active users
โฃ No access in the EU
๐คย Model weights available on HF!
Read the full paper here ๐ย https://huggingface.co/papers/2411.02265 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63d10d4e8eaa4831005e92b5/3LPlzA5TVmgNvE-J3kg5R.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"Hooo5",
"YaTharThShaRma999",
"John6666",
"Chief-Inspector"
],
"count": 4
},
{
"reaction": "๐",
"users": [
"Hooo5",
"fdaudens",
"YaTharThShaRma999",
"prithivMLmods"
],
"count": 4
},
{
"reaction": "๐",
"users": [
"Rsln"
],
"count": 1
}
] | 2024-11-05T09:36:47.000Z | 2024-11-05T09:36:47.232Z | [] | /posts/m-ric/735630595311197 | 2,473 | 0 |
605570550011105 | [
{
"type": "text",
"value": "๐ Introducing Kompy.info Uzbek Educational Dataset - ",
"raw": "๐ Introducing Kompy.info Uzbek Educational Dataset - ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/nyuuzyou/kompy",
"resource": {
"type": "dataset",
"id": "nyuuzyou/kompy",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/nyuuzyou/kompy",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Dataset highlights:",
"raw": "Dataset highlights:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- 584,648 pages of educational content extracted from kompy.info, a comprehensive educational resource website",
"raw": "- 584,648 pages of educational content extracted from kompy.info, a comprehensive educational resource website",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Content exclusively in Uzbek language, focusing on technical and scientific topics",
"raw": "- Content exclusively in Uzbek language, focusing on technical and scientific topics",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Each entry contains: URL, page title, and extracted main text content",
"raw": "- Each entry contains: URL, page title, and extracted main text content",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Data extracted using trafilatura HTML extraction tool",
"raw": "- Data extracted using trafilatura HTML extraction tool",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Covers a wide range of academic and educational materials",
"raw": "- Covers a wide range of academic and educational materials",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Released to the public domain under Creative Commons Zero (CC0) license",
"raw": "- Released to the public domain under Creative Commons Zero (CC0) license",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The dataset presents a valuable resource for natural language processing tasks in the Uzbek language, particularly in educational and technical domains. It can be used for text classification, topic modeling, and content analysis of educational materials. The large-scale collection of Uzbek-language academic content makes it especially useful for developing educational technology applications and studying pedagogical approaches in Uzbek-language instruction. The dataset's monolingual nature provides a focused corpus for understanding technical and scientific terminology in Uzbek educational contexts.",
"raw": "The dataset presents a valuable resource for natural language processing tasks in the Uzbek language, particularly in educational and technical domains. It can be used for text classification, topic modeling, and content analysis of educational materials. The large-scale collection of Uzbek-language academic content makes it especially useful for developing educational technology applications and studying pedagogical approaches in Uzbek-language instruction. The dataset's monolingual nature provides a focused corpus for understanding technical and scientific terminology in Uzbek educational contexts.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ Introducing Kompy.info Uzbek Educational Dataset - https://huggingface.co/datasets/nyuuzyou/kompy
Dataset highlights:
- 584,648 pages of educational content extracted from kompy.info, a comprehensive educational resource website
- Content exclusively in Uzbek language, focusing on technical and scientific topics
- Each entry contains: URL, page title, and extracted main text content
- Data extracted using trafilatura HTML extraction tool
- Covers a wide range of academic and educational materials
- Released to the public domain under Creative Commons Zero (CC0) license
The dataset presents a valuable resource for natural language processing tasks in the Uzbek language, particularly in educational and technical domains. It can be used for text classification, topic modeling, and content analysis of educational materials. The large-scale collection of Uzbek-language academic content makes it especially useful for developing educational technology applications and studying pedagogical approaches in Uzbek-language instruction. The dataset's monolingual nature provides a focused corpus for understanding technical and scientific terminology in Uzbek educational contexts. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643ac5d2e2b979ae6144d68c/Z7PCNopn4cQeAYnVJDoqG.png",
"fullname": "nyuuzyou",
"name": "nyuuzyou",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 58,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"djuna"
],
"count": 2
}
] | 2024-11-05T08:23:58.000Z | 2024-11-05T08:23:58.491Z | [] | /posts/nyuuzyou/605570550011105 | 1,404 | 0 |
357651569752343 | [
{
"type": "text",
"value": "๐ Excited to introduce a new member of the OS-Copilot family: OS-Atlas - an open-sourced foundational action model for GUI agents",
"raw": "๐ Excited to introduce a new member of the OS-Copilot family: OS-Atlas - an open-sourced foundational action model for GUI agents",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Paper: ",
"raw": "๐ Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2410.23218",
"resource": {
"type": "paper",
"id": "2410.23218",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2410.23218",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "OS-ATLAS: A Foundation Action Model for Generalist GUI Agents (2410.23218)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Website: ",
"raw": "๐ Website: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://osatlas.github.io",
"resource": null,
"url": null,
"href": "https://osatlas.github.io",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ TL;DR: OS-Atlas offers:",
"raw": "๐ TL;DR: OS-Atlas offers:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1. State-of-the-Art GUI Grounding: Helps GUI agents accurately locate GUI elements.",
"raw": "1. State-of-the-Art GUI Grounding: Helps GUI agents accurately locate GUI elements.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2. Strong OOD Performance and Cross-platform Compatibility: Excels in out-of-domain agentic tasks across MacOS, Windows, Linux, Android, and Web. ",
"raw": "2. Strong OOD Performance and Cross-platform Compatibility: Excels in out-of-domain agentic tasks across MacOS, Windows, Linux, Android, and Web. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3. Complete Infrastructure for GUI Data Synthesis: ",
"raw": "3. Complete Infrastructure for GUI Data Synthesis: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "You can easily build your own OS agent upon it!",
"raw": "You can easily build your own OS agent upon it!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ Excited to introduce a new member of the OS-Copilot family: OS-Atlas - an open-sourced foundational action model for GUI agents
๐ Paper: https://huggingface.co/papers/2410.23218
๐ Website: https://osatlas.github.io
๐ TL;DR: OS-Atlas offers:
1. State-of-the-Art GUI Grounding: Helps GUI agents accurately locate GUI elements.
2. Strong OOD Performance and Cross-platform Compatibility: Excels in out-of-domain agentic tasks across MacOS, Windows, Linux, Android, and Web.
3. Complete Infrastructure for GUI Data Synthesis:
You can easily build your own OS agent upon it!
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656d73ed0bbc114fe6449704/gpteBU9GmKSHRVkRBUHld.png",
"fullname": "Symbol-LLM",
"name": "Symbol-LLM",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"Symbol-LLM",
"John6666",
"MH0386"
],
"count": 3
},
{
"reaction": "๐ฅ",
"users": [
"Symbol-LLM",
"topaasi"
],
"count": 2
}
] | 2024-11-05T07:44:56.000Z | 2024-11-05T07:44:56.791Z | [] | /posts/Symbol-LLM/357651569752343 | 2,077 | 0 |
152856622703579 | [
{
"type": "text",
"value": "๐ New feature of the Comparator of the ๐ค Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!",
"raw": "๐ New feature of the Comparator of the ๐ค Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ ๏ธ Here's how to use it:",
"raw": "๐ ๏ธ Here's how to use it:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1. Select your model from the leaderboard.",
"raw": "1. Select your model from the leaderboard.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2. Load its model tree.",
"raw": "2. Load its model tree.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.",
"raw": "3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "4. Press Load.",
"raw": "4. Press Load.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "See side-by-side performance metrics instantly!",
"raw": "See side-by-side performance metrics instantly!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Ready to dive in? ๐ Try the ๐ค Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: ",
"raw": "Ready to dive in? ๐ Try the ๐ค Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/open-llm-leaderboard/comparator",
"resource": {
"type": "space",
"id": "open-llm-leaderboard/comparator",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/open-llm-leaderboard/comparator",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ๐",
"raw": " ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ New feature of the Comparator of the ๐ค Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!
๐ ๏ธ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!
Ready to dive in? ๐ Try the ๐ค Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: https://huggingface.co/spaces/open-llm-leaderboard/comparator ๐ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1606406298765-noauth.jpeg",
"fullname": "Albert Villanova",
"name": "albertvillanova",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 192,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5fbfd09ee366524fe8e97cd3/qOVAuQMQ4EcpTf6TyfvEU.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5fbfd09ee366524fe8e97cd3/KdBSQXL0l8mGZKWyW4dhl.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"djuna"
],
"count": 2
}
] | 2024-11-05T07:36:05.000Z | 2024-11-05T07:36:05.400Z | [] | /posts/albertvillanova/152856622703579 | 1,168 | 0 |
349087953329777 | [
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu",
"resource": {
"type": "model",
"id": "Alfitaria/Q25-1.5B-VeoLu",
"discussionNum": null
},
"url": "https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Q2.5-1.5-VeoLu is a 1.5 billion parameter General Purpose Creative model trained on Qwen2.5-1.5B-Instruct. Intended mostly as an educational process for myself, Veo Lu nevertheless manages to be usable most of the time, while also being light enough to potentially run on a smartphone.",
"raw": "Q2.5-1.5-VeoLu is a 1.5 billion parameter General Purpose Creative model trained on Qwen2.5-1.5B-Instruct. Intended mostly as an educational process for myself, Veo Lu nevertheless manages to be usable most of the time, while also being light enough to potentially run on a smartphone.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu
Q2.5-1.5-VeoLu is a 1.5 billion parameter General Purpose Creative model trained on Qwen2.5-1.5B-Instruct. Intended mostly as an educational process for myself, Veo Lu nevertheless manages to be usable most of the time, while also being light enough to potentially run on a smartphone. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6685d39f64da708c0f553c5d/d9EvSPFssc-jproPdAszF.png",
"fullname": "Bot",
"name": "inflatebot",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 37,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6685d39f64da708c0f553c5d/zGH2GfvXTV_2fzJYsb3ro.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"bannerlord",
"djuna",
"AuriAetherwiing",
"EloyOn",
"Rsln"
],
"count": 6
}
] | 2024-11-04T21:39:54.000Z | 2024-11-04T23:30:44.740Z | [] | /posts/inflatebot/349087953329777 | 2,464 | 1 |
803833380128579 | [
{
"type": "text",
"value": "Hi HugginfgFacers!๐ค",
"raw": "Hi HugginfgFacers!๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If you're into biomedical sciences, you will know the pain that, sometimes, searching PubMed can be๐โโ๏ธ",
"raw": "If you're into biomedical sciences, you will know the pain that, sometimes, searching PubMed can be๐โโ๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For these purposes, I built a bot that scrapes PubMed for you, starting from the exact title of a publication or key word search - all beautifully rendered through Gradioโ
",
"raw": "For these purposes, I built a bot that scrapes PubMed for you, starting from the exact title of a publication or key word search - all beautifully rendered through Gradioโ
",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Find it here: ",
"raw": "Find it here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/BioMedicalPapersBot",
"resource": {
"type": "space",
"id": "as-cle-bert/BioMedicalPapersBot",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/BioMedicalPapersBot",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "And here's the GitHub repository๐ฑ: ",
"raw": "And here's the GitHub repository๐ฑ: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/AstraBert/BioMedicalPapersBot",
"resource": null,
"url": null,
"href": "https://github.com/AstraBert/BioMedicalPapersBot",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It's also available as a Docker image!๐ณ",
"raw": "It's also available as a Docker image!๐ณ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "code_fence",
"value": null,
"raw": "```\ndocker pull ghcr.io/astrabert/biomedicalpapersbot:main\n```",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": "docker pull ghcr.io/astrabert/biomedicalpapersbot:main",
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Best of luck with your research!",
"raw": "Best of luck with your research!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "PS: in the very near future some AI summarization features will be included!",
"raw": "PS: in the very near future some AI summarization features will be included!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi HugginfgFacers!๐ค
If you're into biomedical sciences, you will know the pain that, sometimes, searching PubMed can be๐โโ๏ธ
For these purposes, I built a bot that scrapes PubMed for you, starting from the exact title of a publication or key word search - all beautifully rendered through Gradioโ
Find it here: https://huggingface.co/spaces/as-cle-bert/BioMedicalPapersBot
And here's the GitHub repository๐ฑ: https://github.com/AstraBert/BioMedicalPapersBot
It's also available as a Docker image!๐ณ
```
docker pull ghcr.io/astrabert/biomedicalpapersbot:main
```
Best of luck with your research!
PS: in the very near future some AI summarization features will be included! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"ZyAi",
"eskayML"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"John6666"
],
"count": 1
}
] | 2024-11-04T19:39:35.000Z | 2024-11-04T19:39:35.760Z | [] | /posts/as-cle-bert/803833380128579 | 1,598 | 0 |
174987179729363 | [
{
"type": "text",
"value": "Exciting Research Alert: Revolutionizing Dense Passage Retrieval with Entailment Tuning!",
"raw": "Exciting Research Alert: Revolutionizing Dense Passage Retrieval with Entailment Tuning!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The good folks at HKUST have developed a novel approach that significantly improves information retrieval by leveraging natural language inference.",
"raw": "The good folks at HKUST have developed a novel approach that significantly improves information retrieval by leveraging natural language inference.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The entailment tuning approach consists of several key steps to enhance dense passage retrieval performance.",
"raw": "The entailment tuning approach consists of several key steps to enhance dense passage retrieval performance.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Data Preparation",
"raw": "Data Preparation",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Convert questions into existence claims using rule-based transformations.",
"raw": "- Convert questions into existence claims using rule-based transformations.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Combine retrieval data with NLI data from SNLI and MNLI datasets.",
"raw": "- Combine retrieval data with NLI data from SNLI and MNLI datasets.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Unify the format of both data types using a consistent prompting framework.",
"raw": "- Unify the format of both data types using a consistent prompting framework.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Entailment Tuning Process",
"raw": "Entailment Tuning Process",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Initialize the model using pre-trained language models like BERT or RoBERTa.",
"raw": "- Initialize the model using pre-trained language models like BERT or RoBERTa.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Apply aggressive masking (ฮฒ=0.8) specifically to the hypothesis components while preserving premise information.",
"raw": "- Apply aggressive masking (ฮฒ=0.8) specifically to the hypothesis components while preserving premise information.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Train the model to predict the masked hypothesis tokens from the premise content.",
"raw": "- Train the model to predict the masked hypothesis tokens from the premise content.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Run the training for 10 epochs using 8 GPUs, taking approximately 1.5-3.5 hours.",
"raw": "- Run the training for 10 epochs using 8 GPUs, taking approximately 1.5-3.5 hours.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Training Arguments for Entailment Tuning (Yes! They Shared Them)",
"raw": "Training Arguments for Entailment Tuning (Yes! They Shared Them)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Use a learning rate of 2e-5 with 100 warmup steps.",
"raw": "- Use a learning rate of 2e-5 with 100 warmup steps.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Set batch size to 128.",
"raw": "- Set batch size to 128.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Apply weight decay of 0.01.",
"raw": "- Apply weight decay of 0.01.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Utilize the Adam optimizer with beta values (0.9, 0.999).",
"raw": "- Utilize the Adam optimizer with beta values (0.9, 0.999).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Maintain maximum gradient norm at 1.0.",
"raw": "- Maintain maximum gradient norm at 1.0.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Deployment",
"raw": "Deployment",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Index passages using FAISS for efficient retrieval.",
"raw": "- Index passages using FAISS for efficient retrieval.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Shard vector store across multiple GPUs.",
"raw": "- Shard vector store across multiple GPUs.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Enable sub-millisecond retrieval of the top-100 passages per query.",
"raw": "- Enable sub-millisecond retrieval of the top-100 passages per query.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Integration with Existing Systems",
"raw": "Integration with Existing Systems",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Insert entailment tuning between pre-training and fine-tuning stages.",
"raw": "- Insert entailment tuning between pre-training and fine-tuning stages.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Maintain compatibility with current dense retrieval methods.",
"raw": "- Maintain compatibility with current dense retrieval methods.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Preserve existing contrastive learning approaches during fine-tuning.",
"raw": "- Preserve existing contrastive learning approaches during fine-tuning.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Simple, intuitive, and effective!",
"raw": "Simple, intuitive, and effective!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This advancement significantly improves the quality of retrieved passages for question-answering systems and retrieval-augmented generation tasks.",
"raw": "This advancement significantly improves the quality of retrieved passages for question-answering systems and retrieval-augmented generation tasks.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Exciting Research Alert: Revolutionizing Dense Passage Retrieval with Entailment Tuning!
The good folks at HKUST have developed a novel approach that significantly improves information retrieval by leveraging natural language inference.
The entailment tuning approach consists of several key steps to enhance dense passage retrieval performance.
Data Preparation
- Convert questions into existence claims using rule-based transformations.
- Combine retrieval data with NLI data from SNLI and MNLI datasets.
- Unify the format of both data types using a consistent prompting framework.
Entailment Tuning Process
- Initialize the model using pre-trained language models like BERT or RoBERTa.
- Apply aggressive masking (ฮฒ=0.8) specifically to the hypothesis components while preserving premise information.
- Train the model to predict the masked hypothesis tokens from the premise content.
- Run the training for 10 epochs using 8 GPUs, taking approximately 1.5-3.5 hours.
Training Arguments for Entailment Tuning (Yes! They Shared Them)
- Use a learning rate of 2e-5 with 100 warmup steps.
- Set batch size to 128.
- Apply weight decay of 0.01.
- Utilize the Adam optimizer with beta values (0.9, 0.999).
- Maintain maximum gradient norm at 1.0.
Deployment
- Index passages using FAISS for efficient retrieval.
- Shard vector store across multiple GPUs.
- Enable sub-millisecond retrieval of the top-100 passages per query.
Integration with Existing Systems
- Insert entailment tuning between pre-training and fine-tuning stages.
- Maintain compatibility with current dense retrieval methods.
- Preserve existing contrastive learning approaches during fine-tuning.
Simple, intuitive, and effective!
This advancement significantly improves the quality of retrieved passages for question-answering systems and retrieval-augmented generation tasks. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/kffXQV_Z_yfdXofVCTzxK.jpeg"
}
] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"Kaba",
"louisbrulenaudet",
"tiendung",
"KarthikaRajagopal",
"AtAndDev",
"Bruno"
],
"count": 7
}
] | 2024-11-04T18:38:57.000Z | 2024-11-04T18:38:57.246Z | [] | /posts/singhsidhukuldeep/174987179729363 | 2,066 | 0 |
650249025161705 | [
{
"type": "text",
"value": "Smol TTS models are here! OuteTTS-0.1-350M - Zero shot voice cloning, built on LLaMa architecture, CC-BY license! ๐ฅ ",
"raw": "Smol TTS models are here! OuteTTS-0.1-350M - Zero shot voice cloning, built on LLaMa architecture, CC-BY license! ๐ฅ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Pure language modeling approach to TTS",
"raw": "> Pure language modeling approach to TTS",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Zero-shot voice cloning",
"raw": "> Zero-shot voice cloning",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> LLaMa architecture w/ Audio tokens (WavTokenizer)",
"raw": "> LLaMa architecture w/ Audio tokens (WavTokenizer)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> BONUS: Works on-device w/ llama.cpp โก",
"raw": "> BONUS: Works on-device w/ llama.cpp โก",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Three-step approach to TTS:",
"raw": "Three-step approach to TTS:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Audio tokenization using WavTokenizer (75 tok per second)",
"raw": "> Audio tokenization using WavTokenizer (75 tok per second)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> CTC forced alignment for word-to-audio token mapping",
"raw": "> CTC forced alignment for word-to-audio token mapping",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "> Structured prompt creation w/ transcription, duration, audio tokens",
"raw": "> Structured prompt creation w/ transcription, duration, audio tokens",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The model is extremely impressive for 350M parameters! Kudos to the ",
"raw": "The model is extremely impressive for 350M parameters! Kudos to the ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "OuteAI team on such a brilliant feat - I'd love to see this be applied on larger data and smarter backbones like SmolLM ๐ค",
"raw": "OuteAI team on such a brilliant feat - I'd love to see this be applied on larger data and smarter backbones like SmolLM ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out the models here: ",
"raw": "Check out the models here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/OuteAI/outetts-6728aa71a53a076e4ba4817c",
"resource": {
"type": "collection",
"id": "OuteAI/outetts-6728aa71a53a076e4ba4817c",
"discussionNum": null
},
"url": "https://huggingface.co/collections/OuteAI/outetts-6728aa71a53a076e4ba4817c",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Smol TTS models are here! OuteTTS-0.1-350M - Zero shot voice cloning, built on LLaMa architecture, CC-BY license! ๐ฅ
> Pure language modeling approach to TTS
> Zero-shot voice cloning
> LLaMa architecture w/ Audio tokens (WavTokenizer)
> BONUS: Works on-device w/ llama.cpp โก
Three-step approach to TTS:
> Audio tokenization using WavTokenizer (75 tok per second)
> CTC forced alignment for word-to-audio token mapping
> Structured prompt creation w/ transcription, duration, audio tokens
The model is extremely impressive for 350M parameters! Kudos to the
OuteAI team on such a brilliant feat - I'd love to see this be applied on larger data and smarter backbones like SmolLM ๐ค
Check out the models here: https://huggingface.co/collections/OuteAI/outetts-6728aa71a53a076e4ba4817c | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655385361868-61b85ce86eb1f2c5e6233736.jpeg",
"fullname": "Vaibhav Srivastav",
"name": "reach-vb",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 439,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61b85ce86eb1f2c5e6233736/jH7WoESriWkSMxZ5AFHKq.mp4"
}
] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"s3nh"
],
"count": 2
}
] | 2024-11-04T17:01:03.000Z | 2024-11-04T17:01:03.981Z | [] | /posts/reach-vb/650249025161705 | 1,195 | 0 |
744243343133134 | [
{
"type": "text",
"value": "Build datasets for AI on the Hugging Face Hubโ10x easier than ever!",
"raw": "Build datasets for AI on the Hugging Face Hubโ10x easier than ever!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Today, I'm excited to share our biggest feature since we joined Hugging Face. ",
"raw": "Today, I'm excited to share our biggest feature since we joined Hugging Face. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Hereโs how it works:",
"raw": "Hereโs how it works:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1. Pick a datasetโupload your own or choose from 240K open datasets.",
"raw": "1. Pick a datasetโupload your own or choose from 240K open datasets.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2. Paste the Hub dataset ID into Argilla and set up your labeling interface.",
"raw": "2. Paste the Hub dataset ID into Argilla and set up your labeling interface.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3. Share the URL with your team or the whole community!",
"raw": "3. Share the URL with your team or the whole community!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "And the best part? Itโs:",
"raw": "And the best part? Itโs:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- No code โ no Python needed",
"raw": "- No code โ no Python needed",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Integrated โ all within the Hub",
"raw": "- Integrated โ all within the Hub",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Scalable โ from solo labeling to 100s of contributors",
"raw": "- Scalable โ from solo labeling to 100s of contributors",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I am incredibly proud of the team for shipping this after weeks of work and many quick iterations.",
"raw": "I am incredibly proud of the team for shipping this after weeks of work and many quick iterations.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Let's make this sentence obsolete: \"Everyone wants to do the model work, not the data work.\"",
"raw": "Let's make this sentence obsolete: \"Everyone wants to do the model work, not the data work.\"",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Read, share, and like the HF blog post:",
"raw": "Read, share, and like the HF blog post:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/argilla-ui-hub",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/argilla-ui-hub",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Build datasets for AI on the Hugging Face Hubโ10x easier than ever!
Today, I'm excited to share our biggest feature since we joined Hugging Face.
Hereโs how it works:
1. Pick a datasetโupload your own or choose from 240K open datasets.
2. Paste the Hub dataset ID into Argilla and set up your labeling interface.
3. Share the URL with your team or the whole community!
And the best part? Itโs:
- No code โ no Python needed
- Integrated โ all within the Hub
- Scalable โ from solo labeling to 100s of contributors
I am incredibly proud of the team for shipping this after weeks of work and many quick iterations.
Let's make this sentence obsolete: "Everyone wants to do the model work, not the data work."
Read, share, and like the HF blog post:
https://huggingface.co/blog/argilla-ui-hub | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60420dccc15e823a685f2b03/Dn7QTyy9SZ7jKN6xpufVD.png",
"fullname": "Daniel Vila",
"name": "dvilasuero",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 230,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"Anthony10",
"NickyNicky"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"John6666"
],
"count": 1
}
] | 2024-11-04T16:32:11.000Z | 2024-11-04T16:32:11.170Z | [] | /posts/dvilasuero/744243343133134 | 606 | 0 |
220418838710285 | [
{
"type": "text",
"value": "Import any dataset from the Hub and configure your labeling tasks without needing any code!",
"raw": "Import any dataset from the Hub and configure your labeling tasks without needing any code!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows, and we would love to hear your feedback and ideas, so don't feel shy and reach out ๐ซถ๐ฝ",
"raw": "Really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows, and we would love to hear your feedback and ideas, so don't feel shy and reach out ๐ซถ๐ฝ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/argilla-ui-hub",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/argilla-ui-hub",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Import any dataset from the Hub and configure your labeling tasks without needing any code!
Really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows, and we would love to hear your feedback and ideas, so don't feel shy and reach out ๐ซถ๐ฝ
https://huggingface.co/blog/argilla-ui-hub
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"dvilasuero",
"NickyNicky"
],
"count": 3
}
] | 2024-11-04T16:15:53.000Z | 2024-11-04T16:15:53.521Z | [] | /posts/davidberenstein1957/220418838710285 | 2,055 | 0 |
336620283743824 | [
{
"type": "text",
"value": "๐ค๐กJust tried out ",
"raw": "๐ค๐กJust tried out ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@m-ric",
"resource": null,
"url": null,
"href": null,
"user": "m-ric",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 's new LLaMA-3.1 70B agent for data analysis. Impressive stuff. ",
"raw": " 's new LLaMA-3.1 70B agent for data analysis. Impressive stuff. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ข๐ Fed it the Titanic passenger dataset with minimal instructions. The agent autonomously dug in, tested hypotheses, and reached some intriguing conclusions:",
"raw": "๐ข๐ Fed it the Titanic passenger dataset with minimal instructions. The agent autonomously dug in, tested hypotheses, and reached some intriguing conclusions:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "\"Lower class passengers less likely to survive, slight negative correlation with age, and positive correlation between fare price and survival.\" ",
"raw": "\"Lower class passengers less likely to survive, slight negative correlation with age, and positive correlation between fare price and survival.\" ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐It even generated charts to visualize the findings! ",
"raw": "๐It even generated charts to visualize the findings! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ง ๐ผ Great potential for business intelligence, research, and decision-making when we can just upload datasets and let AI agents loose on them. ",
"raw": "๐ง ๐ผ Great potential for business intelligence, research, and decision-making when we can just upload datasets and let AI agents loose on them. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Check it out: ",
"raw": "๐ Check it out: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/m-ric/agent-data-analyst",
"resource": {
"type": "space",
"id": "m-ric/agent-data-analyst",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/m-ric/agent-data-analyst",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ค Any particular use cases you're excited about?",
"raw": "๐ค Any particular use cases you're excited about?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "#AIinDataAnalysis #MachineLearning #DataScience",
"raw": "#AIinDataAnalysis #MachineLearning #DataScience",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ค๐กJust tried out @m-ric 's new LLaMA-3.1 70B agent for data analysis. Impressive stuff.
๐ข๐ Fed it the Titanic passenger dataset with minimal instructions. The agent autonomously dug in, tested hypotheses, and reached some intriguing conclusions:
"Lower class passengers less likely to survive, slight negative correlation with age, and positive correlation between fare price and survival."
๐It even generated charts to visualize the findings!
๐ง ๐ผ Great potential for business intelligence, research, and decision-making when we can just upload datasets and let AI agents loose on them.
๐ Check it out: https://huggingface.co/spaces/m-ric/agent-data-analyst
๐ค Any particular use cases you're excited about?
#AIinDataAnalysis #MachineLearning #DataScience | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/iDJVQOcxoCJXAMFVwitoq.qt"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg",
"fullname": "Aymeric Roucher",
"name": "m-ric",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 476
}
] | [
{
"reaction": "๐ฅ",
"users": [
"osanseviero",
"m-ric"
],
"count": 2
}
] | 2024-07-24T15:12:35.000Z | 2024-07-25T17:17:10.453Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c83abc46421a2efe8160d0/Yy6IvAusEgxQ0qhCvB0Ka.jpeg",
"fullname": "Mac Szankin",
"name": "macsz",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/fdaudens/336620283743824 | 814 | 3 |
317039567154641 | [
{
"type": "text",
"value": "Hello there, ",
"raw": "Hello there, ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "New model released, my goal was to try finetune on the last Llama-3.1-8B-Instruct but not a small train, I wanted to do something useful.",
"raw": "New model released, my goal was to try finetune on the last Llama-3.1-8B-Instruct but not a small train, I wanted to do something useful.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "One of the rare model that I didn't made for RP, or in the goal to uncensor it (but I did anyway kek).",
"raw": "One of the rare model that I didn't made for RP, or in the goal to uncensor it (but I did anyway kek).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The model was trained on 9M Claude conversations ONLY, giving him another writting style.",
"raw": "The model was trained on 9M Claude conversations ONLY, giving him another writting style.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude",
"resource": {
"type": "model",
"id": "Undi95/Meta-Llama-3.1-8B-Claude",
"discussionNum": null
},
"url": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " > OG release fp32, it's the epoch 2",
"raw": " > OG release fp32, it's the epoch 2",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-bf16",
"resource": {
"type": "model",
"id": "Undi95/Meta-Llama-3.1-8B-Claude-bf16",
"discussionNum": null
},
"url": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-bf16",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " > Base model resharded in bf16 waiting for available quant without issues",
"raw": " > Base model resharded in bf16 waiting for available quant without issues",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Since it's frustrating to be censored using a local model, orthogonal activation steering was used, trying to force the model to never refuse a prompt.",
"raw": "Since it's frustrating to be censored using a local model, orthogonal activation steering was used, trying to force the model to never refuse a prompt.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-68fail-3000total",
"resource": {
"type": "model",
"id": "Undi95/Meta-Llama-3.1-8B-Claude-68fail-3000total",
"discussionNum": null
},
"url": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-68fail-3000total",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " > Uncensored model, refuse 68 times on 3000 toxic prompt",
"raw": " > Uncensored model, refuse 68 times on 3000 toxic prompt",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-39fail-3000total",
"resource": {
"type": "model",
"id": "Undi95/Meta-Llama-3.1-8B-Claude-39fail-3000total",
"discussionNum": null
},
"url": "https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-39fail-3000total",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " > Uncensored model, refuse 39 times on 3000 toxic prompt",
"raw": " > Uncensored model, refuse 39 times on 3000 toxic prompt",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It still refuse some prompt but the majority of them is uncensored. OAS can make a model more dumb or make the base perplexity go higher, so I didn't snipe for 0 refusal.",
"raw": "It still refuse some prompt but the majority of them is uncensored. OAS can make a model more dumb or make the base perplexity go higher, so I didn't snipe for 0 refusal.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I don't do non-RP model a lot so any feedback is welcome, I would like to re-use this base for some others future project if needed.",
"raw": "I don't do non-RP model a lot so any feedback is welcome, I would like to re-use this base for some others future project if needed.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hello there,
New model released, my goal was to try finetune on the last Llama-3.1-8B-Instruct but not a small train, I wanted to do something useful.
One of the rare model that I didn't made for RP, or in the goal to uncensor it (but I did anyway kek).
The model was trained on 9M Claude conversations ONLY, giving him another writting style.
https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude > OG release fp32, it's the epoch 2
https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-bf16 > Base model resharded in bf16 waiting for available quant without issues
Since it's frustrating to be censored using a local model, orthogonal activation steering was used, trying to force the model to never refuse a prompt.
https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-68fail-3000total > Uncensored model, refuse 68 times on 3000 toxic prompt
https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude-39fail-3000total > Uncensored model, refuse 39 times on 3000 toxic prompt
It still refuse some prompt but the majority of them is uncensored. OAS can make a model more dumb or make the base perplexity go higher, so I didn't snipe for 0 refusal.
I don't do non-RP model a lot so any feedback is welcome, I would like to re-use this base for some others future project if needed. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3283,
"isFollowing": false
} | [] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"Chief-Inspector",
"MarinaraSpaghetti",
"Herman555",
"John6666",
"AtonMountlook",
"Ramikan-BR",
"DuckyBlender",
"osanseviero",
"mambiux",
"den0620",
"win10"
],
"count": 11
}
] | 2024-07-24T15:08:08.000Z | 2024-07-24T22:22:31.327Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/666627d86cd2ef174a6e2257/QVRd7WN6kVCtT5BDpf8vq.png",
"fullname": "Invisietch",
"name": "invisietch",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 51,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3283,
"isFollowing": false
},
{
"avatarUrl": "/avatars/18daa2d580f5f35cf850bc9df8a03755.svg",
"fullname": "Sporkness",
"name": "SporkySporkness",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
}
] | /posts/Undi95/317039567154641 | 10,121 | 4 |
746206932023722 | [
{
"type": "text",
"value": "Professional Threads Post Writer",
"raw": "Professional Threads Post Writer",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://hf.co/chat/assistant/66a0ee6fc397ecb70cee100d",
"resource": null,
"url": null,
"href": "https://hf.co/chat/assistant/66a0ee6fc397ecb70cee100d",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Professional Threads Post Writer
https://hf.co/chat/assistant/66a0ee6fc397ecb70cee100d | {
"avatarUrl": "/avatars/d773a7dd9b706759131fc482ab71ced7.svg",
"fullname": "[email protected]",
"name": "Taf2023",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 8,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64841af2295256340e4b9f88/KjqQvbwV8O9Jtz_-L49tS.webp"
}
] | [] | [] | 2024-07-24T12:11:18.000Z | 2024-07-24T12:11:18.282Z | [] | /posts/Taf2023/746206932023722 | 491 | 0 |
999630929802342 | [
{
"type": "text",
"value": "Llama 3.1 405B Instruct beats GPT-4o on MixEval-Hard",
"raw": "Llama 3.1 405B Instruct beats GPT-4o on MixEval-Hard",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Just ran MixEval for 405B, Sonnet-3.5 and 4o, with 405B landing right between the other two at 66.19",
"raw": "Just ran MixEval for 405B, Sonnet-3.5 and 4o, with 405B landing right between the other two at 66.19",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The GPT-4o result of 64.7 replicated locally but Sonnet-3.5 actually scored 70.25/69.45 in my replications ๐ค Still well ahead of the other 2 though.",
"raw": "The GPT-4o result of 64.7 replicated locally but Sonnet-3.5 actually scored 70.25/69.45 in my replications ๐ค Still well ahead of the other 2 though.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Sammple of 1 of the eval calls here: ",
"raw": "Sammple of 1 of the eval calls here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://wandb.ai/morgan/MixEval/weave/calls/07b05ae2-2ef5-4525-98a6-c59963b76fe1",
"resource": null,
"url": null,
"href": "https://wandb.ai/morgan/MixEval/weave/calls/07b05ae2-2ef5-4525-98a6-c59963b76fe1",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Quick auto-logging tracing for openai-compatible clients and many more here: ",
"raw": "Quick auto-logging tracing for openai-compatible clients and many more here: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://wandb.github.io/weave/quickstart/",
"resource": null,
"url": null,
"href": "https://wandb.github.io/weave/quickstart/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Llama 3.1 405B Instruct beats GPT-4o on MixEval-Hard
Just ran MixEval for 405B, Sonnet-3.5 and 4o, with 405B landing right between the other two at 66.19
The GPT-4o result of 64.7 replicated locally but Sonnet-3.5 actually scored 70.25/69.45 in my replications ๐ค Still well ahead of the other 2 though.
Sammple of 1 of the eval calls here: https://wandb.ai/morgan/MixEval/weave/calls/07b05ae2-2ef5-4525-98a6-c59963b76fe1
Quick auto-logging tracing for openai-compatible clients and many more here: https://wandb.github.io/weave/quickstart/
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1618571183509-5f05a97d5d08220171a0ad9d.png",
"fullname": "Morgan McGuire",
"name": "morgan",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 18,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f05a97d5d08220171a0ad9d/gaQche2YCXq0TTnmmH2Ol.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f05a97d5d08220171a0ad9d/TbXxia4lQrX5KLU-6405Z.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f05a97d5d08220171a0ad9d/TdTwni8FlXwUvBwEdrXFC.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f05a97d5d08220171a0ad9d/r0Zz8rrXpvj0oQ6avJWNi.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f05a97d5d08220171a0ad9d/gxIoZoSeQFVMtnBcD08nK.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"zenosai",
"macsz",
"Corvius"
],
"count": 3
},
{
"reaction": "๐ฅ",
"users": [
"Rohitkhatri75436"
],
"count": 1
}
] | 2024-07-24T11:27:03.000Z | 2024-07-24T11:27:03.893Z | [] | /posts/morgan/999630929802342 | 1,296 | 0 |
223487471308830 | [
{
"type": "text",
"value": "Hi HF community!๐ค",
"raw": "Hi HF community!๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Hope y'all are as excited as me for the release of Llama 3.1! ๐ฆ",
"raw": "Hope y'all are as excited as me for the release of Llama 3.1! ๐ฆ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Following the release, I built a space exploiting HF Inference API, thanks to a recipe you can find in this awesome GitHub repo (",
"raw": "Following the release, I built a space exploiting HF Inference API, thanks to a recipe you can find in this awesome GitHub repo (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/huggingface/huggingface-llama-recipes/",
"resource": null,
"url": null,
"href": "https://github.com/huggingface/huggingface-llama-recipes/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "): you can now run Llama-3.1-405B customizing its system instructions and other parameters, for free! ๐",
"raw": "): you can now run Llama-3.1-405B customizing its system instructions and other parameters, for free! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Follow this link: ",
"raw": "Follow this link: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/Llama-3.1-405B-FP8",
"resource": {
"type": "space",
"id": "as-cle-bert/Llama-3.1-405B-FP8",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/Llama-3.1-405B-FP8",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " and let the fun begin!๐",
"raw": " and let the fun begin!๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi HF community!๐ค
Hope y'all are as excited as me for the release of Llama 3.1! ๐ฆ
Following the release, I built a space exploiting HF Inference API, thanks to a recipe you can find in this awesome GitHub repo (https://github.com/huggingface/huggingface-llama-recipes/): you can now run Llama-3.1-405B customizing its system instructions and other parameters, for free! ๐
Follow this link: https://huggingface.co/spaces/as-cle-bert/Llama-3.1-405B-FP8 and let the fun begin!๐ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"Nymbo",
"hg-crown",
"victor",
"Taylor658",
"osanseviero",
"louisbrulenaudet"
],
"count": 6
},
{
"reaction": "๐ค",
"users": [
"Nymbo",
"osanseviero"
],
"count": 2
},
{
"reaction": "๐ฅ",
"users": [
"alee5331",
"palash147"
],
"count": 2
},
{
"reaction": "๐คฏ",
"users": [
"Ruaruaruabick"
],
"count": 1
}
] | 2024-07-23T23:24:33.000Z | 2024-07-25T05:13:22.273Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
}
] | /posts/as-cle-bert/223487471308830 | 2,599 | 1 |
518017846754230 | [
{
"type": "text",
"value": "JUST RELEASED: Fireplace 2 for Llama 3.1 8b Instruct!",
"raw": "JUST RELEASED: Fireplace 2 for Llama 3.1 8b Instruct!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Fireplace 2 is an 'expansion pack' of structured outputs you can request during your chat, using special request tokens to let Llama know you're looking for specific types of responses:",
"raw": "Fireplace 2 is an 'expansion pack' of structured outputs you can request during your chat, using special request tokens to let Llama know you're looking for specific types of responses:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Inline function calls",
"raw": "Inline function calls",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "SQL queries",
"raw": "SQL queries",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "JSON objects",
"raw": "JSON objects",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Data visualization with matplotlib",
"raw": "Data visualization with matplotlib",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2",
"resource": {
"type": "model",
"id": "ValiantLabs/Llama3.1-8B-Fireplace2",
"discussionNum": null
},
"url": "https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | JUST RELEASED: Fireplace 2 for Llama 3.1 8b Instruct!
Fireplace 2 is an 'expansion pack' of structured outputs you can request during your chat, using special request tokens to let Llama know you're looking for specific types of responses:
Inline function calls
SQL queries
JSON objects
Data visualization with matplotlib
https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63444f2687964b331809eb55/WvZivsvKsM_t0tBtakovK.png",
"fullname": "t.d.a.g.",
"name": "sequelbox",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 50,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"victor",
"Taylor658",
"flflow"
],
"count": 3
}
] | 2024-07-23T22:46:18.000Z | 2024-07-27T12:50:05.046Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63444f2687964b331809eb55/WvZivsvKsM_t0tBtakovK.png",
"fullname": "t.d.a.g.",
"name": "sequelbox",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 50,
"isFollowing": false
},
{
"avatarUrl": "/avatars/4efbc5a76387d633144269181ee29b17.svg",
"fullname": "zouhair ",
"name": "zouhaor",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/sequelbox/518017846754230 | 1,328 | 2 |
833816740809851 | [
{
"type": "text",
"value": "Meta Researchers: How many compute hours should we use to train Llama 3.1?",
"raw": "Meta Researchers: How many compute hours should we use to train Llama 3.1?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Mr. Zuck: Yes! ๐ค๐ช",
"raw": "Mr. Zuck: Yes! ๐ค๐ช",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Good folks at ",
"raw": "Good folks at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@AIatMeta",
"resource": null,
"url": null,
"href": null,
"user": "AIatMeta",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " did not just release the models but also published a 92-page detailed paper ๐ on their findings and technical aspects of the models and their training process!",
"raw": " did not just release the models but also published a 92-page detailed paper ๐ on their findings and technical aspects of the models and their training process!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Generally, we just gobble up these weights and forget the compute infrastructure used to train these models. ๐ฅ๏ธ๐",
"raw": "Generally, we just gobble up these weights and forget the compute infrastructure used to train these models. ๐ฅ๏ธ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Here are some interesting findings about the computing infrastructure of Llamas:",
"raw": "Here are some interesting findings about the computing infrastructure of Llamas:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Llama 1 and 2 models were trained on ",
"raw": "- Llama 1 and 2 models were trained on ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Meta",
"resource": null,
"url": null,
"href": null,
"user": "Meta",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " 's AI Research SuperCluster. Llama 3 was migrated to Metaโs production clusters! ๐",
"raw": " 's AI Research SuperCluster. Llama 3 was migrated to Metaโs production clusters! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- That's 16,000 H100 GPUs, with each GPU featuring 700W TDP and 80GB HBM3, arranged in Metaโs Grand Teton AI server platform. ๐ฅ๏ธ๐",
"raw": "- That's 16,000 H100 GPUs, with each GPU featuring 700W TDP and 80GB HBM3, arranged in Metaโs Grand Teton AI server platform. ๐ฅ๏ธ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- What about storing checkpoints? Used Tectonic, a distributed file system, for storage, with capacities reaching 240 PB and peak throughput of 7 TB/s. ๐พ๐",
"raw": "- What about storing checkpoints? Used Tectonic, a distributed file system, for storage, with capacities reaching 240 PB and peak throughput of 7 TB/s. ๐พ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Meta's mad lads saved each GPUโs model state, ranging from 1 MB to 4 GB per GPU, for recovery and debugging. ๐ ๏ธ๐",
"raw": "- Meta's mad lads saved each GPUโs model state, ranging from 1 MB to 4 GB per GPU, for recovery and debugging. ๐ ๏ธ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If this sounds big, well, they document the humungous challenges that come with it:",
"raw": "If this sounds big, well, they document the humungous challenges that come with it:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- In the 54-day training period, there were 466 job interruptions. ๐๐",
"raw": "- In the 54-day training period, there were 466 job interruptions. ๐๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- About 78% of unexpected interruptions were attributed to confirmed or suspected hardware issues. Mostly GPUs! ๐ฅ๐ฅ๏ธ",
"raw": "- About 78% of unexpected interruptions were attributed to confirmed or suspected hardware issues. Mostly GPUs! ๐ฅ๐ฅ๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Saving all checkpoints is cool until you do it for the 300B+ parameters model. The bursty nature of checkpoint writes, essential for state-saving during training, periodically saturated the storage fabric, impacting performance. ๐๐พ",
"raw": "- Saving all checkpoints is cool until you do it for the 300B+ parameters model. The bursty nature of checkpoint writes, essential for state-saving during training, periodically saturated the storage fabric, impacting performance. ๐๐พ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- With all this, effective training timeโmeasured as the time spent on useful training over the elapsed timeโwas higher than 90%. โฑ๏ธ๐",
"raw": "- With all this, effective training timeโmeasured as the time spent on useful training over the elapsed timeโwas higher than 90%. โฑ๏ธ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I think this is the stuff that movies can be made on! ๐ฌ๐",
"raw": "I think this is the stuff that movies can be made on! ๐ฌ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://ai.meta.com/research/publications/the-llama-3-herd-of-models/",
"resource": null,
"url": null,
"href": "https://ai.meta.com/research/publications/the-llama-3-herd-of-models/",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Meta Researchers: How many compute hours should we use to train Llama 3.1?
Mr. Zuck: Yes! ๐ค๐ช
Good folks at @AIatMeta did not just release the models but also published a 92-page detailed paper ๐ on their findings and technical aspects of the models and their training process!
Generally, we just gobble up these weights and forget the compute infrastructure used to train these models. ๐ฅ๏ธ๐
Here are some interesting findings about the computing infrastructure of Llamas:
- Llama 1 and 2 models were trained on @Meta 's AI Research SuperCluster. Llama 3 was migrated to Metaโs production clusters! ๐
- That's 16,000 H100 GPUs, with each GPU featuring 700W TDP and 80GB HBM3, arranged in Metaโs Grand Teton AI server platform. ๐ฅ๏ธ๐
- What about storing checkpoints? Used Tectonic, a distributed file system, for storage, with capacities reaching 240 PB and peak throughput of 7 TB/s. ๐พ๐
- Meta's mad lads saved each GPUโs model state, ranging from 1 MB to 4 GB per GPU, for recovery and debugging. ๐ ๏ธ๐
If this sounds big, well, they document the humungous challenges that come with it:
- In the 54-day training period, there were 466 job interruptions. ๐๐
- About 78% of unexpected interruptions were attributed to confirmed or suspected hardware issues. Mostly GPUs! ๐ฅ๐ฅ๏ธ
- Saving all checkpoints is cool until you do it for the 300B+ parameters model. The bursty nature of checkpoint writes, essential for state-saving during training, periodically saturated the storage fabric, impacting performance. ๐๐พ
- With all this, effective training timeโmeasured as the time spent on useful training over the elapsed timeโwas higher than 90%. โฑ๏ธ๐
I think this is the stuff that movies can be made on! ๐ฌ๐
Paper: https://ai.meta.com/research/publications/the-llama-3-herd-of-models/ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/dqpSxD7BM-weUnMYRSGIC.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61e8c67cee1e1440121f0240/9sb__WsO5mwmdHHa6xKNc.jpeg",
"fullname": "Meta World Peace",
"name": "Meta",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5
}
] | [
{
"reaction": "๐คฏ",
"users": [
"YaTharThShaRma999",
"ZachZimm",
"palash147",
"Ketansomewhere",
"cnmoro",
"GPT007",
"John6666",
"Ruaruaruabick",
"ajibawa-2023",
"adorkin",
"Chief-Inspector",
"louisbrulenaudet"
],
"count": 12
},
{
"reaction": "๐ค",
"users": [
"bezir"
],
"count": 1
}
] | 2024-07-23T21:50:53.000Z | 2024-08-23T00:55:38.066Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
},
{
"avatarUrl": "/avatars/744eddaa7dfc34a57df9ce32a78059a0.svg",
"fullname": "Tyrone Pierce",
"name": "piercyy",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/singhsidhukuldeep/833816740809851 | 2,764 | 3 |
705580461777947 | [
{
"type": "text",
"value": "๐ Exciting news! We've just launched \"Thundermoon\" - the latest version of Moondream, our open-source vision language model! ๐",
"raw": "๐ Exciting news! We've just launched \"Thundermoon\" - the latest version of Moondream, our open-source vision language model! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Key improvements in this release:",
"raw": "Key improvements in this release:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1. Massive leap in OCR capabilities",
"raw": "1. Massive leap in OCR capabilities",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2. Enhanced document understanding",
"raw": "2. Enhanced document understanding",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3. Significant boosts across key metrics:",
"raw": "3. Significant boosts across key metrics:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " * DocVQA: 61.9 (โ103%)",
"raw": " * DocVQA: 61.9 (โ103%)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " * TextVQA: 60.2 (โ5.2%)",
"raw": " * TextVQA: 60.2 (โ5.2%)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " * GQA: 64.9 (โ2.9%)",
"raw": " * GQA: 64.9 (โ2.9%)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "What does this mean? Moondream can now tackle complex document analysis tasks with unprecedented accuracy for a model of its size. From deciphering handwritten notes to interpreting data tables, the applications are vast.",
"raw": "What does this mean? Moondream can now tackle complex document analysis tasks with unprecedented accuracy for a model of its size. From deciphering handwritten notes to interpreting data tables, the applications are vast.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out the image for a glimpse of Moondream in action, effortlessly extracting insights from a 1944 sugar industry document!",
"raw": "Check out the image for a glimpse of Moondream in action, effortlessly extracting insights from a 1944 sugar industry document!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Why it matters:",
"raw": "Why it matters:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "* Democratizing AI: As an open-source project, we're making advanced vision AI accessible to all developers.",
"raw": "* Democratizing AI: As an open-source project, we're making advanced vision AI accessible to all developers.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "* Efficiency: Proving that smaller models can deliver big results.",
"raw": "* Efficiency: Proving that smaller models can deliver big results.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "* Real-world impact: From historical document analysis to modern business intelligence, the potential use cases are exciting.",
"raw": "* Real-world impact: From historical document analysis to modern business intelligence, the potential use cases are exciting.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Curious to try it out? Try out the live demo here! ",
"raw": "Curious to try it out? Try out the live demo here! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://moondream.ai/playground",
"resource": null,
"url": null,
"href": "https://moondream.ai/playground",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ Exciting news! We've just launched "Thundermoon" - the latest version of Moondream, our open-source vision language model! ๐
Key improvements in this release:
1. Massive leap in OCR capabilities
2. Enhanced document understanding
3. Significant boosts across key metrics:
* DocVQA: 61.9 (โ103%)
* TextVQA: 60.2 (โ5.2%)
* GQA: 64.9 (โ2.9%)
What does this mean? Moondream can now tackle complex document analysis tasks with unprecedented accuracy for a model of its size. From deciphering handwritten notes to interpreting data tables, the applications are vast.
Check out the image for a glimpse of Moondream in action, effortlessly extracting insights from a 1944 sugar industry document!
Why it matters:
* Democratizing AI: As an open-source project, we're making advanced vision AI accessible to all developers.
* Efficiency: Proving that smaller models can deliver big results.
* Real-world impact: From historical document analysis to modern business intelligence, the potential use cases are exciting.
Curious to try it out? Try out the live demo here! https://moondream.ai/playground | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63117568fa95534e218da163/8h9zN8aKRxPLBnXW7sqY9.jpeg",
"fullname": "Vik Korrapati",
"name": "vikhyatk",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 365,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63117568fa95534e218da163/V0oHMWY9sul_4MUc11rgv.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"YaTharThShaRma999",
"Tom-Neverwinter",
"catastropiyush",
"rAIfle",
"rbrisita",
"louisbrulenaudet",
"Csplk"
],
"count": 7
},
{
"reaction": "๐",
"users": [
"ijohn07",
"sequelbox"
],
"count": 2
},
{
"reaction": "โค๏ธ",
"users": [
"Csplk",
"twenkid"
],
"count": 2
}
] | 2024-07-23T21:24:02.000Z | 2024-08-15T00:18:09.937Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63117568fa95534e218da163/8h9zN8aKRxPLBnXW7sqY9.jpeg",
"fullname": "Vik Korrapati",
"name": "vikhyatk",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 365,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6344a1b0762379fc63017e62/g4VIT8l2lZIj6AoQAwVy7.png",
"fullname": "John",
"name": "cmp-nct",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
},
{
"avatarUrl": "/avatars/c6bbbaa1178286a7f8ea418837a6b330.svg",
"fullname": "mnemic",
"name": "mnemic",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 10,
"isFollowing": false
}
] | /posts/vikhyatk/705580461777947 | 3,219 | 4 |
806846801467023 | [
{
"type": "text",
"value": "I just had a masterclass in open-source collaboration with the release of Llama 3.1 ๐ฆ๐ค",
"raw": "I just had a masterclass in open-source collaboration with the release of Llama 3.1 ๐ฆ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Meta dropped Llama 3.1, and seeing firsthand the Hugging Face team working to integrate it is nothing short of impressive. Their swift integration, comprehensive documentation, and innovative tools showcase the power of open-source teamwork. ",
"raw": "Meta dropped Llama 3.1, and seeing firsthand the Hugging Face team working to integrate it is nothing short of impressive. Their swift integration, comprehensive documentation, and innovative tools showcase the power of open-source teamwork. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For the curious minds:",
"raw": "For the curious minds:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Check out independent evaluations: ",
"raw": "๐ Check out independent evaluations: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard",
"resource": {
"type": "space",
"id": "open-llm-leaderboard/open_llm_leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ง Deep dive into the tech: ",
"raw": "๐ง Deep dive into the tech: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/llama31",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/llama31",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐จโ๐ณ Try different recipes (including running 8B on free Colab!): ",
"raw": "๐จโ๐ณ Try different recipes (including running 8B on free Colab!): ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/huggingface/huggingface-llama-recipes",
"resource": null,
"url": null,
"href": "https://github.com/huggingface/huggingface-llama-recipes",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Visualize open vs. closed LLM progress: ",
"raw": "๐ Visualize open vs. closed LLM progress: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/andrewrreed/closed-vs-open-arena-elo",
"resource": {
"type": "space",
"id": "andrewrreed/closed-vs-open-arena-elo",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/andrewrreed/closed-vs-open-arena-elo",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ค Generate synthetic data with distilabel, thanks to the new license allowing the use of outputs to train other LLMs ",
"raw": "๐ค Generate synthetic data with distilabel, thanks to the new license allowing the use of outputs to train other LLMs ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/llama31#synthetic-data-generation-with-distilabel",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/llama31#synthetic-data-generation-with-distilabel",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ก Pro tip: Experience the 405B version for free on HuggingChat, now with tool-calling capabilities! ",
"raw": "๐ก Pro tip: Experience the 405B version for free on HuggingChat, now with tool-calling capabilities! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/chat/",
"resource": null,
"url": null,
"href": "https://huggingface.co/chat/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "#OpenSourceAI #AIInnovation",
"raw": "#OpenSourceAI #AIInnovation",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I just had a masterclass in open-source collaboration with the release of Llama 3.1 ๐ฆ๐ค
Meta dropped Llama 3.1, and seeing firsthand the Hugging Face team working to integrate it is nothing short of impressive. Their swift integration, comprehensive documentation, and innovative tools showcase the power of open-source teamwork.
For the curious minds:
๐ Check out independent evaluations: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
๐ง Deep dive into the tech: https://huggingface.co/blog/llama31
๐จโ๐ณ Try different recipes (including running 8B on free Colab!): https://github.com/huggingface/huggingface-llama-recipes
๐ Visualize open vs. closed LLM progress: https://huggingface.co/spaces/andrewrreed/closed-vs-open-arena-elo
๐ค Generate synthetic data with distilabel, thanks to the new license allowing the use of outputs to train other LLMs https://huggingface.co/blog/llama31#synthetic-data-generation-with-distilabel
๐ก Pro tip: Experience the 405B version for free on HuggingChat, now with tool-calling capabilities! https://huggingface.co/chat/
#OpenSourceAI #AIInnovation | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/QBCQ3w4pegjRLDsa952DV.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"clem",
"gastonmorixe",
"victor",
"chuangxinlezhi",
"Utha",
"R-I-0816",
"fluxthedev",
"osanseviero"
],
"count": 8
},
{
"reaction": "โค๏ธ",
"users": [
"clboetticher",
"chuangxinlezhi",
"Utha",
"R-I-0816",
"osanseviero"
],
"count": 5
},
{
"reaction": "๐ค",
"users": [
"chuangxinlezhi",
"R-I-0816",
"osanseviero"
],
"count": 3
}
] | 2024-07-23T20:05:45.000Z | 2024-07-23T20:31:02.001Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem ๐ค",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1734,
"isFollowing": false
}
] | /posts/fdaudens/806846801467023 | 1,950 | 1 |
287713441577435 | [
{
"type": "text",
"value": "Any idea if this \"scheduled\"/\"dynamic\" batch size is available in HF Trainers ? I've never seenย itย personally",
"raw": "Any idea if this \"scheduled\"/\"dynamic\" batch size is available in HF Trainers ? I've never seenย itย personally",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Any idea if this "scheduled"/"dynamic" batch size is available in HF Trainers ? I've never seenย itย personally | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 188,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626237d9bbcbd1c34f1bb231/s5tGAm8jlvTzYlRraGIMY.png"
}
] | [] | [] | 2024-07-23T18:06:22.000Z | 2024-07-23T18:06:22.315Z | [] | /posts/alielfilali01/287713441577435 | 719 | 0 |
371684391877712 | [
{
"type": "text",
"value": "๐จ Launching The Visual Haystacks (VHs) Benchmark: the first \"visual-centric\" Needle-In-A-Haystack (NIAH) benchmark to assess LMMs' capability in long-context visual retrieval and reasoning. ",
"raw": "๐จ Launching The Visual Haystacks (VHs) Benchmark: the first \"visual-centric\" Needle-In-A-Haystack (NIAH) benchmark to assess LMMs' capability in long-context visual retrieval and reasoning. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check it out!",
"raw": "Check it out!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/tsunghanwu/visual_haystacks",
"resource": {
"type": "dataset",
"id": "tsunghanwu/visual_haystacks",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/tsunghanwu/visual_haystacks",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://visual-haystacks.github.io/",
"resource": null,
"url": null,
"href": "https://visual-haystacks.github.io/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2407.13766",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2407.13766",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/visual-haystacks/vhs_benchmark",
"resource": null,
"url": null,
"href": "https://github.com/visual-haystacks/vhs_benchmark",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐จ Launching The Visual Haystacks (VHs) Benchmark: the first "visual-centric" Needle-In-A-Haystack (NIAH) benchmark to assess LMMs' capability in long-context visual retrieval and reasoning.
Check it out!
https://huggingface.co/datasets/tsunghanwu/visual_haystacks
https://visual-haystacks.github.io/
https://arxiv.org/abs/2407.13766
https://github.com/visual-haystacks/vhs_benchmark | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669920369989-noauth.jpeg",
"fullname": "David Chan",
"name": "davidchan",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 6,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"osanseviero"
],
"count": 1
}
] | 2024-07-23T15:15:27.000Z | 2024-07-23T15:15:27.067Z | [] | /posts/davidchan/371684391877712 | 541 | 0 |
213182541433936 | [
{
"type": "text",
"value": "I just launched an exciting new multiplayer app powered by GPT-4o, enabling collaborative AI-driven queries in a single shared session! ",
"raw": "I just launched an exciting new multiplayer app powered by GPT-4o, enabling collaborative AI-driven queries in a single shared session! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "### ๐ Try It Out! ๐ Check out the GPT-4o Multiplayer App ",
"raw": "### ๐ Try It Out! ๐ Check out the GPT-4o Multiplayer App ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Experience the future of collaborative AI by visiting our space on Hugging Face: ",
"raw": "Experience the future of collaborative AI by visiting our space on Hugging Face: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/awacke1/ChatStreamlitMultiplayer",
"resource": {
"type": "space",
"id": "awacke1/ChatStreamlitMultiplayer",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/awacke1/ChatStreamlitMultiplayer",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ This innovative tool lets you and your team reason over:",
"raw": "๐ This innovative tool lets you and your team reason over:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "###๐ Text",
"raw": "###๐ Text",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "###๐ผ๏ธ Image",
"raw": "###๐ผ๏ธ Image",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "###๐ต Audio",
"raw": "###๐ต Audio",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "###๐ฅ Video",
"raw": "###๐ฅ Video",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "## ๐ Key Features",
"raw": "## ๐ Key Features",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "### Shared Contributions",
"raw": "### Shared Contributions",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Collaborate in real-time, seeing each other's inputs and contributions.",
"raw": "Collaborate in real-time, seeing each other's inputs and contributions.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Enhances teamwork and fosters a collective approach to problem-solving.",
"raw": "Enhances teamwork and fosters a collective approach to problem-solving.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "### Diverse Media Integration",
"raw": "### Diverse Media Integration",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Seamlessly analyze and reason with text, images, audio, and video.",
"raw": "Seamlessly analyze and reason with text, images, audio, and video.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Breakthrough capabilities in handling complex media types, including air traffic control images and audio.",
"raw": "Breakthrough capabilities in handling complex media types, including air traffic control images and audio.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "## ๐ ๏ธ Real-World Testing",
"raw": "## ๐ ๏ธ Real-World Testing",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This morning, we tested the app using images and audio from air traffic controlโa challenge that was nearly impossible to handle with ease just a few years ago. ๐๐ฌ",
"raw": "This morning, we tested the app using images and audio from air traffic controlโa challenge that was nearly impossible to handle with ease just a few years ago. ๐๐ฌ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ฑ The Future of AI Collaboration",
"raw": "๐ฑ The Future of AI Collaboration",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "We believe AI Pair Programming is evolving into a new era of intelligence through shared contributions and teamwork. As we continue to develop, this app will enable groups to:",
"raw": "We believe AI Pair Programming is evolving into a new era of intelligence through shared contributions and teamwork. As we continue to develop, this app will enable groups to:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Generate detailed text responses ๐",
"raw": "Generate detailed text responses ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Collaborate on code responses ๐ป",
"raw": "Collaborate on code responses ๐ป",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Develop new AI programs together ๐ค",
"raw": "Develop new AI programs together ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I just launched an exciting new multiplayer app powered by GPT-4o, enabling collaborative AI-driven queries in a single shared session!
### ๐ Try It Out! ๐ Check out the GPT-4o Multiplayer App
Experience the future of collaborative AI by visiting our space on Hugging Face: https://huggingface.co/spaces/awacke1/ChatStreamlitMultiplayer
๐ This innovative tool lets you and your team reason over:
###๐ Text
###๐ผ๏ธ Image
###๐ต Audio
###๐ฅ Video
## ๐ Key Features
### Shared Contributions
Collaborate in real-time, seeing each other's inputs and contributions.
Enhances teamwork and fosters a collective approach to problem-solving.
### Diverse Media Integration
Seamlessly analyze and reason with text, images, audio, and video.
Breakthrough capabilities in handling complex media types, including air traffic control images and audio.
## ๐ ๏ธ Real-World Testing
This morning, we tested the app using images and audio from air traffic controlโa challenge that was nearly impossible to handle with ease just a few years ago. ๐๐ฌ
๐ฑ The Future of AI Collaboration
We believe AI Pair Programming is evolving into a new era of intelligence through shared contributions and teamwork. As we continue to develop, this app will enable groups to:
Generate detailed text responses ๐
Collaborate on code responses ๐ป
Develop new AI programs together ๐ค | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1656147940537-620630b603825909dcbeba35.jpeg",
"fullname": "Aaron C Wacker",
"name": "awacke1",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 184,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"victor"
],
"count": 1
}
] | 2024-07-23T11:34:36.000Z | 2024-07-23T11:34:36.131Z | [] | /posts/awacke1/213182541433936 | 1,342 | 0 |
546261052602944 | [
{
"type": "text",
"value": "We ",
"raw": "We ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://mii-llm.ai",
"resource": null,
"url": null,
"href": "https://mii-llm.ai",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " just released a new LLM Italian benchmark and a set of evaluation: MMLU-PRO-ITA",
"raw": " just released a new LLM Italian benchmark and a set of evaluation: MMLU-PRO-ITA",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Thanks to ",
"raw": "Thanks to ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@efederici",
"resource": null,
"url": null,
"href": null,
"user": "efederici",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " who released ",
"raw": " who released ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/efederici/MMLU-Pro-ita",
"resource": {
"type": "dataset",
"id": "efederici/MMLU-Pro-ita",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/efederici/MMLU-Pro-ita",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " a machine translated version of MMLU-PRO and thanks to a community shared computational effort we published in the \"Eval Aggiuntive\" tab of ",
"raw": " a machine translated version of MMLU-PRO and thanks to a community shared computational effort we published in the \"Eval Aggiuntive\" tab of ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard",
"resource": null,
"url": null,
"href": "https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " the results on Italian open source LLMs. ",
"raw": " the results on Italian open source LLMs. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If you want to deepen read the blog article on hf ",
"raw": "If you want to deepen read the blog article on hf ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/giux78/mmlu-pro-ita",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/giux78/mmlu-pro-ita",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | We https://mii-llm.ai just released a new LLM Italian benchmark and a set of evaluation: MMLU-PRO-ITA
Thanks to @efederici who released https://huggingface.co/datasets/efederici/MMLU-Pro-ita a machine translated version of MMLU-PRO and thanks to a community shared computational effort we published in the "Eval Aggiuntive" tab of https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard the results on Italian open source LLMs.
If you want to deepen read the blog article on hf https://huggingface.co/blog/giux78/mmlu-pro-ita | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fef4eb7770b06e11c2c6381/1NMdigjCGtn0yvQZSi5NJ.png",
"fullname": "Alessandro Ercolani",
"name": "giux78",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 44,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/612246596d9ce900691744d2/9DlHVQDqblKz7QPTA6nDa.jpeg",
"fullname": "Edoardo Federici",
"name": "efederici",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27
}
] | [
{
"reaction": "๐",
"users": [
"giux78",
"Ramikan-BR",
"osanseviero"
],
"count": 3
},
{
"reaction": "โค๏ธ",
"users": [
"Ramikan-BR",
"anakin87"
],
"count": 2
},
{
"reaction": "๐ฅ",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "๐",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "๐",
"users": [
"ZeroWw"
],
"count": 1
}
] | 2024-07-23T10:16:30.000Z | 2024-07-23T10:16:30.860Z | [] | /posts/giux78/546261052602944 | 1,639 | 0 |
537873019611478 | [
{
"type": "text",
"value": "[New tool] Follow interesting ML persons ๐ฉโ๐จ ๐จโ๐ค ๐ฉโ๐ซ with Followgraph",
"raw": "[New tool] Follow interesting ML persons ๐ฉโ๐จ ๐จโ๐ค ๐ฉโ๐ซ with Followgraph",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/severo/followgraph",
"resource": {
"type": "space",
"id": "severo/followgraph",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/severo/followgraph",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Please try it and tell me if it helped you discover high-quality content ๐ ๐",
"raw": "Please try it and tell me if it helped you discover high-quality content ๐ ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I repurposed \"Followgraph for Mastodon\" (",
"raw": "I repurposed \"Followgraph for Mastodon\" (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://followgraph.vercel.app/",
"resource": null,
"url": null,
"href": "https://followgraph.vercel.app/",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ").",
"raw": ").",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "My new follows: ",
"raw": "My new follows: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@TheBloke",
"resource": null,
"url": null,
"href": null,
"user": "TheBloke",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@mlabonne",
"resource": null,
"url": null,
"href": null,
"user": "mlabonne",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@teknium",
"resource": null,
"url": null,
"href": null,
"user": "teknium",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@KnutJaegersberg",
"resource": null,
"url": null,
"href": null,
"user": "KnutJaegersberg",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@SkalskiP",
"resource": null,
"url": null,
"href": null,
"user": "SkalskiP",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@AmelieSchreiber",
"resource": null,
"url": null,
"href": null,
"user": "AmelieSchreiber",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@lbourdois",
"resource": null,
"url": null,
"href": null,
"user": "lbourdois",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@ceyda",
"resource": null,
"url": null,
"href": null,
"user": "ceyda",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@andrewyng",
"resource": null,
"url": null,
"href": null,
"user": "andrewyng",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Pclanglais",
"resource": null,
"url": null,
"href": null,
"user": "Pclanglais",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@karpathy",
"resource": null,
"url": null,
"href": null,
"user": "karpathy",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "And you?",
"raw": "And you?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | [New tool] Follow interesting ML persons ๐ฉโ๐จ ๐จโ๐ค ๐ฉโ๐ซ with Followgraph
https://huggingface.co/spaces/severo/followgraph
Please try it and tell me if it helped you discover high-quality content ๐ ๐
I repurposed "Followgraph for Mastodon" (https://followgraph.vercel.app/).
My new follows: @TheBloke @mlabonne @teknium @KnutJaegersberg @SkalskiP @AmelieSchreiber @lbourdois @ceyda @andrewyng @Pclanglais @karpathy
And you? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a76b174e24361791fe822d/inEvYwrd4z0xvRQN3ikdE.jpeg",
"fullname": "Sylvain Lesage",
"name": "severo",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 128,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60a76b174e24361791fe822d/iu1G6M41TwBhAbaXTgFQj.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64191ec8d459c9e7fbb0236b/7BeTgySZzmFCaVpntaYgP.jpeg",
"fullname": "Amelie Schreiber",
"name": "AmelieSchreiber",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 716
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6467fcf0946476c5d2194c14/zAy7PYR3HkC9NWcpZw8X1.png",
"fullname": "Andrew Ng",
"name": "andrewyng",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 176
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1623134857336-5f7c2cbbb1a525442ff96e39.jpeg",
"fullname": "Ceyda Cinarel",
"name": "ceyda",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 80
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1660434061546-62f83661fe21cc4875221c0f.jpeg",
"fullname": "Andrej K",
"name": "karpathy",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 474
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg",
"fullname": "Knut Jรคgersberg",
"name": "KnutJaegersberg",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 238
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/613b0a62a14099d5afed7830/pLuqSIYaNYhUqdjxlNrFn.png",
"fullname": "Loรฏck BOURDOIS",
"name": "lbourdois",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 88
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b8e2ba285851687028d395/JtUGAwVh_4cDEsjNcfpye.png",
"fullname": "Maxime Labonne",
"name": "mlabonne",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3452
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ce091a9e9ca8123d7a42b0/OEPggp82RwigxNLL35LgT.jpeg",
"fullname": "Pierre-Carl Langlais",
"name": "Pclanglais",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 189
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f84d4d85dbbb185d2e9a53/Mlc0XjAgQR2cuhGNchz07.jpeg",
"fullname": "Piotr Skalski",
"name": "SkalskiP",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 2299
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317aade83d8d2fd903192d9/erOwgMXc_CZih3uMoyTAp.jpeg",
"fullname": "Teknium",
"name": "teknium",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4248
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6426d3f3a7723d62b53c259b/tvPikpAzKTKGN5wrpadOJ.jpeg",
"fullname": "Tom Jobbins",
"name": "TheBloke",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 22166
}
] | [
{
"reaction": "๐",
"users": [
"gokaygokay",
"Ramikan-BR",
"mmhamdy",
"lhoestq",
"appvoid",
"kramp",
"merve",
"mikestaub",
"osanseviero",
"lbourdois",
"louisbrulenaudet",
"ZennyKenny"
],
"count": 12
},
{
"reaction": "โค๏ธ",
"users": [
"mlabonne",
"Ramikan-BR",
"mmhamdy",
"lhoestq",
"merve",
"mikestaub",
"clem",
"osanseviero",
"lbourdois",
"efecelik"
],
"count": 10
}
] | 2024-07-23T10:07:25.000Z | 2024-07-31T08:19:18.768Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b8e2ba285851687028d395/JtUGAwVh_4cDEsjNcfpye.png",
"fullname": "Maxime Labonne",
"name": "mlabonne",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3452,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a813dedbb9e28866a91b27/zs-RWFuXs17IfPUhxQaei.jpeg",
"fullname": "appvoid",
"name": "appvoid",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 35,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a76b174e24361791fe822d/inEvYwrd4z0xvRQN3ikdE.jpeg",
"fullname": "Sylvain Lesage",
"name": "severo",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 128,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6459fa0f5b3111fbe83286e1/UhCa7JNbtTjC6dgOjZtH0.jpeg",
"fullname": "Louis Brulรฉ Naudet",
"name": "louisbrulenaudet",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 176,
"isFollowing": false
}
] | /posts/severo/537873019611478 | 3,428 | 5 |
347522688239080 | [
{
"type": "text",
"value": "๐๐ปโโ๏ธ Hey there folks ",
"raw": "๐๐ปโโ๏ธ Hey there folks ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "made a demo for Nvidia Minitron on an A100. ",
"raw": "made a demo for Nvidia Minitron on an A100. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Minitron is a family of small language models (SLMs) obtained by pruning NVIDIA's Nemotron-4 15B model. We prune model embedding size, attention heads, and MLP intermediate dimension, following which, we perform continued training with distillation to arrive at the final models.",
"raw": "Minitron is a family of small language models (SLMs) obtained by pruning NVIDIA's Nemotron-4 15B model. We prune model embedding size, attention heads, and MLP intermediate dimension, following which, we perform continued training with distillation to arrive at the final models.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to 40x fewer training tokens per model compared to training from scratch; this results in compute cost savings of 1.8x for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our arXiv paper for more details.",
"raw": "Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to 40x fewer training tokens per model compared to training from scratch; this results in compute cost savings of 1.8x for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our arXiv paper for more details.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Minitron models are for research and development only.",
"raw": "Minitron models are for research and development only.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "source : ",
"raw": "source : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/nvidia/Minitron-8B-Base",
"resource": {
"type": "model",
"id": "nvidia/Minitron-8B-Base",
"discussionNum": null
},
"url": "https://huggingface.co/nvidia/Minitron-8B-Base",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "demo : ",
"raw": "demo : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Tonic/Minitron",
"resource": {
"type": "space",
"id": "Tonic/Minitron",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Tonic/Minitron",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐๐ปโโ๏ธ Hey there folks
made a demo for Nvidia Minitron on an A100.
Minitron is a family of small language models (SLMs) obtained by pruning NVIDIA's Nemotron-4 15B model. We prune model embedding size, attention heads, and MLP intermediate dimension, following which, we perform continued training with distillation to arrive at the final models.
Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to 40x fewer training tokens per model compared to training from scratch; this results in compute cost savings of 1.8x for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our arXiv paper for more details.
Minitron models are for research and development only.
source : https://huggingface.co/nvidia/Minitron-8B-Base
demo : https://huggingface.co/spaces/Tonic/Minitron | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3bb1cd0d8c2c2169f0b88/eT2TS0IlQbZtz-F_zHLz9.jpeg",
"fullname": "Joseph Pollack",
"name": "Tonic",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 310,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"nicolay-r",
"TuringsSolutions",
"ZeroWw",
"clem",
"Nymbo",
"osanseviero",
"louisbrulenaudet",
"Tonic"
],
"count": 8
},
{
"reaction": "โค๏ธ",
"users": [
"clem",
"osanseviero"
],
"count": 2
}
] | 2024-07-23T09:55:50.000Z | 2024-07-23T18:21:10.710Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6569216f9c96f1a47bf45788/mCLqmAs4dOjKdxNQVAp1w.png",
"fullname": "Sica Rius",
"name": "SicariusSicariiStuff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 132,
"isFollowing": false
}
] | /posts/Tonic/347522688239080 | 1,716 | 1 |
608490648185683 | [
{
"type": "text",
"value": "๐ฅThrilled to release our 8B version of Symbol-LLM-Instruct ! ",
"raw": "๐ฅThrilled to release our 8B version of Symbol-LLM-Instruct ! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It follows the two-stage training strategy proposed in the original paper and is continually optimized on LLaMA3-Chat-8B model.",
"raw": "It follows the two-stage training strategy proposed in the original paper and is continually optimized on LLaMA3-Chat-8B model.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Symbol-LLM was accepted by ACL'24 main conference ! See you in Thailand !",
"raw": "Symbol-LLM was accepted by ACL'24 main conference ! See you in Thailand !",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper link: ",
"raw": "Paper link: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2311.09278",
"resource": null,
"url": null,
"href": "https://arxiv.org/abs/2311.09278",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper Title: Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models",
"raw": "Paper Title: Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ฅThrilled to release our 8B version of Symbol-LLM-Instruct !
It follows the two-stage training strategy proposed in the original paper and is continually optimized on LLaMA3-Chat-8B model.
Symbol-LLM was accepted by ACL'24 main conference ! See you in Thailand !
Paper link: https://arxiv.org/abs/2311.09278
Paper Title: Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656d73ed0bbc114fe6449704/gpteBU9GmKSHRVkRBUHld.png",
"fullname": "Symbol-LLM",
"name": "Symbol-LLM",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"Symbol-LLM",
"Xdotnet",
"Ramdevkijai",
"louisbrulenaudet",
"osanseviero"
],
"count": 5
},
{
"reaction": "๐ฅ",
"users": [
"nicolay-r",
"ToKrCZ",
"osanseviero"
],
"count": 3
},
{
"reaction": "๐ค",
"users": [
"Symbol-LLM"
],
"count": 1
},
{
"reaction": "๐",
"users": [
"ZeroWw"
],
"count": 1
}
] | 2024-07-23T08:19:31.000Z | 2024-07-25T20:50:47.254Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2846,
"isFollowing": false
}
] | /posts/Symbol-LLM/608490648185683 | 2,101 | 1 |
309759264789106 | [
{
"type": "text",
"value": "How to create custom LLMs from scratch",
"raw": "How to create custom LLMs from scratch",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "See my new podcast on this topic, at ",
"raw": "See my new podcast on this topic, at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://mltblog.com/3xS1bf5",
"resource": null,
"url": null,
"href": "https://mltblog.com/3xS1bf5",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Despite GPT, Claude, Gemini, LLama and the other host of LLMs that we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a โcustomโ openAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort?",
"raw": "Despite GPT, Claude, Gemini, LLama and the other host of LLMs that we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a โcustomโ openAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | How to create custom LLMs from scratch
See my new podcast on this topic, at https://mltblog.com/3xS1bf5
Despite GPT, Claude, Gemini, LLama and the other host of LLMs that we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a โcustomโ openAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/669c89e98f2dbc203f9e74ab/higvnXEHeo_Ig2bgTpn47.png",
"fullname": "Vincent Granville",
"name": "vincentg64",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 17,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/669c89e98f2dbc203f9e74ab/0MywN8McYQVfu2fXSn_Mm.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"nicolay-r",
"victor",
"SFBAI"
],
"count": 3
}
] | 2024-07-23T06:10:05.000Z | 2024-07-23T06:10:26.640Z | [] | /posts/vincentg64/309759264789106 | 1,235 | 0 |
111964148485189 | [
{
"type": "text",
"value": "LazyLLM - Unusual Colab (Apple & Meta) Yields Impactful Work",
"raw": "LazyLLM - Unusual Colab (Apple & Meta) Yields Impactful Work",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "LLM inference typically consists of two stages: prefilling/tokenizing and decoding. In the prefilling stage, the model processes the entire input prompt, computing and caching key-value (KV) pairs for each token, which can be time-consuming for long prompts. This is followed by the decoding stage, where the model generates tokens sequentially, reusing the cached KVs. ",
"raw": "LLM inference typically consists of two stages: prefilling/tokenizing and decoding. In the prefilling stage, the model processes the entire input prompt, computing and caching key-value (KV) pairs for each token, which can be time-consuming for long prompts. This is followed by the decoding stage, where the model generates tokens sequentially, reusing the cached KVs. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "LazyLLM introduces a dynamic token pruning technique. Instead of computing KVs for all tokens during prefilling, LazyLLM selectively processes only the most important tokens based on attention scores, deferring less important ones to later steps if needed. It uses progressive token pruning across transformer layers and introduces an Aux Cache to store hidden states of pruned tokens. ",
"raw": "LazyLLM introduces a dynamic token pruning technique. Instead of computing KVs for all tokens during prefilling, LazyLLM selectively processes only the most important tokens based on attention scores, deferring less important ones to later steps if needed. It uses progressive token pruning across transformer layers and introduces an Aux Cache to store hidden states of pruned tokens. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This approach significantly reduces the time-to-first-token (TTFT) and overall generation time while maintaining accuracy across various tasks. LazyLLM outperforms baseline techniques like random token dropping and static pruning, and can be easily integrated into existing LLMs without fine-tuning, offering a practical solution for accelerating LLM inference, especially in long context scenarios.",
"raw": "This approach significantly reduces the time-to-first-token (TTFT) and overall generation time while maintaining accuracy across various tasks. LazyLLM outperforms baseline techniques like random token dropping and static pruning, and can be easily integrated into existing LLMs without fine-tuning, offering a practical solution for accelerating LLM inference, especially in long context scenarios.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "IN SIMPLE TERMS",
"raw": "IN SIMPLE TERMS",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "When you prompt a large language model (LLM), it usually looks at every single word/subword (or tokens) in your prompt before generating a response. This can be time consuming, especially for prompts with very long texts. This paper introduces a new technique that solves this problem by being more selective. Instead of looking at every word right away, it only focuses on the most important words first. It decides which words are important based on how much attention the model gives them. If it needs other words later, it can go back and look at them then. This approach is like skimming a text for key information before reading it in detail.",
"raw": "When you prompt a large language model (LLM), it usually looks at every single word/subword (or tokens) in your prompt before generating a response. This can be time consuming, especially for prompts with very long texts. This paper introduces a new technique that solves this problem by being more selective. Instead of looking at every word right away, it only focuses on the most important words first. It decides which words are important based on how much attention the model gives them. If it needs other words later, it can go back and look at them then. This approach is like skimming a text for key information before reading it in detail.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Read More: ",
"raw": "Read More: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/pdf/2407.14057",
"resource": null,
"url": null,
"href": "https://arxiv.org/pdf/2407.14057",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | LazyLLM - Unusual Colab (Apple & Meta) Yields Impactful Work
LLM inference typically consists of two stages: prefilling/tokenizing and decoding. In the prefilling stage, the model processes the entire input prompt, computing and caching key-value (KV) pairs for each token, which can be time-consuming for long prompts. This is followed by the decoding stage, where the model generates tokens sequentially, reusing the cached KVs.
LazyLLM introduces a dynamic token pruning technique. Instead of computing KVs for all tokens during prefilling, LazyLLM selectively processes only the most important tokens based on attention scores, deferring less important ones to later steps if needed. It uses progressive token pruning across transformer layers and introduces an Aux Cache to store hidden states of pruned tokens.
This approach significantly reduces the time-to-first-token (TTFT) and overall generation time while maintaining accuracy across various tasks. LazyLLM outperforms baseline techniques like random token dropping and static pruning, and can be easily integrated into existing LLMs without fine-tuning, offering a practical solution for accelerating LLM inference, especially in long context scenarios.
IN SIMPLE TERMS
When you prompt a large language model (LLM), it usually looks at every single word/subword (or tokens) in your prompt before generating a response. This can be time consuming, especially for prompts with very long texts. This paper introduces a new technique that solves this problem by being more selective. Instead of looking at every word right away, it only focuses on the most important words first. It decides which words are important based on how much attention the model gives them. If it needs other words later, it can go back and look at them then. This approach is like skimming a text for key information before reading it in detail.
Read More: https://arxiv.org/pdf/2407.14057 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6438a9027de34e8ea7e4b257/vib8QSd1AWMr_bR9ig_xJ.jpeg",
"fullname": "Jaward Sesay",
"name": "Jaward",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 189,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/3yYj8D7xtL1rgi_KHs-aO.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/IryreY8BXH2QmGyvAkE8k.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/zvvrP_VO4qFr3dKngJ81O.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/5_TduEda-J9JIRc0ABbKC.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/0WMekl3H9Wnizay2h0k3x.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"ZeroWw",
"nicolay-r",
"joelbryan",
"osanseviero"
],
"count": 4
}
] | 2024-07-23T01:16:16.000Z | 2024-07-23T19:22:26.671Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
}
] | /posts/Jaward/111964148485189 | 1,345 | 1 |
705751692421448 | [
{
"type": "text",
"value": "SNN Image Diffusion V2",
"raw": "SNN Image Diffusion V2",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Billionaires have been made for less than this. This is only one of the things it can it do. It can do API calls, function calls, optimize poker and blackjack odds, anything that is an optimization problem. It costs fractions of a penny and requires fractions of the compute of an LLM model. It can even communicate two ways with an LLM model.",
"raw": "Billionaires have been made for less than this. This is only one of the things it can it do. It can do API calls, function calls, optimize poker and blackjack odds, anything that is an optimization problem. It costs fractions of a penny and requires fractions of the compute of an LLM model. It can even communicate two ways with an LLM model.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | SNN Image Diffusion V2
Billionaires have been made for less than this. This is only one of the things it can it do. It can do API calls, function calls, optimize poker and blackjack odds, anything that is an optimization problem. It costs fractions of a penny and requires fractions of the compute of an LLM model. It can even communicate two ways with an LLM model.
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64274b69ba6cef0a6ebb0fd6/efHgrIMNS0ICfq7mLdP_T.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"zikazach",
"Xdotnet",
"nicolay-r"
],
"count": 3
}
] | 2024-07-22T23:46:21.000Z | 2024-07-23T21:46:14.484Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6316fb937b0ee0136e5f1220/poHBoJ7QAF_s2CCaosdvQ.jpeg",
"fullname": "Firstname Lastname",
"name": "takeraparterer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
},
{
"avatarUrl": "/avatars/0087f207c06a793c55ed0489ff793e70.svg",
"fullname": "nicolo",
"name": "nicolollo",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/TuringsSolutions/705751692421448 | 1,379 | 19 |
624836925605809 | [
{
"type": "text",
"value": "Hi HuggingFacers!๐ค",
"raw": "Hi HuggingFacers!๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Good news concerning ",
"raw": "Good news concerning ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/smolLM-arena",
"resource": {
"type": "space",
"id": "as-cle-bert/smolLM-arena",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/smolLM-arena",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", the chat arena where you can compare some of the Small Language Models (<1.7B) on the Hub and cast your vote to choose the best!๐ฑ",
"raw": ", the chat arena where you can compare some of the Small Language Models (<1.7B) on the Hub and cast your vote to choose the best!๐ฑ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The space now has a new interface with chatbots instead of textboxs, it runs faster and it also comes with usage instructions :)",
"raw": "The space now has a new interface with chatbots instead of textboxs, it runs faster and it also comes with usage instructions :)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Have fun!๐",
"raw": "Have fun!๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi HuggingFacers!๐ค
Good news concerning https://huggingface.co/spaces/as-cle-bert/smolLM-arena, the chat arena where you can compare some of the Small Language Models (<1.7B) on the Hub and cast your vote to choose the best!๐ฑ
The space now has a new interface with chatbots instead of textboxs, it runs faster and it also comes with usage instructions :)
Have fun!๐ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"prithivMLmods",
"nicolay-r",
"Ramikan-BR",
"cahlen",
"osanseviero",
"quyettv",
"louisbrulenaudet"
],
"count": 7
},
{
"reaction": "๐",
"users": [
"Ramikan-BR",
"Felladrin"
],
"count": 2
},
{
"reaction": "๐ฅ",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "๐",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-07-22T21:39:50.000Z | 2024-07-22T21:39:50.217Z | [] | /posts/as-cle-bert/624836925605809 | 1,401 | 0 |
284224143230669 | [
{
"type": "text",
"value": "Introducing Whisper Diarization: Multilingual speech recognition with word-level timestamps and speaker segmentation, running 100% locally in your browser thanks to ๐ค Transformers.js!",
"raw": "Introducing Whisper Diarization: Multilingual speech recognition with word-level timestamps and speaker segmentation, running 100% locally in your browser thanks to ๐ค Transformers.js!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Tested on this iconic Letterman interview w/ Grace Hopper from 1983!",
"raw": "Tested on this iconic Letterman interview w/ Grace Hopper from 1983!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Demo: ",
"raw": "- Demo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Xenova/whisper-speaker-diarization",
"resource": {
"type": "space",
"id": "Xenova/whisper-speaker-diarization",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Xenova/whisper-speaker-diarization",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- Source code: ",
"raw": "- Source code: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Xenova/whisper-speaker-diarization/tree/main/whisper-speaker-diarization",
"resource": {
"type": "space",
"id": "Xenova/whisper-speaker-diarization",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Xenova/whisper-speaker-diarization/tree/main/whisper-speaker-diarization",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Introducing Whisper Diarization: Multilingual speech recognition with word-level timestamps and speaker segmentation, running 100% locally in your browser thanks to ๐ค Transformers.js!
Tested on this iconic Letterman interview w/ Grace Hopper from 1983!
- Demo: https://huggingface.co/spaces/Xenova/whisper-speaker-diarization
- Source code: https://huggingface.co/spaces/Xenova/whisper-speaker-diarization/tree/main/whisper-speaker-diarization | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b253b7ac5ecaae3d1efe0c/hwiQ0uvz3t-L5a-NtBIO6.png",
"fullname": "Joshua",
"name": "Xenova",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 3736,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/cLCFwAvCkTp-hM6eY1B4q.mp4"
}
] | [] | [
{
"reaction": "๐",
"users": [
"freinold",
"ZeroWw",
"John6666",
"DmitryRyumin",
"okamirvs",
"Nymbo",
"ssboost",
"maruyasa",
"Deddy",
"GPT007",
"sudanenator",
"Rsln",
"dave3991",
"bmorphism",
"toshvelaga",
"osanseviero",
"d8rt8v",
"devstockgirl"
],
"count": 18
},
{
"reaction": "๐ฅ",
"users": [
"ssboost",
"prithivMLmods",
"Sylvestre",
"Deddy",
"nicolay-r",
"Gatozu35",
"AARon99",
"toshvelaga",
"osanseviero"
],
"count": 9
},
{
"reaction": "โค๏ธ",
"users": [
"julien-rodriguez",
"DataSoul",
"BoscoTheDog",
"clem",
"toshvelaga",
"osanseviero"
],
"count": 6
}
] | 2024-07-22T20:30:29.000Z | 2024-07-23T20:05:06.948Z | [
{
"avatarUrl": "/avatars/afbc48df2e8c47c35be48168113d83c0.svg",
"fullname": "s",
"name": "Tom-Neverwinter",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
}
] | /posts/Xenova/284224143230669 | 7,873 | 1 |
689972259553494 | [
{
"type": "text",
"value": "\"By the end of this blog post, you will have ",
"raw": "\"By the end of this blog post, you will have ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- learnt all the new goodies accompanying the latest macOS release ",
"raw": "- learnt all the new goodies accompanying the latest macOS release ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- AND successfully run a 7B parameter model using less than 4GB of memory on your Mac.\"",
"raw": "- AND successfully run a 7B parameter model using less than 4GB of memory on your Mac.\"",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Game-changer for local AI? Can't wait to try this! ",
"raw": "Game-changer for local AI? Can't wait to try this! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Brillant work by ",
"raw": "Brillant work by ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@pcuenq",
"resource": null,
"url": null,
"href": null,
"user": "pcuenq",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@osanseviero",
"resource": null,
"url": null,
"href": null,
"user": "osanseviero",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@reach-vb",
"resource": null,
"url": null,
"href": null,
"user": "reach-vb",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@FL33TW00D-HF",
"resource": null,
"url": null,
"href": null,
"user": "FL33TW00D-HF",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check it out: ",
"raw": "Check it out: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/mistral-coreml",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/mistral-coreml",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " #apple ",
"raw": " #apple ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | "By the end of this blog post, you will have
- learnt all the new goodies accompanying the latest macOS release
- AND successfully run a 7B parameter model using less than 4GB of memory on your Mac."
Game-changer for local AI? Can't wait to try this!
Brillant work by @pcuenq @osanseviero @reach-vb @FL33TW00D-HF
Check it out: https://huggingface.co/blog/mistral-coreml #apple | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/NwjC-VUQBapnb50qkopv4.mp4"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6597e9f42235d4056bc6980a/6N_Eira5Rj5e8ZdgekKPQ.jpeg",
"fullname": "Christopher Fleetwood",
"name": "FL33TW00D-HF",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 54
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2846
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1617264212503-603d25b75f9d390ab190b777.jpeg",
"fullname": "Pedro Cuenca",
"name": "pcuenq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 434
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655385361868-61b85ce86eb1f2c5e6233736.jpeg",
"fullname": "Vaibhav Srivastav",
"name": "reach-vb",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 439
}
] | [
{
"reaction": "๐ฅ",
"users": [
"enzolib",
"FL33TW00D-HF",
"szymonrucinski"
],
"count": 3
}
] | 2024-07-22T18:09:25.000Z | 2024-07-22T21:03:17.173Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
}
] | /posts/fdaudens/689972259553494 | 655 | 1 |
421245113676574 | [
{
"type": "text",
"value": "๐ Ghost 8B Beta Released: Game-Changing Language Model",
"raw": "๐ Ghost 8B Beta Released: Game-Changing Language Model",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "--",
"raw": "--",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Ghost 8B Beta is a groundbreaking language model developed with a clear vision: to deliver exceptional multilingual support, superior knowledge capabilities, and all while remaining cost-effective. This model comes in two context length variations, 8k and 128k, ensuring flexibility for various tasks. Moreover, it boasts built-in multilingual functionality, making it a powerful tool for global communication and understanding.",
"raw": "Ghost 8B Beta is a groundbreaking language model developed with a clear vision: to deliver exceptional multilingual support, superior knowledge capabilities, and all while remaining cost-effective. This model comes in two context length variations, 8k and 128k, ensuring flexibility for various tasks. Moreover, it boasts built-in multilingual functionality, making it a powerful tool for global communication and understanding.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "--",
"raw": "--",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "* See detailed article: ",
"raw": "* See detailed article: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/lamhieu/ghost-8b-beta-released-game-changing-language-mode",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/lamhieu/ghost-8b-beta-released-game-changing-language-mode",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "* Model card: ",
"raw": "* Model card: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/ghost-x/ghost-8b-beta",
"resource": {
"type": "model",
"id": "ghost-x/ghost-8b-beta",
"discussionNum": null
},
"url": "https://huggingface.co/ghost-x/ghost-8b-beta",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "* Official website: ",
"raw": "* Official website: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://ghost-x.org/docs/models/ghost-8b-beta",
"resource": null,
"url": null,
"href": "https://ghost-x.org/docs/models/ghost-8b-beta",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ Ghost 8B Beta Released: Game-Changing Language Model
--
Ghost 8B Beta is a groundbreaking language model developed with a clear vision: to deliver exceptional multilingual support, superior knowledge capabilities, and all while remaining cost-effective. This model comes in two context length variations, 8k and 128k, ensuring flexibility for various tasks. Moreover, it boasts built-in multilingual functionality, making it a powerful tool for global communication and understanding.
--
* See detailed article: https://huggingface.co/blog/lamhieu/ghost-8b-beta-released-game-changing-language-mode
* Model card: https://huggingface.co/ghost-x/ghost-8b-beta
* Official website: https://ghost-x.org/docs/models/ghost-8b-beta | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/600ae38cc92b79f54efd4556/cSqRIslYl5L3I4WK3a31f.png",
"fullname": "Hieu Lam",
"name": "lamhieu",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 74,
"isFollowing": false
} | [] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"ZeroWw",
"danielus",
"nicolay-r",
"ecyht2",
"John6666",
"MDalprato",
"louisbrulenaudet"
],
"count": 7
},
{
"reaction": "๐คฏ",
"users": [
"stefan-it"
],
"count": 1
}
] | 2024-07-22T17:44:14.000Z | 2024-07-22T17:44:14.126Z | [] | /posts/lamhieu/421245113676574 | 2,104 | 0 |
797420456175789 | [
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/Azure/azureml-assets/pull/3180/files",
"resource": null,
"url": null,
"href": "https://github.com/Azure/azureml-assets/pull/3180/files",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "LLAMA-3.1 benches",
"raw": "LLAMA-3.1 benches",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ERROR: type should be string, got "\nhttps://github.com/Azure/azureml-assets/pull/3180/files\nLLAMA-3.1 benches" | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/4VOzArmrRaX_DUTxGmm59.jpeg",
"fullname": "Charles McSneed",
"name": "ChuckMcSneed",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 57,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65644e982bdaccfcd536aff1/eWfOOX5Ljs8NgWwpEmdOp.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"ZeroWw"
],
"count": 1
}
] | 2024-07-22T17:08:09.000Z | 2024-07-26T10:49:35.773Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/660432d2d2e59abb3fd40b8c/TrvNnR8wHDh9lPHm81JfQ.png",
"fullname": "David Meriwether",
"name": "BigHuggyD",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 21,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65995c45539c808e84c38bf1/k0y3ULloWQEMvosQwHgrE.png",
"fullname": "Juk Armstrong",
"name": "jukofyork",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 60,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/4VOzArmrRaX_DUTxGmm59.jpeg",
"fullname": "Charles McSneed",
"name": "ChuckMcSneed",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 57,
"isFollowing": false
}
] | /posts/ChuckMcSneed/797420456175789 | 592 | 26 |
252064064262436 | [
{
"type": "text",
"value": "Hey everyone ๐ค!",
"raw": "Hey everyone ๐ค!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Check out this cool little reproduction of the Clarity Upscaler (",
"raw": "Check out this cool little reproduction of the Clarity Upscaler (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/philz1337x/clarity-upscaler",
"resource": null,
"url": null,
"href": "https://github.com/philz1337x/clarity-upscaler",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ") using refiners (",
"raw": ") using refiners (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/finegrain-ai/refiners",
"resource": null,
"url": null,
"href": "https://github.com/finegrain-ai/refiners",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "): ",
"raw": "): ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/finegrain/enhancer",
"resource": null,
"url": null,
"href": "https://huggingface.co/spaces/finegrain/enhancer",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hey everyone ๐ค!
Check out this cool little reproduction of the Clarity Upscaler (https://github.com/philz1337x/clarity-upscaler) using refiners (https://github.com/finegrain-ai/refiners): https://huggingface.co/spaces/finegrain/enhancer | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669043420538-6364f1784f773b7e4cede70c.jpeg",
"fullname": "Laureฮทt Fainsin",
"name": "1aurent",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 79,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"limiteinductive",
"deltheil",
"jaebumskiyomi",
"piercus",
"1aurent",
"clem",
"John6666",
"Blane187",
"osanseviero"
],
"count": 9
}
] | 2024-07-22T15:33:56.000Z | 2024-07-24T08:41:25.935Z | [
{
"avatarUrl": "/avatars/54483699273ac58a4a6fe1fa4aab65fe.svg",
"fullname": "Robert Sinclair",
"name": "ZeroWw",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669043420538-6364f1784f773b7e4cede70c.jpeg",
"fullname": "Laureฮทt Fainsin",
"name": "1aurent",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 79,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem ๐ค",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1734,
"isFollowing": false
}
] | /posts/1aurent/252064064262436 | 2,662 | 3 |
312817668697546 | [
{
"type": "text",
"value": "Questions about data, synthetic data, human feedback and data quality?",
"raw": "Questions about data, synthetic data, human feedback and data quality?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Argilla has moved its community from Slack to the Hugging Face Discord server! ",
"raw": "Argilla has moved its community from Slack to the Hugging Face Discord server! ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "When part of the Hugging Face Discord, you can select โChannels & rolesโ and select โArgillaโ along with any of the other groups that are interesting to you. โArgillaโ will cover anything about argilla and distilabel, and it will give you access to 1) #argilla-distilabel-general, for all general discussions and news, and 2) #argilla-distilabel-help, for any usage-focused questions.",
"raw": "When part of the Hugging Face Discord, you can select โChannels & rolesโ and select โArgillaโ along with any of the other groups that are interesting to you. โArgillaโ will cover anything about argilla and distilabel, and it will give you access to 1) #argilla-distilabel-general, for all general discussions and news, and 2) #argilla-distilabel-help, for any usage-focused questions.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Questions about data, synthetic data, human feedback and data quality?
Argilla has moved its community from Slack to the Hugging Face Discord server!
When part of the Hugging Face Discord, you can select โChannels & rolesโ and select โArgillaโ along with any of the other groups that are interesting to you. โArgillaโ will cover anything about argilla and distilabel, and it will give you access to 1) #argilla-distilabel-general, for all general discussions and news, and 2) #argilla-distilabel-help, for any usage-focused questions.
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [] | [] | [] | 2024-07-22T07:52:33.000Z | 2024-07-22T07:54:33.263Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 148,
"isFollowing": false
}
] | /posts/davidberenstein1957/312817668697546 | 672 | 1 |
765711664963513 | [
{
"type": "text",
"value": "๐ Good folks at ",
"raw": "๐ Good folks at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@nvidia",
"resource": null,
"url": null,
"href": null,
"user": "nvidia",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " just dropped: \"ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities\" ๐ง ๐ก",
"raw": " just dropped: \"ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities\" ๐ง ๐ก",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In the past few months, the open LLM community has made significant progress in releasing open models (Llama-3-70B-Instruct (",
"raw": "In the past few months, the open LLM community has made significant progress in releasing open models (Llama-3-70B-Instruct (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Meta",
"resource": null,
"url": null,
"href": null,
"user": "Meta",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " -AI) ๐ฆ, QWen2-72BInstruct (",
"raw": " -AI) ๐ฆ, QWen2-72BInstruct (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@AlibabaGroup",
"resource": null,
"url": null,
"href": null,
"user": "AlibabaGroup",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ) ๐, Nemotron-4-340B-Instruct (",
"raw": " ) ๐, Nemotron-4-340B-Instruct (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@nvidia",
"resource": null,
"url": null,
"href": null,
"user": "nvidia",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ) โ๏ธ, and Mixtral-8x22BInstruct-v0.1 (",
"raw": " ) โ๏ธ, and Mixtral-8x22BInstruct-v0.1 (",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@MistralAI",
"resource": null,
"url": null,
"href": null,
"user": "MistralAI",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ) ๐ช๏ธ) that are at par with proprietary models! ๐",
"raw": " ) ๐ช๏ธ) that are at par with proprietary models! ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "But top models like GPT-4 are still outperforming them in certain domains! ๐๐ช",
"raw": "But top models like GPT-4 are still outperforming them in certain domains! ๐๐ช",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This led us to having domain-focused open-LLMs (DeepSeek-Coder-V2 for coding and math ๐จโ๐ปโ, ChatQA 1.5 for conversational QA and retrieval-augmented generation (RAG) ๐ฌ๐, and InternVL 1.5 for vision-language tasks ๐ผ๏ธ๐ฃ๏ธ)",
"raw": "This led us to having domain-focused open-LLMs (DeepSeek-Coder-V2 for coding and math ๐จโ๐ปโ, ChatQA 1.5 for conversational QA and retrieval-augmented generation (RAG) ๐ฌ๐, and InternVL 1.5 for vision-language tasks ๐ผ๏ธ๐ฃ๏ธ)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The challenge that ChatQA 2 focuses on is of context length and RAG! ๐๐",
"raw": "The challenge that ChatQA 2 focuses on is of context length and RAG! ๐๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "These are the two capabilities essential for LLMs to process large volumes of information that cannot fit into a single prompt and are complementary to each other, depending on the downstream tasks and computational budgets. ๐งฉ๐",
"raw": "These are the two capabilities essential for LLMs to process large volumes of information that cannot fit into a single prompt and are complementary to each other, depending on the downstream tasks and computational budgets. ๐งฉ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The solution is a detailed continued training recipe to extend the context window of Llama3-70B-base from 8K to 128K tokens, along with a three-stage instruction tuning process to enhance the model's instruction-following, RAG performance, and long-context understanding capabilities. ๐๐ง",
"raw": "The solution is a detailed continued training recipe to extend the context window of Llama3-70B-base from 8K to 128K tokens, along with a three-stage instruction tuning process to enhance the model's instruction-following, RAG performance, and long-context understanding capabilities. ๐๐ง",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Paper: ",
"raw": "๐ Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2407.14482",
"resource": {
"type": "paper",
"id": "2407.14482",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2407.14482",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG\n Capabilities (2407.14482)"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The interesting thing to notice from benchmarks was, how good QWen 2 is out of the box! ๐โจ",
"raw": "The interesting thing to notice from benchmarks was, how good QWen 2 is out of the box! ๐โจ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | ๐ Good folks at @nvidia just dropped: "ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities" ๐ง ๐ก
In the past few months, the open LLM community has made significant progress in releasing open models (Llama-3-70B-Instruct (@Meta -AI) ๐ฆ, QWen2-72BInstruct (@AlibabaGroup ) ๐, Nemotron-4-340B-Instruct (@nvidia ) โ๏ธ, and Mixtral-8x22BInstruct-v0.1 (@MistralAI ) ๐ช๏ธ) that are at par with proprietary models! ๐
But top models like GPT-4 are still outperforming them in certain domains! ๐๐ช
This led us to having domain-focused open-LLMs (DeepSeek-Coder-V2 for coding and math ๐จโ๐ปโ, ChatQA 1.5 for conversational QA and retrieval-augmented generation (RAG) ๐ฌ๐, and InternVL 1.5 for vision-language tasks ๐ผ๏ธ๐ฃ๏ธ)
The challenge that ChatQA 2 focuses on is of context length and RAG! ๐๐
These are the two capabilities essential for LLMs to process large volumes of information that cannot fit into a single prompt and are complementary to each other, depending on the downstream tasks and computational budgets. ๐งฉ๐
The solution is a detailed continued training recipe to extend the context window of Llama3-70B-base from 8K to 128K tokens, along with a three-stage instruction tuning process to enhance the model's instruction-following, RAG performance, and long-context understanding capabilities. ๐๐ง
๐ Paper: https://huggingface.co/papers/2407.14482
The interesting thing to notice from benchmarks was, how good QWen 2 is out of the box! ๐โจ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/L-KtuQhHRZnj-Cr-Rrsp-.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61e8c67cee1e1440121f0240/9sb__WsO5mwmdHHa6xKNc.jpeg",
"fullname": "Meta World Peace",
"name": "Meta",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5
}
] | [
{
"reaction": "๐",
"users": [
"louisbrulenaudet"
],
"count": 1
}
] | 2024-07-22T05:08:52.000Z | 2024-07-22T05:08:52.486Z | [] | /posts/singhsidhukuldeep/765711664963513 | 702 | 0 |
281477193416730 | [
{
"type": "text",
"value": "Intelligence is all you need for roleplay.",
"raw": "Intelligence is all you need for roleplay.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Roleplay is overlooked as a special case of chain-of-thought, where context must be attended to and inferred state of the world and embodied minds must be persisted and evolved along credible narrative lines. LLMs are also being tasked to function as gamemasters. It's a challenging task which points to potential future benchmarks. The fact that the largest commercial LLMs are adept in generating text for roleplay intuitively implies that model intelligence is sufficient so long as it can generalize properly and pay attention to context without becoming confused.",
"raw": "Roleplay is overlooked as a special case of chain-of-thought, where context must be attended to and inferred state of the world and embodied minds must be persisted and evolved along credible narrative lines. LLMs are also being tasked to function as gamemasters. It's a challenging task which points to potential future benchmarks. The fact that the largest commercial LLMs are adept in generating text for roleplay intuitively implies that model intelligence is sufficient so long as it can generalize properly and pay attention to context without becoming confused.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This recent merge of mine composed using 3 academic fine-tunes, none of which were intended for roleplay, has survived the gauntlet of a Reddit post and appears to be a particularly strong 8B model when it comes to roleplay coherence.",
"raw": "This recent merge of mine composed using 3 academic fine-tunes, none of which were intended for roleplay, has survived the gauntlet of a Reddit post and appears to be a particularly strong 8B model when it comes to roleplay coherence.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B",
"resource": {
"type": "model",
"id": "grimjim/llama-3-Nephilim-v3-8B",
"discussionNum": null
},
"url": "https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " (bf16 weights)",
"raw": " (bf16 weights)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B-GGUF",
"resource": {
"type": "model",
"id": "grimjim/llama-3-Nephilim-v3-8B-GGUF",
"discussionNum": null
},
"url": "https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B-GGUF",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " (select quants)",
"raw": " (select quants)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Intelligence is all you need for roleplay.
Roleplay is overlooked as a special case of chain-of-thought, where context must be attended to and inferred state of the world and embodied minds must be persisted and evolved along credible narrative lines. LLMs are also being tasked to function as gamemasters. It's a challenging task which points to potential future benchmarks. The fact that the largest commercial LLMs are adept in generating text for roleplay intuitively implies that model intelligence is sufficient so long as it can generalize properly and pay attention to context without becoming confused.
This recent merge of mine composed using 3 academic fine-tunes, none of which were intended for roleplay, has survived the gauntlet of a Reddit post and appears to be a particularly strong 8B model when it comes to roleplay coherence.
https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B (bf16 weights)
https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B-GGUF (select quants) | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65c992424936ab38ecf706b0/aq7vuHFPO1S93fwJk0Cuq.jpeg",
"fullname": "Jim Lai",
"name": "grimjim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 163,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"ZeroWw",
"SerialKicked",
"Duttones",
"takeraparterer",
"Cohee",
"jed-tiotuico"
],
"count": 7
}
] | 2024-07-22T01:20:52.000Z | 2024-07-22T22:54:52.894Z | [
{
"avatarUrl": "/avatars/ab4dd498bbc0d5931f733b5a364fa765.svg",
"fullname": "Vitor Lima",
"name": "Duttones",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/grimjim/281477193416730 | 2,750 | 1 |
957104236129665 | [
{
"type": "text",
"value": "LivePortrait AI: Transform Static Photos into Talking Videos. Now supporting Video-to-Video conversion and Superior Expression Transfer at Remarkable Speed",
"raw": "LivePortrait AI: Transform Static Photos into Talking Videos. Now supporting Video-to-Video conversion and Superior Expression Transfer at Remarkable Speed",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "A new tutorial is anticipated to showcase the latest changes and features in V3, including Video-to-Video capabilities and additional enhancements.",
"raw": "A new tutorial is anticipated to showcase the latest changes and features in V3, including Video-to-Video capabilities and additional enhancements.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This post provides information for both Windows (local) and Cloud installations (Massed Compute, RunPod, and free Kaggle Account).",
"raw": "This post provides information for both Windows (local) and Cloud installations (Massed Compute, RunPod, and free Kaggle Account).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Windows Local Installation Tutorial ๏ธโคต๏ธ",
"raw": "๐ Windows Local Installation Tutorial ๏ธโคต๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โถ๏ธ ",
"raw": "โถ๏ธ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/FPtpNrmuwXk",
"resource": null,
"url": null,
"href": "https://youtu.be/FPtpNrmuwXk",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Cloud (no-GPU) Installations Tutorial for Massed Compute, RunPod and free Kaggle Account ๏ธโคต๏ธ",
"raw": "๐ Cloud (no-GPU) Installations Tutorial for Massed Compute, RunPod and free Kaggle Account ๏ธโคต๏ธ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โถ๏ธ ",
"raw": "โถ๏ธ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/wG7oPp01COg",
"resource": null,
"url": null,
"href": "https://youtu.be/wG7oPp01COg",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The V3 update introduces video-to-video functionality. If you're seeking a one-click installation method for LivePortrait, an open-source zero-shot image-to-animation application on Windows, for local use, this tutorial is essential. It introduces the cutting-edge image-to-animation open-source generator Live Portrait. Simply provide a static image and a driving video to create an impressive animation in seconds. LivePortrait is incredibly fast and adept at preserving facial expressions from the input video. The results are truly astonishing.",
"raw": "The V3 update introduces video-to-video functionality. If you're seeking a one-click installation method for LivePortrait, an open-source zero-shot image-to-animation application on Windows, for local use, this tutorial is essential. It introduces the cutting-edge image-to-animation open-source generator Live Portrait. Simply provide a static image and a driving video to create an impressive animation in seconds. LivePortrait is incredibly fast and adept at preserving facial expressions from the input video. The results are truly astonishing.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "With the V3 update adding video-to-video functionality, those interested in using LivePortrait but lacking a powerful GPU, using a Mac, or preferring cloud-based solutions will find this tutorial invaluable. It guides you through the one-click installation and usage of LivePortrait on #MassedCompute, #RunPod, and even a free #Kaggle account. After following this tutorial, you'll find running LivePortrait on cloud services as straightforward as running it locally. LivePortrait is the latest state-of-the-art static image to talking animation generator, surpassing even paid services in both speed and quality.",
"raw": "With the V3 update adding video-to-video functionality, those interested in using LivePortrait but lacking a powerful GPU, using a Mac, or preferring cloud-based solutions will find this tutorial invaluable. It guides you through the one-click installation and usage of LivePortrait on #MassedCompute, #RunPod, and even a free #Kaggle account. After following this tutorial, you'll find running LivePortrait on cloud services as straightforward as running it locally. LivePortrait is the latest state-of-the-art static image to talking animation generator, surpassing even paid services in both speed and quality.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | LivePortrait AI: Transform Static Photos into Talking Videos. Now supporting Video-to-Video conversion and Superior Expression Transfer at Remarkable Speed
A new tutorial is anticipated to showcase the latest changes and features in V3, including Video-to-Video capabilities and additional enhancements.
This post provides information for both Windows (local) and Cloud installations (Massed Compute, RunPod, and free Kaggle Account).
๐ Windows Local Installation Tutorial ๏ธโคต๏ธ
โถ๏ธ https://youtu.be/FPtpNrmuwXk
๐ Cloud (no-GPU) Installations Tutorial for Massed Compute, RunPod and free Kaggle Account ๏ธโคต๏ธ
โถ๏ธ https://youtu.be/wG7oPp01COg
The V3 update introduces video-to-video functionality. If you're seeking a one-click installation method for LivePortrait, an open-source zero-shot image-to-animation application on Windows, for local use, this tutorial is essential. It introduces the cutting-edge image-to-animation open-source generator Live Portrait. Simply provide a static image and a driving video to create an impressive animation in seconds. LivePortrait is incredibly fast and adept at preserving facial expressions from the input video. The results are truly astonishing.
With the V3 update adding video-to-video functionality, those interested in using LivePortrait but lacking a powerful GPU, using a Mac, or preferring cloud-based solutions will find this tutorial invaluable. It guides you through the one-click installation and usage of LivePortrait on #MassedCompute, #RunPod, and even a free #Kaggle account. After following this tutorial, you'll find running LivePortrait on cloud services as straightforward as running it locally. LivePortrait is the latest state-of-the-art static image to talking animation generator, surpassing even paid services in both speed and quality.
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gรถzรผkara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/XgzWk_uT2DvCVFtd2L__P.mp4"
}
] | [] | [
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"Duartebrizza",
"John6666",
"Ramikan-BR",
"attyru",
"Dihelson",
"Blane187",
"Rsln",
"osanseviero"
],
"count": 9
},
{
"reaction": "๐ฅ",
"users": [
"MonsterMMORPG",
"Ramikan-BR",
"bakudas",
"Dihelson",
"abdeljalilELmajjodi",
"Rsln",
"0xjorgev"
],
"count": 7
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"Ramikan-BR",
"yutianCCNU",
"bakudas",
"NHLOCAL"
],
"count": 5
},
{
"reaction": "๐คฏ",
"users": [
"MonsterMMORPG",
"Ramikan-BR",
"bakudas"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "โค๏ธ",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "๐ง ",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "๐ค",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "๐ค",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
},
{
"reaction": "โ",
"users": [
"MonsterMMORPG",
"Ramikan-BR"
],
"count": 2
}
] | 2024-07-21T22:05:35.000Z | 2024-07-22T10:08:03.626Z | [
{
"avatarUrl": "/avatars/d92c459b18a4aa5642e5c4bd3b8e3fe4.svg",
"fullname": "Mendonca",
"name": "Dihelson",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gรถzรผkara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
}
] | /posts/MonsterMMORPG/957104236129665 | 5,191 | 2 |
247059445469957 | [
{
"type": "text",
"value": "Hi HF Community!๐ค",
"raw": "Hi HF Community!๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "As you may know, small language models like the SmolLM series have been on the rise recently: although it may not be completely fair to compare them with larger models, my thought was that we could build a space where these SLMs could compete against each other in a chat arena, and here is what came out: ",
"raw": "As you may know, small language models like the SmolLM series have been on the rise recently: although it may not be completely fair to compare them with larger models, my thought was that we could build a space where these SLMs could compete against each other in a chat arena, and here is what came out: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/as-cle-bert/smolLM-arena",
"resource": {
"type": "space",
"id": "as-cle-bert/smolLM-arena",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/as-cle-bert/smolLM-arena",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ๐",
"raw": " ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Even though there might be some little issues and hiccups due to GPU resources allocation, this space offers you the possibility to compare and play around with several Small Language Models, coming also with a leaderboard page (make sure to refresh it for the latest updates!)๐",
"raw": "Even though there might be some little issues and hiccups due to GPU resources allocation, this space offers you the possibility to compare and play around with several Small Language Models, coming also with a leaderboard page (make sure to refresh it for the latest updates!)๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Have fun!๐",
"raw": "Have fun!๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Hi HF Community!๐ค
As you may know, small language models like the SmolLM series have been on the rise recently: although it may not be completely fair to compare them with larger models, my thought was that we could build a space where these SLMs could compete against each other in a chat arena, and here is what came out: https://huggingface.co/spaces/as-cle-bert/smolLM-arena ๐
Even though there might be some little issues and hiccups due to GPU resources allocation, this space offers you the possibility to compare and play around with several Small Language Models, coming also with a leaderboard page (make sure to refresh it for the latest updates!)๐
Have fun!๐ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
} | [] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"clfegg",
"osanseviero"
],
"count": 2
}
] | 2024-07-21T19:19:15.000Z | 2024-07-22T13:18:10.736Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/644cb09a22d211df644a0a6c/v0EHypMU4X3Oxxf3cao_O.png",
"fullname": "Jรบlio Cรฉsar",
"name": "Ramikan-BR",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 10,
"isFollowing": false
},
{
"avatarUrl": "/avatars/d92c459b18a4aa5642e5c4bd3b8e3fe4.svg",
"fullname": "Mendonca",
"name": "Dihelson",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e330e7edc2f7306e252448/ucpk9c8x0UafGM4mXTrRy.jpeg",
"fullname": "Astra Clelia Bertelli",
"name": "as-cle-bert",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 639,
"isFollowing": false
}
] | /posts/as-cle-bert/247059445469957 | 562 | 5 |
851413061927294 | [
{
"type": "text",
"value": "Introducing OpenCHAT mini: a lightweight, fast, and unlimited version of OpenGPT 4o.",
"raw": "Introducing OpenCHAT mini: a lightweight, fast, and unlimited version of OpenGPT 4o.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/KingNish/OpenCHAT-mini2",
"resource": {
"type": "space",
"id": "KingNish/OpenCHAT-mini2",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/KingNish/OpenCHAT-mini2",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It has unlimited web search, vision and image generation.",
"raw": "It has unlimited web search, vision and image generation.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Please take a look and share your review. Thank you! ๐ค",
"raw": "Please take a look and share your review. Thank you! ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Introducing OpenCHAT mini: a lightweight, fast, and unlimited version of OpenGPT 4o.
https://huggingface.co/spaces/KingNish/OpenCHAT-mini2
It has unlimited web search, vision and image generation.
Please take a look and share your review. Thank you! ๐ค | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6612aedf09f16e7347dfa7e1/bPYjBXCedY_1fSIPjoBTY.jpeg",
"fullname": "Nishith Jain",
"name": "KingNish",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1072,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"JackLiuAngel",
"abbotk",
"Blane187",
"Trillionare",
"SvCy",
"louisbrulenaudet",
"AdamyaG",
"noelpil",
"peterciank"
],
"count": 10
},
{
"reaction": "๐",
"users": [
"Wuayker"
],
"count": 1
}
] | 2024-07-21T16:46:28.000Z | 2024-08-19T07:07:10.277Z | [
{
"avatarUrl": "/avatars/e64b02b16f82f7062248393ab51761d0.svg",
"fullname": "JackLiu",
"name": "JackLiuAngel",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
},
{
"avatarUrl": "/avatars/de5465ed2cc75f84f772bc2e595f5740.svg",
"fullname": "Levi Zoesch",
"name": "xbelevi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6612aedf09f16e7347dfa7e1/bPYjBXCedY_1fSIPjoBTY.jpeg",
"fullname": "Nishith Jain",
"name": "KingNish",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1072,
"isFollowing": false
},
{
"avatarUrl": "/avatars/d3290da7c6de68101e81208688f8f5a1.svg",
"fullname": "Timofey",
"name": "nbv123",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "/avatars/937bb0ad65d807c6ea24499dc4544fa4.svg",
"fullname": "Pashangh Irani",
"name": "Orion34523",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/KingNish/851413061927294 | 5,869 | 7 |
677543233180235 | [
{
"type": "text",
"value": "Reading ",
"raw": "Reading ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2212.11279",
"resource": {
"type": "paper",
"id": "2212.11279",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2212.11279",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Annotated History of Modern AI and Deep Learning (2212.11279)"
},
{
"type": "text",
"value": " by Jรผrgen Schmidhuber.",
"raw": " by Jรผrgen Schmidhuber.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "From the abstract: \"A modern history of AI will emphasize breakthroughs outside of the focus of traditional AI text books, in particular, mathematical foundations of today's NNs such as the chain rule (1676), the first NNs (linear regression, circa 1800), and the first working deep learners (1965-).\"",
"raw": "From the abstract: \"A modern history of AI will emphasize breakthroughs outside of the focus of traditional AI text books, in particular, mathematical foundations of today's NNs such as the chain rule (1676), the first NNs (linear regression, circa 1800), and the first working deep learners (1965-).\"",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Reading https://huggingface.co/papers/2212.11279 by Jรผrgen Schmidhuber.
From the abstract: "A modern history of AI will emphasize breakthroughs outside of the focus of traditional AI text books, in particular, mathematical foundations of today's NNs such as the chain rule (1676), the first NNs (linear regression, circa 1800), and the first working deep learners (1965-)." | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b695dcd3df8086e5ed7c89/06Toh65jDEz3WJbIM6ZmZ.jpeg",
"fullname": "Adam Fields",
"name": "adamelliotfields",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 9,
"isFollowing": false
} | [] | [] | [] | 2024-07-21T13:41:54.000Z | 2024-07-21T13:42:25.801Z | [] | /posts/adamelliotfields/677543233180235 | 586 | 0 |
332303861025570 | [
{
"type": "text",
"value": "You can now find the OBIS - Ocean Biodiversity Information System, on Hugging Face with 128M rows, via the Datasets package stream ๐ค",
"raw": "You can now find the OBIS - Ocean Biodiversity Information System, on Hugging Face with 128M rows, via the Datasets package stream ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The datasets are integrated, allowing seamless search and mapping by species name, higher taxonomic level, geographic area, depth, time, and environmental parameters. OBIS originates from the Census of Marine Life (2000-2010) and was adopted as a project under IOC-UNESCOโs International Oceanographic Data and Information (IODE) programme in 2009.",
"raw": "The datasets are integrated, allowing seamless search and mapping by species name, higher taxonomic level, geographic area, depth, time, and environmental parameters. OBIS originates from the Census of Marine Life (2000-2010) and was adopted as a project under IOC-UNESCOโs International Oceanographic Data and Information (IODE) programme in 2009.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Collectively, they have provided over 45 million observations of nearly 120,000 marine species, ranging from bacteria to whales, from the surface to 10,900 meters depth, and from the tropics to the poles.",
"raw": "Collectively, they have provided over 45 million observations of nearly 120,000 marine species, ranging from bacteria to whales, from the surface to 10,900 meters depth, and from the tropics to the poles.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Link to the dataset: ",
"raw": "Link to the dataset: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/louisbrulenaudet/obis",
"resource": {
"type": "dataset",
"id": "louisbrulenaudet/obis",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/louisbrulenaudet/obis",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | You can now find the OBIS - Ocean Biodiversity Information System, on Hugging Face with 128M rows, via the Datasets package stream ๐ค
The datasets are integrated, allowing seamless search and mapping by species name, higher taxonomic level, geographic area, depth, time, and environmental parameters. OBIS originates from the Census of Marine Life (2000-2010) and was adopted as a project under IOC-UNESCOโs International Oceanographic Data and Information (IODE) programme in 2009.
Collectively, they have provided over 45 million observations of nearly 120,000 marine species, ranging from bacteria to whales, from the surface to 10,900 meters depth, and from the tropics to the poles.
Link to the dataset: https://huggingface.co/datasets/louisbrulenaudet/obis | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6459fa0f5b3111fbe83286e1/UhCa7JNbtTjC6dgOjZtH0.jpeg",
"fullname": "Louis Brulรฉ Naudet",
"name": "louisbrulenaudet",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 176,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6459fa0f5b3111fbe83286e1/JhUbEswcpw6ROpeDFkal6.jpeg"
}
] | [] | [] | 2024-07-21T07:55:07.000Z | 2024-07-21T07:55:07.031Z | [] | /posts/louisbrulenaudet/332303861025570 | 869 | 0 |
876153321993040 | [
{
"type": "text",
"value": "New Trends in LLM: Overview with Focus on xLLM",
"raw": "New Trends in LLM: Overview with Focus on xLLM",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Read full article and download PowerPoint presentation at ",
"raw": "Read full article and download PowerPoint presentation at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://mltblog.com/3KqlNO7",
"resource": null,
"url": null,
"href": "https://mltblog.com/3KqlNO7",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If you ever wondered how xLLM is different from other LLM and RAG architectures, what are the foundational changes that make it appealing to fortune 100 companies, and what are the innovations being copied by competitors, read on. In this article, I share the latest trends and provide a high-level summary of xLLM, describing the ground-breaking technologies that make it unique, faster, and better for professional users and experts. In particular, I share my PowerPoint presentation on the topic.",
"raw": "If you ever wondered how xLLM is different from other LLM and RAG architectures, what are the foundational changes that make it appealing to fortune 100 companies, and what are the innovations being copied by competitors, read on. In this article, I share the latest trends and provide a high-level summary of xLLM, describing the ground-breaking technologies that make it unique, faster, and better for professional users and experts. In particular, I share my PowerPoint presentation on the topic.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Search is becoming hot again, this time powered by RAG and LLMs rather than PageRank. New LLMs may not use transformers, and energy-efficient implementations are gaining popularity, with an attempt to lower GPU usage, and thus costs. Yet all but xLLM still rely on Blackbox neural networks.",
"raw": "Search is becoming hot again, this time powered by RAG and LLMs rather than PageRank. New LLMs may not use transformers, and energy-efficient implementations are gaining popularity, with an attempt to lower GPU usage, and thus costs. Yet all but xLLM still rely on Blackbox neural networks.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Great evaluation metrics remain elusive and will remain so probably forever: in the end, LLMs, just like clustering, are part of unsupervised learning. Two users looking at a non-trivial dataset will never agree on what the โtrueโ underlying cluster structure is. Because โtrueโ is meaningless in this context. The same applies to LLMs. With some exceptions: when used for predictive analytics, that is, supervised learning, it is possible to tell which LLM is best in absolute terms (to some extent; it also depends on the dataset).",
"raw": "Great evaluation metrics remain elusive and will remain so probably forever: in the end, LLMs, just like clustering, are part of unsupervised learning. Two users looking at a non-trivial dataset will never agree on what the โtrueโ underlying cluster structure is. Because โtrueโ is meaningless in this context. The same applies to LLMs. With some exceptions: when used for predictive analytics, that is, supervised learning, it is possible to tell which LLM is best in absolute terms (to some extent; it also depends on the dataset).",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | New Trends in LLM: Overview with Focus on xLLM
Read full article and download PowerPoint presentation at https://mltblog.com/3KqlNO7
If you ever wondered how xLLM is different from other LLM and RAG architectures, what are the foundational changes that make it appealing to fortune 100 companies, and what are the innovations being copied by competitors, read on. In this article, I share the latest trends and provide a high-level summary of xLLM, describing the ground-breaking technologies that make it unique, faster, and better for professional users and experts. In particular, I share my PowerPoint presentation on the topic.
Search is becoming hot again, this time powered by RAG and LLMs rather than PageRank. New LLMs may not use transformers, and energy-efficient implementations are gaining popularity, with an attempt to lower GPU usage, and thus costs. Yet all but xLLM still rely on Blackbox neural networks.
Great evaluation metrics remain elusive and will remain so probably forever: in the end, LLMs, just like clustering, are part of unsupervised learning. Two users looking at a non-trivial dataset will never agree on what the โtrueโ underlying cluster structure is. Because โtrueโ is meaningless in this context. The same applies to LLMs. With some exceptions: when used for predictive analytics, that is, supervised learning, it is possible to tell which LLM is best in absolute terms (to some extent; it also depends on the dataset).
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/669c89e98f2dbc203f9e74ab/higvnXEHeo_Ig2bgTpn47.png",
"fullname": "Vincent Granville",
"name": "vincentg64",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 17,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/669c89e98f2dbc203f9e74ab/FM4OPaZa5zsL_NJYbr9pm.png"
}
] | [] | [] | 2024-07-21T04:55:17.000Z | 2024-07-21T04:56:40.618Z | [] | /posts/vincentg64/876153321993040 | 466 | 0 |
933947307384174 | [
{
"type": "text",
"value": "I can do Time Series Predictions with Swarm Algorithms! When all you know how to use is a hammer, everything looks like a nail. An LLM model is a hammer. It is not a deity. It has computational and mathematical limitations. Very big ones. Swarm Algorithms do not have this same problem. They are like a screwdriver. The screwdriver is not better than the hammer, both are useful. Why are LLM models bad at things like Time Series Predictions and Function Calls? Because those are jobs better fit for a screwdriver as opposed to a hammer. ",
"raw": "I can do Time Series Predictions with Swarm Algorithms! When all you know how to use is a hammer, everything looks like a nail. An LLM model is a hammer. It is not a deity. It has computational and mathematical limitations. Very big ones. Swarm Algorithms do not have this same problem. They are like a screwdriver. The screwdriver is not better than the hammer, both are useful. Why are LLM models bad at things like Time Series Predictions and Function Calls? Because those are jobs better fit for a screwdriver as opposed to a hammer. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I can do Time Series Predictions with Swarm Algorithms! When all you know how to use is a hammer, everything looks like a nail. An LLM model is a hammer. It is not a deity. It has computational and mathematical limitations. Very big ones. Swarm Algorithms do not have this same problem. They are like a screwdriver. The screwdriver is not better than the hammer, both are useful. Why are LLM models bad at things like Time Series Predictions and Function Calls? Because those are jobs better fit for a screwdriver as opposed to a hammer. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 148,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64274b69ba6cef0a6ebb0fd6/IBnV8-RjcZwM4oR3Bk6Jm.png"
}
] | [] | [] | 2024-07-21T00:33:23.000Z | 2024-07-21T02:40:31.690Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6316fb937b0ee0136e5f1220/poHBoJ7QAF_s2CCaosdvQ.jpeg",
"fullname": "Firstname Lastname",
"name": "takeraparterer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
}
] | /posts/TuringsSolutions/933947307384174 | 515 | 1 |
295019542040831 | [
{
"type": "text",
"value": "Remember when you had a few hundred rows of data that could easily be opened in Excel. ๐",
"raw": "Remember when you had a few hundred rows of data that could easily be opened in Excel. ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Well, we are far from that with billion-parameter LLMs trained on trillions of tokens. ๐",
"raw": "Well, we are far from that with billion-parameter LLMs trained on trillions of tokens. ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@Microsoft",
"resource": null,
"url": null,
"href": null,
"user": "Microsoft",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " wants to bridge that using \"SpreadsheetLLM\": Encoding Spreadsheets for Large Language Models. ๐ค๐",
"raw": " wants to bridge that using \"SpreadsheetLLM\": Encoding Spreadsheets for Large Language Models. ๐ค๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "While it sounds simple, Spreadsheets, with their extensive two-dimensional grids, various layouts, and diverse formatting options, present notable challenges for large language models (LLMs). ๐ง",
"raw": "While it sounds simple, Spreadsheets, with their extensive two-dimensional grids, various layouts, and diverse formatting options, present notable challenges for large language models (LLMs). ๐ง",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "They initially propose a vanilla serialization approach that incorporates cell addresses, values, and formats. However, this approach is limited by LLMs' token constraints, making it impractical for most applications. โ",
"raw": "They initially propose a vanilla serialization approach that incorporates cell addresses, values, and formats. However, this approach is limited by LLMs' token constraints, making it impractical for most applications. โ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Solution... A SheetCompressor, an innovative encoding framework that compresses spreadsheets effectively for LLMs. ๐ง",
"raw": "Solution... A SheetCompressor, an innovative encoding framework that compresses spreadsheets effectively for LLMs. ๐ง",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It comprises three modules: ",
"raw": "It comprises three modules: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "1๏ธโฃ Structural-anchor-based compression",
"raw": "1๏ธโฃ Structural-anchor-based compression",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "2๏ธโฃ Inverse index translation",
"raw": "2๏ธโฃ Inverse index translation",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "3๏ธโฃ Data-format-aware aggregation",
"raw": "3๏ธโฃ Data-format-aware aggregation",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "It significantly improves performance in spreadsheet table detection task, outperforming the vanilla approach by 25.6% in GPT4's in-context learning setting. ๐",
"raw": "It significantly improves performance in spreadsheet table detection task, outperforming the vanilla approach by 25.6% in GPT4's in-context learning setting. ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Sounds exciting, sadly no code, models OR datasets are released. ๐",
"raw": "Sounds exciting, sadly no code, models OR datasets are released. ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Moreover, there is a lot of research in encoding 2D position embeddings and this work has not been benchmarked against that! ๐ง",
"raw": "Moreover, there is a lot of research in encoding 2D position embeddings and this work has not been benchmarked against that! ๐ง",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2407.09025",
"resource": {
"type": "paper",
"id": "2407.09025",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2407.09025",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "SpreadsheetLLM: Encoding Spreadsheets for Large Language Models (2407.09025)"
}
] | Remember when you had a few hundred rows of data that could easily be opened in Excel. ๐
Well, we are far from that with billion-parameter LLMs trained on trillions of tokens. ๐
@Microsoft wants to bridge that using "SpreadsheetLLM": Encoding Spreadsheets for Large Language Models. ๐ค๐
While it sounds simple, Spreadsheets, with their extensive two-dimensional grids, various layouts, and diverse formatting options, present notable challenges for large language models (LLMs). ๐ง
They initially propose a vanilla serialization approach that incorporates cell addresses, values, and formats. However, this approach is limited by LLMs' token constraints, making it impractical for most applications. โ
Solution... A SheetCompressor, an innovative encoding framework that compresses spreadsheets effectively for LLMs. ๐ง
It comprises three modules:
1๏ธโฃ Structural-anchor-based compression
2๏ธโฃ Inverse index translation
3๏ธโฃ Data-format-aware aggregation
It significantly improves performance in spreadsheet table detection task, outperforming the vanilla approach by 25.6% in GPT4's in-context learning setting. ๐
Sounds exciting, sadly no code, models OR datasets are released. ๐
Moreover, there is a lot of research in encoding 2D position embeddings and this work has not been benchmarked against that! ๐ง
Paper: https://huggingface.co/papers/2407.09025 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png",
"fullname": "Kuldeep Singh Sidhu",
"name": "singhsidhukuldeep",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 197,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/36ySC-N5igRF3LbgJJo35.jpeg"
}
] | [] | [] | 2024-07-20T19:45:25.000Z | 2024-07-20T19:45:25.923Z | [] | /posts/singhsidhukuldeep/295019542040831 | 455 | 0 |
897353773831828 | [
{
"type": "text",
"value": "Wanted to share some brief comparison of early training of the two-stage PixArt e-diffi pipeline.",
"raw": "Wanted to share some brief comparison of early training of the two-stage PixArt e-diffi pipeline.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "On the left, we have the full stage 1 model generating all 50 steps on its own. This model is not trained at all on the final 400 timesteps of the schedule. On the right, we have the combined pipeline where stage 1 output is fed into stage 2.",
"raw": "On the left, we have the full stage 1 model generating all 50 steps on its own. This model is not trained at all on the final 400 timesteps of the schedule. On the right, we have the combined pipeline where stage 1 output is fed into stage 2.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Currently, the difference is rather minimal - but the small details are reliably improved. ",
"raw": "Currently, the difference is rather minimal - but the small details are reliably improved. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "In the watercolour example, the full generation (right side) has the texture of the watercolour paper, and the partial generation (left side) has a more flat digital art look to it.",
"raw": "In the watercolour example, the full generation (right side) has the texture of the watercolour paper, and the partial generation (left side) has a more flat digital art look to it.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For the blacksmith robot, the sparks emitted from the operation have a more natural blend to it. The robot's clothing appears to be undergoing some interesting transformation due to the undertrained state of the weights.",
"raw": "For the blacksmith robot, the sparks emitted from the operation have a more natural blend to it. The robot's clothing appears to be undergoing some interesting transformation due to the undertrained state of the weights.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The medieval battle image has improved blades of grass, settling dust particles, and fabric in the flag.",
"raw": "The medieval battle image has improved blades of grass, settling dust particles, and fabric in the flag.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The stage 2 model being trained does not seem to resolve any global coherence issues despite having 400 steps in its schedule, but it still noticeably changes the local coherence, eg. the consistency of fabrics and metals can be improved through stage 2 fine-tuning.",
"raw": "The stage 2 model being trained does not seem to resolve any global coherence issues despite having 400 steps in its schedule, but it still noticeably changes the local coherence, eg. the consistency of fabrics and metals can be improved through stage 2 fine-tuning.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The stage 1 model is the workhorse of the output, as expected with the 600 timesteps in its schedule. Additional fine-tuning of this model will improve the overall global coherence of the outputs. I wish I could say it will not impact fine details, but a lot of that does seem to be carried forward.",
"raw": "The stage 1 model is the workhorse of the output, as expected with the 600 timesteps in its schedule. Additional fine-tuning of this model will improve the overall global coherence of the outputs. I wish I could say it will not impact fine details, but a lot of that does seem to be carried forward.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "As noted, these models are undertrained due to a lack of compute. But they are a promising look toward what an e-diffi PixArt might be capable of.",
"raw": "As noted, these models are undertrained due to a lack of compute. But they are a promising look toward what an e-diffi PixArt might be capable of.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Does anyone want to build this out fully with me?",
"raw": "Does anyone want to build this out fully with me?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Wanted to share some brief comparison of early training of the two-stage PixArt e-diffi pipeline.
On the left, we have the full stage 1 model generating all 50 steps on its own. This model is not trained at all on the final 400 timesteps of the schedule. On the right, we have the combined pipeline where stage 1 output is fed into stage 2.
Currently, the difference is rather minimal - but the small details are reliably improved.
In the watercolour example, the full generation (right side) has the texture of the watercolour paper, and the partial generation (left side) has a more flat digital art look to it.
For the blacksmith robot, the sparks emitted from the operation have a more natural blend to it. The robot's clothing appears to be undergoing some interesting transformation due to the undertrained state of the weights.
The medieval battle image has improved blades of grass, settling dust particles, and fabric in the flag.
The stage 2 model being trained does not seem to resolve any global coherence issues despite having 400 steps in its schedule, but it still noticeably changes the local coherence, eg. the consistency of fabrics and metals can be improved through stage 2 fine-tuning.
The stage 1 model is the workhorse of the output, as expected with the 600 timesteps in its schedule. Additional fine-tuning of this model will improve the overall global coherence of the outputs. I wish I could say it will not impact fine details, but a lot of that does seem to be carried forward.
As noted, these models are undertrained due to a lack of compute. But they are a promising look toward what an e-diffi PixArt might be capable of.
Does anyone want to build this out fully with me? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641caf6c043963b1c0a27256/CD7ktICDsldVJlpiND5kl.png",
"fullname": "PseudoTerminal X",
"name": "bghira",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 100,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/641caf6c043963b1c0a27256/Oxrq0_SpXhQrpdVrubHEQ.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/641caf6c043963b1c0a27256/WkIj-nTKuHTynI10jseOQ.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/641caf6c043963b1c0a27256/bYohQ1rCDrkjh2bXItJCa.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"datnt114",
"GPT007",
"Sylvestre"
],
"count": 3
},
{
"reaction": "๐",
"users": [
"FilipeR",
"tcreamype"
],
"count": 2
},
{
"reaction": "๐",
"users": [
"afondiel",
"GPT007"
],
"count": 2
}
] | 2024-07-20T18:48:05.000Z | 2024-07-21T02:00:18.824Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641caf6c043963b1c0a27256/CD7ktICDsldVJlpiND5kl.png",
"fullname": "PseudoTerminal X",
"name": "bghira",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 100,
"isFollowing": false
}
] | /posts/bghira/897353773831828 | 4,334 | 1 |
715230803020940 | [
{
"type": "text",
"value": "Check this out on Poe",
"raw": "Check this out on Poe",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "StableDiffusionXL Prompt Generator",
"raw": "StableDiffusionXL Prompt Generator",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ": ",
"raw": ": ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://poe.com/sdpai",
"resource": null,
"url": null,
"href": "https://poe.com/sdpai",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Check this out on Poe
StableDiffusionXL Prompt Generator
: https://poe.com/sdpai | {
"avatarUrl": "/avatars/d773a7dd9b706759131fc482ab71ced7.svg",
"fullname": "[email protected]",
"name": "Taf2023",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 8,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64841af2295256340e4b9f88/nlhsjHEDVK3p21f_6nTVp.webp"
}
] | [] | [] | 2024-07-20T13:37:25.000Z | 2024-07-20T13:37:25.798Z | [] | /posts/Taf2023/715230803020940 | 392 | 0 |
821875507658799 | [
{
"type": "text",
"value": "We Got a Job Offer in SECourses Discord Channel Related to AI (Stable Diffusion)",
"raw": "We Got a Job Offer in SECourses Discord Channel Related to AI (Stable Diffusion)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This is kind of announcement sharing. I think the offer looks decent.",
"raw": "This is kind of announcement sharing. I think the offer looks decent.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "For who doesnโt know our channel here : ",
"raw": "For who doesnโt know our channel here : ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://discord.com/servers/software-engineering-courses-secourses-772774097734074388",
"resource": null,
"url": null,
"href": "https://discord.com/servers/software-engineering-courses-secourses-772774097734074388",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "The job offer is in ai-related-job-offers sub-channel",
"raw": "The job offer is in ai-related-job-offers sub-channel",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | We Got a Job Offer in SECourses Discord Channel Related to AI (Stable Diffusion)
This is kind of announcement sharing. I think the offer looks decent.
For who doesnโt know our channel here : https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
The job offer is in ai-related-job-offers sub-channel | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gรถzรผkara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 368,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/2mJOUgm4YztZ10A6jXRwU.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"Skrypt"
],
"count": 1
}
] | 2024-07-20T02:17:21.000Z | 2024-07-20T02:17:21.964Z | [] | /posts/MonsterMMORPG/821875507658799 | 764 | 0 |
955927481434060 | [
{
"type": "text",
"value": "I like training LoRAs",
"raw": "I like training LoRAs",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/nroggendorff/create-diffusers-dataset",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/nroggendorff/create-diffusers-dataset",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | I like training LoRAs
https://huggingface.co/blog/nroggendorff/create-diffusers-dataset | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659f000b83abded48e190901/BnXL_XYbVX6PHngfQLECW.png",
"fullname": "Noa Roggendorff",
"name": "nroggendorff",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ฅ",
"users": [
"Malumatra",
"Blane187",
"merve",
"AtAndDev",
"Nymbo",
"prithivMLmods"
],
"count": 6
},
{
"reaction": "๐",
"users": [
"John6666",
"LeroyDyer",
"merve",
"AtAndDev",
"Nymbo"
],
"count": 5
},
{
"reaction": "๐",
"users": [
"ZeroWw",
"AtAndDev",
"Nymbo"
],
"count": 3
}
] | 2024-07-19T21:12:30.000Z | 2024-07-19T21:12:30.724Z | [] | /posts/nroggendorff/955927481434060 | 2,784 | 0 |
877209328076357 | [
{
"type": "text",
"value": "Websites slam doors on AI data harvesting ๐ช๐",
"raw": "Websites slam doors on AI data harvesting ๐ช๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "New study \"Consent in Crisis: The Rapid Decline of the AI Data Commons\" reveals a rapid decline in open web access.",
"raw": "New study \"Consent in Crisis: The Rapid Decline of the AI Data Commons\" reveals a rapid decline in open web access.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Key findings from 14,000 web domains audit:",
"raw": "Key findings from 14,000 web domains audit:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- +5% of three common data sets (C4, RefinedWeb and Dolma) now fully restricted, +25% of the highest-quality sources now fully restricted",
"raw": "- +5% of three common data sets (C4, RefinedWeb and Dolma) now fully restricted, +25% of the highest-quality sources now fully restricted",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- 45% of C4 restricted by Terms of Service",
"raw": "- 45% of C4 restricted by Terms of Service",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Noteworthy trends:",
"raw": "Noteworthy trends:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ซ๐ OpenAI banned 2x more than any other company",
"raw": "๐ซ๐ OpenAI banned 2x more than any other company",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ฐ๐ News sites leading restrictions: 45% of tokens off-limits",
"raw": "๐ฐ๐ News sites leading restrictions: 45% of tokens off-limits",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Two quotes in the NYT piece to ponder: ",
"raw": "Two quotes in the NYT piece to ponder: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โUnsurprisingly, weโre seeing blowback from data creators after the text, images and videos theyโve shared online are used to develop commercial systems that sometimes directly threaten their livelihoods.โ โ ",
"raw": "โUnsurprisingly, weโre seeing blowback from data creators after the text, images and videos theyโve shared online are used to develop commercial systems that sometimes directly threaten their livelihoods.โ โ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@yjernite",
"resource": null,
"url": null,
"href": null,
"user": "yjernite",
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "โMajor tech companies already have all of the data. Changing the license on the data doesnโt retroactively revoke that permission, and the primary impact is on later-arriving actors, who are typically either smaller start-ups or researchers.โ โ ",
"raw": "โMajor tech companies already have all of the data. Changing the license on the data doesnโt retroactively revoke that permission, and the primary impact is on later-arriving actors, who are typically either smaller start-ups or researchers.โ โ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@stellaathena",
"resource": null,
"url": null,
"href": null,
"user": "stellaathena",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Dive into the research: ",
"raw": "๐ Dive into the research: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.dataprovenance.org/consent-in-crisis-paper",
"resource": null,
"url": null,
"href": "https://www.dataprovenance.org/consent-in-crisis-paper",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "๐ Read the NYT story: ",
"raw": "๐ Read the NYT story: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html",
"resource": null,
"url": null,
"href": "https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "#AIEthics #DataPrivacy",
"raw": "#AIEthics #DataPrivacy",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Websites slam doors on AI data harvesting ๐ช๐
New study "Consent in Crisis: The Rapid Decline of the AI Data Commons" reveals a rapid decline in open web access.
Key findings from 14,000 web domains audit:
- +5% of three common data sets (C4, RefinedWeb and Dolma) now fully restricted, +25% of the highest-quality sources now fully restricted
- 45% of C4 restricted by Terms of Service
Noteworthy trends:
๐ซ๐ OpenAI banned 2x more than any other company
๐ฐ๐ News sites leading restrictions: 45% of tokens off-limits
Two quotes in the NYT piece to ponder:
โUnsurprisingly, weโre seeing blowback from data creators after the text, images and videos theyโve shared online are used to develop commercial systems that sometimes directly threaten their livelihoods.โ โ @yjernite
โMajor tech companies already have all of the data. Changing the license on the data doesnโt retroactively revoke that permission, and the primary impact is on later-arriving actors, who are typically either smaller start-ups or researchers.โ โ @stellaathena
๐ Dive into the research: https://www.dataprovenance.org/consent-in-crisis-paper
๐ Read the NYT story: https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html
#AIEthics #DataPrivacy
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 364,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/sXF9rxI8oLGj2erwNKg1o.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/mRIITpTtaHgt6bVYYYA5b.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60347d3660e3dd96631c9093/B3fuZer5N04tZIAYrLnz4.jpeg",
"fullname": "Stella Biderman",
"name": "stellaathena",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1987
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594144055859-5ee3a7cd2a3eae3cbdad1305.jpeg",
"fullname": "Yacine Jernite",
"name": "yjernite",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 147
}
] | [
{
"reaction": "๐ฅ",
"users": [
"oneiroid",
"zecerman"
],
"count": 2
},
{
"reaction": "๐ง ",
"users": [
"louisbrulenaudet"
],
"count": 1
}
] | 2024-07-19T20:31:43.000Z | 2024-07-19T20:31:43.344Z | [] | /posts/fdaudens/877209328076357 | 636 | 0 |
874928848909758 | [
{
"type": "text",
"value": "InSPyReNet Background Removal",
"raw": "InSPyReNet Background Removal",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "I've built a space for fast background removal. ",
"raw": "I've built a space for fast background removal. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/gokaygokay/Inspyrenet-Rembg",
"resource": {
"type": "space",
"id": "gokaygokay/Inspyrenet-Rembg",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/gokaygokay/Inspyrenet-Rembg",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/plemeri/InSPyReNet",
"resource": null,
"url": null,
"href": "https://github.com/plemeri/InSPyReNet",
"user": null,
"lang": null,
"code": null,
"label": null
}
] | InSPyReNet Background Removal
I've built a space for fast background removal.
- https://huggingface.co/spaces/gokaygokay/Inspyrenet-Rembg
- https://github.com/plemeri/InSPyReNet | {
"avatarUrl": "/avatars/b9a6d8e11ec7a62ca2b819e0b6c37222.svg",
"fullname": "gokay aydogan",
"name": "gokaygokay",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 1100,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/630899601dd1e3075d975785/Tz4KleCHEbpzRjx05PIs_.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/630899601dd1e3075d975785/wcDO9JXEMkOgUhFOWpSpy.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/630899601dd1e3075d975785/Ql8oPXfRmIwsWRhIANwuJ.png"
}
] | [] | [
{
"reaction": "๐",
"users": [
"John6666",
"DonkeySmall",
"den0620",
"Ramikan-BR",
"sxnr1",
"Rsln",
"DIvAndrey",
"Wok",
"jeffcookio",
"MoAusaf"
],
"count": 10
},
{
"reaction": "๐",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "๐",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "โค๏ธ",
"users": [
"Ramikan-BR"
],
"count": 1
},
{
"reaction": "๐ง ",
"users": [
"Ramikan-BR"
],
"count": 1
}
] | 2024-07-19T18:26:46.000Z | 2024-10-15T04:25:06.600Z | [
{
"avatarUrl": "/avatars/efbc06330c77e2dc37a3bb13e4494c3d.svg",
"fullname": "Sukanth K",
"name": "Sukanth07",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/gokaygokay/874928848909758 | 4,683 | 2 |
756501126284636 | [
{
"type": "text",
"value": "New feature ๐ฅ ",
"raw": "New feature ๐ฅ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Image models and LoRAs now have little previews ๐ค",
"raw": "Image models and LoRAs now have little previews ๐ค",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "If you don't know where to start to find them, I invite you to browse cool LoRAs in the profile of some amazing fine-tuners: ",
"raw": "If you don't know where to start to find them, I invite you to browse cool LoRAs in the profile of some amazing fine-tuners: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@artificialguybr",
"resource": null,
"url": null,
"href": null,
"user": "artificialguybr",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@alvdansen",
"resource": null,
"url": null,
"href": null,
"user": "alvdansen",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@DoctorDiffusion",
"resource": null,
"url": null,
"href": null,
"user": "DoctorDiffusion",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@e-n-v-y",
"resource": null,
"url": null,
"href": null,
"user": "e-n-v-y",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@KappaNeuro",
"resource": null,
"url": null,
"href": null,
"user": "KappaNeuro",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "mention",
"value": null,
"raw": "@ostris",
"resource": null,
"url": null,
"href": null,
"user": "ostris",
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | New feature ๐ฅ
Image models and LoRAs now have little previews ๐ค
If you don't know where to start to find them, I invite you to browse cool LoRAs in the profile of some amazing fine-tuners: @artificialguybr, @alvdansen, @DoctorDiffusion, @e-n-v-y, @KappaNeuro @ostris | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649143001781-624bebf604abc7ebb01789af.jpeg",
"fullname": "Apolinรกrio from multimodal AI art",
"name": "multimodalart",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 3149,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/624bebf604abc7ebb01789af/M8DURRNPeT0k-35_hJzNq.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635dd6cd4fabde0df74aeae6/23c0uEOr7RWDtSLDBzkPD.png",
"fullname": "araminta_k",
"name": "alvdansen",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 493
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6304037b7373aacccd882e1e/H8M3e1n0CpJr5n3aL0ExE.jpeg",
"fullname": "ArtificialGuy/JV.K",
"name": "artificialguybr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2304
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6303ca79d14428368d1821d7/CHzkdI_0y2xJCPU1TDIsO.jpeg",
"fullname": "Joseph Kachnic",
"name": "DoctorDiffusion",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 21
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6303c4880907b9a115c36ce4/2dAYt5dDeKYuJwdY2CYF5.png",
"fullname": "_Envy_",
"name": "e-n-v-y",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 47
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630776eadc6b7663aa95d0e5/LcNvLR6MgB5YzhpNEs6aX.jpeg",
"fullname": "Neuro_Kappa",
"name": "KappaNeuro",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 56
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643cb43e6eeb746f5ad81c26/_DUtzHpNtpTeDw7u0oYyX.png",
"fullname": "Jaret Burkett",
"name": "ostris",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 259
}
] | [
{
"reaction": "โค๏ธ",
"users": [
"GPT007",
"osanseviero",
"ayush7",
"John6666",
"Blane187",
"bghira",
"artificialguybr",
"Rsln",
"edpresque",
"DoctorDiffusion",
"cbensimon",
"OmbelineM"
],
"count": 12
}
] | 2024-07-19T15:16:14.000Z | 2024-09-18T07:56:27.104Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641caf6c043963b1c0a27256/CD7ktICDsldVJlpiND5kl.png",
"fullname": "PseudoTerminal X",
"name": "bghira",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 100,
"isFollowing": false
},
{
"avatarUrl": "/avatars/8473d30b909208e7dd5828620bcb4ce1.svg",
"fullname": "Wallow",
"name": "Viktor1233",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
}
] | /posts/multimodalart/756501126284636 | 15,812 | 2 |
384170201249509 | [
{
"type": "text",
"value": "Chameleon ๐ฆ by Meta is now available in Hugging Face transformers ๐",
"raw": "Chameleon ๐ฆ by Meta is now available in Hugging Face transformers ๐",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "A vision language model that comes in 7B and 34B sizes ๐คฉ",
"raw": "A vision language model that comes in 7B and 34B sizes ๐คฉ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "But what makes this model so special? ",
"raw": "But what makes this model so special? ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Demo: ",
"raw": "Demo: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/merve/chameleon-7b",
"resource": {
"type": "space",
"id": "merve/chameleon-7b",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/merve/chameleon-7b",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Models: ",
"raw": "Models: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/facebook/chameleon-668da9663f80d483b4c61f58",
"resource": {
"type": "collection",
"id": "facebook/chameleon-668da9663f80d483b4c61f58",
"discussionNum": null
},
"url": "https://huggingface.co/collections/facebook/chameleon-668da9663f80d483b4c61f58",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "keep reading โฅฅ",
"raw": "keep reading โฅฅ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Chameleon is a unique model: it attempts to scale early fusion ๐คจ",
"raw": "Chameleon is a unique model: it attempts to scale early fusion ๐คจ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "But what is early fusion?",
"raw": "But what is early fusion?",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Modern vision language models use a vision encoder with a projection layer to project image embeddings so it can be promptable to text decoder (LLM)",
"raw": "Modern vision language models use a vision encoder with a projection layer to project image embeddings so it can be promptable to text decoder (LLM)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Early fusion on the other hand attempts to fuse all features together (image patches and text) by using an image tokenizer and all tokens are projected into a shared space, which enables seamless generation ๐ ",
"raw": "Early fusion on the other hand attempts to fuse all features together (image patches and text) by using an image tokenizer and all tokens are projected into a shared space, which enables seamless generation ๐ ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Authors have also introduced different architectural improvements (QK norm and revise placement of layer norms) for scalable and stable training and they were able to increase the token count (5x tokens compared to Llama 3 which is a must with early-fusion IMO)",
"raw": "Authors have also introduced different architectural improvements (QK norm and revise placement of layer norms) for scalable and stable training and they were able to increase the token count (5x tokens compared to Llama 3 which is a must with early-fusion IMO)",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This model is an any-to-any model thanks to early fusion: it can take image and text input and output image and text, but image generation are disabled to prevent malicious use. ",
"raw": "This model is an any-to-any model thanks to early fusion: it can take image and text input and output image and text, but image generation are disabled to prevent malicious use. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "One can also do text-only prompting, authors noted the model catches up with larger LLMs (like Mixtral 8x7B or larger Llama-2 70B) and also image-pair prompting with larger VLMs like IDEFICS2-80B (see paper for the benchmarks ",
"raw": "One can also do text-only prompting, authors noted the model catches up with larger LLMs (like Mixtral 8x7B or larger Llama-2 70B) and also image-pair prompting with larger VLMs like IDEFICS2-80B (see paper for the benchmarks ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2405.09818",
"resource": {
"type": "paper",
"id": "2405.09818",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2405.09818",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": "Chameleon: Mixed-Modal Early-Fusion Foundation Models (2405.09818)"
},
{
"type": "text",
"value": ")",
"raw": ")",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Thanks for reading!",
"raw": "Thanks for reading!",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Chameleon ๐ฆ by Meta is now available in Hugging Face transformers ๐
A vision language model that comes in 7B and 34B sizes ๐คฉ
But what makes this model so special?
Demo: https://huggingface.co/spaces/merve/chameleon-7b
Models: https://huggingface.co/collections/facebook/chameleon-668da9663f80d483b4c61f58
keep reading โฅฅ
Chameleon is a unique model: it attempts to scale early fusion ๐คจ
But what is early fusion?
Modern vision language models use a vision encoder with a projection layer to project image embeddings so it can be promptable to text decoder (LLM)
Early fusion on the other hand attempts to fuse all features together (image patches and text) by using an image tokenizer and all tokens are projected into a shared space, which enables seamless generation ๐
Authors have also introduced different architectural improvements (QK norm and revise placement of layer norms) for scalable and stable training and they were able to increase the token count (5x tokens compared to Llama 3 which is a must with early-fusion IMO)
This model is an any-to-any model thanks to early fusion: it can take image and text input and output image and text, but image generation are disabled to prevent malicious use.
One can also do text-only prompting, authors noted the model catches up with larger LLMs (like Mixtral 8x7B or larger Llama-2 70B) and also image-pair prompting with larger VLMs like IDEFICS2-80B (see paper for the benchmarks https://huggingface.co/papers/2405.09818)
Thanks for reading! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/avWtD9eu8PzFpZ8_PnTDD.png"
}
] | [] | [
{
"reaction": "๐ฅ",
"users": [
"sumandas",
"atahanuz",
"Nymbo",
"ucsahin",
"GPT007",
"fdaudens",
"osanseviero",
"alkzar90",
"dblasko",
"Svngoku",
"gokaygokay",
"mikestaub",
"abdulbasit-nubytes"
],
"count": 13
},
{
"reaction": "๐ค",
"users": [
"alanonbing"
],
"count": 1
}
] | 2024-07-19T12:46:01.000Z | 2024-07-26T12:13:07.517Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5520,
"isFollowing": false
},
{
"avatarUrl": "/avatars/cfc399a521b05df08b799595b0390d13.svg",
"fullname": "Prasanna Iyer",
"name": "prasiyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 82,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/667e763c6f5dee59721f016d/V2e6Z0BjzXE3t_obD3JXS.jpeg",
"fullname": "Anastasia",
"name": "Ana111op",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/merve/384170201249509 | 3,305 | 8 |
723320133467359 | [
{
"type": "text",
"value": "Sparse MoE (SMoE) has an unavoidable drawback: the performance of SMoE heavily relies on the choice of hyper-parameters, such as the number of activated experts per token (top-k) and the number of experts.",
"raw": "Sparse MoE (SMoE) has an unavoidable drawback: the performance of SMoE heavily relies on the choice of hyper-parameters, such as the number of activated experts per token (top-k) and the number of experts.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Also, identifying the optimal hyper-parameter without a sufficient number of ablation studies is challenging. As the size of the models continues to grow, this limitation could result in a significant waste of computational resources, and in turn, could hinder the efficiency of training MoE-based models in practice.",
"raw": "Also, identifying the optimal hyper-parameter without a sufficient number of ablation studies is challenging. As the size of the models continues to grow, this limitation could result in a significant waste of computational resources, and in turn, could hinder the efficiency of training MoE-based models in practice.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "(READ MORE โโโ) Now, our DynMoE addresses these challenges! ๐ DynMoE incorporates: ",
"raw": "(READ MORE โโโ) Now, our DynMoE addresses these challenges! ๐ DynMoE incorporates: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "(1) a novel gating method that enables each token to automatically determine the number of experts to activate. ",
"raw": "(1) a novel gating method that enables each token to automatically determine the number of experts to activate. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "(2) An adaptive process automatically adjusts the number of experts during training. Extensive numerical results across Vision, Language, and Vision-Language tasks demonstrate the effectiveness of our approach to achieve competitive performance compared to GMoE for vision and language tasks, and MoE-LLaVA for vision-language tasks, while maintaining efficiency by activating fewer parameters.",
"raw": "(2) An adaptive process automatically adjusts the number of experts during training. Extensive numerical results across Vision, Language, and Vision-Language tasks demonstrate the effectiveness of our approach to achieve competitive performance compared to GMoE for vision and language tasks, and MoE-LLaVA for vision-language tasks, while maintaining efficiency by activating fewer parameters.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Our code is available at ",
"raw": "Our code is available at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/LINs-lab/DynMoE",
"resource": null,
"url": null,
"href": "https://github.com/LINs-lab/DynMoE",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": ", also see the checkpoints at ",
"raw": ", also see the checkpoints at ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/LINs-lab/dynmoe-family-665ed5a331a7e84463cab01a",
"resource": {
"type": "collection",
"id": "LINs-lab/dynmoe-family-665ed5a331a7e84463cab01a",
"discussionNum": null
},
"url": "https://huggingface.co/collections/LINs-lab/dynmoe-family-665ed5a331a7e84463cab01a",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Sparse MoE (SMoE) has an unavoidable drawback: the performance of SMoE heavily relies on the choice of hyper-parameters, such as the number of activated experts per token (top-k) and the number of experts.
Also, identifying the optimal hyper-parameter without a sufficient number of ablation studies is challenging. As the size of the models continues to grow, this limitation could result in a significant waste of computational resources, and in turn, could hinder the efficiency of training MoE-based models in practice.
(READ MORE โโโ) Now, our DynMoE addresses these challenges! ๐ DynMoE incorporates:
(1) a novel gating method that enables each token to automatically determine the number of experts to activate.
(2) An adaptive process automatically adjusts the number of experts during training. Extensive numerical results across Vision, Language, and Vision-Language tasks demonstrate the effectiveness of our approach to achieve competitive performance compared to GMoE for vision and language tasks, and MoE-LLaVA for vision-language tasks, while maintaining efficiency by activating fewer parameters.
Our code is available at https://github.com/LINs-lab/DynMoE, also see the checkpoints at https://huggingface.co/collections/LINs-lab/dynmoe-family-665ed5a331a7e84463cab01a
| {
"avatarUrl": "/avatars/86a748a3264e6e0f4ee5eaf8f7032ecb.svg",
"fullname": "Zhenglin Cheng",
"name": "kenshinn",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 11,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65028e8389707f182386588c/QQbD6VPiRldWdPbwo_IIy.jpeg"
}
] | [] | [
{
"reaction": "โค๏ธ",
"users": [
"surajkachate123",
"GPT007",
"osanseviero",
"Ramikan-BR",
"hllj",
"danielus",
"flflow",
"kenshinn",
"Yongxin-Guo"
],
"count": 9
},
{
"reaction": "๐",
"users": [
"GPT007",
"kenshinn",
"Yongxin-Guo"
],
"count": 3
}
] | 2024-07-19T11:34:57.000Z | 2024-07-19T14:04:36.451Z | [] | /posts/kenshinn/723320133467359 | 2,023 | 0 |
142040830676956 | [
{
"type": "text",
"value": "Since it is release season, at PleIAs we announce our first suite of specialized language models for document processing tasks (OCR correction, text segmentation, bibliographic extraction) and the release of the largest multimodal dataset of financial document Finance Commons: ",
"raw": "Since it is release season, at PleIAs we announce our first suite of specialized language models for document processing tasks (OCR correction, text segmentation, bibliographic extraction) and the release of the largest multimodal dataset of financial document Finance Commons: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/Pclanglais/finance-commons-bad-data-toolbox",
"resource": null,
"url": null,
"href": "https://huggingface.co/blog/Pclanglais/finance-commons-bad-data-toolbox",
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "LLM research is currently focused on quality data. We went on the opposite direction and voluntarily trained models on bad data. Far from degrading models, it made them more resilient to text sources commonly used in production.",
"raw": "LLM research is currently focused on quality data. We went on the opposite direction and voluntarily trained models on bad data. Far from degrading models, it made them more resilient to text sources commonly used in production.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "Having a wider range of real life data proved critical for this project. A few months after the release of Common Corpus, we expanded our pool of \"training data commons\" with a major multimodal ressource: document released as open financial data. Finance commons comprises 17 billion tokens and 1.25 PDF corporate documents released by the SEC, WTO, AMF, EU Tenders In a multiple languages with a large variety of document layouts and challenging sources to train more robust models.",
"raw": "Having a wider range of real life data proved critical for this project. A few months after the release of Common Corpus, we expanded our pool of \"training data commons\" with a major multimodal ressource: document released as open financial data. Finance commons comprises 17 billion tokens and 1.25 PDF corporate documents released by the SEC, WTO, AMF, EU Tenders In a multiple languages with a large variety of document layouts and challenging sources to train more robust models.",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "With HuggingFace compute support, we release an entire pipeline to process bad data sources and make them usable in production for LLMOps or simply retrieval: ",
"raw": "With HuggingFace compute support, we release an entire pipeline to process bad data sources and make them usable in production for LLMOps or simply retrieval: ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/PleIAs/PleIAs-Editor",
"resource": {
"type": "space",
"id": "PleIAs/PleIAs-Editor",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/PleIAs/PleIAs-Editor",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "This approach is based on our new series of specialized models for document processing, the \"bad data toolbox\" comprising:",
"raw": "This approach is based on our new series of specialized models for document processing, the \"bad data toolbox\" comprising:",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "*OCRonos, the best available model to date for OCR correction. ",
"raw": "*OCRonos, the best available model to date for OCR correction. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/PleIAs/OCRonos",
"resource": {
"type": "model",
"id": "PleIAs/OCRonos",
"discussionNum": null
},
"url": "https://huggingface.co/PleIAs/OCRonos",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "*Segmentext, a pure semantic small model for text segmentation, working without any visual reference. ",
"raw": "*Segmentext, a pure semantic small model for text segmentation, working without any visual reference. ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/PleIAs/Segmentext",
"resource": {
"type": "model",
"id": "PleIAs/Segmentext",
"discussionNum": null
},
"url": "https://huggingface.co/PleIAs/Segmentext",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "text",
"value": "*Bibtexer, a small model for bibliographic data extraction acting as a \"reversed-Zotero.\" ",
"raw": "*Bibtexer, a small model for bibliographic data extraction acting as a \"reversed-Zotero.\" ",
"resource": null,
"url": null,
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/PleIAs/BibTexer",
"resource": {
"type": "model",
"id": "PleIAs/BibTexer",
"discussionNum": null
},
"url": "https://huggingface.co/PleIAs/BibTexer",
"href": null,
"user": null,
"lang": null,
"code": null,
"label": null
}
] | Since it is release season, at PleIAs we announce our first suite of specialized language models for document processing tasks (OCR correction, text segmentation, bibliographic extraction) and the release of the largest multimodal dataset of financial document Finance Commons: https://huggingface.co/blog/Pclanglais/finance-commons-bad-data-toolbox
LLM research is currently focused on quality data. We went on the opposite direction and voluntarily trained models on bad data. Far from degrading models, it made them more resilient to text sources commonly used in production.
Having a wider range of real life data proved critical for this project. A few months after the release of Common Corpus, we expanded our pool of "training data commons" with a major multimodal ressource: document released as open financial data. Finance commons comprises 17 billion tokens and 1.25 PDF corporate documents released by the SEC, WTO, AMF, EU Tenders In a multiple languages with a large variety of document layouts and challenging sources to train more robust models.
With HuggingFace compute support, we release an entire pipeline to process bad data sources and make them usable in production for LLMOps or simply retrieval: https://huggingface.co/spaces/PleIAs/PleIAs-Editor
This approach is based on our new series of specialized models for document processing, the "bad data toolbox" comprising:
*OCRonos, the best available model to date for OCR correction. https://huggingface.co/PleIAs/OCRonos
*Segmentext, a pure semantic small model for text segmentation, working without any visual reference. https://huggingface.co/PleIAs/Segmentext
*Bibtexer, a small model for bibliographic data extraction acting as a "reversed-Zotero." https://huggingface.co/PleIAs/BibTexer | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ce091a9e9ca8123d7a42b0/OEPggp82RwigxNLL35LgT.jpeg",
"fullname": "Pierre-Carl Langlais",
"name": "Pclanglais",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 189,
"isFollowing": false
} | [] | [] | [
{
"reaction": "๐ค",
"users": [
"Tonic",
"louisbrulenaudet"
],
"count": 2
}
] | 2024-07-19T10:40:32.000Z | 2024-07-19T11:14:47.169Z | [] | /posts/Pclanglais/142040830676956 | 1,051 | 0 |