Prithiv Sakthi
prithivMLmods's activity
🧨Flux LoRA DLC: prithivMLmods/FLUX-LoRA-DLC
🎆Glowing-Body: prithivMLmods/Glowing-Body-Flux-LoRA
🎆Electric-Blue: prithivMLmods/Electric-Blue-Flux-LoRA
🎆Intense-Red: prithivMLmods/Intense-Red-Flux-LoRA
🎆Clouds-Illusion: prithivMLmods/Clouds-Illusion-Flux-LoRA
🎆Digital-Yellow: prithivMLmods/Digital-Yellow-Flux-LoRA
🧨Flux LoRA Collection: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
.
.
.
@prithivMLmods
{ Try Now on Flux LoRA DLC ⛵ } : prithivMLmods/FLUX-LoRA-DLC
-- Undersea
{ Red Fluid } : prithivMLmods/Red-Undersea-Flux-LoRA
-- 3D Realmix
{ 3D Portrait Render } : prithivMLmods/3D-Render-Flux-LoRA
-- Pop
{ Yellow Pop } : prithivMLmods/Yellow-Pop-Flux-Dev-LoRA
-- Grid
{ Purple Grid } : prithivMLmods/Purple-Grid-Flux-LoRA
.
.
.
@prithivMLmods
Up to 4 megapixels, ~10 seconds per sample. { Hi-Res }
{ Blog Post ⛵ } : https://huggingface.co/blog/prithivMLmods/flux-pro-endpoint
Endpoint Creation Step by Step: 🧵
-> Sign up at { api.bfl.ml } & get your API key: https://api.bfl.ml/auth/profile
-> File Structure:
flux_image_generation/
├── .env
├── generate_image.py
└── requirements.txt
-> Step 0: Add Your API Key to an Environment File
{ .env }
BFL_API_KEY=your_actual_api_key_here
-> Step 1: Install Required Libraries
{ requirements.txt }
requests
python-dotenv
-> Step 2: Setup the Python Script
{ generate_image.py } - https://github.com/PRITHIVSAKTHIUR/Flux-API/blob/main/generate_image.py
-> Step 3: Install the Requirements & Run the Script
pip install -r requirements.txt
python generate_image.py
-> Polling: The script polls the API every 0.5 seconds until the image generation result is ready. That's it. The script also checks for a successful response after submitting the request.
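The polling step above can be sketched in a few lines of Python. This is a minimal sketch with an injectable fetcher; the field names (such as "status": "Ready") are assumptions here, so see the linked generate_image.py for the real implementation:

```python
import time

POLL_INTERVAL = 0.5  # seconds, as described above

def poll_until_ready(fetch_result, interval=POLL_INTERVAL, max_attempts=120):
    """Call fetch_result() repeatedly until it reports a ready status.

    fetch_result is any callable returning a dict such as
    {"status": "Pending"} or {"status": "Ready", "result": {...}},
    e.g. a closure around a GET request to the bfl.ml result URL.
    """
    for _ in range(max_attempts):
        payload = fetch_result()
        if payload.get("status") == "Ready":
            return payload
        time.sleep(interval)
    raise TimeoutError("image generation did not finish in time")

# Stand-in fetcher so the sketch runs without a network call:
_responses = iter([{"status": "Pending"}, {"status": "Ready", "result": {}}])
print(poll_until_ready(lambda: next(_responses), interval=0)["status"])  # Ready
```

Separating the fetcher from the loop keeps the retry logic testable without touching the API.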
For more visit:
🔺for script: https://github.com/PRITHIVSAKTHIUR/Flux-API/tree/main
🔺bfl doc: https://docs.bfl.ml/quick_start/gen_image/#__tabbed_1_2
Endpoints for image generation: 🧵
-> /flux-pro-1.1-ultra
-> /flux-pro-1.1
-> /flux-pro
-> /flux-dev
Each ID comes with 50 free credits; each generated sample consumes credits according to the model's per-image cost.
.
.
.
@prithivMLmods 🤗
{ Flux LoRA DLC ⛵ } : prithivMLmods/FLUX-LoRA-DLC
-- Purple Dreamy
{ pop of color } : prithivMLmods/Purple-Dreamy-Flux-LoRA
-- Golden Dust
{ shimmer contrast } : prithivMLmods/Golden-Dust-Flux-LoRA
-- Lime Green
{ depth to the composition } : prithivMLmods/Lime-Green-Flux-LoRA
-- Flare Strike
{ Fractured Line } : prithivMLmods/Fractured-Line-Flare
-- Orange Chroma
{ studio lighting } : prithivMLmods/Orange-Chroma-Flux-LoRA
.
.
.
{ collection } : prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
@prithivMLmods
Stay tuned for more updates.
Link: https://x.com/EzgiKorkmazAI/status/1854525141897671111
Fine-tuning models just got even easier!
Now you can fine-tune SOTA models on all compatible dataset-model pairs on Hugging Face Hub using Python on Hugging Face Servers. Choose from a number of GPU flavors, millions of models and dataset pairs and 10+ tasks 🤗
To try it, install autotrain-advanced using pip. You can skip dependency resolution by installing with --no-deps, but then you'll need to install some dependencies by hand.
"pip install autotrain-advanced"
Github repo: https://github.com/huggingface/autotrain-advanced
It analyzes a list of Hugging Face Daily Papers (w/ @akhaliq) and turns them into insightful blog posts. This project leverages Gemini models (1.5 Pro, 1.5 Flash, and 1.5 Flash-8B) for content generation and Upstage Document Parse for parsing the layout and contents.
blog link: https://deep-diver.github.io/ai-paper-reviewer/
Also, here is the link to the GitHub repository for the parsing and generation pipeline. With it, you can easily build your own GitHub static pages from any arXiv papers you're interested in!
: https://github.com/deep-diver/paper-reviewer
⚡ Mixture of Experts (MoE) architecture: 389B parameters in total, but only 52B are activated for any input
🧪 Trained on 7T tokens, including 1.5T tokens of synthetic data
🏗️ Architecture: Novel "recycle routing" prevents token dropping when experts are overloaded
📊 Great benchmark results: Surpasses Llama-3-405B-Instruct in most benchmarks although it has 8x fewer active parameters
‣ Impressive perf on MATH: 77.4
🐋 Large context length: up to 256K tokens
🔒 License:
‣ Commercial use allowed, except if your products have >100M monthly active users
‣ No access in the EU
🤗 Model weights available on HF!
Read the full paper here 👉 Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent (2411.02265)
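The "52B activated out of 389B total" figure is the defining MoE property: a gate routes each token to only a few experts, so most parameters sit idle for any given input. A toy top-1 routing sketch in plain Python (illustrative only; Hunyuan-Large's actual "recycle routing" is more involved and is not reproduced here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top1_route(gate_logits, expert_fns, x):
    """Route input x to the single highest-scoring expert.

    Only that expert's parameters run, which is how an MoE model can
    hold far more total parameters than it activates per input.
    """
    probs = softmax(gate_logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return expert_fns[best](x) * probs[best]

experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
# The gate strongly prefers expert 1 here, so only "x * 2" executes:
out = top1_route([0.0, 4.0, -1.0], experts, 5.0)
```

Production MoE layers route top-k (often k=2) per token and weight the chosen experts' outputs by their gate probabilities, but the activated-parameter saving is the same idea.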
-> A Mediterranean chef standing in a rustic kitchen, surrounded by fresh ingredients like olives, tomatoes, herbs, and lemons. He wears a traditional chef's jacket with rolled-up sleeves, an apron, and a small chef's cap. The kitchen has stone walls, wooden countertops, and ceramic pots. He’s holding a plate with a vibrant Mediterranean dish, with warm lighting that gives a cozy, inviting atmosphere. The scene captures the authenticity of Mediterranean cooking, with sunlit colors and rich textures, evoking a sense of freshness and tradition.
-> A Mediterranean lady chef in a warm, rustic kitchen filled with fresh ingredients like basil, tomatoes, olives, and garlic. She wears a chef's jacket with rolled-up sleeves, an apron, and a warm smile. Her hair is neatly tied back, and she's holding a colorful Mediterranean dish, perhaps a vibrant salad or pasta. The kitchen features stone walls, wooden shelves, and traditional cookware, with sunlight streaming in to highlight the fresh ingredients. The setting has a cozy, inviting atmosphere that reflects the charm and warmth of Mediterranean cooking.
Image generated by : https://huggingface.co/spaces/prithivMLmods/FLUX-REALISM
Powered by: Polars, DuckDB, Gradio and model2vec (lightning-fast embeddings by Stéphan Tulkens).
Should work fast enough for datasets up to 100K.
davidberenstein1957/vectorsearch-hub-datasets
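Under the hood, this kind of dataset search boils down to embedding the query and scanning for the nearest vectors. A minimal brute-force cosine-similarity sketch in plain Python (illustrative; the Space's actual pipeline uses model2vec embeddings with Polars/DuckDB, not this code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, corpus_vecs, top_k=3):
    """Return indices of the top_k corpus vectors most similar to the query."""
    scored = sorted(
        ((cosine(query_vec, v), i) for i, v in enumerate(corpus_vecs)),
        reverse=True,
    )
    return [i for _, i in scored[:top_k]]

# Toy 2-D "embeddings"; in the Space these would come from a model2vec encoder.
corpus = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(search([1.0, 0.1], corpus, top_k=2))  # [0, 2]
```

A linear scan like this is exactly why the Space is quoted as fast enough up to ~100K rows; beyond that you'd reach for an approximate nearest-neighbor index.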
periodic reminder: if you are experiencing ⚠️ 500 errors ⚠️ or ⚠️ abnormal Spaces behavior on load or launch ⚠️, we have a thread 👉🏻 https://discord.com/channels/879548962464493619/1295847667515129877
if you can record the problem and share it there, or on the forums in your own post, please don't be shy; I do think it helps 🤗🤗🤗
Large Reasoning Models powered by Monte Carlo Tree Search (MCTS), Self-Play Reinforcement Learning, PPO, AlphaGo Zero's dual-policy paradigm, and Large Language Models!
https://github.com/SimpleBerry/LLaMA-O1/
What will happen when you compound MCTS ❤ LLM ❤ Self-Play ❤ RLHF?
Just a little bite of strawberry!🍓
Past related works:
LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning (2410.02884)
Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B (2406.07394)