Prithiv Sakthi PRO

prithivMLmods

AI & ML interests

Computer Vision - AI/ML

prithivMLmods's activity

reacted to their post with ❤️ about 18 hours ago
FLUX 1.1 [pro] Ultra : API - { 4x Higher Image Resolutions }
Up to 4 megapixels, ~10 seconds per sample. { Hi-Res }

{ Blog Post ⛵ } : https://huggingface.co/blog/prithivMLmods/flux-pro-endpoint

Endpoint Creation Step by Step: 🧵
-> Sign up at { api.bfl.ml } & get your API key: https://api.bfl.ml/auth/profile
-> File Structure:
flux_image_generation/
├── .env
├── generate_image.py
└── requirements.txt

-> Step 0: Add Your API Key to an Environment File
{ .env }
BFL_API_KEY=your_actual_api_key_here

-> Step 1: Install Required Libraries
{ requirements.txt }
requests
python-dotenv

-> Step 2: Setup the Python Script
{ generate_image.py } - https://github.com/PRITHIVSAKTHIUR/Flux-API/blob/main/generate_image.py

-> Step 3: Install the Requirements & Run the Script
pip install -r requirements.txt

python generate_image.py

-> Polling: The script polls the API every 0.5 seconds until the image generation result is ready. It also checks for a successful response right after submitting the request.
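The submit-and-poll flow above can be sketched roughly as follows. Endpoint paths, the `x-key` header, and response fields are taken from the BFL quick-start docs; treat the repo's generate_image.py as the authoritative version.

```python
# Minimal sketch of the submit-and-poll flow against the BFL API.
import os
import time
import requests

try:
    from dotenv import load_dotenv  # python-dotenv, per requirements.txt
    load_dotenv()                   # reads BFL_API_KEY from .env
except ImportError:
    pass

API_KEY = os.getenv("BFL_API_KEY", "")
BASE = "https://api.bfl.ml/v1"

def build_request(prompt: str, width: int = 1024, height: int = 768) -> dict:
    """Request payload for one image sample."""
    return {"prompt": prompt, "width": width, "height": height}

def submit(prompt: str, endpoint: str = "flux-pro-1.1-ultra") -> str:
    """Submit a generation request and return the task id."""
    resp = requests.post(f"{BASE}/{endpoint}",
                         headers={"x-key": API_KEY},
                         json=build_request(prompt))
    resp.raise_for_status()          # fail loudly on a bad response
    return resp.json()["id"]

def poll(task_id: str, interval: float = 0.5) -> dict:
    """Poll every 0.5 s until the generation result is ready."""
    while True:
        resp = requests.get(f"{BASE}/get_result",
                            headers={"x-key": API_KEY},
                            params={"id": task_id})
        resp.raise_for_status()
        data = resp.json()
        if data.get("status") == "Ready":
            return data["result"]    # contains the image URL
        time.sleep(interval)
```

Swapping `endpoint` for any of the paths listed below (e.g. `"flux-dev"`) targets a different model at a different per-sample credit cost.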

For more visit:
🔺for script: https://github.com/PRITHIVSAKTHIUR/Flux-API/tree/main
🔺bfl doc: https://docs.bfl.ml/quick_start/gen_image/#__tabbed_1_2

Endpoints for image generation: 🧵
-> /flux-pro-1.1-ultra
-> /flux-pro-1.1
-> /flux-pro
-> /flux-dev

Each ID comes with 50 free credits, which are consumed according to the per-sample cost of the model you generate with.

.
.
.
@prithivMLmods 🤗
reacted to their post with ❤️ 1 day ago
Quintet Drop : : 🤗

{ Flux LoRA DLC ⛵ } : prithivMLmods/FLUX-LoRA-DLC

-- Purple Dreamy
{ pop of color } : prithivMLmods/Purple-Dreamy-Flux-LoRA

-- Golden Dust
{ shimmer contrast } : prithivMLmods/Golden-Dust-Flux-LoRA

-- Lime Green
{ depth to the composition } : prithivMLmods/Lime-Green-Flux-LoRA

-- Flare Strike
{ Fractured Line } : prithivMLmods/Fractured-Line-Flare

-- Orange Chroma
{ studio lighting } : prithivMLmods/Orange-Chroma-Flux-LoRA
.
.
.
{ collection } : prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

@prithivMLmods
reacted to ezgikorkmaz's post with ❤️ 1 day ago
reacted to abhishek's post with 🔥 2 days ago
INTRODUCING Hugging Face AutoTrain Client 🔥
Fine-tuning models got even easier!!!!
Now you can fine-tune SOTA models on all compatible dataset-model pairs on Hugging Face Hub using Python on Hugging Face Servers. Choose from a number of GPU flavors, millions of models and dataset pairs and 10+ tasks 🤗

To try, install autotrain-advanced using pip. You can skip its pinned dependencies by installing with --no-deps, but then you'd need to install some dependencies by hand.

pip install autotrain-advanced

Github repo: https://github.com/huggingface/autotrain-advanced
reacted to chansung's post with 🤗 4 days ago
Effortlessly stay up to date with AI research trends using a new AI tool, "AI Paper Reviewer"!

It analyzes the list of Hugging Face Daily Papers (w/ @akhaliq ) and turns them into insightful blog posts. This project leverages Gemini models (1.5 Pro, 1.5 Flash, and 1.5 Flash-8B) for content generation and Upstage Document Parse for parsing the layout and contents.
blog link: https://deep-diver.github.io/ai-paper-reviewer/

Also, here is the link to the GitHub repository for the parsing and generation pipeline. By using this, you can easily build your own GitHub static pages from any arXiv papers that interest you!
: https://github.com/deep-diver/paper-reviewer
reacted to m-ric's post with 🚀 4 days ago
Hunyuan-Large just released by Tencent: Largest ever open MoE LLM, only 52B active parameters but beats LLaMA 3.1-405B on most academic benchmarks 🚀

⚡ Mixture of Experts (MoE) architecture: 389 B parameters in total, but only 52B are activated for any input

🧪 Trained on 7T tokens, including 1.5T tokens of synthetic data

🏗️ Architecture: Novel "recycle routing" prevents token dropping when experts are overloaded

📊 Great benchmark results: Surpasses Llama-3.1-405B-Instruct in most benchmarks although it has 8x fewer active parameters
‣ Impressive perf on MATH: 77.4

🐋 Large context length: up to 256K tokens

🔒 License:
‣ Commercial use allowed, except if your products have >100M monthly active users
‣ No access in the EU

🤗 Model weights available on HF!

Read the full paper here 👉  Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent (2411.02265)
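The sparse-activation idea above (389B total parameters, only 52B active per token) comes down to a gating network that picks a few experts per token. Here is an illustrative top-k gate with plain softmax routing; this is a generic MoE sketch, not Hunyuan's actual "recycle routing":

```python
import numpy as np

def topk_gate(x, W_gate, k=2):
    """Pick the top-k experts for token embedding x; only those experts run."""
    logits = x @ W_gate                           # one score per expert
    topk = np.argsort(logits)[-k:]                # indices of the k best experts
    w = np.exp(logits[topk] - logits[topk].max()) # stable softmax
    return topk, w / w.sum()                      # weights over selected experts

rng = np.random.default_rng(0)
x = rng.normal(size=16)                           # a token embedding
W = rng.normal(size=(16, 8))                      # gating weights for 8 experts
experts, weights = topk_gate(x, W)
# The token's output is the weighted sum of just these k experts' outputs,
# so compute scales with active (52B) rather than total (389B) parameters.
```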
reacted to their post with ❤️ 5 days ago
view post
Post
5128
replied to their post 6 days ago

@Adam-110

-> A Mediterranean chef standing in a rustic kitchen, surrounded by fresh ingredients like olives, tomatoes, herbs, and lemons. He wears a traditional chef's jacket with rolled-up sleeves, an apron, and a small chef's cap. The kitchen has stone walls, wooden countertops, and ceramic pots. He’s holding a plate with a vibrant Mediterranean dish, with warm lighting that gives a cozy, inviting atmosphere. The scene captures the authenticity of Mediterranean cooking, with sunlit colors and rich textures, evoking a sense of freshness and tradition.

-> A Mediterranean lady chef in a warm, rustic kitchen filled with fresh ingredients like basil, tomatoes, olives, and garlic. She wears a chef's jacket with rolled-up sleeves, an apron, and a warm smile. Her hair is neatly tied back, and she's holding a colorful Mediterranean dish, perhaps a vibrant salad or pasta. The kitchen features stone walls, wooden shelves, and traditional cookware, with sunlight streaming in to highlight the fresh ingredients. The setting has a cozy, inviting atmosphere that reflects the charm and warmth of Mediterranean cooking.


Image generated by : https://huggingface.co/spaces/prithivMLmods/FLUX-REALISM

reacted to davidberenstein1957's post with 🤗 6 days ago
Vector Search (most) datasets on the Hugging Face Hub 🔦

Powered by: Polars, DuckDB, Gradio and model2vec (lightning-fast embeddings by Stéphan Tulkens).

Should work fast enough for datasets up to 100K rows.

davidberenstein1957/vectorsearch-hub-datasets
reacted to Tonic's post with 👍 6 days ago
🙋🏻‍♂️hey there folks,

periodic reminder : if you are experiencing ⚠️500 errors ⚠️ or ⚠️ abnormal spaces behavior on load or launch ⚠️

we have a thread 👉🏻 https://discord.com/channels/879548962464493619/1295847667515129877

if you can record the problem and share it there, or on the forums in your own post, please don't be shy; I'm not sure, but I do think it helps 🤗🤗🤗
reacted to qq8933's post with 🔥 7 days ago
LLaMA-O1: Open Large Reasoning Model Frameworks For Training, Inference and Evaluation With PyTorch and HuggingFace
Large Reasoning Models powered by Monte Carlo Tree Search (MCTS), Self-Play Reinforcement Learning, PPO, AlphaGo Zero's dual policy paradigm, and Large Language Models!
https://github.com/SimpleBerry/LLaMA-O1/

What will happen when you compound MCTS ❤ LLM ❤ Self-Play ❤ RLHF?
Just a little bite of strawberry!🍓

Past related works:
LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning (2410.02884)
Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B (2406.07394)