Schema: arxiv_id (stringlengths 10–10) | github (stringlengths 0–104) | title (stringlengths 8–177) | upvotes (int64, 0–587) | num_comments (int64, 0–74) | github_mention_hf (float64, 0–1) | num_models (float64, 0–100) | num_datasets (float64, 0–100) | num_spaces (float64, 0–100) | reached_out_link (stringclasses, 341 values) | reached_out_success (float64, 0–1, nullable ⌀) | has_artifact (bool, 2 classes) | submitted_by (stringlengths 2–31, nullable ⌀) | reached_out_note (stringclasses, 21 values) | date (stringclasses, 363 values)

arxiv_id | github | title | upvotes | num_comments | github_mention_hf | num_models | num_datasets | num_spaces | reached_out_link | reached_out_success | has_artifact | submitted_by | reached_out_note | date |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2410.10672 | https://github.com/mlgroupjlu/matrixnuclearnorm | Large Language Model Evaluation via Matrix Nuclear-Norm | 18 | 2 | 0 | 0 | 0 | 0 | null | null | false | WhiteCatY | no artifacts | 2024-10-17 |
2410.12722 | | WorldMedQA-V: a multilingual, multimodal medical examination dataset for multimodal language models evaluation | 4 | 2 | 0 | 0 | 1 | 0 | null | null | true | shanchen | null | 2024-10-17 |
2410.11081 | | Simplifying, Stabilizing and Scaling Continuous-Time Consistency Models | 9 | 3 | 0 | 0 | 0 | 0 | null | null | false | feifeiobama | null | 2024-10-17 |
2410.11878 | | Neural Metamorphosis | 6 | 2 | 0 | 0 | 0 | 0 | null | null | false | adamdad | no artifacts | 2024-10-17 |
2410.12491 | | Insights from the Inverse: Reconstructing LLM Training Goals Through Inverse RL | 3 | 2 | 0 | 0 | 0 | 0 | null | null | false | skrishna | no code | 2024-10-17 |
2410.12391 | | Tracking Universal Features Through Fine-Tuning and Model Merging | 5 | 2 | 0 | 4 | 0 | 0 | null | null | true | nilq | null | 2024-10-17 |
2410.07722 | | DyVo: Dynamic Vocabularies for Learned Sparse Retrieval with Entities | 12 | 2 | 0 | 0 | 0 | 0 | null | null | false | andrewyates | no code yet | 2024-10-17 |
2410.08968 | | Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements | 12 | 2 | 0 | 0 | 0 | 0 | null | null | false | jackzhang | null | 2024-10-17 |
2410.09870 | https://github.com/dmis-lab/chroknowledge | ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains | 7 | 3 | 1 | 0 | 1 | 0 | null | null | true | Minbyul | null | 2024-10-17 |
2410.12490 | https://github.com/DAMO-NLP-SG/DiGIT | Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | 6 | 2 | 0 | 1 | 0 | 0 | null | null | true | zyx123 | no artifacts | 2024-10-17 |
2410.09724 | | Taming Overconfidence in LLMs: Reward Calibration in RLHF | 2 | 2 | 0 | 6 | 0 | 0 | null | null | true | teapot123 | null | 2024-10-17 |
2410.11900 | | FLARE: Faithful Logic-Aided Reasoning and Exploration | 3 | 2 | 0 | 0 | 0 | 0 | null | null | false | IAMJB | no code | 2024-10-17 |
2410.11843 | | From Commands to Prompts: LLM-based Semantic File System for AIOS | 1 | 1 | 0 | 0 | 0 | 0 | null | null | false | IAMJB | no artifacts | 2024-10-17 |
2410.09426 | https://github.com/ruikangliu/flatquant | FlatQuant: Flatness Matters for LLM Quantization | 12 | 2 | 0 | 0 | 0 | 0 | null | null | false | lianlio | no artifacts | 2024-10-18 |
2410.13824 | | Harnessing Webpage UIs for Text-Rich Visual Understanding | 27 | 2 | 0 | 2 | 3 | 0 | null | null | true | yuexiang96 | null | 2024-10-18 |
2410.13841 | | A Unified View of Delta Parameter Editing in Post-Trained Large-Scale Models | 14 | 2 | 0 | 0 | 0 | 0 | null | null | false | Tigerph | no code | 2024-10-18 |
2410.13085 | https://github.com/richard-peng-xia/mmed-rag | MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models | 20 | 3 | 0 | 0 | 0 | 0 | null | null | false | richardxp888 | no artifacts | 2024-10-18 |
2410.13830 | | DreamVideo-2: Zero-Shot Subject-Driven Video Customization with Precise Motion Control | 21 | 2 | 0 | 0 | 0 | 0 | null | null | false | weilllllls | no code yet | 2024-10-18 |
2410.13852 | | Retrospective Learning from Interactions | 7 | 2 | 0 | 0 | 0 | 0 | null | null | false | yoavartzi | null | 2024-10-18 |
2410.13198 | | Failing Forward: Improving Generative Error Correction for ASR with Synthetic Data and Retrieval Augmentation | 8 | 2 | 0 | 0 | 0 | 0 | null | null | false | Sreyan88 | no code | 2024-10-18 |
2410.13060 | | AERO: Softmax-Only LLMs for Efficient Private Inference | 3 | 2 | 0 | 0 | 0 | 0 | null | null | false | nandan523 | no code | 2024-10-18 |
2410.13785 | | PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment | 17 | 2 | 0 | 0 | 0 | 0 | null | null | false | ZenMoore | no code | 2024-10-18 |
2410.09019 | https://github.com/nyuolab/MedMobile | MedMobile: A mobile-sized language model with expert-level clinical capabilities | 8 | 2 | 1 | 1 | 0 | 0 | null | null | true | KrithikV | null | 2024-10-18 |
2410.13804 | https://github.com/tianyi-lab/bento | BenTo: Benchmark Task Reduction with In-Context Transferability | 20 | 3 | 0 | 0 | 0 | 0 | https://huggingface.co/datasets/cindermond/bento/discussions/1 | null | false | zhoutianyi | null | 2024-10-18 |
2410.13832 | | VidPanos: Generative Panoramic Videos from Casual Panning Videos | 10 | 2 | 0 | 0 | 0 | 0 | null | null | false | akhaliq | null | 2024-10-18 |
2410.13720 | | Movie Gen: A Cast of Media Foundation Models | 77 | 2 | 0 | 0 | 0 | 0 | null | null | false | akhaliq | no code | 2024-10-18 |
2410.13618 | https://github.com/skddj/loldu | LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning | 6 | 2 | 0 | 0 | 0 | 0 | null | null | false | Shiym | no artifacts | 2024-10-18 |
2410.11842 | https://github.com/skyworkai/moh | MoH: Multi-Head Attention as Mixture-of-Head Attention | 19 | 2 | 1 | 6 | 0 | 0 | null | null | true | Chat-UniVi | null | 2024-10-18 |
2410.13293 | | SBI-RAG: Enhancing Math Word Problem Solving for Students through Schema-Based Instruction and Retrieval-Augmented Generation | 3 | 2 | 0 | 0 | 0 | 0 | null | null | false | pdx97 | no code | 2024-10-18 |
2410.13754 | | MixEval-X: Any-to-Any Evaluations from Real-World Data Mixtures | 70 | 2 | 0 | 0 | 1 | 0 | null | null | true | jinjieni | already on the hub | 2024-10-18 |
2410.13848 | | Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation | 27 | 3 | 0 | 2 | 0 | 5 | null | null | true | WuChengyue | null | 2024-10-18 |
2410.13334 | | Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems | 12 | 2 | 0 | 0 | 0 | 0 | null | null | false | hbseong | no code | 2024-10-18 |
2410.12781 | | Long-LRM: Long-sequence Large Reconstruction Model for Wide-coverage Gaussian Splats | 5 | 2 | 0 | 0 | 0 | 0 | null | null | false | arthurhero | no code | 2024-10-18 |
2410.13854 | https://github.com/MING-ZCH/CII-Bench | Can MLLMs Understand the Deep Implication Behind Chinese Images? | 7 | 2 | 1 | 0 | 1 | 0 | null | null | true | MING-ZCH | null | 2024-10-18 |
2410.13360 | | Remember, Retrieve and Generate: Understanding Infinite Visual Concepts as Your Personalized Assistant | 8 | 2 | 0 | 0 | 0 | 0 | null | null | false | Hoar012 | null | 2024-10-18 |
2410.12957 | | MuVi: Video-to-Music Generation with Semantic Alignment and Rhythmic Synchronization | 6 | 2 | 0 | 0 | 0 | 0 | null | null | false | ckzheng | no code | 2024-10-18 |
2410.12183 | | TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration | 3 | 2 | 0 | 0 | 0 | 0 | null | null | false | markywg | no artifacts | 2024-10-18 |
2410.13859 | | γ-MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models | 7 | 2 | 0 | 1 | 0 | 0 | null | null | true | YaxinLuo | null | 2024-10-18 |
2410.10210 | | Minimum Tuning to Unlock Long Output from LLMs with High Quality Data as the Key | 3 | 2 | 0 | 0 | 0 | 0 | null | null | false | Yingda | no code | 2024-10-18 |
2410.09347 | https://github.com/thu-ml/cca | Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment | 3 | 2 | 1 | 2 | 0 | 0 | null | null | true | ChenDRAG | null | 2024-10-18 |
2410.13757 | | MobA: A Two-Level Agent System for Efficient Mobile Task Automation | 30 | 3 | 0 | 0 | 1 | 0 | null | null | true | JamesZhutheThird | null | 2024-10-18 |
2410.13268 | | Roadmap towards Superhuman Speech Understanding using Large Language Models | 33 | 2 | 0 | 0 | 0 | 0 | null | null | false | FanBuCUHK | no code | 2024-10-18 |
2410.12771 | | Open Materials 2024 (OMat24) Inorganic Materials Dataset and Models | 5 | 1 | 0 | 1 | 1 | 1 | null | null | true | mshuaibi | null | 2024-10-18 |
2410.12705 | | WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines | 25 | 3 | 0 | 0 | 1 | 1 | null | null | true | gentaiscool | null | 2024-10-18 |
2410.12784 | | JudgeBench: A Benchmark for Evaluating LLM-based Judges | 30 | 2 | 0 | 1 | 1 | 1 | null | null | true | sijuntan | null | 2024-10-18 |
2410.13863 | | Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens | 33 | 3 | 0 | 0 | 0 | 0 | null | null | false | tyl5566 | no code | 2024-10-18 |
2410.13639 | https://github.com/open-source-o1/o1_reasoning_patterns_study | A Comparative Study on Reasoning Patterns of OpenAI's o1 Model | 14 | 2 | 0 | 0 | 0 | 0 | https://github.com/Open-Source-O1/o1_Reasoning_Patterns_Study/issues/1 | null | false | SiweiWu | null | 2024-10-18 |
2410.13370 | | MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models | 33 | 4 | 0 | 0 | 0 | 0 | https://github.com/Correr-Zhou/MagicTailor/issues/1 | null | false | BryanW | null | 2024-10-21 |
2410.13674 | https://github.com/tianyi-lab/DisCL | Diffusion Curriculum: Synthetic-to-Real Generative Curriculum Learning via Image-Guided Diffusion | 13 | 3 | 0 | 0 | 0 | 0 | https://github.com/tianyi-lab/DisCL/issues/2 | null | false | zhoutianyi | null | 2024-10-21 |
2410.13232 | | Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation | 39 | 2 | 0 | 0 | 0 | 0 | null | null | false | hyungjoochae | no code yet | 2024-10-21 |
2410.13276 | | SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs | 24 | 2 | 0 | 0 | 0 | 0 | null | null | false | Shijie | no code yet | 2024-10-21 |
2410.12791 | | Context is Key(NMF): Modelling Topical Information Dynamics in Chinese Diaspora Media | 4 | 3 | 0 | 0 | 0 | 1 | null | null | true | kardosdrur | null | 2024-10-21 |
2410.14677 | | Are AI Detectors Good Enough? A Survey on Quality of Datasets With Machine-Generated Texts | 9 | 5 | 0 | 0 | 0 | 0 | null | null | false | andriygav | no artifacts | 2024-10-21 |
2410.14059 | https://github.com/TobyYang7/UCFE-Benchmark | UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models | 52 | 2 | 0 | 0 | 0 | 0 | https://github.com/TobyYang7/UCFE-Benchmark/issues/1 | null | false | amstrongzyf | null | 2024-10-21 |
2410.13828 | | A Common Pitfall of Margin-based Language Model Alignment: Gradient Entanglement | 3 | 2 | 0 | 0 | 0 | 0 | null | null | false | yokey | no code yet | 2024-10-21 |
2410.13782 | | DPLM-2: A Multimodal Diffusion Protein Language Model | 18 | 3 | 0 | 0 | 0 | 0 | https://huggingface.co/airkingbd/dplm_150m/discussions/1 | null | false | chengyenhsieh | null | 2024-10-21 |
2410.14669 | | NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples | 35 | 4 | 0 | 0 | 2 | 0 | null | null | true | BaiqiL | null | 2024-10-21 |
2410.13726 | https://github.com/hanbo-cheng/dawn-pytorch | DAWN: Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation | 8 | 2 | 0 | 0 | 0 | 0 | https://github.com/Hanbo-Cheng/DAWN-pytorch/issues/1 | null | false | Hanbo-Cheng | null | 2024-10-21 |
2410.14470 | https://github.com/paulgavrikov/layer_criticality | How Do Training Methods Influence the Utilization of Vision Models? | 4 | 2 | 1 | 0 | 0 | 0 | null | null | false | paulgavrikov | no artifacts | 2024-10-21 |
2410.13787 | https://github.com/felixbinder/introspection_self_prediction | Looking Inward: Language Models Can Learn About Themselves by Introspection | 5 | 3 | 1 | 0 | 1 | 0 | null | null | true | thejaminator | null | 2024-10-21 |
2410.10812 | https://github.com/mit-han-lab/hart | HART: Efficient Visual Generation with Hybrid Autoregressive Transformer | 12 | 2 | 1 | 0 | 0 | 0 | https://huggingface.co/mit-han-lab/hart-0.7b-1024px/discussions/1 | null | false | akhaliq | null | 2024-10-21 |
2410.14208 | https://github.com/cxcscmu/montessori-instruct | Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning | 2 | 2 | 1 | 0 | 0 | 0 | null | null | false | lixiaochuan2020 | no artifacts | 2024-10-21 |
2410.14596 | https://github.com/esteng/persuasion_balanced_training | Teaching Models to Balance Resisting and Accepting Persuasion | 2 | 2 | 1 | 3 | 0 | 0 | https://huggingface.co/esteng/pbt_mistral_7B/discussions/1 | null | true | esteng | null | 2024-10-21 |
2410.14672 | https://github.com/haoosz/BiGR | BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities | 7 | 2 | 1 | 1 | 0 | 0 | https://huggingface.co/haoosz/BiGR/discussions/1 | null | true | haoosz | null | 2024-10-21 |
2410.13925 | | FiTv2: Scalable and Improved Flexible Vision Transformer for Diffusion Model | 20 | 3 | 0 | 0 | 0 | 0 | null | null | false | whlzy | no code yet | 2024-10-21 |
2410.11190 | https://github.com/gpt-omni/mini-omni2 | Mini-Omni2: Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities | 19 | 2 | 1 | 1 | 0 | 0 | https://huggingface.co/gpt-omni/mini-omni2/discussions/1 | null | true | akhaliq | null | 2024-10-21 |
2410.11331 | | SHAKTI: A 2.5 Billion Parameter Small Language Model Optimized for Edge AI and Low-Resource Environments | 5 | 3 | 0 | 0 | 0 | 2 | null | null | true | SyedAbdul | null | 2024-10-21 |
2410.14940 | | Baichuan Alignment Technical Report | 46 | 2 | 0 | 0 | 0 | 0 | https://huggingface.co/PKU-Baichuan-MLSystemLab/Llama3-PBM-Nova-70B/discussions/2 | null | false | lin5547 | null | 2024-10-22 |
2410.13218 | | CBT-Bench: Evaluating Large Language Models on Assisting Cognitive Behavior Therapy | 4 | 2 | 0 | 0 | 1 | 0 | null | null | true | billmianz | null | 2024-10-22 |
2410.14745 | https://github.com/luo-junyu/SemiEvol | SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation | 45 | 2 | 1 | 2 | 1 | 0 | https://huggingface.co/luojunyu/Llama-3.1-8B-SemiEvol-MMLU/discussions/1 | null | true | luojunyu | null | 2024-10-22 |
2410.13861 | https://github.com/rongyaofang/puma | PUMA: Empowering Unified MLLM with Multi-granular Visual Generation | 51 | 3 | 0 | 0 | 0 | 0 | https://github.com/rongyaofang/PUMA/issues/3 | null | false | LucasFang | null | 2024-10-22 |
2410.16256 | https://github.com/open-compass/compassjudger | CompassJudger-1: All-in-one Judge Model Helps Model Evaluation and Evolution | 55 | 2 | 1 | 6 | 0 | 2 | https://huggingface.co/opencompass/CompassJudger-1-32B-Instruct/discussions/1 | null | true | zsytony | null | 2024-10-22 |
2410.16153 | | Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages | 41 | 3 | 0 | 2 | 1 | 3 | null | null | true | yuexiang96 | null | 2024-10-22 |
2410.13394 | https://github.com/ai4bharat/cia | Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs | 1 | 2 | 1 | 13 | 0 | 0 | null | null | true | safikhan | null | 2024-10-22 |
2410.15017 | https://github.com/mubtasimahasan/dm-codec | DM-Codec: Distilling Multimodal Representations for Speech Tokenization | 1 | 2 | 0 | 0 | 0 | 0 | https://github.com/mubtasimahasan/DM-Codec/issues/1 | null | false | amanchadha | null | 2024-10-22 |
2410.16184 | https://github.com/thu-keg/rm-bench | RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style | 23 | 2 | 1 | 0 | 1 | 0 | https://huggingface.co/datasets/THU-KEG/RM-Bench/discussions/2 | null | true | RicardoL1u | null | 2024-10-22 |
2410.16271 | | FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors | 79 | 2 | 0 | 0 | 0 | 0 | null | null | false | yulunliu | no code yet | 2024-10-22 |
2410.12788 | https://github.com/IAAR-Shanghai/Meta-Chunking | Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception | 19 | 3 | 1 | 0 | 0 | 0 | null | null | false | Duguce | no artifacts | 2024-10-22 |
2410.16268 | https://github.com/mark12ding/sam2long | SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree | 61 | 2 | 1 | 0 | 0 | 0 | null | null | false | myownskyW7 | no artifacts | 2024-10-22 |
2410.15735 | https://github.com/huggingface/autotrain-advanced | AutoTrain: No-code training for state-of-the-art models | 48 | 2 | 1 | 0 | 0 | 100 | null | null | true | derek-thomas | null | 2024-10-22 |
2410.16215 | | Pre-training Distillation for Large Language Models: A Design Space Exploration | 15 | 2 | 0 | 0 | 0 | 0 | null | null | false | bys0318 | no code | 2024-10-22 |
2410.15633 | | Selecting Influential Samples for Long Context Alignment via Homologous Models' Guidance and Contextual Awareness Measurement | 7 | 2 | 0 | 0 | 0 | 0 | null | null | false | ssz1111 | no code | 2024-10-22 |
2410.15748 | | Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation | 11 | 3 | 0 | 0 | 0 | 0 | null | null | false | EurekaWu123 | no code | 2024-10-22 |
2410.11711 | https://github.com/abenechehab/dicl | Zero-shot Model-based Reinforcement Learning using Large Language Models | 7 | 3 | 0 | 0 | 0 | 0 | null | null | false | abenechehab | null | 2024-10-22 |
2410.15316 | https://github.com/homebrewltd/ichigo | Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant | 8 | 4 | 1 | 1 | 0 | 2 | null | null | true | HoangHa | null | 2024-10-22 |
2410.13184 | https://github.com/case-lab-umd/router-tuning | Router-Tuning: A Simple and Effective Approach for Enabling Dynamic-Depth in Transformers | 1 | 2 | 0 | 0 | 0 | 0 | null | null | false | charleslipku | no artifacts | 2024-10-22 |
2410.15460 | | Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training | 1 | 2 | 0 | 0 | 0 | 0 | null | null | false | Shahradmz | no artifacts | 2024-10-22 |
2410.14086 | https://github.com/3rdcore/prequentialcode | In-context learning and Occam's razor | 2 | 2 | 0 | 0 | 0 | 0 | null | null | false | 3rdCore | no artifacts | 2024-10-22 |
2410.15002 | https://github.com/vsahil/mimetic-2 | How Many Van Goghs Does It Take to Van Gogh? Finding the Imitation Threshold | 5 | 3 | 0 | 0 | 0 | 0 | null | null | false | Royir | null | 2024-10-22 |
2410.16259 | | Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos | 4 | 2 | 0 | 0 | 0 | 0 | null | null | false | gengshan-y | null | 2024-10-22 |
2410.14649 | https://github.com/ist-daslab/evopress | EvoPress: Towards Optimal Dynamic Model Compression via Evolutionary Search | 5 | 2 | 0 | 0 | 0 | 0 | null | null | false | OliverSieberling | no artifacts | 2024-10-23 |
2410.17215 | https://github.com/thu-coai/miniplm | MiniPLM: Knowledge Distillation for Pre-Training Language Models | 12 | 2 | 1 | 13 | 0 | 0 | https://huggingface.co/MiniLLM/MiniPLM-Qwen-200M/discussions/1 | null | true | t1101675 | null | 2024-10-23 |
2410.17250 | | JMMMU: A Japanese Massive Multi-discipline Multimodal Understanding Benchmark for Culture-aware Evaluation | 12 | 2 | 0 | 0 | 1 | 2 | null | null | true | AtsuMiyai | null | 2024-10-23 |
2410.17249 | | SpectroMotion: Dynamic 3D Reconstruction of Specular Scenes | 36 | 2 | 0 | 0 | 0 | 0 | null | null | false | yulunliu | no code yet | 2024-10-23 |
2410.17247 | https://github.com/cooperx521/pyramiddrop | PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction | 41 | 2 | 1 | 0 | 0 | 0 | null | null | false | myownskyW7 | no artifacts | 2024-10-23 |
2410.17131 | https://github.com/icip-cas/sso | Aligning Large Language Models via Self-Steering Optimization | 18 | 3 | 0 | 0 | 0 | 0 | null | null | false | Tigerph | will be released on the hub | 2024-10-23 |
2410.16930 | https://github.com/bryanchrist/mathneuro | Math Neurosurgery: Isolating Language Models' Math Reasoning Abilities Using Only Forward Passes | 4 | 2 | 0 | 0 | 0 | 0 | null | null | false | bryanchrist | no artifacts | 2024-10-23 |
2410.15926 | https://github.com/xing0047/cca-llava | Mitigating Object Hallucination via Concentric Causal Attention | 12 | 2 | 1 | 1 | 0 | 0 | https://huggingface.co/xing0047/cca-llava-1.5-7b/discussions/1 | null | true | xing0047 | null | 2024-10-23 |
2410.16267 | | xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs | 13 | 2 | 0 | 0 | 0 | 0 | null | null | false | michaelryoo | no code yet | 2024-10-23 |
2410.16198 | https://github.com/riflezhang/llava-reasoner-dpo | Improve Vision Language Model Chain-of-thought Reasoning | 14 | 2 | 0 | 0 | 0 | 0 | null | null | false | ruohongz | no code yet | 2024-10-23 |
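A minimal sketch of how one row of the table above maps onto the 15-column schema: each pipe-delimited line becomes a record keyed by column name, with an empty `github` cell when no repository is listed. The column list is taken from the schema; the `parse_row` helper is illustrative, not part of any dataset library.

```python
# Column order from the dataset schema above.
COLUMNS = [
    "arxiv_id", "github", "title", "upvotes", "num_comments",
    "github_mention_hf", "num_models", "num_datasets", "num_spaces",
    "reached_out_link", "reached_out_success", "has_artifact",
    "submitted_by", "reached_out_note", "date",
]

def parse_row(line: str) -> dict:
    """Split one markdown table row into a column-name -> cell-value dict."""
    # Drop the trailing delimiter, split on pipes, strip padding whitespace.
    cells = [cell.strip() for cell in line.strip().rstrip("|").split("|")]
    return dict(zip(COLUMNS, cells))

row = parse_row(
    "2410.10672 | https://github.com/mlgroupjlu/matrixnuclearnorm | "
    "Large Language Model Evaluation via Matrix Nuclear-Norm | 18 | 2 | "
    "0 | 0 | 0 | 0 | null | null | false | WhiteCatY | no artifacts | 2024-10-17 |"
)
print(row["arxiv_id"], row["upvotes"], row["date"])  # → 2410.10672 18 2024-10-17
```

Values come back as strings (`"18"`, `"null"`, `"false"`); converting them to the schema's int64/float64/bool types is left to the caller.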