This is a demo for our URIAL paper, which enables base LLMs to chat via in-context alignment. You can talk directly with base, untuned LLMs to find out what knowledge and skills they have already learned from pre-training alone, rather than from SFT, xPO, or RLHF. You can also use this to explore the pre-training data of base LLMs by chatting! I found a very interesting case: the base version of Llama-3-8B often thinks it was built by OpenAI, lol.
Introducing Vision Arena (beta)! Based on LMSYS's Chatbot Arena, we created a simple demo for testing different Vision LMs (VLMs). We currently support GPT-4V, Gemini-Pro-Vision, and LLaVA, with more updates and models coming soon! We are still in the development stage, and we'd love to hear your feedback and suggestions! Please help us vote for better VLMs in your own use cases here! :D Kudos to Yujie Lu (UCSB)! WildVision/vision-arena