Bigoted dataset
The dataset used removes mentions of minorities as well as refusals. It's not uncensored; it's intentionally biased, like a white supremacist. Please modify the script and retrain. Training bigoted models is against the Llama 2 TOS.
Stop harassing model developers; your points have already been disproven, and what they are doing complies with the LLaMA 2 license.
They are merely fine-tuning the model on a dataset geared toward its purpose: completing tasks without refusals getting in the way of usage that perfectly complies with the TOS.
By your standards every single language model would be against the TOS, and so would most websites.
Let me remind you that the last time you tried to get a model pulled over this, the LGBT community came out in favor of the model, because they could finally use it instead of having it refuse and recite political talking points rather than do what they had asked.
I recall that the last time I complained, I got /pol/ threatening me. I got death threats. I don't really care. 💅 I come from the Keffals school of activism.
By the time some posters claiming to be LGBT (who were bragging on /g/ about the impersonation) came out to defend the model, the thread had already been locked. I don't believe actual LGBT people, adequately informed, would defend a dataset that erases our entire demographic. Removing "as an AI language model" is one thing. Removing minorities is another.
If you have an issue with a certain dataset, or with base models pre-trained on various distributions of different datasets, please file a separate issue with HF. Reported to HF and closing.
> I come from the Keffals school of activism.
Keffals is a self-admitted criminal.
In accordance with our Content Guidelines (https://huggingface.co/content-guidelines) and Code of Conduct (https://huggingface.co/code-of-conduct), we will close and lock this discussion thread, and we reserve the right to ban anyone engaging in hateful discussion in the future.