- SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models — arXiv:2406.12274, published Jun 18, 2024
- Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations — arXiv:2406.11801, published Jun 17, 2024
- How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries — arXiv:2402.15302, published Feb 23, 2024
- Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models — arXiv:2401.10647, published Jan 19, 2024