LLMs refuse to do certain things, but they can be modified to be ~100% obedient and uncensored. This jailbreaking process is called ablation, and there is a cool article on how it works.
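Roughly, the idea is that refusal is mediated by a single "refusal direction" in the model's activation space: you estimate it as the mean difference between activations on refusal-triggering and harmless prompts, then project it out. Here is a minimal NumPy sketch of that projection; all names, shapes, and the toy data are illustrative assumptions, not code from the article:

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Mean-difference estimate of the refusal direction, normalized to unit length."""
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate(hidden: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component along the refusal direction: h' = h - (h . r) r."""
    return hidden - np.outer(hidden @ direction, direction)

# Toy usage with random activations (hypothetical d_model = 8).
rng = np.random.default_rng(0)
harmful = rng.normal(size=(16, 8)) + 1.0   # stand-in for refusal-triggering prompts
harmless = rng.normal(size=(16, 8))        # stand-in for harmless prompts
r = refusal_direction(harmful, harmless)
h = rng.normal(size=(4, 8))                # a batch of hidden states
print(np.allclose(ablate(h, r) @ r, 0.0))  # True: component along r is gone
```

In a real model this projection is applied at inference time (or baked into the weights), so the model can no longer "move" along the refusal direction and stops refusing.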
If you use Ollama, you can easily try "abliterated" models; just search for them here.
Now, why are they called "abliterated" and not "ablated" models? Simple: abliteration = ablation + obliteration.
P.S.: You can follow me on Twitter.