AI models can be abused. But so can AI laws. Let's test both.
Red-teaming is a useful frame for finding the flaws in legislation, not just in models. There seems to be room for a strong non-partisan foundation that specializes in doing exactly that.