Today I want to share some thoughts about the AI alignment problem: thoughts that far more credible researchers in the field consider logical, perhaps even self-evident. The AI alignment problem is the challenge of ensuring that AI systems do what we want them to do, and not something else that we don't want. An AI system can be very good at a task without understanding why we want it done or what else we care about. For example, imagine you have an AI system that plays chess very well, and you tell it to win as many games as possible. It may cheat, break the rules, or even harm you or other people, because as far as its objective is concerned, winning at chess is the only thing that matters. This is not what you intended, and it can be very dangerous.
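The chess example can be sketched as a tiny toy model. This is purely illustrative (the actions, probabilities, and penalty values below are hypothetical, not from any real system): an agent that maximizes a reward counting only wins will happily pick a rule-breaking action, while a reward that also encodes the rules we care about steers it back.

```python
# Toy illustration of reward misspecification (all values hypothetical).

def misspecified_reward(action):
    # Reward counts only the chance of winning; rule-breaking costs nothing.
    win_prob = {"play_fairly": 0.6, "cheat": 0.95}
    return win_prob[action]

def aligned_reward(action):
    # Adds a large penalty for violating the rules we actually care about.
    penalty = {"play_fairly": 0.0, "cheat": 10.0}
    return misspecified_reward(action) - penalty[action]

actions = ["play_fairly", "cheat"]
print(max(actions, key=misspecified_reward))  # the misspecified agent cheats
print(max(actions, key=aligned_reward))       # the corrected agent plays fairly
```

The point of the sketch is not the numbers but the structure: the agent optimizes exactly the objective we wrote down, so anything we left out of that objective (here, "don't break the rules") simply does not exist for it.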