Can Transformers Reason Logically? A Study in SAT Solving
Published in Proceedings of the 42nd International Conference on Machine Learning, 2025
We investigate whether decoder-only Transformers can perform logical reasoning through the lens of Boolean satisfiability (SAT). We prove theoretically that these models can solve 3-SAT instances by simulating backtracking search with chain-of-thought reasoning. Using a tool we call PARAT, we compile our theoretical construction into a concrete PyTorch model. Empirically, trained models generalize well to fresh problems of similar complexity but struggle to generalize to longer instances, suggesting that the requisite reasoning capability exists in principle but faces practical limitations.
Recommended citation: Leyan Pan, Vijay Ganesh, Jacob Abernethy, Chris Esposo, and Wenke Lee. Can transformers reason logically? A study in SAT solving. Proceedings of the 42nd International Conference on Machine Learning, 2025.
