The Start of Investigating a 1-Layer SoLU Model
In this research project we looked for tasks on which a small language model showed varied behavior. Once we found interesting behavior, we used the tools of mechanistic interpretability to dig into why the model behaved that way. We then tried to change the model's behavior for the better using activation patching. This report is based on only two days of research, but we want to continue the work and keep exploring the mysteries of transformer behavior. While we did not uncover any major new findings, we learned more about mechanistic interpretability in one weekend than we had previously known.
(Our notebook consists of some slight modifications to Neel's Exploratory Analysis notebook.)
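For readers unfamiliar with activation patching: the idea is to re-run the model on a "corrupted" input while splicing in activations cached from a "clean" run, then see how the output changes. Below is a minimal sketch using TransformerLens, the library Neel's notebook is built on. The model name `solu-1l`, the prompts, and the hook point are illustrative assumptions, not necessarily the ones we used in this project.

```python
from transformer_lens import HookedTransformer

# Load a 1-layer SoLU model (TransformerLens ships "solu-1l").
# NOTE: model, prompts, and hook point are illustrative, not this project's exact setup.
model = HookedTransformer.from_pretrained("solu-1l")

# Two prompts that differ by a single token, so they tokenize to the same length.
clean_tokens = model.to_tokens("When John and Mary went to the bar, John gave a drink to")
corrupted_tokens = model.to_tokens("When John and Mary went to the bar, Mary gave a drink to")

# Cache every activation from the clean run.
_, clean_cache = model.run_with_cache(clean_tokens)

def patch_attn_out(activation, hook):
    # Overwrite the corrupted-run activation with the cached clean one.
    return clean_cache[hook.name]

# Re-run the corrupted prompt, patching the attention output of the only block.
patched_logits = model.run_with_hooks(
    corrupted_tokens,
    fwd_hooks=[("blocks.0.hook_attn_out", patch_attn_out)],
)

# Compare the top predicted next token with and without the patch.
corrupted_logits = model(corrupted_tokens)
print("corrupted:", model.to_string(corrupted_logits[0, -1].argmax().item()))
print("patched:  ", model.to_string(patched_logits[0, -1].argmax().item()))
```

In practice one patches finer-grained activations (individual heads, specific positions) to localize which parts of the model are responsible for the behavior of interest; this sketch only shows the mechanics.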