https://mlbenchmarks.org/ — this is the actual link to reach the book. There is no navigation link back to the index on the shared link.
Very cool book. I think one reason ML has seen so much progress despite benchmark overfitting and abuse is that results are "regularized" by real-world applications and the Lindy effect. Methods or research that abuse benchmarks aren't adopted by follow-up work, so they tend not to survive. And they aren't adopted because people try them and find they don't generalize to other or newer benchmarks. So the system works not because of any specific benchmark, but because of how the community as a whole deals with benchmarks.
If I recall correctly, this was also a keynote at MDS24? That was a great talk too; Hardt is an excellent speaker.
Read the preface.
1. It sounds like this book could be summarized in a practical blog post or a short series of posts
2. Is using the term "crisis" so many times really necessary?