As one of the most comprehensive machine learning texts around, this book does justice to the field's incredible richness, but without losing sight of the unifying principles. Peter Flach's clear, example-based approach begins by discussing how a spam filter works, which gives an immediate introduction to machine learning in action, with a minimum of technical fuss. Flach provides case studies of increasing complexity and variety with well-chosen examples and illustrations throughout. He covers a wide range of logical, geometric and statistical models and state-of-the-art topics such as matrix factorisation and ROC analysis. Particular attention is paid to the central role played by features. The use of established terminology is balanced with the introduction of new and useful concepts, and summaries of relevant background material are provided with pointers for revision if necessary. These features ensure Machine Learning will set a new standard as an introductory textbook.
Interestingly, most books are a mathematical treatment of machine learning. This one is more of a contextual read with the math baked in. As such, it is less a mountain of proofs and more a discussion of context that then presents the equation matching each case.
The earlier chapters will likely read better if you haven't taken probability, statistics, and regression classes. The later chapters might be more challenging for the math-impaired. That said, relative to a pure mathematical treatment, it's significantly easier to connect the equations with the use case and the choice of model technique.
I didn't have computer access for a while, so I was using this book to learn as much about ML as I could. I think the book did what it set out to do. It was most successful in the chapters where it constrained itself to discussing one type of model at a time. Elsewhere, I think it sacrificed clarity for brevity in some of the mathematical presentation; sometimes I felt like I was trying to read the author's mind to figure out how he got from one line of algebra to the next (although I suppose this is a common criticism I have of textbooks). I also feel it needed more practical examples for each of the models; I had folks asking me about ML topics I'd learned from the book, but I struggled to explain them beyond the way the book does. In other words, I wasn't able to effectively internalize the knowledge yet. Otherwise, I think this is a pretty good entry point for people like me who have some stats background and are trying to learn more about ML.
It's not badly written, but it doesn't present concepts in a linear, sequential way. Every chapter is a collection of notions that are only partially related to each other, and this style makes the book really difficult to follow and to put to practical use.
There are more "practical" (re)introductions to the field of machine learning, but the thing that's great about this one is that it goes very deep on the math, which you really need if you're going to even think about doing something original in this space. Read other books to run code samples and work through cookbook exercises, but read this book to understand the science. 5 stars