Computing Reviews
Machine learning: the basics
Jung A., Springer International Publishing, Cham, Switzerland, 2022. 229 pp. Type: Book (978-981-16-8192-9)
Date Reviewed: Dec 19 2022

As the old saying goes, “don’t judge a book by its cover.” This review will affirm that statement. The book covers many aspects of machine learning, but it does so in a very terse manner: a large area of research and application is squeezed into a little over 200 pages of text. The term “basics” in the title is ambiguous. It can be interpreted as a thorough introduction, with many examples and explanations that emphasize simplicity and clarity, even for the uninitiated (that is, good pedagogy). Or it can mean the theoretical underpinnings of a discipline, presented with full mathematical rigor; for comparison, consider “the basics of quantum mechanics.” The two readings of “basics” produce two radically different books. This book on machine learning belongs to the second.

The author bases machine learning on three fundamental concepts: data, models, and loss. The process of machine learning is one of continual validation and refinement. The book comprises ten chapters plus an essential glossary of technical terms. The first chapter is an introduction and orientation; the second describes and emphasizes the three fundamental components.
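The data/model/loss triad can be illustrated with a minimal sketch (mine, not taken from the book): the data are (x, y) pairs, the model is a line y = w*x + b, the loss is mean squared error, and learning means adjusting w and b to reduce that loss.

```python
# Data: (x, y) pairs generated by y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

def loss(w, b):
    """Mean squared error of the linear model y = w*x + b on the data."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Learning: gradient descent on the loss with respect to w and b.
w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw
    b -= lr * db

print(f"w = {w:.3f}, b = {b:.3f}")
```

On this exactly linear data the fit recovers w close to 2 and b close to 1; the same pattern — a parametric model plus a differentiable loss — underlies most of the supervised methods the book surveys.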

Chapter 3 starts a sequence of five chapters on supervised learning methods. Though only 23 pages long, it covers 15 topics, including kinds of regression, decision trees, Bayesian classifiers, deep learning, and maximum likelihood. Chapter 4 continues with empirical risk minimization. The fifth chapter is on gradient-based learning. Chapter 6 returns to model validation and selection, and the seventh chapter addresses regularization.
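Empirical risk minimization, the subject of chapter 4, can be stated generically (the notation here is mine, not necessarily the book's): choose the hypothesis in the class that minimizes the average loss over the m training examples:

```latex
\hat{h} \;=\; \operatorname*{arg\,min}_{h \in \mathcal{H}} \;
\frac{1}{m} \sum_{i=1}^{m} L\!\left(h\!\left(x^{(i)}\right),\, y^{(i)}\right)
```

The gradient-based methods of chapter 5 are then one way to carry out this minimization when h is parametric and L is differentiable.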

Chapter 8 looks at the challenging problem of clustering (k-means, Gaussian mixture models, and connectivity-based methods) and the use of clustering for preprocessing. Chapter 9 covers feature learning, including dimensionality reduction, principal component analysis, and non-numeric data. The concluding chapter 10 is on transparent, explainable machine learning, one aspect of a principal challenge of artificial intelligence: getting the software to explain how it arrived at its conclusions.
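Of the clustering methods chapter 8 covers, k-means is the simplest to sketch; the following is a bare illustration (mine, not the book's code): alternately assign each point to its nearest centroid, then move each centroid to the mean of its cluster.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                              + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Update step: each centroid moves to its cluster's mean
        # (an empty cluster keeps its old centroid).
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Two well-separated blobs around (0.1, 0.1) and (5.0, 5.0).
pts = [(0, 0), (0.1, 0.2), (0.2, 0.1), (5, 5), (5.1, 4.9), (4.9, 5.2)]
centroids = sorted(kmeans(pts, 2))
print(centroids)
```

For well-separated data like this, the centroids settle near the two blob means; the book's point about clustering as preprocessing is that such cluster assignments can then serve as features for a downstream supervised method.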

The topics mentioned in this review are only a portion of those presented in the book. Approximately two pages are allocated to each topic, which is not much at all considering their complexity. The accompanying software archive on GitHub is not keyed to the book; it contains several student projects, but little else related to the book. The reader who would most benefit from this book is someone already engaged in creating and applying machine learning at a fairly sophisticated level. A novice trying to master “the basics” will probably have to look elsewhere.

Reviewer: Anthony J. Duben
Review #: CR147524
Category: Learning (I.2.6)
Other reviews under "Learning":
- Learning in parallel networks: simulating learning in a probabilistic system. Hinton G. (ed), BYTE 10(4): 265-273, 1985. Type: Article. Reviewed: Nov 1 1985
- Macro-operators: a weak method for learning. Korf R., Artificial Intelligence 26(1): 35-77, 1985. Type: Article. Reviewed: Feb 1 1986
- Inferring (mal) rules from pupils' protocols. Sleeman D., Progress in artificial intelligence (Orsay, France), 1985. Type: Proceedings. Reviewed: Dec 1 1985
