Computing Reviews
Towards Bayesian deep learning: a framework and some existing methods
Wang H., Yeung D. IEEE Transactions on Knowledge and Data Engineering 28(12): 3395-3408, 2016. Type: Article
Date Reviewed: Feb 1 2017

Machine learning is successfully used in many computer applications today. This has spawned an abundance of research and publications on the topic. The main problem as I see it is the confusion machine learning terminology creates when contrasted with human natural learning.

Machine learning algorithms are methods of function optimization by successive approximation and belong to an extensively studied domain of applied mathematics. The main approach consists of fitting the parameters of an algorithm that computes a function with no explicit analytic expression, so that the values it produces match known or guessed values (seen, read, heard, and so on). The process of estimating these parameters relies on statistical information and is currently carried out by computers; hence the name machine learning.
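This notion of "fitting parameters by successive approximations until computed values match known values" can be illustrated with a minimal sketch. The example below (mine, not from the paper under review) fits the single parameter w of the function y = w*x to data generated by y = 2x, using gradient descent on the squared error:

```python
# Illustrative sketch: machine learning as function optimization.
# We fit the parameter w of the model y = w * x by successive
# approximations (gradient descent), so that its outputs match
# the known values ys. All names here are illustrative.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # known values, generated by y = 2x

w = 0.0                      # initial guess for the parameter
lr = 0.01                    # step size of each successive approximation

for _ in range(1000):
    # gradient of sum((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad           # adjust the parameter against the gradient

print(round(w, 3))           # converges to 2.0, the slope that fits the data
```

The loop is the "successive approximations" of the review's description: each iteration nudges the parameter so that the computed function agrees better with the observed values.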

The human natural learning process is grounded in cognition theory and does not seem connected to machine learning as described above. However, since machine learning is used in artificial intelligence (AI), the terminology may create confusion, particularly when discussing the limits of AI.

The paper discussed here starts from the observation that while seeing, reading, and hearing are fundamental tasks for a functioning comprehensive AI or data engineering system, such a system should also possess the ability to think. Among other attributes of natural thinking, the paper notes that the ability of an AI/data engineering (DE) system to think should involve "causal inference, logic deduction, and dealing with uncertainty," which are apparently beyond the capability of conventional deep learning methods. To overcome these limitations, the paper proposes integrating probabilistic graphical models with neural networks; the resulting models are called Bayesian deep learning (BDL).

The paper defines a BDL model as the integration of a perception component and a task-specific component. It studies the implementation and performance of such a model in various contexts provided by deep learning applications. Computations performed by the algorithm are defined by Bayes' rule applied to the probability distributions of three sets of variables characterizing the BDL model: perception variables, hinge variables connecting the two components, and task-specific variables.
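The role of the three variable sets can be written schematically with Bayes' rule. The notation below is mine, not quoted from the paper: let the perception, hinge, and task-specific variables be denoted p, h, and s respectively, with observed data D.

```latex
% Schematic joint posterior over the three variable sets of a BDL model:
p(\Omega_p, \Omega_h, \Omega_s \mid D)
  \;\propto\; p(D \mid \Omega_p, \Omega_h, \Omega_s)\,
              p(\Omega_s \mid \Omega_h)\,
              p(\Omega_h \mid \Omega_p)\,
              p(\Omega_p)
```

The hinge variables sit between the two components: the perception component (a neural network) supplies the distribution of the hinge variables, and the task-specific component (a probabilistic graphical model) consumes them.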

The heavy use of acronyms, and the frequent reliance on references to supply missing details, make the paper difficult to read.

Reviewer: T. Rus. Review #: CR145038 (1705-0306)
Learning (I.2.6)
Data Mining (H.2.8 ...)
Neural Nets (C.1.3 ...)
Neural Nets (I.5.1 ...)
Other reviews under "Learning": Date
Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed) BYTE 10(4): 265-273, 1985. Type: Article
Nov 1 1985
Macro-operators: a weak method for learning
Korf R. Artificial Intelligence 26(1): 35-77, 1985. Type: Article
Feb 1 1986
Inferring (mal) rules from pupils’ protocols
Sleeman D. Progress in artificial intelligence (Orsay, France), 1985. Type: Proceedings
Dec 1 1985
