Computing Reviews
Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed) BYTE 10(4): 265-273, 1985. Type: Article
Date Reviewed: Nov 1 1985

This is a short, popular account of some current work in machine learning, introduced with a description of convergence in perceptron-like networks. The author discusses Hopfield nets, which seek “minimum energy” patterns such that active units have maximal interfacilitation, and his own Boltzmann Machines, in which unit activation is probabilistic and the network settles to a “thermal equilibrium” in the presence of such background noise. The reference list contains a couple of more substantial papers.
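The contrast the review draws can be made concrete in a few lines of code. The sketch below, with a hand-picked symmetric weight matrix and binary 0/1 units (both assumptions for illustration, not taken from the article), shows the Hopfield "energy" of a state and the Boltzmann-machine stochastic update, in which a unit switches on with a probability that depends on its energy gap and a "temperature" parameter:

```python
import math
import random

random.seed(0)

# Hypothetical 3-unit network: symmetric weights, no self-connections.
W = [[ 0.0, 1.0, -2.0],
     [ 1.0, 0.0,  1.0],
     [-2.0, 1.0,  0.0]]

def energy(s):
    # Hopfield energy: lowest when mutually facilitating units are co-active.
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def boltzmann_step(s, T):
    # Pick a unit at random; turn it on with probability
    # sigmoid(gap / T), where gap is the energy drop if the unit is on.
    i = random.randrange(len(s))
    gap = sum(W[i][j] * s[j] for j in range(len(s)))
    p_on = 1.0 / (1.0 + math.exp(-gap / T))
    s[i] = 1 if random.random() < p_on else 0
    return s

# Repeated stochastic updates at fixed T approach thermal equilibrium,
# visiting low-energy states more often than high-energy ones.
s = [1, 1, 1]
for _ in range(100):
    boltzmann_step(s, T=1.0)
```

Setting T near zero makes the update nearly deterministic (a Hopfield-style descent to a local energy minimum), while higher T injects the background noise that lets the network escape poor minima.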

Reviewer: J. R. Sampson. Review #: CR109534
Learning (I.2.6 )
Human Information Processing (H.1.2 ... )
Other reviews under "Learning":
Macro-operators: a weak method for learning
Korf R. Artificial Intelligence 26(1): 35-77, 1985. Type: Article
Feb 1 1986
Inferring (mal) rules from pupils’ protocols
Sleeman D. Progress in artificial intelligence (Orsay, France), 1985. Type: Proceedings
Dec 1 1985
Looking at learning
Schank R. Progress in artificial intelligence (Orsay, France), 1985. Type: Proceedings
Dec 1 1985

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2023 ThinkLoud®