Computing Reviews
Recent advances in deep learning
Oriol Vinyals. YouTube, 01:17:03, published on Jan 28, 2016, stanfordonline, https://www.youtube.com/watch?v=UAq961jQjYg. Type: Video
Date Reviewed: Jul 11 2016

The applications of deep learning (DL) are expanding from machine learning into other areas of artificial intelligence (AI). Concepts such as deep networks, deep reinforcement learning, and recurrent neural networks have emerged from recent advances in DL. This video addresses these recent advances.

In this video lecture, recorded at the Center for Professional Development at Stanford University, Oriol Vinyals, a research scientist at Google, outlines a talk comprising a DL overview, recurrent networks as sequence decoders, and memories and attention. Vinyals explains the basics of supervised learning, neurons, and the (other) chain rule, along with details about deep neural networks, a Google news benchmark, and pointer networks. He discusses skip-gram models, long short-term memory (LSTM), sequence-to-sequence models, and neural conversational models. He further explains attention and its types, namely position-based, content-based, and visual attention.
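The mechanisms named above are only listed in the lecture; as a concrete illustration, a minimal sketch of content-based attention (the function name, shapes, and example values here are assumptions for illustration, not taken from the talk) can be written in a few lines of NumPy: each memory slot is scored by similarity to a query, the scores are normalized with a softmax, and the result is a weighted sum of the memory.

```python
import numpy as np

def content_based_attention(query, memory):
    """Toy content-based attention (illustrative sketch only).

    Scores each memory slot by its dot product with the query,
    softmaxes the scores into weights, and returns the weighted
    sum of the memory (the context vector) plus the weights.
    """
    scores = memory @ query                 # similarity of each slot to the query
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights /= weights.sum()
    context = weights @ memory              # weighted sum over memory slots
    return context, weights

# Usage: four memory slots of dimension three; the slot most similar
# to the query receives the largest attention weight.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 1.0, 0.0]])
query = np.array([1.0, 0.5, 0.0])
context, weights = content_based_attention(query, memory)
```

Position-based attention differs only in that the weights are computed from slot positions rather than from slot content.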

Vinyals presents an interesting guessing example in “Is This a Cat or Dog?” He also includes several image-captioning examples, comparing human-written captions to machine-generated ones. Comparing DL with classical machine learning, he charts the growth of DL relative to support vector machines over the last two years. He further introduces deep reinforcement learning and discusses recent DL papers published in Nature. He anticipates the development of universal machine learning for applications spanning speech, text, search queries, images, videos, labels, entities, words, audio, and features.

This video will interest those working in DL and AI on real-time applications. Vinyals affirms that “DL passed the promise, sequences have become first-class citizens, and DL is not equivalent to AI,” which makes it worth watching.

Reviewer: Lalit Saxena. Review #: CR144563 (1609-0692)
Learning (I.2.6)
Connectionism And Neural Nets (I.2.6 ...)
Other reviews under "Learning": Date
Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed) BYTE 10(4): 265-273, 1985. Type: Article
Nov 1 1985
Macro-operators: a weak method for learning
Korf R. Artificial Intelligence 26(1): 35-77, 1985. Type: Article
Feb 1 1986
Inferring (mal) rules from pupils’ protocols
Sleeman D. Progress in artificial intelligence (Orsay, France, 1985). Type: Proceedings
Dec 1 1985
more...

Reproduction in whole or in part without permission is prohibited. Copyright 1999-2024 ThinkLoud®