Computing Reviews
Deep learning approaches to text production
Narayan S., Gardent C., Morgan & Claypool, San Rafael, CA, 2020. 199 pp. Type: Book (978-1-681737-60-7)
Date Reviewed: Feb 23 2021

Interest in natural language production (or generation) shows no sign of fading. Newer frameworks and approaches, some of them causing paradigm shifts, are advancing the field. Through this book, Shashi Narayan and Claire Gardent unravel the mystery behind text production, taking readers through its historical evolution and elaborating on major developments.

Text production involves generating text from varied input sources for use in different applications. For instance, the input to automated text production may be meaning representations, data, or even text itself, while the applications for which output is generated include text summarization, paraphrasing, simplification, and data verbalization.

Following chapter 1, “Introduction,” the book is organized into three parts: “Basics,” “Neural Improvements,” and “Data Sets and Conclusion.” A substantial part of the discussion centers on different neural approaches to the encoder-decoder model as it appears in various neural architectures, including one that uses “a recurrent neural network (RNN) as encoder and a different RNN as decoder.” Discussions of bidirectional recurrent neural networks (BiRNNs) are also present. For text summarization, a hierarchical encoder-decoder architecture is described; the model consists of a “CNN sentence encoder, a RNN document-encoder and an attention-based RNN sentence extractor.”
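
To make the encoder-decoder pattern concrete (this is not code from the book), here is a minimal PyTorch sketch of an RNN encoder paired with an RNN decoder; the class name, dimensions, and vocabulary sizes are hypothetical choices for illustration:

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        # Minimal RNN encoder-decoder: one GRU encodes the source, another decodes the target.
        def __init__(self, src_vocab, tgt_vocab, emb_dim=128, hid_dim=256):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, emb_dim)
            self.tgt_embed = nn.Embedding(tgt_vocab, emb_dim)
            self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)  # encoder RNN
            self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)  # decoder RNN
            self.out = nn.Linear(hid_dim, tgt_vocab)  # maps decoder states to target-vocabulary logits

        def forward(self, src_ids, tgt_ids):
            # Encode: the encoder's final hidden state summarizes the source sequence.
            _, hidden = self.encoder(self.src_embed(src_ids))
            # Decode: the decoder starts from that state and reads the target tokens (teacher forcing).
            dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), hidden)
            return self.out(dec_out)

    # Toy usage: a batch of two source sequences (length 10) and two target sequences (length 12).
    model = Seq2Seq(src_vocab=5000, tgt_vocab=5000)
    src = torch.randint(0, 5000, (2, 10))
    tgt = torch.randint(0, 5000, (2, 12))
    logits = model(src, tgt)  # shape: (2, 12, 5000)

The attention mechanisms and BiRNN encoders discussed in the book would be layered on top of this basic skeleton.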

Almost all relevant aspects of machine learning and deep learning are illustrated, and ample pointers to authoritative sources on related topics are included. Graphical illustrations complement the technical content and help readers draw quick inferences. A basic understanding of machine learning and deep learning fundamentals is expected as a prerequisite.

A feature that caught my attention is the chapter dedicated to datasets, which adds to the richness and elegance of this book. Various types of natural language representations, including semantic representations and dependency trees, are discussed in detail. However, I found the bibliography a bit difficult to navigate.

In the concluding chapter, the authors note the arrival of transformers as a paradigm shift away from long short-term memory (LSTM) networks. The work should be of great help to educational, research, and commercial institutions engaged in text production from different forms of input.

Reviewer: CK Raju
Review #: CR147196 (2107-0171)
Learning (I.2.6)
Natural Language (H.5.2 ...)
