Accession Number : AD1032195


Title :   Implicitly Defined Neural Networks for Sequence Labeling


Descriptive Note : Technical Report


Corporate Author : MASSACHUSETTS INST OF TECH LEXINGTON, Lexington, MA


Personal Author(s) : Kazi, Michaeel M ; Thompson, Brian J


Full Text : http://www.dtic.mil/dtic/tr/fulltext/u2/1032195.pdf


Report Date : 31 Jul 2017


Pagination or Media Count : 5


Abstract : In this paper, we propose an implicitly defined neural network architecture and show that it can be computed in a reasonably efficient manner. Our architecture relaxes the causality assumption used to formulate recurrent neural networks, coupling the hidden states of the network together in order to improve performance on problems with complex, long-range dependencies in either direction of a sequence. We contrast our architecture with a bidirectional RNN and show that our proposed architecture matches the bidirectional network's performance on one task, while providing an ensembling benefit greater than that of ensembling multiple bidirectional networks.
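To make the abstract's central idea concrete, the following is a minimal sketch (not the authors' implementation) of an implicitly defined recurrence: rather than computing hidden states causally from left to right, every hidden state depends on both its left and right neighbors, so the full sequence of states is obtained as the solution of a coupled fixed-point system. The function name, dimensions, tanh update rule, and fixed-point solver are illustrative assumptions only.

    # Illustrative sketch of an implicitly defined recurrence (assumptions only,
    # not the paper's method): h_t = tanh(W_x x_t + W_l h_{t-1} + W_r h_{t+1} + b)
    # holds for every t simultaneously, and the whole state sequence is found by
    # fixed-point iteration instead of a causal forward pass.
    import numpy as np

    def implicit_hidden_states(X, W_x, W_l, W_r, b, n_iters=50, tol=1e-6):
        """Jointly solve the coupled hidden-state equations for all positions t.
        Boundary states h_{-1} and h_{T} are treated as zero vectors."""
        T, d = X.shape[0], b.shape[0]
        H = np.zeros((T, d))                                 # initial guess for all states
        for _ in range(n_iters):
            H_prev = np.vstack([np.zeros((1, d)), H[:-1]])   # h_{t-1}, zero at t = 0
            H_next = np.vstack([H[1:], np.zeros((1, d))])    # h_{t+1}, zero at t = T-1
            H_new = np.tanh(X @ W_x.T + H_prev @ W_l.T + H_next @ W_r.T + b)
            if np.max(np.abs(H_new - H)) < tol:              # stop once the system settles
                H = H_new
                break
            H = H_new
        return H

    # Tiny usage example with small random parameters (scaled so the iteration converges).
    rng = np.random.default_rng(0)
    T, d_in, d_h = 8, 4, 3
    X = rng.standard_normal((T, d_in))
    W_x = rng.standard_normal((d_h, d_in)) * 0.1
    W_l = rng.standard_normal((d_h, d_h)) * 0.1
    W_r = rng.standard_normal((d_h, d_h)) * 0.1
    b = np.zeros(d_h)
    H = implicit_hidden_states(X, W_x, W_l, W_r, b)
    print(H.shape)  # (8, 3): one state per position, each informed by both directions

In this sketch the coupling of states in both directions is what distinguishes the formulation from a causal RNN, at the cost of solving an implicit system rather than performing a single sequential pass.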


Descriptors :   hidden markov models , ambiguity , eigenvalues , grammars , vocabulary , algorithms , computations , probabilistic models , sampling , monte carlo method , markov models , sequences , equations , models , neural networks


Subject Categories : Statistics and Probability


Distribution Statement : APPROVED FOR PUBLIC RELEASE