Publication record · 18.cifr/1986.rumelhart.backprop-mlp
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units.
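As a concrete illustration of the procedure the abstract describes, here is a minimal sketch of back-propagation on the XOR task, one of the paper's examples. The network size, sigmoid units, learning rate, and initialization below are illustrative assumptions, not the authors' exact experimental setup.

```python
import numpy as np

# Minimal back-propagation sketch on XOR (illustrative assumptions:
# 2 hidden units, sigmoid activations, learning rate 0.5, random init).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
d = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(0.0, 1.0, (2, 2))  # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(0.0, 1.0, (2, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5  # learning rate (assumed value)

for epoch in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)   # hidden-unit activations
    y = sigmoid(h @ W2 + b2)   # actual output vector

    # Error measure E = 1/2 * sum((y - d)^2); back-propagate dE/dy
    # through the sigmoid derivatives to get per-layer deltas.
    delta_out = (y - d) * y * (1 - y)             # output-layer deltas
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer deltas

    # Gradient-descent weight adjustments: w <- w - eta * dE/dw
    W2 -= eta * h.T @ delta_out
    b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_hid
    b1 -= eta * delta_hid.sum(axis=0)

print(np.round(y, 3))  # outputs should approach [0, 1, 1, 0]
```

After training, the two hidden units come to encode intermediate features of the input pairs (the "hidden" representations the abstract refers to), which the output unit then combines to solve XOR.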
Convergence guarantees for deep or recurrent networks were left open. Sensitivity to the choice of learning rate and the possibility of convergence to poor local minima were acknowledged as practical problems. The authors suggested extensions to recurrent architectures and unsupervised representation learning as next steps.