Neurocomputing exploits the parallel dynamical interactions of modifiable neuron-like elements. It is therefore important to establish, by mathematical analysis, the capabilities and limitations of information processing in various neural network architectures. This paper, part tutorial and part review, aims to provide mathematical foundations for neurocomputing. It treats the transformation capabilities of layered networks, statistical neurodynamics, the dynamical characteristics of associative memory, a general theory of neural learning, and the self-organization of neural networks.