Repository URL:
Azhar, Feraz
Preprint description:
Localizing content in neural networks provides a bridge to understanding the way in which the brain stores and processes information. In this paper, I propose that polytopes in the state space of the hidden layer of feedforward neural networks serve as vehicles of content. I analyze these geometrical structures from an information-theoretic point of view, invoking mutual information to help define the content stored within them. I establish how this proposal addresses the problem of misclassification, and I provide a novel solution to the disjunction problem, one that hinges on the precise nature of the causal-informational framework for content advocated herein.