
Crossbar switching.

To illustrate the power of the high order approach, consider a schematic telecommunications crossbar as shown below.

Here packets arrive on $m$ input lines and must be routed to $m$ output lines through an $m \times m$ grid of switches. Each packet carries a header naming the output line it requests: the header of the packet in input line 1 reads output 2, and so on. A packet goes unserved when its request conflicts with another; say the packet in input line 5 is not served because its header reads output line 3, which has already been claimed. The problem is to select a 0-1 matrix with at most one 1 in each row and at most one 1 in each column which in some sense best serves the queues in the input lines. That is, the request matrix at a given time is the 0-1 matrix with a 1 in position $ij$ if the header of any packet in input line $i$ specifies output line $j$, and 0 otherwise. The routing matrix is then a 0-1 matrix with at most one 1 in each row and each column which is an optimal match, in some sense, of the request matrix. In emerging gigabit-per-second technology, this routing matrix must be repeatedly chosen, each choice in microseconds.
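The "optimal match in some sense" is left open here. As one concrete illustration (an assumption for exposition, not the algorithm of [MT] or [M]), the sketch below builds the request matrix from packet headers and selects a routing matrix of maximum cardinality by augmenting paths:

```python
# Sketch: build a request matrix from packet headers and pick a routing
# matrix serving as many input lines as possible -- a maximum bipartite
# matching found by Kuhn's augmenting-path method. The function names
# and the maximum-cardinality criterion are illustrative choices.

def request_matrix(headers, m):
    """headers[i] = list of output lines requested by packets queued on input i."""
    R = [[0] * m for _ in range(m)]
    for i, outs in enumerate(headers):
        for j in outs:
            R[i][j] = 1
    return R

def routing_matrix(R):
    """Return a 0-1 routing matrix: a maximum matching within R."""
    m = len(R)
    match_col = [-1] * m          # match_col[j] = input line matched to output j

    def try_augment(i, seen):
        for j in range(m):
            if R[i][j] and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or try_augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(m):
        try_augment(i, [False] * m)

    S = [[0] * m for _ in range(m)]
    for j, i in enumerate(match_col):
        if i != -1:
            S[i][j] = 1
    return S
```

For example, with three input lines whose queued packets request outputs `[1]`, `[0, 2]`, and `[1]` respectively, the two requests for output 1 conflict, so at most two inputs can be served in one routing choice.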

A choice of routing matrix amounts to evolving a trajectory of the dynamical system from a point in $n$-space representing the request matrix (perhaps with some random noise added to break symmetries) to a neighborhood of a point in $n$-space representing one of the possible routing matrices.

A neural network which solves the crossbar switching problem was devised in [MT] and discussed in [M]. It is an instance of a Hopfield model [H]. Although the Hopfield model works well for selecting permutation matrices, arriving at other $m \times m$ 0-1 matrices with fewer 1 entries requires respecifying the model before every choice. The purpose of this section is to present an alternative model which does not require such respecification.

The dimension of the neural network for the crossbar switching problem is $n = m^2$, one neuron per crosspoint. Each neuron (state variable) is mathematically defined by its state $u_i$ (a real number, the voltage of a capacitor) and its associated output $V_i = g(u_i)$ determined by the gain function $g$ (the output voltage of an amplifier controlled by the capacitor voltage). An example of a gain function appears below.
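One standard gain function with these properties, used here purely as an illustrative stand-in (the paper's own $g$ may differ), is a scaled hyperbolic tangent:

```python
import math

# Sketch: a typical gain function (illustrative assumption: the scaled
# hyperbolic tangent), mapping the capacitor voltage u to an amplifier
# output voltage V in the open interval (-1, 1).

def gain(u, lam=5.0):
    """V = g(u); the parameter lam sets the slope g'(0) = lam."""
    return math.tanh(lam * u)
```

The function is odd, strictly increasing, and saturates at $\pm 1$; a steeper slope drives outputs harder toward the $\pm 1$ coding used below.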

The Hopfield model which chooses a permutation matrix (see [J chapter 6]) has the form
\[
\frac{du_i}{dt} \;=\; -u_i \;-\; A \sum_{j \in R(i)} V_j \;-\; A \sum_{j \in C(i)} V_j \;-\; B, \qquad V_j = g(u_j),
\]
where $R(i)$ is the set of indices of neurons in the same row as $i$, not including $i$, $C(i)$ is the analogous set of column indices, and $A$ and $B$ are positive constants. We make here and hereafter the identification of 0-1 matrices of size $m \times m$ ($V_{ij}$ values) with $-1,+1$ vectors of length $n = m^2$ ($V_i$ values), a 0 in one identified with a $-1$ in the other. It can be shown that every $m \times m$ permutation matrix is a stable constant trajectory (has $du_i/dt = 0$ for each $i$) of this dynamical system. Furthermore, if the slope $g'(0)$ of the gain function is sufficiently large, then the only stable constant trajectories of the system are the permutation matrices [J 118-122].
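The selection dynamics can be sketched numerically. The sketch below forward-Euler integrates one standard additive Hopfield form for this task; the constants $A$ and $B$, the tanh gain, and the step sizes are all illustrative assumptions, not the exact specification of [J chapter 6]:

```python
import math
import random

# Sketch: Euler integration of an additive Hopfield-type system whose
# stable constant trajectories, in the +/-1 output coding, are m x m
# permutation matrices.  Assumed form (an illustration):
#     du_ij/dt = -u_ij - A * (sum of row-neighbor outputs)
#                       - A * (sum of column-neighbor outputs) - B,
#     V_ij = tanh(lam * u_ij),  with bias B = 2A(m-2).

def run_hopfield(m=4, A=1.0, lam=5.0, dt=0.02, steps=10000, seed=0):
    rng = random.Random(seed)
    B = 2.0 * A * (m - 2)
    # small random initial states break the symmetry between outcomes
    u = [[0.1 * rng.uniform(-1.0, 1.0) for _ in range(m)] for _ in range(m)]
    for _ in range(steps):
        V = [[math.tanh(lam * u[i][j]) for j in range(m)] for i in range(m)]
        rows = [sum(V[i]) for i in range(m)]
        cols = [sum(V[i][j] for i in range(m)) for j in range(m)]
        for i in range(m):
            for j in range(m):
                # neighbor sums exclude neuron ij itself
                du = (-u[i][j]
                      - A * (rows[i] - V[i][j])
                      - A * (cols[j] - V[i][j])
                      - B)
                u[i][j] += dt * du
    # threshold the outputs back to a 0-1 matrix
    return [[1 if u[i][j] > 0 else 0 for j in range(m)] for i in range(m)]
```

Starting from small random states, the trajectory should settle near a permutation matrix: thresholding the final outputs gives a 0-1 matrix with exactly one 1 in each row and each column.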

Marrakchi and Troudet [MT] and Morris [M] have proposed using the Hopfield model for the crossbar switching problem. Unfortunately, if some input lines have no packets, or if conflicts otherwise preclude a permutation matrix as a routing matrix, details of the model must be respecified before each run (some neurons must be disconnected from the model and their outputs frozen at $-1$).
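A minimal sketch of what that respecification entails (an assumed mechanization of the description above, not the authors' exact procedure): before each run, every neuron whose request-matrix entry is 0 must be located, disconnected, and frozen.

```python
# Sketch: identify the neurons to disconnect before each run of the
# Hopfield model, given the current request matrix R.  Each neuron
# corresponds to one crosspoint (i, j); those with no pending request
# are frozen with output -1 so the network searches only over
# sub-permutations compatible with R.  (Illustrative helper name.)

def frozen_neurons(R):
    """Return the (i, j) crosspoints to disconnect for request matrix R."""
    m = len(R)
    return [(i, j) for i in range(m) for j in range(m) if R[i][j] == 0]
```

For a request matrix with an idle input line (a row of zeros), the entire row of neurons is frozen, and this bookkeeping must be redone at every routing choice, which is the burden the high order model is designed to remove.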

To avoid this rebuilding of the neural network at every selection of a routing matrix, we define the high order crossbar switching model by the equations
[display equations not recoverable from source]

It is easy to show that this system automatically has a finite attractor region [J 15]. Furthermore we can prove two theorems characterizing its stable constant trajectories.

[theorem statements not recoverable from source]



Dan Warner
Wed Mar 25 17:30:59 EST 1998