### Abstract

We perform a detailed fixed-point analysis of two-unit recurrent neural networks with sigmoid-shaped transfer functions. Using geometrical arguments in the space of transfer function derivatives, we partition the network state-space into distinct regions corresponding to stability types of the fixed points. Unlike previous studies, we do not assume any special form of connectivity pattern between the neurons, and all free parameters are allowed to vary. We also prove that when both neurons have excitatory self-connections and the mutual interaction pattern is the same (i.e., the neurons mutually inhibit or excite each other), new attractive fixed points are created through a saddle-node bifurcation. Finally, for an N-neuron recurrent network, we give lower bounds on the rate of convergence of attractive periodic points toward the saturation values of neuron activations, as the absolute values of connection weights grow.
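The class of networks studied can be illustrated with a minimal sketch. Assuming the standard discrete-time update x_{t+1} = σ(W x_t + b) with a logistic sigmoid (the exact parameterization used in the paper is not reproduced here), the weights below are hypothetical values chosen to sit in the regime the abstract describes: excitatory self-connections (positive diagonal) and matching mutual-interaction signs (here, mutual inhibition). Iterating from an asymmetric start converges to an attractive fixed point:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical two-neuron weight matrix: positive diagonal entries give
# excitatory self-connections; equal-signed off-diagonal entries give a
# symmetric mutual interaction (mutual inhibition in this example).
W = np.array([[6.0, -3.0],
              [-3.0, 6.0]])
b = np.array([-1.5, -1.5])

def iterate(x, steps=200):
    """Run the discrete-time dynamics x_{t+1} = sigmoid(W x_t + b)."""
    for _ in range(steps):
        x = sigmoid(W @ x + b)
    return x

# Starting near one corner of the state space, the orbit settles onto an
# attractive fixed point with one neuron near saturation (1) and the
# other near 0.
x_star = iterate(np.array([0.9, 0.1]))

# A fixed point satisfies x* = sigmoid(W x* + b); check the residual.
residual = np.linalg.norm(x_star - sigmoid(W @ x_star + b))
print(x_star, residual)
```

Scaling up the magnitudes of the entries of `W` pushes such fixed points closer to the saturation values 0 and 1, which is the convergence behavior bounded in the paper's final result.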

| Original language | English (US) |
|---|---|
| Pages (from-to) | 1379-1414 |
| Number of pages | 36 |
| Journal | Neural Computation |
| Volume | 13 |
| Issue number | 6 |
| DOIs | https://doi.org/10.1162/08997660152002898 |
| State | Published - Jun 1 2001 |


### All Science Journal Classification (ASJC) codes

- Arts and Humanities (miscellaneous)
- Cognitive Neuroscience

### Cite this

Tiňo, Peter; Horne, Bill G.; Giles, C. Lee. **Attractive periodic sets in discrete-time recurrent networks (with emphasis on fixed-point stability and bifurcations in two-neuron networks).** *Neural Computation*, vol. 13, no. 6, pp. 1379-1414. https://doi.org/10.1162/08997660152002898

Research output: Contribution to journal › Article

```
TY - JOUR
T1 - Attractive periodic sets in discrete-time recurrent networks (with emphasis on fixed-point stability and bifurcations in two-neuron networks)
AU - Tiňo, Peter
AU - Horne, Bill G.
AU - Giles, C. Lee
PY - 2001/6/1
Y1 - 2001/6/1
N2 - We perform a detailed fixed-point analysis of two-unit recurrent neural networks with sigmoid-shaped transfer functions. Using geometrical arguments in the space of transfer function derivatives, we partition the network state-space into distinct regions corresponding to stability types of the fixed points. Unlike in the previous studies, we do not assume any special form of connectivity pattern between the neurons, and all free parameters are allowed to vary. We also prove that when both neurons have excitatory self-connections and the mutual interaction pattern is the same (i.e., the neurons mutually inhibit or excite themselves), new attractive fixed points are created through the saddle-node bifurcation. Finally, for an N-neuron recurrent network, we give lower bounds on the rate of convergence of attractive periodic points toward the saturation values of neuron activations, as the absolute values of connection weights grow.
AB - We perform a detailed fixed-point analysis of two-unit recurrent neural networks with sigmoid-shaped transfer functions. Using geometrical arguments in the space of transfer function derivatives, we partition the network state-space into distinct regions corresponding to stability types of the fixed points. Unlike in the previous studies, we do not assume any special form of connectivity pattern between the neurons, and all free parameters are allowed to vary. We also prove that when both neurons have excitatory self-connections and the mutual interaction pattern is the same (i.e., the neurons mutually inhibit or excite themselves), new attractive fixed points are created through the saddle-node bifurcation. Finally, for an N-neuron recurrent network, we give lower bounds on the rate of convergence of attractive periodic points toward the saturation values of neuron activations, as the absolute values of connection weights grow.
UR - http://www.scopus.com/inward/record.url?scp=0035375070&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0035375070&partnerID=8YFLogxK
U2 - 10.1162/08997660152002898
DO - 10.1162/08997660152002898
M3 - Article
C2 - 11387050
AN - SCOPUS:0035375070
VL - 13
SP - 1379
EP - 1414
JO - Neural Computation
JF - Neural Computation
SN - 0899-7667
IS - 6
ER -
```