"A Connectionist Approach to Time Series Prediction: An Empirical Test," by Ramesh Sharda and Rajendra B. Patil, in Trippi & Turban, p. 749. Review by Fred Kitchens, 10-29-96.

The main purpose of this article is to examine the optimal architecture, learning rate, and momentum for a neural network. It also compares the forecasting performance of a neural network with that of the Box-Jenkins model. The authors begin by describing some previous successful and not-so-successful applications of neural networks.

Rather than use historical data such as stock prices, the authors chose "generic" time-series data from "the famous M-Competition." One hundred eleven series were analyzed using various combinations of network architecture, learning rate, and momentum. The Box-Jenkins approach to time-series forecasting is explained; for purposes of this experiment, the expert system AUTOBOX was used to apply the Box-Jenkins model. The neural network is then explained and applied to the data under the various settings. MAPE (mean absolute percent error) and Me-APE (median absolute percent error) are used to measure the success of both methods.
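
As a rough illustration of the two error measures, here is a short Python sketch. The series values and forecasts below are made up for the example (the article itself uses the M-Competition series); the point is that MAPE averages the absolute percent errors, while the median version takes their median and is therefore less sensitive to a few very bad forecasts.

    import numpy as np

    def mape(actual, forecast):
        """Mean Absolute Percent Error over a forecast horizon."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    def median_ape(actual, forecast):
        """Median Absolute Percent Error -- more robust to outlier errors than MAPE."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.median(np.abs((actual - forecast) / actual))

    # Hypothetical data, for illustration only
    actual   = [112, 118, 132, 129, 121]
    forecast = [110, 120, 128, 131, 119]
    print(mape(actual, forecast), median_ape(actual, forecast))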

The results show that in one case the neural network performed slightly better, and in another case the AUTOBOX program performed slightly better. In both situations, however, the difference was insignificant because of the large standard deviations.

The second purpose of the experiment was to study the optimal architecture, learning rate, and momentum for the neural network. To study architectural effects, networks with 12 input nodes, 12 hidden nodes, and 1, 2, 4, 6, 8, or 12 output nodes were tested; the 12 input-12 hidden-1 output configuration was found to be the best. To test the learning rate and momentum, all possible combinations of 0.1, 0.5, and 0.9 were tried for both parameters, and the optimal values were 0.1 for the learning rate and 0.1 for momentum. These settings were then used to test the effect of changing the number of hidden nodes (6, 12, 18, or 24); the 12-12-1 architecture was again found to be the best overall. In a test of multiple-horizon forecasting, AUTOBOX had a lower MAPE, but the neural network produced more stable results. The closeness of all the results indicates a need for further investigation. A sketch of this kind of network appears below.
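
The following is a minimal Python sketch of the kind of network described: a 12-input, 12-hidden, 1-output feedforward net trained by backpropagation with a learning rate of 0.1 and a momentum of 0.1. The details (sigmoid activations, the windowing helper, random weight initialization) are assumptions for illustration, not the authors' exact implementation.

    import numpy as np

    def make_windows(series, n_inputs=12):
        """Slide a 12-value window over the series; each window predicts the next value.
        (Windowing scheme is an assumption for illustration; scale the series into
        (0, 1) first, since the output unit below is a sigmoid.)"""
        X = np.array([series[i:i + n_inputs] for i in range(len(series) - n_inputs)])
        y = np.array(series[n_inputs:]).reshape(-1, 1)
        return X, y

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_12_12_1(X, y, epochs=1000, lr=0.1, momentum=0.1, seed=0):
        """Backpropagation with a momentum term:
        delta_w = -lr * gradient + momentum * previous delta_w."""
        rng = np.random.default_rng(seed)
        W1 = rng.uniform(-0.5, 0.5, (X.shape[1], 12))   # input -> hidden weights
        W2 = rng.uniform(-0.5, 0.5, (12, 1))            # hidden -> output weights
        dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)
        for _ in range(epochs):
            # forward pass
            h = sigmoid(X @ W1)
            out = sigmoid(h @ W2)
            # backward pass (squared-error loss, sigmoid derivatives)
            d_out = (out - y) * out * (1.0 - out)
            d_hid = (d_out @ W2.T) * h * (1.0 - h)
            # weight updates with momentum
            dW2 = -lr * (h.T @ d_out) + momentum * dW2_prev
            dW1 = -lr * (X.T @ d_hid) + momentum * dW1_prev
            W2 += dW2
            W1 += dW1
            dW2_prev, dW1_prev = dW2, dW1
        return W1, W2

Changing the learning rate and momentum arguments (e.g., to 0.5 or 0.9) reproduces the kind of parameter sweep the authors describe.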
