Journal of Economics, Finance and Administrative Science
Vol. 21, Issue 41, Pages 89-93 (December 2016)
Article
Open Access
Stock market index prediction using artificial neural network
Amin Hedayati Moghaddam (a), Moein Hedayati Moghaddam (b) (corresponding author: moein_hedayati67@yahoo.com), Morteza Esfandyari (c)
(a) Department of Chemical and Petroleum Engineering, Sharif University of Technology, Tehran, Iran
(b) Faculty of Management and Accounting, College of Farabi, University of Tehran, Qom, Iran
(c) Department of Chemical Engineering, Faculty of Engineering, University of Bojnord, Bojnord, Iran
Abstract

In this study, the ability of artificial neural networks (ANNs) to forecast the daily NASDAQ stock exchange rate was investigated. Several feed forward ANNs trained by the back propagation algorithm were assessed. The methodology considered the short-term historical stock prices as well as the day of week as inputs. Daily NASDAQ stock exchange rates from January 28, 2015 to June 18, 2015 were used to develop a robust model. The first 70 days (January 28 to March 7) were selected as the training dataset, and the last 29 days were used for testing the model's prediction ability. Networks for NASDAQ index prediction were developed and validated for two types of input dataset (four prior days and nine prior days).

Keywords:
NASDAQ
ANN
Prediction
JEL classification:
C5
Nomenclature

ANN: artificial neural network
BPNN: back propagation neural network
RBFNN: radial basis function neural network
FIS: fuzzy inference system
ANFIS: adaptive neuro-fuzzy inference system
MLP: multi-layer perceptron
PNN: probabilistic neural network
GFNN: genetic algorithm based fuzzy neural network
x: input parameter
wP: connection weight of neuron P
uP: input combiner
bP: bias
φ: activation function
yP: output of the neuron
SCG: scaled conjugate gradient
LM: Levenberg-Marquardt
OSS: one step secant
GDA: gradient descent with adaptive learning rate
GDM: gradient descent with momentum
y(k): stock price at time k
D(k): day of week
R2: determination coefficient
MSE: mean square error
yexp.: experimental value
ypred.: predicted value

1 Introduction

When studying some phenomena, developing a mathematical model that simulates the non-linear relations between input and output parameters is a hard task due to the complicated nature of these phenomena. Artificial intelligence systems such as artificial neural networks (ANNs), fuzzy inference systems (FIS), and adaptive neuro-fuzzy inference systems (ANFIS) have been applied to model a wide range of challenging problems in science and engineering. ANNs display better performance in bankruptcy prediction than conventional statistical methods such as discriminant analysis and logistic regression (Quah & Srinivasan, 1999). Investigations of the credit rating process showed that ANNs have better prediction ability than statistical methods because of the complex relations between financial and other input variables (Hájek, 2011). Bankruptcy prediction (Alfaro, García, Gámez, & Elizondo, 2008; Lee, Booth, & Alam, 2005; Baek & Cho, 2003), credit risk assessment (Yu, Wang, & Lai, 2008; Angelini, Di Tollo, & Roli, 2008), and security market applications are other economic areas in which ANNs have been widely applied. The objective of this study is to investigate the ability of ANNs to forecast the daily NASDAQ stock exchange rate.

2 Background

Guresen, Kayakutlu, and Daim (2011) investigated the performance of multi-layer perceptron (MLP), dynamic ANN, and hybrid ANN models in forecasting market values. Chen, Leung, and Daouk (2003) used a probabilistic neural network (PNN) to predict the direction of the Taiwan stock index return. They reported that the PNN performs better in stock index forecasting than the generalized-method-of-moments Kalman filter and random walk models. Kuo, Chen, and Hwang (2001) developed a decision support system for the stock market by combining a genetic algorithm based fuzzy neural network (GFNN) and an ANN. The proposed system was evaluated using data from the Taiwan stock market. Qiu, Liu, and Wang (2012) developed a new forecasting model based on fuzzy time series and C-fuzzy decision trees to predict the Shanghai Composite Index. Atsalakis and Valavanis (2009) developed an adaptive neuro-fuzzy inference controller to forecast the next day's stock price trend. They reported the potential of ANFIS for predicting the stock index.

3 Artificial intelligence systems used in forecasting

3.1 Artificial neural network

A neural network is a bio-inspired system composed of several simple processing elements called neurons. The neurons are connected to each other through a joint mechanism consisting of a set of assigned weights.

MLP is a common approach in regression-type problems. An MLP network has three types of layer: an input layer, an output layer, and one or more hidden layers. A neuron takes the values of the input parameters, sums them up according to the assigned weights, and adds a bias. By applying the transfer function, the value of the output is determined. The number of neurons in the input layer corresponds to the number of input parameters. The architecture of a typical MLP is presented in Figure 1.

Figure 1.

Multi-layer feed forward neural network structure.

Elaborated by the authors.


In mathematical terms, the performance of neuron P can be described as follows:

uP = wP1x1 + wP2x2 + … + wPnxn + bP
yP = φ(uP)

where x1,…,xn are the input parameters; wP1,…,wPn are the connection weights of neuron P; uP is the input combiner; bP is the bias; φ is the activation function; and yP is the output of the neuron.
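This single-neuron computation can be sketched in Python (an illustrative example, not the authors' code; the weights, bias, and inputs are made-up values, and tanh stands in for a TANSIG-style transfer function):

```python
import math

def neuron_output(x, w, b):
    """Compute uP = sum(wPj * xj) + bP, then yP = phi(uP) with a tanh activation."""
    u = sum(wj * xj for wj, xj in zip(w, x)) + b  # input combiner plus bias
    return math.tanh(u)                           # activation function

# Made-up inputs, weights, and bias for illustration
y = neuron_output([0.5, -1.0, 2.0], [0.1, 0.4, -0.2], 0.3)
```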

In this study, feed forward artificial neural networks trained by the back propagation algorithm have been used.

There are several learning techniques, such as scaled conjugate gradient (SCG), Levenberg-Marquardt (LM), one step secant (OSS), gradient descent with adaptive learning rate (GDA), and gradient descent with momentum (GDM), that are used for training the constructed models.
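The paper does not give the update rules for these methods; purely as a generic illustration, a single gradient-descent-with-momentum (GDM) weight update can be sketched as follows (the learning rate, momentum, weights, and gradients are arbitrary values):

```python
def gdm_update(weights, grads, velocity, lr=0.01, momentum=0.9):
    """One GDM step: v <- momentum*v - lr*grad, then w <- w + v."""
    velocity = [momentum * v - lr * g for v, g in zip(velocity, grads)]
    weights = [w + v for w, v in zip(weights, velocity)]
    return weights, velocity

# One training step on made-up weights and gradients
w, v = [0.5, -0.2], [0.0, 0.0]
w, v = gdm_update(w, [0.1, -0.3], v)
```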

4 Predicting NASDAQ index

The methodology used in this study considered the short-term historical stock prices as well as the day of week as inputs. The overall procedure is governed by the following equation:

y(k+1) = f(y(k), y(k-1), …, y(k-n+1), D(k))

where y(k) is the stock price at time k, n is the number of historical days, and D(k) is the day of week.
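The input construction implied by this equation can be sketched in Python (an illustrative reimplementation, not the authors' MATLAB code; the price series and day-of-week codes are invented):

```python
def build_samples(prices, days_of_week, n):
    """Pair each target price y(k+1) with its inputs:
    the n prior prices y(k), ..., y(k-n+1) and the day of week D(k)."""
    samples = []
    for k in range(n - 1, len(prices) - 1):
        features = prices[k - n + 1 : k + 1] + [days_of_week[k]]
        samples.append((features, prices[k + 1]))
    return samples

# Invented closing prices; day of week coded 1 (Monday) to 5 (Friday)
prices = [4680.0, 4702.1, 4691.5, 4720.3, 4733.8, 4715.2]
dows = [3, 4, 5, 1, 2, 3]
pairs = build_samples(prices, dows, n=4)  # 4 prior days -> 5 inputs per sample
```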

Daily NASDAQ stock exchange rates from January 28, 2015 to June 18, 2015 were used to develop a robust model. The first 70 days (January 28 to March 7) were selected as the training dataset, and the last 29 days were used for testing the model's prediction ability.

For constructing, training, and testing the model, MATLAB R2010a was used. The performance of the ANNs was evaluated using the determination coefficient (R2) and the mean square error (MSE) of the modeled output. R2 was determined as follows:

R2 = 1 - Σi (yexp.,i - ypred.,i)^2 / Σi (yexp.,i - ȳexp.)^2

where ȳexp. is the mean of the experimental values. MSE represents the average squared difference between the values predicted by a model and the actual values. MSE was determined by the following equation:

MSE = (1/M) Σi (yexp.,i - ypred.,i)^2

where yexp. and ypred. are the experimental and predicted values, respectively, and M is the total number of data points.
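The two metrics can be computed directly in plain Python (an illustrative sketch, not the authors' MATLAB code; the sample index values are invented):

```python
def r_squared(y_exp, y_pred):
    """Determination coefficient: 1 - SS_res / SS_tot."""
    mean_exp = sum(y_exp) / len(y_exp)
    ss_res = sum((e - p) ** 2 for e, p in zip(y_exp, y_pred))
    ss_tot = sum((e - mean_exp) ** 2 for e in y_exp)
    return 1.0 - ss_res / ss_tot

def mse(y_exp, y_pred):
    """Mean square error over M data points."""
    return sum((e - p) ** 2 for e, p in zip(y_exp, y_pred)) / len(y_exp)

# Invented index values for illustration
actual = [4700.0, 4710.0, 4690.0]
predicted = [4698.0, 4712.0, 4693.0]
r2 = r_squared(actual, predicted)  # 1 - 17/200 = 0.915
```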

5 Results and discussion

In this section, several networks for NASDAQ index prediction were developed and validated for two input datasets (four prior days and nine prior days). Then the optimized network structure for both types of dataset was selected according to their prediction abilities.

5.1 Four prior working days

In Table 1, the values of R2 for different training algorithms and transfer functions of a BPNN with 20-40-20 neurons in its hidden layers are shown. In experiments 1 through 3, networks were trained by LM; in experiments 4 through 6, by OSS; and in experiment 7, by the GDA method. As shown, applying the OSS training method and the TANSIG transfer function resulted in an optimized trained network according to the R2 values of the validation dataset. Networks with the TANSIG or PURELIN transfer function and the GDA training function were not able to generate a robust model (not shown). Accordingly, in the subsequent experiments in the current study, OSS and TANSIG were selected as the training method and transfer function, respectively.

Table 1.

The prediction ability of a BPNN with different training and transfer function.

No.  Training function  Transfer function  R2 (Train)  R2 (Test)  R2 (Validation)  R2 (Total)
1    LM                 TANSIG             0.9925      0.9869     0.8864           0.974
2    LM                 PURELIN            0.9457      0.9675     0.9027           0.9395
3    LM                 LOGSIG             0.9989      0.9698     0.7339           0.9475
4    OSS                LOGSIG             0.9166      0.9133     0.8669           0.9069
5    OSS                PURELIN            0.7016      0.8824     0.8230           0.7675
6    OSS                TANSIG             0.9386      0.8917     0.9408           0.9267
7    GDA                LOGSIG             0.9016      0.8308     0.8497           0.8649

Elaborated by the authors.

In Table 2, configurations of the MLP are presented. The data obtained from 99 days of the NASDAQ index were randomly divided into a training set (60%), a validation set (20%), and a testing set (20%). On the basis of the preliminary study, the training method and transfer function were OSS and TANSIG, respectively. The architecture of the neural network was optimized by applying different values for the number of hidden layers and the number of neurons in each hidden layer. Sixteen networks with different architectures were generated, trained, and tested. The R2 values of the training set, validation set, and total data were calculated, but only the R2 value of the validation set was considered in selecting the optimized network architecture. It was found that networks with four or more hidden layers could not be trained to generate a robust model (these networks are not shown). As seen in Table 2, R2 reached its maximum value when the number of hidden layers was 2 and the number of neurons in each hidden layer was 40. It is worth noting that any change in the number of neurons influences the model proficiency. For example, as seen in Table 2, although a network with a 5-5 structure had an acceptable validation R2 (0.8631), a network with 5-10 neurons had poor prediction ability.

Table 2.

The R2 value for BPNN with different structure for four prior days.

No.  Structure        R2 (Train)  R2 (Test)  R2 (Validation)  R2 (Total)
1    -                0.8177      0.9616     0.9493           0.8692
2    -                0.9250      0.9188     0.9605           0.9264
3    5-5              0.9229      0.9724     0.8631           0.9212
4    5-10             0.2185      -0.0165    -0.2065          0.0838
5    10-10            0.9534      0.9602     0.6811           0.9344
6    10-20            0.9059      0.9758     0.9108           0.9263
7    40-40            0.9003      0.9639     0.9616           0.9264
8    50-100           0.9576      0.9324     0.9393           0.9483
9    100-200          0.9390      0.9466     0.9533           0.9393
10   200-300          0.9267      0.9642     0.8822           0.9276
11   20-40-20         0.9386      0.8917     0.9408           0.9267
12   20-50-20         0.9403      0.9417     0.9077           0.9374
13   50-100-50        0.6837      0.8108     0.7785           0.7326
14   20-40-40-20      0.8990      0.8445     0.8093           0.8739
15   10-20-20-10      0.8977      0.9602     0.9015           0.9109
16   10-20-20-20-10   0.9304      0.9341     0.9456           0.9329

Elaborated by the authors.
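The selection rule described above, keeping the architecture with the highest validation R2, can be sketched over a few rows reproduced from Table 2:

```python
# (structure, validation R2) pairs copied from Table 2
results = [
    ("5-5", 0.8631),
    ("5-10", -0.2065),
    ("10-20", 0.9108),
    ("40-40", 0.9616),
    ("100-200", 0.9533),
    ("20-40-20", 0.9408),
]

# Pick the structure whose validation R2 is largest
best_structure, best_r2 = max(results, key=lambda row: row[1])
```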

Figure 2 shows the predicted data generated by the optimized BPNN (two hidden layers with 40 neurons each) against the observed NASDAQ index for the training, validation, testing, and total data. Figure 3 shows the real and predicted NASDAQ index values for the four-prior-day model over the 99 days from January 28 to June 18, 2015.

Figure 2.

The predicted data against experimental data for training, validation, testing, and total data.

Elaborated by the authors.

Figure 3.

Real and predicted NASDAQ index values for four prior days.

Elaborated by the authors.

5.2 Nine prior working days

As with the four-prior-day dataset, the values of R2 for different training algorithms and transfer functions of an MLP with 20-40-20 neurons in its hidden layers were generated and tested. Applying the OSS training method and the LOGSIG transfer function resulted in an optimized trained network according to the R2 value of the validation dataset (0.9622).

In Table 3, several configurations of the MLP are presented. The training method and transfer function were OSS and LOGSIG, respectively. It was found that a network with three hidden layers and 20-40-20 neurons in the hidden layers was the optimized network. Figure 4 shows the real and predicted NASDAQ index values for the nine-prior-day model over the 99 days from January 28 to June 18, 2015. Accordingly, there is no distinct difference between the prediction abilities obtained with four and with nine prior working days as input parameters.

Table 3.

The R2 value for BPNN with different structure for nine prior days.

No.  Structure        R2 (Train)  R2 (Test)  R2 (Validation)  R2 (Total)
1    -                0.8195      0.8480     0.8425           0.8274
2    10               0.8025      0.7113     0.8608           0.7882
3    5-5              0.8437      0.7762     0.8518           0.8280
4    5-10             0.8127      0.8554     0.8262           0.8180
5    20-20            0.8344      0.9116     0.7524           0.8389
6    20-30            0.8859      0.8690     0.8756           0.8707
7    50-100           0.8335      0.9028     0.8300           0.8292
8    200-300          0.8473      0.7899     0.8938           0.8506
9    300-400          0.8476      0.8685     0.7988           0.8435
10   20-50-20         0.8648      0.7937     0.8809           0.8372
11   20-40-20         0.9318      0.8827     0.9622           0.9262
12   50-100-50        0.8697      0.8367     0.8388           0.8552
13   20-40-40-20      0.7761      0.9205     0.8601           0.8188
14   20-50-50-20      0.8359      0.8462     0.8977           0.8443
15   10-20-20-20-10   0.8578      0.8348     0.6704           0.8107

Elaborated by the authors.

Figure 4.

Real and predicted NASDAQ index values for nine prior days.

Elaborated by the authors.

6 Conclusion

The model uses the NASDAQ exchange-rate values of the last four or nine working days, as well as the day of week, as input parameters. For four prior working days, applying the OSS training method and the TANSIG transfer function in a network with 20-40-20 neurons in the hidden layers resulted in an optimized trained network with an R2 value of 0.9408 for the validation dataset. For this dataset, the maximum R2 value among networks with the OSS training method and the TANSIG transfer function was obtained when the number of hidden layers was 2 and the numbers of neurons were 40-40. For nine prior working days, a network with 20-40-20 neurons in the hidden layers, the OSS training method, and the LOGSIG transfer function was the optimized network, with a validation R2 of 0.9622. The model outputs show that there is no distinct difference between the prediction abilities obtained with four and with nine prior working days as input parameters.

References
[Alfaro et al., 2008]
E. Alfaro, N. García, M. Gámez, D. Elizondo.
Bankruptcy forecasting: An empirical comparison of AdaBoost and neural networks.
Decision Support Systems, 45 (2008), pp. 110-122
[Angelini et al., 2008]
E. Angelini, G. Di Tollo, A. Roli.
A neural network approach for credit risk evaluation.
The Quarterly Review of Economics and Finance, 48 (2008), pp. 733-755
[Atsalakis and Valavanis, 2009]
G.S. Atsalakis, K.P. Valavanis.
Forecasting stock market short-term trends using a neuro-fuzzy based methodology.
Expert Systems with Applications, 36 (2009), pp. 10696-10707
[Baek and Cho, 2003]
J. Baek, S. Cho.
Bankruptcy prediction for credit risk using an auto-associative neural network in Korean firms.
Proceedings of the 2003 IEEE International Conference on Computational Intelligence for Financial Engineering, IEEE (2003), pp. 25-29
[Chen et al., 2003]
A.S. Chen, M.T. Leung, H. Daouk.
Application of neural networks to an emerging financial market: Forecasting and trading the Taiwan Stock Index.
Computers & Operations Research, 30 (2003), pp. 901-923
[Guresen et al., 2011]
E. Guresen, G. Kayakutlu, T.U. Daim.
Using artificial neural network models in stock market index prediction.
Expert Systems with Applications, 38 (2011), pp. 10389-10397
[Hájek, 2011]
P. Hájek.
Municipal credit rating modelling by neural networks.
Decision Support Systems, 51 (2011), pp. 108-118
[Kuo et al., 2001]
R.J. Kuo, C. Chen, Y. Hwang.
An intelligent stock trading decision support system through integration of genetic algorithm based fuzzy neural network and artificial neural network.
Fuzzy Sets and Systems, 118 (2001), pp. 21-45
[Lee et al., 2005]
K. Lee, D. Booth, P. Alam.
A comparison of supervised and unsupervised neural networks in predicting bankruptcy of Korean firms.
Expert Systems with Applications, 29 (2005), pp. 1-16
[Qiu et al., 2012]
W. Qiu, X. Liu, L. Wang.
Forecasting shanghai composite index based on fuzzy time series and improved C-fuzzy decision trees.
Expert Systems with Applications, 39 (2012), pp. 7680-7689
[Quah and Srinivasan, 1999]
T.S. Quah, B. Srinivasan.
Improving returns on stock investment through neural network selection.
Expert Systems with Applications, 17 (1999), pp. 295-301
[Yu et al., 2008]
L. Yu, S. Wang, K.K. Lai.
Credit risk assessment with a multistage neural network ensemble learning approach.
Expert Systems with Applications, 34 (2008), pp. 1434-1444
Copyright © 2016. Universidad ESAN