In this study, a new procedure for determining the optimum activation function for a neural network is proposed. Unlike previous methods of optimising activation functions, the proposed approach treats selection of the most suitable activation function as a discrete optimisation problem: various combinations of candidate functions are generated, their performance as activation functions in a neural network is evaluated, and the function or combination of functions yielding the best result is returned as the optimum. The efficacy of the proposed optimisation method is compared with that of conventional approaches using data generated from several synthetic functions. Numerical results indicate that the network produced by the proposed method achieves better accuracy with a smaller network size than the other approaches. The bridge scour problem is used to further demonstrate the performance of the proposed algorithm. Based on the training and validation results, the neural network developed using the proposed optimisation method produces a better estimate of both equilibrium and time-dependent scour depth than networks with a priori chosen activation functions. Furthermore, the predictions of the proposed model are compared with those of empirical methods, with the former proving more accurate.
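The selection loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the candidate set, the use of function composition to form combinations, and the proxy evaluation (a random hidden layer with a least-squares readout in place of full network training) are all assumptions made for brevity.

```python
import itertools
import numpy as np

# Hypothetical pool of elementary candidate functions (for illustration only).
CANDIDATES = {
    "tanh": np.tanh,
    "relu": lambda z: np.maximum(z, 0.0),
    "sin": np.sin,
}

def make_combinations(funcs):
    """Enumerate single functions and pairwise compositions f(g(z))."""
    combos = dict(funcs)
    for (n1, f1), (n2, f2) in itertools.product(funcs.items(), repeat=2):
        # Default arguments pin f1, f2 so each lambda keeps its own pair.
        combos[f"{n1}({n2}(z))"] = lambda z, f1=f1, f2=f2: f1(f2(z))
    return combos

def validation_error(act, X, y, hidden=32, seed=0):
    """Proxy fitness: random hidden layer + least-squares output layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    H = act(X @ W)                       # hidden activations
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.mean((H @ coef - y) ** 2))

def select_activation(X, y):
    """Discrete search: score every combination, return the best one."""
    combos = make_combinations(CANDIDATES)
    scores = {name: validation_error(f, X, y) for name, f in combos.items()}
    best = min(scores, key=scores.get)
    return best, scores[best]
```

For example, on data sampled from a synthetic target such as `y = sin(3x)`, `select_activation(X, y)` returns the name of the lowest-error candidate together with its validation error; a full treatment would replace the proxy evaluation with proper network training and a held-out validation set.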