Solving the TSP by the AALHNN algorithm

Math Biosci Eng. 2022 Jan 25;19(4):3427-3448. doi: 10.3934/mbe.2022158.

Abstract

The traditional Hopfield neural network (HNN) is prone to getting trapped in local minima when solving the Traveling Salesman Problem (TSP) and often fails to converge to an effective solution, a defect that stems from the penalty method the HNN relies on. To remedy this defect, an accelerated augmented Lagrangian Hopfield neural network (AALHNN) algorithm is proposed in this paper. The algorithm escapes the dilemma of the penalty method by using the Lagrangian multiplier method, ensuring that the solution obtained for the TSP is effective. A second-order term added in the algorithm stabilizes the neural network dynamic model of the problem, thereby improving solution efficiency. When solving the TSP by AALHNN, some changes are made to the TSP model of Hopfield and Tank: the TSP constraints are weighted by Lagrange multipliers and augmented Lagrange terms, respectively, and the resulting augmented Lagrangian function, built on the path-length function, ensures robust convergence and escape from the local-minimum trap. The Lagrange multipliers are updated using the Nesterov acceleration technique. In addition, it is proved theoretically that the extremum obtained by the improved algorithm is an optimal solution of the original problem, and approximate optimal solutions of the TSP were obtained repeatedly in the simulation experiments. Compared with the traditional HNN, the proposed method guarantees effective TSP solutions and yields better tours.
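For readers who want a concrete picture of the scheme described above, the following Python toy example illustrates the general idea: a continuous Hopfield-Tank-style state matrix V (rows are cities, columns are visit steps), the row/column tour constraints enforced through Lagrange multipliers plus a quadratic augmentation term, and a Nesterov-style look-ahead in the multiplier updates. This is a minimal sketch under assumed parameter values and simplified clipped-gradient dynamics; it is not the authors' AALHNN implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 6                                   # number of cities (toy instance)
    coords = rng.random((n, 2))
    D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)  # symmetric distance matrix

    def tour_length(V):
        """Soft tour length of the state matrix V (rows: cities, columns: visit steps)."""
        return np.einsum('xy,xi,yi->', D, V, np.roll(V, -1, axis=1))

    def residuals(V):
        """Constraint residuals: each city visited exactly once, one city per step."""
        return np.concatenate([V.sum(axis=1) - 1.0, V.sum(axis=0) - 1.0])

    def aug_lagrangian_grad(V, mu, rho):
        """Gradient of length(V) + mu.g(V) + (rho/2)||g(V)||^2 with respect to V."""
        grad = D @ np.roll(V, -1, axis=1) + D @ np.roll(V, 1, axis=1)   # path-length term
        r, c = V.sum(axis=1) - 1.0, V.sum(axis=0) - 1.0
        grad += (mu[:n] + rho * r)[:, None] + (mu[n:] + rho * c)[None, :]
        return grad

    rho = 5.0                               # augmentation (second-order) coefficient, assumed value
    eta_V, eta_lam = 0.01, 0.1              # assumed step sizes
    V = 1.0 / n + 0.02 * rng.random((n, n)) # near-uniform initial neuron states
    lam = np.zeros(2 * n)                   # Lagrange multipliers for the 2n constraints
    lam_prev = lam.copy()

    for k in range(1, 5001):
        # Nesterov-style look-ahead point for the multiplier update
        mu = lam + (k - 1.0) / (k + 2.0) * (lam - lam_prev)
        # Primal descent on the clipped neuron states
        V = np.clip(V - eta_V * aug_lagrangian_grad(V, mu, rho), 0.0, 1.0)
        # Dual ascent from the look-ahead point
        lam_prev, lam = lam, mu + eta_lam * residuals(V)

    # Decode a tour: assign each step its strongest not-yet-used city
    tour, used = [], set()
    for i in range(n):
        x = next(int(x) for x in np.argsort(-V[:, i]) if int(x) not in used)
        tour.append(x); used.add(x)
    print("tour:", tour, " length:", sum(D[tour[i], tour[(i + 1) % n]] for i in range(n)))

In this sketch the quadratic rho-term plays the role of the second-order factor mentioned above, keeping the dynamics stable, while the multipliers, rather than a fixed penalty weight, drive the state toward a valid tour.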

Keywords: HNN; Lagrange neural network algorithm; TSP; augmented Lagrangian; Nesterov acceleration technique.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acceleration
  • Algorithms*
  • Computer Simulation
  • Neural Networks, Computer*
  • Travel