Abstract [eng]
The theoretical part of the work describes artificial and convolutional neural networks, their structure, and their function. It also covers the different types of data and the problems associated with them: the curse of dimensionality and overfitting. Lyapunov functions and Lyapunov exponents are introduced together with their application in the context of neural networks, and the selected dataset, assessment metrics, and technologies are defined.

The experimental part presents the evaluation metrics used, an overview of the implemented programs, and the experiments themselves. Networks using the Lyapunov-function algorithms (LF I and improved LF I) and the Adam optimizer were applied to the XOR problem and to the classification of adversarially attacked images. The improved LF I achieves the best metrics, but convergence experiments showed that a network trained with this algorithm needs significantly more epochs than the others to reach a very low loss value. The LF I algorithm satisfies the convergence condition fastest on the XOR problem, whereas Adam does so on image classification. These results show that although the Lyapunov-function algorithms can solve these problems, they are less efficient on complex tasks such as image classification.

The next part of the experiments is the calculation of Lyapunov exponents on the MNIST and CIFAR datasets affected by adversarial attacks, as well as on an additional dataset compiled in this work. For simple data, the Lyapunov exponents clustered more clearly into the appropriate classes (original and adversarially attacked), whereas for more complex data the classes overlapped strongly. Using Lyapunov exponents for network training showed that the more clustered the exponents are, the higher the accuracy the network can achieve. It was also found that, on average, a network trained on a set of Lyapunov exponents with the improved LF I algorithm achieves the highest accuracy.
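The abstract does not detail the LF I update rule, so the following is only a generic, textbook-style sketch of the underlying idea (not the thesis's LF I algorithm): the loss is treated as a Lyapunov candidate V, and a gradient step is accepted only if V strictly decreases. All names and parameters here are illustrative.

```python
import numpy as np

def lyapunov_descent_step(theta, loss, grad, lr=0.3, max_halvings=20):
    v = loss(theta)                    # Lyapunov candidate V = current loss
    g = grad(theta)
    for _ in range(max_halvings):
        candidate = theta - lr * g
        if loss(candidate) < v:        # accept only if V strictly decreases
            return candidate, lr
        lr *= 0.5                      # otherwise shrink the step and retry
    return theta, lr                   # no decreasing step found; stay put

# Toy usage on V(theta) = ||theta||^2, whose equilibrium is the origin.
loss = lambda th: float(np.dot(th, th))
grad = lambda th: 2.0 * th
theta = np.array([3.0, -2.0])
for _ in range(30):
    theta, _ = lyapunov_descent_step(theta, loss, grad)
print(theta)  # converges toward [0, 0]
```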
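For the Adam baseline on the XOR problem, a minimal sketch follows. The framework and hyperparameters used in the thesis are not given in the abstract, so PyTorch, the 2-4-1 architecture, the learning rate, and the epoch count are all assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility of this toy run

# XOR truth table as a full batch of four examples.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# Hypothetical 2-4-1 network; the thesis's architecture is not specified here.
model = nn.Sequential(nn.Linear(2, 4), nn.Tanh(), nn.Linear(4, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for epoch in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(X).round())  # expected: [[0.], [1.], [1.], [0.]]
```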
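The abstract does not state how the Lyapunov exponents were estimated from the image data. As a self-contained illustration of the quantity itself, the sketch below computes the largest Lyapunov exponent of the logistic map from the standard time average of ln|f'(x)|; at r = 4 the result approaches ln 2 ≈ 0.693.

```python
import math

def largest_lyapunov_logistic(r=4.0, x0=0.2, n_transient=1_000, n_steps=100_000):
    """Largest Lyapunov exponent of x -> r*x*(1-x) via (1/n) * sum ln|f'(x_i)|."""
    x = x0
    for _ in range(n_transient):       # discard transient behaviour
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x)| for the logistic map
    return total / n_steps

print(largest_lyapunov_logistic())     # ~0.693 for r = 4 (chaotic regime)
```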