Saturday, March 22, 2025

The Mysterious Mathematics of AI Could Be Decoded Using a Neural Network Comparison Method

AI systems are developed to achieve specific tasks. Essentially sets of algorithms, AI systems are opaque, which leaves them a mystery to their end users. For instance, an AI system designed to predict the outcome of a political election cannot explain the criteria on which its prediction is based. This is known as the black-box problem in AI circles.

Artificial intelligence relies on neural networks to process information in a loosely brain-like way. The input and the output are both visible, but what transpires inside the network's processing layers mostly remains unknown.

What is the reason behind the AI black-box issue?

Artificial neural networks contain hidden layers of nodes, and as information passes from one layer to the next, each layer transforms the data so radically that it becomes extremely difficult, if not impossible, to trace how the output was produced.
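
To make that concrete, here is a minimal sketch of a tiny network in PyTorch (the architecture and data are illustrative placeholders, not from any real AI system), printing the activations as they pass through each layer:

import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny multilayer perceptron: 4 inputs, two hidden layers, 1 output.
model = nn.Sequential(
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, 8), nn.ReLU(),
    nn.Linear(8, 1),
)

h = torch.randn(1, 4)  # one input sample
for layer in model:
    h = layer(h)
    # Each layer re-mixes the previous activations; after a couple of
    # layers the numbers bear no obvious relation to the original input.
    print(type(layer).__name__, h.detach().numpy().round(2))

Even in this toy model, the intermediate values quickly stop resembling the input; scale that up to millions of nodes and the black box emerges.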


As a layer is fed new information, its nodes fit the data into a learned pattern, occasionally tweaking that pattern as more data arrives. By the end of training, the accumulated tweaks make the network so complex that its inner workings turn into a mystery.
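
Mathematically, that "tweaking" is gradient descent: each new example nudges every weight a little, and the nudges compound. A minimal illustration of a single tweak (the data and learning rate below are arbitrary placeholders):

import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(4, 1)  # a single "node" with four input weights
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(1, 4), torch.tensor([[1.0]])
before = model.weight.detach().clone()

loss = nn.functional.mse_loss(model(x), y)  # how wrong the node currently is
loss.backward()
opt.step()  # tweak the learned pattern slightly toward the new data

print("weight tweak:", (model.weight.detach() - before).numpy().round(3))

No single tweak is mysterious; it is the millions of compounded tweaks that no one designed by hand.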


There is no ready-made solution for making AI answerable. But because artificial intelligence now plays a huge part in industrial and critical sectors such as defense, healthcare, and law and order, researchers everywhere are trying to decipher how it functions.

A process for controlling confounds

Researchers at Los Alamos National Laboratory have developed a way of comparing neural networks that may offer a peek into how AI systems work. According to the research, comparing two networks fairly requires controlling for confounds: two networks with similar yet distinct structures can latch onto different features within the individual data points, and those differences must be isolated before the networks can be meaningfully compared.


The technique of representation inversion isolates the features in an image that a given network relies on while discarding or randomizing everything else. The procedure yields a new, inverted dataset in which each input retains only the relevant features, with everything else randomized.
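
One common way to realize representation inversion, sketched below under the assumption of a gradient-based approach (an illustration of the general idea, not the paper's actual code), is to start from random noise and optimize it until the network's internal representation matches that of a real input; whatever the network ignores stays random:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained feature extractor; the random weights here are
# placeholders, since the real method would use a fully trained network.
features = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

x = torch.randn(1, 16)         # a "real" input
target = features(x).detach()  # the network's internal representation of it

x_inv = torch.randn(1, 16, requires_grad=True)  # start from pure noise
opt = torch.optim.Adam([x_inv], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    # Pull the noise's representation toward the target; input directions
    # the network does not use stay random, which is the point.
    loss = nn.functional.mse_loss(features(x_inv), target)
    loss.backward()
    opt.step()

print("final representation mismatch:", round(loss.item(), 4))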

Measuring how an arbitrary network responds to data inverted from another model then yields a precise similarity score between the two models. The paper, "If You've Trained One You've Trained Them All: Inter-Architecture Similarity Increases With Robustness," written by Haydn Jones, was presented at the Conference on Uncertainty in Artificial Intelligence in the Netherlands. Jones stated, "Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI."
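
One plausible way to turn that procedure into a similarity score, sketched here with hypothetical toy networks (the agreement metric below is an illustrative stand-in, not necessarily the paper's exact measure), is to check how often a second network responds to the inverted inputs the same way the first one does:

import torch
import torch.nn as nn

torch.manual_seed(1)

# Two networks with different architectures (illustrative placeholders).
net_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
net_b = nn.Sequential(nn.Linear(16, 24), nn.ReLU(), nn.Linear(24, 3))

# Stand-in for a batch of inverted inputs built as in the sketch above.
inverted = torch.randn(64, 16)

preds_a = net_a(inverted).argmax(dim=1)
preds_b = net_b(inverted).argmax(dim=1)

# Similarity = fraction of inverted inputs the two networks classify alike.
agreement = (preds_a == preds_b).float().mean().item()
print(f"agreement on inverted data: {agreement:.2f}")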


Similarity is ascertained under adversarial attack

Jones, along with Los Alamos National Laboratory collaborators Garrett Kenyon, Jacob Springer, and Juston Moore, tested the new similarity metric on adversarially trained neural networks. They discovered that as the strength of the attack increased, the adversarially robust networks exhibited increasingly similar data representations, regardless of network architecture.
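
The mechanics of such an experiment can be sketched with the classic fast gradient sign method (FGSM) attack and a standard representation-similarity measure, linear centered kernel alignment (CKA). Note that this toy version uses untrained networks and will not reproduce the paper's finding; it only shows how attack strength and similarity can be measured:

import torch
import torch.nn as nn

torch.manual_seed(0)

def fgsm(model, x, y, eps):
    # Fast gradient sign method: a one-step adversarial perturbation.
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def linear_cka(a, b):
    # Linear centered kernel alignment between two representation matrices.
    a, b = a - a.mean(0), b - b.mean(0)
    return ((a.T @ b).norm() ** 2 / ((a.T @ a).norm() * (b.T @ b).norm())).item()

# Two untrained toy "architectures" (illustrative placeholders).
net1 = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
net2 = nn.Sequential(nn.Linear(16, 48), nn.ReLU(), nn.Linear(48, 4))

x = torch.randn(128, 16)
y = torch.randint(0, 4, (128,))

for eps in (0.0, 0.1, 0.5):
    x_adv = fgsm(net1, x, y, eps)
    with torch.no_grad():
        # Compare the two networks' outputs on the same attacked inputs.
        print(f"eps={eps}: CKA similarity = {linear_cka(net1(x_adv), net2(x_adv)):.3f}")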

The research does not yet decipher exactly how a given model works, but it narrows the search considerably, which could mean far less time spent hunting for new architectures. Haydn Jones also suggested that the findings could shed light on how perception works in humans and animals.

Dipanita Bhowmick
