
ReLU and Softplus neural nets as zero-sum, turn-based, stopping games

CMSA NEW TECHNOLOGIES IN MATHEMATICS

When: February 11, 2026
2:00 pm - 3:00 pm
Where: Virtual (Zoom)
Speaker: Yiannis Vlassopoulos (Athena Research Center)

Neural networks are for the most part treated as black boxes. In an effort to begin elucidating the mathematical structure they encode, we will explain how ReLU neural nets can be interpreted as zero-sum, turn-based stopping games. The game runs in the opposite direction to the net: the input to the net is the terminal reward of the game, and the output of the net is the value of the game at its initial states. The bias at each neuron is used to define the reward, and the weights are used to define state-transition probabilities. One player, Max, tries to maximize the reward, and the other, Min, tries to minimize it. Every neuron gives rise to two game states, one where Max plays and one where Min plays. In fact, running the ReLU net is equivalent to the Shapley-Bellman backward recursion for the value of the game. As a corollary of this construction, we obtain a path-integral expression for the output of the net given its input. Moreover, using the fact that the Shapley operator is monotone (with respect to the coordinate-wise order), we obtain bounds on the output of the net from bounds on the input. Adding an entropic regularization to the ReLU-net game allows us to interpret Softplus neural nets as games in an analogous fashion. This is joint work with Stéphane Gaubert.
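For a concrete feel for the backward recursion, here is a minimal NumPy sketch (not part of the talk, and not the speaker's construction). It reads each ReLU activation max(0, z) as a crude stop-or-continue choice and evaluates a toy net layer by layer as a value recursion; it then replaces that max by its log-sum-exp (entropic) smoothing, which is exactly the Softplus activation. The function names relu_layer_value and softplus_layer_value are made up for illustration, and the sketch deliberately omits the Min player, the two-states-per-neuron structure, and the transition probabilities described in the abstract.

```python
import numpy as np

def relu_layer_value(W, b, v):
    # ReLU(W v + b): numerically, each neuron takes the larger of 0
    # ("stop") and its bias plus the weighted incoming values ("continue"),
    # so evaluating the net layer by layer is a backward value recursion.
    return np.maximum(0.0, W @ v + b)

def softplus_layer_value(W, b, v, temperature=1.0):
    # Log-sum-exp (entropic) smoothing of the same max:
    # T * log(exp(0/T) + exp(z/T)) -> max(0, z) as T -> 0,
    # and at T = 1 it is exactly the Softplus activation log(1 + e^z).
    z = W @ v + b
    return temperature * np.logaddexp(0.0, z / temperature)

# Toy two-layer net, evaluated as a recursion from the input
# (the "terminal reward") to the output (the "value at the initial states").
rng = np.random.default_rng(0)
x = rng.normal(size=3)
layers = [(rng.normal(size=(4, 3)), rng.normal(size=4)),
          (rng.normal(size=(2, 4)), rng.normal(size=2))]

v = x
for W, b in layers:
    v = relu_layer_value(W, b, v)
print("ReLU net output:", v)

v = x
for W, b in layers:
    v = softplus_layer_value(W, b, v)
print("Softplus net output:", v)
```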

Zoom: https://harvard.zoom.us/j/91864143060?pwd=liDbUVYXs47QsYhxdzXYowl8vpQGy1.1