How to code an artificial neural network (Tic-tac-toe)?

caw · Apr 17, 2009 · Viewed 27.3k times

I want to play Tic-tac-toe using an artificial neural network. My configuration for the network is as follows: for each of the 9 fields, I use 2 input neurons, so I have 18 input neurons in total. For every field, there is 1 input neuron for a piece of Player 1 and 1 neuron for a piece of Player 2. In addition, I have 1 output neuron which gives an evaluation of the current board position. The higher the output value, the better the position is for Player 1; the lower it is, the better it is for Player 2.
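For illustration, here is a minimal Python sketch of this encoding (just my own illustration; the 0/1/2 board representation is an assumption):

    # Encoding sketch: for each of the 9 fields, one input for a
    # Player 1 piece and one for a Player 2 piece (18 inputs in total).
    def encode_board(board):
        # board: list of 9 values, each 0 (empty), 1 (Player 1) or 2 (Player 2)
        inputs = []
        for field in board:
            inputs.append(1 if field == 1 else 0)  # Player 1 occupies this field?
            inputs.append(1 if field == 2 else 0)  # Player 2 occupies this field?
        return inputs  # 18 activations of 0 or 1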

But my problem is: how do I code that neural network? My idea was to use an array of 18 elements for the input neurons, whose values are the input weights. Then I would walk through the array in a loop and, whenever a neuron is activated, add its weight to the output value. So the output value is the sum of the weights of the activated input neurons:

Output = SUM(ActivatedInputNeurons)
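In Python, that summation could look roughly like this (only a sketch of the idea above, with made-up names):

    # Sum of the weights of the activated input neurons, as described above.
    def evaluate_linear(inputs, weights):
        # inputs: 18 activations (0 or 1); weights: 18 input weights
        return sum(w for w, x in zip(weights, inputs) if x)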

Do you think this is a good way of programming the network? Do you have better ideas?

I hope you can help me. Thanks in advance!

Answer

Svante · Apr 17, 2009

Well, you have an input layer of 18 neurons and an output layer of 1 neuron. That's OK. However, you need to give your neural net the opportunity to relate the inputs to one another. For that, you need at least one intermediate layer. I would propose using 9 neurons in the intermediate (hidden) layer. Each of these should be connected to every input neuron, and the output neuron should be connected to every intermediate neuron. Each such connection has a weight, and each neuron has an activation level.
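As a rough Python sketch (illustrative names only; the random initialization range is just an assumption), this structure could be represented like so:

    import random

    N_INPUT, N_HIDDEN = 18, 9

    # weights_ih[h][i]: weight of the connection from input neuron i to hidden neuron h
    weights_ih = [[random.uniform(-1, 1) for _ in range(N_INPUT)]
                  for _ in range(N_HIDDEN)]
    # weights_ho[h]: weight of the connection from hidden neuron h to the output neuron
    weights_ho = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]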

Then you go through all neurons, one layer at a time. The input layer is simply activated with the board state. For every further neuron, you go through all of its incoming connections and sum the products of the connected neuron's activation level and the connection's weight. Finally, you compute the neuron's activation level by applying a sigmoid function to this sum.
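A minimal Python sketch of this forward pass, using the weight lists from the previous snippet (the logistic sigmoid is one common choice):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def evaluate(inputs, weights_ih, weights_ho):
        # inputs: the 18 input activations (the board state)
        hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                  for row in weights_ih]
        # single output neuron: weighted sum of the hidden activations, then sigmoid
        return sigmoid(sum(w * h for w, h in zip(weights_ho, hidden)))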

This is the working principle. Now you need to train the net to get better results. There are several algorithms for this (backpropagation being the standard one); you will have to do some googling and reading. Finally, you might want to adjust the number of neurons and layers if the results don't become convincing fast enough. For example, you could reduce the input layer to 9 neurons and activate them with +1 for an X and -1 for an O, as sketched below. Perhaps adding another intermediate layer, or increasing the number of neurons in a layer, yields better results.
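The reduced 9-neuron encoding could look like this (a sketch; the character-based board representation is an assumption):

    # Alternative encoding mentioned above: one input per field,
    # +1 for an X, -1 for an O, 0 for an empty field.
    def encode_board_compact(board):
        # board: list of 9 characters, 'X', 'O' or ' '
        return [1 if f == 'X' else -1 if f == 'O' else 0 for f in board]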