In my last article I mentioned that I've started playing with Artificial Neural Networks. I bought the book "Tutorial on Neural Systems Modeling" by Thomas J. Anastasio. I've been a bit slow about reading it and applying the lessons learned. I tend to have trouble with mathematics expressed in Greek letters, and likewise I nod off when dealing with calculus. That being said, I can begin to lay out the framework for the basic computational model he presents.
Anastasio goes on about how the floating-point numbers represent firing rates in the early models he presents. I don't really care about all that. As far as I'm concerned the models work equally well no matter what we believe the numbers represent.
So here are the basics: suppose you want to keep a trace of the activities of your Artificial Neural Network as it changes over time. One could use a matrix where each row represents one neuron's value as it changes over time and each column represents a particular time step. One can then model a fully connected network with a square matrix where each cell tells one how strongly a neuron's value at time t influences its own value, or another neuron's value, at time t+1.
V1, V2, V3, and V4 represent a vertical slice through the trace of the neurons. (t) represents the values at time t and (t+1) represents the values at the next time slice. W represents the weights, the multipliers, between the neurons. W11 is the multiplier connecting V1 to itself. W12 is the multiplier connecting V2 to V1. And so on.
V1(t+1) = W11*V1(t) + W12*V2(t) + W13*V3(t) + W14*V4(t)
V2(t+1) = W21*V1(t) + W22*V2(t) + W23*V3(t) + W24*V4(t)
V3(t+1) = W31*V1(t) + W32*V2(t) + W33*V3(t) + W34*V4(t)
V4(t+1) = W41*V1(t) + W42*V2(t) + W43*V3(t) + W44*V4(t)
The trace of the neurons can be expressed as the matrix V, where the number after the V becomes an index into an array of simulated neurons. V(1,t) would contain the value of neuron 1 at time t; that's V1(t) from above. The weights can likewise be expressed as the matrix W, so W(1,2) is the matrix version of W12. The whole set of calculations above is then written in shorthand as V(:,t+1) = W*V(:,t) in MATLAB.
MATLAB starts indices at 1 where Python starts them at 0. For now I'll start at 1.
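To make the shorthand concrete, here's a minimal runnable sketch of the update loop in MATLAB/Octave syntax. The network size, the random weights, and the starting pulse are placeholder choices of mine, not anything from the book:

    nNeurons = 4;                      % size of the network
    nSteps = 4;                        % how many time steps to simulate
    W = rand(nNeurons);                % any 4x4 weight matrix will do here
    V = zeros(nNeurons, nSteps + 1);   % the trace: one column per time slice
    V(:,1) = [1; 0; 0; 0];             % arbitrary starting values at time 1
    for t = 1:nSteps
        V(:,t+1) = W * V(:,t);         % the four equations above, in one line
    end
    disp(V)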
An example, using something like MATLAB syntax inside English text:
Suppose all of W is 0 except W(2,1)=1, W(3,2)=1, and W(4,3)=1. Suppose all of V is 0 except V(1,1)=1. By iterating t from 1 to 4 and setting V(:,t+1) = W*V(:,t), one finds the following cells have been set to 1: V(2,2), V(3,3), V(4,4). Everything else remains zero. This "transmission line" behavior will be useful later on.
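Here is that example as runnable code, so you can watch the pulse march down the line (MATLAB or Octave):

    W = zeros(4);                         % all weights zero...
    W(2,1) = 1; W(3,2) = 1; W(4,3) = 1;   % ...except the "transmission line"
    V = zeros(4, 5);                      % 4 neurons, time slices 1 through 5
    V(1,1) = 1;                           % a single pulse on neuron 1 at time 1
    for t = 1:4
        V(:,t+1) = W * V(:,t);
    end
    disp(V)   % 1s at V(1,1), V(2,2), V(3,3), V(4,4); everything else is 0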
Input units are clamped to their source, so if V(1,t) were an input unit's value at time t then we wouldn't want V(1,t+1) calculated from the weight matrix. We could keep a separate matrix for the weights from the input cells; that would probably be more efficient. For my purposes here I'll leave the weight matrix alone and change the matrix-vector multiply. If V(1,:) represents an input cell then rather than calculating V(:,t+1) = W*V(:,t), the calculation would be V(2:end,t+1) = W(2:end,:)*V(:,t). If V(1,:) represents an input cell and V(4,:) represents an output cell then the calculation would be V(2:end,t+1) = W(2:end,1:end-1)*V(1:end-1,t).
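A sketch of that clamped variant, with neuron 1 as the input cell and neuron 4 as the output cell; clamping the input to a constant 1 is just my example choice:

    nSteps = 4;
    W = zeros(4);
    W(2,1) = 1; W(3,2) = 1; W(4,3) = 1;
    V = zeros(4, nSteps + 1);
    V(1,:) = 1;          % clamp the input row to its source for all time
    for t = 1:nSteps
        % row 1 is never overwritten; the output row's value never feeds back
        V(2:end,t+1) = W(2:end,1:end-1) * V(1:end-1,t);
    end
    disp(V)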
So there you have it. This is the basic layout for a simple artificial neural net. The weights can be any numbers you want, including imaginary numbers. If you have many input or output neurons then you probably want to split the weight matrix into two matrices: one for the input neurons and another for the internal neurons. Adjust the formulas as needed.
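For instance, that split might look like the sketch below. The names Win and Wrec and all the sizes are my own placeholders, and the update formula is my reading of the split, not something spelled out above:

    nIn = 2; nInternal = 3; nSteps = 4;
    Win  = rand(nInternal, nIn);       % weights from input neurons to internal ones
    Wrec = rand(nInternal);            % weights among the internal neurons
    U = ones(nIn, nSteps + 1);         % trace of the clamped input neurons
    V = zeros(nInternal, nSteps + 1);  % trace of the internal neurons
    for t = 1:nSteps
        V(:,t+1) = Win * U(:,t) + Wrec * V(:,t);
    end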
You may wonder why this describes an artificial neural net. I'll leave that until next time.