# Artificial Neuron

Artificial neurons are mathematical objects that are inspired by biological neurons and are the main building block of artificial neural networks.

## Artificial Neurons Versus Biological Neurons

There are many different types of artificial neurons. Some artificial neurons aim for biological realism, while others are simply alternative computational models that happen to be loosely inspired by biological neurons.

It should be noted that there are different kinds of biological neurons, so even with the artificial neurons that aim for biological realism, it is important to keep in mind which type of biological neuron they are trying to model. Said another way, even when aiming for biological realism, you may still need different types of artificial neurons, depending on which biological neurons you are modeling and how much realism you are going for.

In many artificial neuron models, the synapses of biological neurons are modeled as "weights" -- constants that the inputs are multiplied by. As such, many artificial neuron models "boil down" the essence of a synapse to a single constant (which in most models can be positive, negative, or zero). A positive constant represents an excitatory connection and a negative constant represents an inhibitory connection. (Structures such as a neuron's vesicles tend not to be modeled in artificial neurons.)

## Two Function Models

A number of artificial neuron models can be thought of as the composition of two functions. One function models the input from the dendrites and the build-up of charge in the perikaryon. The other function models the output of that charge out the axon. This latter function is called the activation function.

### Linear Threshold Gate

Linear threshold gates are a type of artificial neuron defined by the composition of two functions. The first function, which we will name `D`, (very loosely) models the input from the dendrites and the build-up of charge in the perikaryon. The second function, which we will name `A`, (very loosely) models the output of that charge out the axon.

Let `D` be a function defined as:

$D(\bar{x}) := \bar{w} \cdot \bar{x} = (w_1, w_2, w_3, \ldots, w_n) \cdot (x_1, x_2, x_3, \ldots, x_n) = w_1 x_1 + w_2 x_2 + w_3 x_3 + \ldots + w_n x_n = \sum_{i=1}^{n} w_i x_i$

(Where the `wi` are constants, and the input has `n` dimensions.)

Now, let our activation function `A` be the function defined as:

$A(x) := \mathrm{IF}\,(x > T)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = \begin{cases} 1 & x > T \\ 0 & x \le T \end{cases}$

Note that the constant `T` in the equation above is what "decides" where the threshold is.

This function $A$ has a special name: it is known as a threshold function. (This is where the word "threshold" in the name of this artificial neuron model comes from.)

Then a linear threshold gate is defined as follows:

$\mathrm{LTG}(\bar{x}) := (A \circ D)(\bar{x}) = A(D(\bar{x}))$
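The composition above can be sketched in code. This is a minimal illustration, assuming plain Python lists for the weight and input vectors; the names `D`, `A`, and `LTG` mirror the functions defined in the text:

```python
# A minimal sketch of a linear threshold gate.
# D, A, and LTG mirror the functions defined above.

def D(w, x):
    # Dot product of the weight vector and the input vector.
    return sum(wi * xi for wi, xi in zip(w, x))

def A(value, T):
    # Threshold function: 1 if the accumulated charge exceeds T, else 0.
    return 1 if value > T else 0

def LTG(w, x, T):
    # The composition A(D(x)).
    return A(D(w, x), T)
```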

#### Example

Here's an example. Let's say:

$D(\bar{x}) := (5, -3, 8) \cdot (x_1, x_2, x_3) = 5x_1 - 3x_2 + 8x_3$

And that $A$ is defined as:

$A(x) := \mathrm{IF}\,(x > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = \begin{cases} 1 & x > 28 \\ 0 & x \le 28 \end{cases}$

Then the linear threshold gate for this example is defined by the function:

$\mathrm{LTG}(\bar{x}) := \mathrm{IF}\,(5x_1 - 3x_2 + 8x_3 > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0)$

So then, here's the output we get from various inputs:

$\mathrm{LTG}((2, 3, 4)) = \mathrm{IF}\,(5 \times 2 - 3 \times 3 + 8 \times 4 > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = \mathrm{IF}\,(33 > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = 1$

$\mathrm{LTG}((1, 1, 1)) = \mathrm{IF}\,(5 \times 1 - 3 \times 1 + 8 \times 1 > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = \mathrm{IF}\,(10 > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = 0$

$\mathrm{LTG}((0, 0, 4)) = \mathrm{IF}\,(5 \times 0 - 3 \times 0 + 8 \times 4 > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = \mathrm{IF}\,(32 > 28)\ \mathrm{THEN}\,(1)\ \mathrm{ELSE}\,(0) = 1$
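The three evaluations above can be checked numerically. This sketch hard-codes the weights (5, -3, 8) and the threshold 28 from the example's definitions of `D` and `A`:

```python
# Checks the example outputs above, using the weights (5, -3, 8)
# and threshold 28 from the example's definitions of D and A.

def LTG(x1, x2, x3):
    charge = 5 * x1 - 3 * x2 + 8 * x3   # D(x)
    return 1 if charge > 28 else 0      # A applied to the charge

print(LTG(2, 3, 4))  # 5*2 - 3*3 + 8*4 = 33 > 28, so 1
print(LTG(1, 1, 1))  # 5*1 - 3*1 + 8*1 = 10, not > 28, so 0
print(LTG(0, 0, 4))  # 5*0 - 3*0 + 8*4 = 32 > 28, so 1
```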

If this artificial neuron were used in an artificial neural network alongside other artificial neurons, the "weights" of those other neurons (and perhaps even their number of inputs) would most likely be different.

### Linear Sigmoid Gate

Linear sigmoid gates are a type of artificial neuron defined by the composition of two functions. The first function, which we will name `D`, (very loosely) models the input from the dendrites and the build-up of charge in the perikaryon. The second function, which we will name `A`, (very loosely) models the output of that charge out the axon.

Linear sigmoid gates are a lot like linear threshold gates, except that their activation function is different: instead of a threshold function (as with linear threshold gates), linear sigmoid gates use a sigmoid function.

Let `D` be a function defined as:

$D(\bar{x}) := \bar{w} \cdot \bar{x} = (w_1, w_2, w_3, \ldots, w_n) \cdot (x_1, x_2, x_3, \ldots, x_n) = w_1 x_1 + w_2 x_2 + w_3 x_3 + \ldots + w_n x_n = \sum_{i=1}^{n} w_i x_i$

(Where the `wi` are constants, and the input has `n` dimensions.)

Now, let our activation function `A` be the function defined as:

$A(x) := \frac{1}{1 + e^{-2s(x + T)}}$

Then a linear sigmoid gate is defined as the composition of these two functions:

$\mathrm{LSG}(\bar{x}) := (A \circ D)(\bar{x}) = A(D(\bar{x}))$
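As with the linear threshold gate, the composition can be sketched in code. This is a minimal illustration, assuming plain Python lists for the vectors; the steepness `s` and offset `T` correspond to the symbols in the activation function above:

```python
import math

# A minimal sketch of a linear sigmoid gate. The parameters s and T
# correspond to the symbols in the activation function above.

def D(w, x):
    # Dot product of the weight vector and the input vector.
    return sum(wi * xi for wi, xi in zip(w, x))

def A(value, s, T):
    # Sigmoid activation: 1 / (1 + e^(-2s(value + T))).
    # Output is a real number between 0 and 1, rather than exactly 0 or 1.
    return 1.0 / (1.0 + math.exp(-2.0 * s * (value + T)))

def LSG(w, x, s, T):
    # The composition A(D(x)).
    return A(D(w, x), s, T)
```

Unlike a linear threshold gate, whose output jumps from 0 to 1 at the threshold, this output varies smoothly between 0 and 1.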

TODO

TODO

## Integrate-and-Fire

An artificial neuron model that strives for more biological realism than the various two-function models is the integrate-and-fire model.

TODO

## Leaky Integrate-and-Fire

Another artificial neuron model that strives for more biological realism than the various two-function models is the leaky integrate-and-fire model.

The leaky integrate-and-fire artificial neuron model is intended as an improvement to the integrate-and-fire model, in that it tries to fix the memory problem that the integrate-and-fire model has: if an integrate-and-fire neuron receives a below-threshold signal at some time, it retains that voltage until a voltage spike finally happens. (This behavior is not how biological neurons behave.)
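The leak behavior described above can be illustrated with a minimal discrete-time sketch. The leak factor, threshold, and reset-to-zero rule here are illustrative assumptions, not taken from the text:

```python
# A minimal discrete-time sketch of a leaky integrate-and-fire neuron.
# The leak factor, threshold, and reset-to-zero rule are illustrative
# assumptions, not part of the model description above.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    voltage = 0.0
    spikes = []
    for current in inputs:
        voltage = leak * voltage + current  # charge decays each step, then integrates input
        if voltage > threshold:
            spikes.append(1)                # fire a spike...
            voltage = 0.0                   # ...and reset the charge
        else:
            spikes.append(0)
    return spikes
```

Setting `leak=1.0` degenerates this into a plain integrate-and-fire neuron, which retains below-threshold charge indefinitely -- the memory problem described above.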

TODO

-- Mirza Charles Iliya Krempeaux