# Artificial Neuron

**Artificial neurons** are mathematical objects that are inspired by biological neurons
and are the main building block of artificial neural networks.

## Artificial Neurons Versus Biological Neurons

There are many different types of *artificial neurons*.
Some *artificial neurons* aim for biological realism,
while others simply strive for alternate computational models that happen to be loosely inspired by biological neurons.

It should be noted that there are different kinds of biological neurons, so even with the *artificial neurons* that aim for biological realism, it is important to keep in mind which type of biological neuron they are trying to model.
Said another way, even when going for biological realism, you may still need different types of *artificial neurons* to model the different biological neurons, depending on how much realism you are going for.

In many of the different *artificial neurons*, the synapses of biological neurons are modeled as "weights" -- as constants to multiply input by.
As such, many of the different *artificial neuron* models "boil down" the essence of a synapse to a single constant number (which in most models can be positive, negative, or zero).
A positive constant represents an excitatory connection and a negative constant represents an inhibitory connection.
(Things such as the vesicles of a neuron tend not to be modeled in *artificial neurons*.)
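To make the weights idea concrete, here is a minimal Python sketch (with made-up weight and input values) of how synapses-as-weights combine inputs:

```python
# Hypothetical weights: positive = excitatory, negative = inhibitory,
# zero = effectively no connection. These values are made up for illustration.
weights = [1.0, -2.0, 0.5]
inputs = [2.0, 1.0, 4.0]

# Each input is multiplied by its synapse's weight, and the results are summed.
combined = sum(w * x for w, x in zip(weights, inputs))
print(combined)  # 1.0*2.0 + (-2.0)*1.0 + 0.5*4.0 = 2.0
```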

## Two Function Models

A number of the *artificial neuron* models can be thought of as the composition of two functions.
One function models the *input* from the dendrites and the buildup of charge in the perikaryon.
The other function models the *output* of that charge out the axon.
This latter function (the one that models the *output* out the axon) is called the *activation function*.

### Linear Threshold Gate

*Linear threshold gates* are a type of *artificial neuron* defined by the composition of two functions.
The first function, which we will name `D`, (very loosely) models the *input* from the dendrites and the buildup of charge in the perikaryon.
The second function, which we will name `A`, (very loosely) models the *output* of that charge out the axon.

Let `D` be the function defined as:

$$D\left(\bar{x}\right) := \sum_{i=1}^{n} w_i x_i$$

(Where the $w_i$ are constants, and the input has $n$ dimensions.)

Now, let our *activation function* `A` be the function defined as:

$$A\left(x\right) := \begin{cases}1 & x \ge T\\ 0 & x < T\end{cases}$$

Note that the constant $T$ in the equation above is what "decides" where the *threshold* is.

This function $A$ has a special name, and is known as the *threshold function*. (Which is where the word "threshold" comes from in the name of this *artificial neuron* model.)

Then a *linear threshold gate* is defined as the composition of the two functions:

$$\mathrm{LTG}\left(\bar{x}\right) := A\left(D\left(\bar{x}\right)\right)$$

#### Example

Here's an example. Let's say:

$$D\left(\bar{x}\right) := \begin{pmatrix}5 & -3 & 8\end{pmatrix} \cdot \begin{pmatrix}x_1 & x_2 & x_3\end{pmatrix} = 5x_1 - 3x_2 + 8x_3$$

And that $A$ is defined as:

$$A\left(x\right) := \begin{cases}1 & x \ge 28\\ 0 & x < 28\end{cases}$$

Then the *linear threshold gate* for this example is defined by the function:

$$\mathrm{LTG}\left(\bar{x}\right) = A\left(D\left(\bar{x}\right)\right) = A\left(5x_1 - 3x_2 + 8x_3\right)$$

So then, here's the output we get from various inputs:

$$\mathrm{LTG}\left(2, 3, 4\right) = A\left(5\times 2 - 3\times 3 + 8\times 4\right) = A\left(33\right) = 1$$
$$\mathrm{LTG}\left(1, 1, 1\right) = A\left(5\times 1 - 3\times 1 + 8\times 1\right) = A\left(10\right) = 0$$
$$\mathrm{LTG}\left(0, 0, 4\right) = A\left(5\times 0 - 3\times 0 + 8\times 4\right) = A\left(32\right) = 1$$
If this *artificial neuron* were used in an artificial neural network with other *artificial neurons*, the "weights" on those other neurons (and perhaps even their number of inputs) would most likely be different.
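The worked example above can be sketched in code. This is a minimal Python illustration (the function name is our own, not any standard API):

```python
def linear_threshold_gate(weights, threshold, inputs):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    d = sum(w * x for w, x in zip(weights, inputs))  # the D function
    return 1 if d >= threshold else 0                # the A (threshold) function

# The example's weights (5, -3, 8) and threshold 28:
print(linear_threshold_gate([5, -3, 8], 28, [2, 3, 4]))  # 1  (weighted sum is 33)
print(linear_threshold_gate([5, -3, 8], 28, [1, 1, 1]))  # 0  (weighted sum is 10)
print(linear_threshold_gate([5, -3, 8], 28, [0, 0, 4]))  # 1  (weighted sum is 32)
```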

### Linear Sigmoid Gate

*Linear sigmoid gates* are a type of *artificial neuron* defined by the composition of two functions.
The first function, which we will name `D`, (very loosely) models the *input* from the dendrites and the buildup of charge in the perikaryon.
The second function, which we will name `A`, (very loosely) models the *output* of that charge out the axon.

*Linear sigmoid gates* are a lot like *linear threshold gates*, except that their *activation function* is different:
instead of a threshold function (like with *linear threshold gates*), *linear sigmoid gates* have a sigmoid function.

Let `D` be the function defined as:

$$D\left(\bar{x}\right) := \sum_{i=1}^{n} w_i x_i$$

(Where the $w_i$ are constants, and the input has $n$ dimensions.)

Now, let our *activation function* `A` be the (logistic) sigmoid function:

$$A\left(x\right) := \frac{1}{1 + e^{-x}}$$

Then a *linear sigmoid gate* is defined by composing the two functions above:

$$\mathrm{LSG}\left(\bar{x}\right) := A\left(D\left(\bar{x}\right)\right)$$
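Here is a minimal Python sketch of a linear sigmoid gate, assuming the standard logistic sigmoid $1/(1+e^{-x})$ as the activation function (the function name is our own):

```python
import math

def linear_sigmoid_gate(weights, inputs):
    """Weighted sum (D) passed through the logistic sigmoid (A)."""
    d = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-d))  # output is a value in (0, 1), not just 0 or 1

# Reusing the example weights from the linear threshold gate section:
out = linear_sigmoid_gate([5, -3, 8], [1, 1, 1])  # weighted sum is 10
print(out)  # very close to 1, since the sigmoid saturates for large positive sums
```

Note the difference from the threshold function: the output varies smoothly with the weighted sum instead of jumping from 0 to 1 at a threshold.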

#### Example

TODO

### Linear Hyperbolic Tangent Gate

TODO

## Integrate-and-Fire

An artificial neuron model that strives for more biological realism than the various two function models is the *integrate-and-fire* model.

TODO

## Leaky Integrate-and-Fire

Another artificial neuron model that strives for more biological realism than the various two function models is the *leaky integrate-and-fire* model.

The *leaky integrate-and-fire* artificial neuron model is intended as an improvement to the *integrate-and-fire* model: it tries to fix the memory problem that the *integrate-and-fire* model has, where if an *integrate-and-fire* neuron receives a below-threshold signal at some time, it retains that voltage indefinitely until a voltage spike finally happens. (This is not how biological neurons behave: their membrane potential "leaks" back toward a resting value over time.)
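The leak behavior described above can be sketched as a simple discrete-time simulation. This is only an illustration, not a standard formulation; the threshold, leak factor, and reset value below are made up:

```python
def simulate_lif(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Each time step: the voltage decays by the leak factor, then integrates
    the input. When the voltage reaches the threshold, the neuron fires and resets.
    All parameter values are hypothetical, chosen for illustration."""
    voltage = reset
    spikes = []
    for current in input_currents:
        voltage = leak * voltage + current  # the leak pulls voltage back toward rest
        if voltage >= threshold:
            spikes.append(1)   # fire
            voltage = reset    # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A weak input that never reaches threshold leaks away instead of being
# retained forever (unlike the plain integrate-and-fire model):
print(simulate_lif([0.5, 0.0, 0.0, 0.6]))  # [0, 0, 0, 0]
# Two inputs arriving close together can still sum past the threshold:
print(simulate_lif([0.6, 0.6]))  # [0, 1]
```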

TODO