Logistic function
The logistic sigmoid provides one of the best ways to normalize the weight of a given output. It will be used as the activation function of the neuron. With a simple threshold activation, the neuron outputs y=1 above the threshold value and y=0 otherwise. In this case, the activation function is more complex: the output varies continuously, with 0 as its minimum value.
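To make the contrast concrete, a hard-threshold activation could be sketched as follows; the threshold value of 0.5 is an arbitrary illustration, not a value from the model:

def step_activation(x, threshold=0.5):
    # fires y=1 above the threshold; otherwise y=0
    return 1 if x > threshold else 0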
The logistic function is represented as follows:

S(x) = 1 / (1 + e^(-x))
- e represents Euler's number, approximately 2.71828, the base of the natural logarithm.
- x is the value to be squashed into the [0, 1] range. In this case, x is y, the weighted sum of the inputs plus the bias, to which the logistic sigmoid is applied.
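As a quick sanity check, the definition can be coded directly. This is a minimal NumPy sketch of the formula above; the model itself will use TensorFlow's built-in tf.nn.sigmoid:

import numpy as np

def logistic_sigmoid(x):
    # S(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

print(logistic_sigmoid(0))     # 0.5, the midpoint of the curve
print(logistic_sigmoid(8.15))  # ~0.9997, almost fully saturated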
The code has been rearranged in the following example to show the reasoning process:
import tensorflow as tf

# For given variables:
x_1 = tf.constant([[10., 2., 1., 6., 2.]])    # the x inputs (a 1 x 5 row vector)
w_t = tf.constant([[.1, .7, .75, .60, .20]])  # the corresponding weights (1 x 5)
b_1 = tf.constant([1.])                       # the bias
# A given total weight y is calculated; w_t is transposed so the shapes match
y = tf.matmul(x_1, w_t, transpose_b=True) + b_1
# then the logistic sigmoid is applied to y, which represents the "x"
# in the formal definition of the logistic sigmoid
s = tf.nn.sigmoid(y)
Thanks to the logistic sigmoid function, the value for the first location in the model comes out as approximately 0.99 (the level of saturation of the location).
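Checking the arithmetic by hand confirms this value:

y = (10 × 0.1) + (2 × 0.7) + (1 × 0.75) + (6 × 0.60) + (2 × 0.20) + 1 = 8.15
S(8.15) = 1 / (1 + e^(−8.15)) ≈ 0.9997, which rounds to 0.99.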
To calculate the availability of the location once the 0.99 has been taken into account, we subtract the load from the total availability, which is 1, as follows:

1 − 0.99 = 0.01
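Continuing the earlier sketch, this is simply the complement of the saturation value:

availability = 1 - s  # e.g., 1 - 0.99 = 0.01 for the first location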
As seen previously, once all locations are calculated in this manner, a final availability vector, lv, is obtained.
When analyzing lv, a problem stops the process. Individually, each line appears to be fine: by applying the logistic sigmoid to each output weight and subtracting the result from 1, each location displays a probable availability between 0 and 1. However, the sum of the lines in lv exceeds 1. That is not possible; a probability cannot exceed 1. The program needs to fix that. In the source code, lv will be named y.
Each line produces a [0,1] solution, which fits the prerequisite of being a valid probability.
In this case, the vector lv contains more than one value, so it represents a distribution over multiple locations. The sum of lv cannot exceed 1 and needs to be normalized.
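To make the problem concrete, here is a sketch with hypothetical availability values for seven locations (the actual values depend on the model's inputs and weights):

lv = [0.0002, 0.2, 0.9, 0.0001, 0.4, 0.6, 0.2]  # hypothetical per-location availabilities
print(sum(lv))  # 2.3003: each value is in [0, 1], yet the total exceeds 1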
The softmax function provides an excellent method to normalize lv into a valid probability distribution. Softmax is widely used in machine learning and deep learning.
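A minimal sketch using TensorFlow's built-in tf.nn.softmax, applied to the same hypothetical values as above (assuming TensorFlow 2.x, where eager execution makes the tensors printable directly):

import tensorflow as tf

lv = tf.constant([[0.0002, 0.2, 0.9, 0.0001, 0.4, 0.6, 0.2]])
y = tf.nn.softmax(lv)    # exponentiates each value, then divides by the sum of the exponentials
print(tf.reduce_sum(y))  # 1.0: the normalized vector is now a valid distribution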
Bear in mind that these mathematical tools are not rules. You can adapt them to your problem as much as you wish as long as your solution works.