In a hidden layer, the activation function decides what the neural network computes. Is it possible for an AI to generate an activation function for itself so it can improve upon itself?

  • HoppsM
    1 year ago

    Based on my research, there is an emerging interest in the field of meta-learning, or “learning to learn.” Some researchers are exploring the concept of allowing neural networks to learn their own hyperparameters, which could include parameters of activation functions. However, it’s my understanding that this approach could lead to more complex training processes and risks such as unstable gradients, and it might not always result in significantly better performance.
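
    To make that concrete, here's a minimal PyTorch sketch (my own illustration, not tied to any particular paper) of what it looks like when a parameter of the activation function is learned alongside the network weights. The `LearnableSwish` name and the choice of x · sigmoid(beta · x) are just for demonstration:

    ```python
    import torch
    import torch.nn as nn

    class LearnableSwish(nn.Module):
        """Swish-style activation x * sigmoid(beta * x) where beta is a
        trainable parameter rather than a fixed hyperparameter."""
        def __init__(self, init_beta: float = 1.0):
            super().__init__()
            self.beta = nn.Parameter(torch.tensor(init_beta))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x * torch.sigmoid(self.beta * x)

    model = nn.Sequential(nn.Linear(8, 16), LearnableSwish(), nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)  # beta is in parameters() too

    x, y = torch.randn(32, 8), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()  # one step updates the weights *and* the activation's beta
    ```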

    While activation functions with learnable parameters aren’t commonly used, there is ongoing research that explores them. One such example is the Parametric ReLU (PReLU) function - a variant of the ReLU activation function that allows the negative slope to be learned during training, as opposed to being a predetermined hyperparameter.
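
    For reference, PyTorch ships PReLU as a built-in module, so trying it is a one-line change; this small example just shows that the negative slope appears as an ordinary trainable parameter:

    ```python
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(16, 32),
        nn.PReLU(),   # negative slope starts at 0.25 and is learned during training
        nn.Linear(32, 1),
    )

    # The slope is listed alongside the layer weights and biases:
    for name, p in model.named_parameters():
        print(name, tuple(p.shape))
    ```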

    In my opinion, if you’re new to this field, it’s essential to grasp the basics of neural networks first, including how common activation functions like ReLU, sigmoid, and tanh operate. These advanced concepts are undoubtedly fascinating and might offer incremental improvements, but most of today’s state-of-the-art models still rely primarily on these “standard” activation functions. So, starting with a solid foundation is key.
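
    If it helps, the standard functions mentioned above are all one-liners; a quick way to build intuition is to evaluate them on a range of inputs and compare the outputs:

    ```python
    import torch

    x = torch.linspace(-3, 3, 7)
    print(torch.relu(x))     # max(0, x): zero for negative inputs, identity otherwise
    print(torch.sigmoid(x))  # 1 / (1 + exp(-x)): squashes inputs into (0, 1)
    print(torch.tanh(x))     # squashes inputs into (-1, 1), zero-centered
    ```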