Useful Function Approximations for Neural Nets and ML


In machine learning, training neural networks often relies on special math functions such as exp, ln, log, and pow10. These functions are called at high frequency and can dominate overall performance, since they are computationally expensive and rarely implemented directly in processor hardware.

Approximations are generally faster and can be precise enough to be useful; a few functions that might help you are presented here.

fast random, returns a value in [-1.0, 1.0):

float randf(int& seed) //based on Quilez's rand; seed MUST be != 0
{
     seed *= 16807;                                             //Lehmer-style multiplicative step
     unsigned int ir = ((unsigned int)seed >> 9) | 0x40000000;  //bits of a float in [2.0, 4.0)
     return ((float&)ir) - 3.0f;                                //shift into [-1.0, 1.0)
}
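
A minimal usage sketch (main, the seed value, and the loop count are illustrative choices): seed once with any nonzero integer, then call repeatedly.

#include <cstdio>

int main()
{
     int seed = 12345; //any nonzero value; a zero seed would stay stuck at zero
     for(int i=0; i < 4; ++i)
          printf("%f\n", randf(seed)); //pseudo-random values in [-1.0, 1.0)
     return 0;
}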

fast GAUSSIAN random (sums three uniform samples, approximately normal by the central limit theorem), returns a value in [-3.0, 3.0]:

float gaussRandf(int& seed) //uses randf above; multiply the result by ~0.33 to rescale into [-1.0, 1.0]
{
     float sum = 0.0f;
     for(int i=0; i < 3; ++i) sum += randf(seed); //sum of uniforms is approximately Gaussian
     return sum;
}
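
Because each uniform sample on [-1.0, 1.0) has variance 1/3, the sum of three already has variance close to 1, so this behaves like a unit-variance normal clipped to [-3, 3]. A sketch of how it might be used for weight initialization, where initWeights and the 0.125f scale are my own illustrative choices:

void initWeights(float* w, int n, int& seed) //hypothetical helper
{
     for(int i=0; i < n; ++i)
          w[i] = gaussRandf(seed) * 0.125f; //roughly N(0, 0.125^2) starting weights
}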

fast exp:

double fast_exp(double val) //from Ankerl; renamed so it does not collide with exp() in <math.h>
{
     int64_t tmp = (int64_t)(1512775 * val + 1072632447) << 32; //needs <cstdint>; builds the high word of the result
     return *(double*)&tmp; //reinterpret the bit pattern as a double
}
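
A quick way to sanity-check the approximation is to compare it against the library exp over a small range; a minimal sketch (the range and step are arbitrary choices):

#include <cstdio>
#include <cmath>

int main()
{
     for(double x = -1.0; x <= 1.0; x += 0.5)
          printf("x=% .2f  approx=% .6f  libm=% .6f\n", x, fast_exp(x), exp(x));
     return 0;
}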

fast ln, input in [0.0, 2.0]:

float ln(float val) //rational (Pade-style) approximation of ln around val = 1
{
     float v = val - 1.0f;
     return v*(6.0f + 0.7662f*v) / (5.9897f + 3.7658f*v);
}
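
The [0.0, 2.0] restriction can be lifted with standard range reduction: write x = m * 2^e with m in [0.5, 1.0) via frexpf, then ln(x) = ln(m) + e * ln(2). A sketch along those lines, where the wrapper name ln_full is my own:

#include <math.h> //for frexpf

float ln_full(float x) //hypothetical wrapper for any x > 0
{
     int e;
     float m = frexpf(x, &e);              //x = m * 2^e, with m in [0.5, 1.0)
     return ln(m) + (float)e * 0.6931472f; //0.6931472f ~= ln(2)
}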

fast pow10 (10 raised to the power x):

float i_as_f(int i) //reinterpret the bits of an int as a float
{
     return *(float*)(&i);
}

float pow10e(float f) //Schraudolph-style bit trick for base 10
{
     //27866352.6 ~= 2^23 * log2(10); 1064866808 sets the exponent bias with a small error-balancing offset
     f = i_as_f((int)(f * 27866352.6f + 1064866808.0f));
     return f;
}
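
Since e^x = 10^(x * log10(e)), the same bit trick can stand in for a single-precision exp, which in turn gives a cheap sigmoid for activations; a sketch, where fast_expf and fast_sigmoid are my own hypothetical names and 0.4342945f approximates log10(e):

float fast_expf(float x) //hypothetical helper: e^x = 10^(x * log10(e))
{
     return pow10e(x * 0.4342945f);
}

float fast_sigmoid(float x) //hypothetical helper: logistic activation 1 / (1 + e^-x)
{
     return 1.0f / (1.0f + fast_expf(-x));
}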

fast bias, input in [0.0,1.0], output in [0.0,1.0]:

float biasf(float val, float shape) //Schlick bias curve; shape = 0.5 is the identity
{
     return val / ( (1.0f/shape - 2.0f) * (1.0f-val) + 1.0f );
}

fast gain, input in [0.0,1.0], output in [0.0,1.0]:

float gainf(float val, float shape) //Schlick gain curve built from biasf
{
     if(val < .5f){
          return biasf( 2.0f * val, shape ) * 0.5f;
     }
     return ( biasf( 2.0f * val - 1.0f, 1.0f - shape ) + 1.0f ) * 0.5f;
}
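
Both curves are handy for reshaping values that already live in [0, 1], such as normalized activations or blend weights. A short tabulation sketch (the 0.25f shape value is an arbitrary illustration):

#include <cstdio>

int main()
{
     for(float t = 0.0f; t <= 1.0f; t += 0.25f)
          printf("t=%.2f  bias=%.3f  gain=%.3f\n", t, biasf(t, 0.25f), gainf(t, 0.25f));
     return 0;
}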

That’s it for now. I may add more later.

This is an original article; please credit the source when reposting!
