Silicon Graphics released a BSD-licensed version of the STL for C++ compilers that didn't support it back in the late 1990s. These days it is useful for its implementation details, and it remains freely usable under a very liberal license.
Useful Function Approximations for Neural Nets and ML
In machine learning, the training processes for neural networks often make use of special math functions such as exp, ln, log, pow10 and many others. These functions are often called at high frequency and can dominate overall performance, since they are computationally expensive and rarely hardwired in processors.
Approximations are generally faster, and may be of sufficient precision to be useful; a few functions that might help you are presented here.
fast random, returns a value in [-1.0, 1.0]:
float randf(int& seed) //based on quilez rand; seed MUST != 0
{
    seed *= 16807; //Park-Miller LCG multiplier
    unsigned int ir = ((unsigned int)seed >> 9) | 0x40000000; //bits of a float in [2.0, 4.0)
    return ((float&)ir) - 3.0f; //shift into [-1.0, 1.0)
}
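A quick usage sketch (the variable names are purely illustrative); the seed is advanced in place, so keep one per stream of random numbers:

int seed = 12345; //any nonzero value; a seed of 0 stays 0 and returns -1.0f forever
float a = randf(seed); //a value in [-1.0, 1.0)
float b = randf(seed); //the next value in the sequence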
fast GAUSSIAN random, returns a value in [-3.0, 3.0]:
float gaussRandf(int& seed) //randf is above; multiply the result by 1/3 for [-1.0, 1.0]
{
    float sum = 0.0f;
    for(int i = 0; i < 3; ++i) sum += randf(seed); //sum of three uniform samples
    return sum;
}
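Why this works: each randf() sample is uniform with variance 1/3, so the sum of three has variance 1, and by the central limit theorem the result roughly follows a standard normal distribution, truncated to [-3.0, 3.0].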
fast exp:
double expd(double val) //from Ankerl; renamed so it doesn't clash with exp() in <cmath>
{
    long long tmp = (long long)(1512775.0 * val + 1072632447.0);
    tmp <<= 32; //the computed bits belong in the high word of the double
    return *((double*)&tmp);
}
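For the curious, the magic numbers come from Schraudolph's exponent-field trick, adapted by Ankerl to doubles: 1512775 ≈ 2^20 / ln(2) scales the argument so that, after the shift into the high 32 bits, it lands in the double's exponent field, and 1072632447 = 1023 × 2^20 − 60801, i.e. the exponent bias positioned at bit 20, minus a small correction that spreads the approximation error.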
fast ln, input in [0.0, 2.0]:
float ln(float val) //rational fit of ln(x) about x = 1
{
    float v = val - 1.0f;
    return v*(6.0f + 0.7662f*v) / (5.9897f + 3.7658f*v);
}
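Note that this is only fit over [0.0, 2.0]; the error grows toward the ends of that interval (the true log diverges at 0), and inputs outside it are simply not handled.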
fast pow10 (10 raised to the x-th power):
float i_as_f(int i) //reinterpret the bits of an int as a float
{
    return *(float*)(&i);
}
float pow10e(float f) //haken-schraudolph
{
    return i_as_f((int)(f * 27866352.6f + 1064866808.0f));
}
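This is the same exponent-field trick as the fast exp above, retargeted at single-precision floats: 27866352.6 ≈ 2^23 × log2(10) handles the change of base, and 1064866808 = 127 × 2^23 − 486408 is the float's exponent bias with a built-in error-spreading correction.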
fast bias, input in [0.0,1.0], output in [0.0,1.0]:
float biasf(float val, float shape)
{
    return val / ( (1.0f/shape - 2.0f) * (1.0f - val) + 1.0f );
}
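This is Schlick's fast alternative to Perlin's bias curve (Graphics Gems IV): shape = 0.5 returns the input unchanged, smaller values bend the curve downward, and larger values bend it upward.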
fast gain, input in [0.0,1.0], output in [0.0,1.0]:
float gainf(float val, float shape)
{
    if(val < 0.5f){
        return biasf( 2.0f * val, shape ) * 0.5f;
    }
    return ( biasf( 2.0f * val - 1.0f, 1.0f - shape ) + 1.0f ) * 0.5f;
}
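To close out, here is a minimal sanity-check sketch, assuming all of the functions above sit in the same translation unit (names as given, with expd() being the renamed fast exp); it simply prints each approximation next to its <cmath> counterpart:

#include <cstdio>
#include <cmath>

//...the approximation functions above go here...

int main()
{
    int seed = 1; //any nonzero seed
    printf("randf: %f  gaussRandf: %f\n", randf(seed), gaussRandf(seed));

    for(double x = -2.0; x <= 2.0; x += 1.0)
        printf("exp(%5.1f):   fast=%9.5f  std=%9.5f\n", x, expd(x), std::exp(x));

    for(float x = 0.25f; x <= 1.75f; x += 0.5f)
        printf("ln(%4.2f):    fast=%9.5f  std=%9.5f\n", x, ln(x), std::log(x));

    for(float x = -1.0f; x <= 1.0f; x += 0.5f)
        printf("pow10(%4.1f): fast=%9.5f  std=%9.5f\n", x, pow10e(x), std::pow(10.0f, x));

    printf("biasf(0.25, 0.7)=%f  gainf(0.25, 0.7)=%f\n",
           biasf(0.25f, 0.7f), gainf(0.25f, 0.7f));
    return 0;
}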
That’s it for now. I may add more later.
SGI MLC++ 1.3 Source
This is an older version of MLC++; MLC2.01-src.zip is also available on this site. SGI released a library to simplify AI/ML C++ programming in 1997, described in the paper "MLC++: A Machine Learning Library in C++". This source is the final version released by the original Stanford research team, and it is in the public domain. The newer versions' licenses prohibit commercial use.
SGI MLC++ Utilities
This is the utility set to accompany MLC2.01-src.zip, also available on this site. SGI released a library to simplify AI/ML programming in C++ in 1997. These utilities were intended to facilitate library integration and testing, and to create training data.
Machine Learning, C++ and You
Machine learning (ML) is a hot area of activity in the programming world these days. And, generally speaking, the ML programming language weapon of choice is Python. Practically all new ML research projects, libraries and frameworks are coded in Python or provide Python bindings exclusively. Even most pre-trained learning models provide Python-only interfaces.
This poses a challenge for C-language family programmers. One has to either craft or adapt C++ interfaces to this ML code intended for Python, or move to Python itself. Admittedly, Python is fairly straight-ahead where C-style syntax is concerned, and most C coders should be able to get up to speed easily, but they are unlikely to put up with the speed as easily.
Many novice programmers don’t realize it, but Python is an interpreted language. As such, Python code is vastly less performant than C++ code. Performance gaps on the order of 100x or even 1000x are not uncommon. This poor performance is frequently acceptable for interface code for GPU-accelerated libraries and frameworks, where the bulk of the runtime is spent in computational kernels and “programmer productivity” is of paramount concern. But in many scenarios, Python represents a significant challenge to performance and ultimately productivity, as programmers struggle to work around Python’s limitations.
That sounds bad. By now, you might be asking yourself "If all of that is true, why is Python so popular in the machine learning space?" The answer: coding neural nets and other AI models from scratch is not trivial, and the large body of available code for Python allows for "ML script kiddies": people who barely know how to program at all, and who know nothing about the concepts underlying the lowest levels of the ML machinery, are able to get results, however poor, slow and unstable, with moderate effort.
An alternative approach for C++ coders, and one I advocate, is to roll your own ML code. As mentioned before, this is not trivial, and that brings me to the purpose of this post.
While machine learning with Python is the new hotness, the old hotness, 30 years ago, was machine learning with C++. And some of the resources from back then might prove useful today to you, my fellow C++ specialists who don't love Python.
A tiny list to get going, free and license-unencumbered:
Books – available at archive.org for free check-out:
C++ neural networks and fuzzy logic (1993)
Neural network and fuzzy logic applications in C/C++ (1994)
and public domain C++ implementations of various algorithms for ML:
MLC++ – A Machine Learning Library in C++ (1997)
The implementation is from the Stanford research group which authored the paper of the same name, together with the now-defunct Silicon Graphics. The code and utilities are in the public domain, and can be downloaded from this very site.
To be continued…