
Strassen algorithm for matrix multiplication https://en.wikipedia.org/wiki/Strassen_algorithm
FFT convolution https://dsp.stackexchange.com/a/63211
Winograd convolution https://www.cv-foundation.org/openaccess/content_cvpr_2016/p...
And of course optimization algorithms themselves.
It also feels incredibly snarky to say "they knew about caching" and that they will get to partial evaluation and dead code elimination, when those seem to be particularly useless (beyond what the CUDA compiler itself does) when it comes to writing GPU kernels or doing machine learning in general.
You can't do any partial evaluation of a neural network because the activation functions interrupt the multiplication of tensors. If you remove the activation function, you end up with two linear layers that are equivalent to one linear layer, defeating the point of the idea. You could have trained a network with a single layer instead and achieved the same accuracy with a correspondingly shorter training/inference time.
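The collapse argument can be checked numerically in a few lines (a sketch with arbitrary random weights, nothing network-specific assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.standard_normal((4, 8))  # first "linear layer"
W2 = rng.standard_normal((3, 4))  # second "linear layer"
x = rng.standard_normal(8)

# Without a nonlinearity, two layers fold into a single matrix:
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# A nonlinearity like ReLU between them blocks that folding:
relu = lambda v: np.maximum(v, 0.0)
assert not np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x)
```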
Dead code elimination is even more useless, since most kernels are special-purpose to begin with and you can't remove tensors without altering the architecture. Instead of adding useless tensors only to remove them, you could have simply used a better architecture.
I'm not enough of an expert to see whether there's any actual merit to this idea, or whether skipping the evaluation of huge parts of the network (and keeping track of which parts to skip) is actually worth it, but it intuitively makes sense to me that making an omelette has nothing to do with the Battle of Hastings, so when making a query about the former, the neurons encoding the latter might not affect the output.
Afaik, there's already research into finding which network weights encode which concepts.
MoE is a somewhat cruder version of this technique.
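The mixture-of-experts idea mentioned above can be sketched in a few lines: a cheap gate scores all experts, but only the top-k expert matrices are ever multiplied with the input, so most of the network's weights are skipped per query. (This is a simplified single-token toy, the name `moe_forward` and all shapes are assumptions made here.)

```python
import numpy as np

def moe_forward(x, experts, gate_W, k=2):
    """Sparse mixture-of-experts forward pass for a single input vector.

    `experts` is a list of weight matrices; only the k highest-scoring
    ones are evaluated, the rest are never touched."""
    scores = gate_W @ x                    # one gating score per expert
    top = np.argsort(scores)[-k:]          # indices of the k best experts
    # softmax over just the selected scores
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # only k of len(experts) matrix multiplies actually happen
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

# tiny usage example: 4 experts, but each query evaluates only 2
rng = np.random.default_rng(0)
experts = [rng.standard_normal((2, 8)) for _ in range(4)]
gate_W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
y = moe_forward(x, experts, gate_W, k=2)
```

Real MoE layers route per token inside a transformer block and add load-balancing losses, but the compute-skipping mechanism is the same.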
What other optimizations are there that can be used than what explicitly falls into the 4 categories that the top commenter here listed out?
There is more to it: the four categories mentioned are not the only ones, and they are not even broad categories.
If somebody likes broad categories, here is a good one: "1s and 0s". You can compute anything you want with those, so there you go, a single category for everything. Is it meaningful? Not really.
In this case, are you strictly fusing predefined kernels, or are you also optimizing them? Is this complementary to your earlier work on search-based compilers?
In other words, very complementary to our search-based approach.