Everything starts from discussing what incentivized the development of hardware, software and machine learning in isolation from each other: Moore’s law 📈 and Dennard scaling 📈 provided a predictable increase in compute and memory, and locked in profit margins for vendors 💰, making hardware design risk-averse. There was no reason to design purpose-specific hardware when it could be eclipsed by the next generation of general-purpose processors within two years. The only exceptions were either short-lived or motivated by prestige, for example mastering chess ♟️ against a human opponent.
The cost difference between exploring new kinds of hardware and exploring original machine learning algorithms has led to an exponential increase in machine learning publications, while publications in hardware research have stayed more or less constant.
This history of siloed development generated the hardware lottery 🎰: a research idea wins because it is better suited to the available hardware and software, not because it is universally superior.
Machine learning researchers treat hardware as a fixed constraint and stop exploring. Yet the choice of hardware and software has often determined the algorithmic winner: a striking example is neural networks, which languished for decades until general-purpose GPUs made training them practical!
Sara discussed why this matters now: as the field witnesses the advent of domain-specialized hardware and software, avoiding future hardware lotteries will only be possible with a deliberate effort to lower the barriers to exploring new combinations of hardware, software and algorithms.
At LightOn we are building hardware for machine learning that goes beyond silicon. If you want to try out your latest idea, you can register for a Free Trial on the LightOn Cloud or apply to the LightOn Cloud for Research Program!