The development of predictive models is a highly iterative process that is both time- and compute-intensive. By carefully optimizing the right parts of the workflow, order-of-magnitude speed-ups can be achieved, leading to more accurate models in less time. In this talk we'll touch on several ways we've been able to drastically reduce the time to train deep learning models, from high-level library choices all the way down to leveraging custom silicon.
Scott has over nine years' experience creating machine-learning-based solutions to large-scale, real-world problems. He is currently the cloud team lead at Nervana Systems, focused on providing a highly optimized deep learning platform for customers across a variety of domains. Inside of work he can often be found pushing and reviewing code. Outside of work he can often be found running long distances and quaffing local craft beer, occasionally simultaneously.