Layer-wise Adaptive Rate Scaling (LARS) is a neural network optimizer that enables large-batch training without a significant loss in accuracy (You, Gitman, and Ginsburg 2017). A secondary goal of the technique is fast neural network training. LARS scales each layer's learning rate by a "trust ratio," the ratio of the layer's weight norm to its gradient norm, so that layers whose gradients are small relative to their weights still make progress at large batch sizes.
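The per-layer update can be sketched as follows. This is a minimal NumPy illustration of the LARS rule, not a production implementation; the hyperparameter defaults (trust coefficient, weight decay, momentum) follow common settings from the paper, and the function name and list-of-arrays interface are assumptions for the example.

```python
import numpy as np

def lars_update(weights, grads, velocities, global_lr=0.1,
                trust_coef=0.001, weight_decay=5e-4, momentum=0.9):
    """One LARS step over per-layer parameter arrays (a sketch).

    Each layer's local learning rate is scaled by the trust ratio
    ||w|| / (||g|| + weight_decay * ||w||), so the effective step
    size adapts per layer.
    """
    for w, g, v in zip(weights, grads, velocities):
        w_norm = np.linalg.norm(w)
        g_norm = np.linalg.norm(g)
        # Layer-wise trust ratio; fall back to 1.0 when norms are zero.
        denom = g_norm + weight_decay * w_norm
        local_lr = trust_coef * w_norm / denom if denom > 0 else 1.0
        # Momentum update applied to the weight-decayed gradient.
        v[:] = momentum * v + global_lr * local_lr * (g + weight_decay * w)
        w[:] = w - v
    return weights, velocities
```

Because the trust ratio is computed per layer, a global learning rate that would destabilize one layer at a large batch size is automatically damped for that layer while others keep a larger effective step.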
You, Yang, Igor Gitman, and Boris Ginsburg. 2017. “Large Batch Training of Convolutional Networks.” http://arxiv.org/abs/1708.03888v3.