Making large AI models cheaper, faster and more accessible
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Distributed deep learning, with a focus on distributed training, using Keras and Apache Spark.
A state-of-the-art multithreading runtime: message-passing based, fast, scalable, with ultra-low overhead.
LiBai (李白): a toolbox for large-scale distributed parallel training.