Distributed (Deep) Machine Learning Community
Repositories
xgboost
Scalable, portable, and distributed gradient boosting (GBDT, GBRT, or GBM) library for Python, R, Java, Scala, C++, and more. Runs on a single machine, Hadoop, Spark, Flink, and DataFlow.
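As a quick orientation, here is a minimal sketch of training a booster through xgboost's Python API; the toy data and parameter values are placeholders, not recommendations:

```python
import numpy as np
import xgboost as xgb

# Toy regression data, for illustration only.
X = np.random.rand(100, 4)
y = X @ np.array([1.0, -2.0, 0.5, 3.0])

# DMatrix is xgboost's internal data container.
dtrain = xgb.DMatrix(X, label=y)
params = {"max_depth": 3, "eta": 0.1, "objective": "reg:squarederror"}

# Train 50 boosting rounds, then predict on the training set.
bst = xgb.train(params, dtrain, num_boost_round=50)
preds = bst.predict(dtrain)
```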
dgl
Python package built to ease deep learning on graphs, on top of existing DL frameworks.
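A minimal sketch of constructing a graph and attaching node features, assuming the PyTorch backend:

```python
import torch
import dgl

# Edges 0->1 and 1->2; the node count (3) is inferred from the ids.
g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2])))

# Attach a 16-dimensional feature vector to each node.
g.ndata["feat"] = torch.randn(3, 16)
print(g.num_nodes(), g.num_edges())  # 3 2
```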
gluon-nlp
NLP made easy
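A minimal sketch of the vocabulary-building workflow, assuming the 0.x gluonnlp API; the sample text is a placeholder:

```python
import gluonnlp as nlp

text = "hello world hello gluon"
counter = nlp.data.count_tokens(text.split())  # token frequency counter
vocab = nlp.Vocab(counter)                     # map tokens to integer ids
print(vocab["hello"], vocab["gluon"])
```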
gluon-cv
Gluon CV Toolkit
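A minimal sketch of pulling a pretrained classifier from the model zoo, assuming MXNet is installed; the model name and input shape are illustrative:

```python
import mxnet as mx
from gluoncv import model_zoo

# Download pretrained ImageNet weights for ResNet-50.
net = model_zoo.get_model("resnet50_v1", pretrained=True)

# Dummy batch: 1 image, 3 channels, 224x224.
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
logits = net(x)  # class scores over the 1000 ImageNet classes
```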
rabit
Reliable Allreduce and Broadcast Interface for distributed machine learning
decord
An efficient video loader for deep learning with smart shuffling that's super easy to digest
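A minimal sketch of random frame access with decord, assuming a local file at the placeholder path video.mp4:

```python
from decord import VideoReader, cpu

vr = VideoReader("video.mp4", ctx=cpu(0))  # placeholder path

print(len(vr))                     # total number of frames
frame = vr[0]                      # random access to one frame (HWC NDArray)
batch = vr.get_batch([0, 10, 20])  # gather arbitrary frames in one call
```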
web-data
The repo hosting web data, including images, for documentation across dmlc projects.
ps-lite
A lightweight parameter server interface
XGBoost.jl
XGBoost Julia Package
dlpack
RFC for a common in-memory tensor structure and operator interface for deep learning systems
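dlpack itself is a C header, but its effect is easiest to see from Python. A minimal sketch of zero-copy tensor exchange between two frameworks, assuming recent PyTorch and NumPy (>= 1.22, where np.from_dlpack landed):

```python
import numpy as np
import torch

t = torch.arange(6, dtype=torch.float32)
a = np.from_dlpack(t)  # zero-copy view over the same buffer

a[0] = 42.0
print(t[0])  # tensor(42.) -- the memory is shared, not copied
```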
dmlc-core
A common building-blocks library for scalable and portable distributed machine learning.
tensorboard Archived
Standalone TensorBoard for visualization in deep learning
mshadow
Matrix Shadow: Lightweight CPU/GPU Matrix and Tensor Template Library in C++/CUDA for (Deep) Machine Learning
MXNet.jl
MXNet Julia Package - flexible and efficient deep learning in Julia
minerva Archived
Minerva: a fast and flexible tool for deep learning on multiple GPUs. It provides an ndarray programming interface, just like NumPy. Python and C++ bindings are both available, and the resulting code can run on CPU or GPU. Multi-GPU support is very easy.
keras
Forked from keras-team/keras. Deep learning library for Python. Convnets, recurrent neural networks, and more. Runs on MXNet, Theano, or TensorFlow.
mxnet.js
MXNetJS: JavaScript package for deep learning in the browser (without a server)
minpy Archived
NumPy interface with mixed backend execution
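The documented usage was a drop-in replacement for the NumPy namespace; a minimal sketch, of historical interest only since the project is archived:

```python
import minpy.numpy as np  # drop-in NumPy namespace backed by MXNet

x = np.zeros((2, 3))   # dispatched to GPU via MXNet when available
y = np.exp(x) + 1      # falls back to NumPy where MXNet has no kernel
```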
mxnet-notebooks
Notebooks for MXNet
mxnet-memonger
Sublinear memory optimization for deep learning; reduces GPU memory cost to train deeper nets

