Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with a Dynamic, Mutation-aware Dataflow Dependency Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more
| Name | Latest commit message | Commit date |
| --- | --- | --- |
| .github | Update PR & Issue Template (#8555) | Nov 10, 2017 |
| R-package | [Merge into v1.0.0 ONLY][Copy of PR #8704] Prep1.0: bump the version … | Nov 22, 2017 |
| amalgamation | fix random generator: do not gen seed each time (#9119) | Dec 29, 2017 |
| benchmark/python/sparse | LibsvmIter Doc Updates (#8111) | Oct 1, 2017 |
| cmake | License fixes (#8873) | Nov 30, 2017 |
| cpp-package | [cpp-package]Update readme#8655 (#8746) | Nov 21, 2017 |
| cub @ 05eb57f | Update cub for CUDA 9 (#7270) | Aug 1, 2017 |
| dlpack @ a6e09b5 | Change Interface of NDArray & TBlob for DLPack Compatible (#6345) | May 30, 2017 |
| dmlc-core @ 87b7ffa | multi processing and fork fix (#8677) | Nov 16, 2017 |
| docker | License fixes (#8873) | Nov 30, 2017 |
| docker_multiarch | Multiplatform docker based builds (#7792) | Oct 13, 2017 |
| docs | Remove torch support (#9072) | Dec 14, 2017 |
| example | Remove torch support (#9072) | Dec 14, 2017 |
| include/mxnet | Fix custom op multi-gpu scaling (#9283) | Jan 5, 2018 |
| make | Remove torch support (#9072) | Dec 14, 2017 |
| matlab | License fixes (#8873) | Nov 30, 2017 |
| mshadow @ 3d87ed2 | Fix float16 min and max (#9149) | Dec 22, 2017 |
| nnvm @ e4a138a | License fixes (#8873) | Nov 30, 2017 |
| perl-package | fix random generator: do not gen seed each time (#9119) | Dec 29, 2017 |
| plugin | [ImageIO] Fix image io for opencv3.3 (#8757) | Dec 14, 2017 |
| ps-lite @ 2ce8b9a | Updating ps-lite submodule (#8769) | Nov 22, 2017 |
| python | fix path separator (#9352) | Jan 13, 2018 |
| scala-package | License fixes (#8873) | Nov 30, 2017 |
| setup-utils | License fixes (#8873) | Nov 30, 2017 |
| src | Fix custom op multi-gpu scaling (#9283) | Jan 5, 2018 |
| tests | Fix nadam (#9127) | Jan 13, 2018 |
| tools | License fixes (#8873) | Nov 30, 2017 |
| .gitattributes | [R] To ignore R-pkg when releasing on github (#7007) | Jul 13, 2017 |
| .gitignore | bump up version (#8488) | Nov 2, 2017 |
| .gitmodules | update cub url (#6625) | Jun 9, 2017 |
| .travis.yml | Add h5py support to NDArrayIter (#6790) | Jul 18, 2017 |
| CMakeLists.txt | fix lint with cmake (#8752) | Nov 21, 2017 |
| CODEOWNERS | Updating code owners (#8128) | Oct 3, 2017 |
| CONTRIBUTORS.md | Fix __repr__ for gluon.Parameter (#8956) | Dec 14, 2017 |
| DISCLAIMER | Add DISCLAIMER and lxn2 GPG keys (#7344) | Aug 5, 2017 |
| Jenkinsfile | [EXPERIMENT] increasing timeout to 24hrs. (#8613) | Nov 13, 2017 |
| KEYS | add code signing key (#8743) | Nov 22, 2017 |
| LICENSE | V1.0.0.rc1 (#8896) | Nov 30, 2017 |
| MKL_README.md | MKL compile update to remove full mkl pack dependency for blas=mkl (#… | Feb 16, 2017 |
| Makefile | Make make lint compatible with python3 (don't call python2 explicitly) ( | Nov 22, 2017 |
| NEWS.md | [v1.0.0branch only] Final Changes for 1.0- NEWS.d and README.md (#8781) | Nov 23, 2017 |
| NOTICE | Issue #7748: Update the Copyright years in NOTICE file (#8046) | Sep 26, 2017 |
| README.md | [v1.0.0branch only] Final Changes for 1.0- NEWS.d and README.md (#8781) | Nov 23, 2017 |
| appveyor.yml | Add BLAS3 and LAPACK routines (#6538) | Jun 13, 2017 |
| prepare_mkl.sh | upgrade MKL (#8378) | Oct 26, 2017 |
| readthedocs.yml | [docs] add favicon and fix index html title | Mar 25, 2016 |
| snap.python | Add snapcraft packaging (#4852) | Mar 23, 2017 |
| snapcraft.yaml | [Merge into v1.0.0 ONLY][Copy of PR #8704] Prep1.0: bump the version … | Nov 22, 2017 |

README.md

Apache MXNet (incubating) for Deep Learning

Build Status Documentation Status GitHub license


Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
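
As a minimal illustration of how the two styles coexist in the Python package (a sketch only; the variable names and shapes here are arbitrary, not taken from any tutorial in this repository):

```python
import mxnet as mx

# Imperative: NDArray operations execute eagerly, NumPy-style;
# the dependency engine still parallelizes them behind the scenes.
a = mx.nd.ones((2, 3))
b = a * 2 + 1
print(b.asnumpy())

# Symbolic: declare a graph first, then bind it to memory and run it,
# letting the graph optimization layer plan the execution.
x = mx.sym.Variable('x')
y = x * 2 + 1
ex = y.simple_bind(ctx=mx.cpu(), x=(2, 3))   # allocate buffers for the graph
out = ex.forward(x=mx.nd.ones((2, 3)))       # execute the planned graph
print(out[0].asnumpy())
```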

MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Join the chat at https://gitter.im/dmlc/mxnet

What's New

Contents

Features

  • Design notes providing useful insights that can be re-used by other DL projects
  • Flexible configuration for arbitrary computation graphs
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory efficient and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism (see the sketch after this list)
  • Support for Python, R, Scala, C++ and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure

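To illustrate the multi-GPU point above, here is a small sketch using the Module API with synthetic data; the two-GPU context list, layer sizes, and data shapes are illustrative assumptions, not a recommended configuration:

```python
import numpy as np
import mxnet as mx

# A tiny symbolic network; layer sizes are illustrative only.
data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=data, num_hidden=64, name='fc1')
net = mx.sym.Activation(data=net, act_type='relu')
net = mx.sym.FullyConnected(data=net, num_hidden=10, name='fc2')
net = mx.sym.SoftmaxOutput(data=net, name='softmax')

# Synthetic data, purely for demonstration.
train_iter = mx.io.NDArrayIter(
    data=mx.nd.array(np.random.uniform(size=(1000, 128))),
    label=mx.nd.array(np.random.randint(0, 10, size=1000)),
    batch_size=50, shuffle=True)

# Data-parallel training: listing several contexts splits each batch
# across the devices automatically. This assumes two GPUs are available;
# use [mx.cpu()] instead on a CPU-only machine.
mod = mx.mod.Module(symbol=net, context=[mx.gpu(0), mx.gpu(1)])
mod.fit(train_iter, num_epoch=2, optimizer='sgd',
        optimizer_params={'learning_rate': 0.1})
```
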
Ask Questions

  • Please use mxnet/issues for questions about how to use MXNet and for reporting bugs

License

Licensed under an Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015

History

MXNet emerged from a collaboration among the authors of cxxnet, minerva, and purine2, and reflects what we learned from those earlier projects. It combines aspects of each to achieve flexibility, speed, and memory efficiency.