- Machine Learning (ML)
- Deep Machine Learning (DML)
- Artificial Intelligence (AI)
Master Machine Learning Algorithms
Machine Learning Mastery with Python
Deep Learning With Python
From First Model, To State-of-the-Art Results
Deep Learning (MIT Press) – book by top deep learning scientists
Ian Goodfellow, Yoshua Bengio, and Aaron Courville;
includes coverage of all the main algorithms.
The Elements of Statistical Learning: Data Mining, Inference, and Prediction,
(Springer Series in Statistics) 2nd ed. 7th printing 2013
by Trevor Hastie, Robert Tibshirani, Jerome Friedman
Deep Learning: A Practitioner’s Approach 1st Edition
by Adam Gibson, Josh Patterson
Fundamentals of Deep Learning: Designing Next-Generation
Machine Intelligence Algorithms 1st Edition
by Nikhil Buduma
Java Deep Learning Essentials
Books by Timothy Masters:
Deep Belief Nets in C++ and CUDA C: Volume 1:
Restricted Boltzmann Machines and Supervised Feedforward Networks
Deep Belief Nets in C++ and CUDA C: Volume II:
Autoencoding in the Complex Domain
Artificial Intelligence for Humans, Volume 1: Fundamental Algorithms
Artificial Intelligence for Humans, Volume 2: Nature-Inspired Algorithms
Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks
Deep Learning Made Easy with R: A Gentle Introduction for Data Science – by N.D. Lewis
Neural Networks and Deep Learning
Grokking Deep Learning
Several books on TensorFlow:
Machine Learning with TensorFlow
TensorFlow Machine Learning Cookbook
Getting Started with TensorFlow
Hands-On Machine Learning with Scikit-Learn and TensorFlow:
Concepts, Tools, and Techniques for Building Intelligent Systems
Deep learning libraries are written mainly in C++; Rust is an emerging alternative for DML libraries (see Leaf below).
Rust was created as a personal project by Mozilla employee Graydon Hoare in 2006.
The first “production-ready” stable release, v1.0, shipped in May 2015.
Rust is a compiled language in the same niche as C/C++, but designed to avoid many of their pitfalls:
it is memory safe and forbids null pointers and dangling pointers.
Rust is also designed for concurrency (parallelism) and speed.
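A quick illustration of the “no null pointers” point: Rust expresses possibly-absent values with `Option<T>`, and the compiler forces callers to handle the `None` case before using the value. A minimal sketch (`find_first_even` is a made-up helper, not a standard function):

```rust
// Rust has no null: a value that may be absent is an Option<T>,
// and the compiler will not let you use it without checking.

fn find_first_even(xs: &[i32]) -> Option<i32> {
    // Iterator::find returns Some(first match) or None.
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    match find_first_even(&[1, 3, 4, 7]) {
        Some(n) => println!("found {}", n),
        None => println!("no even number"),
    }
    // An empty result is an explicit None, not a null that could
    // be accidentally dereferenced.
    assert_eq!(find_first_even(&[1, 3]), None);
}
```

The same idea (make absence a type the compiler checks) also underlies Rust's `Result<T, E>` error handling.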
Rust has no classes, and there are no plans to add them
(classes were dropped from the language early in its design, along with the garbage collector).
Rust instead uses structs, traits, and impl blocks to achieve similar results.
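A minimal sketch of how structs, traits, and impl blocks stand in for classes (the `Shape`/`Rectangle`/`Circle` names are invented for illustration):

```rust
// Data lives in structs, shared behavior in a trait, and impl blocks
// tie the two together, roughly where class methods would go.

trait Shape {
    fn area(&self) -> f64;
    // A default method: the trait can supply shared behavior,
    // a bit like a base class.
    fn describe(&self) -> String {
        format!("shape with area {:.2}", self.area())
    }
}

struct Rectangle { width: f64, height: f64 }
struct Circle { radius: f64 }

impl Shape for Rectangle {
    fn area(&self) -> f64 { self.width * self.height }
}

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

fn main() {
    // Trait objects (dyn Shape) give dynamic dispatch,
    // comparable to virtual methods in C++.
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Rectangle { width: 3.0, height: 2.0 }),
        Box::new(Circle { radius: 1.0 }),
    ];
    for s in &shapes {
        println!("{}", s.describe());
    }
}
```

Traits also support generics with static dispatch (`fn f<T: Shape>(s: &T)`), so the class-like flexibility does not have to cost a virtual call.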
– https://www.rust-lang.org/en-US/documentation.html –
– http://rustbyexample.com/ –
– https://doc.rust-lang.org/book/ – https://doc.rust-lang.org/stable/book/ –
– https://github.com/ctjhoa/rust-learning –
Programming Rust – by Jim Blandy, Jason Orendorff (2016)
– https://www.amazon.com/Programming-Rust-Fast-Systems-Development/dp/1491927283/ –
Leaf – an open-source machine learning framework written in Rust
– https://github.com/autumnai/leaf –
– Deep Learning SIMPLIFIED – series of 30+ YouTube videos
NIPS – Neural Information Processing Systems Foundation, Inc.:
– https://nips.cc/Conferences/2016/Schedule?type=Workshop –
ODSC – Open Data Science Conference
– https://www.odsc.com/boston –
– https://www.kaggle.com – predictive modelling and analytics competitions
– DataScienceWeekly.org –
– http://insightdatascience.com – 7 weeks, zero cost
Deep Learning – https://en.wikipedia.org/wiki/Deep_learning –
Alan Turing’s 1950 paper – proposed a test for machine intelligence (often invoked for AGI, artificial general intelligence): in a 5-minute text exchange, the machine must convincingly pretend to be a human.
Frank Rosenblatt (Cornell psychologist, late 1950s) – the Perceptron, an artificial neural network.
The simplest description of a neural network is that it’s a machine
that makes classifications or predictions based on its ability to discover patterns in data.
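That one-sentence description maps directly onto Rosenblatt’s Perceptron: a single artificial neuron that adjusts its weights until it classifies the training data correctly. A toy sketch in Rust (all names invented for illustration), learning logical AND:

```rust
// Minimal perceptron: weights[0] is the bias; output is a hard threshold.

fn predict(w: &[f64; 3], x: &[f64; 2]) -> f64 {
    let sum = w[0] + w[1] * x[0] + w[2] * x[1];
    if sum >= 0.0 { 1.0 } else { 0.0 }
}

// Rosenblatt's learning rule: nudge the weights toward each
// misclassified example, by the learning rate `lr`.
fn train(data: &[([f64; 2], f64)], epochs: usize, lr: f64) -> [f64; 3] {
    let mut w = [0.0; 3];
    for _ in 0..epochs {
        for (x, target) in data {
            let err = target - predict(&w, x);
            w[0] += lr * err;
            w[1] += lr * err * x[0];
            w[2] += lr * err * x[1];
        }
    }
    w
}

fn main() {
    // Logical AND is linearly separable, so a perceptron can learn it.
    let data = [
        ([0.0, 0.0], 0.0),
        ([0.0, 1.0], 0.0),
        ([1.0, 0.0], 0.0),
        ([1.0, 1.0], 1.0),
    ];
    let w = train(&data, 20, 0.1);
    for (x, target) in &data {
        assert_eq!(predict(&w, x), *target);
    }
    println!("perceptron learned AND: weights = {:?}", w);
}
```

A single perceptron can only draw one straight decision boundary (it famously cannot learn XOR); deep networks stack many such units with nonlinearities to discover richer patterns.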
Quoc Le et al. – the “cat paper” (learning to recognize cats from unlabeled data)
– http://www.forbes.com/sites/aarontilley/2016/11/30/nvidia-deep-learning-ai-intel/#51b3ede839cc –
– http://www.nvidia.com/object/deep-learning-system.html –
NVIDIA DGX-1 – purpose-built system for deep learning and AI, roughly equivalent to 250 servers, priced around $130K
TPUs = tensor processing units:
– Google TPU chips – https://www.google.com/amp/www.recode.net/platform/amp/2016/5/20/11719392/google-ai-chip-tpu-questions-answers –
– Intel Nervana Chip coming in 2017 – http://www.globalfuturist.org/2016/12/intels-latest-nervana-chip-shows-the-giant-is-serious-about-owning-the-ai-market/ – http://www.nextbigfuture.com/2016/11/intel-will-deliver-100x-increase-in.html
|Andrew Ng||Google Brain; chief scientist at Baidu Research in Silicon Valley; cofounder of Coursera|
– http://www.mlyearning.org –
|Yann LeCun||convolutional neural nets (CNNs), 1998; optical character recognition and computer vision (in the late 1990s his systems were processing up to 20% of all checks in the USA); founding director of NYU-CDS (NYU Center for Data Science); director of Facebook AI Research in New York City; co-founder of the International Conference on Learning Representations (ICLR)|
|Geoff Hinton||“Father of Deep Learning”; Univ. of Toronto & Google; Restricted Boltzmann Machines (RBMs); Deep Belief Networks (DBNs) – stacked RBMs|
|Sepp Hochreiter and Jürgen Schmidhuber||recurrent neural nets (RNNs) and the Long Short-Term Memory (LSTM) architecture (1997) – good for time series, handwriting recognition, and speech recognition – https://en.wikipedia.org/wiki/Long_short-term_memory|
|Demis Hassabis, David Silver, Nando De Freitas||researchers at DeepMind, combining convnets with reinforcement learning|
|Quoc Le, Oriol Vinyals, Ilya Sutskever||Google Research|
|Tomas Mikolov||Word2vec (developed at Google); later at Facebook AI Research|
|Ryan Kiros||Attention Models and Skip-Thought vectors|
|Alexey Ivakhnenko||published the first general learning algorithms for deep networks (e.g., Ivakhnenko and Lapa, 1965); designed the first deep-learning net with 8 layers|
Google Brain – department in Google since 2011
Jeff Dean – head of Google Brain
Mike Schuster, Yonghui Wu, Zhifeng Chen
Macduff Hughes – director of Google Translate
Michael Tetelman – https://www.linkedin.com/in/michaeltetelman, https://plus.google.com/+MichaelTetelman
- University of Toronto, Canada
- Google, DeepMind – Google’s AlphaGo AI beat Lee Sedol, the board game Go champion (March 2016)
- IBM – SystemML
- Elon Musk et al – OpenAI