Deep Learning State of the Art (2020)

A lecture on the most recent research and developments in deep learning, and hopes for 2020. It is not intended as a list of SOTA benchmark results, but rather as a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.

Website:
Slides:
References:
Playlist:

OUTLINE:
0:00 – Introduction
0:33 – AI in the context of human history
5:47 – Deep learning celebrations, growth, and limitations
6:35 – Deep learning early key figures
9:29 – Limitations of deep learning
11:01 – Hopes for 2020: deep learning community and research
12:50 – Deep learning frameworks: TensorFlow and PyTorch
15:11 – Deep RL frameworks
16:13 – Hopes for 2020: deep learning and deep RL frameworks
17:53 – Natural language processing
19:42 – Megatron, XLNet, ALBERT
21:21 – Write with transformer examples
24:28 – GPT-2 release strategies report
26:25 – Multi-domain dialogue
27:13 – Commonsense reasoning
28:26 – Alexa prize and open-domain conversation
33:44 – Hopes for 2020: natural language processing
35:11 – Deep RL and self-play
35:30 – OpenAI Five and Dota 2
37:04 – DeepMind Quake III Arena
39:07 – DeepMind AlphaStar
41:09 – Pluribus: six-player no-limit Texas hold'em poker
43:13 – OpenAI Rubik's Cube
44:49 – Hopes for 2020: deep RL and self-play
45:52 – Science of deep learning
46:01 – Lottery ticket hypothesis
47:29 – Disentangled representations
48:34 – Deep double descent
49:30 – Hopes for 2020: science of deep learning
50:56 – Autonomous vehicles and AI-assisted driving
51:50 – Waymo
52:42 – Tesla Autopilot
57:03 – Open question for Level 2 and Level 4 approaches
59:55 – Hopes for 2020: autonomous vehicles and AI-assisted driving
1:01:43 – Government, politics, policy
1:03:03 – Recommendation systems and policy
1:05:36 – Hopes for 2020: politics, policy, and recommendation systems
1:06:50 – Courses, tutorials, books
1:10:05 – General hopes for 2020
1:11:19 – Recipe for progress in AI
1:13:11 – Q&A: Limitations / roadblocks of deep learning
1:14:15 – Q&A: What made you interested in AI?
1:15:21 – Q&A: Will machines ever be able to think and feel?
1:18:20 – Q&A: Is RL a good candidate for achieving AGI?
1:21:31 – Q&A: Are autonomous vehicles responsive to sound?
1:22:43 – Q&A: What does the future with AGI look like?
1:25:50 – Q&A: Will AGI systems become our masters?

36 Comments

  1. This is the opening lecture on recent developments in deep learning and AI, and hopes for 2020. It’s humbling beyond words to have the opportunity to lecture at MIT and to be part of the AI community.

    1. I’m sure that if aliens were living among us every day without us noticing, then AI systems are already bagging groceries and running bulldozers among us today! Lol

  2. Thanks a lot for making these kinds of things public to everyone! I hope to get more lectures from you in the future.

    1. In my opinion, these kinds of things should be made public. Only a few can understand them, and fewer would dare to dig deeper. 💜 Who knows, maybe Gen Z were born for this 🤞

  3. Lex, I really appreciate how you have made the spirit of an elite education accessible to all of us through your channel. I feel privileged to be able to listen to your conversations while I’m driving or doing whatever else. So thank you! Can anyone point to a good tutorial on creating image datasets from scratch? Say I want to take some iPhone pics and create a dataset of whatever… What’s the proper file format? What size do the images need to be? Comma-separated values? I would really like something geared around a bash command-line workflow. Thanks, y’all!
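In practice there is no single “proper” format: most image pipelines keep images as individual JPEG/PNG files rather than CSV, organized into one subdirectory per class label. A minimal bash sketch of that folder-per-class convention (the layout expected by e.g. torchvision’s ImageFolder; all file names and labels below are hypothetical stand-ins):

```shell
# Hypothetical folder-per-class layout: the directory name is the label,
# and the images stay as JPEG/PNG files (not CSV).
mkdir -p dataset/cats dataset/dogs

# Stand-ins for iPhone photos (real iPhone HEIC shots are usually
# converted to JPEG on export).
touch cat_001.jpg cat_002.jpg dog_001.jpg

# Sort photos into class folders by filename prefix.
mv cat_*.jpg dataset/cats/
mv dog_*.jpg dataset/dogs/

# List one class folder to confirm the layout.
ls dataset/cats
```

Image size is usually handled at load time rather than on disk (224x224 is a common input resolution for ImageNet-style models), so resizing the files themselves is optional.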

  4. Thank you very much, Professor. I liked the way you presented the history. Yes, we do AI and machine learning, but we need to know where the story started. GREAT PRESENTATION.

  5. Can’t believe I’m getting such a cool lecture series for free here in India. I was so hungry for education like this as an engineering student, so thank you, Lex sir.

  6. It is great to see this content advocating for collaboration and a breaking down of the walls between disciplines, and I hope the world becomes more open in AI. As an old Gen Xer at the end of my systems career, I am hopeful that new generations will continue to reach for innovations in areas like cognitive AI, to help save humanity from itself and lead us to become an intelligent interstellar species. Thanks for inspiring so many.

  7. Thanks for sharing this session! Just waiting for the day when we have a culture of such tech talks in our local universities.

  8. AI has come such a long way in just one year; so much has happened. AI research is growing so exponentially that we will need AI to manage it and to predict where to focus our attention on AI.

  9. This is a really informative and detailed talk. I am especially interested in RL and its logical progression into the open learning space: from game-focused tasks toward open-domain and recommendation settings.

  10. “The meaning of life is not what I think it is, it’s what I do to make it.” So is AI.
    Thank you for making these public. Highly appreciated!

  11. Thank you very much for putting this online. I am not even sure what recently triggered me to attempt learning AI. But because of individuals like yourself who release information freely, I have already begun to learn the concepts.

  12. Thank you for this comprehensive review! As someone working in this field, it’s virtually impossible to keep up with the speed of new developments on all fronts. I would also love to know how you keep track of recent work. Thank you!

  13. I am also looking forward to more abstractions that scale DL applications beyond those who can invest in learning the syntax, complex setup, and usage of hyperparameters. Those who can spend more time understanding causal relationships and other context related to the problem could add far more value than we’ve seen from the most technical data scientists.

  14. I can really see your deep enthusiasm for history and interdisciplinary knowledge coming through here. I hope that passion passes on to your students. People need to _dig_ into this stuff. There’s nothing more powerful for innovation than holistic understanding.
