
Notes on 02 Aug 2021

Agenda

  • Lorenz will show a FastAI.jl live demo!
  • Live Q&A session

Minutes

See the recorded video for the live demo! Below is a chat transcript:

16:54:33 From Kyle Daruwalla to Everyone : Live Q&A: https://pigeonhole.at/FASTAIJL (link reposted several times during the session)

17:19:29 From Rakibul Hasnat Alif to Everyone : Is physics a must for AI?

17:20:49 From gabrielbaraldi to Everyone : Physics, I don’t think so, but knowing a bit of the math is always helpful

17:21:13 From Christopher Rackauckas to Everyone : If you have billions of dollars for a "big data" set, use a neural network trained on a large dataset. If you are DeepMind and are willing to pay the billions to train AlphaStar for a demonstration, then pay and use standard ML. If you're the rest of the world, incorporate physics and reduce the training costs and data by a few orders of magnitude. That's the general summary of the results in the scientific machine learning field.

17:21:27 From Jerry Ling to Everyone : if you want to apply AI in field X, especially for discovering new "laws", then knowledge in X can save you a lot of time and data.

17:22:49 From Christopher Rackauckas to Everyone : https://medium.com/swlh/neural-ode-for-reinforcement-learning-and-nonlinear-optimal-control-cartpole-problem-revisited-5408018b8d71 here for example is a nice demonstration of sciml approaches to reinforcement learning. Cartpole learns in 4 iterations IIRC. If you already know the physics of gravity, why relearn it every time? Data will be the future bottleneck, so we should help ML as much as possible.
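
For context, a minimal neural-ODE sketch in Julia (an illustrative example using DiffEqFlux.jl's Flux-based API, not the code from the linked post): a small network stands in for the unknown dynamics, and known physics terms could be added alongside it.

```julia
# Illustrative only — not the code from the linked cartpole post.
using DiffEqFlux, OrdinaryDiffEq, Flux

# Small network standing in for the unknown part of a 2-state system's dynamics.
dudt = Chain(Dense(2, 16, tanh), Dense(16, 2))

# Wrap it as a neural ODE solved with Tsit5() over t ∈ [0, 1].
node = NeuralODE(dudt, (0.0f0, 1.0f0), Tsit5(); saveat = 0.1f0)

u0  = Float32[1.0, 0.0]  # initial state
sol = node(u0)           # differentiable forward solve, so it can be trained
```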

17:22:58 From Jeremy Howard to Everyone : note that most practical applications can use transfer learning and semi-supervised learning. around 100 labeled data points are generally sufficient for that purpose

17:23:43 From Jeremy Howard to Everyone : sciml is also very useful. I haven't seen a combination of both, but I assume they would combine well

17:25:09 From Christopher Rackauckas to Everyone : Some people (including us) are working on it in the fluid dynamics community. You can use the simulation of simplified physics as a baseline to train, and then transfer learn to just a few real data points to improve the predictions. These kinds of combinations are where we are going and further reducing data/compute needs.

17:25:24 From Jeremy Howard to Everyone : nice!
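
To make the transfer-learning point above concrete, here is a hedged sketch in the Julia ecosystem: a pretrained Metalhead.jl backbone with a fresh classification head. Layer indices, accessor layout, and the `pretrain` keyword vary across Metalhead/Flux versions, so treat this as illustrative, not canonical.

```julia
using Flux, Metalhead

# ImageNet-pretrained ResNet-18; `pretrain` keyword per recent Metalhead.jl.
pretrained = ResNet(18; pretrain = true)
backbone   = pretrained.layers[1]   # convolutional feature extractor (assumed layout)

# New head for a hypothetical 10-class task; ResNet-18 features have 512 channels.
head  = Chain(AdaptiveMeanPool((1, 1)), Flux.flatten, Dense(512 => 10))
model = Chain(backbone, head)

# Freeze the backbone so only the new head is trained (Flux ≥ 0.14 API).
opt_state = Flux.setup(Adam(3e-4), model)
Flux.freeze!(opt_state.layers[1])
```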

17:27:00 From joao guilherme to Everyone : You need to make Adam’s default lr be 3e-4 😛
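
For reference, Flux's Adam defaults to a learning rate of 1e-3; the meme value has to be passed explicitly:

```julia
using Flux

opt = Adam(3e-4)  # Flux's default is Adam(0.001); pass 3e-4 yourself
```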

17:27:15 From Dhairya Gandhi to Everyone : We also have an example of training atomic graph nets for materials property discovery and simulation, which combines ML with simulations for over 10x faster convergence compared with PyTorch

17:27:20 From Ari to Everyone : Onehot definition: https://github.com/FluxML/Flux.jl/blob/1a0b51938b9a3d679c6950eece214cd18108395f/src/onehot.jl
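
A quick usage example of the one-hot utilities defined in that file:

```julia
using Flux: onehot, onehotbatch

onehot(:b, [:a, :b, :c])                  # OneHotVector encoding [0, 1, 0]
onehotbatch([:b, :a, :b], [:a, :b, :c])   # 3×3 OneHotMatrix, one column per item
```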

17:27:32 From Kyle Daruwalla to Everyone : I am sorry to everyone who had to see that graphic Zoom bomb. I have disabled video and audio for participants.

17:28:15 From joao guilherme to Everyone : Do you have plans for releasing an online course using FastAI.jl? (Question to Jeremy and Lorenz mostly)

17:30:19 From Ari to Everyone : Great job kyle, this is awesome!

17:32:17 From Bruno dos Santos to Everyone : Any reason VAETraining phase is a struct and not an abstract type?

17:32:48 From Dhairya Gandhi to Everyone : It subtypes AbstractPhase, so this is specific to the VAE training
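
In other words, phases in FluxTraining.jl are concrete structs under an abstract phase type, so training-step methods and callbacks can dispatch on a phase instance. A hedged sketch of the pattern (type paths follow FluxTraining.jl; the `step!` overload body is elided):

```julia
using FluxTraining

# Concrete phase type: the dispatch target for the VAE-specific training step.
struct VAETrainingPhase <: FluxTraining.Phases.AbstractTrainingPhase end

# Custom behavior then hangs off dispatch, e.g. (signature hedged):
# FluxTraining.step!(learner, phase::VAETrainingPhase, batch) = ...
```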

17:34:13 From Ari to Everyone : And all with the base array API

17:34:19 From Ari to Everyone : Julia's base 

17:34:32 From Jeremy Howard to Everyone : is there something like fastai's `show_results`? can it be customized, e.g. for that VAE example to show input and output?

17:35:31 From Kyle Daruwalla to Everyone : Yes there are a couple plotting utilities that would work: https://fluxml.ai/FastAI.jl/dev/notebooks/how_to_visualize.ipynb.html

17:36:56 From Jeremy Howard to Everyone : thanks kyle - that's not quite as convenient/abstracted. needs typed model outputs and typedispatch to replicate it

17:37:29 From Julian Samaroo to Everyone : Need unmute

17:37:32 From Dhairya Gandhi to Everyone : I can’t unmute myself

17:38:33 From Kyle Daruwalla to Everyone : @Jeremy yeah that’s something we’re planning on working on

17:39:53 From Jeremy Howard to Everyone : can i have unmute/video enabled please?

17:40:11 From Kyle Daruwalla to Everyone : I enabled unmute for everyone

17:40:26 From Kyle Daruwalla to Everyone : Let me know if you still have trouble

17:40:34 From Jeremy Howard to Everyone : thanks - still don't have video but that's ok

17:57:11 From Kyle Daruwalla to Everyone : https://julialang.zulipchat.com/#narrow/stream/237432-ml-ecosystem-coordination

17:57:44 From Kyle Daruwalla to Everyone : If anyone is interested in contributing to porting the book, join us using the Zulip link above

17:58:45 From Dhairya Gandhi to Everyone : Also please join the Julia Slack! http://julialang.slack.com

18:00:00 From joao guilherme to Everyone : There are more questions in pigeonhole, maybe you need to refresh the page?

18:00:01 From Johann-Tobias Schäg to Everyone : there are still unanswered questions

18:00:08 From Johann-Tobias Schäg to Everyone : like:

18:00:29 From Johann-Tobias Schäg to Everyone : Will FastAI.jl create recipes for running the MLPerf benchmarks... (no need to perform too well, but ticking these benchmarks might be important for users)?

18:01:15 From Johann-Tobias Schäg to Everyone : I still have problems finding (recent) benchmarks comparing Julia libraries to non-Julia packages. Can you make them or make them more accessible?

18:01:32 From Johann-Tobias Schäg to Everyone : Can image augmentations use the GPU for faster training on environments with a low number of CPUs (Colab & Kaggle)?

18:01:46 From Johann-Tobias Schäg to Everyone : How do the other deep learning packages, e.g. Knet.jl and Avalon.jl, fit into the ecosystem?

18:02:05 From Johann-Tobias Schäg to Everyone : Best Julia learning resources for Python users?

18:02:20 From Dhairya Gandhi to Everyone : We can definitely use the GPUs for augmentation etc.

18:02:46 From Johann-Tobias Schäg to Everyone : not my question, just reposting *shrug*
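
On the GPU-augmentation question, a hedged sketch of the idea with CUDA.jl: a batched horizontal flip expressed as a plain array operation that runs entirely on-device, with no CPU involvement.

```julia
using CUDA

xs = CUDA.rand(Float32, 128, 128, 3, 32)  # width × height × channels × batch

# Flip the width dimension for the whole batch; executes as a GPU kernel.
flipped = reverse(xs; dims = 1)
```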

18:06:39 From Lorenz Ohly to Everyone : https://julialang.org/community/