# Awesome [JAX](https://github.com/google/jax) [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)

[JAX](https://github.com/google/jax) brings automatic differentiation and the [XLA compiler](https://www.tensorflow.org/xla) together through a [NumPy](https://numpy.org/)-like API for high performance machine learning research on accelerators like GPUs and TPUs.
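To make that concrete, here is a minimal sketch of the three composable transformations most of this ecosystem builds on: `jax.grad` (autodiff), `jax.jit` (XLA compilation), and `jax.vmap` (auto-vectorization). The function and the shapes are illustrative only.

```python
import jax
import jax.numpy as jnp

# A plain NumPy-style function: mean squared error of a linear model.
def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

grad_loss = jax.grad(loss)                       # autodiff: d(loss)/d(w)
fast_grad = jax.jit(grad_loss)                   # XLA-compile for CPU/GPU/TPU
batched = jax.vmap(loss, in_axes=(None, 0, 0))   # vectorize over a batch axis

w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(fast_grad(w, x, y))   # compiled gradient, shape (3,)
print(batched(w, x, y))     # per-example losses, shape (8,)
```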
This is a curated list of awesome JAX libraries, projects, and other resources. Contributions are welcome!
## Contents

- [Libraries](#libraries)
- [Models and Projects](#models-and-projects)
- [Videos](#videos)
- [Papers](#papers)
- [Tutorials and Blog Posts](#tutorials-and-blog-posts)
- [Books](#books)
- [Community](#community)
## Libraries

- Neural Network Libraries
  - [Flax](https://github.com/google/flax) - Centered on flexibility and clarity (see the training-step sketch after this list).
  - [Flax NNX](https://github.com/google/flax/tree/main/flax/nnx) - An evolution of Flax by the same team.
  - [Haiku](https://github.com/deepmind/dm-haiku) - Focused on simplicity, created by the authors of Sonnet at DeepMind.
  - [Objax](https://github.com/google/objax) - Has an object oriented design similar to PyTorch.
  - [Elegy](https://poets-ai.github.io/elegy/) - A High Level API for Deep Learning in JAX. Supports Flax, Haiku, and Optax.
  - [Trax](https://github.com/google/trax) - "Batteries included" deep learning library focused on providing solutions for common workloads.
  - [Jraph](https://github.com/deepmind/jraph) - Lightweight graph neural network library.
  - [Neural Tangents](https://github.com/google/neural-tangents) - High-level API for specifying neural networks of both finite and _infinite_ width.
  - [HuggingFace Transformers](https://github.com/huggingface/transformers) - Ecosystem of pretrained Transformers for a wide range of natural language tasks (Flax).
  - [Equinox](https://github.com/patrick-kidger/equinox) - Callable PyTrees and filtered JIT/grad transformations => neural networks in JAX.
  - [Scenic](https://github.com/google-research/scenic) - A Jax Library for Computer Vision Research and Beyond.
  - [Penzai](https://github.com/google-deepmind/penzai) - Prioritizes legibility, visualization, and easy editing of neural network models with composable tools and a simple mental model.
- [Levanter](https://github.com/stanford-crfm/levanter) - Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX.
- [EasyLM](https://github.com/young-geng/EasyLM) - LLMs made easy: pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
- [NumPyro](https://github.com/pyro-ppl/numpyro) - Probabilistic programming based on the Pyro library.
- [Chex](https://github.com/deepmind/chex) - Utilities to write and test reliable JAX code.
- [Optax](https://github.com/deepmind/optax) - Gradient processing and optimization library (see the training-step sketch after this list).
- [RLax](https://github.com/deepmind/rlax) - Library for implementing reinforcement learning agents.
- [JAX, M.D.](https://github.com/google/jax-md) - Accelerated, differentiable molecular dynamics.
- [Coax](https://github.com/coax-dev/coax) - Turn RL papers into code, the easy way.
- [Distrax](https://github.com/deepmind/distrax) - Reimplementation of TensorFlow Probability, containing probability distributions and bijectors.
- [cvxpylayers](https://github.com/cvxgrp/cvxpylayers) - Construct differentiable convex optimization layers.
- [TensorLy](https://github.com/tensorly/tensorly) - Tensor learning made simple.
- [NetKet](https://github.com/netket/netket) - Machine Learning toolbox for Quantum Physics.
- [Fortuna](https://github.com/awslabs/fortuna) - AWS library for Uncertainty Quantification in Deep Learning.
- [BlackJAX](https://github.com/blackjax-devs/blackjax) - Library of samplers for JAX.
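As a taste of how the neural network and optimizer libraries above compose, here is a minimal sketch of a training step wiring a Flax Linen module to an Optax optimizer. The model architecture, batch shapes, and hyperparameters are placeholders, not a recommendation.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(32)(x))
        return nn.Dense(1)(x)

model = MLP()
x = jnp.ones((16, 4))   # dummy batch
y = jnp.zeros((16, 1))
params = model.init(jax.random.PRNGKey(0), x)

optimizer = optax.adam(1e-3)
opt_state = optimizer.init(params)

@jax.jit
def train_step(params, opt_state, x, y):
    def loss_fn(p):
        pred = model.apply(p, x)
        return jnp.mean((pred - y) ** 2)
    loss, grads = jax.value_and_grad(loss_fn)(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss

params, opt_state, loss = train_step(params, opt_state, x, y)
```

Haiku, Equinox, and the other libraries above fill the same role as the Flax module here, each with its own way of managing parameters as PyTrees.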
### New Libraries

This section contains libraries that are well-made and useful, but have not necessarily been battle-tested by a large userbase yet.
- Neural Network Libraries
  - [FedJAX](https://github.com/google/fedjax) - Federated learning in JAX, built on Optax and Haiku.
  - [Equivariant MLP](https://github.com/mfinzi/equivariant-MLP) - Construct equivariant neural network layers.
  - [jax-resnet](https://github.com/n2cholas/jax-resnet/) - Implementations and checkpoints for ResNet variants in Flax.
  - [Parallax](https://github.com/srush/parallax) - Immutable Torch Modules for JAX.
- Nonlinear Optimization
  - [Optimistix](https://github.com/patrick-kidger/optimistix) - Root finding, minimisation, fixed points, and least squares.
  - [JAXopt](https://github.com/google/jaxopt) - Hardware accelerated (GPU/TPU), batchable and differentiable optimizers in JAX.
- [jax-unirep](https://github.com/ElArkk/jax-unirep) - Library implementing the [UniRep model](https://www.nature.com/articles/s41592-019-0598-1) for protein machine learning applications.
- [flowjax](https://github.com/danielward27/flowjax) - Distributions and normalizing flows built as Equinox modules.
- [jax-flows](https://github.com/ChrisWaites/jax-flows) - Normalizing flows in JAX.
- [sklearn-jax-kernels](https://github.com/ExpectationMax/sklearn-jax-kernels) - `scikit-learn` kernel matrices using JAX.
- [jax-cosmo](https://github.com/DifferentiableUniverseInitiative/jax_cosmo) - Differentiable cosmology library.
- [efax](https://github.com/NeilGirdhar/efax) - Exponential Families in JAX.
- [mpi4jax](https://github.com/PhilipVinc/mpi4jax) - Combine MPI operations with your JAX code on CPUs and GPUs.
- [imax](https://github.com/4rtemi5/imax) - Image augmentations and transformations.
- [FlaxVision](https://github.com/rolandgvc/flaxvision) - Flax version of TorchVision.
- [Oryx](https://github.com/tensorflow/probability/tree/master/spinoffs/oryx) - Probabilistic programming language based on program transformations.
- [Optimal Transport Tools](https://github.com/google-research/ott) - Toolbox that bundles utilities to solve optimal transport problems.
- [delta PV](https://github.com/romanodev/deltapv) - A photovoltaic simulator with automatic differentiation.
- [jaxlie](https://github.com/brentyi/jaxlie) - Lie theory library for rigid body transformations and optimization.
- [BRAX](https://github.com/google/brax) - Differentiable physics engine to simulate environments along with learning algorithms to train agents for these environments.
- [flaxmodels](https://github.com/matthias-wright/flaxmodels) - Pretrained models for Jax/Flax.
- [CR.Sparse](https://github.com/carnotresearch/cr-sparse) - XLA accelerated algorithms for sparse representations and compressive sensing.
- [exojax](https://github.com/HajimeKawahara/exojax) - Automatic differentiable spectrum modeling of exoplanets/brown dwarfs, compatible with JAX.
- [PIX](https://github.com/deepmind/dm_pix) - PIX is an image processing library in JAX, for JAX.
- [bayex](https://github.com/alonfnt/bayex) - Bayesian Optimization powered by JAX.
- [JaxDF](https://github.com/ucl-bug/jaxdf) - Framework for differentiable simulators with arbitrary discretizations.
- [tree-math](https://github.com/google/tree-math) - Convert functions that operate on arrays into functions that operate on PyTrees.
- [jax-models](https://github.com/DarshanDeshpande/jax-models) - Implementations of research papers originally without code or code written with frameworks other than JAX.
- [PGMax](https://github.com/vicariousinc/PGMax) - A framework for building discrete Probabilistic Graphical Models (PGMs) and running inference on them via JAX.
- [EvoJAX](https://github.com/google/evojax) - Hardware-Accelerated Neuroevolution.
- [evosax](https://github.com/RobertTLange/evosax) - JAX-Based Evolution Strategies.
- [SymJAX](https://github.com/SymJAX/SymJAX) - Symbolic CPU/GPU/TPU programming.
- [mcx](https://github.com/rlouf/mcx) - Express & compile probabilistic programs for performant inference.
- [Einshape](https://github.com/deepmind/einshape) - DSL-based reshaping library for JAX and other frameworks.
- [ALX](https://github.com/google-research/google-research/tree/master/alx) - Open-source library for distributed matrix factorization using Alternating Least Squares, more info in [_ALX: Large Scale Matrix Factorization on TPUs_](https://arxiv.org/abs/2112.02194).
- [Diffrax](https://github.com/patrick-kidger/diffrax) - Numerical differential equation solvers in JAX (see the sketch after this list).
- [tinygp](https://github.com/dfm/tinygp) - The _tiniest_ of Gaussian process libraries in JAX.
- [gymnax](https://github.com/RobertTLange/gymnax) - Reinforcement Learning Environments with the well-known gym API.
- [Mctx](https://github.com/deepmind/mctx) - Monte Carlo tree search algorithms in native JAX.
- [KFAC-JAX](https://github.com/deepmind/kfac-jax) - Second Order Optimization with Approximate Curvature for NNs.
- [TF2JAX](https://github.com/deepmind/tf2jax) - Convert functions/graphs to JAX functions.
- [jwave](https://github.com/ucl-bug/jwave) - A library for differentiable acoustic simulations.
- [GPJax](https://github.com/thomaspinder/GPJax) - Gaussian processes in JAX.
- [Jumanji](https://github.com/instadeepai/jumanji) - A Suite of Industry-Driven Hardware-Accelerated RL Environments written in JAX.
- [Eqxvision](https://github.com/paganpasta/eqxvision) - Equinox version of Torchvision.
- [JAXFit](https://github.com/dipolar-quantum-gases/jaxfit) - Accelerated curve fitting library for nonlinear least-squares problems (see [arXiv paper](https://arxiv.org/abs/2208.12187)).
- [econpizza](https://github.com/gboehl/econpizza) - Solve macroeconomic models with heterogeneous agents using JAX.
- [SPU](https://github.com/secretflow/spu) - A domain-specific compiler and runtime suite to run JAX code with MPC (Secure Multi-Party Computation).
- [jax-tqdm](https://github.com/jeremiecoullon/jax-tqdm) - Add a tqdm progress bar to JAX scans and loops.
- [safejax](https://github.com/alvarobartt/safejax) - Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors`.
- [Kernex](https://github.com/ASEM000/kernex) - Differentiable stencil decorators in JAX.
- [MaxText](https://github.com/google/maxtext) - A simple, performant and scalable Jax LLM written in pure Python/Jax and targeting Google Cloud TPUs.
- [Pax](https://github.com/google/paxml) - A Jax-based machine learning framework for training large scale models.
- [Praxis](https://github.com/google/praxis) - The layer library for Pax, with a goal to be usable by other JAX-based ML projects.
- [purejaxrl](https://github.com/luchris429/purejaxrl) - Vectorisable, end-to-end RL algorithms in JAX.
- [Lorax](https://github.com/davisyoshida/lorax) - Automatically apply LoRA to JAX models (Flax, Haiku, etc.).
- [SCICO](https://github.com/lanl/scico) - Scientific computational imaging in JAX.
- [Spyx](https://github.com/kmheckel/spyx) - Spiking Neural Networks in JAX for machine learning on neuromorphic hardware.
- Brain Dynamics Programming Ecosystem
  - [BrainPy](https://github.com/brainpy/BrainPy) - Brain Dynamics Programming in Python.
  - [brainunit](https://github.com/chaobrain/brainunit) - Physical units and unit-aware mathematical system in JAX.
  - [dendritex](https://github.com/chaobrain/dendritex) - Dendritic Modeling in JAX.
  - [brainstate](https://github.com/chaobrain/brainstate) - State-based Transformation System for Program Compilation and Augmentation.
  - [braintaichi](https://github.com/chaobrain/braintaichi) - Leveraging Taichi Lang to customize brain dynamics operators.
- [OTT-JAX](https://github.com/ott-jax/ott) - Optimal transport tools in JAX.
- [QDax](https://github.com/adaptive-intelligent-robotics/QDax) - Quality Diversity optimization in Jax.
- [JAX Toolbox](https://github.com/NVIDIA/JAX-Toolbox) - Nightly CI and optimized examples for JAX on NVIDIA GPUs using libraries such as T5x, Paxml, and Transformer Engine.
- [Pgx](http://github.com/sotetsuk/pgx) - Vectorized board game environments for RL with an AlphaZero example.
- [EasyDeL](https://github.com/erfanzar/EasyDeL) - EasyDeL 🔮 is an open-source library to make training and serving LLMs (Llama, MPT, Mixtral, Falcon, etc.) in JAX faster and more optimized, with useful options for both.
- [XLB](https://github.com/Autodesk/XLB) - A Differentiable Massively Parallel Lattice Boltzmann Library in Python for Physics-Based Machine Learning.
- [dynamiqs](https://github.com/dynamiqs/dynamiqs) - High-performance and differentiable simulations of quantum systems with JAX.
- [foragax](https://github.com/i-m-iron-man/Foragax) - Agent-Based modelling framework in JAX.
- [tmmax](https://github.com/bahremsd/tmmax) - Vectorized calculation of optical properties in thin-film structures using JAX; a Swiss Army knife for thin-film optics research.
- [Coreax](https://github.com/gchq/coreax) - Algorithms for finding coresets to compress large datasets while retaining their statistical properties.
- [NAVIX](https://github.com/epignatelli/navix) - A reimplementation of MiniGrid, a Reinforcement Learning environment, in JAX.
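Many of the scientific-computing entries above expose differentiable solvers. As one hedged illustration, here is a minimal Diffrax-style ODE solve, following the API documented in the Diffrax repository; treat the exact call signature as an assumption and check the library's docs.

```python
import jax.numpy as jnp
import diffrax

# dy/dt = -y, solved from t=0 to t=1 with the Tsit5 Runge-Kutta solver.
def vector_field(t, y, args):
    return -y

term = diffrax.ODETerm(vector_field)
solution = diffrax.diffeqsolve(
    term,
    diffrax.Tsit5(),
    t0=0.0,
    t1=1.0,
    dt0=0.1,
    y0=jnp.array(1.0),
    saveat=diffrax.SaveAt(ts=jnp.linspace(0.0, 1.0, 11)),
)
print(solution.ys)  # approximately exp(-t) at the requested times
```

Because the whole solve is a JAX computation, it can itself be wrapped in `jax.jit` or differentiated with `jax.grad`, which is the point of these libraries.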
## Models and Projects

### JAX

- [Fourier Feature Networks](https://github.com/tancik/fourier-feature-networks) - Official implementation of [_Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains_](https://people.eecs.berkeley.edu/~bmild/fourfeat).
- [kalman-jax](https://github.com/AaltoML/kalman-jax) - Approximate inference for Markov (i.e., temporal) Gaussian processes using iterated Kalman filtering and smoothing.
- [jaxns](https://github.com/Joshuaalbert/jaxns) - Nested sampling in JAX.
- [Amortized Bayesian Optimization](https://github.com/google-research/google-research/tree/master/amortized_bo) - Code related to [_Amortized Bayesian Optimization over Discrete Spaces_](http://www.auai.org/uai2020/proceedings/329_main_paper.pdf).
- [Accurate Quantized Training](https://github.com/google-research/google-research/tree/master/aqt) - Tools and libraries for running and analyzing neural network quantization experiments in JAX and Flax.
- [BNN-HMC](https://github.com/google-research/google-research/tree/master/bnn_hmc) - Implementation for the paper [_What Are Bayesian Neural Network Posteriors Really Like?_](https://arxiv.org/abs/2104.14421).
- [JAX-DFT](https://github.com/google-research/google-research/tree/master/jax_dft) - One-dimensional density functional theory (DFT) in JAX, with implementation of [_Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics_](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.126.036401).
- [Robust Loss](https://github.com/google-research/google-research/tree/master/robust_loss_jax) - Reference code for the paper [_A General and Adaptive Robust Loss Function_](https://arxiv.org/abs/1701.03077).
- [Symbolic Functionals](https://github.com/google-research/google-research/tree/master/symbolic_functionals) - Demonstration from [_Evolving symbolic density functionals_](https://arxiv.org/abs/2203.02540).
- [TriMap](https://github.com/google-research/google-research/tree/master/trimap) - Official JAX implementation of [_TriMap: Large-scale Dimensionality Reduction Using Triplets_](https://arxiv.org/abs/1910.00204).
### Flax

- [DeepSeek-R1-Flax-1.5B-Distill](https://github.com/J-Rosser-UK/Torch2Jax-DeepSeek-R1-Distill-Qwen-1.5B) - Flax implementation of the DeepSeek-R1 1.5B distilled reasoning LLM.
- [Performer](https://github.com/google-research/google-research/tree/master/performer/fast_attention/jax) - Flax implementation of the Performer (linear transformer via FAVOR+) architecture.
- [JaxNeRF](https://github.com/google-research/google-research/tree/master/jaxnerf) - Implementation of [_NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis_](http://www.matthewtancik.com/nerf) with multi-device GPU/TPU support.
- [mip-NeRF](https://github.com/google/mipnerf) - Official implementation of [_Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields_](https://jonbarron.info/mipnerf).
- [RegNeRF](https://github.com/google-research/google-research/tree/master/regnerf) - Official implementation of [_RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs_](https://m-niemeyer.github.io/regnerf/).
- [JaxNeuS](https://github.com/huangjuite/jaxneus) - Implementation of [_NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction_](https://lingjie0206.github.io/papers/NeuS/).
- [Big Transfer (BiT)](https://github.com/google-research/big_transfer) - Implementation of [_Big Transfer (BiT): General Visual Representation Learning_](https://arxiv.org/abs/1912.11370).
- [JAX RL](https://github.com/ikostrikov/jax-rl) - Implementations of reinforcement learning algorithms.
- [gMLP](https://github.com/SauravMaheshkar/gMLP) - Implementation of [_Pay Attention to MLPs_](https://arxiv.org/abs/2105.08050).
- [MLP Mixer](https://github.com/SauravMaheshkar/MLP-Mixer) - Minimal implementation of [_MLP-Mixer: An all-MLP Architecture for Vision_](https://arxiv.org/abs/2105.01601).
- [Distributed Shampoo](https://github.com/google-research/google-research/tree/master/scalable_shampoo) - Implementation of [_Second Order Optimization Made Practical_](https://arxiv.org/abs/2002.09018).
- [NesT](https://github.com/google-research/nested-transformer) - Official implementation of [_Aggregating Nested Transformers_](https://arxiv.org/abs/2105.12723).
- [XMC-GAN](https://github.com/google-research/xmcgan_image_generation) - Official implementation of [_Cross-Modal Contrastive Learning for Text-to-Image Generation_](https://arxiv.org/abs/2101.04702).
- [FNet](https://github.com/google-research/google-research/tree/master/f_net) - Official implementation of [_FNet: Mixing Tokens with Fourier Transforms_](https://arxiv.org/abs/2105.03824).
- [GFSA](https://github.com/google-research/google-research/tree/master/gfsa) - Official implementation of [_Learning Graph Structure With A Finite-State Automaton Layer_](https://arxiv.org/abs/2007.04929).
- [IPA-GNN](https://github.com/google-research/google-research/tree/master/ipagnn) - Official implementation of [_Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks_](https://arxiv.org/abs/2010.12621).
- [Flax Models](https://github.com/google-research/google-research/tree/master/flax_models) - Collection of models and methods implemented in Flax.
- [Protein LM](https://github.com/google-research/google-research/tree/master/protein_lm) - Implements BERT and autoregressive models for proteins, as described in [_Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences_](https://www.biorxiv.org/content/10.1101/622803v1.full) and [_ProGen: Language Modeling for Protein Generation_](https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2).
- [Differentiable Patch Selection](https://github.com/google-research/google-research/tree/master/ptopk_patch_selection) - Reference implementation for [_Differentiable Patch Selection for Image Recognition_](https://arxiv.org/abs/2104.03059).
- [Vision Transformer](https://github.com/google-research/vision_transformer) - Official implementation of [_An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale_](https://arxiv.org/abs/2010.11929).
- [FID computation](https://github.com/matthias-wright/jax-fid) - Port of [mseitzer/pytorch-fid](https://github.com/mseitzer/pytorch-fid) to Flax.
- [ARDM](https://github.com/google-research/google-research/tree/master/autoregressive_diffusion) - Official implementation of [_Autoregressive Diffusion Models_](https://arxiv.org/abs/2110.02037).
- [D3PM](https://github.com/google-research/google-research/tree/master/d3pm) - Official implementation of [_Structured Denoising Diffusion Models in Discrete State-Spaces_](https://arxiv.org/abs/2107.03006).
- [Gumbel-max Causal Mechanisms](https://github.com/google-research/google-research/tree/master/gumbel_max_causal_gadgets) - Code for [_Learning Generalized Gumbel-max Causal Mechanisms_](https://arxiv.org/abs/2111.06888), with extra code in [GuyLor/gumbel_max_causal_gadgets_part2](https://github.com/GuyLor/gumbel_max_causal_gadgets_part2).
- [Latent Programmer](https://github.com/google-research/google-research/tree/master/latent_programmer) - Code for the ICML 2021 paper [_Latent Programmer: Discrete Latent Codes for Program Synthesis_](https://arxiv.org/abs/2012.00377).
- [SNeRG](https://github.com/google-research/google-research/tree/master/snerg) - Official implementation of [_Baking Neural Radiance Fields for Real-Time View Synthesis_](https://phog.github.io/snerg).
- [Spin-weighted Spherical CNNs](https://github.com/google-research/google-research/tree/master/spin_spherical_cnns) - Adaptation of [_Spin-Weighted Spherical CNNs_](https://arxiv.org/abs/2006.10731).
- [VDVAE](https://github.com/google-research/google-research/tree/master/vdvae_flax) - Adaptation of [_Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images_](https://arxiv.org/abs/2011.10650), original code at [openai/vdvae](https://github.com/openai/vdvae).
- [MUSIQ](https://github.com/google-research/google-research/tree/master/musiq) - Checkpoints and model inference code for the ICCV 2021 paper [_MUSIQ: Multi-scale Image Quality Transformer_](https://arxiv.org/abs/2108.05997).
- [AQuaDem](https://github.com/google-research/google-research/tree/master/aquadem) - Official implementation of [_Continuous Control with Action Quantization from Demonstrations_](https://arxiv.org/abs/2110.10149).
- [Combiner](https://github.com/google-research/google-research/tree/master/combiner) - Official implementation of [_Combiner: Full Attention Transformer with Sparse Computation Cost_](https://arxiv.org/abs/2107.05768).
- [Dreamfields](https://github.com/google-research/google-research/tree/master/dreamfields) - Official implementation of [_Zero-Shot Text-Guided Object Generation with Dream Fields_](https://ajayj.com/dreamfields).
- [GIFT](https://github.com/google-research/google-research/tree/master/gift) - Official implementation of [_Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent_](https://arxiv.org/abs/2106.06080).
- [Light Field Neural Rendering](https://github.com/google-research/google-research/tree/master/light_field_neural_rendering) - Official implementation of [_Light Field Neural Rendering_](https://arxiv.org/abs/2112.09687).
- [Sharpened Cosine Similarity in JAX by Raphael Pisoni](https://colab.research.google.com/drive/1KUKFEMneQMS3OzPYnWZGkEnry3PdzCfn?usp=sharing) - A JAX/Flax implementation of the Sharpened Cosine Similarity layer.
- [GNNs for Solving Combinatorial Optimization Problems](https://github.com/IvanIsCoding/GNN-for-Combinatorial-Optimization) - A JAX + Flax implementation of [Combinatorial Optimization with Physics-Inspired Graph Neural Networks](https://arxiv.org/abs/2107.01188).
- [DETR](https://github.com/MasterSkepticista/detr) - Flax implementation of [_DETR: End-to-end Object Detection with Transformers_](https://github.com/facebookresearch/detr) using a Sinkhorn solver and parallel bipartite matching.
### Haiku

- [AlphaFold](https://github.com/deepmind/alphafold) - Implementation of the inference pipeline of AlphaFold v2.0, presented in [_Highly accurate protein structure prediction with AlphaFold_](https://www.nature.com/articles/s41586-021-03819-2).
- [Adversarial Robustness](https://github.com/deepmind/deepmind-research/tree/master/adversarial_robustness) - Reference code for [_Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples_](https://arxiv.org/abs/2010.03593) and [_Fixing Data Augmentation to Improve Adversarial Robustness_](https://arxiv.org/abs/2103.01946).
- [Bootstrap Your Own Latent](https://github.com/deepmind/deepmind-research/tree/master/byol) - Implementation for the paper [_Bootstrap your own latent: A new approach to self-supervised Learning_](https://arxiv.org/abs/2006.07733).
- [Gated Linear Networks](https://github.com/deepmind/deepmind-research/tree/master/gated_linear_networks) - GLNs are a family of backpropagation-free neural networks.
- [Glassy Dynamics](https://github.com/deepmind/deepmind-research/tree/master/glassy_dynamics) - Open source implementation of the paper [_Unveiling the predictive power of static structure in glassy systems_](https://www.nature.com/articles/s41567-020-0842-8).
- [MMV](https://github.com/deepmind/deepmind-research/tree/master/mmv) - Code for the models in [_Self-Supervised MultiModal Versatile Networks_](https://arxiv.org/abs/2006.16228).
- [Normalizer-Free Networks](https://github.com/deepmind/deepmind-research/tree/master/nfnets) - Official Haiku implementation of [_NFNets_](https://arxiv.org/abs/2102.06171).
- [NuX](https://github.com/Information-Fusion-Lab-Umass/NuX) - Normalizing flows with JAX.
- [OGB-LSC](https://github.com/deepmind/deepmind-research/tree/master/ogb_lsc) - DeepMind's entry to the [PCQM4M-LSC](https://ogb.stanford.edu/kddcup2021/pcqm4m/) (quantum chemistry) and [MAG240M-LSC](https://ogb.stanford.edu/kddcup2021/mag240m/) (academic graph) tracks of the [OGB Large-Scale Challenge](https://ogb.stanford.edu/kddcup2021/) (OGB-LSC).
- [Persistent Evolution Strategies](https://github.com/google-research/google-research/tree/master/persistent_es) - Code used for the paper [_Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies_](http://proceedings.mlr.press/v139/vicol21a.html).
- [Two Player Auction Learning](https://github.com/degregat/two-player-auctions) - JAX implementation of the paper [_Auction learning as a two-player game_](https://arxiv.org/abs/2006.05684).
- [WikiGraphs](https://github.com/deepmind/deepmind-research/tree/master/wikigraphs) - Baseline code to reproduce results in [_WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset_](https://aclanthology.org/2021.textgraphs-1.7).
### Trax

- [Reformer](https://github.com/google/trax/tree/master/trax/models/reformer) - Implementation of the Reformer (efficient transformer) architecture.

### NumPyro

- [lqg](https://github.com/RothkopfLab/lqg) - Official implementation of Bayesian inverse optimal control for linear-quadratic Gaussian problems from the paper [_Putting perception into action with inverse optimal control for continuous psychophysics_](https://elifesciences.org/articles/76635).
## Videos

- [NeurIPS 2020: JAX Ecosystem Meetup](https://www.youtube.com/watch?v=iDxJxIyzSiM) - JAX, its use at DeepMind, and discussion between engineers, scientists, and the JAX core team.
- [Introduction to JAX](https://youtu.be/0mVmRHMaOJ4) - Simple neural network from scratch in JAX.
- [JAX: Accelerated Machine Learning Research | SciPy 2020 | VanderPlas](https://youtu.be/z-WSrQDXkuM) - JAX's core design, how it's powering new research, and how you can start using it.
- [Bayesian Programming with JAX + NumPyro — Andy Kitchen](https://youtu.be/CecuWGpoztw) - Introduction to Bayesian modelling using NumPyro.
- [JAX: Accelerated machine-learning research via composable function transformations in Python | NeurIPS 2019 | Skye Wanderman-Milne](https://slideslive.com/38923687/jax-accelerated-machinelearning-research-via-composable-function-transformations-in-python) - JAX intro presentation in the [_Program Transformations for Machine Learning_](https://program-transformations.github.io) workshop.
- [JAX on Cloud TPUs | NeurIPS 2020 | Skye Wanderman-Milne and James Bradbury](https://drive.google.com/file/d/1jKxefZT1xJDUxMman6qrQVed7vWI0MIn/edit) - Presentation of TPU host access with demo.
- [Deep Implicit Layers - Neural ODEs, Deep Equilibrium Models, and Beyond | NeurIPS 2020](https://slideslive.com/38935810/deep-implicit-layers-neural-odes-equilibrium-models-and-beyond) - Tutorial created by Zico Kolter, David Duvenaud, and Matt Johnson, with Colab notebooks available in [_Deep Implicit Layers_](http://implicit-layers-tutorial.org).
- [Solving y=mx+b with Jax on a TPU Pod slice - Mat Kelcey](http://matpalm.com/blog/ymxb_pod_slice/) - A four part YouTube tutorial series with Colab notebooks that starts with Jax fundamentals and moves up to training with a data parallel approach on a v3-32 TPU Pod slice (a minimal data-parallel sketch follows this list).
- [JAX, Flax & Transformers 🤗](https://github.com/huggingface/transformers/blob/9160d81c98854df44b1d543ce5d65a6aa28444a2/examples/research_projects/jax-projects/README.md#talks) - 3 days of talks around JAX / Flax, Transformers, large-scale language modeling and other great topics.
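For readers curious what the data parallel approach in the TPU Pod tutorial above looks like in code, here is a minimal, hypothetical sketch using `jax.pmap` with gradients averaged across devices via `jax.lax.pmean`. The loss, learning rate, and batch shapes are placeholders.

```python
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()

def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

def step(w, x, y):
    grads = jax.grad(loss)(w, x, y)
    # Average gradients across all devices so every replica
    # applies the same update and parameters stay in sync.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return w - 0.1 * grads

p_step = jax.pmap(step, axis_name="devices")

# Replicate parameters and shard the batch across devices.
w = jnp.zeros((n_dev, 3))      # same params on each device
x = jnp.ones((n_dev, 32, 3))   # per-device shard of the batch
y = jnp.ones((n_dev, 32))
w = p_step(w, x, y)
```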
## Papers

This section contains papers focused on JAX (e.g. JAX-based library whitepapers, research on JAX, etc.). Papers implemented in JAX are listed in the [Models/Projects](#models-and-projects) section.

- [__Compiling machine learning programs via high-level tracing__. Roy Frostig, Matthew James Johnson, Chris Leary. _MLSys 2018_.](https://mlsys.org/Conferences/doc/2018/146.pdf) - White paper describing an early version of JAX, detailing how computation is traced and compiled.
- [__JAX, M.D.: A Framework for Differentiable Physics__. Samuel S. Schoenholz, Ekin D. Cubuk. _NeurIPS 2020_.](https://arxiv.org/abs/1912.04232) - Introduces JAX, M.D., a differentiable physics library which includes simulation environments, interaction potentials, neural networks, and more.
- [__Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization__. Pranav Subramani, Nicholas Vadivelu, Gautam Kamath. _arXiv 2020_.](https://arxiv.org/abs/2010.09063) - Uses JAX's JIT and VMAP to achieve faster differentially private SGD than existing libraries (see the per-example-gradient sketch after this list).
- [__XLB: A Differentiable Massively Parallel Lattice Boltzmann Library in Python__. Mohammadmehdi Ataei, Hesam Salehipour. _arXiv 2023_.](https://arxiv.org/abs/2311.16080) - White paper describing the XLB library: benchmarks, validations, and more details about the library.
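The DP-SGD paper above exploits the fact that `jax.vmap` makes per-example gradients — the expensive step in differentially private SGD — cheap to express. A hedged sketch of that core idea (not the paper's actual code):

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Loss for a single example.
    return (jnp.dot(x, w) - y) ** 2

# Gradient w.r.t. w for one example, vmapped over the batch axis of x and y:
per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))

w = jnp.zeros(3)
xs = jnp.ones((8, 3))
ys = jnp.ones(8)
g = per_example_grads(w, xs, ys)   # shape (8, 3): one gradient per example
# DP-SGD would now clip each row's norm and add noise before averaging.
```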
||
|
||
|
||
|
||
|
||
|
||
[38;2;255;187;0m[4mTutorials and Blog Posts[0m
|
||
|
||
- **Using JAX to accelerate our research by David Budden and Matteo Hessel** (https://deepmind.com/blog/article/using-jax-to-accelerate-our-research) - Describes the state of JAX and the JAX ecosystem at DeepMind.
- **Getting started with JAX (MLPs, CNNs & RNNs) by Robert Lange** (https://roberttlange.github.io/posts/2020/03/blog-post-10/) - Neural network building blocks from scratch with the basic JAX operators.
- **Learn JAX: From Linear Regression to Neural Networks by Rito Ghosh** (https://www.kaggle.com/code/truthr/jax-0) - A gentle introduction to JAX, using it to implement linear regression, logistic regression, and neural network models and applying them to solve real-world problems.
- **Tutorial: image classification with JAX and Flax Linen by 8bitmp3** (https://github.com/8bitmp3/JAX-Flax-Tutorial-Image-Classification-with-Linen) - Learn how to create a simple convolutional network with the Linen API by Flax and train it to recognize handwritten digits.
- **Plugging Into JAX by Nick Doiron** (https://medium.com/swlh/plugging-into-jax-16c120ec3302) - Compares Flax, Haiku, and Objax on the Kaggle flower classification challenge.
- **Meta-Learning in 50 Lines of JAX by Eric Jang** (https://blog.evjang.com/2019/02/maml-jax.html) - Introduction to both JAX and Meta-Learning (a minimal MAML sketch follows this list).
- **Normalizing Flows in 100 Lines of JAX by Eric Jang** (https://blog.evjang.com/2019/07/nf-jax.html) - Concise implementation of **RealNVP** (https://arxiv.org/abs/1605.08803).
- **Differentiable Path Tracing on the GPU/TPU by Eric Jang** (https://blog.evjang.com/2019/11/jaxpt.html) - Tutorial on implementing path tracing.
- **Ensemble networks by Mat Kelcey** (http://matpalm.com/blog/ensemble_nets) - Ensemble nets are a method of representing an ensemble of models as one single logical model (see the vmap sketch after this list).
- **Out of distribution (OOD) detection by Mat Kelcey** (http://matpalm.com/blog/ood_using_focal_loss) - Implements different methods for OOD detection.
- **Understanding Autodiff with JAX by Srihari Radhakrishna** (https://www.radx.in/jax.html) - Understand how autodiff works using JAX.
- **From PyTorch to JAX: towards neural net frameworks that purify stateful code by Sabrina J. Mielke** (https://sjmielke.com/jax-purify.htm) - Showcases how to go from a PyTorch-like style of coding to a more functional style of coding.
- **Extending JAX with custom C++ and CUDA code by Dan Foreman-Mackey** (https://github.com/dfm/extending-jax) - Tutorial demonstrating the infrastructure required to provide custom ops in JAX.
- **Evolving Neural Networks in JAX by Robert Tjarko Lange** (https://roberttlange.github.io/posts/2021/02/cma-es-jax/) - Explores how JAX can power the next generation of scalable neuroevolution algorithms.
- **Exploring hyperparameter meta-loss landscapes with JAX by Luke Metz** (http://lukemetz.com/exploring-hyperparameter-meta-loss-landscapes-with-jax/) - Demonstrates how to use JAX to perform inner-loss optimization with SGD and Momentum, outer-loss optimization with gradients, and outer-loss optimization using evolutionary strategies.
- **Deterministic ADVI in JAX by Martin Ingram** (https://martiningram.github.io/deterministic-advi/) - Walks through implementing automatic differentiation variational inference (ADVI) easily and cleanly with JAX.
- **Evolved channel selection by Mat Kelcey** (http://matpalm.com/blog/evolved_channel_selection/) - Trains a classification model robust to different combinations of input channels at different resolutions, then uses a genetic algorithm to decide the best combination for a particular loss.
- **Introduction to JAX by Kevin Murphy** (https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/jax_intro.ipynb) - Colab that introduces various aspects of the language and applies them to simple ML problems.
- **Writing an MCMC sampler in JAX by Jeremie Coullon** (https://www.jeremiecoullon.com/2020/11/10/mcmcjax3ways/) - Tutorial on the different ways to write an MCMC sampler in JAX along with speed benchmarks.
- **How to add a progress bar to JAX scans and loops by Jeremie Coullon** (https://www.jeremiecoullon.com/2021/01/29/jax_progress_bar/) - Tutorial on how to add a progress bar to compiled loops in JAX using the `host_callback` module (a host-callback sketch follows this list).
- **Get started with JAX by Aleksa Gordić** (https://github.com/gordicaleksa/get-started-with-JAX) - A series of notebooks and videos going from zero JAX knowledge to building neural networks in Haiku.
- **Writing a Training Loop in JAX + FLAX by Saurav Maheshkar and Soumik Rakshit** (https://wandb.ai/jax-series/simple-training-loop/reports/Writing-a-Training-Loop-in-JAX-FLAX--VmlldzoyMzA4ODEy) - A tutorial on writing a simple end-to-end training and evaluation pipeline in JAX, Flax, and Optax (a minimal train-step sketch follows this list).
- **Implementing NeRF in JAX by Soumik Rakshit and Saurav Maheshkar** (https://wandb.ai/wandb/nerf-jax/reports/Implementing-NeRF-in-JAX--VmlldzoxODA2NDk2?galleryTag=jax) - A tutorial on 3D volumetric rendering of scenes represented by Neural Radiance Fields in JAX.
- **Deep Learning tutorials with JAX+Flax by Phillip Lippe** (https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/JAX/tutorial2/Introduction_to_JAX.html) - A series of notebooks explaining various deep learning concepts, from basics (e.g., intro to JAX/Flax, activation functions) to recent advances (e.g., Vision Transformers, SimCLR), with translations to PyTorch.
- **Achieving 4000x Speedups with PureJaxRL** (https://chrislu.page/blog/meta-disco/) - A blog post on how JAX can massively speed up RL training through vectorisation.
- **Simple PDE solver + Constrained Optimization with JAX by Philip Mocz** (https://levelup.gitconnected.com/create-your-own-automatically-differentiable-simulation-with-python-jax-46951e120fbb?sk=e8b9213dd2c6a5895926b2695d28e4aa) - A simple example of solving the advection-diffusion equations with JAX and using it in a constrained optimization problem to find initial conditions that yield a desired result.
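
For the meta-learning post referenced above, the core MAML trick fits in a few lines: the inner SGD update is an ordinary differentiable function, so `jax.grad` can differentiate straight through it. A minimal sketch with an invented scalar regression task standing in for the post's richer example:

```python
import jax
import jax.numpy as jnp

def loss(theta, x, y):
    # Toy scalar-parameter regression loss (illustrative only).
    return jnp.mean((x * theta - y) ** 2)

def inner_update(theta, x, y, lr=0.1):
    # One SGD step on the task's support set; differentiable end to end.
    return theta - lr * jax.grad(loss)(theta, x, y)

def maml_loss(theta, x1, y1, x2, y2):
    # Query-set loss evaluated after adapting theta on the support set.
    return loss(inner_update(theta, x1, y1), x2, y2)

# Meta-gradient: differentiates through the inner SGD step.
meta_grad = jax.grad(maml_loss)(
    jnp.array(1.0), jnp.ones(4), 2.0 * jnp.ones(4), jnp.ones(4), 2.0 * jnp.ones(4)
)
print(meta_grad)
```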
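The ensemble-nets entry above boils down to one `vmap`: stack each member's parameters along a leading axis and map the forward pass over that axis, so the whole ensemble executes as a single logical model. A minimal sketch, assuming a made-up linear model rather than the post's networks:

```python
import jax
import jax.numpy as jnp

def forward(params, x):
    # Forward pass for a single ensemble member (a toy linear layer).
    w, b = params
    return x @ w + b

n_models, d_in, d_out = 4, 3, 2
keys = jax.random.split(jax.random.PRNGKey(0), n_models)
# One set of weights per ensemble member, stacked on a leading "model" axis.
params = (
    jax.vmap(lambda k: jax.random.normal(k, (d_in, d_out)))(keys),
    jnp.zeros((n_models, d_out)),
)

x = jnp.ones((5, d_in))
# vmap over the model axis of params while sharing the same input batch.
preds = jax.vmap(forward, in_axes=(0, None))(params, x)  # (n_models, 5, d_out)
ensemble_pred = preds.mean(axis=0)
```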
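The progress-bar post above hinges on calling back to the host from inside compiled code. The post uses the `host_callback` module, which has since been deprecated in newer JAX releases; this minimal sketch uses `jax.debug.callback` instead, but the shape of the trick is the same:

```python
import jax
import jax.numpy as jnp

def report(i):
    # Runs on the host, outside the compiled computation.
    print(f"step {i}")

@jax.jit
def run(x0):
    def body(carry, i):
        # Emit progress every 25 steps without breaking scan's tracing.
        jax.lax.cond(
            i % 25 == 0,
            lambda: jax.debug.callback(report, i),
            lambda: None,
        )
        return carry + 1.0, None

    final, _ = jax.lax.scan(body, x0, jnp.arange(100))
    return final

run(jnp.array(0.0))
```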
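And for the training-loop tutorial above, the essential JAX + Flax + Optax ingredients fit in one jitted step. A minimal sketch, assuming an invented two-layer MLP and random data in place of the tutorial's dataset:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(32)(x))
        return nn.Dense(1)(x)

model = MLP()
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 4)))
tx = optax.adam(1e-3)
opt_state = tx.init(params)

@jax.jit
def train_step(params, opt_state, x, y):
    def loss_fn(p):
        pred = model.apply(p, x)
        return jnp.mean((pred - y) ** 2)

    loss, grads = jax.value_and_grad(loss_fn)(params)
    updates, opt_state = tx.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, loss

x = jax.random.normal(jax.random.PRNGKey(1), (16, 4))
y = jnp.zeros((16, 1))
params, opt_state, loss = train_step(params, opt_state, x, y)
```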

Books

- **Jax in Action** (https://www.manning.com/books/jax-in-action) - A hands-on guide to using JAX for deep learning and other mathematically intensive applications.

Community

- **JaxLLM (Unofficial) Discord** (https://discord.com/channels/1107832795377713302/1107832795688083561)
- **JAX GitHub Discussions** (https://github.com/google/jax/discussions)
- **Reddit** (https://www.reddit.com/r/JAX/)

Contributing

Contributions welcome! Read the **contribution guidelines** (contributing.md) first.

GitHub: https://github.com/n2cholas/awesome-jax
|