From 8083aed53f6f673c0e8a97d28d91110fd4f8fb77 Mon Sep 17 00:00:00 2001
From: Annamalai
Date: Thu, 21 Dec 2023 23:42:32 +0530
Subject: [PATCH] post dec20 fix4

---
 _posts/2023-1-20-The_Heilmeier_Catechism.md   |  2 +-
 _posts/2023-12-20-TIL.md                      | 44 -------------------
 _posts/2023-12-20-TIL_Dec_20.md               |  4 +-
 ...-12-21-TIL.md => 2023-12-21-TIL_Dec_21.md} |  0
 4 files changed, 2 insertions(+), 48 deletions(-)
 delete mode 100644 _posts/2023-12-20-TIL.md
 rename _posts/{2023-12-21-TIL.md => 2023-12-21-TIL_Dec_21.md} (100%)

diff --git a/_posts/2023-1-20-The_Heilmeier_Catechism.md b/_posts/2023-1-20-The_Heilmeier_Catechism.md
index 9b89401fca951..898fb9c37add1 100644
--- a/_posts/2023-1-20-The_Heilmeier_Catechism.md
+++ b/_posts/2023-1-20-The_Heilmeier_Catechism.md
@@ -8,7 +8,7 @@ title: The Heilmeier Catechism
 
 - George H. Heilmeier, a former DARPA director (1975-1977), crafted a set of questions known as the **Heilmeier Catechism** to help Agency officials think through and evaluate proposed research programs.
 
-> ### The 8 Questions
+### The 8 Questions
 
 1. What are you trying to do? Articulate your objectives using absolutely no jargon.
 
diff --git a/_posts/2023-12-20-TIL.md b/_posts/2023-12-20-TIL.md
deleted file mode 100644
index ebec520a4ec94..0000000000000
--- a/_posts/2023-12-20-TIL.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-layout: post
-title: TIL (20/12/23)
----
-
-## Bayesian Optimization
-
-**Bayesian optimization** is a powerful strategy for finding the extrema of objective functions that are expensive to evaluate. It is particularly useful when these evaluations are costly, when one does not have access to derivatives, or when the problem at hand is non-convex.
-
-```
- The Bayesian Optimization algorithm can be summarized as follows:
-
- 1. Select a Sample by Optimizing the Acquisition Function.
- 2. Evaluate the Sample With the Objective Function.
- 3. Update the Data and, in turn, the Surrogate Function.
- 4. Go To 1.
-```
-
-- It uses two components:
-  - **Surrogate function** - approximates the mapping between the sampled inputs and their objective values. There are many ways to model it; one is a Random Forest or a Gaussian Process (GP, with many different kernels), i.e., here we are approximating the *objective function* with something that can be *easily sampled*.
-  - **Acquisition function** - proposes the next sample to be evaluated by the objective function. The sample is found by optimizing this function (by various methods), and it balances exploitation and exploration (E&E) [1].
-
-- It is widely used for hyperparameter tuning, e.g. in Optuna and HyperOpt.
-
-## Optuna
-
-- [API Reference](https://optuna.readthedocs.io/en/stable/reference/index.html)
-
-- This [video](https://www.youtube.com/watch?v=t-INgABWULw) gives a good intro.
-
-I am trying to use it for HPO of the Lunar Lander environment; initially the results weren't that good. I think it's because I didn't give a *proper interval*, i.e., a large interval won't result in a good choice of hyperparameters. Maybe I have to try other ways to make it work.
-
-## Paper Reading
-
-- I'm reading the paper "[Improving Environment Robustness of Deep Reinforcement Learning Approaches for Autonomous Racing Using Bayesian Optimization-based Curriculum Learning](https://arxiv.org/pdf/2312.10557.pdf)", which I saw on arXiv (in cs.RO) today. They propose a method for **automated curriculum selection**, which led me to learn about Bayesian Optimization.
-
-- *Curriculum Learning* is about **spoon-feeding different environments for agent exploration**, so that we can get a robust policy.
-
-## References
-
-1. https://machinelearningmastery.com/what-is-bayesian-optimization/
-
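For the Optuna usage the post above describes, the whole HPO loop reduces to an objective plus a search space. Below is a minimal sketch, not the author's actual Lunar Lander setup: the quadratic `objective` and the interval (-10, 10) are illustrative stand-ins for an expensive training run and its hyperparameter range. Optuna's default sampler (TPE) is itself a Bayesian-optimization-style method.

```python
import optuna

def objective(trial):
    # One trial = one candidate hyperparameter drawn from the interval.
    # A badly chosen interval is exactly the "proper interval" issue
    # mentioned in the post.
    x = trial.suggest_float("x", -10.0, 10.0)  # illustrative search range
    return (x - 2.0) ** 2                      # stand-in for a real training metric

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```
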
diff --git a/_posts/2023-12-20-TIL_Dec_20.md b/_posts/2023-12-20-TIL_Dec_20.md
index 58b1567cb1130..0a3190c4a6ffa 100644
--- a/_posts/2023-12-20-TIL_Dec_20.md
+++ b/_posts/2023-12-20-TIL_Dec_20.md
@@ -1,20 +1,18 @@
 ---
 layout: post
-title: TIL(20/12/23)
+title: TIL (20/12/23)
 ---
 
 ## Bayesian Optimization
 
 **Bayesian optimization** is a powerful strategy for finding the extrema of objective functions that are expensive to evaluate. It is particularly useful when these evaluations are costly, when one does not have access to derivatives, or when the problem at hand is non-convex.
 
-```
  The Bayesian Optimization algorithm can be summarized as follows:
 
  1. Select a Sample by Optimizing the Acquisition Function.
  2. Evaluate the Sample With the Objective Function.
  3. Update the Data and, in turn, the Surrogate Function.
  4. Go To 1.
-```
 
 - It uses two components:
   - **Surrogate function** - approximates the mapping between the sampled inputs and their objective values. There are many ways to model it; one is a Random Forest or a Gaussian Process (GP, with many different kernels), i.e., here we are approximating the *objective function* with something that can be *easily sampled*.
diff --git a/_posts/2023-12-21-TIL.md b/_posts/2023-12-21-TIL_Dec_21.md
similarity index 100%
rename from _posts/2023-12-21-TIL.md
rename to _posts/2023-12-21-TIL_Dec_21.md
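To make the four-step loop summarized in the posts concrete, here is a minimal sketch of Bayesian optimization on a 1-D toy objective. It assumes a scikit-learn Gaussian Process as the surrogate and expected improvement as the acquisition function, one common pairing among the options the posts mention; optimizing the acquisition by dense grid evaluation is a simplification that only works in low dimensions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    """Toy stand-in for an expensive black-box function."""
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))  # initial design points
y = objective(X).ravel()

gp = GaussianProcessRegressor()      # surrogate over the objective
candidates = np.linspace(-3, 3, 400).reshape(-1, 1)

for _ in range(20):
    gp.fit(X, y)                                       # 3. update the surrogate
    mu, sd = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / (sd + 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement (minimization)
    x_next = candidates[np.argmax(ei)].reshape(1, 1)   # 1. optimize the acquisition
    y_next = objective(x_next).ravel()                 # 2. evaluate the objective
    X = np.vstack([X, x_next])                         # 4. append data, go to 1
    y = np.concatenate([y, y_next])

print("best x:", float(X[y.argmin()][0]), "best f(x):", float(y.min()))
```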