diff --git a/8-Reinforcement/1-QLearning/solution/assignment-solution.ipynb b/8-Reinforcement/1-QLearning/solution/assignment-solution.ipynb
index 9c1adae003..dc5c0e6828 100644
--- a/8-Reinforcement/1-QLearning/solution/assignment-solution.ipynb
+++ b/8-Reinforcement/1-QLearning/solution/assignment-solution.ipynb
@@ -246,7 +246,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "Q = np.ones((width,height,len(actions)),dtype=np.float)*1.0/len(actions)"
+    "Q = np.ones((width,height,len(actions)),dtype=np.float64)*1.0/len(actions)"
    ]
   },
   {
diff --git a/8-Reinforcement/1-QLearning/solution/notebook.ipynb b/8-Reinforcement/1-QLearning/solution/notebook.ipynb
index de19ad0bd3..9bbc6c95c0 100644
--- a/8-Reinforcement/1-QLearning/solution/notebook.ipynb
+++ b/8-Reinforcement/1-QLearning/solution/notebook.ipynb
@@ -264,7 +264,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "Q = np.ones((width,height,len(actions)),dtype=np.float)*1.0/len(actions)"
+    "Q = np.ones((width,height,len(actions)),dtype=np.float64)*1.0/len(actions)"
    ]
   },
   {
diff --git a/8-Reinforcement/2-Gym/README.md b/8-Reinforcement/2-Gym/README.md
index 6d272116ed..c3c0cd9472 100644
--- a/8-Reinforcement/2-Gym/README.md
+++ b/8-Reinforcement/2-Gym/README.md
@@ -24,9 +24,9 @@ In this lesson, we will be using a library called **OpenAI Gym** to simulate dif
 
 ## OpenAI Gym
 
-In the previous lesson, the rules of the game and the state were given by the `Board` class which we defined ourselves. Here we will use a special **simulation environment**, which will simulate the physics behind the balancing pole. One of the most popular simulation environments for training reinforcement learning algorithms is called a [Gym](https://gym.openai.com/), which is maintained by [OpenAI](https://openai.com/). By using this gym we can create difference **environments** from a cartpole simulation to Atari games.
+In the previous lesson, the rules of the game and the state were given by the `Board` class which we defined ourselves. Here we will use a special **simulation environment** that simulates the physics behind the balancing pole. One of the most popular simulation environments for training reinforcement learning algorithms is [Gym](https://gymnasium.farama.org/), originally created by [OpenAI](https://openai.com/) and now maintained by the Farama Foundation as Gymnasium. Using this gym, we can create different **environments**, from a cartpole simulation to Atari games.
 
-> **Note**: You can see other environments available from OpenAI Gym [here](https://gym.openai.com/envs/#classic_control).
+> **Note**: You can see the other environments available in Gym [here](https://gymnasium.farama.org/environments/classic_control/).
 
 First, let's install the gym and import required libraries (code block 1):
diff --git a/8-Reinforcement/2-Gym/assignment.md b/8-Reinforcement/2-Gym/assignment.md
index 9bffaa7c47..603c79daaf 100644
--- a/8-Reinforcement/2-Gym/assignment.md
+++ b/8-Reinforcement/2-Gym/assignment.md
@@ -1,10 +1,10 @@
 # Train Mountain Car
 
-[OpenAI Gym](http://gym.openai.com) has been designed in such a way that all environments provide the same API - i.e. the same methods `reset`, `step` and `render`, and the same abstractions of **action space** and **observation space**. Thus is should be possible to adapt the same reinforcement learning algorithms to different environments with minimal code changes.
+[OpenAI Gym](https://gymnasium.farama.org) has been designed in such a way that all environments provide the same API, i.e. the same methods `reset`, `step` and `render`, and the same abstractions of **action space** and **observation space**. Thus it should be possible to adapt the same reinforcement learning algorithms to different environments with minimal code changes.
 
 ## A Mountain Car Environment
 
-[Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) contains a car stuck in a valley:
+The [Mountain Car environment](https://gymnasium.farama.org/environments/classic_control/mountain_car/) contains a car stuck in a valley:
diff --git a/8-Reinforcement/2-Gym/solution/notebook.ipynb b/8-Reinforcement/2-Gym/solution/notebook.ipynb
index b6338c8962..b576eb27ef 100644
--- a/8-Reinforcement/2-Gym/solution/notebook.ipynb
+++ b/8-Reinforcement/2-Gym/solution/notebook.ipynb
@@ -209,7 +209,7 @@
    "outputs": [],
    "source": [
     "def discretize(x):\n",
-    " return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(np.int))"
+    " return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(np.int_))"
    ]
   },
   {
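
All of the code changes in this diff replace NumPy scalar aliases (`np.float`, `np.int`) that were deprecated in NumPy 1.20 and removed in NumPy 1.24 with their supported equivalents (`np.float64`, `np.int_`). A quick sketch of the updated expressions running on current NumPy; note that `width`, `height`, and the `actions` list below are illustrative stand-ins, not the values defined in the notebooks:

```python
import numpy as np

# Illustrative stand-ins; the notebooks derive these from the board/environment.
width, height = 8, 8
actions = ["U", "D", "L", "R"]

# Q-table initialized to a uniform distribution over actions.
# np.float was an alias for the builtin float (removed in NumPy 1.24);
# np.float64 names the concrete dtype explicitly.
Q = np.ones((width, height, len(actions)), dtype=np.float64) * 1.0 / len(actions)

# Discretize a continuous CartPole observation into integer buckets.
# np.int (an alias for the builtin int) was removed as well;
# np.int_ is NumPy's default integer dtype.
def discretize(x):
    return tuple((x / np.array([0.25, 0.25, 0.01, 0.1])).astype(np.int_))

print(Q.shape)      # (8, 8, 4)
print(Q[0, 0, 0])   # 0.25
print(discretize(np.array([0.25, 0.5, 0.02, 0.1])))  # (1, 2, 2, 1)
```

The fixes are drop-in: the array contents and the Q-learning logic are unchanged, only the dtype names are updated.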