
Introduce Enemies and Rewards

00:00 Introduce Enemies and Rewards to the Maze. Instead of using pure distance as an edge weight to assess how good a path is, you can take enemies and rewards into account to encourage or discourage certain paths in the maze.

00:16 Because you already recorded squares with special roles, including rewards and enemies, as nodes in the graph, you can assume they’ll be present at either end of an edge.

00:26 This means you don’t have to consider the squares that possibly exist between a pair of nodes connected by an edge.

00:34 The smaller the weight, the lower the cost of the edge connecting two nodes. Therefore, you can assign a lower weight to edges that pass through a reward.

00:44 This will cause the algorithm to prioritize such paths over others. Conversely, you can set a higher weight to edges leading to undesirable destinations.

00:54 Giving a penalty for enemies will make the algorithm avoid these paths.

01:01 You must carefully consider the scale and distribution of your edge weights. In particular, you shouldn’t have any negative weights in the graph because Dijkstra’s algorithm, which NetworkX uses by default, can’t cope with them.

01:14 While you could try using the Bellman-Ford algorithm instead, it will fail as soon as it finds a negative cycle in your graph—that is, a cycle whose edge weights add up to a negative value. So, it’s best to avoid negative weights altogether.
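To see this failure mode concretely, here is a small sketch (node names are illustrative, not from the course) of a graph with a negative cycle, where NetworkX's Bellman-Ford implementation gives up:

```python
import networkx as nx

# A tiny directed graph containing a negative cycle:
# A -> B (2) and B -> A (-3) add up to -1.
graph = nx.DiGraph()
graph.add_edge("A", "B", weight=2)
graph.add_edge("B", "A", weight=-3)
graph.add_edge("B", "C", weight=1)

try:
    nx.bellman_ford_path(graph, "A", "C", weight="weight")
except nx.NetworkXUnbounded as error:
    # Bellman-Ford detects the negative cycle and refuses to continue,
    # since looping through it forever would keep lowering the cost.
    print(error)
```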

01:27 How you assign the weights depends on the specific problem, and you may need to experiment a little. Here, you are going to follow a few rules, which are seen on-screen.

01:37 You will want to avoid enemies and seek rewards. Given two paths with equal lengths, you would prefer one with more rewards. Unless there’s no other choice, you will follow a path with fewer enemies.

01:53 And if a path contains both a reward and an enemy, then you will give it a lower priority compared to an equivalent path without any rewards or enemies. And you’ll use the distance between an edge’s nodes as the baseline weight, measured in squares in the maze.

02:09 One way to achieve this is to turn your maze into a directed graph and tie each edge’s weight to its target node, adjusting it by some arbitrary bonus and penalty points. For example, you can subtract one bonus point from an edge’s weight for each reward while adding two penalty points for each enemy.

02:29 This will always penalize enemies more than reward bonuses, giving the algorithm a preference for safer paths at the cost of potentially missing out on rewards. To ensure that all edge weights are greater than zero, you could offset them so that the lowest weight is always positive.

02:46 Unfortunately, adding a fixed offset to the weights would again make them depend on the number of superfluous nodes in the graph rather than the actual distance in the maze.

02:56 Therefore, you won’t be doing that. Instead, you’ll implement this method in the Edge class to calculate the weight of a directed edge.

03:25 By default, this method subtracts one point from the baseline distance if the current edge leads to a reward. On the other hand, if there’s an enemy at the end of this edge, then the method adds two penalty points to increase the cost of that connection. Otherwise, the weight of an edge is equal to its distance.

03:46 When both nodes connected by an edge are located on adjacent squares in the maze, the minimum distance is equal to one unit.

03:57 Feel free to tweak the bonus and penalty points to your liking—for example, to favor even more distant rewards, making bigger detours from the shortest path worthwhile. However, note that while you can safely bump up penalty points, using more bonus points will increase the risk of ending up with a negative weight, which would lead to an error. At the moment, the graph you made is undirected, meaning you can’t know whether the search algorithm is moving towards the first or second node in the edge. Go ahead and replace it with a directed graph, using the new method as the weight.

04:55 You create an nx.DiGraph object instead of nx.Graph to represent the graph as directed.

05:29 The .flip property of an edge returns a new Edge instance with its two nodes reversed. You use it in get_directed_edges() to return a set of edges in both directions based on the undirected edges. Depending on the direction, the same edge may have different weights.
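One way these pieces could fit together, using placeholder string nodes instead of the course’s Node objects (a sketch under those assumptions, not the course’s exact code):

```python
from typing import NamedTuple

import networkx as nx

class Edge(NamedTuple):
    node1: str
    node2: str

    @property
    def flip(self) -> "Edge":
        # The same connection, traversed in the opposite direction.
        return Edge(self.node2, self.node1)

def get_directed_edges(edges: set[Edge]) -> set[Edge]:
    # Every undirected edge becomes two directed ones.
    return edges | {edge.flip for edge in edges}

edges = {Edge("A", "B"), Edge("B", "C")}

# A directed graph lets each direction carry its own weight.
graph = nx.DiGraph()
graph.add_edges_from(
    (edge.node1, edge.node2) for edge in get_directed_edges(edges)
)
```

In the course’s version, each directed edge would get a `weight` attribute computed from the edge’s target node, as described earlier.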

05:47 To put your new weights to the test, try solving sample mazes, but this time, add enemies and rewards into them. You can streamline this process by making your maze_solver package runnable, and that’s what you’ll be doing in the next section of the course.
