Introduce Enemies and Rewards
00:00 Introduce Enemies and Rewards to the Maze. Instead of using pure distance as an edge weight to assess how good a path is, you can take enemies and rewards into account to encourage or discourage certain paths in the maze.
01:01 You must carefully consider the scale and distribution of your edge weights. In particular, you shouldn’t have any negative weights in the graph because Dijkstra’s algorithm, which NetworkX uses by default, can’t cope with them.
01:14 While you could try using the Bellman-Ford algorithm instead, it will fail as soon as it finds a negative cycle in your graph, which is a cycle whose edge weights add up to a negative value. So, it’s best to avoid negative weights altogether.
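To see this failure mode in action, here’s a minimal sketch using NetworkX on a made-up directed graph, where the cycle B → C → B sums to a negative value. The graph itself is hypothetical, not the maze from the course:

```python
import networkx as nx

# A tiny directed graph with a negative cycle: B -> C -> B adds up to -2
G = nx.DiGraph()
G.add_weighted_edges_from(
    [("A", "B", 1), ("B", "C", -3), ("C", "B", 1), ("C", "D", 2)]
)

try:
    nx.bellman_ford_path(G, "A", "D")
except nx.NetworkXUnbounded as error:
    # Bellman-Ford bails out because no shortest path exists:
    # looping through the negative cycle keeps lowering the total cost.
    print("Bellman-Ford failed:", error)
```

Dijkstra would be even worse here: it assumes non-negative weights and can silently return a wrong path, which is why avoiding negative weights entirely is the safer design.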
01:37 You will want to avoid enemies and seek rewards. Given two paths with equal lengths, you would prefer one with more rewards. Unless there’s no other choice, you will follow a path with fewer enemies.
01:53 And if a path contains both a reward and an enemy, then you will give it a lower priority compared to an equivalent path without any rewards or enemies. And you’ll use the distance between an edge’s nodes as the baseline weight, measured in squares in the maze.
02:09 One way to achieve this is to turn your maze into a directed graph and tie the edge weights to their target nodes, adjusted by some arbitrary bonus and penalty points. For example, you can subtract one bonus point from an edge’s weight for each reward while adding two penalty points for each enemy.
02:29 Because the penalty always outweighs the bonus, this gives the algorithm a preference for safer paths at the cost of potentially missing out on rewards. To ensure that all edge weights are greater than zero, you could offset them so that the lowest weight is always positive.
03:25 By default, this method subtracts one point from the baseline distance if the current edge leads to a reward. On the other hand, if there’s an enemy at the end of this edge, then the method adds two penalty points to increase the cost of that connection. Otherwise, the weight of an edge is equal to its distance.
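As a concrete illustration, here’s a minimal, self-contained sketch of such a weight method. The Node and Edge classes, the role attribute, and the "reward" and "enemy" labels are hypothetical stand-ins for the course’s actual data model:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    row: int
    column: int
    role: str = "none"  # hypothetical: "reward", "enemy", or "none"


@dataclass(frozen=True)
class Edge:
    node1: Node
    node2: Node

    @property
    def distance(self) -> int:
        """Manhattan distance between the nodes, in maze squares."""
        return abs(self.node1.row - self.node2.row) + abs(
            self.node1.column - self.node2.column
        )

    def weight(self, bonus: int = 1, penalty: int = 2) -> int:
        """Distance adjusted by what's waiting at the target node."""
        if self.node2.role == "reward":
            return self.distance - bonus
        if self.node2.role == "enemy":
            return self.distance + penalty
        return self.distance
```

With these defaults, an edge spanning two squares weighs 1 if it leads to a reward, 4 if it leads to an enemy, and 2 otherwise, matching the priorities described above.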
03:57 Feel free to tweak the bonus and penalty points to your liking—for example, to favor even more distant rewards, making bigger detours from the shortest path worthwhile. However, note that while you can safely bump up penalty points, using more bonus points will increase the risk of ending up with a negative weight, which would lead to an error. At the moment, the graph you made is undirected, meaning you can’t know whether the search algorithm is moving towards the first or second node in the edge. Go ahead and replace it with a directed graph, using the new method as the weight.
The .flip property of an edge returns a new Edge instance with its two nodes reversed. You use it in get_directed_edges() to return a set of edges in both directions based on the undirected edges. Depending on the direction, the same edge may have different weights.
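Piecing that description together, a minimal sketch might look like this. The Edge class and get_directed_edges() below are assumptions modeled on the description above, not the course’s exact code:

```python
from typing import NamedTuple


class Edge(NamedTuple):
    node1: str
    node2: str

    @property
    def flip(self) -> "Edge":
        """A new Edge instance with the two nodes reversed."""
        return Edge(self.node2, self.node1)


def get_directed_edges(edges: set[Edge]) -> set[Edge]:
    """Return every undirected edge in both directions."""
    return edges | {edge.flip for edge in edges}


directed = get_directed_edges({Edge("A", "B"), Edge("B", "C")})
print(sorted(directed))
```

Each undirected edge becomes two directed ones, so a weight method keyed to the target node can assign a different cost to each direction of the same connection.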
To put your new weights to the test, try solving sample mazes, but this time, add enemies and rewards to them. You can streamline this process by making your maze_solver package runnable, and that’s what you’ll be doing in the next section of the course.