Demystifying Heuristic Search Algorithms
Introduction:

In the vast landscape of artificial intelligence, heuristic search algorithms are a beacon of efficiency and optimization. These algorithms, which cleverly estimate states’ proximity to goals, are invaluable in domains ranging from video games to robotics.

This blog post will delve into the mechanics of heuristic-driven searches, focusing on key algorithms like Greedy Search and A*. We’ll explore how heuristics guide these algorithms toward their objectives, the challenges they face, and how they excel in various applications.

Whether you’re a software engineer, researcher, or just curious about AI, this exploration will offer valuable insights into the art and science of informed search.

What are Heuristics?

A heuristic is a function that estimates how close a state is to a goal. Each heuristic is designed for a particular search problem. Some heuristic examples for pathfinding are the Manhattan distance and Euclidean distance. A heuristic function (h(x)) takes the state as input and outputs the associated heuristic value.
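The two distance heuristics mentioned above can be sketched in a few lines. This is a minimal illustration, assuming grid states are represented as (x, y) coordinate pairs:

```python
# Two common pathfinding heuristics on a grid, where a state and the
# goal are both (x, y) coordinate pairs.

def manhattan_distance(state, goal):
    """Admissible on grids that allow only 4-directional moves."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def euclidean_distance(state, goal):
    """Admissible whenever straight-line movement is the cheapest possible."""
    return ((state[0] - goal[0]) ** 2 + (state[1] - goal[1]) ** 2) ** 0.5
```

Note that Manhattan distance is the tighter (larger) estimate on a 4-connected grid, while Euclidean distance remains admissible even when diagonal movement is allowed.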

Admissible Heuristics:

Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe. Admissible (optimistic) heuristics slow down bad plans but never outweigh actual costs.

A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the actual cost from n to the nearest goal. Most of what's involved in using A* in practice is coming up with admissible heuristics.
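On a small graph, the definition can be checked directly by computing the true cost-to-go h*(n) and comparing it against the heuristic. A minimal sketch, assuming an undirected graph stored as nested dictionaries of edge costs (the function names `true_costs_to_goal` and `is_admissible` are illustrative, not from any library):

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Exact cost-to-go h*(n), via Dijkstra's algorithm run from the goal.
    `graph` maps each state to {neighbor: edge_cost}, with symmetric costs."""
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(frontier, (nd, nbr))
    return dist

def is_admissible(h, graph, goal):
    """True iff 0 <= h(n) <= h*(n) for every state that can reach the goal."""
    h_star = true_costs_to_goal(graph, goal)
    return all(0 <= h(n) <= h_star[n] for n in h_star)
```

In practice this check is only feasible on toy problems, since computing h* exactly is as hard as solving the search problem itself; its value is in sanity-checking a candidate heuristic on small instances.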

Greedy Search:

In greedy search, the node closest to the goal is expanded first. A heuristic estimates the distance to the nearest goal for each state. However, sometimes, this approach leads the algorithm to the wrong goal. In the worst case, this algorithm performs like a poorly guided DFS.
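The strategy above can be sketched as a priority-queue search ordered purely by h(n). This is a minimal illustration (the `neighbors` callback and the graph below are assumptions for the example, not part of any standard API):

```python
import heapq

def greedy_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the fringe node with the
    smallest heuristic value h(n). `neighbors(n)` yields successor states.
    Complete on finite graphs, but not optimal."""
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr in neighbors(node):
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (h(nbr), nbr, [*path, nbr]))
    return None
```

If a dead-end state happens to have a low heuristic value, greedy search will chase it first, which is exactly the badly guided behavior described above.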

A* Search:

The A* algorithm is like a combination of UCS and greedy search. Uniform-cost search orders by path cost, or backward cost, g(n). Greedy search orders by goal proximity, or forward cost, h(n). A* search orders by the sum of both: f(n) = g(n) + h(n).
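The f = g + h ordering can be sketched as follows. This is a minimal illustration, assuming `neighbors(n)` yields (successor, step_cost) pairs and `h` is a callable; the tie-breaking counter is a common idiom so heap entries never compare on states themselves:

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, h):
    """A* search ordering the fringe by f(n) = g(n) + h(n)."""
    tie = count()  # tie-breaker for equal f-values
    frontier = [(h(start), next(tie), start, 0, [start])]
    best_g = {start: 0}
    while frontier:
        f, _, node, g, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h(nbr), next(tie), nbr, new_g, [*path, nbr]),
                )
    return None
```

Setting h to zero everywhere degrades this to uniform-cost search; dropping the g term degrades it to greedy search, which is the sense in which A* combines the two.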

An issue that can arise with A* is that it can return a suboptimal path if the heuristic estimate is larger than the actual remaining cost. For example, the optimal path might be S→A→G with a cost of 4, but if the heuristic estimates the remaining cost from A to G as 6, the algorithm could instead take the direct path S→G.
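This failure mode can be reproduced concretely. The sketch below is one instance of the S/A/G example with assumed edge costs (S→A = 1, A→G = 3, S→G = 5), using a compact A* so the example is self-contained:

```python
import heapq

def a_star(start, goal, graph, h):
    """Minimal A* over `graph[state] -> [(successor, cost), ...]`."""
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + h[nbr], new_g, nbr, [*path, nbr]))
    return None

graph = {"S": [("A", 1), ("G", 5)], "A": [("G", 3)], "G": []}

# Inadmissible: h(A) = 6 overestimates the true cost-to-go of 3,
# so A puts f(A) = 1 + 6 = 7 on the fringe while f(G) = 5 + 0 = 5.
bad_h = {"S": 0, "A": 6, "G": 0}
print(a_star("S", "G", graph, bad_h))    # (5, ['S', 'G']) -- suboptimal

# Admissible estimates recover the optimal path S -> A -> G.
good_h = {"S": 4, "A": 3, "G": 0}
print(a_star("S", "G", graph, good_h))   # (4, ['S', 'A', 'G'])
```

The overestimate at A pushes f(A) above f(G), so the goal exits the fringe via the expensive direct edge before the cheaper path through A is ever explored.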

Optimality of A* Tree Search:

Assume A is an optimal goal node, B is a suboptimal goal node, and h is admissible. The claim is that A will exit the fringe before B, but does this happen?

Imagine B is on the fringe, and some ancestor n of A is on the fringe, too. If n is expanded before B, then A will exit the fringe before B. And n will indeed be expanded before B, because f(n) ≤ f(A) < f(B): admissibility guarantees that h never overestimates the remaining cost along the optimal path, so f(n) ≤ f(A), and f(A) < f(B) because B is suboptimal and h is zero at every goal. Since this holds for every ancestor of A on the fringe, A exits the fringe before B, and A* tree search with an admissible heuristic is optimal.

Properties of A*:

Uniform-cost search expands equally in all directions. In contrast, A* expands mainly toward the goal while hedging its bets to ensure optimality. Due to its speed and optimality, A* is used in many applications, such as video games, pathing/routing problems, resource planning problems, robot motion planning, language analysis, machine translation, speech recognition, and more.

Creating Admissible Heuristics:

Most of the work in solving complex search problems optimally involves using admissible heuristics. Admissible heuristics are often solutions to relaxed problems where new actions are available. Inadmissible heuristics can also be helpful, however.

Consider the 8-puzzle problem. There are plenty of states and actions in this problem, but only a single start and goal state. One heuristic that could be used is the number of misplaced tiles: for a start state with every tile out of place, h(start) = 8, while h(goal) = 0. This is a relaxed-problem heuristic — it is the exact solution cost of the relaxation in which any tile may be lifted straight to its target square.
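The misplaced-tiles heuristic is a one-liner. A minimal sketch, assuming 8-puzzle states are encoded as length-9 tuples read row by row, with 0 standing for the blank:

```python
def misplaced_tiles(state, goal):
    """Relaxed-problem heuristic for the 8-puzzle: count tiles out of place.
    The blank (0) is not counted as a misplaced tile."""
    return sum(1 for tile, target in zip(state, goal)
               if tile != 0 and tile != target)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (8, 7, 6, 5, 4, 3, 2, 1, 0)   # every tile misplaced: h = 8
```

Excluding the blank keeps the heuristic admissible: each misplaced tile needs at least one move, but the blank's position costs nothing on its own.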

With A*, a trade-off between the quality of the estimate and the work per node needs to be made. As heuristics get closer to the actual cost, fewer nodes will be expanded, but more work per node will be required to compute the heuristic.

Semi-lattice of Heuristics:

When considering multiple heuristics, some may be closer to the optimal cost in some states but further in others. In such a scenario, you can combine the heuristics to get the best of both worlds.
When comparing heuristics, heuristic h_a is said to dominate heuristic h_c if ∀n: h_a(n) ≥ h_c(n); the dominant heuristic is at least as informative in every state.

Heuristics form a semi-lattice in which the max of admissible heuristics is itself admissible: h(n) = max(h_a(n), h_b(n)).
The trivial zero heuristic sits at the bottom of the lattice, and the exact heuristic h* at the top.
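Taking the pointwise max is a one-line combinator. A minimal sketch (the name `max_heuristic` is illustrative):

```python
def max_heuristic(*heuristics):
    """Combine admissible heuristics by pointwise max. The result is still
    admissible (each component is <= the true cost, so the max is too)
    and dominates every component."""
    return lambda state: max(h(state) for h in heuristics)
```

The trade-off discussed earlier applies here as well: the combined heuristic is more informed, but every node expansion now pays for evaluating all component heuristics.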

Graph Search:

When using tree search, the failure to detect repeated states can cause exponentially more work: once a state has been expanded, re-expanding it from another path repeats the entire subtree of work beneath it. With BFS, for example, there is no reason to expand a node whose state has already been searched.

Graph search has the idea never to expand a state twice. To implement graph search, combine tree search with a set of expanded states (the "closed set"), expand the search tree node by node, and check before expanding each node that its state has never been expanded. If the state is new, add it to the closed set and add the node's children to the fringe; if not, skip it.
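The recipe above can be sketched with BFS as the underlying strategy. This is a minimal illustration, assuming `neighbors(n)` yields successor states:

```python
from collections import deque

def graph_search_bfs(start, goal, neighbors):
    """Breadth-first graph search: a closed set guarantees each state is
    expanded at most once, avoiding the exponential blow-up tree search
    suffers on graphs with repeated states."""
    closed = set()
    frontier = deque([[start]])  # FIFO fringe of paths
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue  # state already expanded via another path
        closed.add(node)
        for nbr in neighbors(node):
            if nbr not in closed:
                frontier.append([*path, nbr])
    return None
```

On a graph with cycles (say A↔B, A↔C, both reaching D), tree search would revisit A indefinitely, while the closed set here lets each state be expanded exactly once.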

It’s essential to store the closed set as a set rather than a list, so membership checks take constant time instead of scanning every previously expanded state.

Graph search can improve efficiency but can also ruin optimality. If two paths reach the same state, for example, but the suboptimal one is expanded first, the closed set discards the better path and the search finds the goal sub-optimally.

Consistency Of Heuristics:
An important property to consider is the consistency of heuristics. The main idea is that the heuristic's implied cost for each arc — the drop in heuristic value across it — is less than or equal to the arc's actual cost. This differs from admissibility: instead of bounding the estimated cost all the way to the goal, consistency bounds the estimate arc by arc: h(A) − h(C) ≤ cost(A→C).

The consequences of consistency are that the 𝑓 value along a path never decreases (ℎ(𝐴)≤𝑐𝑜𝑠𝑡(𝐴→𝐶)+ℎ(𝐶)) and A* graph search is optimal.

When a consistent heuristic is used, A* expands nodes in order of increasing total f value (creating f-contours). Also, for every state s, nodes that reach s optimally are expanded before nodes that reach s sub-optimally. The overall result is that A* graph search is optimal.

Note that consistency implies admissibility. Most natural admissible heuristics tend to be consistent, especially from relaxed problems.
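Because consistency is a local, per-arc condition, it is cheap to verify on an explicit graph, unlike admissibility, which needs the true costs-to-go. A minimal sketch, assuming the heuristic is given as a dictionary and the graph as adjacency lists of (successor, cost) pairs:

```python
def is_consistent(h, graph, goal):
    """Check consistency: h(A) <= cost(A -> C) + h(C) for every arc,
    plus h(goal) == 0. `graph[state] -> [(successor, cost), ...]`."""
    if h[goal] != 0:
        return False
    return all(h[a] <= cost + h[c]
               for a, arcs in graph.items()
               for c, cost in arcs)
```

Since every state's heuristic value is then bounded by some path of true arc costs down to the goal, a heuristic that passes this check is automatically admissible, matching the note above.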

Conclusion:
Heuristic search algorithms are vital tools in the AI toolkit, offering speed and precision. By leveraging heuristics, algorithms like Greedy Search and A* can efficiently navigate complex problems. However, as we’ve seen, choosing or designing a suitable heuristic is crucial for achieving optimal results. Informed search algorithms aren’t just theoretical constructs—they profoundly impact our daily lives, from enhancing the realism of video games to optimizing logistics and beyond. By understanding these algorithms and their nuances, we can appreciate the power of AI and its potential to solve real-world challenges.