Final Exam Master Challenge

Coverage: Modules 7-13 (Transform-and-Conquer, Space-Time Trade-offs, Dynamic Programming, Greedy, Limitations, Coping).
Part 1: 70 Multiple Choice Questions. Part 2: 10 Concept Essay Questions.

Part 1: The Gauntlet (MCQ)


Part 2: The Deep Dive (Concepts)

1. Heaps vs. BSTs

Mod 7

Explain the structural and ordering differences between a Heap and a Binary Search Tree (BST). Why is a Heap generally preferred for implementing a Priority Queue?

Difference: In a BST, every key in a node's left subtree is smaller and every key in its right subtree is larger (Left-Right Ordering). In a Heap, the parent dominates BOTH children: larger in a max-heap, smaller in a min-heap (Top-Down Ordering). Structurally, a heap is always a complete binary tree, while a BST's shape depends on insertion order.

Priority Queue: Heaps allow accessing the Min/Max in O(1) and removing it in O(log n). A BST takes O(log n) to find min/max (if balanced) but has higher overhead to maintain balance (AVL/Red-Black). Heaps can also be implemented efficiently as arrays.
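A minimal sketch of this array-backed priority queue, using Python's `heapq` module (a binary min-heap stored in a plain list):

```python
import heapq

heap = []
for key in [5, 1, 9, 3]:
    heapq.heappush(heap, key)    # insert: O(log n)

assert heap[0] == 1              # peek at the minimum: O(1)
assert heapq.heappop(heap) == 1  # delete-min: O(log n)
assert heapq.heappop(heap) == 3  # next-smallest key bubbles up
```

The list indices encode the tree shape (children of index i sit at 2i+1 and 2i+2), so no pointers or rebalancing machinery are needed.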

2. Open vs. Closed Hashing

Mod 9

Compare Open Hashing (Separate Chaining) and Closed Hashing (Open Addressing). Under what conditions does Closed Hashing degrade significantly?

Open Hashing: Keys stored in linked lists outside the table. Handles Load Factor > 1 well.
Closed Hashing: Keys stored inside the table slots. Requires Load Factor < 1.

Degradation: Closed hashing degrades rapidly as the Load Factor approaches 1 (table gets full) due to "Clustering" (long sequences of occupied cells), turning search/insert into O(n).
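A toy sketch of both strategies (the table size of 7 and the modular hash are illustrative). Note how linear probing builds a cluster of adjacent occupied cells when several keys collide:

```python
M = 7
h = lambda k: k % M

# Open hashing (separate chaining): keys live in lists outside the table.
chains = [[] for _ in range(M)]
for k in [10, 17, 24, 3]:        # all four keys hash to slot 3
    chains[h(k)].append(k)

# Closed hashing (linear probing): keys live in the table slots themselves.
table = [None] * M
for k in [10, 17, 24, 3]:
    i = h(k)
    while table[i] is not None:  # probe forward past occupied cells
        i = (i + 1) % M
    table[i] = k

assert chains[3] == [10, 17, 24, 3]       # one long chain, table otherwise empty
assert table[3:7] == [10, 17, 24, 3]      # a cluster of 4 consecutive cells
```

Each collision in the probing version lengthens the cluster, which in turn makes future collisions with *any* key hashing into that run more likely; that feedback loop is what drives the degradation toward O(n).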

3. Dynamic Programming vs. Divide & Conquer

Mod 10

Both techniques split problems into subproblems. What is the fundamental difference that dictates when to use DP instead of D&C? Give an example where D&C fails but DP works.

Difference: D&C assumes the subproblems are independent (disjoint), so each is solved from scratch. DP targets overlapping subproblems: each subproblem is solved once and its result is stored for reuse.

Example: Calculating Fibonacci numbers. D&C naively recalculates F(n-2) multiple times, leading to exponential time. DP stores the result of F(n-2) once and reuses it, achieving linear time.
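A minimal sketch of the contrast, using `functools.lru_cache` for the memoized (top-down DP) version:

```python
from functools import lru_cache

def fib_naive(n):
    """Plain divide-and-conquer: recomputes shared subproblems, exponential time."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """Same recurrence, but each F(k) is computed once and cached: linear time."""
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

assert fib_dp(50) == 12586269025   # instant; fib_naive(50) would take hours
```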

4. The Principle of Optimality

Mod 10

Define the Principle of Optimality. Explain why the Longest Path problem does not satisfy this principle, whereas Shortest Path does.

Definition: An optimal solution to a problem instance must be composed of optimal solutions to its sub-instances.

Shortest Path: If A->B->C is the shortest path to C, then its prefix A->B must be the shortest path to B.
Longest Path: The longest simple path from A to C need not extend the longest simple path from A to B, because the longest A->B path may already pass through vertices needed for the B->C leg; extending it would revisit a vertex, so the path would no longer be simple.
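This can be checked by brute force on a tiny illustrative graph, a triangle with unit-weight edges: the longest simple path A->C turns out to extend the *shortest* A->B path, not the longest one.

```python
from itertools import permutations

# Undirected triangle on {A, B, C}; every edge has weight 1 (illustrative).
edges = {frozenset(e) for e in [('A', 'B'), ('B', 'C'), ('A', 'C')]}

def longest_simple_path(src, dst):
    """Brute-force the longest simple src->dst path over all vertex orderings."""
    best = None
    for k in range(3):
        for mid in permutations(set('ABC') - {src, dst}, k):
            path = (src, *mid, dst)
            if all(frozenset(p) in edges for p in zip(path, path[1:])):
                if best is None or len(path) > len(best):
                    best = path
    return best

assert longest_simple_path('A', 'B') == ('A', 'C', 'B')  # longest A->B uses C
assert longest_simple_path('A', 'C') == ('A', 'B', 'C')  # ...but longest A->C
# cannot extend it: A->C->B already consumed C, so it uses the short A->B edge.
```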

5. Greedy vs. DP in Knapsack

Mod 11

Why does the Greedy strategy (sorting by value/weight) work for the Fractional Knapsack problem but fail for the 0/1 Knapsack problem?

In Fractional Knapsack, we can fill the remaining space with a fraction of the next best item, ensuring no capacity is wasted. In 0/1 Knapsack, picking the item with the highest ratio might leave a small, unusable gap in capacity that could have been better filled by a combination of lower-ratio items.
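A sketch of the same greedy rule applied to both variants; the item set below is an illustrative counterexample for the 0/1 case:

```python
def greedy_01(capacity, items):
    """Greedy by value/weight ratio, whole items only (0/1 knapsack)."""
    total = 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= capacity:
            capacity -= w
            total += v
    return total

def greedy_fractional(capacity, items):
    """Same ordering, but fractions of an item are allowed."""
    total = 0.0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        take = min(w, capacity)
        total += v * take / w
        capacity -= take
    return total

items = [(6, 30), (5, 20), (5, 20)]       # (weight, value); ratios 5.0, 4.0, 4.0
assert greedy_01(10, items) == 30         # greedy fits only the ratio-5 item;
                                          # optimal is the two (5, 20) items: 40
assert greedy_fractional(10, items) == 46.0  # 30 + 4/5 of a (5, 20) item: optimal
```

The 0/1 greedy strands 4 units of capacity behind the ratio-5 item, exactly the "unusable gap" described above; the fractional version fills that gap with a partial item, which is why the greedy exchange argument goes through.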

6. Dijkstra and Negative Weights

Mod 11

Explain precisely why Dijkstra's algorithm fails if the graph contains negative edge weights. What algorithm should be used instead?

Dijkstra's is "Greedy": once a node is finalized (removed from the priority queue), the algorithm assumes its distance can never improve. A negative edge encountered later can still produce a cheaper route into an already-finalized node, violating that assumption. Use Bellman-Ford instead, which handles negative edge weights (though not negative cycles).
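A sketch of both algorithms on an illustrative three-node graph makes the failure concrete (the visited-set Dijkstra below is one common formulation):

```python
import heapq

def dijkstra(graph, src):
    dist, done, pq = {src: 0}, set(), [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)                          # u is "finalized" -- the greedy step
        for v, w in graph.get(u, []):
            if v not in done and d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def bellman_ford(graph, src):
    nodes = set(graph) | {v for es in graph.values() for v, _ in es}
    dist = {n: float('inf') for n in nodes}
    dist[src] = 0
    for _ in range(len(nodes) - 1):          # relax every edge |V|-1 times
        for u, es in graph.items():
            for v, w in es:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist

# The cheap-looking direct edge S->A hides a better route through a negative edge.
g = {'S': [('A', 2), ('B', 5)], 'B': [('A', -4)]}
assert dijkstra(g, 'S')['A'] == 2        # wrong: A was finalized before B->A ran
assert bellman_ford(g, 'S')['A'] == 1    # correct: 5 + (-4)
```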

7. P vs NP

Mod 12

Define the classes P and NP. Clarify the common misconception that "NP stands for Not Polynomial".

P: Problems solvable in Polynomial Time.
NP: Nondeterministic Polynomial. Problems where a solution can be verified in Polynomial Time.
Misconception: It does NOT mean "Not Polynomial". P is a subset of NP.
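The "verifiable in polynomial time" half can be made concrete: checking a proposed Hamiltonian cycle (a classic NP problem) takes only a linear scan, even though no polynomial algorithm is known for *finding* one.

```python
def verify_hamiltonian_cycle(n, edges, cycle):
    """Polynomial-time check of a proposed certificate for Hamiltonian Cycle."""
    return (len(cycle) == n
            and set(cycle) == set(range(n))            # visits every vertex once
            and all((cycle[i], cycle[(i + 1) % n]) in edges
                    or (cycle[(i + 1) % n], cycle[i]) in edges
                    for i in range(n)))                # consecutive vertices adjacent

square = {(0, 1), (1, 2), (2, 3), (3, 0)}              # a 4-cycle graph
assert verify_hamiltonian_cycle(4, square, [0, 1, 2, 3])
assert not verify_hamiltonian_cycle(4, square, [0, 2, 1, 3])  # 0-2 not an edge
```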

8. The Halting Problem

Mod 12

What is the Halting Problem and what does it prove about the limitations of algorithms?

It is the problem of determining whether a given program will eventually stop or run forever. Alan Turing proved it is Undecidable—no algorithm can exist that solves this for all possible input programs. It proves some problems are computationally unsolvable.

9. Backtracking vs. Branch-and-Bound

Mod 13

Distinguish between Backtracking and Branch-and-Bound. Which one is used for N-Queens and which for Knapsack? Why?

Backtracking: DFS traversal of state space. Used for non-optimization problems (finding all solutions) like N-Queens.
Branch-and-Bound: Uses a numerical bound (cost function) to prune branches. Used for optimization problems (finding best solution) like Knapsack or TSP.
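A sketch of N-Queens by backtracking, i.e. a DFS of the state space that prunes any placement conflicting with queens already on the board:

```python
def n_queens(n):
    """Return all solutions; cols[i] is the column of the queen in row i."""
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(list(cols))
            return
        for c in range(n):
            if all(c != cc and abs(c - cc) != row - r    # no shared column/diagonal
                   for r, cc in enumerate(cols)):
                cols.append(c)
                place(cols)      # descend (DFS)
                cols.pop()       # backtrack: undo and try the next column

    place([])
    return solutions

assert len(n_queens(4)) == 2     # the two classic 4-queens solutions
assert len(n_queens(6)) == 4
```

Note there is no bound function here: every valid placement is equally acceptable, which is exactly why plain backtracking (not Branch-and-Bound) fits this non-optimization problem.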

10. Approximation Accuracy

Mod 13

What is the Performance Ratio ($\rho$) of an approximation algorithm? For TSP Nearest Neighbor, is this ratio bounded?

$\rho$ is the ratio of the approximate solution's cost to the optimal cost ($\rho \ge 1$). For general TSP, Nearest Neighbor's ratio is unbounded (it can be made arbitrarily bad). For instances satisfying the triangle inequality (including Euclidean TSP), the ratio is bounded by a function of $\log n$.
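A sketch of the Nearest Neighbor heuristic itself on Euclidean points (greedily hopping to the closest unvisited city; the point set is illustrative):

```python
from math import dist

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: start at point 0, always visit the nearest unvisited point."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda i: dist(points[last], points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

pts = [(0, 0), (1, 0), (1, 1), (0, 1)]          # unit square
assert nearest_neighbor_tour(pts) == [0, 1, 2, 3]  # here greedy happens to be optimal
```

On this square the greedy tour is optimal, but adversarial point sets can force it to take expensive edges late in the tour, which is where the approximation ratio comes from.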