
Coursera course review: Algorithms: Design and Analysis, Part 2

I recently finished the Coursera course Algorithms: Design and Analysis, Part 2 by Professor Tim Roughgarden of Stanford. I’ve already reviewed part 1, and here are my thoughts on the second part.

The main theme of part 1 was the divide and conquer paradigm. In the second part the main themes were greedy algorithms, dynamic programming and NP-complete problems. The lectures were excellent, with clear and easy to follow algorithm development and proofs. At six weeks, it was one week longer than part 1, and I found it quite a bit harder. Here’s more on each part.

Greedy Algorithms

In the greedy paradigm, you iteratively make the best possible choice, and hope that in the end you have an optimal solution. Usually the algorithms are easy to come up with, and easy to analyze, but it is equally easy to come up with an incorrect algorithm. The greedy algorithms covered were:

  • Job scheduling
  • Minimum spanning tree 
  • Clustering
  • Huffman coding

For job scheduling, a number of jobs, each with a weight and a duration, are to be ordered so that the weighted sum of the completion times is as small as possible. The first programming assignment dealt with job scheduling: 10,000 jobs were to be scheduled in two different ways, and the answers were the weighted sums of the completion times. Straightforward, and the easiest of the programming assignments.
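To make the greedy rule concrete, here is a minimal Java sketch of scheduling by decreasing weight-to-length ratio, which is the ordering the lectures prove optimal (the class and field names are my own, and the assignment’s input handling is omitted):

```java
import java.util.Arrays;
import java.util.Comparator;

// Minimal sketch: order jobs by decreasing weight/length ratio, then sum up
// weight * completionTime. Job and its field names are my own, not the assignment's.
public class GreedyScheduling {
    record Job(long weight, long length) {}

    static long weightedCompletionTimes(Job[] jobs) {
        Job[] sorted = jobs.clone();
        // Higher weight-to-length ratio first.
        Arrays.sort(sorted, Comparator.comparingDouble((Job j) -> (double) j.weight() / j.length()).reversed());
        long time = 0, total = 0;
        for (Job j : sorted) {
            time += j.length();              // completion time of this job
            total += j.weight() * time;      // add its weighted completion time
        }
        return total;
    }

    public static void main(String[] args) {
        Job[] jobs = { new Job(3, 5), new Job(1, 2), new Job(4, 1) };
        System.out.println(weightedCompletionTimes(jobs));   // weighted sum for this tiny example
    }
}
```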

A minimum spanning tree (MST) connects all the nodes of a graph together as cheaply as possible. Two algorithms were covered: Prim’s, where you start with an arbitrary node, and expand with the cheapest available edge, and Kruskal’s, where you sort the edges in cost order, and keep using the cheapest ones (while avoiding cycles) until the minimum spanning tree is complete. For Kruskal’s algorithm, the Union Find data structure was introduced. It is needed to be able to quickly determine if a cycle would be created. It’s analogous to how Dijkstra’s shortest path algorithm uses a heap. I hadn’t seen the Union Find data structure before, but it is quite neat – both simple and easy to implement.
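Since Union Find was the new piece for me, here is a minimal sketch of it in Java, with union by rank and path compression (my own naming):

```java
// A minimal Union-Find (disjoint set) sketch, the structure Kruskal's algorithm
// uses to detect whether adding an edge would close a cycle.
public class UnionFind {
    private final int[] parent;
    private final int[] rank;

    public UnionFind(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;    // every node starts in its own set
    }

    public int find(int x) {                          // follow parents to the root, compressing the path
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    public boolean union(int a, int b) {              // returns false if a and b were already connected
        int ra = find(a), rb = find(b);
        if (ra == rb) return false;
        if (rank[ra] < rank[rb]) { int t = ra; ra = rb; rb = t; }
        parent[rb] = ra;                              // attach the shallower tree under the deeper one
        if (rank[ra] == rank[rb]) rank[ra]++;
        return true;
    }
}
```

Kruskal’s main loop then just sorts the edges by cost and keeps an edge whenever union returns true, i.e. whenever it does not close a cycle.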

In clustering, the objective is to collect “related” nodes of a graph together in groups. This is essentially Kruskal’s algorithm again, except that you stop before all the clusters have been merged into one big group. There were programming assignments to calculate the cost of the MST of a 500-node graph, and to calculate the maximum spacing of a 4-clustering of another graph. Both were straightforward.

Huffman coding is a way of assigning binary code words to symbols of an alphabet, considering the probability of each symbol occurring, so that the resulting encoding is as short as possible. I learnt about Huffman coding when I went to university, and I still remember the idea, so I skipped this part.
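For completeness, here is a rough sketch of the greedy construction: put all symbols in a priority queue keyed on probability, and repeatedly merge the two least probable subtrees until one tree remains (the class and method names are mine, not from the course):

```java
import java.util.PriorityQueue;

// Sketch of the greedy Huffman construction: merge the two least probable
// subtrees until a single tree remains; the code words are the root-to-leaf paths.
public class Huffman {
    static class Node {
        final double prob;
        final Character symbol;   // null for internal nodes
        final Node left, right;
        Node(double prob, Character symbol, Node left, Node right) {
            this.prob = prob; this.symbol = symbol; this.left = left; this.right = right;
        }
    }

    static Node build(char[] symbols, double[] probs) {
        PriorityQueue<Node> pq = new PriorityQueue<>((a, b) -> Double.compare(a.prob, b.prob));
        for (int i = 0; i < symbols.length; i++) pq.add(new Node(probs[i], symbols[i], null, null));
        while (pq.size() > 1) {
            Node a = pq.poll(), b = pq.poll();                // two least probable trees
            pq.add(new Node(a.prob + b.prob, null, a, b));    // merge them under a new parent
        }
        return pq.poll();                                     // root of the Huffman tree
    }
}
```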

Dynamic Programming

This was the most interesting part of the course for me. I had heard about dynamic programming before, but I had never really looked into it. If you can express the optimal solution in terms of the solution to smaller sub-problems, then dynamic programming may be suitable. The algorithms covered were:

  • Independent set of maximum weight
  • Knapsack problem
  • Sequence alignment
  • Optimal binary search trees
  • Bellman-Ford single-source shortest path
  • Floyd-Warshall all pairs shortest path
  • Johnson’s all pairs shortest path

The independent set problem was used to illustrate how dynamic programming works. Given a path of nodes with non-negative weights, pick a subset of non-adjacent nodes with the maximum total weight. For the simple four-node example with weights 1, 4, 5 and 4 (where the maximum total weight is 4 + 4 = 8), we saw how both a greedy approach and a divide and conquer approach failed.

Then the optimal solution is expressed in terms of two smaller paths (without the last node or the last two nodes, respectively). A naive recursive solution would do an exponential amount of work, but since there are only O(n) distinct sub-problems, caching the results of already solved sub-problems (“memoization”) brings the time complexity down to O(n). Finally the solution is reformulated as a bottom-up solution, where a table is created and the solutions to the sub-problems are filled in from the smallest up. I thought this was a great introduction to dynamic programming – well-reasoned, clear and easy to follow.
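Here is what the bottom-up version looks like as a small Java sketch (my own variable names):

```java
// Bottom-up solution: A[i] is the maximum weight of an independent set
// using only the first i nodes of the path.
static long maxWeightIndependentSet(long[] weights) {
    int n = weights.length;
    long[] A = new long[n + 1];
    A[0] = 0;
    if (n >= 1) A[1] = weights[0];
    for (int i = 2; i <= n; i++) {
        // either skip node i, or take it together with the best solution on the first i-2 nodes
        A[i] = Math.max(A[i - 1], A[i - 2] + weights[i - 1]);
    }
    return A[n];
}
// maxWeightIndependentSet(new long[] {1, 4, 5, 4}) == 8, matching the example above
```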

Next was the classic knapsack problem (I knew about it, but had never attempted to solve it). Given n items, each with a value and a size, and a knapsack of size W, find a subset of items that fits in the knapsack and maximizes the total value of the items. Programming assignment 3 dealt with the knapsack problem. In the first part, there are 100 items, and the knapsack capacity is 10,000. This can be solved with the bottom-up approach of filling in a 2-dimensional array with the solutions to the sub-problems. In the second part, there are 500 items, and the knapsack capacity is 2,000,000. In this case the table approach would take up too much memory. Instead it can be solved recursively with caching of intermediate results.
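For the first part, a sketch of the table-filling approach looks like this (my own names; values and sizes are assumed to be arrays read from the input file):

```java
// Bottom-up knapsack: A[i][c] = best total value using the first i items with capacity c.
static long knapsack(long[] values, int[] sizes, int capacity) {
    int n = values.length;
    long[][] A = new long[n + 1][capacity + 1];
    for (int i = 1; i <= n; i++) {
        for (int c = 0; c <= capacity; c++) {
            A[i][c] = A[i - 1][c];                                   // item i left out
            if (sizes[i - 1] <= c) {                                 // item i fits: maybe include it
                A[i][c] = Math.max(A[i][c], A[i - 1][c - sizes[i - 1]] + values[i - 1]);
            }
        }
    }
    return A[n][capacity];
}
```

For the second part, the same recurrence can instead be evaluated top-down with a cache keyed on (item index, remaining capacity), so that only the sub-problems that are actually reached get computed – which is essentially the recursive approach with caching mentioned above.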

The next application of dynamic programming was the really cool algorithm for sequence alignment. Given two strings (for example DNA sequences), how can we align them so that the total cost is minimized, where each inserted gap and each mismatched letter carries a cost? For example, the strings AGGGCT and AGGCA can be made to align by inserting a gap in the second string so it becomes AGG_CA. The cost in this case is one gap and one mismatch (the T and A at the end). For long strings (thousands of letters), a brute-force solution is staggeringly time consuming. Fortunately, with dynamic programming, it is possible to express the solution in terms of the solutions to sub-problems, and the running time is O(mn), where m and n are the lengths of the two strings. Very cool!
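A sketch of the recurrence in Java, simplified so that all gaps cost the same and all mismatches cost the same (the lectures allow per-letter-pair penalties; the names are my own):

```java
// A[i][j] is the minimum cost of aligning the first i characters of x
// with the first j characters of y.
static long alignmentCost(String x, String y, long gapCost, long mismatchCost) {
    int m = x.length(), n = y.length();
    long[][] A = new long[m + 1][n + 1];
    for (int i = 0; i <= m; i++) A[i][0] = i * gapCost;   // a prefix of x aligned against nothing
    for (int j = 0; j <= n; j++) A[0][j] = j * gapCost;
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            long match = A[i - 1][j - 1] + (x.charAt(i - 1) == y.charAt(j - 1) ? 0 : mismatchCost);
            long gapInY = A[i - 1][j] + gapCost;          // x[i] aligned with a gap
            long gapInX = A[i][j - 1] + gapCost;          // y[j] aligned with a gap
            A[i][j] = Math.min(match, Math.min(gapInY, gapInX));
        }
    }
    return A[m][n];   // the actual alignment can be recovered by tracing back through A
}
// With gap and mismatch cost 1, alignmentCost("AGGGCT", "AGGCA", 1, 1) == 2, as in the example
```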

For optimal binary search trees, given n items, each with a given probability of being selected, construct the binary search tree that minimizes the average search time for the items. This is similar to Huffman codes in that both produce a binary tree and both try to minimize the average depth. However, in Huffman codes the items only appear as leaves of the tree, whereas here the items appear in the internal nodes as well as the leaves. I didn’t find this algorithm that interesting, since if you have a balanced search tree (albeit not optimal), the performance will still be quite good.
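For reference, here is a sketch of the O(n^3) dynamic program, with my own names and the item probabilities assumed to be given in key order:

```java
// C[i][j] is the minimum expected search cost for items i..j (1-indexed, in key order);
// every candidate root r in that range is tried.
static double optimalBstCost(double[] p) {          // p[1..n] are the probabilities, p[0] unused
    int n = p.length - 1;
    double[][] C = new double[n + 2][n + 2];        // C[i][j] stays 0 whenever i > j
    double[] prefix = new double[n + 1];
    for (int k = 1; k <= n; k++) prefix[k] = prefix[k - 1] + p[k];
    for (int len = 0; len < n; len++) {             // subproblem size j - i
        for (int i = 1; i + len <= n; i++) {
            int j = i + len;
            double best = Double.MAX_VALUE;
            for (int r = i; r <= j; r++) {          // r is the root of the subtree holding items i..j
                best = Math.min(best, C[i][r - 1] + C[r + 1][j]);
            }
            C[i][j] = (prefix[j] - prefix[i - 1]) + best;  // every item in i..j adds one level of depth
        }
    }
    return C[1][n];
}
```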

Next were three different algorithms for finding shortest paths in directed graphs. The Bellman-Ford algorithm finds the shortest paths from a source node to all other nodes in the graph, and is developed using the dynamic programming idea of building the solution from the solutions to sub-problems. In this case, the sub-problems consist of the shortest paths using at most a given number of edges. The algorithm works with negative edge weights, unlike Dijkstra’s algorithm (covered in part 1).
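A minimal sketch of the edge-relaxation form of Bellman-Ford, with a made-up Edge record:

```java
// After round i, dist[v] holds the shortest-path distance to v using at most i edges.
record Edge(int from, int to, long cost) {}

static long[] bellmanFord(int n, java.util.List<Edge> edges, int source) {
    final long INF = Long.MAX_VALUE / 4;
    long[] dist = new long[n];
    java.util.Arrays.fill(dist, INF);
    dist[source] = 0;
    for (int round = 1; round <= n - 1; round++) {           // n-1 rounds suffice if there is no negative cycle
        for (Edge e : edges) {
            if (dist[e.from()] != INF && dist[e.from()] + e.cost() < dist[e.to()]) {
                dist[e.to()] = dist[e.from()] + e.cost();
            }
        }
    }
    for (Edge e : edges) {                                   // one extra pass: any further improvement means a negative cycle
        if (dist[e.from()] != INF && dist[e.from()] + e.cost() < dist[e.to()]) {
            throw new IllegalStateException("negative cycle reachable from the source");
        }
    }
    return dist;
}
```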

Floyd-Warshall’s algorithm computes all pairs shortest paths in a directed graph. The key trick in its development is to let the sub-problems consist of shortest paths that are only allowed to use the first k nodes as intermediate nodes. In this way, the dynamic programming solution is found. It is built using a 3-dimensional array, where each dimension has size equal to the number of nodes. In programming assignment 4, you had to find the shortest of all the shortest paths in three given 1000-node graphs. If there were any negative cycles in a graph, this had to be indicated. This assignment was a little bit trickier, since you have to make sure you don’t run out of memory (the full 3-dimensional array contains a billion entries).
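As a sketch, here is the standard way to keep the memory in check: only a 2-dimensional table is kept and updated in place as the set of allowed intermediate nodes grows (this is a common space optimization, not necessarily the only way to solve the assignment; the names are my own):

```java
class AllPairsShortestPaths {
    static final long INF = Long.MAX_VALUE / 4;       // "no edge" marker, large enough to avoid overflow

    // dist[i][j] holds the direct edge cost, INF if there is no edge, and 0 on the diagonal.
    static long[][] floydWarshall(long[][] dist) {
        int n = dist.length;
        long[][] d = new long[n][];
        for (int i = 0; i < n; i++) d[i] = dist[i].clone();
        for (int k = 0; k < n; k++) {                 // round k: now node k may be used as an intermediate node
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    if (d[i][k] != INF && d[k][j] != INF && d[i][k] + d[k][j] < d[i][j]) {
                        d[i][j] = d[i][k] + d[k][j];
                    }
                }
            }
        }
        for (int i = 0; i < n; i++) {                 // a negative value on the diagonal means a negative cycle
            if (d[i][i] < 0) throw new IllegalStateException("negative cycle detected");
        }
        return d;
    }
}
```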

The final graph algorithm was Johnson’s algorithm for finding all pairs shortest paths. It uses a clever re-weighting technique (based on one run of Bellman-Ford from an added auxiliary source node) to make sure there are no negative edge weights, and then runs Dijkstra’s algorithm from each node to find all the shortest paths.

NP-Complete Problems and Ways to Deal With Them

We covered NP-complete problems when I went to university, so this wasn’t new to me, but it was good to get a refresher. For a lot of problems, like the travelling salesman problem, there are no known algorithms that run in polynomial time. So we are stuck with solutions with exponential running time, meaning that only problems with very small input sizes can be solved optimally. However, all is not lost. Various ways of dealing with NP-complete problems were covered.

The following topics, algorithms and techniques were covered:

  • P, NP, and what they mean
  • Reductions between problems
  • NP-complete problems
  • The P vs. NP problem
  • Vertex cover
  • Travelling salesman
  • Approximate solutions to the knapsack problem
  • Local search
  • Papadimitriou’s 2SAT algorithm

The first part of this section of the course defined the concepts of P, NP, NP-hard and NP-complete problems. Next, reductions between NP-complete problems were covered, as well as the question of whether P = NP. After the theory had been established, various ways of dealing with NP-complete problems were presented.

It is still possible to solve NP-complete problems for small inputs using brute force. It may also be possible to find special cases that are of interest and can be solved more quickly. Yet another way is to find algorithms that are better than brute-force search, but still not polynomial. An example of this was an algorithm for the travelling salesman problem (more on this below). Otherwise we may have to settle for non-optimal solutions. An example algorithm was developed for the general knapsack problem, where it is possible to come within a given fraction of the optimal solution (the closer to the optimal solution, the longer the running time).
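As a sketch of the value-rounding idea behind that kind of approximation scheme (my own names, and only the idea, not a tuned implementation): round each value down to a multiple of a scaling factor derived from the desired accuracy eps, then run a dynamic program indexed by total rounded value that minimizes the size needed to reach it.

```java
class ApproxKnapsack {
    // Returns the maximum achievable rounded value; the corresponding item set (recoverable
    // with back-pointers, omitted here) has true value within a (1 - eps) factor of optimal.
    static long approximate(long[] values, long[] sizes, long capacity, double eps) {
        int n = values.length;
        long vmax = 0;
        for (long v : values) vmax = Math.max(vmax, v);
        double m = eps * vmax / n;                           // scaling factor from the analysis
        int[] vhat = new int[n];
        int vhatSum = 0;
        for (int i = 0; i < n; i++) {
            vhat[i] = (int) (values[i] / m);                 // value rounded down to a multiple of m
            vhatSum += vhat[i];
        }
        long INF = Long.MAX_VALUE / 4;
        long[] minSize = new long[vhatSum + 1];              // minSize[x] = smallest total size reaching rounded value x
        java.util.Arrays.fill(minSize, INF);
        minSize[0] = 0;
        for (int i = 0; i < n; i++) {
            for (int x = vhatSum; x >= vhat[i]; x--) {       // iterate downwards so each item is used at most once
                minSize[x] = Math.min(minSize[x], minSize[x - vhat[i]] + sizes[i]);
            }
        }
        for (int x = vhatSum; x >= 0; x--) {
            if (minSize[x] <= capacity) return x;            // largest rounded value that fits in the knapsack
        }
        return 0;
    }
}
```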

One meta-algorithm covered was local search. The idea is to start with a randomly generated solution that is most likely not optimal. Then we iteratively try to improve on that solution by making local changes until no further improvement is possible. Then we start over with a new random solution and repeat the process. There is obviously the risk of getting stuck in a local optimum without ever finding the global optimum.

For programming assignment 5, we had to find the shortest tour for a travelling salesman problem with 25 cities to visit. This was the hardest of the programming problems. The algorithm that was developed in the lecture videos used dynamic programming to improve the running time from O(n!) to O(n^2×2^n), so it is still not polynomial. The algorithm, while reasonably easy to understand conceptually, takes quite a bit of work to implement correctly. However, the biggest problem with the solution wasn’t running time, but memory usage. I started out using the Java class BitSet to keep track of which cities to include, but I ran out of memory and switched to plain integers instead. I also stopped pre-calculating all integers with a given number of bits set, and instead generated them on the fly. With these memory optimizations, I was finally able to run the algorithm, but it took quite a bit of work.
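For reference, here is a compact sketch of that dynamic program (often called the Held-Karp recurrence, with my own names). It keeps the entire table in memory, which for 25 cities means roughly 2^25 × 25 doubles, exactly the kind of memory problem described above:

```java
// C[S][j] is the cheapest path that starts at city 0, visits exactly the cities
// in the bit set S (which must contain city 0), and ends at city j.
class HeldKarpTsp {
    static double shortestTour(double[][] dist) {     // dist[i][j] = distance between cities i and j
        int n = dist.length;
        double INF = Double.MAX_VALUE / 4;
        double[][] C = new double[1 << n][n];
        for (double[] row : C) java.util.Arrays.fill(row, INF);
        C[1][0] = 0;                                   // the path that only contains the start city 0
        for (int S = 1; S < (1 << n); S++) {
            if ((S & 1) == 0) continue;                // every subproblem must contain city 0
            for (int j = 1; j < n; j++) {
                if ((S & (1 << j)) == 0) continue;     // j must be in S
                int withoutJ = S ^ (1 << j);
                for (int k = 0; k < n; k++) {          // previous city on the path
                    if ((withoutJ & (1 << k)) == 0 || C[withoutJ][k] >= INF) continue;
                    C[S][j] = Math.min(C[S][j], C[withoutJ][k] + dist[k][j]);
                }
            }
        }
        double best = INF;
        int full = (1 << n) - 1;
        for (int j = 1; j < n; j++) {                  // close the tour by returning to city 0
            best = Math.min(best, C[full][j] + dist[j][0]);
        }
        return best;
    }
}
```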

The last algorithm covered in the class was a clever randomized local search algorithm for the 2SAT problem. In the 2SAT problem (which I had never heard of before, even though it is famous), the task is to find boolean values for n boolean variables used in clauses of the form (x or y). A number of these clauses are combined with logical AND, and the task is to determine whether there is an assignment of true or false to the variables that makes the whole expression true (there may not be such an assignment). The algorithm covered was Papadimitriou’s algorithm, which uses the idea of a random walk to try to find an assignment of the variables that makes the expression true. A clever idea, and a nice analysis of the properties of the algorithm.
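Even though I ended up taking a different route for the assignment (see below), here is a sketch of Papadimitriou’s algorithm as I remember its presentation: about log2(n) random restarts, each followed by roughly 2n^2 steps where some unsatisfied clause is picked and one of its two variables is flipped at random. The clause encoding and the naive linear scan for an unsatisfied clause are my own simplifications:

```java
import java.util.Random;

// Clauses are pairs of literals: literal +i means variable i must be true,
// -i means variable i must be false (variables are 1-indexed).
class Papadimitriou2Sat {
    static boolean satisfiable(int n, int[][] clauses) {
        Random rnd = new Random();
        long innerIterations = 2L * n * n;
        int outerIterations = Math.max(1, (int) (Math.log(n) / Math.log(2)));
        for (int t = 0; t < outerIterations; t++) {
            boolean[] value = new boolean[n + 1];
            for (int i = 1; i <= n; i++) value[i] = rnd.nextBoolean();   // fresh random assignment
            for (long step = 0; step < innerIterations; step++) {
                int[] unsatisfied = findUnsatisfiedClause(clauses, value);
                if (unsatisfied == null) return true;                    // everything is satisfied
                int literal = unsatisfied[rnd.nextInt(2)];               // flip one of the clause's two variables at random
                value[Math.abs(literal)] = !value[Math.abs(literal)];
            }
        }
        return false;   // declared unsatisfiable, correct with high probability
    }

    private static int[] findUnsatisfiedClause(int[][] clauses, boolean[] value) {
        for (int[] c : clauses) {
            boolean first = value[Math.abs(c[0])] == (c[0] > 0);
            boolean second = value[Math.abs(c[1])] == (c[1] > 0);
            if (!first && !second) return c;
        }
        return null;
    }
}
```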

The final programming problem was to solve the 2SAT problem. One way would have been to use Papadimitriou’s algorithm, but I chose to reuse the program I developed in part 1 of the course for finding strongly connected components (SCC) instead. By (again very cleverly) transforming the 2SAT expression into an implication graph, where each clause (x or y) contributes the two edges ¬x → y and ¬y → x, it is possible to check whether the expression is satisfiable: if any variable and its negation end up in the same SCC, the 2SAT expression cannot be satisfied, otherwise it can. This programming problem was harder than the first ones, but not as hard as the travelling salesman one.

Conclusion

I really enjoyed this course. It started in the beginning of December, had a break over Christmas, and continued in January. Since I am always very busy around Christmas and New Year’s, I wasn’t able to start the course until the second week of January. This meant that I had to work quite hard to complete it in the time left. But even if that had not been the case, I thought the course was still a lot harder than part 1, especially the quizzes, but also the programming assignments. I skipped part 2 of programming assignment 2 (clustering in a bigger graph), but completed all the other programming assignments, all the quizzes, and the final exam. Since I started late, my score on the first three weeks’ assignments was reduced by 50%. My final score was 67% (80% without reductions), and the cut-off for getting a statement of accomplishment was 70%, so no certificate for me this time.

Regardless of the score, I learnt a lot by taking this course. It took a lot of time and effort, but I have definitely improved my algorithm knowledge and skills. I can really recommend both parts of the course to anyone wanting to learn more about algorithms and problem solving.