10 where the exponentiation can be performed using fewer multiplications, by some other method.

1.17. Consider the problem of computing x^y for given integers x and y: we want the whole answer, not modulo a third integer. We know two algorithms for doing this: the iterative algorithm, which performs y - 1 multiplications by x; and the recursive algorithm based on the binary expansion of y. Compare the time requirements of these two algorithms, assuming that the time to multiply an n-bit number by an m-bit number is O(mn).

1.18. Compute gcd(210, 588) two different ways: by finding the factorization of each number, and by using Euclid's algorithm.

1.19. The Fibonacci numbers F_0, F_1, ... are given by the recurrence F_{n+1} = F_n + F_{n-1}, F_0 = 0, F_1 = 1. Show that for any n ≥ 1, gcd(F_{n+1}, F_n) = 1.

1.20. Find the inverse of: 20 mod 79, 3 mod 62, 21 mod 91, 5 mod 23.

1.21. How many integers modulo 11^3 have inverses? (Note: 11^3 = 1331.)

1.22. Prove or disprove: If a has an inverse modulo b, then b has an inverse modulo a.

1.23. Show that if a has a multiplicative inverse modulo N, then this inverse is unique (modulo N).

1.24. If p is prime, how many elements of {0, 1, ..., p^n - 1} have an inverse modulo p^n?

1.25. Calculate 2^125 mod 127 using any method you choose. (Hint: 127 is prime.)

1.26. What is the least significant decimal digit of 17^(17^17)? (Hint: For distinct primes p, q, and any a ≢ 0 (mod pq), we proved the formula a^((p-1)(q-1)) ≡ 1 (mod pq) in Section 1.4.2.)

1.27. Consider an RSA key set with p = 17, q = 23, N = 391, and e = 3 (as in Figure 1.9). What value of d should be used for the secret key? What is the encryption of the message M = 41?

1.28. In an RSA cryptosystem, p = 7 and q = 11 (as in Figure 1.9). Find appropriate exponents d and e.

1.29. Let [m] denote the set {0, 1, ..., m - 1}. For each of the following families of hash functions, say whether or not it is universal, and determine how many random bits are needed to choose a function from the family.
(a) H = {h_{a1,a2} : a1, a2 ∈ [m]}, where m is a fixed prime and

    h_{a1,a2}(x1, x2) = a1·x1 + a2·x2 mod m.

Notice that each of these functions has signature h_{a1,a2} : [m]^2 → [m], that is, it maps a pair of integers in [m] to a single integer in [m].

(b) H is as before, except that now m = 2^k is some fixed power of 2.

(c) H is the set of all functions f : [m] → [m - 1].

1.30. The grade-school algorithm for multiplying two n-bit binary numbers x and y consists of adding together n copies of x, each appropriately left-shifted. Each copy, when shifted, is at most 2n bits long. In this problem, we will examine a scheme for adding n binary numbers, each m bits long, using a circuit or a parallel architecture. The main parameter of interest in this question is therefore the depth of the circuit, or the longest path from the input to the output of the circuit. This determines the total time taken for computing the function.

To add two m-bit binary numbers naively, we must wait for the carry bit from position i - 1 before we can figure out the ith bit of the answer. This leads to a circuit of depth O(m). However, carry lookahead circuits (see wikipedia.com if you want to know more about this) can add in O(log m) depth.

(a) Assuming you have carry lookahead circuits for addition, show how to add n numbers, each m bits long, using a circuit of depth O((log n)(log m)).

(b) When adding three m-bit binary numbers x + y + z, there is a trick we can use to parallelize the process. Instead of carrying out the addition completely, we can re-express the result as the sum of just two binary numbers r + s, such that the ith bits of r and s can be computed independently of the other bits. Show how this can be done. (Hint: One of the numbers represents carry bits.)

(c) Show how to use the trick from the previous part to design a circuit of depth O(log n) for multiplying two n-bit numbers.

1.31. Consider the problem of computing N! = 1 · 2 · 3 · ... · N.
(a) If N is an n-bit number, how many bits long is N!, approximately (in Θ(·) form)?

(b) Give an algorithm to compute N! and analyze its running time.

1.32. A positive integer N is a power if it is of the form q^k, where q, k are positive integers and k > 1.

(a) Give an efficient algorithm that takes as input a number N and determines whether it is a square, that is, whether it can be written as q^2 for some positive integer q. What is the running time of your algorithm?

(b) Show that if N = q^k (with N, q, and k all positive integers), then either k ≤ log N or N = 1.

(c) Give an efficient algorithm for determining whether a positive integer N is a power. Analyze its running time.

1.33. Give an efficient algorithm to compute the least common multiple of two n-bit numbers x and y, that is, the smallest number divisible by both x and y. What is the running time of your algorithm as a function of n?

1.34. On page 38, we claimed that since about a 1/n fraction of n-bit numbers are prime, on average it is sufficient to draw O(n) random n-bit numbers before hitting a prime. We now justify this rigorously. Suppose a particular coin has a probability p of coming up heads. How many times must you toss it, on average, before it comes up heads? (Hint: Method 1: start by showing that the correct expression is sum_{i=1}^∞ i(1-p)^{i-1} p. Method 2: if E is the average number of coin tosses, show that E = 1 + (1-p)E.)

1.35. Wilson's theorem says that a number N is prime if and only if

    (N - 1)! ≡ -1 (mod N).

S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani

(a) If p is prime, then we know every number 1 ≤ x
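Several of the number-theoretic exercises above (1.18, 1.20–1.25, 1.33) come down to Euclid's algorithm and its extended form. A minimal Python sketch, with function names of our own choosing:

```python
def extended_gcd(a, b):
    # Extended Euclid: returns (g, x, y) with g = gcd(a, b) and a*x + b*y = g.
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def mod_inverse(a, m):
    # Multiplicative inverse of a modulo m; exists only when gcd(a, m) = 1.
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError("no inverse exists")
    return x % m

def lcm(x, y):
    # Least common multiple via gcd (Exercise 1.33): lcm(x, y) = x*y / gcd(x, y).
    return x * y // extended_gcd(x, y)[0]
```

For instance, mod_inverse(20, 79) returns 4, since 20 · 4 = 80 ≡ 1 (mod 79).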
0 (in the multiplication algorithm, a = 3, b = 2, and d = 1). Their running time can therefore be captured by the equation T(n) = aT(⌈n/b⌉) + O(n^d). We next derive a closed-form solution to this general recurrence so that we no longer have to solve it explicitly in each new instance.

Master theorem. If T(n) = aT(⌈n/b⌉) + O(n^d) for some constants a > 0, b > 1, and d ≥ 0, then

    T(n) = O(n^d)            if d > log_b a
    T(n) = O(n^d log n)      if d = log_b a
    T(n) = O(n^{log_b a})    if d < log_b a.

(There are even more general results of this type, but we will not be needing them.)

Figure 2.3 Each problem of size n is divided into a subproblems of size n/b. [Figure: recursion tree of depth log_b n and branching factor a; subproblem sizes n, n/b, n/b^2, ..., 1; width a^{log_b n} = n^{log_b a}.]

This single theorem tells us the running times of most of the divide-and-conquer procedures we are likely to use.

Proof. To prove the claim, let's start by assuming, for the sake of convenience, that n is a power of b. This will not influence the final bound in any important way (after all, n is at most a multiplicative factor of b away from some power of b; see Exercise 2.2), and it will allow us to ignore the rounding effect in ⌈n/b⌉.

Next, notice that the size of the subproblems decreases by a factor of b with each level of recursion, and therefore reaches the base case after log_b n levels. This is the height of the recursion tree. Its branching factor is a, so the kth level of the tree is made up of a^k subproblems, each of size n/b^k (Figure 2.3). The total work done at this level is

    a^k · O((n/b^k)^d) = O(n^d) · (a/b^d)^k.

As k goes from 0 (the root) to log_b n (the leaves), these numbers form a geometric series with ratio a/b^d. Finding the sum of such a series in big-O notation is easy (Exercise 0.2), and comes down to three cases.

1. The ratio is less than 1. Then the series is decreasing, and its sum is just given by its first term, O(n^d).

2. The ratio is greater than 1.
Then the series is increasing, and its sum is given by its last term, O(n^{log_b a}):

    n^d (a/b^d)^{log_b n} = n^d · a^{log_b n} / (b^{log_b n})^d = a^{log_b n} = a^{(log_a n)(log_b a)} = n^{log_b a}.

3. The ratio is exactly 1. In this case all O(log n) terms of the series are equal to O(n^d).

These cases translate directly into the three contingencies in the theorem statement.

Binary search

The ultimate divide-and-conquer algorithm is, of course, binary search: to find a key k in a large file containing keys z[0, 1, ..., n-1] in sorted order, we first compare k with z[n/2], and depending on the result we recurse either on the first half of the file, z[0, ..., n/2 - 1], or on the second half, z[n/2, ..., n-1]. The recurrence now is T(n) = T(⌈n/2⌉) + O(1), which is the case a = 1, b = 2, d = 0. Plugging into our master theorem we get the familiar solution: a running time of just O(log n).

2.3 Mergesort

The problem of sorting a list of numbers lends itself immediately to a divide-and-conquer strategy: split the list into two halves, recursively sort each half, and then merge the two sorted sublists.

function mergesort(a[1...n])
Input:  An array of numbers a[1...n]
Output: A sorted version of this array

if n > 1:
    return merge(mergesort(a[1...⌊n/2⌋]), mergesort(a[⌊n/2⌋ + 1...n]))
else:
    return a

The correctness of this algorithm is self-evident, as long as a correct merge subroutine is specified. If we are given two sorted arrays x[1...k] and y[1...l], how do we efficiently merge them into a single sorted array z[1...k + l]? Well, the very first element of z is either x[1] or y[1], whichever is smaller. The rest of z[·] can then be constructed recursively.

Figure 2.4 The sequence of merge operations in mergesort. [Figure: an eight-element input is merged in pairs, then in fours, then into the full sorted array.]

function merge(x[1...k], y[1...l])
if k = 0: return y[1...l]
if l = 0: return x[1...k]
if x[1] ≤ y[1]:
    return x[1] ∘ merge(x[2...k], y[1...l])
else:
    return y[1] ∘ merge(x[1...k], y[2...l])

Here ∘ denotes concatenation.
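The pseudocode above translates almost line for line into Python. A minimal sketch (the book's merge is recursive; we write it iteratively, purely so long inputs don't hit Python's recursion limit inside merge):

```python
def merge(x, y):
    # Merge two sorted lists into one sorted list in O(len(x) + len(y)) time.
    out, i, j = [], 0, 0
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            out.append(x[i]); i += 1
        else:
            out.append(y[j]); j += 1
    return out + x[i:] + y[j:]   # at most one of the two tails is nonempty

def mergesort(a):
    # Split in half, recursively sort each half, merge the sorted halves.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))
```

For example, mergesort([10, 2, 5, 3, 7, 13, 1, 6]) returns [1, 2, 3, 5, 6, 7, 10, 13].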
This merge procedure does a constant amount of work per recursive call (provided the required array space is allocated in advance), for a total running time of O(k + l). Thus merges are linear, and the overall time taken by mergesort is

    T(n) = 2T(n/2) + O(n),

or O(n log n).

Looking back at the mergesort algorithm, we see that all the real work is done in merging, which doesn't start until the recursion gets down to singleton arrays. The singletons are merged in pairs, to yield arrays with two elements. Then pairs of these 2-tuples are merged, producing 4-tuples, and so on. Figure 2.4 shows an example.

This viewpoint also suggests how mergesort might be made iterative. At any given moment, there is a set of active arrays (initially, the singletons) which are merged in pairs to give the next batch of active arrays. These arrays can be organized in a queue, and processed by repeatedly removing two arrays from the front of the queue, merging them, and putting the result at the end of the queue.

In the following pseudocode, the primitive operation inject adds an element to the end of the queue, while eject removes and returns the element at the front of the queue.

function iterative-mergesort(a[1...n])
Input: elements a_1, a_2, ..., a_n to be sorted

Q = [ ] (empty queue)
for i = 1 to n:
    inject(Q, [a_i])
while |Q| > 1:
    inject(Q, merge(eject(Q), eject(Q)))
return eject(Q)

An n log n lower bound for sorting

Sorting algorithms can be depicted as trees. The one in the following figure sorts an array of three elements, a_1, a_2, a_3. It starts by comparing a_1 to a_2 and, if the first is larger, compares it with a_3; otherwise it compares a_2 and a_3. And so on. Eventually we end up at a leaf, and this leaf is labeled with the true order of the three elements as a permutation of 1, 2, 3. For example, if a_2