Unit V- Algorithm Design Techniques


UNIT V- ALGORITHM DESIGN TECHNIQUES

Objectives
• Describe the fundamentals of algorithmic problem solving
• Understand how to calculate the complexity of an algorithm
• Describe the use of the divide-and-conquer method
• Explain the greedy method
• Explain the concept of the knapsack problem and solve knapsack problems

1. Introduction to Algorithm Design and Analysis
Compute the efficiency of the algorithm. The efficiency depends on the following factors:
• Time: the amount of time taken to execute the algorithm. If the algorithm takes less time to execute, it is the better one.
• Space: the amount of memory required to store the algorithm and the amount of memory required to store the input for that algorithm.
• Simplicity: the algorithm should not contain complex instructions; such instructions should be simplified into a number of smaller instructions.
• Generality: the algorithm should be general; it should be implementable in any language and not tied to one specific input or output.

Implementation
After satisfying all of the above factors, code the algorithm in any language you know and execute the program.
Properties of an algorithm
1) Finiteness: an algorithm terminates after a finite number of steps.
2) Definiteness: each step in the algorithm is unambiguous; the action specified by a step cannot be interpreted in multiple ways and can be performed without any confusion.
3) Input: an algorithm accepts zero or more inputs.
4) Output: it produces at least one output.

Data structures-P.Vasantha Kumari,L/IT
5) Effectiveness: the algorithm consists of basic instructions that are realizable; each instruction can be carried out exactly and in a finite amount of time.
6) Non-ambiguity: the algorithm should not have any conflicting meaning.
7) Range of input: before designing an algorithm, decide what type of input will be given and what the required output is.
8) Multiplicity: you can write different algorithms to solve the same problem.
9) Speed: apply ideas to speed up the execution time.

2. Analysis of Algorithms
Analysis framework
Analysis: computing the efficiency of an algorithm. When computing the efficiency of an algorithm, consider the following two factors:
• Space complexity
• Time complexity

(i) Space Complexity
The amount of memory required to store the algorithm and the amount of memory required to store the inputs for this algorithm:
S(p) = C + Sp
where
C – a constant (the amount of memory required to store the algorithm)
Sp – the amount of memory required to store the inputs (each input is stored in one unit).

Example: write an algorithm to find the sum of n numbers and analyse the space complexity of that algorithm.

Algorithm Summation(X, n)
// Input: n, the number of elements, and the array X.
// Output: the sum of the n numbers.
sum = 0
for i = 1 to n
    sum = sum + X[i]
return sum

The space complexity of the above algorithm, S(p) = C + Sp:
1. One unit for each element of the array; the array has n elements, so it requires n units.
2. One unit for the variable n, one unit for the variable i, and one unit for the variable sum.
3. Adding all of the above units gives the space complexity:
S(p) = C + (n + 1 + 1 + 1)
S(p) = C + (n + 3)

(ii) Time Complexity
The amount of time required to run the algorithm. The execution time depends on the following factors:
• system load
• number of other programs running
• speed of the hardware
How to measure the running time:
• Find the basic operation of the algorithm (the operation in the innermost loop is called the basic operation).
• Compute the time required to execute the basic operation.
• Compute how many times the basic operation is executed.
The time complexity is calculated by the following formula:
T(n) = Cop * C(n)
where
Cop – a constant (the amount of time required to execute the basic operation once)
C(n) – the number of times the basic operation is executed.

Example: the time complexity of the above algorithm (summation of n numbers):
1. The basic operation is addition.
2. The basic operation is executed n times.
3. So T(n) = Cop * n.
4. Remove the constant, or assume Cop = 1.
5. The time complexity is T(n) = n.

Order of Growth
The performance of an algorithm in relation to the input size n of that algorithm is called its order of growth. For example, for the function 2n, if the input is n = 1 then the output is 2.

Best case, Worst case and Average case

For some algorithms the time complexity falls into three categories:
• Best case
• Worst case
• Average case
• In the best case, the basic operation of the algorithm executes the fewest number of times compared to the other cases.
• In the worst case, the basic operation executes the greatest number of times compared to the other cases.
• In the average case, the basic operation executes a number of times between the best case and the worst case.

Example: write an algorithm to search for a key in a given set of elements.

Algorithm Seq_Search(X, key, n)
// Input: the array of elements and a search key.
// Output: whether the search key is present in the list or not.
for i = 1 to n
    if (X[i] == key)
        return true
return false

In the above algorithm the best case is one comparison: T(n) = 1.
The worst case arises in two situations:
• the search key is located at the end of the list;
• the search key is not present in the list.
Here the basic operation executes n times, so the time complexity of this algorithm is n, i.e. T(n) = n.

For the average case, assume:
P – the probability of a successful search;
1 - P – the probability of an unsuccessful search;
P/n – the probability that the first match occurs at the i-th element.
Cavg(n) = [1·P/n + 2·P/n + ... + n·P/n] + n(1 - P)
        = P/n [1 + 2 + ... + n] + n(1 - P)
        = P/n · n(n + 1)/2 + n(1 - P)
        = P(n + 1)/2 + n(1 - P)
Applying P = 0 (there is no such key in the list):
Cavg(n) = 0·(n + 1)/2 + n(1 - 0) = n
Applying P = 1 (the key is present in the list):
Cavg(n) = 1·(n + 1)/2 + n(1 - 1) = (n + 1)/2.

3. Asymptotic Notations
(i) Big–O (ii) Big–Omega (iii) Big–Theta
(i) Big–O (or) O( ) Notation
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.

(ii) Big–Omega (or) Ω( ) Notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
(iii) Big–Theta (or) θ( ) Notation
A function t(n) is said to be in θ(g(n)), denoted t(n) ∈ θ(g(n)), if t(n) is bounded both above and below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
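As a worked example (illustrative, not from the notes), take t(n) = 3n + 2:

```latex
3n + 2 \le 4n \text{ for all } n \ge 2
  \;\Rightarrow\; t(n) \in O(n) \text{ (take } c = 4,\ n_0 = 2\text{)}
\qquad
3n + 2 \ge 3n \text{ for all } n \ge 1
  \;\Rightarrow\; t(n) \in \Omega(n) \text{ (take } c = 3,\ n_0 = 1\text{)}
```

Since t(n) is bounded both above and below by constant multiples of n, it follows that t(n) ∈ θ(n).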

4. Greedy method
This is another approach that is often used to design algorithms, typically for optimization problems. In contrast to dynamic programming, however,

• Greedy algorithms do not always yield a genuinely optimal solution. In such cases the greedy method is frequently the basis of a heuristic approach.
• Even for problems which can be solved exactly by a greedy algorithm, establishing the correctness of the method may be a non-trivial process.
The greedy method is:
• The most straightforward design technique
– Most problems have n inputs
– The solution contains a subset of the inputs that satisfies a given constraint
– Feasible solution: any subset that satisfies the constraint
– We need to find a feasible solution that maximizes or minimizes a given objective function – an optimal solution
• Used to determine a feasible solution that may or may not be optimal
– At every point, make a decision that is locally optimal, and hope that it leads to a globally optimal solution
– Leads to a powerful method for getting a solution that works well for a wide range of applications, e.g. the OPT algorithm for process scheduling and its variant SRTN in operating systems
– May not guarantee the best solution
• The ultimate goal is to find a feasible solution that minimizes (or maximizes) an objective function.

Algorithm Knapsack_greedy(W, n)
// Items assumed sorted in non-increasing order of profit/weight ratio
// (standard fractional-knapsack greedy).
for i := 1 to n do
    if (w[i] <= W) then
        x[i] := 1; W := W - w[i]
    else
        x[i] := W / w[i]; W := 0

5. Divide and Conquer
The divide-and-conquer strategy solves a problem by:
1. Breaking it into subproblems that are themselves smaller instances of the same type of problem
2. Recursively solving these subproblems

3. Appropriately combining their answers

Binary Search
Generally, to find a value in an unsorted array, we should look through the elements of the array one by one until the searched value is found. In case the searched value is absent from the array, we go through all the elements. On average, the complexity of such an algorithm is proportional to the length of the array.

Algorithm
The algorithm is quite simple. It can be done either recursively or iteratively:
1. Get the middle element.
2. If the middle element equals the searched value, the algorithm stops.
3. Otherwise, two cases are possible:
o The searched value is less than the middle element. In this case, go to step 1 for the part of the array before the middle element.
o The searched value is greater than the middle element. In this case, go to step 1 for the part of the array after the middle element.
Now we should define when the iterations stop. The first case is when the searched element is found. The second is when the subarray has no elements; in this case, we can conclude that the searched value is not present in the array.
Examples
Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 > 6): continue in the left part {-1, 5, 6, 18}
Step 2 (middle element is 5 < 6): continue in the right part {6, 18}
Step 3 (middle element is 6 == 6): found
Example 2. Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 < 103): continue in the right part {25, 46, 78, 102, 114}
Step 2 (middle element is 78 < 103): continue in the right part {102, 114}
Step 3 (middle element is 102 < 103): continue in the right part {114}
Step 4 (middle element is 114 > 103): continue in the left part, which is empty
Step 5 (the subarray is empty): the searched value is absent

procedure:
int binarySearch(int arr[], int value, int left, int right)
{
    while (left <= right) {
        int middle = (left + right) / 2;
        if (arr[middle] == value)
            return middle;
        else if (arr[middle] > value)
            right = middle - 1;
        else
            left = middle + 1;
    }
    return -1;  // value is absent
}

Analysis: the recurrence relation for binary search is
C(n) = C(n/2) + 1 for n > 1, C(1) = 1
Substituting n = 2^k, we get
C(2^k) = k + 1 = log2 n + 1
so the number of comparisons is C(n) = log2 n + 1, i.e. binary search runs in O(log2 n) time.

Finding the Maximum and Minimum element
A natural approach is to try a divide-and-conquer algorithm. Split the list into two sublists of equal size (assume that the initial list size is a power of two). Find the maxima and minima of the sublists. Two more comparisons then suffice to find the maximum and minimum of the whole list. The steps are:
Ø Divide the given array into two equal halves
Ø Repeat step 1 until the array contains a single element
Ø Combine the two arrays and select the minimum and maximum element
Ø Repeat step 3 until the final solution is found

procedure maxmin(A[1...n] of numbers) -> (min, max)

begin
    if (n == 1)
        return (A[1], A[1])
    else if (n == 2)
        if (A[1] < A[2]) return (A[1], A[2])
        else return (A[2], A[1])
    else
        (min_left, max_left) = maxmin(A[1...(n/2)])
        (min_right, max_right) = maxmin(A[(n/2 + 1)...n])
        if (max_left < max_right) max = max_right
        else max = max_left
        if (min_left < min_right) min = min_left
        else min = min_right
        return (min, max)
end

6. Dynamic Programming
Dynamic programming is a method for efficiently solving a broad range of search and optimization problems which exhibit the characteristics of overlapping subproblems and optimal substructure.

All-pairs shortest path problem
The all-pairs shortest path problem is the determination of the shortest graph distances between every pair of vertices in a given graph. The problem can be solved using repeated applications of Dijkstra's algorithm, or all at once using the Floyd-Warshall algorithm. The latter algorithm also works in the case of a weighted graph where the edges have negative weights (provided there are no negative-weight cycles). The matrix of all distances between pairs of vertices is called the graph distance matrix, or sometimes the all-pairs shortest path matrix.

Floyd's Algorithm
Floyd's algorithm takes as input the cost matrix C[v,w]
• C[v,w] = ∞ if (v,w) is not in E.
It returns as output
• a distance matrix D[v,w] containing the cost of the lowest-cost path from v to w; initially D[v,w] = C[v,w]
• a path matrix P, where P[v,w] holds the intermediate vertex k on the least-cost path between v and w that led to the cost stored in D[v,w].
Floyd's algorithm computes the sequence of matrices D0, D1, ..., D|V|. The distances in Di represent paths with intermediate vertices in Vi. Since Vi+1 = Vi U {vi+1}, we can obtain the distances in Di+1 from those in Di by considering only the paths that pass through vi+1. For every pair of vertices (v,w), we compare the distance Di(v,w) (which represents the shortest path from v to w that does not pass through vi+1) with the sum Di(v,vi+1) + Di(vi+1,w) (which represents the shortest path from v to w that does pass through vi+1).

7. Backtracking
Definition: backtracking is a process where steps are taken towards the final solution and the details are recorded. If these steps do not lead to a solution, some or all of them may have to be retraced and the relevant details discarded. In these circumstances it is often necessary to search through a large number of possible situations in search of feasible solutions.

General method
• A useful technique for optimizing search under some constraints
• Express the desired solution as an n-tuple (x1, . . . , xn) where each xi ∈ Si, Si being a finite set
• The solution is based on finding one or more vectors that maximize, minimize, or satisfy a criterion function P(x1, . . . , xn)
• Sorting an array a[n]
– Find an n-tuple where the element xi is the index of the i-th smallest element in a
– The criterion function is given by a[xi] ≤ a[xi+1] for 1 ≤ i < n

– The set Si is a finite set of integers in the range [1, n]
• Brute-force approach
– Let the size of set Si be mi
– There are m = m1·m2 · · · mn n-tuples that satisfy the criterion function P
– In the brute-force algorithm, you have to form all m n-tuples to determine the optimal solutions
• Backtracking approach
– Requires fewer than m trials to determine the solution
– Form a solution (partial vector) and check at every step if it has any chance of success
– If the solution at any point seems unpromising, ignore it
– If the partial vector (x1, x2, . . . , xi) cannot yield an optimal solution, ignore the mi+1 · · · mn possible test vectors without even looking at them

Solution space and tree organization
State-space search methods in problem solving are often illustrated using tree diagrams. A node in the state-space tree is promising if it corresponds to a partially constructed solution that may lead to a complete solution; otherwise the node is called non-promising. Leaves of the tree represent either non-promising dead ends or complete solutions found by the algorithm.
• The tree organization of the solution space is the state-space tree
• Each node in the state-space tree defines a problem state
• Solution states are those problem states s for which the path from the root to s defines a tuple in the solution space
• Answer states are those solution states s for which the path from the root to s defines a tuple that is a member of the set of solutions (it satisfies the implicit constraints)
• The solution space is partitioned into disjoint sub-solution spaces at each internal node
• Static vs. dynamic trees
– Static trees are independent of the problem instance

– Dynamic trees are dependent on the problem instance
• A node which has been generated, and all of whose children have not yet been generated, is called a live node
• The live node whose children are currently being generated is called the E-node
• A dead node is a generated node which is not to be expanded further, or all of whose children have been generated

The Eight Queens problem
Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.
Formulation:
Ø States: any arrangement of 0 to 8 queens on the board
Ø Initial state: 0 queens on the board
Ø Successor function: add a queen to any square
Ø Goal test: 8 queens on the board, none attacked
Idea of the solution:
• Each recursive call attempts to place a queen in a specific column
– A loop is used, since there are 8 squares in the column
• For a given call, the state of the board from previous placements is known (i.e. where are the other queens?)
• Current-step backtracking: if a placement within the column does not lead to a solution, the queen is removed and moved "down" the column
• Previous-step backtracking: when all rows in a column have been tried, the call terminates and backtracks to the previous call (in the previous column)
• Pruning: if a queen cannot be placed into column i, do not even try to place one into column i+1 – rather, backtrack to column i-1 and move the queen that had been placed there
• Using this approach we can reduce the number of potential solutions even more.
Algorithm
void NQueens(int k, int n)
// Using backtracking, this procedure prints all
// possible placements of n queens on an nXn
// chessboard so that they are nonattacking.
{

    for (int i = 1; i <= n; i++) {
        if (Place(k, i)) {
            x[k] = i;
            if (k == n) {
                for (int j = 1; j <= n; j++) cout << x[j] << ' ';
                cout << endl;
            }
            else NQueens(k + 1, n);
        }
    }
}

bool Place(int k, int i)
// Returns true if a queen can be placed in kth row and
// ith column. Otherwise it returns false. x[] is a
// global array whose first (k-1) values have been set.
// abs(r) returns the absolute value of r.
{
    for (int j = 1; j < k; j++)
        if ((x[j] == i)                       // two in the same column
            || (abs(x[j] - i) == abs(j - k))) // or in the same diagonal
            return false;
    return true;
}

8. Branch and Bound algorithm
A B&B algorithm searches the complete space of solutions for a given problem for the best solution.
Ø Branch and bound is a systematic method for solving optimization problems.
Ø B&B is a rather general optimization technique that applies where the greedy method and dynamic programming fail.
Ø However, it is much slower. Indeed, it often leads to exponential time complexities in the worst case.
Ø On the other hand, if applied carefully, it can lead to algorithms that run reasonably fast on average.

Ø The general idea of B&B is a BFS-like search for the optimal solution, but not all nodes get expanded (i.e., their children generated). Rather, a carefully selected criterion determines which node to expand and when, and another criterion tells the algorithm when an optimal solution has been found.

Traveling Salesperson Problem
The traveling salesperson problem is to find the shortest path in a directed graph that starts at a given vertex and visits each vertex in the graph exactly once. Such a path is called an optimal tour.
Construct the state-space tree:
Ø A node = a vertex in the graph.
Ø A node that is not a leaf represents all the tours that start with the path stored at that node; each leaf represents a tour (or a non-promising node).
Ø Branch and bound: we need to determine a lower bound for each node.
Ø Expand each promising node, and stop when all the promising nodes have been expanded. During this procedure, prune all the non-promising nodes.
– Promising node: the node's lower bound is less than the current minimum tour length.
– Non-promising node: the node's lower bound is no less than the current minimum tour length.
Ø Because a tour must leave every vertex exactly once, a lower bound b on the length of a tour is the sum of the minimum costs of leaving every vertex.
o The lower bound on the cost of leaving vertex v1 is given by the minimum of all the nonzero entries in row 1 of the adjacency matrix. The lower bound on the cost of leaving vertex vn is given by the minimum of all the nonzero entries in row n of the adjacency matrix.
Ø Note: this is not to say that there is a tour with this length. Rather, it says that there can be no shorter tour.
Ø Assume that the tour starts with v1.
Ø Because every vertex must be entered and exited exactly once, a lower bound on the length of a tour is the sum of the minimum costs of entering and leaving every vertex.
o For a given edge (u, v), think of half of its weight as the exiting cost of u, and half of its weight as the entering cost of v.
o The total length of a tour = the total cost of visiting (entering and exiting) every vertex exactly once.

o The lower bound on the length of a tour = the lower bound of the total cost of visiting (entering and exiting) every vertex exactly once.

Calculation:
Ø For each vertex, pick the two shortest adjacent edges (their sum divided by 2 is the lower bound on the total cost of entering and exiting that vertex).
Ø Add up these sums for all the vertices.
Ø Assume that the tour starts with vertex a and that b is visited before c.

Algorithm TSP
1. First, find all (n - 1)! possible solutions, where n is the number of cities.
2. Next, determine the cost of every one of these (n - 1)! solutions.
3. Finally, keep the one with the minimum cost.

A simpler nearest-neighbour construction:
Input
    Number of cities n
    Cost of traveling between the cities, c(i, j), i, j = 1, ..., n
    Start with city 1
Output
    Vector of cities and total cost
Main Steps
    Initialization:
        cost ← 0; visits ← 0; e ← 1   /* pointer to the currently visited city */
    For 1 ≤ r ≤ n do {
        choose pointer j with minimum = c(e, j) = min{ c(e, k) : visits(k) = 0 and 1 ≤ k ≤ n }
        cost ← cost + minimum
        visits(j) ← 1
        e ← j
        C(r) ← j
    }
    C(n) ← 1
    cost ← cost + c(e, 1)

9. NP-Hard and NP-Complete problems
NP stands for Non-deterministic Polynomial time. This means that the problem can be solved in polynomial time using a non-deterministic Turing machine (like a regular Turing machine but also including a non-deterministic "choice" function). Basically, a solution has to be testable in polynomial time. If that is the case, and a known NP problem can be solved using the given problem with modified input (i.e. an NP problem can be reduced to the given problem), then the problem is NP-complete. The main thing to take away from an NP-complete problem is that it cannot be solved in polynomial time in any known way. NP-Hard/NP-Complete is a way of showing that certain classes of problems are not solvable in realistic time.
NP-Complete means something very specific, and you have to be careful or you will get the definition wrong. First, an NP problem is a yes/no problem such that
1. there is a polynomial-time proof for every instance of the problem with a "yes" answer that the answer is "yes", or (equivalently)
2. there exists a polynomial-time algorithm (possibly using random variables) that has a nonzero probability of answering "yes" if the answer to an instance of the problem is "yes" and will say "no" 100% of the time if the answer is "no". In other words, the algorithm must have a false-negative rate less than 100% and no false positives.
A problem X is NP-Complete if
1. X is in NP, and
2. for any problem Y in NP, there is a "reduction" from Y to X: a polynomial-time algorithm that transforms any instance of Y into an instance of X such that the answer to the Y-instance is "yes" if and only if the answer to the X-instance is "yes".

What is NP?
NP is the set of all decision problems (questions with a yes-or-no answer) for which the 'yes' answers can be verified in polynomial time (O(n^k), where n is the problem size and k is a constant) by a deterministic Turing machine. Polynomial time is sometimes used as the definition of "fast" or "quickly".

What is P?
P is the set of all decision problems which can be solved in polynomial time by a deterministic Turing machine. Since such a problem can be solved in polynomial time, it can also be verified in polynomial time. Therefore P is a subset of NP.
What is NP-Complete?
A problem x that is in NP is also in NP-Complete if and only if every other problem in NP can be quickly (i.e. in polynomial time) transformed into x. In other words:
1. x is in NP, and
2. every problem in NP is reducible to x.

What is NP-Hard?
NP-Hard problems are problems that are at least as hard as the hardest problems in NP. Note that NP-Complete problems are also NP-hard. However, not all NP-hard problems are in NP (or are even decision problems), despite having 'NP' as a prefix. That is, the NP in NP-hard does not mean 'non-deterministic polynomial time'. Yes, this is confusing, but its usage is entrenched and unlikely to change.

Basic concepts
• Solvability of algorithms
– There are problems for which there is no known solution, for example, Turing's Halting Problem
– The halting problem cannot be solved by any computer, no matter how much time is provided; in algorithmic terms, there is no algorithm of any complexity to solve this problem
• Efficient algorithms
– Efficiency is measured in terms of speed
– For some problems, there is no known efficient solution
– A distinction is made between problems that can be solved in polynomial time and problems for which no polynomial-time algorithm is known
• Problems are classified as belonging to one of two groups

1. Problems with solution times bounded by a polynomial of a small degree
– Most searching and sorting algorithms
– Also called tractable problems
2. Problems whose best known algorithms are not bounded by a polynomial
– Hard, or intractable, problems
– Traveling salesperson (O(n²2ⁿ)), knapsack (O(2^(n/2)))
– None of the problems in this group has been solved by any polynomial-time algorithm
– NP-complete problems
• Theory of NP-completeness
– Shows that many of the problems with no polynomial-time algorithms are computationally related
– The group of problems is further subdivided into two classes:
NP-complete. A problem that is NP-complete can be solved in polynomial time iff all other NP-complete problems can also be solved in polynomial time.
NP-hard. If an NP-hard problem can be solved in polynomial time, then all NP-complete problems can also be solved in polynomial time.
– All NP-complete problems are NP-hard, but some NP-hard problems are known not to be NP-complete

Review Questions
Two Mark Questions
1. Define NP-hard.

2. List out and define the performance measures of an algorithm.

3. Define NP-complete.

4. State the algorithmic technique used in merge sort.

5. What is the worst-case complexity of Quicksort?

6. What is the type of the algorithm used in solving the 8 Queens problem?

Big Questions
1. Explain the dynamic programming algorithm in detail.
2. Explain the greedy algorithm with an example.
3. Discuss the following algorithm design techniques in detail:
a. Divide and conquer
b. Branch and Bound
4. Describe the concept of backtracking with an example.
5. What is meant by the randomized algorithmic technique? Give an example.
6. What is an NP-Complete problem? Describe any two such problems.

------END OF FIFTH UNIT ------
