Conference Abstracts Index

Abderramán Marrero, Jesús, 72; Beckermann, Bernhard, 18; Absil, Pierre-Antoine, 49, 78; Beitia, M. Asunción, 74; Acar, Evrim, 40; Belhaj, Skander, 89; Agullo, Emmanuel, 39, 51; Ben-Ari, Iddo, 24; Ahuja, Kapil, 41, 45; Bendito, Enrique, 19, 66; Aidoo, Anthony, 81; Benner, Peter, 22, 41, 42, 76, 81; Aishima, Kensuke, 50; Benning, Martin, 59; Akian, Marianne, 53; Bergamaschi, Luca, 58; Al-Ammari, Maha, 79; Bergqvist, Göran, 87; Al-Mohy, Awad, 96; Berman, Abraham, 33; Spence, Alastair, 56; Bernasconi, Michele, 81; Albera, Laurent, 86; Beyer, Lucas, 37; Ali, Murtaza, 37; Bientinesi, Paolo, 37; Allamigeon, Xavier, 59; Bing, Zheng, 82; Alter, O., 73; Bini, Dario A., 12; Amat, Sergio, 67, 76, 96; Bioucas-Dias, José, 55; Amestoy, Patrick, 65; Birken, Philipp, 92; Ammar, Amine, 26; Blank, Luise, 32; Amodei, Luca, 69; Boiteau, Olivier, 65; Amsallem, David, 15; Boito, Paola, 63; Andrianov, Alexander, 91; Bolten, Matthias, 14; Antoine, Xavier, 24; Boman, Erik G., 39; Antoulas, A.C., 43; Borges, Anabela, 83; Araújo, Cláudia Mendes, 95; Börm, Steffen, 56; Arbenz, Peter, 86; Borobia, Alberto, 80; Arioli, Mario, 36, 45, 58; Borwein, Jonathan M., 65; Armentia, Gorka, 84; Bouhamidi, Abderrahman, 28, 79; Ashcraft, Cleve, 65; Boumal, Nicolas, 78; Asif, M.
Salman, 53; Boutsidis, Christos, 18; Aurentz, Jared L., 64; Boyanova, Petia, 17; Avron, Haim, 21; Bozkurt, Durmuş, 71, 72; Axelsson, Owe, 17; Bozović, Nemanja, 44; Brás, Isabel, 81; Baboulin, Marc, 17, 21; Braman, Karen, 72; Bai, Zhong-Zhi, 25, 86, 92; Brannick, James, 30; Bailey, David H., 65; Brauchart, Johann S., 69; Baker, Christopher G., 77; Breiten, Tobias, 22; Ballard, Grey, 27, 34; Bro, Rasmus, 40; Baragaña, Itziar, 74; Bru, Rafael, 52; Barioli, Francesco, 32; Brualdi, Richard A., 26; Barlow, Jesse, 37; Buchot, Jean-Marie, 69; Barreras, Álvaro, 14; Budko, Neil, 24; Barrett, Wayne, 32; Bueno, Maria I., 64; Barrio, Roberto, 65, 66, 71; Buluç, Aydın, 34; Basermann, Achim, 75; Busquier, S., 67; Batselier, Kim, 71, 80; Butkovič, Peter, 60; Baum, Ann-Kristin, 82; Buttari, Alfredo, 65; Baykara, N. A., 97; Byckling, Mikko, 24; Beattie, Christopher, 18, 43, 46; Bıyıkoğlu, Türker, 66; Bebiano, Natália, 18; Becker, Dulceneia, 17, 21; Cai, Xiao-Chuan, 45; Becker, Stephen, 53; Calvin, Christophe, 50

2012 SIAM Conference on Applied Linear Algebra

Campos, Carmen, 57; Deadman, Edvin, 13, 84; Candès, Emmanuel, 53, 59; Delgado, Jorge, 95; Candela, Vicente F., 67, 68; Delvenne, Jean-Charles, 93; Canning, Andrew, 90; Demanet, Laurent, 99; Canogar, Roberto, 80; Demiralp, Metin, 78, 97, 97; Cantó, Rafael, 14; Demmel, James W., 27, 28, 34; Carapito, Ana Catarina, 81; Derevyagin, Maxim, 22; Carden, Russell L., 46, 60; Devesa, Antonio, 79; Cardoso, João R., 95; Dhillon, Inderjit S., 29; Carmona, Ángeles, 19, 66; Di Benedetto, Maria Domenica, 19; Carpentieri, Bruno, 94; Diao, Huaian, 37, 48; Carriegos, Miguel V., 98; Djordjević, Dragan S., 41; Carson, Erin, 34, 34; Dmytryshyn, Andrii, 74, 74; Casadei, Astrid, 74; Dolean, Victorita, 39; Castro González, Nieves, 40; Donatelli, Marco, 14, 35; Catral, M., 14; Dongarra, Jack, 17, 21, 51; Cerdán, Juana, 58; Dopico, Froilán M., 42, 42, 54, 64; Chan, Raymond H., 61, 92; Dreesen, Philippe, 71, 80; Chang, Xiao-Wen, 37; Drineas, Petros, 12, 18; Chaturantabut, Saifon, 18; Drmač, Zlatko, 46; Chen, Jie,
29; Druinsky, Alex, 27, 64; Chiang, Chun-Yueh, 47; Drummond, L. A. (Tony), 51, 57; Chinesta, Francisco, 26; Druskin, Vladimir, 18, 22; Chitour, Yacine, 19; Drăgănescu, Andrei, 92; Chiu, Jiawei, 99; Du, Xiuhong, 31; Choirat, Christine, 81; Duan, Yong, 94; Chow, Edmond, 35; Dubois, Jérôme, 50; Chu, Delin, 47; Duff, Iain S., 36, 36, 39; Chu, Eric King-wah, 43, 44, 94; Duminil, Sebastien, 100; Chu, Moody T., 44; Dunlavy, Daniel M., 40; Cicone, Antonio, 88; Effenberger, Cedric, 38; Constantinides, George A., 73; Eidelman, Yuli, 57, 63; Cornelio, Anastasia, 35; Elman, Howard, 56; Cravo, Glória, 87; Emad, Nahid, 50, 51; Criado, Regino, 94; Embree, Mark, 23, 60; Cueto, Elias, 26; Encinas, Andrés M., 19, 66; Cvetković-Ilić, Dragana S., 41; Erhel, Jocelyne, 58; Erlacher, Virginie, 33; da Cruz, Henrique F., 80; Erway, Jennifer B., 55, 61; Dachsel, Holger, 48; Ezquerro, J.A., 96; Dağ, Hasan, 96; Ezzatti, Pablo, 37; Dahl, Geir, 26; Damm, Tobias, 69; Faber, Vance, 73; Dandouna, Makarem, 50, 51; Fabregat-Traver, Diego, 37; Dassios, Ioannis K., 78; Fadili, Jalal, 53; de Hoop, Maarten V., 65; Fagas, Giorgos, 96; de Hoyos, Inmaculada, 74; Falcó, Antonio, 33; de la Puente, María Jesús, 59; Falgout, Robert D., 16; De Lathauwer, Lieven, 40; Fallat, Shaun M., 32, 33; de Lima, Teresa Pedroso, 83; Fan, Hung-Yuan, 94; De Moor, Bart, 71, 80; Farber, Miriam, 26; De Sterck, Hans, 10, 40; Farhat, Charbel, 15; de Sturler, Eric, 16, 18, 41, 45; Faverge, Mathieu, 51; De Terán, Fernando, 64, 64; Feng, Lihong, 42; De Vlieger, Jeroen, 61; Fenu, Caterina, 93; Fercoq, Olivier, 94; Greif, Chen, 20, 20; Fernandes, Rosário, 80; Grigori, Laura, 27, 31, 34, 86; Ferreira, Carla, 54; Grimes, Roger, 65; Ferrer, Josep, 98; Grubišić, Luka, 97; Ferronato, Massimiliano, 52; Gu, Ming, 27, 88; Fezzani, Riadh, 31; Guermouche, Abdou, 39, 51; Figueiredo, Mario, 55; Guerrero, Johana, 61; Filar, Jerzy, 24; Gugercin, Serkan, 16, 18, 41, 43, 46; Flaig, Cyril, 86; Guglielmi, Nicola, 15, 60, 62, 88; Fornasini, Ettore, 20; Guivarch,
Roman, 39; Förster, Malte, 88; Guo, Chun-Hua, 68; Freitag, Melina A., 56, 79; Gupta, Anshul, 11; Fritzsche, Bernd, 89; Gürbüzbalaban, Mert, 60; Fritzsche, David, 86; Gürvit, Ercan, 97; Frommer, Andreas, 86, 91; Guterman, Alexander, 80; Furtado, Susana, 18; Gutiérrez, José M., 67; Futamura, Yasunori, 50; Gutiérrez-Gutiérrez, Jesús, 23; Gutknecht, Martin H., 90; Galántai, Aurél, 51; Güttel, Stefan, 18, 84; Gallivan, Kyle, 77; Gyamfi, Kwasi Baah, 81; Gansterer, Wilfried N., 98; Gao, Weiguo, 82; Hached, Mustapha, 28, 79; García, Antonio G., 84; Hall, H. Tracy, 32; García, Esther, 94; Han, Lixing, 14; García-Planas, M. Isabel, 74, 83, 98; Hanke, Martin, 35; Gaspar, Francisco J., 99; Hartwig, R., 48; Gassó, María T., 95; Hauret, Patrice, 39; Gaubert, Stéphane, 53, 59, 94; Havé, Pascal, 30; Gaul, André, 90; Hayami, Ken, 38; Gavalec, Martin, 54, 60, 72; Hein, Matthias, 29; Geebelen, Dries, 75; Heinecke, Alexander, 49; Gemignani, Luca, 63; Heins, Pia, 59; Geuzaine, Christophe, 24; Henson, Van Emden, 16; Ghandchi Tehrani, Maryam, 62; Herault, Thomas, 51; Ghysels, Pieter, 34; Hernández, Araceli, 47; Gil-Fariña, María Candelaria, 99; Hernández, M.A., 96; Gillis, Nicolas, 55, 82; Hernández-Medina, Miguel Ángel, 84; Gillman, Adrianna, 91; Heroux, Michael A., 39; Giménez, Isabel, 95; Herranz, Victoria, 79; Giraldi, Loïc, 34; Herzog, Roland, 11, 32; Giraud, Luc, 39, 51; Hess, Martin, 76; Giscard, Pierre-Louis, 93; Heuveline, Vincent, 36; Gohberg, Israel, 63; Higham, Des, 10; Gökmen, Muhittin, 87; Higham, Nicholas J., 13, 43, 77, 84, 96; Goldberg, Felix, 33; Hladík, Milan, 88; Goldfarb, Donald, 59; Ho, Kenneth L., 57; González-Concepción, Concepción, 99; Hogben, Leslie, 32; Goubault, Eric, 59; Hollanders, Romain, 93; Gracia, Juan-Miguel, 54, 84; Holodnak, John T., 45; Gräf, Manuel, 70; Holtz, Olga, 27; Grant, Michael, 53; Hossain, Mohammad-Sahadet, 81; Grasedyck, Lars, 11, 69; Hsieh, Cho-Jui, 29; Grassmann, Winfried, 91; Hu, Xiaofei, 55; Gratton, Serge, 31; Huang, Ting-Zhu, 94; Greengard, Leslie, 57; Huang, Tsung-Ming, 47
Huang, Yu-Mei, 86; Klein, André, 96; Huckle, Thomas, 14, 52; Klymko, Christine, 85; Huhs, Georg, 50; Knight, Nicholas, 34, 34; Huhtanen, Marko, 24; Knizhnerman, Leonid, 84; Humet, Matthias, 57; Koev, Plamen, 95; Hunter, Jeffrey J., 24; Kolda, Tamara G., 40; Hunutlu, Fatih, 97; Kollmann, Markus, 25; Hur, Youngmi, 76; Kopal, Jiří, 42; Korkmaz, Evrim, 78; Iakovidis, Marios, 96; Koutis, Ioannis, 21; Iannazzo, Bruno, 12, 85; Kozubek, Tomáš, 74; Igual, Francisco D., 37; Kraus, Johannes, 16; Ipsen, Ilse C. F., 17, 45; Krendl, Wolfgang, 25; Ishteva, Mariya, 49; Kressner, Daniel, 44, 60; Kronbichler, Martin, 17; Jagels, Carl, 73; Krukier, Boris, 99; Jain, Sapna, 78; Krukier, Lev, 99; Jaklič, Gašper, 93; Kruschel, Christian, 75; Jaksch, Dieter, 93; Kučera, Radek, 74; Jameson, Antony, 92; Kumar, Pawan, 76; Janna, Carlo, 52; Kumar, Shiv Datt, 80; Jarlebring, Elias, 38, 69; Kunis, Susanne, 48; Jbilou, Khalide, 28, 28, 79; Kuo, Chun-Hung, 44; Jerez, Juan L., 73; Kuo, Yueh-Cheng, 47; Jeuris, Ben, 49; Kürschner, Patrick, 79; Jiang, Hao, 66, 71; Jiang, Mi, 90; L'Excellent, Jean-Yves, 65; Jiménez, Adrián, 59; Laayouni, Lahcen, 92; Jing, Yan-Fei, 94; Jiránek, Pavel, 31; Lacoste, Xavier, 75; Johansson, Pedher, 74; Landi, Germana, 55; Johansson, Stefan, 74, 74; Langguth, Johannes, 36; Pearson, John, 32; Langlois, Philippe, 78; Johnson, Charles R., 19; Langou, Julien, 51; Jokar, Sadegh, 43; Lantner, Roland, 93; Jönsthövel, Tom, 52; Latouche, Guy, 68; Joshi, Shantanu H., 61; Lattanzi, Marina, 47; Jungers, Raphaël M., 88, 93; Laub, Alan J., 18; Leader, Jeffery J., 77; Kabadshow, Ivo, 48; Lecomte, Christophe, 62; Kågström, Bo, 74; Legrain, Grégory, 34; Kahl, Karsten, 30, 88, 91; Lelièvre, Tony, 33; Kambites, Mark, 54; Lemos, Rute, 80; Kannan, Ramaseshan, 77; Leydold, Josef, 66; Karimi, Sahar, 59; Li, Qingshen, 90; Karow, Michael, 54; Li, Ren-Cang, 68; Katz, Ricardo D., 59; Li, Shengguo, 88; Kaya, Oguz, 35; Li, Tiexiang, 43, 44; Kempker, Pia L., 83; Li, Xiaoye S., 62, 63, 65; Kerrigan, Eric C., 73; Liesen, Jörg, 16, 73,
90; Khabou, Amal, 27; Lim, Lek-Heng, 45; Khan, Mohammad Kazim, 35; Lim, Yongdo, 49; Kılıç, Mustafa, 60; Lin, Lijing, 84; Kilmer, Misha E., 10, 18; Lin, Lin, 85; Kirkland, Steve, 25; Lin, Matthew M., 44, 47; Kirstein, Bernd, 89; Lin, Wen-Wei, 43, 44, 47; Lin, Yiding, 81; Miedlar, Agnieszka, 45; Lingsheng, Meng, 82; Mikkelsen, Carl Christian K., 81; Lippert, Th., 91; Miller, Gary, 21; Lipshitz, Benjamin, 27; Miller, Killian, 40; Lisbona, Francisco J., 99; Milzarek, Andre, 53; Litvinenko, Alexander, 27; Ming, Li, 45; Lorenz, Dirk, 53, 75; Mingueza, David, 83; Lu, Linzhang, 89; Miodragović, Suzana, 97; Lubich, Christian, 15; Mishra, Aditya Mani, 85; Lucas, Bob, 65; Mishra, Ratnesh Kumar, 80; Luce, Robert, 36; Mitjana, Margarida, 19, 66; Lukarski, Dimitar, 36; Miyata, Takafumi, 77; Modic, Jolanda, 93; Ma, Shiqian, 59; Möller, Michael, 59; Mach, Susann, 32; Montoro, M.
Recommended publications
  • 18.06 Linear Algebra, Problem Set 2 Solutions
    18.06 Problem Set 2 Solution. Total: 100 points.

    Section 2.5, Problem 24: Use Gauss-Jordan elimination on $[U \; I]$ to find the upper triangular $U^{-1}$:
    $$ UU^{-1} = I: \qquad \begin{bmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$

    Solution (4 points): Row reduce $[U \; I]$ to get $[I \; U^{-1}]$ as follows (here $R_i$ stands for the $i$th row). First $R_1 = R_1 - aR_2$ and $R_2 = R_2 - cR_3$:
    $$ \left[\begin{array}{ccc|ccc} 1 & a & b & 1 & 0 & 0 \\ 0 & 1 & c & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & b-ac & 1 & -a & 0 \\ 0 & 1 & 0 & 0 & 1 & -c \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right], $$
    then $R_1 = R_1 - (b-ac)R_3$:
    $$ \longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & -a & ac-b \\ 0 & 1 & 0 & 0 & 1 & -c \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]. $$

    Section 2.5, Problem 40 (Recommended): $A$ is a 4 by 4 matrix with 1's on the diagonal and $-a, -b, -c$ on the diagonal above. Find $A^{-1}$ for this bidiagonal matrix.

    Solution (12 points): Row reduce $[A \; I]$ to get $[I \; A^{-1}]$. First $R_1 = R_1 + aR_2$, $R_2 = R_2 + bR_3$, $R_3 = R_3 + cR_4$:
    $$ \left[\begin{array}{cccc|cccc} 1 & -a & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & -b & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -c & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{cccc|cccc} 1 & 0 & -ab & 0 & 1 & a & 0 & 0 \\ 0 & 1 & 0 & -bc & 0 & 1 & b & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right], $$
    then $R_1 = R_1 + abR_3$ and $R_2 = R_2 + bcR_4$:
    $$ \longrightarrow \left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 1 & a & ab & abc \\ 0 & 1 & 0 & 0 & 0 & 1 & b & bc \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right]. $$
    Alternatively, write A = I − N.
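The pattern found in Problem 40 can be checked numerically. Since the superdiagonal part N of A = I − N is nilpotent (N⁴ = 0), the inverse is the finite Neumann series I + N + N² + N³; the sketch below, with arbitrary sample values for a, b, c, is illustrative and not part of the original solution.

```python
import numpy as np

# Bidiagonal matrix from Problem 2.5.40: 1's on the diagonal,
# -a, -b, -c on the superdiagonal, i.e. A = I - N with N nilpotent.
a, b, c = 2.0, 3.0, 5.0
N = np.diag([a, b, c], k=1)          # strictly upper triangular part
A = np.eye(4) - N

# Since N^4 = 0, the inverse is the finite Neumann series I + N + N^2 + N^3.
A_inv = np.eye(4) + N + N @ N + N @ N @ N

# The pattern predicted by the row reduction: 1, a, ab, abc on the first row, etc.
expected = np.array([[1, a, a*b, a*b*c],
                     [0, 1, b,   b*c],
                     [0, 0, 1,   c],
                     [0, 0, 0,   1]])

assert np.allclose(A_inv, expected)
assert np.allclose(A @ A_inv, np.eye(4))
```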
  • [math.RA] 19 Jun 2003 Two Linear Transformations Each Tridiagonal with Respect to an Eigenbasis of the Other
    Two linear transformations each tridiagonal with respect to an eigenbasis of the other; comments on the parameter array. Paul Terwilliger. Abstract. Let $\mathbb{K}$ denote a field. Let $d$ denote a nonnegative integer and consider a sequence $p = (\theta_i, \theta^*_i,\ i = 0 \ldots d;\ \varphi_j, \phi_j,\ j = 1 \ldots d)$ consisting of scalars taken from $\mathbb{K}$. We call $p$ a parameter array whenever: (PA1) $\theta_i \neq \theta_j$, $\theta^*_i \neq \theta^*_j$ if $i \neq j$ ($0 \le i, j \le d$); (PA2) $\varphi_i \neq 0$, $\phi_i \neq 0$ ($1 \le i \le d$); (PA3) $\varphi_i = \phi_1 \sum_{h=0}^{i-1} \frac{\theta_h - \theta_{d-h}}{\theta_0 - \theta_d} + (\theta^*_i - \theta^*_0)(\theta_{i-1} - \theta_d)$ ($1 \le i \le d$); (PA4) $\phi_i = \varphi_1 \sum_{h=0}^{i-1} \frac{\theta_h - \theta_{d-h}}{\theta_0 - \theta_d} + (\theta^*_i - \theta^*_0)(\theta_{d-i+1} - \theta_0)$ ($1 \le i \le d$); (PA5) $(\theta_{i-2} - \theta_{i+1})(\theta_{i-1} - \theta_i)^{-1}$, $(\theta^*_{i-2} - \theta^*_{i+1})(\theta^*_{i-1} - \theta^*_i)^{-1}$ are equal and independent of $i$ for $2 \le i \le d-1$. In [13] we showed the parameter arrays are in bijection with the isomorphism classes of Leonard systems. Using this bijection we obtain the following two characterizations of parameter arrays. Assume $p$ satisfies PA1, PA2. Let $A, B, A^*, B^*$ denote the matrices in $\mathrm{Mat}_{d+1}(\mathbb{K})$ which have entries $A_{ii} = \theta_i$, $B_{ii} = \theta_{d-i}$, $A^*_{ii} = \theta^*_i$, $B^*_{ii} = \theta^*_i$ ($0 \le i \le d$), $A_{i,i-1} = 1$, $B_{i,i-1} = 1$, $A^*_{i-1,i} = \varphi_i$, $B^*_{i-1,i} = \phi_i$ ($1 \le i \le d$), and all other entries 0. We show the following are equivalent: (i) $p$ satisfies PA3–PA5; (ii) there exists an invertible $G \in \mathrm{Mat}_{d+1}(\mathbb{K})$ such that $G^{-1}AG = B$ and $G^{-1}A^*G = B^*$; (iii) for $0 \le i \le d$ the polynomial $\sum_{n=0}^{i} \frac{(\lambda - \theta_0)(\lambda - \theta_1) \cdots (\lambda - \theta_{n-1})\,(\theta^*_i - \theta^*_0)(\theta^*_i - \theta^*_1) \cdots (\theta^*_i - \theta^*_{n-1})}{\varphi_1 \varphi_2 \cdots \varphi_n}$ is a scalar multiple of the polynomial $\sum_{n=0}^{i} \frac{(\lambda - \theta_d)(\lambda - \theta_{d-1}) \cdots (\lambda - \theta_{d-n+1})\,(\theta^*_i - \theta^*_0)(\theta^*_i - \theta^*_1) \cdots (\theta^*_i - \theta^*_{n-1})}{\phi_1 \phi_2 \cdots \phi_n}$.
  • Self-Interlacing Polynomials II: Matrices with Self-Interlacing Spectrum
    SELF-INTERLACING POLYNOMIALS II: MATRICES WITH SELF-INTERLACING SPECTRUM. MIKHAIL TYAGLOV. Abstract. An n × n matrix is said to have a self-interlacing spectrum if its eigenvalues λ_k, k = 1, …, n, are distributed as follows: λ_1 > −λ_2 > λ_3 > ⋯ > (−1)^{n−1} λ_n > 0. A method for constructing sign definite matrices with self-interlacing spectra from totally nonnegative ones is presented. We apply this method to bidiagonal and tridiagonal matrices. In particular, we generalize a result by O. Holtz on the spectrum of real symmetric anti-bidiagonal matrices with positive nonzero entries. 1. Introduction. In [5] there were introduced the so-called self-interlacing polynomials. A polynomial p(z) is called self-interlacing if all its roots are real, simple and interlacing the roots of the polynomial p(−z). It is easy to see that if λ_k, k = 1, …, n, are the roots of a self-interlacing polynomial, then they are distributed as follows: (1.1) λ_1 > −λ_2 > λ_3 > ⋯ > (−1)^{n−1} λ_n > 0, or (1.2) −λ_1 > λ_2 > −λ_3 > ⋯ > (−1)^n λ_n > 0. The polynomials whose roots are distributed as in (1.1) (resp. in (1.2)) are called self-interlacing of kind I (resp. of kind II). It is clear that a polynomial p(z) is self-interlacing of kind I if, and only if, the polynomial p(−z) is self-interlacing of kind II. Thus, it is enough to study self-interlacing polynomials of kind I, since all the results for self-interlacing polynomials of kind II will be obtained automatically. Definition 1.1. An n × n matrix is said to possess a self-interlacing spectrum if its eigenvalues λ_k, k = 1, …, n, are real, simple, and are distributed as in (1.1).
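Condition (1.1) is easy to test programmatically. The helper below is an illustrative sketch (not from the paper): it checks whether a list of eigenvalues, given in the order λ_1, …, λ_n, satisfies the kind-I pattern.

```python
def is_self_interlacing_kind_I(lams):
    """Check condition (1.1): lam1 > -lam2 > lam3 > ... > (-1)^(n-1)*lam_n > 0,
    for eigenvalues given in the order lam1, lam2, ..., lam_n."""
    signed = [((-1) ** k) * lam for k, lam in enumerate(lams)]
    # The chain of signed values must be strictly decreasing and end above zero,
    # which also forces the signs of the lam_k to alternate.
    return all(x > y for x, y in zip(signed, signed[1:])) and signed[-1] > 0

# Alternating signs with strictly decreasing moduli: kind I.
print(is_self_interlacing_kind_I([5.0, -4.0, 3.0, -2.0, 1.0]))  # True
# Moduli not decreasing: not self-interlacing.
print(is_self_interlacing_kind_I([5.0, -4.0, 4.5]))             # False
```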
  • Accurate Singular Values of Bidiagonal Matrices
    Accurate Singular Values of Bidiagonal Matrices (appeared in the SIAM J. Sci. Stat. Comput., v. 11, n. 5, pp. 873-912, 1990). James Demmel, Courant Institute, 251 Mercer Str., New York, NY 10012; W. Kahan, Computer Science Division, University of California, Berkeley, CA 94720. Abstract. Computing the singular values of a bidiagonal matrix is the final phase of the standard algorithm for the singular value decomposition of a general matrix. We present a new algorithm which computes all the singular values of a bidiagonal matrix to high relative accuracy independent of their magnitudes. In contrast, the standard algorithm for bidiagonal matrices may compute small singular values with no relative accuracy at all. Numerical experiments show that the new algorithm is comparable in speed to the standard algorithm, and frequently faster. Keywords: singular value decomposition, bidiagonal matrix, QR iteration. AMS(MOS) subject classifications: 65F20, 65G05, 65F35. 1. Introduction. The standard algorithm for computing the singular value decomposition (SVD) of a general real matrix A has two phases [7]: 1) Compute orthogonal matrices $P_1$ and $Q_1$ such that $B = P_1^T A Q_1$ is in bidiagonal form, i.e. has nonzero entries only on its diagonal and first superdiagonal. 2) Compute orthogonal matrices $P_2$ and $Q_2$ such that $\Sigma = P_2^T B Q_2$ is diagonal and nonnegative. The diagonal entries $\sigma_i$ of $\Sigma$ are the singular values of A. We will take them to be sorted in decreasing order: $\sigma_i \ge \sigma_{i+1}$. The columns of $Q = Q_1 Q_2$ are the right singular vectors, and the columns of $P = P_1 P_2$ are the left singular vectors.
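As a small illustration of phase 2's input (not of the Demmel-Kahan algorithm itself), the bidiagonal matrix below has singular values known in closed form, so a library SVD can be checked against them.

```python
import numpy as np

# An upper bidiagonal matrix with singular values known in closed form:
# for B = [[1, 1], [0, 1]], B^T B = [[1, 1], [1, 2]] has eigenvalues
# (3 ± sqrt(5))/2, so the singular values are the golden ratio and its inverse.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
s = np.linalg.svd(B, compute_uv=False)   # returned in decreasing order

phi = (1 + np.sqrt(5)) / 2
assert np.allclose(s, [phi, 1 / phi])
```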
  • High-Performance Parallel Computations Using Python As High-Level Language
    Aachen Institute for Advanced Study in Computational Engineering Science. Preprint: AICES-2010/08-01, 23/August/2010. High-Performance Parallel Computations using Python as High-Level Language. Stefano Masini and Paolo Bientinesi, RWTH Aachen, AICES, Aachen, Germany. Financial support from the Deutsche Forschungsgemeinschaft (German Research Association) through grant GSC 111 is gratefully acknowledged. © Stefano Masini and Paolo Bientinesi 2010. All rights reserved. List of AICES technical reports: http://www.aices.rwth-aachen.de/preprints. Abstract. High-performance and parallel computations have always represented a challenge in terms of code optimization and memory usage, and have typically been tackled with languages that allow a low-level management of resources, like Fortran, C and C++. Nowadays, most of the implementation effort goes into constructing the bookkeeping logic that binds together functionalities taken from standard libraries. Because of the increasing complexity of this kind of code, it becomes more and more necessary to keep it well organized through proper software engineering practices. Indeed, in the presence of chaotic implementations, reasoning about correctness is difficult, even when limited to specific aspects like concurrency; moreover, due to the lack of flexibility of the code, making substantial changes for experimentation becomes a grand challenge. Since the bookkeeping logic only accounts for a tiny fraction of the total execution time, we believe that for such a task it can be afforded to introduce an overhead due to a high-level language. We consider Python as a preliminary candidate with the intent of improving code readability, flexibility and, in turn, the level of confidence with respect to correctness.
  • Design and Evaluation of Tridiagonal Solvers for Vector and Parallel Computers Universitat Politècnica De Catalunya
    Design and Evaluation of Tridiagonal Solvers for Vector and Parallel Computers. Author: Josep Lluis Larriba Pey. Advisor: Juan José Navarro Guerrero. Barcelona, January 1995. UPC, Universitat Politècnica de Catalunya, Departament d'Arquitectura de Computadors. Doctoral thesis presented by Josep Lluis Larriba Pey in order to obtain the degree of Doctor in Computer Science from the Universitat Politècnica de Catalunya. To my wife Marta. To my parents Elvira and José Luis. "A journey of a thousand miles must begin with a single step" (Lao-Tse). Acknowledgements. I would like to thank my parents, José Luis and Elvira, for their unconditional help and love to me. I will always be indebted to you. I also want to thank my wife, Marta, for being the sparkle in my life, for her support and for understanding my (good) moods. I thank Juanjo, my advisor, for his friendship, patience and good advice. Also, I want to thank Àngel Jorba for his unconditional collaboration, work and support in some of the contributions of this work. I thank the members of the "Comissió de Doctorat del DAG" for their comments and especially Miguel Valero for his suggestions on the topics of chapter 4. I thank Mateo Valero for his friendship and always good advice.
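The classic sequential baseline for tridiagonal solvers is the Thomas algorithm. The sketch below is illustrative only (it is not taken from the thesis): it solves a tridiagonal system in O(n) operations and checks the result against a dense solve.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with subdiagonal a (length n-1), diagonal b
    (length n), superdiagonal c (length n-1) and right-hand side d, using the
    Thomas algorithm (no pivoting; assumes stability, e.g. diagonal dominance)."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Compare against a dense solve on a diagonally dominant system.
n = 6
a = -np.ones(n - 1); c = -np.ones(n - 1); b = 4.0 * np.ones(n)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
assert np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(A, d))
```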
  • Iterative Refinement for Symmetric Eigenvalue Decomposition II
    Japan Journal of Industrial and Applied Mathematics (2019) 36:435–459. https://doi.org/10.1007/s13160-019-00348-4. ORIGINAL PAPER. Iterative refinement for symmetric eigenvalue decomposition II: clustered eigenvalues. Takeshi Ogita · Kensuke Aishima. Received: 22 October 2018 / Revised: 5 February 2019 / Published online: 22 February 2019. © The Author(s) 2019. Abstract. We are concerned with accurate eigenvalue decomposition of a real symmetric matrix A. In the previous paper (Ogita and Aishima in Jpn J Ind Appl Math 35(3): 1007–1035, 2018), we proposed an efficient refinement algorithm for improving the accuracy of all eigenvectors, which converges quadratically if a sufficiently accurate initial guess is given. However, since the accuracy of eigenvectors depends on the eigenvalue gap, it is difficult to provide such an initial guess to the algorithm in the case where A has clustered eigenvalues. To overcome this problem, we propose a novel algorithm that can refine approximate eigenvectors corresponding to clustered eigenvalues on the basis of the algorithm proposed in the previous paper. Numerical results are presented showing excellent performance of the proposed algorithm in terms of convergence rate and overall computational cost and illustrating an application to a quantum materials simulation. Keywords: Accurate numerical algorithm · Iterative refinement · Symmetric eigenvalue decomposition · Clustered eigenvalues. Mathematics Subject Classification 65F15 · 15A18 · 15A23. This study was partially supported by CREST, JST and JSPS KAKENHI Grant numbers 16H03917, 25790096. Takeshi Ogita [email protected]; Kensuke Aishima [email protected]. 1 Division of Mathematical Sciences, School of Arts and Sciences, Tokyo Woman's Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585, Japan. 2 Faculty of Computer and Information Sciences, Hosei University, 3-7-2 Kajino-cho, Koganei-shi, Tokyo 184-8584, Japan.
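The general idea of refining an approximate eigenvector can be illustrated with one shifted inverse-iteration step using the Rayleigh quotient. This is only a conceptual sketch under my own setup (random symmetric matrix, perturbed exact eigenvector), not the Ogita-Aishima algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                     # random symmetric test matrix

# Exact eigenpair, then a deliberately perturbed "initial guess".
w, V = np.linalg.eigh(A)
x = V[:, 0] + 1e-3 * rng.standard_normal(n)
x /= np.linalg.norm(x)

def residual(A, x):
    rho = x @ A @ x                   # Rayleigh quotient
    return np.linalg.norm(A @ x - rho * x)

def refine(A, x):
    """One inverse-iteration step with the Rayleigh quotient as shift."""
    rho = x @ A @ x
    y = np.linalg.solve(A - rho * np.eye(len(x)), x)
    return y / np.linalg.norm(y)

r0 = residual(A, x)
x = refine(A, x)
r1 = residual(A, x)
assert r1 < r0   # the refined eigenvector has a much smaller residual
```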
  • Computing the Moore-Penrose Inverse for Bidiagonal Matrices
    UDC 519.65. Yu. Hakopian. DOI: https://doi.org/10.18523/2617-70802201911-23. COMPUTING THE MOORE-PENROSE INVERSE FOR BIDIAGONAL MATRICES. The Moore-Penrose inverse is the most popular type of matrix generalized inverse, with many applications both in matrix theory and in numerical linear algebra. It is well known that the Moore-Penrose inverse can be found via singular value decomposition. In this regard, the most effective algorithm consists of two stages. In the first stage, through the use of Householder reflections, an initial matrix is reduced to upper bidiagonal form (the Golub-Kahan bidiagonalization algorithm). The second stage is known in the scientific literature as the Golub-Reinsch algorithm. This is an iterative procedure which, with the help of Givens rotations, generates a sequence of bidiagonal matrices converging to a diagonal form. This allows one to obtain an iterative approximation to the singular value decomposition of the bidiagonal matrix. The principal intention of the present paper is to develop a method which can be considered as an alternative to the Golub-Reinsch iterative algorithm. Realizing the approach proposed in the study, the following two main results have been achieved. First, we obtain explicit expressions for the entries of the Moore-Penrose inverse of bidiagonal matrices. Secondly, based on the closed-form formulas, we get a finite recursive numerical algorithm of optimal computational complexity. Thus, we can compute the Moore-Penrose inverse of bidiagonal matrices without using the singular value decomposition. Keywords: Moore-Penrose inverse, bidiagonal matrix, inversion formula, finite recursive algorithm.
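Whatever route computes it, the Moore-Penrose inverse is characterized by the four Penrose conditions. The sketch below is my own illustration (it uses the SVD-based pseudoinverse, not the paper's closed-form formulas): it builds a rank-deficient upper bidiagonal matrix and verifies all four conditions.

```python
import numpy as np

# A rank-deficient upper bidiagonal matrix (a zero on the diagonal breaks full rank):
# diagonal (1, 0, 4), superdiagonal (2, 3).
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 4.0]])
P = np.linalg.pinv(B)   # Moore-Penrose inverse via SVD

# The four Penrose conditions characterizing the pseudoinverse.
assert np.allclose(B @ P @ B, B)
assert np.allclose(P @ B @ P, P)
assert np.allclose((B @ P).T, B @ P)
assert np.allclose((P @ B).T, P @ B)
```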
  • Basic Matrices
    Basic matrices. Miroslav Fiedler, Institute of Computer Science, Czech Academy of Sciences, 182 07 Prague 8, Czech Republic. Linear Algebra and its Applications 373 (2003) 143–151, www.elsevier.com/locate/laa. Received 30 April 2002; accepted 13 November 2002. Submitted by B. Shader. Abstract. We define a basic matrix as a square matrix which has both subdiagonal and superdiagonal rank at most one. We investigate, partly using additional restrictions, the relationship of basic matrices to factorizations. Special basic matrices are also mentioned. © 2003 Published by Elsevier Inc. AMS classification: 15A23; 15A33. Keywords: Structure rank; Tridiagonal matrix; Oscillatory matrix; Factorization; Orthogonal matrix. 1. Introduction and preliminaries. For m × n matrices, structure (cf. [2]) will mean any nonvoid subset of M × N, M = {1, …, m}, N = {1, …, n}. Given a matrix A (over a field, or a ring) and a structure, the structure rank of A is the maximum rank of any submatrix of A all entries of which are contained in the structure. The most important examples of structure ranks for square n × n matrices are the subdiagonal rank (S = {(i, k); n ≥ i > k ≥ 1}), the superdiagonal rank (S = {(i, k); 1 ≤ i < k ≤ n}), the off-diagonal rank (S = {(i, k); i ≠ k, 1 ≤ i, k ≤ n}), the block-subdiagonal rank, etc. All these ranks enjoy the following property: Theorem 1.1 [2, Theorems 2 and 3]. If a nonsingular matrix has subdiagonal (superdiagonal, off-diagonal, block-subdiagonal, etc.) rank k, then the inverse also has subdiagonal (…) rank k.
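For the subdiagonal structure S = {(i, k): i > k}, any submatrix contained entirely in S lies inside some lower-left block A[p+1..n, 1..p], so the structure rank can be computed as a maximum over those blocks. A small sketch of this observation (my illustration, not from the paper):

```python
import numpy as np

def subdiagonal_rank(A):
    """Structure rank for S = {(i, k): i > k}: a submatrix A[I, J] lies in S
    iff min(I) > max(J), so it sits inside a lower-left block A[p:, :p];
    the structure rank is the maximum rank over these blocks."""
    n = A.shape[0]
    return max(np.linalg.matrix_rank(A[p:, :p]) for p in range(1, n))

# A tridiagonal matrix has subdiagonal rank one: each block A[p:, :p]
# contains a single nonzero entry, at position (p+1, p) of A.
T = np.diag(np.arange(1.0, 6.0)) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
assert subdiagonal_rank(T) == 1
assert subdiagonal_rank(T.T) == 1   # superdiagonal rank, by the same argument
```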
  • LAPACK Working Note 195: ScaLAPACK's MRRR Algorithm
    LAPACK WORKING NOTE 195: SCALAPACK'S MRRR ALGORITHM. CHRISTOF VÖMEL. Abstract. The sequential algorithm of Multiple Relatively Robust Representations, MRRR, can compute numerically orthogonal eigenvectors of an unreduced symmetric tridiagonal matrix T ∈ R^{n×n} with O(n^2) cost. This paper describes the design of ScaLAPACK's parallel MRRR algorithm. One emphasis is on the critical role of the representation tree in achieving both numerical accuracy and parallel scalability. A second point concerns the favorable properties of this code: subset computation, the use of static memory, and scalability. Unlike ScaLAPACK's Divide & Conquer and QR, MRRR can compute subsets of eigenpairs at reduced cost. And in contrast to inverse iteration, which can fail, it is guaranteed to produce a numerically satisfactory answer while maintaining memory scalability. ParEig, the parallel MRRR algorithm for PLAPACK, uses dynamic memory allocation. This is avoided by our code at marginal additional cost. We also use a different representation tree criterion that allows for more accurate computation of the eigenvectors but can make parallelization more difficult. AMS subject classifications. 65F15, 65Y15. Key words. Symmetric eigenproblem, multiple relatively robust representations, ScaLAPACK, numerical software. 1. Introduction. Since 2005, the National Science Foundation has been funding an initiative [19] to improve the LAPACK [3] and ScaLAPACK [14] libraries for Numerical Linear Algebra computations. One of the ambitious goals of this initiative is to put more of LAPACK into ScaLAPACK, recognizing the gap between available sequential and parallel software support. In particular, parallelization of the MRRR algorithm [24, 50, 51, 26, 27, 28, 29], the topic of this paper, is identified as one key improvement to ScaLAPACK.
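If SciPy is available, the sequential MRRR kernel can be exercised through scipy.linalg.eigh_tridiagonal by requesting the 'stemr' LAPACK driver. The sketch below (my illustration, unrelated to the ScaLAPACK code) checks the known eigenvalues of the 1-D Laplacian stencil and the numerical orthogonality MRRR is designed to guarantee.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Symmetric tridiagonal matrix T given by its diagonal d and off-diagonal e.
n = 10
d = np.full(n, 2.0)
e = np.full(n - 1, -1.0)

# 'stemr' asks LAPACK for the MRRR driver (xSTEMR).
w, V = eigh_tridiagonal(d, e, lapack_driver='stemr')

# Known eigenvalues of the 1-D Laplacian stencil tridiag(-1, 2, -1):
# lambda_k = 2 - 2*cos(k*pi/(n+1)), k = 1..n.
k = np.arange(1, n + 1)
assert np.allclose(np.sort(w), 2 - 2 * np.cos(k * np.pi / (n + 1)))

# Eigenvectors are numerically orthogonal, as MRRR guarantees.
assert np.allclose(V.T @ V, np.eye(n), atol=1e-10)
```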
  • Junior Eurovision 2016 Guide
    JUNIOR EUROVISION SONG CONTEST: THE EUROPEAN FESTIVAL OF MUSIC, CHILD-SIZED. What is the Junior Eurovision Song Contest? It is the "junior" version of the Eurovision Song Contest, Europe's biggest music competition, and, like the adult festival, it is organized by the EBU, the European Broadcasting Union, the body that brings together the public television and radio broadcasters of Europe and the Mediterranean basin. Junior Eurovision is aimed at children and teenagers from 9 to 14 years old (the age limit was lowered as of this edition; until 2015 it was 10-16), whether or not they have had previous singing experience (a rule introduced in 2008: before that, they had to be complete newcomers). The idea arose in 2003, inspired by competitions for children organized in the Scandinavian countries, where the (adults') Eurovision Song Contest is followed almost like a religion. The first two editions were in fact hosted by Denmark and Norway. Curiously, though, after the first editions the Scandinavian countries stepped aside, with the exception of Sweden. How does the Junior Eurovision Song Contest work? Exactly as at the adults' Eurovision; we can therefore say that it is "the broadcasters" that compete, each with its own representative. Compared with the adult event, there are some substantial differences: The singer (or group) selected must strictly hold the nationality of the country represented. The only exception has been allowed for the Republic of San Marino (absent this year). In the "adults'" event there are no such restrictions, but full freedom. The songs must be performed in one of the national languages for at least 70% of their duration, which must be between 2'45" and 3', and must be completely unreleased at the time of the official presentation on the contest's website or of participation in the national selection contest.
  • Numerical Linear Algebra Texts in Applied Mathematics 55
    Grégoire Allaire, Sidi Mahmoud Kaber: Numerical Linear Algebra. Texts in Applied Mathematics 55. Editors: J.E. Marsden, L. Sirovich, S.S. Antman. Advisors: G. Iooss, P. Holmes, D. Barkley, M. Dellnitz, P. Newton. Texts in Applied Mathematics: 1. Sirovich: Introduction to Applied Mathematics. 2. Wiggins: Introduction to Applied Nonlinear Dynamical Systems and Chaos. 3. Hale/Koçak: Dynamics and Bifurcations. 4. Chorin/Marsden: A Mathematical Introduction to Fluid Mechanics, 3rd ed. 5. Hubbard/West: Differential Equations: A Dynamical Systems Approach: Ordinary Differential Equations. 6. Sontag: Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd ed. 7. Perko: Differential Equations and Dynamical Systems, 3rd ed. 8. Seaborn: Hypergeometric Functions and Their Applications. 9. Pipkin: A Course on Integral Equations. 10. Hoppensteadt/Peskin: Modeling and Simulation in Medicine and the Life Sciences, 2nd ed. 11. Braun: Differential Equations and Their Applications, 4th ed. 12. Stoer/Bulirsch: Introduction to Numerical Analysis, 3rd ed. 13. Renardy/Rogers: An Introduction to Partial Differential Equations. 14. Banks: Growth and Diffusion Phenomena: Mathematical Frameworks and Applications. 15. Brenner/Scott: The Mathematical Theory of Finite Element Methods, 2nd ed. 16. Van de Velde: Concurrent Scientific Computing. 17. Marsden/Ratiu: Introduction to Mechanics and Symmetry, 2nd ed. 18. Hubbard/West: Differential Equations: A Dynamical Systems Approach: Higher-Dimensional Systems. 19. Kaplan/Glass: Understanding Nonlinear Dynamics. 20. Holmes: Introduction to Perturbation Methods. 21. Curtain/Zwart: An Introduction to Infinite-Dimensional Linear Systems Theory. 22. Thomas: Numerical Partial Differential Equations: Finite Difference Methods. 23. Taylor: Partial Differential Equations: Basic Theory. 24. Merkin: Introduction to the Theory of Stability of Motion.