BACK TO INDEX

Publications of year 2012
Conference articles
  1. G. Bosilca, M. Faverge, X. Lacoste, I. Yamazaki, and P. Ramet. Toward a supernodal sparse direct solver over DAG runtimes. In Proceedings of PMAA'2012, London, UK, June 2012. Keyword(s): Sparse.
    Abstract:
    The current trend in high performance computing shows a dramatic increase in the number of cores on shared memory compute nodes. Algorithms, especially those related to linear algebra, need to be adapted to these new computer architectures in order to be efficient. PaStiX is a sparse parallel direct solver that incorporates a dynamic scheduler for strongly hierarchical modern architectures. In this work, we study the replacement of this internal, highly integrated scheduling strategy with two generic runtime frameworks: DAGuE and StarPU. These runtimes make it possible to execute the factorization task graph on emerging computers equipped with accelerators. Following previous work done in dense linear algebra, we present the kernels used for GPU computations, inspired by the MAGMA library, and the DAG algorithm used with those two runtimes. A comparative study of the performance of the supernodal solver with the three different schedulers is carried out on manycore architectures, and the improvements obtained with accelerators are presented with the StarPU runtime. These results demonstrate that these DAG runtimes provide uniform programming interfaces to obtain high performance on different architectures for irregular problems such as sparse direct factorizations.

    @InProceedings{C:LaBRI::PMAA2012,
    author = "Bosilca, G. and Faverge, M. and Lacoste, X. and Yamazaki, I. and Ramet, P.",
    title = "Toward a supernodal sparse direct solver over {DAG} runtimes",
    booktitle = "Proceedings of {PMAA}'2012",
    year = "2012",
    address = {London, UK},
    month = jun,
    KEYWORDS = "Sparse",
    ABSTRACT = {The current trend in high performance computing shows a dramatic increase in the number of cores on shared memory compute nodes. Algorithms, especially those related to linear algebra, need to be adapted to these new computer architectures in order to be efficient. PaStiX is a sparse parallel direct solver that incorporates a dynamic scheduler for strongly hierarchical modern architectures. In this work, we study the replacement of this internal, highly integrated scheduling strategy with two generic runtime frameworks: DAGuE and StarPU. These runtimes make it possible to execute the factorization task graph on emerging computers equipped with accelerators. Following previous work done in dense linear algebra, we present the kernels used for GPU computations, inspired by the MAGMA library, and the DAG algorithm used with those two runtimes. A comparative study of the performance of the supernodal solver with the three different schedulers is carried out on manycore architectures, and the improvements obtained with accelerators are presented with the StarPU runtime. These results demonstrate that these DAG runtimes provide uniform programming interfaces to obtain high performance on different architectures for irregular problems such as sparse direct factorizations.}
    }
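The abstract above describes expressing the sparse factorization as a graph of tasks (panel factorizations, off-diagonal solves, updates) handed to a generic runtime such as DAGuE or StarPU. As a minimal, hypothetical sketch — using Python's standard `graphlib` rather than the actual StarPU or DAGuE APIs, and an invented 3-supernode DAG — the dependency-driven ordering looks like:

```python
from graphlib import TopologicalSorter

# Hypothetical task DAG for a 3-supernode sparse Cholesky factorization:
# POTRF = panel factorization, TRSM = off-diagonal solve, GEMM = update.
# Each entry maps a task to the set of tasks that must finish before it.
deps = {
    "TRSM(1,0)": {"POTRF(0)"},
    "TRSM(2,0)": {"POTRF(0)"},
    "GEMM(2,1)": {"TRSM(1,0)", "TRSM(2,0)"},
    "POTRF(1)":  {"TRSM(1,0)"},
    "TRSM(2,1)": {"POTRF(1)", "GEMM(2,1)"},
    "POTRF(2)":  {"TRSM(2,0)", "TRSM(2,1)"},
}

# A runtime would dispatch each ready task to a CPU or GPU worker;
# here we just materialize one valid topological order.
order = list(TopologicalSorter(deps).static_order())

# Any valid schedule starts with POTRF(0) and ends with POTRF(2).
print(order[0], order[-1])
```

A real runtime dispatches every task whose predecessors have completed to an available CPU or GPU worker concurrently, rather than serializing a single topological order as this sketch does.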
    


  2. A. Casadei and P. Ramet. Memory Optimization to Build a Schur Complement. In SIAM Conference on Applied Linear Algebra, Valencia, Spain, June 2012. Keyword(s): Sparse.
    @InProceedings{C:LaBRI::la12a,
    author = {Casadei, A. and Ramet, P.},
    title = {Memory Optimization to Build a Schur Complement},
    booktitle = {SIAM Conference on Applied Linear Algebra},
    year = {2012},
    address = {Valencia, Spain},
    month = jun,
    KEYWORDS = "Sparse",
    URL = {http://www.labri.fr/~ramet/restricted/la12a.pdf}
    }
    


  3. M. Faverge and P. Ramet. Fine Grain Scheduling for Sparse Solver on Manycore Architectures. In 15th SIAM Conference on Parallel Processing for Scientific Computing, Savannah, USA, February 2012. Keyword(s): Sparse.
    Abstract:
    The emergence of manycore architectures introduces variations in computation costs, which makes precise cost models hard to realize. Static schedulers based on cost models, like the one used in the sparse direct solver PaStiX, are no longer adapted. We describe the dynamic scheduler developed for the supernodal method of PaStiX to correct the imperfections of the static model. The presented solution exploits the elimination tree of the problem to preserve data locality during the execution.

    @InProceedings{C:LaBRI::siam2012,
    author = {Faverge, M. and Ramet, P.},
    title = {Fine Grain Scheduling for Sparse Solver on Manycore Architectures},
    booktitle = {15th {SIAM} Conference on Parallel Processing for Scientific Computing},
    year = {2012},
    address = {Savannah, USA},
    month = feb,
    KEYWORDS = "Sparse",
    ABSTRACT = {The emergence of manycore architectures introduces variations in computation costs, which makes precise cost models hard to realize. Static schedulers based on cost models, like the one used in the sparse direct solver \textsc{PaStiX}, are no longer adapted. We describe the dynamic scheduler developed for the supernodal method of \textsc{PaStiX} to correct the imperfections of the static model. The presented solution exploits the elimination tree of the problem to preserve data locality during the execution.}
    }
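The scheduling idea summarized in this abstract — keep the tasks of one elimination-tree subtree on the same worker so data stays local, and steal only when idle — can be illustrated with a toy tree. The parent array, the two-worker split, and the postorder traversal below are illustrative assumptions, not PaStiX's actual mapping:

```python
# Toy elimination tree: parent[i] is the parent of node i; the root has parent -1.
#         6
#        / \
#       2   5
#      / \ / \
#     0  1 3  4
parent = [2, 2, 6, 5, 5, 6, -1]

# Build the child lists from the parent array.
children = {i: [] for i in range(len(parent))}
for node, p in enumerate(parent):
    if p >= 0:
        children[p].append(node)

def postorder(root):
    """Tasks in a subtree must run before the subtree's root (its parent panel)."""
    out = []
    for c in children[root]:
        out.extend(postorder(c))
    out.append(root)
    return out

# Map each subtree under the root to a distinct worker; the root panel itself
# is a shared task executed only after both subtrees complete.
root = parent.index(-1)
worker_queues = {w: postorder(sub) for w, sub in enumerate(children[root])}
print(worker_queues)  # {0: [0, 1, 2], 1: [3, 4, 5]}
```

In a dynamic scheduler, each worker pops from its own queue first; only an idle worker takes tasks mapped elsewhere, which trades some locality for load balance.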
    


  4. X. Lacoste and P. Ramet. Sparse direct solver on top of large-scale multicore systems with GPU accelerators. In SIAM Conference on Applied Linear Algebra, Valencia, Spain, June 2012. Keyword(s): Sparse.
    @InProceedings{C:LaBRI::la12b,
    author = {Lacoste, X. and Ramet, P.},
    title = {Sparse direct solver on top of large-scale multicore systems with GPU accelerators},
    booktitle = {SIAM Conference on Applied Linear Algebra},
    year = {2012},
    address = {Valencia, Spain},
    month = jun,
    KEYWORDS = "Sparse",
    URL = {http://www.labri.fr/~ramet/restricted/la12b.pdf}
    }
    


Internal reports
  1. A. Casadei and P. Ramet. Memory Optimization to Build a Schur Complement in an Hybrid Solver. Research Report RR-7971, INRIA, 2012. Keyword(s): Sparse.
    @techreport{astrid:hal-00700053,
    AUTHOR = {Casadei, A. and Ramet, P.},
    TITLE = {{Memory Optimization to Build a Schur Complement in an Hybrid Solver}},
    TYPE = {Research Report},
    PAGES = {11},
    YEAR = {2012},
    INSTITUTION = {INRIA},
    NUMBER = {RR-7971},
    keywords = {Sparse},
    URL = {http://hal.inria.fr/hal-00700053} 
    }
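For context on this report's title (the report itself is only linked here), the Schur complement of an invertible leading block A11 in a 2x2 block matrix A = [[A11, A12], [A21, A22]] is S = A22 - A21 A11^{-1} A12. A tiny, self-contained numerical example with exact rational arithmetic — not the memory-optimization algorithm of the report — is:

```python
from fractions import Fraction as F

# Illustrative 3x3 matrix split into blocks, with a 1x1 leading block A11.
A11 = [[F(2)]]
A12 = [[F(1), F(0)]]
A21 = [[F(1)], [F(0)]]
A22 = [[F(2), F(1)], [F(1), F(2)]]

A11_inv = [[F(1) / A11[0][0]]]          # inverse of the 1x1 leading block

def matmul(X, Y):
    """Plain dense matrix product on nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# S = A22 - A21 A11^{-1} A12
T = matmul(matmul(A21, A11_inv), A12)
S = [[A22[i][j] - T[i][j] for j in range(len(A22[0]))] for i in range(len(A22))]
print(S)  # [[Fraction(3, 2), Fraction(1, 1)], [Fraction(1, 1), Fraction(2, 1)]]
```

In a sparse hybrid solver the interior block A11 is factorized rather than explicitly inverted, and the memory question the report addresses is how to assemble S without holding redundant copies of the contributing blocks.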
    


  2. X. Lacoste, P. Ramet, M. Faverge, I. Yamazaki, and J. Dongarra. Sparse direct solvers with accelerators over DAG runtimes. Research Report RR-7972, INRIA, 2012. Keyword(s): Sparse.
    @techreport{lacoste:hal-00700066,
    AUTHOR = {Lacoste, X. and Ramet, P. and Faverge, M. and Yamazaki, I. and Dongarra, J.},
    TITLE = {{Sparse direct solvers with accelerators over DAG runtimes}},
    TYPE = {Research Report},
    PAGES = {11},
    YEAR = {2012},
    INSTITUTION = {INRIA},
    NUMBER = {RR-7972},
    keywords = {Sparse},
    URL = {http://hal.inria.fr/hal-00700066} 
    }
    


Miscellaneous
  1. E. Agullo, G. Bosilca, B. Bramas, C. Castagnede, O. Coulaud, E. Darve, J. Dongarra, M. Faverge, N. Furmento, L. Giraud, X. Lacoste, J. Langou, H. Ltaief, M. Messner, R. Namyst, P. Ramet, T. Takahashi, S. Thibault, S. Tomov, and I. Yamazaki. Matrices over Runtime Systems at Exascale. SuperComputing'2012, Salt Lake City, USA, November 2012. Keyword(s): Sparse.
    @Misc{c:LaBRI::SC2012,
    AUTHOR = "Agullo, E. and Bosilca, G. and Bramas, B. and Castagnede, C. and Coulaud, O. and Darve, E. and Dongarra, J. and Faverge, M. and Furmento, N. and Giraud, L. and Lacoste, X. and Langou, J. and Ltaief, H. and Messner, M. and Namyst, R. and Ramet, P. and Takahashi, T. and Thibault, S. and Tomov, S. and Yamazaki, I.",
    TITLE = "Matrices over Runtime Systems at Exascale",
    PAGES = {1332-1332},
    YEAR = {2012},
    howpublished = {{SuperComputing}'2012, Salt Lake City, USA},
    month = nov,
    URL = {http://www.labri.fr/~ramet/restricted/poster_sc2012.pdf},
    KEYWORDS = "Sparse"
    }
    


  2. M. Boulet, G. Meurant, D. Goudin, J.-J. Pesque, M. Chanaud, L. Giraud, P. Hénon, P. Ramet, and J. Roman. Résolution des systèmes linéaires sur calculateurs pétaflopiques. CHOCS volume 41: revue scientifique et technique de la Direction des Applications Militaires, January 2012.
    @Misc{c:LaBRI::CHOCS,
    author = {Boulet, M. and Meurant, G. and Goudin, D. and Pesque, J.-J. and Chanaud, M. and Giraud, L. and H\'enon, P. and Ramet, P. and Roman, J.},
    title = {R\'esolution des syst\`emes lin\'eaires sur calculateurs p\'etaflopiques},
    howpublished = {CHOCS volume 41: revue scientifique et technique de la Direction des Applications Militaires},
    month = jan,
    year = 2012
    }
    


  3. X. Lacoste, M. Faverge, and P. Ramet. Scheduling for Sparse Solver on Manycore Architectures. Workshop INRIA-CNPq, HOSCAR meeting, Petropolis, Brazil, September 2012. Keyword(s): Sparse.
    Abstract:
    The emergence of manycore architectures introduces variations in computation costs, which makes precise cost models hard to realize. Static schedulers based on cost models, like the one used in the sparse direct solver PaStiX, are no longer adapted. We describe the dynamic scheduler developed for the supernodal method of PaStiX to correct the imperfections of the static model. The presented solution exploits the elimination tree of the problem to preserve data locality during the execution.

    @Misc{c:LaBRI::HOSCAR2012b,
    author = {Lacoste, X. and Faverge, M. and Ramet, P.},
    title = {Scheduling for Sparse Solver on Manycore Architectures},
    howpublished = {Workshop INRIA-CNPq, HOSCAR meeting, Petropolis, Brazil},
    year = {2012},
    month = sep,
    KEYWORDS = "Sparse",
    ABSTRACT = {The emergence of manycore architectures introduces variations in computation costs, which makes precise cost models hard to realize. Static schedulers based on cost models, like the one used in the sparse direct solver PaStiX, are no longer adapted. We describe the dynamic scheduler developed for the supernodal method of PaStiX to correct the imperfections of the static model. The presented solution exploits the elimination tree of the problem to preserve data locality during the execution.}
    }
    


  4. X. Lacoste, M. Faverge, and P. Ramet. Sparse direct solvers with accelerators over DAG runtimes. Workshop INRIA-CNPq, HOSCAR meeting, Sophia-Antipolis, France, July 2012. Keyword(s): Sparse.
    Abstract:
    The current trend in high performance computing shows a dramatic increase in the number of cores on shared memory compute nodes. Algorithms, especially those related to linear algebra, need to be adapted to these new computer architectures in order to be efficient. PaStiX is a sparse parallel direct solver that incorporates a dynamic scheduler for strongly hierarchical modern architectures. In this work, we study the replacement of this internal, highly integrated scheduling strategy with two generic runtime frameworks: DAGuE and StarPU. These runtimes make it possible to execute the factorization task graph on emerging computers equipped with accelerators. Following previous work done in dense linear algebra, we present the kernels used for GPU computations, inspired by the MAGMA library, and the DAG algorithm used with those two runtimes. A comparative study of the performance of the supernodal solver with the three different schedulers is carried out on manycore architectures, and the improvements obtained with accelerators are presented with the StarPU runtime. These results demonstrate that these DAG runtimes provide uniform programming interfaces to obtain high performance on different architectures for irregular problems such as sparse direct factorizations.

    @Misc{c:LaBRI::HOSCAR2012a,
    author = {Lacoste, X. and Faverge, M. and Ramet, P.},
    title = {Sparse direct solvers with accelerators over {DAG} runtimes},
    howpublished = {Workshop INRIA-CNPq, HOSCAR meeting, Sophia-Antipolis, France},
    year = {2012},
    month = jul,
    KEYWORDS = "Sparse",
    ABSTRACT = {The current trend in high performance computing shows a dramatic increase in the number of cores on shared memory compute nodes. Algorithms, especially those related to linear algebra, need to be adapted to these new computer architectures in order to be efficient. PaStiX is a sparse parallel direct solver that incorporates a dynamic scheduler for strongly hierarchical modern architectures. In this work, we study the replacement of this internal, highly integrated scheduling strategy with two generic runtime frameworks: DAGuE and StarPU. These runtimes make it possible to execute the factorization task graph on emerging computers equipped with accelerators. Following previous work done in dense linear algebra, we present the kernels used for GPU computations, inspired by the MAGMA library, and the DAG algorithm used with those two runtimes. A comparative study of the performance of the supernodal solver with the three different schedulers is carried out on manycore architectures, and the improvements obtained with accelerators are presented with the StarPU runtime. These results demonstrate that these DAG runtimes provide uniform programming interfaces to obtain high performance on different architectures for irregular problems such as sparse direct factorizations.}
    }
    


  5. P. Ramet. Sparse direct solver on top of large-scale multicore systems with GPU accelerators. CEMRACS'2012, Méthodes numériques et algorithmes pour architectures pétaflopiques, Marseille, France, August 2012.
    @Misc{c:LaBRI::CEMRACS12,
    author = {Ramet, P.},
    title = {Sparse direct solver on top of large-scale multicore systems with GPU accelerators},
    month = aug,
    year = {2012},
    howpublished = {CEMRACS'2012, M\'ethodes num\'eriques et algorithmes pour architectures p\'etaflopiques, Marseille, France} 
    }
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.





Last modified: Tue Apr 4 11:58:35 2023
Author: ramet.


This document was translated from BibTeX by bibtex2html