CiteSeerX

Results 1 - 10 of 640

Hybrid MPI/OpenMP parallel linear support vector machine training

by Kristian Woodsend, Jacek Gondzio, Sören Sonnenburg, Vojtech Franc, Elad Yom-tov, Michele Sebag - JMLR
"... Support vector machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally challenging. A parallel implementation of linear Support Vector Machine training has been developed, using a combination of MPI and Open ..."
Abstract - Cited by 7 (1 self)
problems from the PASCAL Challenge on Large-scale Learning. We show that our approach is competitive, and is able to solve problems in the Challenge many times faster than other parallel approaches. We also demonstrate that the hybrid version performs more efficiently than the version using pure MPI.

Performance Analysis of Matrix-Vector Multiplication in Hybrid (MPI + OpenMP)

by Vivek N. Waghmare, Ip V. Kendre, Sanket G. Chordiya
"... Computing of multiple tasks simultaneously on multiple processors is called Parallel Computing. The parallel program consists of multiple active processes simultaneously solving a given problem. Parallel computers can be roughly classified as Multi-Processor and Multi-Core. In both these classificat ..."
Abstract
and complex task to achieve. Of the many approaches used in parallel environments, two are MPI and OpenMP, each with its own merits and demerits. The hybrid model combines both approaches in the pursuit of reducing their individual weaknesses. The proposed approach takes a pair of
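The hybrid pattern described in this entry can be illustrated with a short sketch (C with MPI and OpenMP): MPI splits the matrix rows across processes, and OpenMP threads share the loop over each process's local rows. This is a generic illustration, not the paper's code; the matrix size, the block-row distribution and the final gather are assumptions chosen here for brevity.

/* Minimal hybrid MPI+OpenMP matrix-vector multiply: y = A*x.
 * Each MPI rank owns a contiguous block of rows; OpenMP threads share
 * the loop over that block.  Illustrative only: assumes the global
 * dimension N is divisible by the number of ranks. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024   /* global matrix dimension (assumed divisible by nprocs) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int local_rows = N / nprocs;
    double *A = malloc((size_t)local_rows * N * sizeof(double)); /* local row block */
    double *x = malloc(N * sizeof(double));
    double *y = malloc(local_rows * sizeof(double));

    /* Fill the local block and the (replicated) input vector with synthetic data. */
    for (int i = 0; i < local_rows; i++)
        for (int j = 0; j < N; j++)
            A[(size_t)i * N + j] = 1.0 / (1.0 + rank * local_rows + i + j);
    for (int j = 0; j < N; j++)
        x[j] = 1.0;

    /* OpenMP threads split the local rows; MPI already split the rows
     * across processes, so the two levels of parallelism compose. */
    #pragma omp parallel for
    for (int i = 0; i < local_rows; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += A[(size_t)i * N + j] * x[j];
        y[i] = sum;
    }

    /* Gather the distributed result onto rank 0. */
    double *y_full = (rank == 0) ? malloc(N * sizeof(double)) : NULL;
    MPI_Gather(y, local_rows, MPI_DOUBLE, y_full, local_rows, MPI_DOUBLE,
               0, MPI_COMM_WORLD);
    if (rank == 0) {
        printf("y[0] = %f\n", y_full[0]);
        free(y_full);
    }

    free(A); free(x); free(y);
    MPI_Finalize();
    return 0;
}

Built with an MPI compiler wrapper and OpenMP enabled (for example mpicc -fopenmp), each rank computes its block with as many threads as OMP_NUM_THREADS allows.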

Reduced MHD with diamagnetic, neoclassical and toroidal rotation

by M. Hoelzl, F. Orain, A. Lessig, M. Becoulet
"... . Non-linear MHD in realistic tokamak X-point geometry. Bezier finite elements + toroidal Fourier decomposition. Fully implicit time integration. Hybrid MPI + OpenMP parallelization. Supercomputers like HELIOS and HYDRA ..."
Abstract - Add to MetaCart
. Non-linear MHD in realistic tokamak X-point geometry. Bezier finite elements + toroidal Fourier decomposition. Fully implicit time integration. Hybrid MPI + OpenMP parallelization. Supercomputers like HELIOS and HYDRA

Detecting thread-safety violations in hybrid OpenMP/MPI programs

by Hongyi Ma, Liqiang Wang, Krishanthan Krishnamoorthy - In Proceedings of the 2015 IEEE International Conference on Cluster Computing, 2015
"... Abstract-We propose an approach by integrating static and dynamic program analyses to detect threadsafety violations in hybrid MPI/OpenMP programs. We innovatively transform the thread-safety violation problems to race conditions problems. In our approach, the static analysis identifies a list of M ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Abstract-We propose an approach by integrating static and dynamic program analyses to detect threadsafety violations in hybrid MPI/OpenMP programs. We innovatively transform the thread-safety violation problems to race conditions problems. In our approach, the static analysis identifies a list
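The constraint such analyses revolve around can be shown in a minimal sketch (not the authors' tool): OpenMP threads may only issue concurrent MPI calls if the library granted MPI_THREAD_MULTIPLE at initialization. The two-rank ping pattern, thread count and message tags below are arbitrary choices for illustration.

/* Minimal sketch: requesting full MPI thread support before letting
 * OpenMP threads issue MPI calls.  Calling MPI from concurrent threads
 * when only a weaker level was granted is the kind of thread-safety
 * violation such analyses target. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, size;
    /* Ask the MPI library for full multi-threaded support. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 0; }

    /* Each OpenMP thread on rank 0 sends to the matching thread on rank 1,
     * using the thread id as the message tag to keep the pairs distinct. */
    #pragma omp parallel num_threads(4)
    {
        int tid = omp_get_thread_num();
        int msg = 100 + tid;
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_INT, 1, tid, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, tid, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1, thread %d received %d\n", tid, msg);
        }
    }
    MPI_Finalize();
    return 0;
}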

Intra-node parallelization of MPI programs with OpenMP

by Franck Cappello, Olivier Richard , 1998
"... The availability of multiprocessors and high performance networks offer the opportunity to construct CLUMPs (Cluster of Multiprocessors) and use them as paxallel computing platforms. The main distinctive feature of the CLUMP axchitecture over the usual paxallel computers is its hybrid memory model ..."
Abstract - Cited by 3 (0 self)
movements inside the CLUMP 3) to limit the effort of the programmer while ensuring the portability of the codes on a wide variety of CLUMP configurations. We investigate an approach based on the MPI and OpenMP standards. The approach consists in the intra-node parallelization of the MPI programs
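A generic sketch of this intra-node approach (not the paper's code): the local part of a dot product is parallelized with an OpenMP reduction inside each MPI process, and MPI then combines the per-process partial sums across the CLUMP. The vector length and synthetic data are placeholder assumptions.

/* Intra-node parallelization sketch: OpenMP handles the reduction over
 * the data owned by each MPI process, MPI_Allreduce combines the
 * per-process partial sums.  Each rank owns LOCAL_N synthetic elements. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 100000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double a[LOCAL_N], b[LOCAL_N];
    for (int i = 0; i < LOCAL_N; i++) {
        a[i] = 1.0;
        b[i] = 0.5;
    }

    /* Intra-node level: OpenMP threads reduce over the local arrays. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < LOCAL_N; i++)
        local += a[i] * b[i];

    /* Inter-node level: MPI combines the per-process partial sums. */
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %f\n", global);

    MPI_Finalize();
    return 0;
}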

Comparing the OpenMP, MPI, and Hybrid Programming Paradigms on an SMP Cluster

by Gabriele Jost, Haoqiang Jin, Dieter An Mey, Ferhat F. Hatay
"... Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parall ..."
Abstract - Cited by 13 (0 self)

Performance Analysis and Comparison of MPI, OpenMP and Hybrid NPB-MZ

by Héctor J. Machín Machín
"... Abstract—Chip multiprocessors (CMP) are w idely used for high performance computing and are being configured in a hierarchical manner to compose a node in a parallel system. CMP clusters provide a natural programming paradigm for hybrid programs. Can current hybrid parallel programming paradigms suc ..."
Abstract
such as hybrid MPI/OpenMP efficiently exploit the potential offered by such CMP clusters? In this research, with increasing the number of processors and problem sizes, we systematically analyze and compare the performance of MPI, OpenMP and hybrid NAS Parallel Benchmark Multi-Zone (NPB-MZ) on two

Scalable Hybrid Implementation of Graph Coloring using MPI and OpenMP

by Ahmet Erdem Sarıyüce, Erik Saule, Ümit V. Çatalyürek - PROC OF PCO, 2012
"... Abstract—Graph coloring algorithms are commonly used in large scientific parallel computing either for identifying parallelism or as a tool to reduce computation, such as compressing Hessian matrices. Large scientific computations are nowadays either run on commodity clusters or on large computing p ..."
Abstract
platforms. In both cases, the current target platform is hierarchical with distributed memory at the node level and shared memory at the processor level. In this paper, we present a novel hybrid graph coloring algorithm and discuss how to obtain the best performance on such systems from algorithmic, system
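As a rough shared-memory illustration of the speculative "color, detect conflicts, re-color" pattern commonly used in parallel greedy coloring (this is not the authors' hybrid MPI+OpenMP algorithm), the sketch below colors a small hard-wired ring graph in OpenMP rounds with a simple "higher-numbered vertex yields" tie-break.

/* Speculative parallel greedy coloring on a toy ring graph.
 * Each round: pending vertices pick a color from a snapshot taken at the
 * start of the round, then conflicts between vertices colored in the same
 * round are detected and the higher-numbered vertex tries again. */
#include <omp.h>
#include <stdio.h>

#define N 16      /* ring graph: vertex v is adjacent to v-1 and v+1 (mod N) */
#define MAXDEG 2

static void neighbors(int v, int nbr[MAXDEG]) {
    nbr[0] = (v + N - 1) % N;
    nbr[1] = (v + 1) % N;
}

/* Smallest color not used by the neighbour colors recorded in `seen`. */
static int first_free(const int *seen, const int nbr[MAXDEG]) {
    for (int c = 0; ; c++) {
        int used = 0;
        for (int j = 0; j < MAXDEG; j++)
            if (seen[nbr[j]] == c) used = 1;
        if (!used) return c;
    }
}

int main(void) {
    int color[N], pending[N], again[N], snapshot[N];
    for (int v = 0; v < N; v++) { color[v] = -1; pending[v] = 1; }

    int remaining = N;
    while (remaining > 0) {
        for (int v = 0; v < N; v++) snapshot[v] = color[v];

        /* Speculative pass: adjacent pending vertices cannot see each
         * other's new choice, so they may pick the same color. */
        #pragma omp parallel for
        for (int v = 0; v < N; v++) {
            if (pending[v]) {
                int nb[MAXDEG];
                neighbors(v, nb);
                color[v] = first_free(snapshot, nb);
            }
        }

        /* Conflict detection: the higher-numbered vertex of a clashing
         * pair stays pending and re-colors in the next round. */
        remaining = 0;
        #pragma omp parallel for reduction(+:remaining)
        for (int v = 0; v < N; v++) {
            again[v] = 0;
            if (!pending[v]) continue;
            int nb[MAXDEG];
            neighbors(v, nb);
            for (int j = 0; j < MAXDEG; j++)
                if (pending[nb[j]] && nb[j] < v && color[nb[j]] == color[v])
                    again[v] = 1;
            remaining += again[v];
        }
        for (int v = 0; v < N; v++) pending[v] = again[v];
    }

    for (int v = 0; v < N; v++)
        printf("vertex %2d -> color %d\n", v, color[v]);
    return 0;
}

On this tiny ring the rounds resolve almost one vertex at a time; practical parallel coloring codes typically use randomized priorities instead of vertex numbers so that many vertices finalize per round.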

OpenMP and MPI

by Steven Gottlieb, Sonali Tamhankar , 2000
"... A trend in high performance computers that is becoming increasingly popular is the use of symmetric multiprocessing (SMP) rather than the older paradigm of MPP. MPI codes that ran and scaled well on MPP machines can often be run on an SMP machine using the vendor’s version of MPI. However, this appr ..."
Abstract
to be able to use OpenMP parallelism on the node, and MPI between nodes. We describe the challenges of converting MILC MPI code to using a second level of OpenMP parallelism, and benchmarks on IBM and Sun computers.

Performance evaluation of MPI, UPC and OpenMP on multicore architectures

by Guillermo L. Taboada, Carlos Teijeiro, Basilio B. Fraguela, J. Carlos Mouriño - Proceedings of the 16th European Users’ Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface , 2009
"... Abstract. The current trend to multicore architectures underscores the need of parallelism. While new languages and alternatives for supporting more efficiently these systems are proposed, MPI faces this new challenge. Therefore, up-to-date performance evaluations of current options for pro-gramming ..."
Abstract - Cited by 6 (0 self)
programming multicore systems are needed. This paper evaluates MPI performance against Unified Parallel C (UPC) and OpenMP on multicore architectures. From the analysis of the results, it can be concluded that MPI is generally the best choice on multicore systems with both shared and hybrid shared/distributed memory