Browse Results

Showing 41,926 through 41,950 of 61,392 results

Parallel Programming with Microsoft® .NET

by Ralph Johnson, Colin Campbell, Ade Miller, and Stephen Toub

The CPU meter shows the problem. One core is running at 100 percent, but all the other cores are idle. Your application is CPU-bound, but you are using only a fraction of the computing power of your multicore system. What next? The answer, in a nutshell, is parallel programming. Where you once would have written the kind of sequential code that is familiar to all programmers, you now find that this no longer meets your performance goals. To use your system's CPU resources efficiently, you need to split your application into pieces that can run at the same time. This is easier said than done. Parallel programming has a reputation for being the domain of experts and a minefield of subtle, hard-to-reproduce software defects. Everyone seems to have a favorite story about a parallel program that did not behave as expected because of a mysterious bug. These stories should inspire a healthy respect for the difficulty of the problems you face in writing your own parallel programs. Fortunately, help has arrived. Microsoft Visual Studio® 2010 introduces a new programming model for parallelism that significantly simplifies the job. Behind the scenes are supporting libraries with sophisticated algorithms that dynamically distribute computations on multicore architectures. Proven design patterns are another source of help. A Guide to Parallel Programming introduces you to the most important and frequently used patterns of parallel programming and gives executable code samples for them, using the Task Parallel Library (TPL) and Parallel LINQ (PLINQ).
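The core advice in this blurb — split a CPU-bound computation into pieces that can run at the same time — can be sketched in a few lines. The book itself uses the .NET Task Parallel Library; the sketch below uses Python's standard concurrent.futures instead, purely as a language-neutral illustration of the same fork-join idea (the function names are invented for this example, not taken from the book).

```python
# Split a CPU-bound sum over [0, n) into chunks and run the chunks
# in separate worker processes, then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, pieces=4):
    """Split [0, n) into `pieces` chunks and sum them concurrently."""
    step, extra = divmod(n, pieces)
    bounds, start = [], 0
    for i in range(pieces):
        end = start + step + (1 if i < extra else 0)
        bounds.append((start, end))
        start = end
    with ProcessPoolExecutor(max_workers=pieces) as ex:
        return sum(ex.map(partial_sum, bounds))

if __name__ == "__main__":
    # Same answer as the sequential loop, computed in parallel.
    assert parallel_sum_of_squares(100_000) == sum(i * i for i in range(100_000))
```

The pattern — partition, compute in parallel, combine — is the fork-join shape that the TPL and PLINQ described above automate behind the scenes.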

Parallel Programming with Microsoft® Visual C++®

by Colin Campbell and Ade Miller

Your CPU meter shows a problem. One core is running at 100 percent, but all the other cores are idle. Your application is CPU-bound, but you are using only a fraction of the computing power of your multicore system. Is there a way to get better performance? The answer, in a nutshell, is parallel programming. Where you once would have written the kind of sequential code that is familiar to all programmers, you now find that this no longer meets your performance goals. To use your system's CPU resources efficiently, you need to split your application into pieces that can run at the same time. Of course, this is easier said than done. Parallel programming has a reputation for being the domain of experts and a minefield of subtle, hard-to-reproduce software defects. Everyone seems to have a favorite story about a parallel program that did not behave as expected because of a mysterious bug. These stories should inspire a healthy respect for the difficulty of the problems you will face in writing your own parallel programs. Fortunately, help has arrived. The Parallel Patterns Library (PPL) and the Asynchronous Agents Library introduce a new programming model for parallelism that significantly simplifies the job. Behind the scenes are sophisticated algorithms that dynamically distribute computations on multicore architectures. In addition, the Microsoft® Visual Studio® 2010 development system includes debugging and analysis tools to support the new parallel programming model. Proven design patterns are another source of help. This guide introduces you to the most important and frequently used patterns of parallel programming and provides executable code samples for them, using PPL. A good place to begin is to review the patterns in this book. See if your problem has any attributes that match the six patterns presented in the following chapters. If it does, delve more deeply into the relevant pattern or patterns and study the sample code.

Parallel Programming with Microsoft® Visual Studio® 2010 Step by Step

by Donis Marshall

Your hands-on, step-by-step guide to the fundamentals of parallel programming. Teach yourself how to help improve application performance by using parallel programming techniques in Visual Studio 2010--one step at a time. Ideal for experienced programmers with little or no parallel programming experience, this tutorial provides practical, learn-by-doing exercises for creating applications that optimize the use of multicore processors. Discover how to: apply techniques to help increase your application's speed and efficiency; simplify the process of adding parallelism with the Task Parallel Library (TPL); execute several tasks concurrently with various scheduling techniques; perform data queries in parallel with PLINQ; use concurrent collections in the Microsoft .NET Framework 4 for data items; extend classes in the TPL to meet the specific requirements of your application; and perform live debugging of an application with parallel code.

Parallel Programming with Python

by Jan Palach

A fast, easy-to-follow, and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book also introduces slightly more advanced concepts and helps you implement these techniques in the real world. If you are an experienced Python programmer willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. A basic knowledge of Python development is required to get the most out of this book.
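As a taste of the kind of technique the book covers, here is a minimal sketch of data parallelism with Python's standard multiprocessing module (the prime-counting workload is an invented example, not taken from the book): a CPU-bound function is mapped over several inputs, spreading the work across worker processes.

```python
# Map a CPU-bound function over a list of inputs using a pool of
# worker processes from the standard library.
from multiprocessing import Pool

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately CPU-bound)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_prime_counts(limits, workers=4):
    """Run count_primes over each limit, one task per pool worker at a time."""
    with Pool(processes=workers) as pool:
        return pool.map(count_primes, limits)

if __name__ == "__main__":
    print(parallel_prime_counts([1000, 2000, 3000]))  # prints [168, 303, 430]
```

Because the worker function is defined at module level, `Pool.map` can ship it to the child processes; the same structure generalizes to any embarrassingly parallel workload.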

Parallel Programming: for Multicore and Cluster Systems

by Thomas Rauber and Gudula Rünger

Innovations in hardware architecture, like hyper-threading or multicore processors, mean that parallel computing resources are available for inexpensive desktop computers. In only a few years, many standard software products will be based on concepts of parallel programming implemented on such hardware, and the range of applications will be much broader than that of scientific computing, up to now the main application area for parallel computing. Rauber and Rünger take up these recent developments in processor architecture by giving detailed descriptions of parallel programming techniques that are necessary for developing efficient programs for multicore processors as well as for parallel cluster systems and supercomputers. Their book is structured in three main parts, covering all areas of parallel computing: the architecture of parallel systems, parallel programming models and environments, and the implementation of efficient application algorithms. The emphasis lies on parallel programming techniques needed for different architectures. For this second edition, all chapters have been carefully revised. The chapter on architecture of parallel systems has been updated considerably, with a greater emphasis on the architecture of multicore systems and adding new material on the latest developments in computer architecture. Lastly, a completely new chapter on general-purpose GPUs and the corresponding programming techniques has been added. The main goal of the book is to present parallel programming techniques that can be used in many situations for a broad range of application areas and which enable the reader to develop correct and efficient parallel programs. Many examples and exercises are provided to show how to apply the techniques. The book can be used as both a textbook for students and a reference book for professionals. The material presented has been used for courses in parallel programming at different universities for many years.

Parallel R: Data Analysis in the Distributed World

by Stephen Weston and Q. Ethan McCallum

It’s tough to argue with R as a high-quality, cross-platform, open source statistical software product—unless you’re in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets, including three chapters on using R and Hadoop together. You’ll learn the basics of Snow, Multicore, Parallel, Segue, RHIPE, and Hadoop Streaming, including how to find them, how to use them, when they work well, and when they don’t. With these packages, you can overcome R’s single-threaded nature by spreading work across multiple CPUs, or offload work to multiple machines to address R’s memory barrier. Snow works well in a traditional cluster environment; Multicore is popular for multiprocessor and multicore computers; Parallel is part of the upcoming R 2.14.0 release; R+Hadoop provides low-level access to a popular form of cluster computing; RHIPE uses Hadoop’s power with R’s language and interactive shell; and Segue lets you use Elastic MapReduce as a backend for lapply-style operations.

Parallel Scientific Computing

by Guillaume Houzeaux, François-Xavier Roux, and Frédéric Magoules

Scientific computing has become an indispensable tool in numerous fields, such as physics, mechanics, biology, finance, and industry. For example, thanks to efficient algorithms adapted to current computers, it enables us to simulate, without the help of physical models or experiments, the deflection of beams in bending, the sound level in a theater room, or a fluid flowing around an aircraft wing. This book presents the scientific computing techniques applied to parallel computing for the numerical simulation of large-scale problems; these problems result from systems modeled by partial differential equations. Computing concepts are tackled via examples. Implementation and programming techniques resulting from the finite element method are presented for direct solvers, iterative solvers, and domain decomposition methods, along with an introduction to MPI and OpenMP.

Parallel Scientific Computing

by Roman Trobec and Gregor Kosec

This book concentrates on the synergy between computer science and numerical analysis. It is written to give computer scientists, engineers, and other experts who have to solve real problems a firm understanding of the described approaches. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interested in solving complex problems in real computational domains.

Parallel Scientific Computing in C++ and MPI

by George Em Karniadakis

Numerical algorithms, modern programming techniques, and parallel computing are often taught serially across different courses and different textbooks. The need to integrate concepts and tools usually comes only in employment or in research - after the courses are concluded - forcing the student to synthesise what is perceived to be three independent subfields into one. This book provides a seamless approach to stimulate the student simultaneously through the eyes of multiple disciplines, leading to enhanced understanding of scientific computing as a whole. The book includes both basic as well as advanced topics and places equal emphasis on the discretization of partial differential equations and on solvers. Some of the advanced topics include wavelets, high-order methods, non-symmetric systems, and parallelization of sparse systems. The material covered is suited to students from engineering, computer science, physics and mathematics.

Parallel Services: Intelligent Systems of Digital Twins and Metaverses for Services Science (SpringerBriefs in Service Science)

by Fei-Yue Wang and Lefei Li

By incorporating the latest advancement in complex system modeling and simulation into the service system research, this book makes a valuable contribution to this field that will lead service innovation and service management toward the digital twin and metaverse. It covers important topics such as computational experiments and parallel execution of a parallel service system, the modeling of artificial service systems, semi-parallel service systems, parallel service, and digital twin/metaverse. It also provides a unified framework for realizing a parallel service system that demonstrates the capabilities or potentials of adopting digital twin and metaverse. In addition, the book contains numerous solutions to real-world problems, through which both academic readers and practitioners will gain new perspectives on service systems, and learn how to model a parallel service system or how to use the model to analyze and understand the behaviors of the system. For academic readers, it sheds light on a new research direction within the service science/engineering domain made possible by the latest technologies. For practitioners, with the help of methods such as Agent-based Modeling and Simulation, the book will enable them to enhance their skills in designing or analyzing a service system.

Parallel Supercomputing in MIMD Architectures

by R. Michael Hord

Parallel Supercomputing in MIMD Architectures is devoted to supercomputing on a wide variety of Multiple-Instruction-Multiple-Data (MIMD)-class parallel machines. This book describes architectural concepts, commercial and research hardware implementations, major programming concepts, algorithmic methods, representative applications, and benefits and drawbacks. Commercial machines described include Connection Machine 5, NCUBE, Butterfly, Meiko, Intel iPSC, iPSC/2 and iWarp, DSP3, Multimax, Sequent, and Teradata. Research machines covered include the J-Machine, PAX, Concert, and ASP. Operating systems, languages, translating sequential programs to parallel, and semiautomatic parallelizing are aspects of MIMD software addressed as well. MIMD issues such as scalability, partitioning, processor utilization, and heterogeneous networks are also discussed. Packed with important information and richly illustrated with diagrams and tables, Parallel Supercomputing in MIMD Architectures is an essential reference for computer professionals, program managers, applications system designers, scientists, engineers, and students in the computer sciences.

Parallel System Interconnections and Communications

by Miltos D. Grammatikakis, D. Frank Hsu, and Miroslav Kraetzl

This introduction to networking large-scale parallel computer systems acts as a primary resource for a wide readership, including network systems engineers, electronics engineers, systems designers, computer scientists involved in systems design and in the development of parallel algorithms, and graduate students in systems architecture, design, or engineering.

Parallel and Concurrent Programming in Haskell: Techniques for Multicore and Multithreaded Programming

by Simon Marlow

If you have a working knowledge of Haskell, this hands-on book shows you how to use the language’s many APIs and frameworks for writing both parallel and concurrent programs. You’ll learn how parallelism exploits multicore processors to speed up computation-heavy programs, and how concurrency enables you to write programs with threads for multiple interactions. Author Simon Marlow walks you through the process with lots of code examples that you can run, experiment with, and extend. Divided into separate sections on Parallel and Concurrent Haskell, this book also includes exercises to help you become familiar with the concepts presented: express parallelism in Haskell with the Eval monad and Evaluation Strategies; parallelize ordinary Haskell code with the Par monad; build parallel array-based computations using the Repa library; use the Accelerate library to run computations directly on the GPU; work with basic interfaces for writing concurrent code; build trees of threads for larger and more complex programs; learn how to build high-speed concurrent network servers; and write distributed programs that run on multiple machines in a network.

Parallel and Distributed Computing, Applications and Technologies: 19th International Conference, Pdcat 2018, Jeju Island, South Korea, August 20-22, 2018, Revised Selected Papers (Communications in Computer and Information Science #931)

by Hong Shen, Yunsick Sung, Jong Hyuk Park, and Hui Tian

This book constitutes the refereed proceedings of the 19th International Conference on Parallel and Distributed Computing, Applications and Technologies, PDCAT 2018, held in Jeju Island, South Korea, in August 2018. The 35 revised full papers presented together with 14 short papers were carefully reviewed and selected from 150 submissions. The papers of this volume are organized in topical sections on wired and wireless communication systems, high-dimensional data representation and processing, networks and information security, computing techniques for efficient network design, and electronic circuits for communication systems.

Parallel and Distributed Computing, Applications and Technologies: 21st International Conference, PDCAT 2020, Shenzhen, China, December 28–30, 2020, Proceedings (Lecture Notes in Computer Science #12606)

by Yong Zhang, Hui Tian, and Yicheng Xu

This book constitutes the proceedings of the 21st International Conference on Parallel and Distributed Computing, Applications, and Technologies, PDCAT 2020, which took place in Shenzhen, China, during December 28-30, 2020. The 34 full papers included in this volume were carefully reviewed and selected from 109 submissions. They deal with parallel and distributed computing of networking and architectures, software systems and technologies, algorithms and applications, and security and privacy.

Parallel and Distributed Computing, Applications and Technologies: 22nd International Conference, PDCAT 2021, Guangzhou, China, December 17–19, 2021, Proceedings (Lecture Notes in Computer Science #13148)

by Ajay Gupta, Hamid R. Arabnia, Hong Shen, Yong Zhang, Nong Xiao, Geoffrey Fox, Yingpeng Sang, and Manu Malek

This book constitutes the proceedings of the 22nd International Conference on Parallel and Distributed Computing, Applications, and Technologies, PDCAT 2021, which took place in Guangzhou, China, during December 17-19, 2021. The 24 full papers and 34 short papers included in this volume were carefully reviewed and selected from 97 submissions. The papers are categorized into the following topical sub-headings: networking and architectures, software systems and technologies, algorithms and applications, and security and privacy.

Parallel and Distributed Computing, Applications and Technologies: 23rd International Conference, PDCAT 2022, Sendai, Japan, December 7–9, 2022, Proceedings (Lecture Notes in Computer Science #13798)

by Hong Shen, Jong Hyuk Park, Hui Tian, Toshihiro Hanawa, Hiroyuki Takizawa, and Ryusuke Egawa

This book constitutes the proceedings of the 23rd International Conference on Parallel and Distributed Computing, Applications, and Technologies, PDCAT 2022, which took place in Sendai, Japan, during December 7-9, 2022. The 24 full papers and 16 short papers included in this volume were carefully reviewed and selected from 95 submissions. The papers are categorized into the following topical sub-headings: Heterogeneous System (1); HPC & AI; Embedded Systems & Communication; Blockchain; Deep Learning; Quantum Computing & Programming Language; Best Papers; Heterogeneous System (2); Equivalence Checking & Model Checking; Interconnect; Optimization (1); Optimization (2); Privacy; and Workflow.

Parallel and Distributed Computing, Applications and Technologies: 25th International Conference, PDCAT 2024, Hong Kong, China, December 13–15, 2024, Proceedings (Lecture Notes in Computer Science #15502)

by Jianliang Xu, Yong Zhang, and Yupeng Li

This book constitutes the refereed proceedings of the 25th International Conference on Parallel and Distributed Computing, Applications and Technologies, PDCAT 2024, held in Hong Kong, China, during December 14–16, 2024. The 47 full papers and 8 short papers included in this book were carefully reviewed and selected from 114 submissions. They focus on advances in parallel and distributed computing, including parallel architectures, algorithms, and programming techniques.

Parallel and Distributed Computing, Applications and Technologies: Proceedings of PDCAT 2023 (Lecture Notes in Electrical Engineering #1112)

by Hong Shen, James J. Park, Hiroyuki Takizawa, and Ji Su Park

This book constitutes the refereed proceedings of the International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), which was held in Jeju, Korea, in August 2023. The papers of this volume are organized in topical sections on wired and wireless communication systems, high-dimensional data representation and processing, networks and information security, computing techniques for efficient network design, and electronic circuits for communication systems.

Parallel and Distributed Processing Techniques: 30th International Conference, PDPTA 2024, Held as Part of the World Congress in Computer Science, Computer Engineering and Applied Computing, CSCE 2024, Las Vegas, NV, USA, July 22–25, 2024, Revised Selected Papers (Communications in Computer and Information Science #2256)

by Hamid R. Arabnia, Leonidas Deligiannidis, Masami Takata, Pablo Rivas, Masahito Ohue, and Nobuaki Yasuo

This book constitutes the proceedings of the 30th International Conference on Parallel and Distributed Processing Techniques, PDPTA 2024, held as part of the 2024 World Congress in Computer Science, Computer Engineering and Applied Computing, in Las Vegas, USA, during July 22-25, 2024. The 24 papers included in this book were carefully reviewed and selected from 143 submissions. They have been organized in topical sections as follows: parallel and distributed processing techniques and applications; HPC; and the Workshop on Mathematical Modeling and Problem Solving.

Parallel and High Performance Computing

by Robert Robey and Yuliana Zamora

Parallel and High Performance Computing offers techniques guaranteed to boost your code’s effectiveness. Summary: Complex calculations, like training deep learning models or running large-scale simulations, can take an extremely long time. Efficient parallel programming can save hours—or even days—of computing time. Parallel and High Performance Computing shows you how to deliver faster run-times, greater scalability, and increased energy efficiency to your programs by mastering parallel techniques for multicore processor and GPU hardware. About the technology: Write fast, powerful, energy-efficient programs that scale to tackle huge volumes of data. Using parallel programming, your code spreads data processing tasks across multiple CPUs for radically better performance. With a little help, you can create software that maximizes both speed and efficiency. About the book: You’ll learn to evaluate hardware architectures and work with industry-standard tools such as OpenMP and MPI. You’ll master the data structures and algorithms best suited for high performance computing and learn techniques that save energy on handheld devices. You’ll even run a massive tsunami simulation across a bank of GPUs. What's inside: planning a new parallel project; understanding differences in CPU and GPU architecture; addressing underperforming kernels and loops; and managing applications with batch scheduling. About the reader: For experienced programmers proficient with a high-performance computing language like C, C++, or Fortran. About the authors: Robert Robey works at Los Alamos National Laboratory and has been active in the field of parallel computing for over 30 years. Yuliana Zamora is currently a PhD student and Siebel Scholar at the University of Chicago, and has lectured on programming modern hardware at numerous national conferences.
Table of Contents
PART 1 INTRODUCTION TO PARALLEL COMPUTING
1 Why parallel computing?
2 Planning for parallelization
3 Performance limits and profiling
4 Data design and performance models
5 Parallel algorithms and patterns
PART 2 CPU: THE PARALLEL WORKHORSE
6 Vectorization: FLOPs for free
7 OpenMP that performs
8 MPI: The parallel backbone
PART 3 GPUS: BUILT TO ACCELERATE
9 GPU architectures and concepts
10 GPU programming model
11 Directive-based GPU programming
12 GPU languages: Getting down to basics
13 GPU profiling and tools
PART 4 HIGH PERFORMANCE COMPUTING ECOSYSTEMS
14 Affinity: Truce with the kernel
15 Batch schedulers: Bringing order to chaos
16 File operations for a parallel world
17 Tools and resources for better code

Parallel and High-Performance Computing in Artificial Intelligence (Advances in Computational Collective Intelligence)

by Roshani Raut, Pradnya Borkar, Rutvij H. Jhaveri, and Mukesh Raghuwanshi

Parallel and High-Performance Computing in Artificial Intelligence explores high-performance architectures for data-intensive applications as well as efficient analytical strategies to speed up data processing and applications in automation, machine learning, deep learning, healthcare, bioinformatics, natural language processing (NLP), and vision intelligence. The book’s two major themes are high-performance computing (HPC) architecture and techniques and their application in artificial intelligence. Highlights include: HPC use cases, application programming interfaces (APIs), and applications; parallelization techniques; HPC for machine learning; implementation of parallel computing with AI in big data analytics; HPC with AI in healthcare systems; and AI in industrial automation. Coverage of HPC architecture and techniques includes multicore architectures, parallel-computing techniques, and APIs, as well as dependence analysis for parallel computing. The book also covers hardware acceleration techniques, including those for GPU acceleration to power big data systems. As AI is increasingly being integrated into HPC applications, the book explores emerging and practical applications in such domains as healthcare, agriculture, bioinformatics, and industrial automation. It illustrates technologies and methodologies to boost the velocity and scale of AI analysis for fast discovery. Data scientists and researchers can benefit from the book’s discussion of AI-based HPC applications that can process higher volumes of data, provide more realistic simulations, and guide more accurate predictions. The book also focuses on deep learning and edge computing methodologies with HPC and presents recent research on methodologies and applications of HPC in AI.

Parallele Programmierung

by Thomas Rauber and Gudula Rünger

Multiprocessor desktop computers, clusters of PCs, and innovations such as hyper-threading or multicore processors make parallel computing resources ubiquitous. Exploiting this computing power, however, is possible only through parallel programming techniques. This book comprehensively presents these techniques for conventional parallel computers and for new kinds of platforms. In addition to the foundations of parallel programming, it covers programming environments such as Pthreads, Java threads, OpenMP, MPI, and PVM, as well as the associated programming models.

Parallelism in Matrix Computations

by Bernard Philippe, Efstratios Gallopoulos, and Ahmed H. Sameh

This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of parallel iterative linear system solvers with emphasis on scalable preconditioners, (b) parallel schemes for obtaining a few of the extreme eigenpairs or those contained in a given interval in the spectrum of a standard or generalized symmetric eigenvalue problem, and (c) parallel methods for computing a few of the extreme singular triplets. Part IV focuses on the development of parallel algorithms for matrix functions and special characteristics such as the matrix pseudospectrum and the determinant. The book also reviews the theoretical and practical background necessary when designing these algorithms and includes an extensive bibliography that will be useful to researchers and students alike.
The book brings together many existing algorithms for the fundamental matrix computations that have a proven track record of efficient implementation in terms of data locality and data transfer on state-of-the-art systems, as well as several algorithms that are presented for the first time, focusing on the opportunities for parallelism and algorithm robustness.

Parallels and Responses to Curricular Innovation: The Possibilities of Posthumanistic Education (Routledge International Studies in the Philosophy of Education #36)

by Brad Petitfils

This volume explores two radical shifts in history and subsequent responses in curricular spaces: the move from oral to print culture during the transition between the 15th and 16th centuries and the rise of the Jesuits, and the move from print to digital culture during the transition between the 20th and 21st centuries and the rise of what the philosopher Jean Baudrillard called "hyperreality." The curricular innovation that accompanied the first shift is considered through the rise of the Society of Jesus (the Jesuits). These men created the first "global network" of education, and developed a humanistic curriculum designed to help students navigate a complicated era of the known (human-centered) and unknown (God-centered) universe. The curricular innovation that is proposed for the current shift is guided by the question: What should the role of undergraduate education become in the 21st century? Today, the tension between the known and unknown universe is concentrated on the interrelationships between our embodied spaces and our digitally mediated ones. As a result, today’s undergraduate students should be challenged to understand how—in the objectively focused, commodified, STEM-centric landscape of higher education—the human subject is decentered by the forces of hyperreality, and in turn, how the human subject might be recentered to balance our humanness with the new realities of digital living. Therein, one finds the possibility of posthumanistic education.
