Browse Results

Showing 39,526 through 39,550 of 53,688 results

Deep Learning: A Comprehensive Guide

by Shriram K Vasudevan, Sini Raj Pulari, Subashri Vasudevan

Deep Learning: A Comprehensive Guide covers Deep Learning (DL) and Machine Learning (ML) concepts in depth. DL and ML are among the most sought-after domains, and both demand a deep understanding, which this book sets out to provide. It enables the reader to build innovative and useful applications based on ML and DL, starting with the basics of neural networks and continuing through the architectures of various types of CNNs, RNNs, LSTMs, and more, with every topic treated carefully and comprehensively. Key Features
Includes a smooth transition from ML concepts to DL concepts
Provides line-by-line explanations for all the coding-based examples
Includes many real-time examples and interview questions that prepare the reader to take up a job in ML/DL right away
Even a person with a non-computer-science background can benefit from this book by following the theory, examples, case studies, and code snippets
Every chapter starts with its objective and ends with a set of quiz questions to test the reader's understanding
Includes references to related YouTube videos that provide additional guidance
AI is a domain for everyone, and this book is aimed at readers irrespective of their field of specialization. Graduates and researchers in deep learning will find this book useful.
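To give a concrete feel for the "basics of neural networks" starting point described above, here is a hedged sketch (not code from the book) of a small feedforward classifier in PyTorch; the layer sizes and random data are assumptions for illustration.

```python
# Minimal sketch of a feedforward neural network classifier in PyTorch.
# Not taken from the book; sizes and data are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),   # 20 input features -> 64 hidden units
    nn.ReLU(),
    nn.Linear(64, 3),    # 3 output classes (logits)
)

x = torch.randn(128, 20)            # a batch of 128 random samples
y = torch.randint(0, 3, (128,))     # random class labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5):               # a few training steps on the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(step, loss.item())
```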

Deep Learners and Deep Learner Descriptors for Medical Applications (Intelligent Systems Reference Library #186)

by Loris Nanni, Sheryl Brahnam, Rick Brattin, Stefano Ghidoni, Lakhmi C. Jain

This book introduces readers to the current trends in using deep learners and deep learner descriptors for medical applications. It reviews the recent literature and presents a variety of medical image and sound applications to illustrate the five major ways deep learners can be utilized: 1) by training a deep learner from scratch (chapters provide tips for handling imbalances and other problems with the medical data); 2) by implementing transfer learning from a pre-trained deep learner and extracting deep features from different CNN layers that can be fed into simpler classifiers, such as the support vector machine; 3) by fine-tuning one or more pre-trained deep learners on an unrelated dataset so that they are able to identify novel medical datasets; 4) by fusing different deep learner architectures; and 5) by combining the above methods to generate a variety of more elaborate ensembles. This book is a valuable resource for anyone involved in engineering deep learners for medical applications as well as those interested in learning more about the current techniques in this exciting field. A number of chapters provide source code that can be used to investigate topics further or to kick-start new projects.
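As a hedged illustration of approach 2) above, extracting deep features from a pre-trained CNN and feeding them into a support vector machine, here is a short Python sketch; the choice of ResNet-18 and the random stand-in images and labels are assumptions, and this is not code from the book.

```python
# Sketch of transfer learning as deep feature extraction: take a pre-trained
# CNN, drop its classification head, and feed the features to an SVM.
# Model choice and the random "images"/labels are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models          # requires a recent torchvision
from sklearn.svm import SVC

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = nn.Identity()                  # keep the 512-d penultimate features
cnn.eval()

images = torch.randn(40, 3, 224, 224)   # stand-in for preprocessed medical images
labels = [0] * 20 + [1] * 20            # stand-in binary labels

with torch.no_grad():
    features = cnn(images).numpy()      # (40, 512) deep feature vectors

clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:5]))
```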

Deep Inside osCommerce: The Cookbook

by Monika Mathé

This book is aimed at people with existing online stores built using osCommerce. The book follows a systematic approach whereby users can modify and extend features on their already existing osCommerce site. Each chapter deals with a different aspect and provides ready-made recipes for modifying code to your requirements. The author starts by explaining basic changes you can make to the design of your store, and then covers features like navigation, images, and shipping and payment modules, and even explains how to make changes on the administrator's side and keep your own recipes private. This book is for people who are already familiar with osCommerce. It presumes a working knowledge of PHP and HTML, as well as a basic understanding of phpMyAdmin for database inserts.

Deep In-memory Architectures for Machine Learning

by Mingu Kang, Sujan Gonugondla, Naresh R. Shanbhag

This book describes the recent innovation of deep in-memory architectures for realizing AI systems that operate at the edge of energy-latency-accuracy trade-offs. From first principles to lab prototypes, this book provides a comprehensive view of this emerging topic for both the practicing engineer in industry and the researcher in academia. The book is a journey into the exciting world of AI systems in hardware.

Deep Generative Models, and Data Augmentation, Labelling, and Imperfections: First Workshop, DGM4MICCAI 2021, and First Workshop, DALI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings (Lecture Notes in Computer Science #13003)

by Sandy Engelhardt, Ilkay Oksuz, Dajiang Zhu, Yixuan Yuan, Anirban Mukhopadhyay, Nicholas Heller, Sharon Xiaolei Huang, Hien Nguyen, Raphael Sznitman, Yuan Xue

This book constitutes the refereed proceedings of the First MICCAI Workshop on Deep Generative Models, DGM4MICCAI 2021, and the First MICCAI Workshop on Data Augmentation, Labelling, and Imperfections, DALI 2021, held in conjunction with MICCAI 2021, in October 2021. The workshops were planned to take place in Strasbourg, France, but were held virtually due to the COVID-19 pandemic. DGM4MICCAI 2021 accepted 12 papers from the 17 submissions received. The workshop focuses on recent algorithmic developments, new results, and promising future directions in Deep Generative Models. Deep generative models such as the Generative Adversarial Network (GAN) and Variational Auto-Encoder (VAE) are currently receiving widespread attention not only from the computer vision and machine learning communities, but also from the MIC and CAI community. For DALI 2021, 15 papers from 32 submissions were accepted for publication. They focus on the rigorous study of medical data related to machine learning systems.

Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings (Lecture Notes in Computer Science #13609)

by Anirban Mukhopadhyay, Ilkay Oksuz, Sandy Engelhardt, Dajiang Zhu, Yixuan Yuan

This book constitutes the refereed proceedings of the Second MICCAI Workshop on Deep Generative Models, DGM4MICCAI 2022, held in conjunction with MICCAI 2022, in September 2022. The workshop took place in Singapore. DGM4MICCAI 2022 accepted 12 papers from the 15 submissions received. The workshop focuses on recent algorithmic developments, new results, and promising future directions in Deep Generative Models. Deep generative models such as the Generative Adversarial Network (GAN) and Variational Auto-Encoder (VAE) are currently receiving widespread attention not only from the computer vision and machine learning communities, but also from the MIC and CAI community.

Deep Generative Models: Third MICCAI Workshop, DGM4MICCAI 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings (Lecture Notes in Computer Science #14533)

by Anirban Mukhopadhyay, Ilkay Oksuz, Sandy Engelhardt, Dajiang Zhu, Yixuan Yuan

This LNCS volume constitutes the proceedings of the Third MICCAI Workshop on Deep Generative Models, DGM4MICCAI 2023, held in conjunction with MICCAI 2023 in Vancouver, BC, Canada, in October 2023. The 23 full papers included in this volume were carefully reviewed and selected from 38 submissions. The papers cover topics ranging from methodology, causal inference, latent interpretation, and generative factor analysis to applications such as mammography, vessel imaging, and surgical videos.

Deep Generative Modeling

by Jakub M. Tomczak

This textbook tackles the problem of formulating AI systems by combining probabilistic modeling and deep learning. Moreover, it goes beyond typical predictive modeling and brings together supervised learning and unsupervised learning. The resulting paradigm, called deep generative modeling, utilizes the generative perspective on perceiving the surrounding world. It assumes that each phenomenon is driven by an underlying generative process that defines a joint distribution over random variables and their stochastic interactions, i.e., how events occur and in what order. The adjective "deep" comes from the fact that the distribution is parameterized using deep neural networks. There are two distinct traits of deep generative modeling. First, the application of deep neural networks allows rich and flexible parameterization of distributions. Second, the principled manner of modeling stochastic dependencies using probability theory ensures rigorous formulation and prevents potential flaws in reasoning. Moreover, probability theory provides a unified framework in which the likelihood function plays a crucial role in quantifying uncertainty and defining objective functions. Deep Generative Modeling is designed to appeal to curious students, engineers, and researchers with a modest mathematical background in undergraduate calculus, linear algebra, probability theory, and the basics of machine learning, deep learning, and programming in Python and PyTorch (or other deep learning libraries). It will appeal to students and researchers from a variety of backgrounds, including computer science, engineering, data science, physics, and bioinformatics, who wish to become familiar with deep generative modeling. To engage the reader, the book introduces fundamental concepts with specific examples and code snippets. The full code accompanying the book is available on GitHub. The ultimate aim of the book is to outline the most important techniques in deep generative modeling and, eventually, enable readers to formulate new models and implement them.
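As a hedged sketch of what such a deep generative model can look like in code, and not an excerpt from the book or its GitHub repository, here is a minimal variational auto-encoder in PyTorch whose decoder parameterizes the distribution over data with a neural network; the dimensions and random data are assumptions for illustration.

```python
# Minimal sketch of a Variational Auto-Encoder (VAE), one common deep
# generative model. Dimensions and data are illustrative, not the book's code.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x_hat = torch.sigmoid(self.dec(z))
        # Negative ELBO = reconstruction term + KL divergence to the prior
        rec = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (rec + kl) / x.shape[0]

vae = TinyVAE()
x = torch.rand(64, 784)        # stand-in data in [0, 1]
loss = vae(x)
loss.backward()
print(loss.item())
```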

Deep Fakes and the Infocalypse: What You Urgently Need To Know

by Nina Schick

"Deep Fakes and the Infocalypse is an urgent, thoughtful and thoroughly-researched book that raises uncomfortable questions about the way that information is being distorted by states and individuals... A must-read." - Greg Williams, Editor in Chief of WIRED UK"Essential reading for any one interested about the shocking way information is and will be manipulated." - Lord Edward Vaizey"Schick's Deep Fakes and the Infocalypse is a short, sharp book that hits you like a punch in the stomach." - Nick Cohen, The Observer"Deep Fakes is an uncomfortable but gripping read, probing the way in which the internet has been flooded with disinformation and dark arts propaganda." - Jim Pickard, Chief Political Correspondent, Financial Times"A searing insight into a world so many of us find difficult to understand. I was gripped from the first page." - Iain Dale, Broadcaster"With this powerful book, Nina Schick has done us all a great public service...It's your civic duty to read it." - Jamie Susskind, author of Future Politics"Gripping, alarming and morally vital." - Ian Dunt, Host of Remainiacs PodcastIt will soon be impossible to tell what is real and what is fake. Recent advances in AI mean that by scanning images of a person (for example using Facebook), a powerful machine learning system can create new video images and place them in scenarios and situations which never actually happened. When combined with powerful voice AI, the results are utterly convincing.So-called 'Deep Fakes' are not only a real threat for democracy but they take the manipulation of voters to new levels. They will also affect ordinary people. This crisis of misinformation we are facing has been dubbed the 'Infocalypse'. Using her expertise from working in the field, Nina Schick reveals shocking examples of Deep Fakery and explains the dangerous political consequences of the Infocalypse, both in terms of national security and what it means for public trust in politics. She also unveils what it means for us as individuals, how Deep Fakes will be used to intimidate and to silence, for revenge and fraud, and how unprepared governments and tech companies are. As a political advisor to select technology firms, Schick tells us what we need to do to prepare and protect ourselves. Too often we build the cool technology and ignore what bad guys can do with it before we start playing catch-up. But when it comes to Deep Fakes, we urgently need to be on the front foot.

Deep Fakes and the Infocalypse: What You Urgently Need To Know

by Nina Schick

"Nina Schick is alerting us to a danger from the future that is already here." - Adam Boulton, Editor at Large, Sky News"Deep Fakes and the Infocalypse is an urgent, thoughtful and thoroughly-researched book that raises uncomfortable questions about the way that information is being distorted by states and individuals... A must-read." - Greg Williams, Editor in Chief of WIRED UK"Essential reading for any one interested about the shocking way information is and will be manipulated." - Lord Edward VaizeyDeep Fakes are coming, and we are not ready. Advanced AI technology is now able to create video of people doing things they never did, in places they have never been, saying things they never said. In the hands of rogue states, terrorists, criminals or crazed individuals, they represent a disturbing new threat to democracy and personal liberty. Deep Fakes can be misused to shift public opinion, swing Presidential elections, or blackmail, coerce, and silence individuals. And when combined with the destabilising overload of disinformation that has been dubbed 'the Infocalypse', we are potentially facing a danger of world-changing proportions.Deep Fakes and the Infocalypse is International Political Technology Advisor Nina Schick's stark warning about a future we all need to understand before it's too late.PLEASE NOTE: When you purchase this title, the accompanying PDF will be available in your Audible Library along with the audio.

Deep-Dive Terraform on Azure: Automated Delivery and Deployment of Azure Solutions

by Ritesh Modi

Get started with the foundations of Infrastructure as Code and learn how Terraform can automate the deployment and management of resources on Azure. This book covers all of the software engineering practices related to Terraform and Infrastructure as Code with Azure as a cloud provider. The book starts with an introduction to Infrastructure as Code and covers basic concepts, principles, and tools, followed by an overview of Azure and Terraform that shows you how Terraform can be used to provision and manage Azure resources. You will get started writing multiple Terraform scripts and explore its various concepts. Author Ritesh Modi takes a deep dive into Terraform and teaches you about deployment and multiple resource creation using loops. Writing a reusable script using modules is discussed, as well as management and administration of secrets, sensitive data, and passwords within Terraform code. You will learn to store and version Terraform scripts and know how Terraform is used in Azure DevOps pipelines. You will also write unit and integration tests for Terraform and learn its best practices. The book also highlights and walks through the Terraform Azure Provider and shows you a simple way to create a new Terraform provider. After reading this book, you will be able to write quality Terraform scripts that are secure by design, modular, and reusable in Azure.
What You Will Learn
Understand implementation within infrastructure and application deployments
Provision resources in Azure using Terraform
Use unit and integration testing
Explore concepts such as local vs remote, importing state, workspaces, and backends
Who This Book Is For
Software engineers, DevOps professionals, and technology architects

Deep Dive into Power Automate: Learn by Example

by Goloknath Mishra

Understand the basics of flow and learn how to implement guidelines in real-life scenarios, including Robotic Process Automation (RPA) capabilities. This book covers the evolution of flow and how it has been transformed into a full-fledged RPA tool such as Power Automate. The book starts with an introduction to flow and its transformation into Process Automation. You will learn how to create a Power Automate environment and demonstrate different types of flows within it. Author Goloknath Mishra takes you through various types of cloud flows and their best practices. Desktop Flows (RPA), or Power Automate Desktop (PAD), is discussed, and the author teaches you its architecture, installation steps, and how to manage, schedule, and share a desktop flow. You will learn about Business Process Flow, Process Advisors, and AI Builder. You will also go through licensing considerations in Power Automate and AI Builder, and demonstrate all of your learnings through a mini project. After reading the book, you will have gained expertise in Power Automate and be able to implement its guidelines and solve problems at your organization.
What You Will Learn
Know the difference between Intelligent Process Automation (IPA) and Robotic Process Automation (RPA)
Understand the different types of flows in Power Automate
Create various types of cloud flows, Desktop flows, Business Process flows, and AI Builder models
Study common use cases and be aware of Power Automate best practices
Who This Book Is For
Business executives, citizen developers, IT professionals, and computer scientists who wish to efficiently automate monotonous work

Deep Dive: Exploring the Real-world Value of Open Source Intelligence

by Rae L. Baker

Learn to gather and analyze publicly available data for your intelligence needs. In Deep Dive: Exploring the Real-world Value of Open Source Intelligence, veteran open-source intelligence analyst Rae Baker explains how to use publicly available data to advance your investigative OSINT skills and how your adversaries are most likely to use publicly accessible data against you. The author delivers an authoritative introduction to the tradecraft utilized by open-source intelligence gathering specialists while offering real-life cases that highlight and underline the data collection and analysis processes and strategies you can implement immediately while hunting for open-source info. In addition to a wide breadth of essential OSINT subjects, you'll also find detailed discussions on ethics, traditional OSINT topics like subject intelligence, organizational intelligence, and image analysis, as well as more niche topics like maritime and IoT. The book includes:
Practical tips for new and intermediate analysts looking for concrete intelligence-gathering strategies
Methods for data analysis and collection relevant to today's dynamic intelligence environment
Tools for protecting your own data and information against bad actors and potential adversaries
An essential resource for new intelligence analysts, Deep Dive: Exploring the Real-world Value of Open Source Intelligence is also a must-read for early-career and intermediate analysts, as well as intelligence teams seeking to improve the skills of their newest team members.

Deep Data Analytics for New Product Development

by Walter R. Paczkowski

This book presents and develops the deep data analytics that provide the information needed for successful new product development. Deep Data Analytics for New Product Development has a simple theme: information about what customers need and want must be extracted from data to effectively guide new product decisions regarding concept development, design, pricing, and marketing. The benefits of reading this book are twofold. The first is an understanding of the stages of a new product development process, from ideation through launching and tracking, each supported by information about customers. The second benefit is an understanding of the deep data analytics for extracting that information from data. These analytics, drawn from the statistics, econometrics, market research, and machine learning spaces, are developed in detail and illustrated at each stage of the process with simulated data. The stages of new product development and the supporting deep data analytics at each stage are not presented in isolation of each other, but as a synergistic whole. This book is recommended reading for analysts involved in new product development. Readers with an analytical bent or who want to develop analytical expertise will also benefit greatly from reading this book, as will students in business programs.

Deep Comprehension: Multi-Disciplinary Approaches to Understanding, Enhancing, and Measuring Comprehension

by Keith K. Millis, Debra Long, Joseph Magliano, Katja Wiemer

This volume provides an overview of research from the learning sciences into understanding, enhancing, and measuring "deep comprehension" from a psychological, educational, and psychometric perspective. It describes the characteristics of deep comprehension, what techniques may be used to improve it, and how deep levels of comprehension may be distinguished from shallow ones. It includes research on personal-level variables; how intelligent tutors promote comprehension; and the latest developments in psychometrics. The volume will be of interest to senior undergraduate and graduate students of cognitive psychology, learning, cognition and instruction, and educational technology.

Deep Cognitive Networks: Enhance Deep Learning by Modeling Human Cognitive Mechanism (SpringerBriefs in Computer Science)

by Yan Huang, Liang Wang

Although deep learning models have achieved great progress in vision, speech, language, planning, control, and many other areas, there still exists a large performance gap between deep learning models and the human cognitive system. Many researchers argue that one of the major reasons for this gap is that deep learning models and the human cognitive system process visual information in very different ways. To narrow the performance gap, there has been a trend since 2014 to model various cognitive mechanisms from cognitive neuroscience, e.g., attention, memory, reasoning, and decision, on top of deep learning models. This book unifies these new kinds of deep learning models and calls them deep cognitive networks, which model various human cognitive mechanisms based on deep learning models. As a result, various cognitive functions are implemented, e.g., selective extraction, knowledge reuse, and problem solving, for more effective information processing. This book first summarizes existing evidence of human cognitive mechanism modeling from cognitive psychology and proposes a general framework of deep cognitive networks that jointly considers multiple cognitive mechanisms. Then, it analyzes related works and focuses primarily, but not exclusively, on the taxonomy of four key cognitive mechanisms (i.e., attention, memory, reasoning, and decision) surrounding deep cognitive networks. Finally, this book studies two representative cases of applying deep cognitive networks to the task of image-text matching and discusses important future directions.
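As a generic, hedged illustration of one of the four mechanisms named above (attention), here is the scaled dot-product attention operation commonly used in deep models; it is not taken from the book, and the tensor shapes are arbitrary.

```python
# Generic scaled dot-product attention: a selective weighting over inputs,
# one common way the "attention" mechanism is realized in deep models.
# Shapes are illustrative; this is not code from the book.
import torch
import torch.nn.functional as F

def attention(query, key, value):
    d_k = query.shape[-1]
    scores = query @ key.transpose(-2, -1) / d_k ** 0.5  # similarity of queries to keys
    weights = F.softmax(scores, dim=-1)                   # selective focus over inputs
    return weights @ value                                # weighted summary of values

q = torch.randn(2, 5, 16)    # batch of 2, 5 query positions, 16-d features
k = torch.randn(2, 7, 16)    # 7 key/value positions
v = torch.randn(2, 7, 16)
print(attention(q, k, v).shape)   # torch.Size([2, 5, 16])
```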

Deep Biometrics (Unsupervised and Semi-Supervised Learning)

by Richard Jiang, Danny Crookes, Chang-Tsun Li, Weizhi Meng, Christophe Rosenberger

This book highlights new advances in biometrics that use deep learning toward a deeper and wider scope, which it terms "Deep Biometrics". The book aims to highlight recent developments in biometrics using semi-supervised and unsupervised methods such as Deep Neural Networks, Deep Stacked Autoencoders, Convolutional Neural Networks, Generative Adversarial Networks, and so on. The contributors demonstrate the power of deep learning techniques in emerging new areas such as privacy and security issues, cancellable biometrics, soft biometrics, smart cities, big biometric data, biometric banking, medical biometrics, healthcare biometrics, and biometric genetics. The goal of this volume is to summarize the recent advances in using Deep Learning in the area of biometric security and privacy toward deeper and wider applications.
Highlights the impact of deep learning on the field of biometrics across a wide area;
Explores the deeper and wider background of biometrics, such as privacy versus security, biometric big data, biometric genetics, and biometric diagnosis;
Introduces new biometric applications such as biometric banking, the Internet of Things, cloud computing, and medical biometrics.

Deep Belief Nets in C++ and CUDA C: Restricted Boltzmann Machines And Supervised Feedforward Networks (Deep Belief Nets In C++ And Cuda C Ser.)

by Timothy Masters

Discover the essential building blocks of the most common forms of deep belief networks. At each step this book provides intuitive motivation, a summary of the most important equations relevant to the topic, and concludes with highly commented code for threaded computation on modern CPUs as well as massive parallel processing on computers with CUDA-capable video display cards. The first of three in a series on C++ and CUDA C deep learning and belief nets, Deep Belief Nets in C++ and CUDA C: Volume 1 shows you how the structure of these elegant models is much closer to that of human brains than traditional neural networks; they have a thought process that is capable of learning abstract concepts built from simpler primitives. As such, you'll see that a typical deep belief net can learn to recognize complex patterns by optimizing millions of parameters, yet this model can still be resistant to overfitting. All the routines and algorithms presented in the book are available in the code download, which also contains some libraries of related routines.
What You Will Learn
Employ deep learning using C++ and CUDA C
Work with supervised feedforward networks
Implement restricted Boltzmann machines
Use generative samplings
Discover why these are important
Who This Book Is For
Those who have at least a basic knowledge of neural networks and some prior programming experience, although some C++ and CUDA C is recommended.
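For readers wanting a feel for the core model this volume builds on, here is a hedged sketch, in Python/NumPy rather than the book's C++ and CUDA C, of a restricted Boltzmann machine trained with one step of contrastive divergence; the sizes, learning rate, and toy data are assumptions for illustration.

```python
# Sketch of a restricted Boltzmann machine trained with CD-1 (one step of
# contrastive divergence). Not the book's code; sizes/data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible biases
b_h = np.zeros(n_hidden)           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = rng.integers(0, 2, size=(100, n_visible)).astype(float)   # toy binary data

for v0 in data:
    # Positive phase: sample hidden units given the data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 parameter updates
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

print(W.round(2))
```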

Deep Belief Nets in C++ and CUDA C: Convolutional Nets

by Timothy Masters

Discover the essential building blocks of a common and powerful form of deep belief network: convolutional nets. This book shows you how the structure of these elegant models is much closer to that of human brains than traditional neural networks; they have a ‘thought process’ that is capable of learning abstract concepts built from simpler primitives. These models are especially useful for image processing applications. At each step Deep Belief Nets in C++ and CUDA C: Volume 3 presents intuitive motivation, a summary of the most important equations relevant to the topic, and concludes with highly commented code for threaded computation on modern CPUs as well as massive parallel processing on computers with CUDA-capable video display cards. Source code for all routines presented in the book, and the executable CONVNET program which implements these algorithms, are available for free download.
What You Will Learn
Discover convolutional nets and how to use them
Build deep feedforward nets using locally connected layers, pooling layers, and softmax outputs
Master the various programming algorithms required
Carry out multi-threaded gradient computations and memory allocations for this threading
Work with CUDA code implementations of all core computations, including layer activations and gradient calculations
Make use of the CONVNET program and manual to explore convolutional nets and case studies
Who This Book Is For
Those who have at least a basic knowledge of neural networks and some prior programming experience, although some C++ and CUDA C is recommended.
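As a rough, language-swapped illustration of the ingredients listed above (locally connected layers, pooling layers, and softmax outputs), here is a minimal convolutional net sketched in PyTorch; it is not the book's CONVNET program, and the shapes are assumptions.

```python
# Generic sketch of a convolutional net with pooling and a softmax output,
# in PyTorch rather than the book's C++/CUDA. Shapes are illustrative only.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # locally connected feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # class logits
)

x = torch.randn(4, 1, 28, 28)                    # a batch of 4 toy images
probs = torch.softmax(net(x), dim=1)             # softmax output over 10 classes
print(probs.shape)                               # torch.Size([4, 10])
```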

Deep Belief Nets in C++ and CUDA C: Autoencoding In The Complex Domain

by Timothy Masters

Discover the essential building blocks of a common and powerful form of deep belief net: the autoencoder. You’ll take this topic beyond current usage by extending it to the complex domain for signal and image processing applications. Deep Belief Nets in C++ and CUDA C: Volume 2 also covers several algorithms for preprocessing time series and image data. These algorithms focus on the creation of complex-domain predictors that are suitable for input to a complex-domain autoencoder. Finally, you’ll learn a method for embedding class information in the input layer of a restricted Boltzmann machine. This facilitates generative display of samples from individual classes rather than the entire data distribution. The ability to see the features that the model has learned for each class separately can be invaluable. At each step this book provides you with intuitive motivation, a summary of the most important equations relevant to the topic, and highly commented code for threaded computation on modern CPUs as well as massive parallel processing on computers with CUDA-capable video display cards.
What You'll Learn
Code for deep learning, neural networks, and AI using C++ and CUDA C
Carry out signal preprocessing using simple transformations, Fourier transforms, Morlet wavelets, and more
Use the Fourier Transform for image preprocessing
Implement autoencoding via activation in the complex domain
Work with algorithms for CUDA gradient computation
Use the DEEP operating manual
Who This Book Is For
Those who have at least a basic knowledge of neural networks and some prior programming experience, although some C++ and CUDA C is recommended.
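To illustrate the kind of Fourier-transform preprocessing named above, here is a small NumPy sketch that turns a real-valued time series into complex-domain frequency components; it is not the book's DEEP code, and the sample rate and signal are assumptions.

```python
# Simple Fourier-transform preprocessing of a time series into complex-domain
# values of the kind described above (not the book's code).
import numpy as np

fs = 100.0                                   # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)              # one second of samples
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.fft.rfft(signal)               # complex-valued frequency components
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# The complex spectrum (magnitude and phase) could serve as input features
# for a complex-domain model.
for f, c in zip(freqs[:15], spectrum[:15]):
    print(f"{f:5.1f} Hz  magnitude={abs(c):6.2f}  phase={np.angle(c):6.2f}")
```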

Deep and Shallow: Machine Learning in Music and Audio

by Shlomo Dubnov, Ross Greer

An essential and unique bridge between the theories of signal processing, machine learning, and artificial intelligence (AI) in music, this book provides a holistic overview of foundational ideas in music, from the physical and mathematical properties of sound to symbolic representations. Combining signals and language models in one place, it explores how sound may be represented and manipulated by computer systems, and how our devices may come to recognize particular sonic patterns as musically meaningful or creative through the lens of information theory. Introducing popular fundamental ideas in AI at a comfortable pace, more complex discussions around implementations and implications in musical creativity are gradually incorporated as the book progresses. Each chapter is accompanied by guided programming activities designed to familiarize readers with practical implications of the discussed theory, without the frustrations of free-form coding. Surveying state-of-the-art methods in applications of deep neural networks to audio and sound computing, and offering a research perspective that suggests future challenges in music and AI research, this book appeals to students of AI and music as well as industry professionals in the fields of machine learning, music, and AI.

The Deductive Spreadsheet

by Iliano Cervesato

This book describes recent multidisciplinary research at the confluence of the fields of logic programming, database theory and human-computer interaction. The goal of this effort was to develop the basis of a deductive spreadsheet, a user productivity application that allows users without formal training in computer science to make decisions about generic data in the same simple way they currently use spreadsheets to make decisions about numerical data. The result is an elegant design supported by the most recent developments in the above disciplines. The first half of the book focuses on the deductive engine that underlies this application, the foundations that users do not see. After giving a mathematical model of traditional spreadsheet applications, we extend them with operators to perform a number of relational tasks, similar to the user view of a database but in a spreadsheet context. Expressing this extension in a logic programming framework is a natural step towards giving it powerful deductive capabilities. The second half of the book deals with the user interface, the part of the application with which the user actually interacts. We review the elements of the graphical user interface of traditional spreadsheet applications and describe practical methodologies for designing user interfaces borrowed from the field of cognitive psychology. We then propose a design that conservatively integrates mechanisms for a user to take advantage of the new deductive capabilities. This is followed by the results of some preliminary usability experiments. The book will appeal to researchers and practitioners in the various areas underlying this work. Researchers will not only find interesting new developments in their domains, but will also learn how to achieve a multidisciplinary focus. Practitioners will find fully developed solutions to numerous problems that are not easily solvable using traditional spreadsheet applications.

Deductive Software Verification: Reflections on the Occasion of 20 Years of KeY (Lecture Notes in Computer Science #12345)

by Wolfgang Ahrendt, Bernhard Beckert, Richard Bubel, Reiner Hähnle, Mattias Ulbrich

This book presents reflections on the occasion of 20 years of the KeY project, which focuses on deductive software verification. Since the inception of the KeY project two decades ago, the area of deductive verification has evolved considerably. Support for real-world programming languages by deductive program verification tools has become prevalent. This required overcoming significant theoretical and technical challenges to support advanced software engineering and programming concepts. The community became more interconnected, with a competitive but friendly and supportive environment. We took the 20-year anniversary of KeY as an opportunity to invite researchers, inside and outside of the project, to contribute to a book capturing some state-of-the-art developments in the field. We received thirteen contributions from recognized experts of the field addressing the latest challenges. The topics of the contributions range from tool development, efficiency, and usability considerations to novel specification and verification methods. This book should offer the reader an up-to-date impression of the current state of the art in deductive verification and, we hope, inspire her to contribute to the field and to join forces. We are looking forward to meeting you at the next conference, to listening to your research talks and the resulting fruitful discussions and collaborations.

Decrypting the Encryption Debate: A Framework For Decision Makers

by the National Academies of Sciences, Engineering, and Medicine

Encryption protects information stored on smartphones, laptops, and other devices - in some cases by default. Encrypted communications are provided by widely used computing devices and services - such as smartphones, laptops, and messaging applications - that are used by hundreds of millions of users. Individuals, organizations, and governments rely on encryption to counter threats from a wide range of actors, including unsophisticated and sophisticated criminals, foreign intelligence agencies, and repressive governments. Encryption on its own does not solve the challenge of providing effective security for data and systems, but it is an important tool. At the same time, encryption is relied on by criminals to avoid investigation and prosecution, including criminals who may unknowingly benefit from default settings as well as those who deliberately use encryption. Thus, encryption complicates law enforcement and intelligence investigations. When communications are encrypted "end-to-end," intercepted messages cannot be understood. When a smartphone is locked and encrypted, the contents cannot be read if the phone is seized by investigators. Decrypting the Encryption Debate reviews how encryption is used, including its applications to cybersecurity; its role in protecting privacy and civil liberties; the needs of law enforcement and the intelligence community for information; technical and policy options for accessing plaintext; and the international landscape. This book describes the context in which decisions about providing authorized government agencies access to the plaintext version of encrypted information would be made and identifies and characterizes possible mechanisms and alternative means of obtaining information.
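As a minimal, hedged illustration of the point that encrypted data cannot be read without the key (not an example from the report), the following uses symmetric encryption from the third-party Python cryptography package.

```python
# Minimal illustration (not from the report) of why encrypted data cannot be
# read without the key, using symmetric encryption from the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # secret key held only by authorized parties
f = Fernet(key)

token = f.encrypt(b"meet at noon")   # ciphertext an interceptor would see
print(token)                         # opaque bytes, unintelligible without the key
print(f.decrypt(token))              # b'meet at noon' -- only recoverable with the key
```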

Decoupled Drupal in Practice: Architect and Implement Decoupled Drupal Architectures Across the Stack

by Preston So

Gain a clear understanding of the most important concepts in the decoupled CMS landscape. You will learn how to architect and implement decoupled Drupal architectures across the stack, from building the back end and designing APIs to integrating with front-end technologies. You'll also review presenting data through consumer applications in widely adopted technologies such as Angular, Ember, React, and Vue.js. Featuring a foreword by Drupal founder and project lead Dries Buytaert, the first part of this book chronicles the history of the CMS and the server–client divide, analyzes the risks and rewards of decoupled CMS architectures, and presents architectural patterns. From there, the book explores the core and contributed landscape for decoupled Drupal, authentication mechanisms, and the surrounding tooling ecosystem before delving into consumer implementations in a variety of technologies. Finally, a series of chapters on advanced topics feature the Drupal REST plugin system, schemas and generated documentation, and caching. Several projects point to a decoupled future for Drupal, including the Contenta CMS and work to modernize Drupal's JavaScript using React. Begin learning about these and other exciting developments with Decoupled Drupal today.
What You'll Learn
Evaluate the risks and rewards of decoupled Drupal and classify its architectures
Authenticate requests to Drupal using OAuth, JWT, and Basic Authentication
Consume and manipulate Drupal content via API through HTTP requests
Integrate with other consumer applications for native mobile and desktop as well as set-top boxes (Roku, Apple TV, Samsung TV)
Add new resources to Drupal's REST API using the REST plugin system
Generate API documentation that complies with the OpenAPI (Swagger) standard
Who This Book Is For
Those with some exposure to CMSes like WordPress and Drupal and those who wish to follow along with JavaScript application development will benefit. A familiarity with API-first or services-oriented architectures is helpful but not presumed.
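As a hedged sketch of the consumer side described above, the snippet below fetches Drupal content over HTTP from a Python client; the site URL is hypothetical, and it assumes Drupal's core JSON:API module rather than any specific code from the book.

```python
# Sketch of consuming Drupal content over HTTP from a consumer application.
# The site URL is hypothetical, and the endpoint assumes Drupal's core
# JSON:API module is enabled; this is not code from the book.
import requests

BASE = "https://example.com"                         # hypothetical Drupal site

resp = requests.get(
    f"{BASE}/jsonapi/node/article",
    params={"page[limit]": 5, "sort": "-created"},   # five most recent articles
    headers={"Accept": "application/vnd.api+json"},
)
resp.raise_for_status()

for item in resp.json()["data"]:
    print(item["attributes"]["title"])
```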
