Browse Results

Showing 18,801 through 18,825 of 53,274 results

Explainable AI and Other Applications of Fuzzy Techniques: Proceedings of the 2021 Annual Conference of the North American Fuzzy Information Processing Society, NAFIPS 2021 (Lecture Notes in Networks and Systems #258)

by Julia Rayz, Victor Raskin, Scott Dick, Vladik Kreinovich

This book provides an overview of AI techniques based on fuzzy methods, their foundations, their applications, and the remaining challenges and open problems. Many artificial intelligence (AI) techniques do not explain their recommendations. Providing natural-language explanations for numerical AI recommendations is one of the main challenges of modern AI. To provide such explanations, a natural idea is to use techniques specifically designed to relate numerical recommendations to natural-language descriptions, namely fuzzy techniques. This book is of interest to practitioners who want to use fuzzy techniques to make AI applications explainable, to researchers who may want to extend the ideas from these papers to new application areas, and to graduate students who are interested in the state of the art of fuzzy techniques and of explainable AI; in short, to anyone interested in problems involving fuzziness and AI in general.

Explainable AI for Cybersecurity

by Zhixin Pan, Prabhat Mishra

This book provides a comprehensive overview of security vulnerabilities and state-of-the-art countermeasures using explainable artificial intelligence (AI). Specifically, it describes how explainable AI can be effectively used for detection and mitigation of hardware vulnerabilities (e.g., hardware Trojans) as well as software attacks (e.g., malware and ransomware). It provides insights into the security threats to machine learning models and presents effective countermeasures. It also explores hardware acceleration of explainable AI algorithms. The reader will gain a complete picture of cybersecurity challenges and how to detect them using explainable AI. This book serves as a single source of reference for students, researchers, engineers, and practitioners designing secure and trustworthy systems.

Explainable AI: Foundations, Methodologies and Applications (Intelligent Systems Reference Library #232)

by Mayuri Mehta, Vasile Palade, Indranath Chatterjee

This book presents an overview and several applications of explainable artificial intelligence (XAI). It covers different aspects of explainable artificial intelligence, such as the need to make AI models interpretable, how black-box machine/deep learning models can be understood using various XAI methods, different evaluation metrics for XAI, human-centered explainable AI, and applications of explainable AI in health care, security surveillance, and transportation, among other areas. The book is suitable for students and academics aiming to build up their background on explainable AI and can guide them in making machine/deep learning models more transparent. The book can be used as a reference book for teaching a graduate course on artificial intelligence, applied machine learning, or neural networks. Researchers working in the area of AI can use this book to discover the recent developments in XAI. Besides its use in academia, this book could be used by practitioners in AI industries, healthcare industries, medicine, autonomous vehicles, and security surveillance who would like to develop AI techniques and applications with explanations.

Explainable AI in Healthcare: Unboxing Machine Learning for Biomedicine (Analytics and AI for Healthcare)

by Mehul S Raval, Mohendra Roy, Tolga Kaya, Rupal Kapdi

This book combines technology and the medical domain. It covers advances in computer vision (CV) and machine learning (ML) that facilitate automation in diagnostics and in therapeutic and preventive health care. The special focus on eXplainable Artificial Intelligence (XAI) uncovers the black box of ML and bridges the semantic gap between the technologists and the medical fraternity. Explainable AI in Healthcare: Unboxing Machine Learning for Biomedicine intends to be a premier reference for practitioners, researchers, and students at basic, intermediate, and expert levels in computer science, electronics and communications, information technology, instrumentation and control, and electrical engineering. This book will benefit readers in the following ways: it explores the state of the art in computer vision and deep learning, used in tandem to develop autonomous or semi-autonomous diagnostic algorithms in health care; investigates the bridges being built between computer scientists and physicians with XAI; focuses on how data analysis provides the rationale to deal with the challenges of healthcare and makes decision-making more transparent; initiates discussions on human-AI relationships in health care; and unites learning for privacy preservation in health care.

Explainable AI in Healthcare and Medicine: Building a Culture of Transparency and Accountability (Studies in Computational Intelligence #914)

by Arash Shaban-Nejad, Martin Michalowski, David L. Buckeridge

This book highlights the latest advances in the application of artificial intelligence and data science in health care and medicine. Featuring selected papers from the 2020 Health Intelligence Workshop, held as part of the Association for the Advancement of Artificial Intelligence (AAAI) Annual Conference, it offers an overview of the issues, challenges, and opportunities in the field, along with the latest research findings. Discussing a wide range of practical applications, it makes the emerging topics of digital health and explainable AI in health care and medicine accessible to a broad readership. The availability of explainable and interpretable models is a first step toward building a culture of transparency and accountability in health care. As such, this book provides information for scientists, researchers, students, industry professionals, public health agencies, and NGOs interested in the theory and practice of computational models of public and personalized health intelligence.

Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (Lecture Notes in Computer Science #11700)

by Klaus-Robert Müller, Grégoire Montavon, Wojciech Samek, Andrea Vedaldi, Lars Kai Hansen

The development of “intelligent” systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for broader adoption of AI technology, however, is the inherent risk that comes with giving up human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust, and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI and AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.

Explainable AI Recipes: Implement Solutions to Model Explainability and Interpretability with Python

by Pradeepta Mishra

Understand how to use Explainable AI (XAI) libraries and build trust in AI and machine learning models. This book uses a problem-solution approach to explaining machine learning models and their algorithms. It starts with model interpretation for supervised learning linear models, including feature importance, partial dependency analysis, and influential data point analysis for both classification and regression models. Next, it covers supervised learning with non-linear models using state-of-the-art frameworks such as SHAP values/scores and LIME for local interpretation. Explainability for time series models is covered using LIME and SHAP, as are natural language processing tasks such as text classification and sentiment analysis, using ELI5 and ALIBI. The book concludes with complex models for classification and regression, such as neural networks and deep learning models, using the CAPTUM framework, which provides feature attribution, neuron attribution, and activation attribution. After reading this book, you will understand AI and machine learning models and be able to put that knowledge into practice to bring more accuracy and transparency to your analyses. What you will learn: create code snippets and explain machine learning models using Python; leverage deep learning models using the latest code with agile implementations; build, train, and explain neural network models designed to scale; and understand the different variants of neural network models. Who this book is for: AI engineers, data scientists, and software developers interested in XAI.
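
The blurb above names SHAP as one of the frameworks the book covers; as a rough, hedged illustration of that kind of workflow (a minimal sketch, not code from the book; the dataset, model, and sample size are arbitrary choices), a tree-ensemble regressor can be explained with per-feature attributions as follows:

```python
# Minimal sketch of SHAP-based model explanation (illustrative only, not from the book).
# Assumes the shap and scikit-learn packages are installed; dataset choice is arbitrary.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)              # model-specific explainer for tree ensembles
shap_values = explainer.shap_values(X.iloc[:200])  # per-feature attribution for each row
shap.summary_plot(shap_values, X.iloc[:200])       # global importance and direction of effects
```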

Explainable AI with Python

by Leonida Gianfagna, Antonio Di Cecco

This book provides a full presentation of the current concepts and available techniques for making “machine learning” systems more explainable. The approaches presented can be applied to almost all current “machine learning” models: linear and logistic regression, deep learning neural networks, natural language processing, and image recognition, among others. Progress in machine learning is increasing the use of artificial agents to perform critical tasks previously handled by humans (healthcare, legal, and finance, among others). While the principles that guide the design of these agents are understood, most of the current deep-learning models are "opaque" to human understanding. Explainable AI with Python fills the current gap in the literature on this emerging topic by taking both a theoretical and a practical perspective, making the reader quickly capable of working with tools and code for Explainable AI. Beginning with examples of what Explainable AI (XAI) is and why it is needed in the field, the book details different approaches to XAI depending on specific context and need. Hands-on work on interpretable models, with specific examples leveraging Python, is then presented, showing how intrinsically interpretable models can be interpreted and how to produce “human understandable” explanations. Model-agnostic methods for XAI are shown to produce explanations without relying on the internals of “opaque” ML models. Using examples from computer vision, the authors then look at explainable models for deep learning and prospective methods for the future. Taking a practical perspective, the authors demonstrate how to effectively use ML and XAI in science. The final chapter explains adversarial machine learning and how to do XAI with adversarial examples.
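
To make the idea of a model-agnostic, local explanation concrete (a minimal sketch under assumed packages and an arbitrary dataset; it is not code from the book), LIME can explain a single prediction of an otherwise opaque classifier by fitting a simple surrogate model around that instance:

```python
# Minimal sketch of a model-agnostic local explanation with LIME (illustrative only).
# Assumes the lime and scikit-learn packages are installed; dataset and parameters are arbitrary.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# LIME perturbs the chosen instance and fits a local surrogate to the black-box predictions.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top local features with their weights
```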

Explainable AI Within the Digital Transformation and Cyber Physical Systems: XAI Methods and Applications

by Moamar Sayed-Mouchaweh

This book presents Explainable Artificial Intelligence (XAI), which aims at producing explainable models that enable human users to understand and appropriately trust the obtained results. The authors discuss the challenges involved in making machine learning-based AI explainable. First, explanations must be adapted to different stakeholders (end-users, policy makers, industries, utilities, etc.) with different levels of technical knowledge (managers, engineers, technicians, etc.) in different application domains. Second, it is important to develop an evaluation framework and standards in order to measure the effectiveness of the provided explanations at the human and the technical levels. This book gathers research contributions aiming at the development and/or the use of XAI techniques to address these challenges in applications such as healthcare, finance, cybersecurity, and document summarization. It highlights the benefits and requirements of using explainable models in different application domains, providing guidance to readers in selecting the models best adapted to their problem and conditions. The book includes recent developments in the use of Explainable Artificial Intelligence (XAI) to address the challenges of digital transition and cyber-physical systems; provides a textual scientific description of the use of XAI to address these challenges; and presents examples and case studies to increase transparency and understanding of the methodological concepts.

Explainable Ambient Intelligence: Explainable Artificial Intelligence Applications in Smart Life (SpringerBriefs in Applied Sciences and Technology)

by Tin-Chih Toly Chen

This book systematically reviews the progress of Explainable Ambient Intelligence (XAmI) and introduces its methods, tools, and applications. Ambient intelligence (AmI) is a vision in which an environment supports the people inhabiting it in an unobtrusive, interconnected, adaptable, dynamic, embedded, and intelligent way. So far, artificial intelligence (AI) technologies have been widely applied in AmI. However, some advanced AI methods are not easy to understand or communicate, especially for users without sufficient background knowledge of AI, which undoubtedly limits the practicability of these methods. To address this issue, explainable AI (XAI) has been considered a viable strategy. Although XAI technologies and tools applied in other fields can also be applied to explain AI technology applications in AmI, users should be the main focus in the application of AmI, which is slightly different from the application of AI technologies in other fields. This book contains real case studies of the application of XAmI and is a valuable resource for students and researchers.

Explainable and Interpretable Models in Computer Vision and Machine Learning (The Springer Series on Challenges in Machine Learning)

by Hugo Jair Escalante, Sergio Escalera, Isabelle Guyon, Xavier Baró, Yağmur Güçlütürk, Umut Güçlü, Marcel Van Gerven

This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning. Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with almost human-like performance. Although these models have obtained astounding results, they are limited in their explainability and interpretability: what is the rationale behind the decision made? What in the model structure explains its functioning? Hence, while good performance is a critical characteristic of learning machines, explainability and interpretability capabilities are needed to take learning machines to the next step and include them in decision support systems involving human supervision. This book, written by leading international researchers, addresses key topics of explainability and interpretability, including the following: · Evaluation and Generalization in Interpretable Machine Learning · Explanation Methods in Deep Learning · Learning Functional Causal Models with Generative Neural Networks · Learning Interpretable Rules for Multi-Label Classification · Structuring Neural Networks for More Explainable Predictions · Generating Post Hoc Rationales of Deep Visual Classification Decisions · Ensembling Visual Explanations · Explainable Deep Driving by Visualizing Causal Attention · Interdisciplinary Perspective on Algorithmic Job Candidate Search · Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions · Inherent Explainability Pattern Theory-based Video Event Interpretations

Explainable and Interpretable Reinforcement Learning for Robotics (Synthesis Lectures on Artificial Intelligence and Machine Learning)

by Aaron M. Roth, Dinesh Manocha, Ram D. Sriram, Elham Tabassi

This book surveys the state of the art in explainable and interpretable reinforcement learning (RL) as relevant for robotics. While RL in general has grown in popularity and been applied to increasingly complex problems, several challenges have impeded the real-world adoption of RL algorithms for robotics and related areas. These include difficulties in preventing safety constraints from being violated and the issues faced by systems operators who desire explainable policies and actions. Robotics applications present a unique set of considerations and result in a number of opportunities related to their physical, real-world sensory input and interactions. The authors consider classification techniques used in past surveys and papers and attempt to unify terminology across the field. The book provides an in-depth exploration of 12 attributes that can be used to classify explainable/interpretable techniques. These include whether the RL method is model-agnostic or model-specific, self-explainable or post-hoc, as well as additional analysis of the attributes of scope, when-produced, format, knowledge limits, explanation accuracy, audience, predictability, legibility, readability, and reactivity. The book is organized around a discussion of these methods broken down into 42 categories and subcategories, where each category can be classified according to some of the attributes. The authors close by identifying gaps in the current research and highlighting areas for future investigation.

Explainable and Transparent AI and Multi-Agent Systems: 5th International Workshop, EXTRAAMAS 2023, London, UK, May 29, 2023, Revised Selected Papers (Lecture Notes in Computer Science #14127)

by Davide Calvaresi, Amro Najjar, Andrea Omicini, Reyhan Aydogan, Rachele Carli, Giovanni Ciatto, Yazan Mualla, Kary Främling

This volume, LNCS 14127, constitutes the refereed proceedings of the 5th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2023, held in London, UK, in May 2023. The 15 full papers, presented together with 1 short paper, were carefully reviewed and selected from 26 submissions. The workshop focuses on explainable agents and multi-agent systems, explainable machine learning, and cross-domain applied XAI.

Explainable and Transparent AI and Multi-Agent Systems: Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers (Lecture Notes in Computer Science #12688)

by Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling

This book constitutes the proceedings of the Third International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021, which was held virtually due to the COVID-19 pandemic. The 19 long revised papers and 1 short contribution were carefully selected from 32 submissions. The papers are organized in the following topical sections: XAI & machine learning; XAI vision, understanding, deployment and evaluation; XAI applications; XAI logic and argumentation; and decentralized and heterogeneous XAI.

Explainable and Transparent AI and Multi-Agent Systems: 4th International Workshop, EXTRAAMAS 2022, Virtual Event, May 9–10, 2022, Revised Selected Papers (Lecture Notes in Computer Science #13283)

by Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling

This book constitutes the refereed proceedings of the 4th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2022, held virtually during May 9–10, 2022. The 14 full papers included in this book were carefully reviewed and selected from 25 submissions. They were organized in topical sections as follows: explainable machine learning; explainable neuro-symbolic AI; explainable agents; XAI measures and metrics; and AI & law.

Explainable Artificial Intelligence: Methodology, Tools, and Applications (SpringerBriefs in Applied Sciences and Technology)

by Tin-Chih Toly Chen

This book provides a comprehensive overview of the latest developments in Explainable AI (XAI) and its applications in manufacturing. It covers the various methods, tools, and technologies that are being used to make AI more understandable and communicable for factory workers. With the increasing use of AI in manufacturing, there is a growing need to address the limitations of advanced AI methods that are difficult to understand or explain to those without a background in AI. This book addresses this need by providing a systematic review of the latest research and advancements in XAI specifically tailored for the manufacturing industry. The book includes real-world case studies and examples to illustrate the practical applications of XAI in manufacturing. It is a valuable resource for researchers, engineers, and practitioners working in the field of AI and manufacturing.

Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part III (Communications in Computer and Information Science #1903)

by Luca Longo

This three-volume set constitutes the refereed proceedings of the First World Conference on Explainable Artificial Intelligence, xAI 2023, held in Lisbon, Portugal, in July 2023. The 94 papers presented were thoroughly reviewed and selected from the 220 qualified submissions. They are organized in the following topical sections. Part I: Interdisciplinary perspectives, approaches and strategies for xAI; Model-agnostic explanations, methods and techniques for xAI; Causality and Explainable AI; Explainable AI in finance, cybersecurity, health-care and biomedicine. Part II: Surveys, benchmarks, visual representations and applications for xAI; xAI for decision-making and human-AI collaboration, and for Machine Learning on Graphs with Ontologies and Graph Neural Networks; Actionable eXplainable AI; Semantics and explainability; and Explanations for Advice-Giving Systems. Part III: xAI for time series and Natural Language Processing; Human-centered explanations and xAI for Trustworthy and Responsible AI; and Explainable and Interpretable AI with Argumentation, Representational Learning and concept extraction for xAI.

Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I (Communications in Computer and Information Science #1901)

by Luca Longo

This three-volume set constitutes the refereed proceedings of the First World Conference on Explainable Artificial Intelligence, xAI 2023, held in Lisbon, Portugal, in July 2023. The 94 papers presented were thoroughly reviewed and selected from the 220 qualified submissions. They are organized in the following topical sections. Part I: Interdisciplinary perspectives, approaches and strategies for xAI; Model-agnostic explanations, methods and techniques for xAI; Causality and Explainable AI; Explainable AI in finance, cybersecurity, health-care and biomedicine. Part II: Surveys, benchmarks, visual representations and applications for xAI; xAI for decision-making and human-AI collaboration, and for Machine Learning on Graphs with Ontologies and Graph Neural Networks; Actionable eXplainable AI; Semantics and explainability; and Explanations for Advice-Giving Systems. Part III: xAI for time series and Natural Language Processing; Human-centered explanations and xAI for Trustworthy and Responsible AI; and Explainable and Interpretable AI with Argumentation, Representational Learning and concept extraction for xAI.

Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part II (Communications in Computer and Information Science #1902)

by Luca Longo

Chapters “Finding Spurious Correlations with Function-Semantic Contrast Analysis” and “Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic” are available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Explainable Artificial Intelligence and Process Mining Applications for Healthcare: Third International Workshop, XAI-Healthcare 2023, and First International Workshop, PM4H 2023, Portoroz, Slovenia, June 15, 2023, Proceedings (Communications in Computer and Information Science #2020)

by Jose M. Juarez, Carlos Fernandez-Llatas, Concha Bielza, Owen Johnson, Primoz Kocbek, Pedro Larrañaga, Niels Martin, Jorge Munoz-Gama, Gregor Štiglic, Marcos Sepulveda, Alfredo Vellido

This book constitutes the proceedings of the Third International Workshop on Explainable Artificial Intelligence in Healthcare, XAI-Healthcare 2023, and the First International Workshop on Process Mining Applications for Healthcare, PM4H 2023, which took place in conjunction with AIME 2023 in Portoroz, Slovenia, on June 15, 2023. The 7 full papers included from XAI-Healthcare were carefully reviewed and selected from 11 submissions; they focus on all aspects of eXplainable Artificial Intelligence (XAI) in the medical and healthcare field. For PM4H, 5 papers were accepted from 17 submissions; they deal with data-driven process analysis techniques in healthcare.

Explainable Artificial Intelligence Based on Neuro-Fuzzy Modeling with Applications in Finance (Studies in Computational Intelligence #964)

by Tom Rutkowski

The book proposes techniques, with an emphasis on the financial sector, that make recommendation systems both accurate and explainable. The vast majority of AI models work as black-box models. However, in many applications, e.g., medical diagnosis or venture capital investment recommendations, it is essential to explain the rationale behind the decisions or recommendations of AI systems. Therefore, the development of artificial intelligence cannot ignore the need for interpretable, transparent, and explainable models. First, the main idea of explainable recommenders is outlined against the background of neuro-fuzzy systems. In turn, various novel recommenders are proposed, each characterized by high accuracy achieved with a reasonable number of interpretable fuzzy rules. The main part of the book is devoted to the very challenging problem of stock market recommendations. An original concept of an explainable recommender, based on patterns from previous transactions, is developed; it recommends stocks that fit the strategy of investors, and its recommendations are explainable for investment advisers.

Explainable Artificial Intelligence for Biomedical Applications (River Publishers Series in Biomedical Engineering)

by Utku Kose, Deepak Gupta, Xi Chen

Since its first appearance, artificial intelligence has been delivering revolutionary outcomes for real-world problems. It has strong ties to the biomedical field, and today’s intelligent systems compete with human capabilities in medical tasks. However, advanced use of artificial intelligence turns intelligent systems into black boxes, which hinders building trustworthy intelligent systems for medical applications. For a remarkable amount of time, researchers have tried to solve the black-box issue with modular additions, which led to the rise of the term interpretable artificial intelligence. As the literature matured (in particular, as a result of deep learning), that term transformed into explainable artificial intelligence (XAI). This book provides an essential edited work on the latest advancements in explainable artificial intelligence (XAI) for biomedical applications. It includes not only introductory perspectives but also applied material and discussions of critical problems as well as future insights. Topics discussed in the book include: XAI for applications with medical images; XAI use cases for alternative medical data/tasks; different XAI methods for biomedical applications; and reviews of XAI research on critical biomedical problems. Explainable Artificial Intelligence for Biomedical Applications is ideal for academicians, researchers, students, engineers, and experts from computer science and the biomedical, medical, and health sciences. It also welcomes readers from other fields who want to learn about use cases of XAI for black-box artificial intelligence. In this sense, the book can be used both for teaching and as a reference source.

Explainable Artificial Intelligence for Cyber Security: Next Generation Artificial Intelligence (Studies in Computational Intelligence #1025)

by Mohiuddin Ahmed, Sheikh Rabiul Islam, Adnan Anwar, Nour Moustafa, Al-Sakib Khan Pathan

This book argues that explainable artificial intelligence (XAI) is going to replace the traditional artificial intelligence, machine learning, and deep learning algorithms that today work as black boxes. XAI plays a vital role in understanding these algorithms better and interpreting their complex networks. In the last few decades, we have embraced AI in our daily lives to solve a plethora of problems; one notable problem is cyber security. In the coming years, traditional AI algorithms will not be able to address zero-day cyber attacks, and hence, to capitalize on AI algorithms, it is absolutely important to focus more on XAI. This book therefore serves as an excellent reference for those working in cyber security and artificial intelligence.

Explainable Artificial Intelligence for Intelligent Transportation Systems

by Amina Adadi, Afaf Bouhoute

Artificial Intelligence (AI) and Machine Learning (ML) are set to revolutionize all industries, and the Intelligent Transportation Systems (ITS) field is no exception. While ML models, especially deep learning models, achieve great performance in terms of accuracy, the outcomes they provide are not amenable to human scrutiny and can hardly be explained. This can be very problematic, especially for systems of a safety-critical nature such as transportation systems. Explainable AI (XAI) methods have been proposed to tackle this issue by producing human-interpretable representations of machine learning models while maintaining performance. These methods hold the potential to increase public acceptance of and trust in AI-based ITS. Features: provides the necessary background for newcomers to the field (both academics and interested practitioners); presents a timely snapshot of explainable and interpretable models in ITS applications; discusses ethical, societal, and legal implications of adopting XAI in the context of ITS; and identifies future research directions and open problems.

Explainable Artificial Intelligence for Intelligent Transportation Systems: Ethics and Applications

by Loveleen Gaur, Biswa Mohan Sahoo

Transportation typically entails crucial “life-or-death” choices, so delegating such decisions to an AI algorithm without any explanation poses a serious threat. Hence, explainability and responsible AI are crucial in the context of intelligent transportation. AI-based control mechanisms are gaining prominence in Intelligent Transportation System (ITS) implementations such as traffic management systems and autonomous driving applications. Explainable artificial intelligence for intelligent transportation systems addresses challenges in areas such as autonomous vehicles, traffic management, data integration and analytics, and monitoring of the surrounding environment. The book discusses and informs researchers about explainable intelligent transportation systems, presents prospective methods and techniques for enabling the interpretability of transportation systems, and addresses ethical considerations alongside technical ones.
