Multimedia Tools and Applications for Environmental & Biodiversity Informatics (Multimedia Systems and Applications)
by Alexis Joly, Stefanos Vrochidis, Kostas Karatzas, Ari Karppinen, and Pierre Bonnet
This edited volume focuses on the latest and most impactful advances in the multimedia data now globally available on environmental and earth biodiversity. These data reflect the status, behavior, and change of the environment, as well as human interests and concerns, and are increasingly crucial for understanding environmental issues and phenomena. The volume addresses the need for advanced methods, techniques, and tools for collecting, managing, analyzing, understanding, and modeling environmental and biodiversity data, including automated or collaborative species identification, modeling of species distributions and their environment, air-quality monitoring, and bio-acoustic monitoring. Researchers and practitioners in multimedia and environmental topics will find the chapters essential to their continued studies.
Multimedia Watermarking: Latest Developments and Trends
by Aditya Kumar Sahu
Multimedia watermarking is a key ingredient for integrity verification, transaction tracking, copyright protection, authentication, copy control, and forgery detection. This book provides an extensive survey of digital watermarking, from the fundamentals to cutting-edge techniques. A crucial aspect of multimedia security is the ability to detect forged or tampered regions in a multimedia object; the book emphasizes how tampering detection, localization, and recovery of manipulated information not only limit but can eliminate the scope for unauthorized usage. Finally, the book provides the groundwork for understanding the role of intelligent machines and blockchain in achieving better security in multimedia watermarking. After reading this book, readers will readily comprehend the wide variety of applications, theoretical principles, and effective solutions for protecting intellectual property rights.
Multimedia Watermarking Techniques and Applications (Internet And Communications Ser.)
by Darko Kirovski
Intellectual property owners must continually exploit new ways of reproducing, distributing, and marketing their products. However, the threat of piracy looms as a major problem with digital distribution and storage technologies. Multimedia Watermarking Techniques and Applications covers all current and future trends in the design of modern digital watermarking systems.
The Multimediated Rhetoric of the Internet: Digital Fusion (Routledge Studies in Rhetoric and Communication #10)
by Carolyn Handa
This project is a critical, rhetorical study of the digital text we call the Internet, in particular the style and figurative surface of its many pages as well as the conceptual, design patterns structuring the content of those same pages. Handa argues that as our lives become increasingly digital, we must consider rhetoric applicable to more than just printed text or to images. Digital analysis demands our acknowledgement of digital fusion, a true merging of analytic skills in many media and dimensions. CDs, DVDs, and an Internet increasingly capable of streaming audio and video prove that literacy today means more than it used to, namely the ability to understand information, however presented. Handa considers pedagogy, professional writing, hypertext theory, rhetorical studies, and composition studies, moving analysis beyond merely "using" the web towards "thinking" rhetorically about its construction and its impact on culture. This book shows how analyzing the web rhetorically helps us to understand the inescapable fact that culture is reflected through all media fused within the parameters of digital technology.
Multimodal Affective Computing: Technologies and Applications in Learning Environments
by Ramón Zatarain Cabada, Héctor Manuel López, and Hugo Jair Escalante
This book explores AI methodologies for the implementation of affective states in intelligent learning environments. Divided into four parts, Multimodal Affective Computing: Technologies and Applications in Learning Environments begins with an overview of Affective Computing and Intelligent Learning Environments, from their fundamentals and essential theoretical support up to their fusion and some successful practical applications. The basic concepts of Affective Computing, Machine Learning, Pattern Recognition in Affective Computing, and Affective Learning Environments are presented in a comprehensive and easy-to-read manner. The second part introduces a review of the emerging field of Sentiment Analysis for Learning Environments, including a systematic descriptive tour through topics such as building resources for sentiment detection, methods for data representation, designing and testing the classification models, and model integration into a learning system. The methodologies corresponding to Multimodal Recognition of Learning-Oriented Emotions are presented in the third part of the book, covering topics such as building resources for emotion detection, methods for data representation, multimodal recognition systems, and multimodal emotion recognition in learning environments. The fourth and last part of the book is devoted to a wide application field for the combination of these methodologies, Automatic Personality Recognition, dealing with issues such as building resources for personality recognition, methods for data representation, personality recognition models, and multimodal personality recognition for affective computing. This book is useful not only for beginners interested in affective computing and intelligent learning environments, but also for advanced practitioners and experts in the field. It compiles an end-to-end treatment of these subjects, with an emphasis on educational applications, making it easy for researchers and students to get on track with the fundamentals, established methodologies, conventional evaluation protocols, and the latest progress in these areas.
Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings
by Juliana Miehle, Wolfgang Minker, Elisabeth André, and Koichiro Yoshino
This book aims to explore and discuss theories and technologies for the development of socially competent and culture-aware embodied conversational agents for elderly care. To tackle the challenges of ageing societies, this book was written by experts with backgrounds in assistive technologies for elderly care, culture-aware computing, multimodal dialogue, social robotics, and synthetic agents. Chapter 1 presents a vision of an intelligent agent to illustrate the current challenges for the design and development of adaptive systems. Chapter 2 examines how notions of trust and empathy may be applied to human–robot interaction and how they can be used to create the next generation of empathic agents, which address some of the pressing issues in multicultural ageing societies. Chapter 3 discusses multimodal machine learning as an approach to enable more effective and robust modelling technologies and to develop socially competent and culture-aware embodied conversational agents for elderly care. Chapter 4 explores the challenges associated with real-world field tests and deployments. Chapter 5 gives a short introduction to socio-cognitive language processing, the idea of coping with everyday language, irony, sarcasm, humor, paralinguistic information such as the physical and mental state and traits of the dialogue partner, and social aspects. This book grew out of the Shonan Meeting seminar entitled “Multimodal Agents for Ageing and Multicultural Societies” held in 2018 in Japan. It will help researchers and practitioners understand this emerging field and identify promising approaches from a variety of disciplines such as human–computer interaction, artificial intelligence, modelling, and learning.
Multimodal AI in Healthcare: A Paradigm Shift in Health Intelligence (Studies in Computational Intelligence #1060)
by Arash Shaban-Nejad, Martin Michalowski, and Simone Bianco
This book aims to highlight the latest achievements in the use of AI and multimodal artificial intelligence in biomedicine and healthcare. Multimodal AI is a relatively new concept in AI, in which different types of data (e.g., text, image, video, audio, and numerical data) are collected, integrated, and processed through a series of intelligent processing algorithms to improve performance. The edited volume contains selected papers presented at the 2022 Health Intelligence workshop and the associated Data Hackathon/Challenge, co-located with the Thirty-Sixth AAAI Conference on Artificial Intelligence, and presents an overview of the issues, challenges, and potentials in the field, along with new research results. This book provides information for researchers, students, industry professionals, clinicians, and public health agencies interested in the applications of AI and multimodal AI in public health and medicine.
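As a rough illustration of the multimodal idea described in the blurb above (different data types combined to improve performance), the sketch below trains one model per modality and averages their predicted probabilities, a simple late-fusion strategy. The synthetic features, the two logistic-regression models, and the averaging step are illustrative assumptions, not methods taken from the book.

```python
# Minimal late-fusion sketch: one classifier per modality, probabilities averaged.
# Features are random stand-ins for, e.g., text embeddings and imaging descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)                       # binary outcome (synthetic)
text_feats = rng.normal(size=(200, 32)) + y[:, None]   # "text-derived" features
image_feats = rng.normal(size=(200, 16)) + y[:, None]  # "image-derived" features

text_model = LogisticRegression(max_iter=1000).fit(text_feats, y)
image_model = LogisticRegression(max_iter=1000).fit(image_feats, y)

# Late fusion: average the per-modality class probabilities, then take the argmax
fused_prob = (text_model.predict_proba(text_feats) +
              image_model.predict_proba(image_feats)) / 2
fused_pred = fused_prob.argmax(axis=1)
print("fused training accuracy:", (fused_pred == y).mean())
```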
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
by Ronald Böck, Francesca Bonin, Nick Campbell, and Ronald Poppe
This book constitutes the thoroughly refereed post-workshop proceedings of the Second Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2014, held in conjunction with INTERSPEECH 2014 in Singapore on September 14, 2014. The 9 revised papers, presented together with a keynote talk, were carefully reviewed and selected from numerous submissions. They are organized in two sections: human-machine interaction, and dialogs and speech recognition.
Multimodal Analytics for Next-Generation Big Data Technologies and Applications
by Kah Phooi Seng, Li-Minn Ang, Alan Wee-Chung Liew, and Junbin Gao
This edited book will serve as a source of reference for technologies and applications for multimodality data analytics in big data environments. After an introduction, the editors organize the book into four main parts on sentiment, affect and emotion analytics for big multimodal data; unsupervised learning strategies for big multimodal data; supervised learning strategies for big multimodal data; and multimodal big data processing and applications. The book will be of value to researchers, professionals and students in engineering and computer science, particularly those engaged with image and speech processing, multimodal information processing, data science, and artificial intelligence.
Multimodal and Tensor Data Analytics for Industrial Systems Improvement (Springer Optimization and Its Applications #211)
by Panos M. Pardalos, Nathan Gaw, and Mostafa Reisi Gahrooei
This volume covers the latest methodologies for multimodal data fusion and analytics across several applications. The curated content presents recent developments and challenges in multimodal data analytics and shines a light on a pathway toward new research developments. Chapters are written by eminent researchers and practitioners who present their research results and ideas based on their expertise. As data collection instruments have improved in quality and quantity for many applications, there has been an unprecedented increase in the availability of data from multiple sources, known as modalities. Modalities express a large degree of heterogeneity in their form, scale, resolution, and accuracy. Determining how to optimally combine the data for prediction and characterization is becoming increasingly important. Several research studies have investigated integrating multimodality data and discussed the challenges and limitations of multimodal data fusion. This volume provides a topical overview of various methods in multimodal data fusion for industrial engineering and operations research applications, such as manufacturing and healthcare. Advancements in sensing technologies and the shift toward the Internet of Things (IoT) have transformed and will continue to transform data analytics by producing new requirements and more complex forms of data. The abundance of data creates an unprecedented opportunity to design more efficient systems and make near-optimal operational decisions. On the other hand, the structural complexity and heterogeneity of the generated data pose a significant challenge to extracting useful features and patterns for making use of the data and facilitating decision-making. Therefore, continual research is needed to develop new statistical and analytical methodologies that overcome these data challenges and turn them into opportunities.
Multimodal Biometric and Machine Learning Technologies: Applications for Computer Vision
by Sandeep Kumar, Deepika Ghai, Arpit Jain, Suman Lata Tripathi, and Shilpa Rani
With an increasing demand for biometric systems in various industries, this book on multimodal biometric systems answers the call for increased resources to help researchers, developers, and practitioners. Multimodal biometric and machine learning technologies have revolutionized the field of security and authentication. These technologies utilize multiple sources of information, such as facial recognition, voice recognition, and fingerprint scanning, to verify an individual's identity. The need for enhanced security and authentication has become increasingly important, and with the rise of digital technologies, cyber-attacks and identity theft have increased exponentially. Traditional authentication methods, such as passwords and PINs, have become less secure as hackers devise new ways to bypass them. In this context, multimodal biometric and machine learning technologies offer a more secure and reliable approach to authentication. This book provides relevant information on multimodal biometric and machine learning technologies and focuses on how humans and computers interact at ever-increasing levels of complexity and simplicity. It covers the theory of multimodal biometric design, evaluation, and user diversity, and explains the underlying causes of the social and organizational problems that are typically addressed through descriptions of rehabilitation methods for specific processes. Furthermore, the book describes new modeling algorithms accessible to scientists of all varieties. Audience: Researchers in computer science and biometrics, developers designing and implementing biometric systems, and practitioners using biometric systems in their work, such as law enforcement personnel or healthcare professionals.
Multimodal Biometric Identification System: Case Study of Real-Time Implementation
by Sampada Dhole and Vinayak Bairagi
This book presents a novel method of multimodal biometric fusion using a random selection of biometrics, covering a new method of feature extraction and a new framework for sensor-level and feature-level fusion. Most biometric systems in use today are unimodal systems, which have several limitations; multimodal systems can increase the matching accuracy of a recognition system. This monograph shows how the problems of unimodal systems can be dealt with efficiently, and focuses on multimodal biometric identification and sensor-level and feature-level fusion. It discusses fusion in biometric systems to improve performance.
• Presents a random selection of biometrics to ensure that the system is interacting with a live user.
• Offers a compilation of the techniques used for unimodal as well as multimodal biometric identification systems, elaborated with the required justification and interpretation, with case studies, figures, tables, graphs, and so on.
• Shows that, for feature-level fusion, using contourlet transform features with LDA for dimension reduction attains higher accuracy than block variance features (see the sketch after this entry).
• Includes contributions in feature extraction and pattern recognition that increase the accuracy of the system.
• Explains the contourlet transform as the best modality-specific feature extraction algorithm for fingerprint, face, and palmprint.
This book is for researchers, scholars, and students of Computer Science, Information Technology, Electronics and Electrical Engineering, Mechanical Engineering, and people working on biometric applications.
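The bullet on feature-level fusion above mentions concatenating modality features and reducing them with LDA. The sketch below is a minimal illustration only: it concatenates random stand-in vectors for fingerprint, face, and palmprint features, applies LDA, and matches with a 1-nearest-neighbour classifier. The contourlet-transform feature extraction itself and the book's actual experimental settings are not reproduced here.

```python
# Feature-level fusion sketch: concatenate per-modality features, reduce with LDA, match with 1-NN.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_subjects, n_samples = 10, 8                        # 10 identities, 8 samples each
labels = np.repeat(np.arange(n_subjects), n_samples)

# Random stand-ins for modality-specific feature vectors (e.g., contourlet coefficients)
finger = rng.normal(size=(labels.size, 64))
face = rng.normal(size=(labels.size, 128))
palm = rng.normal(size=(labels.size, 64))

# Feature-level fusion: concatenate the modality feature vectors
fused = np.hstack([finger, face, palm])

# LDA reduces the fused vector to at most (n_subjects - 1) dimensions
lda = LinearDiscriminantAnalysis(n_components=n_subjects - 1)
reduced = lda.fit_transform(fused, labels)

# Simple nearest-neighbour matcher on the reduced features
matcher = KNeighborsClassifier(n_neighbors=1).fit(reduced, labels)
print("training match rate:", matcher.score(reduced, labels))
```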
Multimodal Biometric Systems: Security and Applications (Internet of Everything (IoE))
by Prof. Dr. Rashmi Gupta and Dr. Manju Khari
Many governments around the world are calling for the use of biometric systems to provide crucial societal functions, consequently making it an urgent area for action. The current performance of some biometric systems in terms of their error rates, robustness, and system security may prove to be inadequate for large-scale applications to process millions of users at a high rate of throughput. This book focuses on fusion in biometric systems. It discusses the present level of performance, its limitations, and proposed methods to improve performance. It describes the fundamental concepts, current research, and security-related issues. The book will present a computational perspective, identify challenges, and cover new problem-solving strategies, offering solved problems and case studies to help with reader comprehension and deep understanding. This book is written for researchers, practitioners, both undergraduate and post-graduate students, and those working in various engineering fields such as Systems Engineering, Computer Science, Information Technology, Electronics, and Communications.
Multimodal Brain Image Analysis and Mathematical Foundations of Computational Anatomy: 4th International Workshop, MBIA 2019, and 7th International Workshop, MFCA 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Proceedings (Lecture Notes in Computer Science #11846)
by Dajiang Zhu, Jingwen Yan, Heng Huang, Li Shen, Paul M. Thompson, Carl-Fredrik Westin, Xavier Pennec, Sarang Joshi, Mads Nielsen, Tom Fletcher, Stanley Durrleman, and Stefan Sommer
This book constitutes the refereed joint proceedings of the 4th International Workshop on Multimodal Brain Image Analysis, MBIA 2019, and the 7th International Workshop on Mathematical Foundations of Computational Anatomy, MFCA 2019, held in conjunction with the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019, in Shenzhen, China, in October 2019. The 16 full papers presented at MBIA 2019 and the 7 full papers presented at MFCA 2019 were carefully reviewed and selected. The MBIA papers aim to advance the state of the art in multimodal brain image analysis, in terms of analysis methodologies, algorithms, software systems, validation approaches, benchmark datasets, neuroscience, and clinical applications. The MFCA papers are devoted to statistical and geometrical methods for modeling the variability of biological shapes. The goal is to foster interactions between the mathematical community around shapes and the MICCAI community around computational anatomy applications.
Multimodal Computational Attention for Scene Understanding and Robotics
by Boris Schauerte
This book presents state-of-the-art computational attention models that have been successfully tested in diverse application areas and can build the foundation for artificial systems to efficiently explore, analyze, and understand natural scenes. It gives a comprehensive overview of the most recent computational attention models for processing visual and acoustic input. It covers the biological background of visual and auditory attention, as well as bottom-up and top-down attentional mechanisms, and discusses various applications. In the first part, new approaches for bottom-up visual and acoustic saliency models are presented and applied to the task of audio-visual scene exploration of a robot. In the second part, the influence of top-down cues for attention modeling is investigated.
Multimodal Data Fusion for Bioinformatics Artificial Intelligence
by Umesh Kumar Lilhore, Abhishek Kumar, Narayan Vyas, Sarita Simaiya, and Vishal Dutt
Multimodal Data Fusion for Bioinformatics Artificial Intelligence is an indispensable resource for those exploring how cutting-edge data fusion methods interact with the rapidly developing field of bioinformatics, covering innovative fusion methods and their applications in ‘omics’ research while addressing the ethical implications and future developments shaping the field today. Beginning with the basics of integrating different data types, the book delves into the use of AI for processing and understanding complex “omics” data, ranging from genomics to metabolomics. The revolutionary potential of AI techniques in bioinformatics is thoroughly explored, including the use of neural networks, graph-based algorithms, single-cell RNA sequencing, and other cutting-edge topics. The second half of the book focuses on the ethical and practical implications of using AI in bioinformatics. The tangible benefits of these technologies in healthcare and research are highlighted in chapters devoted to precision medicine, drug development, and biomedical literature. The book addresses a wide range of ethical concerns, from data privacy to model interpretability, providing readers with a well-rounded education on the subject. Finally, the book explores forward-looking developments such as quantum computing and augmented reality in bioinformatics AI. This comprehensive resource offers a bird’s-eye view of the intersection of AI, data fusion, and bioinformatics, catering to readers of all experience levels.
A Multimodal End-2-End Approach to Accessible Computing (Human–Computer Interaction Series)
by Pradipta Biswas, Patrick Langdon, Luis Almeida, and Carlos Duarte
This book illustrates how interactive systems can help elderly and disabled populations engage with the world around them by finding methods to overcome the difficulties these communities face when using such systems. It presents the latest state-of-the-art technology and provides a vision for accessibility in the near future. The challenges faced by accessibility practitioners are discussed, and the different phases of delivering accessible products and services are explored. A collection of eminent researchers from around the world cover topics on developing and standardizing user models for inclusive design and adaptable multimodal system development for digital TV and ubiquitous devices, presenting research on intelligent voice recognition, adaptable pointing, browsing and navigation, and affect and gesture recognition. The research focuses not only on how these technologies can be hugely beneficial to primary users, but also on how they often find useful applications for their able-bodied counterparts. For this new edition, new chapters have been added on the latest developments in games for the visually impaired, inclusive interfaces for the agricultural industry in India, and technologies to improve accessibility in broadcasting in Japan. A Multimodal End-2-End Approach to Accessible Computing will be an invaluable resource for researchers and practitioners alike.
A Multimodal End-2-End Approach to Accessible Computing (Human–Computer Interaction Series)
by Pradipta Biswas, Carlos Duarte, Patrick Langdon, Luis Almeida, and Christoph Jung
Research in intelligent interactive systems can offer valuable assistance to elderly and disabled populations by helping them to achieve greater levels of engagement with the world. Many users find it difficult to use existing interaction devices, whether because of physical or age-related impairments. However, research on intelligent voice recognition, adaptable pointing, browsing and navigation, and affect and gesture recognition can hugely benefit such users. Additionally, systems and services developed for elderly or disabled people often find useful applications for their able-bodied counterparts. A Multimodal End-2-End Approach to Accessible Computing illustrates the state of the art of technology and presents a vision for accessibility in the near future. It considers challenges faced by accessibility practitioners at research institutes, industries and legislative institutions throughout the world, and explores the different phases of delivering accessible products and services through design, development, deployment and maintenance. A collection of eminent researchers cover topics on developing and standardizing user models for inclusive design and adaptable multimodal system development for digital TV and ubiquitous devices. With a foreword from the BBC’s Head of Technology and organiser of the Switchover Help Scheme, and an End Note from the chairman of the ITU-T’s Focus Group on Audiovisual Media Accessibility (presenting a vision for accessible computing), this book will be an invaluable resource for researchers and practitioners.
Multimodal Generative AI
by Akansha Singh and Krishna Kant Singh
This book stands at the forefront of AI research, offering a comprehensive examination of multimodal generative technologies. Readers are taken on a journey through the evolution of generative models, from early neural networks to contemporary marvels like GANs and VAEs, and their transformative application in synthesizing realistic images and videos. In parallel, the text delves into the intricacies of language models, with a particular focus on revolutionary transformer-based designs. A core highlight of this work is its detailed discourse on integrating visual and textual models, laying out state-of-the-art techniques for creating cohesive, multimodal AI systems. “Multimodal Generative AI” is more than a mere academic text; it’s a visionary piece that speculates on the future of AI, weaving through case studies in autonomous systems, content creation, and human-computer interaction. The book also fosters a dialogue on responsible innovation in this dynamic field. Tailored for postgraduates, researchers, and professionals, this book is a must-read for anyone invested in the future of AI. It empowers its readers with the knowledge to harness the potential of multimodal systems in solving complex problems, merging visual understanding with linguistic prowess, and can be used as a reference for postgraduates and researchers in related areas.
Multimodal Intelligent Sensing in Modern Applications
by Masood Ur Rehman, Ahmed Zoha, Muhammad Ali Jamshed, and Naeem Ramzan
Discover the design, implementation, and analytical techniques for multimodal intelligent sensing in this cutting-edge text. The Internet of Things (IoT) is becoming ever more comprehensively integrated into everyday life. The intelligent systems that power smart technologies rely on increasingly sophisticated sensors to monitor inputs and respond dynamically. Multimodal sensing offers enormous benefits for these technologies, but also comes with greater challenges; it has never been more essential to offer energy-efficient, reliable, interference-free sensing systems for use with the modern Internet of Things. Multimodal Intelligent Sensing in Modern Applications provides an introduction to systems that incorporate multiple sensors to produce situational awareness and process inputs. It is divided into three parts: physical design aspects, data acquisition and analysis techniques, and security and energy challenges. Together these cover all the major topics in multimodal sensing, making the book an indispensable volume for engineers and other professionals looking to design the smart devices of the future. Readers will also find contributions from multidisciplinary experts in wireless communications, signal processing, and sensor design; coverage of both software and hardware solutions to sensing challenges; and detailed treatment of advanced topics such as efficient deployment, data fusion, machine learning, and more. Multimodal Intelligent Sensing in Modern Applications is ideal for experienced engineers and designers who need to apply their skills to the Internet of Things and 5G/6G networks. It can also act as an introductory text for graduate researchers seeking to understand the background, design, and implementation of various sensor types and data analytics tools.
Multimodal Interaction Technologies for Training Affective Social Skills
by Satoshi Nakamura
This book focuses on how interactive, multimodal technology such as virtual agents can be used in training and treatment (social skills training, cognitive behavioral therapy). People with socio-affective deficits have difficulty controlling their social behavior and also struggle to interpret others’ social behavior. Behavioral training, such as social skills training, is used in clinical settings: patients are trained by a coach to experience social interaction and reduce social stress. In addition to behavioral training, cognitive behavioral therapy is also useful for better understanding and training social-affective interaction. All these methods are effective but expensive and difficult to access. This book describes how multimodal interactive technology can be used in healthcare for measuring and training social-affective interactions. Sensing technology analyzes users’ behaviors and eye gaze, and various machine learning methods can be used for prediction tasks. The book focuses on analyzing human behaviors and implementing training methods (e.g., by virtual agents, virtual reality, dialogue modeling, personalized feedback, and evaluations). Target populations include people with depression, schizophrenia, autism spectrum disorder, and a much larger group of social pathological phenomena.
Multimodal Interaction with W3C Standards
by Deborah A. Dahl
This book presents new standards for multimodal interaction published by the W3C and other standards bodies in straightforward and accessible language, while also illustrating the standards in operation through case studies and chapters on innovative implementations. The book illustrates how, as smart technology becomes ubiquitous and appears in more and more different shapes and sizes, vendor-specific approaches to multimodal interaction become impractical, motivating the need for standards. This book covers standards for voice, emotion, natural language understanding, dialog, and multimodal architectures. The book describes the standards in a practical manner, making them accessible to developers, students, and researchers.
· A comprehensive resource that explains the W3C standards for multimodal interaction in a clear and straightforward way;
· Includes case studies of the use of the standards on a wide variety of devices, including mobile devices, tablets, wearables and robots, in applications such as assisted living, language learning, and health care;
· Features illustrative examples of implementations that use the standards, to help spark innovative ideas for future applications.
Multimodal Interactive Pattern Recognition and Applications
by Enrique Vidal, Francisco Casacuberta, and Alejandro Héctor Toselli
This book presents a different approach to pattern recognition (PR) systems, in which users of a system are involved during the recognition process. This can help to avoid later errors and reduce the costs associated with post-processing. The book also examines a range of advanced multimodal interactions between the machine and the users, including handwriting, speech and gestures. Features: presents an introduction to the fundamental concepts and general PR approaches for multimodal interaction modeling and search (or inference); provides numerous examples and a helpful Glossary; discusses approaches for computer-assisted transcription of handwritten and spoken documents; examines systems for computer-assisted language translation, interactive text generation and parsing, relevance-based image retrieval, and interactive document layout analysis; reviews several full working prototypes of multimodal interactive PR applications, including live demonstrations that can be publicly accessed on the Internet.
The Multimodal Learning Analytics Handbook
by Michail Giannakos, Daniel Spikol, Daniele Di Mitri, Kshitij Sharma, Xavier Ochoa, and Rawad Hammad
This handbook is the first book to cover the area of Multimodal Learning Analytics (MMLA). MMLA is an emerging domain of Learning Analytics and plays an important role in expanding the Learning Analytics goal of understanding and improving learning in all the different environments where it occurs. The challenge for research and practice in this field is how to develop theories about the analysis of human behaviors during diverse learning processes and how to create useful tools that can augment the capabilities of learners and instructors in a way that is ethical and sustainable. Behind this area, the CrossMMLA research community exchanges ideas on how to analyze evidence from multimodal and multisystem data, how to extract meaning from the increasingly fluid and complex data coming from different kinds of transformative learning situations, and how best to feed the results of these analyses back to achieve positive transformative actions on those learning processes. The handbook also describes how MMLA uses advances in machine learning and affordable sensor technologies to act as a virtual observer/analyst of learning activities. It describes how this “virtual nature” allows MMLA to provide new insights into learning processes that happen across multiple contexts between stakeholders, devices and resources. Using such technologies in combination with machine learning, Learning Analytics researchers can now perform text, speech, handwriting, sketch, gesture, affective, or eye-gaze analysis, improve the accuracy of their predictions and learned models, and provide automated feedback to enable learner self-reflection. However, with this increased complexity in data, new challenges also arise. Conducting the data gathering, pre-processing, analysis, annotation and sense-making in a way that is meaningful for learning scientists and other stakeholders (e.g., students or teachers) still poses challenges in this emergent field. This handbook aims to serve as a unique resource for state-of-the-art methods and processes. Chapter 11 of this book is available open access under a CC BY 4.0 license at link.springer.com.
Multimodal Learning for Clinical Decision Support: 11th International Workshop, ML-CDS 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings (Lecture Notes in Computer Science #13050)
by Tanveer Syeda-Mahmood, Xiang Li, Anant Madabhushi, Hayit Greenspan, Quanzheng Li, Richard Leahy, Bin Dong, and Hongzhi Wang
This book constitutes the refereed joint proceedings of the 11th International Workshop on Multimodal Learning for Clinical Decision Support, ML-CDS 2021, held in conjunction with the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2021, in Strasbourg, France, in October 2021. The workshop was held virtually due to the COVID-19 pandemic. The 10 full papers presented at ML-CDS 2021 were carefully reviewed and selected from numerous submissions. The ML-CDS papers discuss machine learning on multimodal data sets for clinical decision support and treatment planning.