Multimedia and Network Information Systems
by Aleksander Zgrzywa, Kazimierz Choroś, and Andrzej Siemiński
Discusses a broad scope of subject matter, including multimedia systems in their widest sense, web systems, and network technologies. This monograph also includes texts that deal with traditional information systems drawing on the experience of multimedia and network systems.
Multimedia and Network Information Systems: Proceedings of the 11th International Conference MISSI 2018 (Advances in Intelligent Systems and Computing #833)
by Kazimierz Choroś, Andrzej Siemiński, Marek Kopel, and Elżbieta Kukla
These proceedings collect papers presented at the 11th International Conference on Multimedia & Network Information Systems (MISSI 2018), held from 12 to 14 September 2018 in Wrocław, Poland. The keynote lectures, given by four outstanding scientists, are also included here. The Conference attracted a great number of scientists from across Europe and beyond, and hosted the 6th International Workshop on Computational Intelligence for Multimedia Understanding as well as four special sessions. The majority of the papers describe various artificial intelligence (AI) methods applied to multimedia and natural language (NL) processing; they address hot topics such as virtual and augmented reality, identity recognition, video summarization, intelligent audio processing, accessing multilingual information and opinions, video games, and innovations in Web technologies. Accordingly, the proceedings provide a cutting-edge update on work being pursued in the rapidly evolving field of Multimedia and Internet Information Systems.
Multimedia and Ubiquitous Engineering
by James J. Park, Hwa Young Jeong, Borgy Waluyo, and Joseph Kee-Yin Ng
The new multimedia standards (for example, MPEG-21) facilitate the seamless integration of multiple modalities into interoperable multimedia frameworks, transforming the way people work and interact with multimedia data. These key technologies and multimedia solutions interact and collaborate with each other in increasingly effective ways, contributing to the multimedia revolution and having a significant impact across a wide spectrum of consumer, business, healthcare, education, and governmental domains. Multimedia and Ubiquitous Engineering provides an opportunity for academic and industry professionals to discuss recent progress in the area of multimedia and ubiquitous environments, including models and systems, new directions, and novel applications associated with the utilization and acceptance of ubiquitous computing devices and systems.
Multimedia and Virtual Reality: Designing Multisensory User Interfaces
by Alistair Sutcliffe
This book is primarily a summary of research done over 10 years in multimedia and virtual reality, which fits within a wider interest in exploiting psychological theory to improve the process of designing interactive systems. The subject matter lies firmly within the field of HCI, with some cross-referencing to software engineering. Extending Sutcliffe's views on the design process to more complex interfaces that have evolved in recent years, this book:
* introduces the background to multisensory user interfaces and surveys the design issues and previous HCI research in these areas;
* explains the basic psychology for the design of multisensory user interfaces, including the Interacting Cognitive Subsystems (ICS) cognitive model;
* describes elaborations of Norman's models of action for multimedia and VR, relates these models to the ICS cognitive model, and explains how the models can be applied to predict the design features necessary for successful interaction;
* provides a design process from requirements, user and domain analysis, to design of representation in media or virtual worlds and facilities for user interaction therein;
* covers usability evaluation for multisensory interfaces by extending existing well-known HCI approaches of heuristic evaluation and observational usability testing; and
* presents two special application areas for multisensory interfaces: educational applications and virtual prototyping for design refinement.
Images and figures that enhance and clarify the material discussed in chapters 1-7 can be downloaded free of charge from http://www.co.umist.ac.uk/centreULhci/MMVRbook.htm
Multimedia for Accessible Human Computer Interfaces
by Xueliang Liu and Troy McDaniel
Multimedia for Accessible Human Computer Interfaces is the first resource to provide in-depth coverage of topical areas of multimedia computing (images, video, audio, speech, haptics, VR/AR, etc.) for accessible and inclusive human computer interfaces. Topics are grouped into thematic areas spanning the human senses: Vision, Hearing, Touch, as well as Multimodal applications. Each chapter is written by different multimedia researchers to provide complementary and multidisciplinary perspectives. Unlike other related books, which focus on guidelines for designing accessible interfaces or are dated in their coverage of cutting-edge multimedia technologies, Multimedia for Accessible Human Computer Interfaces takes an application-oriented approach to present a tour of how the field of multimedia is advancing access to human computer interfaces for individuals with disabilities.
Under Theme 1, “Vision-based Technologies for Accessible Human Computer Interfaces”, multimedia technologies that enhance access to interfaces through vision are presented, including “A Framework for Gaze-contingent Interfaces”, “Sign Language Recognition”, “Fusion-based Image Enhancement and its Applications in Mobile Devices”, and “Open-domain Textual Question Answering Systems”.
Under Theme 2, “Auditory Technologies for Accessible Human Computer Interfaces”, multimedia technologies that enhance access to interfaces through hearing are presented, including “Speech Recognition for Individuals with Voice Disorders” and “Socially Assistive Robots for Storytelling and Other Activities to Support Aging in Place”.
Under Theme 3, “Haptic Technologies for Accessible Human Computer Interfaces”, multimedia technologies that enhance access to interfaces through haptics are presented, including “Accessible Smart Coaching Technologies Inspired by Elderly Requisites” and “Haptic Mediators for Remote Interpersonal Communication”.
Under Theme 4, “Multimodal Technologies for Accessible Human Computer Interfaces”, multimedia technologies that enhance access to interfaces through multiple modalities are presented, including “Human-Machine Interfaces for Socially Connected Devices: From Smart Households to Smart Cities” and “Enhancing Situational Awareness and Kinesthetic Assistance for Clinicians via Augmented-Reality and Haptic Shared-Control Technologies”.
Multimedia in the College Classroom: Improve Learning and Connect with Students in Online and Hybrid Courses
by Heidi Skurat Harris and Michael Greer
This practical guide to multimedia in online college instruction provides easy-to-follow instructions for designing multimedia assignments that maximize student learning while reducing cognitive load. This book presents the learning process as a complex, multidimensional experience that includes texts as well as auditory and visual elements. Each chapter includes research-based activities to develop instructors’ multimedia skills. The book leverages cutting-edge cognitive research to improve accessibility and design, while also providing practical asynchronous and synchronous activities that engage learners. Multimedia in the College Classroom is the ideal resource for any higher education instructor, administrator, or leader who wishes to learn about, reflect on, and implement research-based learning strategies through the targeted use of multimedia.
Multimedia over Cognitive Radio Networks: Algorithms, Protocols, and Experiments
by Sunil Kumar and Fei Hu
With nearly 7 billion mobile phone subscriptions worldwide, mobility and computing have become pervasive in our society and business. Moreover, new mobile multimedia communication services are challenging telecommunication operators. To support the significant increase in multimedia traffic, especially video, over wireless networks, new technological …
Multimedia, Communication and Computing Application: Proceedings of the 2014 International Conference on Multimedia, Communication and Computing Application (MCCA 2014), Xiamen, China, October 16-17, 2014
by Ally Leung
The 2014 International Conference on Multimedia, Communication and Computing Application (MCCA 2014), held in Xiamen, China, on October 16-17, 2014, provided a forum for experts and scholars of excellence from all over the world to present their latest work in the area of multimedia, communication and computing applications. In recent years, the multimedia techno…
Multimedia-enabled Sensors in IoT: Data Delivery and Traffic Modelling
by Fadi Al-Turjman
This book gives an overview of best-effort data and real-time multipath routing protocols in WMSNs. It provides results of recent research into design issues affecting the development of strategic multipath routing protocols that support multimedia data traffic in WMSNs from an IoT perspective, plus detailed analysis of appropriate traffic models.
Multimodal AI in Healthcare: A Paradigm Shift in Health Intelligence (Studies in Computational Intelligence #1060)
by Simone Bianco, Arash Shaban-Nejad, and Martin Michalowski
This book aims to highlight the latest achievements in the use of AI and multimodal artificial intelligence in biomedicine and healthcare. Multimodal AI is a relatively new concept in AI, in which different types of data (e.g. text, image, video, audio, and numerical data) are collected, integrated, and processed through a series of intelligent processing algorithms to improve performance. The edited volume contains selected papers presented at the 2022 Health Intelligence workshop and the associated Data Hackathon/Challenge, co-located with the Thirty-Sixth Association for the Advancement of Artificial Intelligence (AAAI) conference, and presents an overview of the issues, challenges, and potentials in the field, along with new research results. This book provides information for researchers, students, industry professionals, clinicians, and public health agencies interested in the applications of AI and multimodal AI in public health and medicine.
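The blurb's one-sentence definition of multimodal AI (separate data types collected, integrated, and processed together) can be illustrated with a toy late-fusion sketch. The code below is not taken from the book: the feature matrices, labels, and the choice of logistic-regression classifiers are placeholder assumptions used only to show the general idea of combining per-modality predictions.

```python
# Minimal late-fusion sketch (illustrative only, not from the book):
# two unimodal classifiers are trained separately and their predicted
# class probabilities are averaged to form a multimodal decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
text_feats = rng.normal(size=(n, 20))    # stand-in for text embeddings
image_feats = rng.normal(size=(n, 50))   # stand-in for image embeddings
labels = rng.integers(0, 2, size=n)      # toy binary labels

text_clf = LogisticRegression(max_iter=1000).fit(text_feats, labels)
image_clf = LogisticRegression(max_iter=1000).fit(image_feats, labels)

# Late fusion: average the per-modality probability estimates.
fused_proba = (text_clf.predict_proba(text_feats) +
               image_clf.predict_proba(image_feats)) / 2
fused_pred = fused_proba.argmax(axis=1)
print("fused training accuracy:", (fused_pred == labels).mean())
```

In practice the placeholder features would come from real modality-specific encoders, and fusion could happen at the feature level or inside a single model rather than by averaging probabilities.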
Multimodal Affective Computing: Technologies and Applications in Learning Environments
by Hugo Jair Escalante, Ramón Zatarain Cabada, and Héctor Manuel López
This book explores AI methodologies for the implementation of affective states in intelligent learning environments. Divided into four parts, Multimodal Affective Computing: Technologies and Applications in Learning Environments begins with an overview of Affective Computing and Intelligent Learning Environments, from their fundamentals and essential theoretical support up to their fusion and some successful practical applications. The basic concepts of Affective Computing, Machine Learning and Pattern Recognition in Affective Computing, and Affective Learning Environments are presented in a comprehensive and easy-to-read manner. In the second part, a review of the emerging field of Sentiment Analysis for Learning Environments is introduced, including a systematic descriptive tour through topics such as building resources for sentiment detection, methods for data representation, designing and testing the classification models, and model integration into a learning system. The methodologies corresponding to Multimodal Recognition of Learning-Oriented Emotions are presented in the third part of the book, where topics such as building resources for emotion detection, methods for data representation, multimodal recognition systems, and multimodal emotion recognition in learning environments are presented. The fourth and last part of the book is devoted to a wide application field for the combination of methodologies, such as Automatic Personality Recognition, dealing with issues such as building resources for personality recognition, methods for data representation, personality recognition models, and multimodal personality recognition for affective computing. This book can be very useful not only for beginners who are interested in affective computing and intelligent learning environments, but also for advanced practitioners and experts in the field. It compiles an end-to-end treatment of these subjects, especially with educational applications, making it easy for researchers and students to get on track with fundamentals, established methodologies, conventional evaluation protocols, and the latest progress on these subjects.
Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings
by Wolfgang Minker, Elisabeth André, Juliana Miehle, and Koichiro Yoshino
This book aims to explore and discuss theories and technologies for the development of socially competent and culture-aware embodied conversational agents for elderly care. To tackle the challenges in ageing societies, this book was written by experts who have a background in assistive technologies for elderly care, culture-aware computing, multimodal dialogue, social robotics and synthetic agents. Chapter 1 presents a vision of an intelligent agent to illustrate the current challenges for the design and development of adaptive systems. Chapter 2 examines how notions of trust and empathy may be applied to human–robot interaction and how they can be used to create the next generation of empathic agents, which address some of the pressing issues in multicultural ageing societies. Chapter 3 discusses multimodal machine learning as an approach to enable more effective and robust modelling technologies and to develop socially competent and culture-aware embodied conversational agents for elderly care. Chapter 4 explores the challenges associated with real-world field tests and deployments. Chapter 5 gives a short introduction to socio-cognitive language processing, which describes the idea of coping with everyday language, irony, sarcasm, humor, paralinguistic information such as the physical and mental state and traits of the dialogue partner, and social aspects. This book grew out of the Shonan Meeting seminar entitled “Multimodal Agents for Ageing and Multicultural Societies” held in 2018 in Japan. It will help researchers and practitioners understand this emerging field and identify promising approaches from a variety of disciplines such as human–computer interaction, artificial intelligence, modelling, and learning.
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
by Ronald Böck, Francesca Bonin, Nick Campbell, and Ronald Poppe
This book constitutes the thoroughly refereed post-workshop proceedings of the Second Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2014, held in conjunction with INTERSPEECH 2014 in Singapore on September 14, 2014. The 9 revised papers presented together with a keynote talk were carefully reviewed and selected from numerous submissions. They are organized in two sections: human-machine interaction, and dialogs and speech recognition.
Multimodal Analytics for Next-Generation Big Data Technologies and Applications
by Li-Minn Ang, Kah Phooi Seng, Alan Wee-Chung Liew, and Junbin Gao
This edited book serves as a source of reference for technologies and applications of multimodal data analytics in big data environments. After an introduction, the editors organize the book into four main parts: sentiment, affect and emotion analytics for big multimodal data; unsupervised learning strategies for big multimodal data; supervised learning strategies for big multimodal data; and multimodal big data processing and applications. The book will be of value to researchers, professionals, and students in engineering and computer science, particularly those engaged with image and speech processing, multimodal information processing, data science, and artificial intelligence.
Multimodal Biometric Identification System: Case Study of Real-Time Implementation
by Vinayak Bairagi and Sampada Dhole
This book presents a novel method of multimodal biometric fusion using a random selection of biometrics, covering a new method of feature extraction and a new framework for sensor-level and feature-level fusion. Most biometric systems presently in use are unimodal, which brings several limitations; multimodal systems can increase the matching accuracy of a recognition system. This monograph shows how the problems of unimodal systems can be dealt with efficiently, focuses on multimodal biometric identification and sensor-level and feature-level fusion, and discusses fusion in biometric systems to improve performance. The book:
• Presents a random selection of biometrics to ensure that the system is interacting with a live user.
• Offers a compilation of techniques used for unimodal as well as multimodal biometric identification systems, elaborated with the required justification and interpretation, with case studies, figures, tables, graphs, and so on.
• Shows that feature-level fusion using contourlet transform features with LDA for dimension reduction attains higher accuracy than block variance features.
• Includes contributions in feature extraction and pattern recognition that increase the accuracy of the system.
• Explains the contourlet transform as the best modality-specific feature extraction algorithm for fingerprint, face, and palmprint.
This book is for researchers, scholars, and students of Computer Science, Information Technology, Electronics and Electrical Engineering, Mechanical Engineering, and people working on biometric applications.
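As a rough, hypothetical illustration of the feature-level fusion pipeline this blurb describes (concatenating per-modality features and reducing them with LDA before matching), here is a minimal Python sketch. It is not the book's implementation: the contourlet transform is not computed, and the fingerprint, face, and palmprint features are random placeholders chosen only to keep the example self-contained and runnable.

```python
# Illustrative feature-level fusion sketch (not the book's method):
# per-modality feature vectors are concatenated, projected with LDA,
# and matched with a simple nearest-neighbour classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n_subjects, samples_per_subject = 10, 8
labels = np.repeat(np.arange(n_subjects), samples_per_subject)
n = labels.size

finger_feats = rng.normal(size=(n, 64))  # placeholder fingerprint features
face_feats = rng.normal(size=(n, 64))    # placeholder face features
palm_feats = rng.normal(size=(n, 64))    # placeholder palmprint features

# Feature-level fusion: concatenate the modality feature vectors.
fused = np.hstack([finger_feats, face_feats, palm_feats])

# LDA reduces the fused vector to at most (n_classes - 1) dimensions.
lda = LinearDiscriminantAnalysis(n_components=n_subjects - 1)
reduced = lda.fit_transform(fused, labels)

# Nearest-neighbour matcher over the reduced, fused features.
matcher = KNeighborsClassifier(n_neighbors=1).fit(reduced, labels)
print("training identification accuracy:", matcher.score(reduced, labels))
```

In a real system the placeholder matrices would be replaced by modality-specific features (for example, contourlet coefficients), and matching accuracy would be measured on held-out samples rather than the training set.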
Multimodal Biometric Systems: Security and Applications (Internet of Everything (IoE))
by Prof. Dr. Rashmi Gupta and Dr. Manju Khari
Many governments around the world are calling for the use of biometric systems to provide crucial societal functions, consequently making this an urgent area for action. The current performance of some biometric systems, in terms of their error rates, robustness, and system security, may prove inadequate for large-scale applications that must process millions of users at a high rate of throughput. This book focuses on fusion in biometric systems. It discusses the present level of performance, its limitations, and proposed methods to improve performance, and it describes the fundamental concepts, current research, and security-related issues. The book presents a computational perspective, identifies challenges, and covers new problem-solving strategies, offering solved problems and case studies to aid reader comprehension and deep understanding. It is written for researchers, practitioners, undergraduate and postgraduate students, and those working in various engineering fields such as Systems Engineering, Computer Science, Information Technology, Electronics, and Communications.
Multimodal Biometric and Machine Learning Technologies: Applications for Computer Vision
by Sandeep Kumar, Suman Lata Tripathi, Shilpa Rani, Deepika Ghai, and Arpit Jain
With an increasing demand for biometric systems in various industries, this book on multimodal biometric systems answers the call for increased resources to help researchers, developers, and practitioners. Multimodal biometric and machine learning technologies have revolutionized the field of security and authentication. These technologies utilize multiple sources of information, such as facial recognition, voice recognition, and fingerprint scanning, to verify an individual’s identity. The need for enhanced security and authentication has become increasingly important, and with the rise of digital technologies, cyber-attacks and identity theft have increased exponentially. Traditional authentication methods, such as passwords and PINs, have become less secure as hackers devise new ways to bypass them. In this context, multimodal biometric and machine learning technologies offer a more secure and reliable approach to authentication. This book provides relevant information on multimodal biometric and machine learning technologies and focuses on how humans and computers interact at ever-increasing levels of complexity and simplicity. The book provides content on the theory of multimodal biometric design, evaluation, and user diversity, and explains the underlying causes of the social and organizational problems that are typically devoted to descriptions of rehabilitation methods for specific processes. Furthermore, the book describes new algorithms for modeling that are accessible to scientists of all varieties.
Audience: Researchers in computer science and biometrics, developers who are designing and implementing biometric systems, and practitioners who are using biometric systems in their work, such as law enforcement personnel or healthcare professionals.
Multimodal Brain Image Analysis and Mathematical Foundations of Computational Anatomy: 4th International Workshop, MBIA 2019, and 7th International Workshop, MFCA 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Proceedings (Lecture Notes in Computer Science #11846)
by Tom Fletcher, Carl-Fredrik Westin, Xavier Pennec, Sarang Joshi, Mads Nielsen, Li Shen, Stanley Durrleman, Stefan Sommer, Dajiang Zhu, Jingwen Yan, Heng Huang, and Paul M. Thompson
This book constitutes the refereed joint proceedings of the 4th International Workshop on Multimodal Brain Image Analysis, MBIA 2019, and the 7th International Workshop on Mathematical Foundations of Computational Anatomy, MFCA 2019, held in conjunction with the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019, in Shenzhen, China, in October 2019. The 16 full papers presented at MBIA 2019 and the 7 full papers presented at MFCA 2019 were carefully reviewed and selected. The MBIA papers aim to move forward the state of the art in multimodal brain image analysis, in terms of analysis methodologies, algorithms, software systems, validation approaches, benchmark datasets, neuroscience, and clinical applications. The MFCA papers are devoted to statistical and geometrical methods for modeling the variability of biological shapes. The goal is to foster interactions between the mathematical community around shapes and the MICCAI community around computational anatomy applications.
Multimodal Computational Attention for Scene Understanding and Robotics
by Boris Schauerte
This book presents state-of-the-art computational attention models that have been successfully tested in diverse application areas and can build the foundation for artificial systems to efficiently explore, analyze, and understand natural scenes. It gives a comprehensive overview of the most recent computational attention models for processing visual and acoustic input. It covers the biological background of visual and auditory attention, as well as bottom-up and top-down attentional mechanisms, and discusses various applications. In the first part, new approaches for bottom-up visual and acoustic saliency models are presented and applied to the task of audio-visual scene exploration by a robot. In the second part, the influence of top-down cues for attention modeling is investigated.
Multimodal Data Fusion for Bioinformatics Artificial Intelligence
by Abhishek Kumar, Vishal Dutt, Narayan Vyas, Umesh Kumar Lilhore, and Sarita Simaiya
Multimodal Data Fusion for Bioinformatics Artificial Intelligence is a must-have for anyone interested in the intersection of AI and bioinformatics, as it delves into innovative data fusion methods and their applications in ‘omics’ research while addressing the ethical implications and future developments shaping the field today. It is an indispensable resource for those exploring how cutting-edge data fusion methods interact with the rapidly developing field of bioinformatics. Beginning with the basics of integrating different data types, this book delves into the use of AI for processing and understanding complex “omics” data, ranging from genomics to metabolomics. The revolutionary potential of AI techniques in bioinformatics is thoroughly explored, including the use of neural networks, graph-based algorithms, single-cell RNA sequencing, and other cutting-edge topics. The second half of the book focuses on the ethical and practical implications of using AI in bioinformatics. The tangible benefits of these technologies in healthcare and research are highlighted in chapters devoted to precision medicine, drug development, and biomedical literature. The book addresses a wide range of ethical concerns, from data privacy to model interpretability, providing readers with a well-rounded education on the subject. Finally, the book explores forward-looking developments such as quantum computing and augmented reality in bioinformatics AI. This comprehensive resource offers a bird’s-eye view of the intersection of AI, data fusion, and bioinformatics, catering to readers of all experience levels.
Multimodal Generative AI
by Krishna Kant Singh and Akansha Singh
This book stands at the forefront of AI research, offering a comprehensive examination of multimodal generative technologies. Readers are taken on a journey through the evolution of generative models, from early neural networks to contemporary marvels like GANs and VAEs, and their transformative application in synthesizing realistic images and videos. In parallel, the text delves into the intricacies of language models, with a particular focus on revolutionary transformer-based designs. A core highlight of this work is its detailed discourse on integrating visual and textual models, laying out state-of-the-art techniques for creating cohesive, multimodal AI systems. “Multimodal Generative AI” is more than a mere academic text; it’s a visionary piece that speculates on the future of AI, weaving through case studies in autonomous systems, content creation, and human-computer interaction. The book also fosters a dialogue on responsible innovation in this dynamic field. Tailored for postgraduates, researchers, and professionals, this book is a must-read for anyone vested in the future of AI. It empowers its readers with the knowledge to harness the potential of multimodal systems in solving complex problems, merging visual understanding with linguistic prowess, and can also be used as a reference for postgraduates and researchers in related areas.
Multimodal Intelligent Sensing in Modern Applications
by Naeem Ramzan, Masood Ur Rehman, Ahmed Zoha, and Muhammad Ali Jamshed
Discover the design, implementation, and analytical techniques for multi-modal intelligent sensing in this cutting-edge text. The Internet of Things (IoT) is becoming ever more comprehensively integrated into everyday life. The intelligent systems that power smart technologies rely on increasingly sophisticated sensors in order to monitor inputs and respond dynamically. Multi-modal sensing offers enormous benefits for these technologies, but also comes with greater challenges; it has never been more essential to offer energy-efficient, reliable, interference-free sensing systems for use with the modern Internet of Things. Multimodal Intelligent Sensing in Modern Applications provides an introduction to systems which incorporate multiple sensors to produce situational awareness and process inputs. It is divided into three parts (physical design aspects, data acquisition and analysis techniques, and security and energy challenges) which together cover all the major topics in multi-modal sensing. The result is an indispensable volume for engineers and other professionals looking to design the smart devices of the future.
Readers will also find:
* Contributions from multidisciplinary contributors in wireless communications, signal processing, and sensor design
* Coverage of both software and hardware solutions to sensing challenges
* Detailed treatment of advanced topics such as efficient deployment, data fusion, machine learning, and more
Multimodal Intelligent Sensing in Modern Applications is ideal for experienced engineers and designers who need to apply their skills to Internet of Things and 5G/6G networks. It can also act as an introductory text for graduate researchers seeking to understand the background, design, and implementation of various sensor types and data analytics tools.
Multimodal Interaction Technologies for Training Affective Social Skills
by Satoshi Nakamura
This book focuses on how interactive, multimodal technology such as virtual agents can be used in training and treatment (social skills training, cognitive behavioral therapy). People with socio-affective deficits have difficulty controlling their social behavior and also struggle to interpret others’ social behavior. Behavioral training, such as social skills training, is used in clinical settings: patients are trained by a coach to experience social interaction and reduce social stress. In addition to behavioral training, cognitive behavioral therapy is also useful for better understanding and training social-affective interaction. All these methods are effective but expensive and difficult to access. This book describes how multimodal interactive technology can be used in healthcare for measuring and training social-affective interactions. Sensing technology analyzes users’ behaviors and eye gaze, and various machine learning methods can be used for prediction tasks. The book focuses on analyzing human behaviors and implementing training methods (e.g., by virtual agents, virtual reality, dialogue modeling, personalized feedback, and evaluations). Target populations include depression, schizophrenia, autism spectrum disorder, and a much larger group of social pathological phenomena.
Multimodal Interaction with W3C Standards
by Deborah A. Dahl
This book presents new standards for multimodal interaction published by the W3C and other standards bodies in straightforward and accessible language, while also illustrating the standards in operation through case studies and chapters on innovative implementations. The book illustrates how, as smart technology becomes ubiquitous and appears in more and more different shapes and sizes, vendor-specific approaches to multimodal interaction become impractical, motivating the need for standards. This book covers standards for voice, emotion, natural language understanding, dialog, and multimodal architectures. The book describes the standards in a practical manner, making them accessible to developers, students, and researchers.
· A comprehensive resource that explains the W3C standards for multimodal interaction in a clear and straightforward way;
· Includes case studies of the use of the standards on a wide variety of devices, including mobile devices, tablets, wearables and robots, in applications such as assisted living, language learning, and health care;
· Features illustrative examples of implementations that use the standards, to help spark innovative ideas for future applications.
Multimodal Interactive Pattern Recognition and Applications
by Enrique Vidal, Francisco Casacuberta, and Alejandro Héctor Toselli
This book presents a different approach to pattern recognition (PR) systems, in which users of a system are involved during the recognition process. This can help to avoid later errors and reduce the costs associated with post-processing. The book also examines a range of advanced multimodal interactions between the machine and the users, including handwriting, speech and gestures. Features: presents an introduction to the fundamental concepts and general PR approaches for multimodal interaction modeling and search (or inference); provides numerous examples and a helpful Glossary; discusses approaches for computer-assisted transcription of handwritten and spoken documents; examines systems for computer-assisted language translation, interactive text generation and parsing, relevance-based image retrieval, and interactive document layout analysis; reviews several full working prototypes of multimodal interactive PR applications, including live demonstrations that can be publicly accessed on the Internet.