Transformers for Natural Language Processing and Computer Vision: Explore Generative AI and Large Language Models with Hugging Face, ChatGPT, GPT-4V, and DALL-E 3
- Synopsis
The definitive guide to LLMs, from architectures, pretraining, and fine-tuning to Retrieval Augmented Generation (RAG), multimodal AI, risk mitigation, and practical implementations with ChatGPT, Hugging Face, and Vertex AI.

Key Features:
- Compare and contrast 20+ models (including GPT, BERT, and Llama) and multiple platforms and libraries to find the right solution for your project
- Apply RAG with LLMs using customized texts and embeddings
- Mitigate LLM risks, such as hallucinations, using moderation models and knowledge bases
- Purchase of the print or Kindle book includes a free eBook in PDF format

Book Description:
Transformers for Natural Language Processing and Computer Vision, Third Edition, explores Large Language Model (LLM) architectures, practical applications, and popular platforms (Hugging Face, OpenAI, and Google Vertex AI) used for Natural Language Processing (NLP) and Computer Vision (CV). The book guides you through a range of transformer architectures, from foundation models to generative AI. You'll pretrain and fine-tune LLMs and work through different use cases, from summarization to question-answering systems that leverage embedding-based search. You'll also implement Retrieval Augmented Generation (RAG) to enhance accuracy and gain greater control over your LLM outputs. Additionally, you'll understand common LLM risks, such as hallucinations, memorization, and privacy issues, and implement mitigation strategies using moderation models alongside rule-based systems and knowledge integration. Dive into generative vision transformers and multimodal architectures, and build practical applications, such as image and video classification. Go further by combining different models and platforms to build AI solutions and explore AI agent capabilities. This book provides you with an understanding of transformer architectures, including strategies for pretraining, fine-tuning, and LLM best practices.

What You Will Learn:
- Break down and understand the architectures of the Transformer, BERT, GPT, T5, PaLM, ViT, CLIP, and DALL-E
- Fine-tune BERT, GPT, and PaLM models
- Learn about different tokenizers and the best practices for preprocessing language data
- Pretrain a RoBERTa model from scratch
- Implement Retrieval Augmented Generation and rule bases to mitigate hallucinations
- Visualize transformer model activity for deeper insights using BertViz, LIME, and SHAP
- Go in-depth into vision transformers with CLIP, DALL-E, and GPT

Who This Book Is For:
This book is ideal for NLP and CV engineers, data scientists, machine learning practitioners, software developers, and technical leaders looking to advance their expertise in LLMs and generative AI or explore the latest industry trends. Familiarity with Python and basic machine learning concepts will help you fully understand the use cases and code examples. However, hands-on examples involving LLM user interfaces, prompt engineering, and no-code model building ensure this book remains accessible to anyone curious about the AI revolution.
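Since the synopsis centers on embedding-based search as the retrieval step of RAG, here is a minimal, self-contained sketch of that pattern. It substitutes a toy bag-of-words embedding for a learned model so it runs with no downloads; the document texts, the embed helper, and the prompt format are illustrative assumptions, not code from the book.

```python
# A minimal sketch of embedding-based retrieval, the core of RAG.
# Real systems use learned embeddings (e.g., from Hugging Face models);
# a toy bag-of-words vector stands in here. All names are illustrative.
import numpy as np

def embed(text, vocab):
    # Toy embedding: count occurrences of each vocabulary word.
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

documents = [
    "Transformers use self-attention to process sequences.",
    "RAG retrieves passages and feeds them to the LLM as context.",
    "Vision transformers split images into patches.",
]
vocab = sorted({w for d in documents for w in d.lower().split()})
doc_vectors = np.stack([embed(d, vocab) for d in documents])

query = "How does RAG give the LLM extra context?"
q = embed(query, vocab)

# Cosine similarity between the query and each stored document.
scores = doc_vectors @ q / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9
)
best = documents[int(np.argmax(scores))]

# The retrieved passage is prepended to the prompt sent to the LLM,
# grounding the answer in retrieved text rather than model memory alone.
prompt = f"Context: {best}\n\nQuestion: {query}"
print(prompt)
```

In a production pipeline the same flow holds, but the embeddings come from a trained encoder and the retrieved passage is passed to an LLM for generation rather than printed.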
- Copyright: 2024
Book Details
- Book Quality: Publisher Quality
- Book Size: 730 Pages
- ISBN-13: 9781805123743
- Publisher: Packt Publishing
- Date of Addition: 06/17/25
- Copyrighted By: Packt Publishing
- Adult content: No
- Language: English
- Has Image Descriptions: No
- Categories: Nonfiction, Computers and Internet, Technology
- Submitted By: Bookshare Staff
- Usage Restrictions: This is a copyrighted book.