GenAI OWASP Top 10 for LLM Applications: A Guide to Secure AI Secure your LLM applications! Learn about the OWASP Top 10 vulnerabilities, including prompt injection, data poisoning, and misinformation. Discover best practices to mitigate risks and build robust and ethical AI systems.
GenAI Mastering Inference Parameters in Transformer Language Models: A Step-by-Step Guide to Temperature, Top-k, and Nucleus Sampling Mastering Temperature, Top-p & Top-k: Control Language Model Outputs. A comprehensive guide to controlling AI text generation with key inference parameters.
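For readers who want to see the mechanics, here is a minimal, illustrative sampler (plain NumPy, not the post's code) showing how temperature, top-k, and nucleus (top-p) filtering reshape a next-token distribution:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    """Toy sampler: temperature scaling, then optional top-k / top-p (nucleus) filtering."""
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)  # temperature scaling
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    if top_k is not None:  # keep only the k most probable tokens
        cutoff = np.sort(probs)[-min(top_k, len(probs))]
        probs = np.where(probs >= cutoff, probs, 0.0)

    if top_p is not None:  # keep the smallest set whose cumulative mass covers top_p
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order] / probs.sum())
        keep = order[: np.searchsorted(cumulative, top_p) + 1]
        mask = np.zeros_like(probs)
        mask[keep] = probs[keep]
        probs = mask

    probs /= probs.sum()  # renormalize over the surviving tokens
    return int(np.random.choice(len(probs), p=probs))
```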
GenAI Unlocking the Power of RAG: Understanding Context Relevance, Groundedness, and Answer Relevance Unleashing the Power of Retrieval Augmented Generation (RAG) for Natural Language Processing: Leveraging External Knowledge to Enhance Coherence, Relevance, and Grounding in Generated Outputs
Artificial Intelligence and Machine Learning Synthetic Talking Heads: Understanding the Artificial Humans of the Future Synthetic talking heads are an emerging form of AI-powered synthetic media that creates ultra-realistic footage of fake people.
Amazon Bedrock How to inference with streaming response from Amazon Bedrock LLMs Learn how to invoke Amazon Bedrock LLMs with streaming responses using the invoke_model_with_response_stream API.
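As a rough sketch of what that looks like in practice (boto3, with the request body assuming the Claude v2 text-completion format on Bedrock; adjust `modelId` and the payload for your model):

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Explain streaming inference in one paragraph.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model_with_response_stream(
    modelId="anthropic.claude-v2",
    body=body,
)

# The response body is an event stream; each event carries a JSON chunk of generated text.
for event in response["body"]:
    chunk = event.get("chunk")
    if chunk:
        print(json.loads(chunk["bytes"])["completion"], end="", flush=True)
```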
MLOps The Secret Weapon of Cutting Edge AI Teams Learn how ML experiment tracking platforms boost productivity and help build better models, making them essential for cutting-edge AI teams.
GenAI Demystifying three types of Transformer Architectures powering your Foundation Models Let's evaluate the key differences between the three types of transformer architectures that power Foundation Models.
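A minimal illustration of the three families, sketched with the Hugging Face `transformers` auto-classes (the model names are just representative examples):

```python
from transformers import AutoModel, AutoModelForCausalLM, AutoModelForSeq2SeqLM

encoder_only = AutoModel.from_pretrained("bert-base-uncased")         # encoder-only (e.g. BERT): understanding tasks
decoder_only = AutoModelForCausalLM.from_pretrained("gpt2")           # decoder-only (e.g. GPT): autoregressive generation
encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("t5-small")   # encoder-decoder (e.g. T5): sequence-to-sequence tasks
```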
SageMaker Deploy & Inference Mistral 7B Instruct on SageMaker JumpStart Learn how to deploy and run inference on Mistral 7B Instruct via SageMaker JumpStart on an ml.g5.2xlarge instance.
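A hedged sketch of the deploy-and-invoke flow with the SageMaker Python SDK; the `model_id` string and payload format are assumptions to verify against the JumpStart catalog:

```python
from sagemaker.jumpstart.model import JumpStartModel

# Assumed JumpStart identifier for Mistral 7B Instruct -- check the catalog for the current ID.
model = JumpStartModel(model_id="huggingface-llm-mistral-7b-instruct")
predictor = model.deploy(instance_type="ml.g5.2xlarge")

# Mistral Instruct expects the [INST] ... [/INST] prompt template.
payload = {
    "inputs": "<s>[INST] What is Amazon SageMaker JumpStart? [/INST]",
    "parameters": {"max_new_tokens": 256},
}
print(predictor.predict(payload))

predictor.delete_endpoint()  # clean up when done
```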
Amazon Bedrock How to generate embeddings using Amazon Bedrock and LangChain Learn how to generate embeddings using LangChain and Amazon Bedrock's Titan Embeddings G1 - Text foundation model (base model ID `amazon.titan-embed-text-v1`).
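A minimal sketch with LangChain's `BedrockEmbeddings` wrapper (import paths vary by LangChain version; newer releases expose it from `langchain_community.embeddings`):

```python
from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v1",  # Titan Embeddings G1 - Text
    region_name="us-east-1",
)

vector = embeddings.embed_query("What is Amazon Bedrock?")
print(len(vector))  # Titan G1 - Text returns 1536-dimensional vectors
```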
LLM When should I go for Fine-tuning vs In-Context Learning (ICL) Learn when to choose fine-tuning versus in-context learning, with definitions of each and the key factors to consider when deciding between the two.
LLM Understanding chat and instruct modes in LLMs Understanding chat and instruct modes in Large Language Models
GenAI Getting Started with Amazon Bedrock Hands-on tutorial to get started with Amazon Bedrock, a new Generative AI Service from AWS.
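For a flavour of the hands-on part, here is a minimal "hello, Bedrock" call with boto3; the body shape below assumes Amazon's Titan Text model and differs for other model providers:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({"inputText": "Write a one-line greeting."}),
    contentType="application/json",
    accept="application/json",
)

# Titan Text returns a list of results with the generated text in outputText.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```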
Prompt Engineering How to overcome challenges in Prompt engineering Explore the challenges of prompt engineering with LLMs and learn how to overcome them.
SageMaker How to Inference SageMaker JumpStart Llama 2 Realtime Endpoint Programmatically In this tutorial we will invoke a Llama 2 endpoint deployed via the SageMaker JumpStart UI from a SageMaker notebook.
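A rough sketch of the programmatic call with boto3's SageMaker runtime client; the endpoint name is a placeholder and the payload shape and EULA custom attribute follow the JumpStart Llama 2 chat convention, so verify both against your deployment:

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": [[{"role": "user", "content": "What is Amazon SageMaker JumpStart?"}]],
    "parameters": {"max_new_tokens": 256, "temperature": 0.6, "top_p": 0.9},
}

response = runtime.invoke_endpoint(
    EndpointName="jumpstart-dft-meta-textgeneration-llama-2-7b-f",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
    CustomAttributes="accept_eula=true",  # Llama 2 requires explicit EULA acceptance
)

print(json.loads(response["Body"].read())[0]["generation"]["content"])
```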
SageMaker Fine-tune Llama 2 model on SageMaker JumpStart In this tutorial we will learn how to fine-tune the Llama-2-7b model on Amazon SageMaker JumpStart.
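A hedged outline of what JumpStart fine-tuning looks like with the SageMaker SDK's `JumpStartEstimator`; the model ID, S3 URI, and hyperparameters are placeholders, not the tutorial's exact values:

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",
    environment={"accept_eula": "true"},  # Llama 2 requires explicit EULA acceptance
)
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")

# Point the training channel at your prepared dataset (placeholder S3 URI).
estimator.fit({"training": "s3://your-bucket/llama2-train-data/"})

# Deploy the fine-tuned model once training completes.
predictor = estimator.deploy()
```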
GenAI Gen AI Interview Questions Get ready to crack the toughest Generative AI interviews with this set of Gen AI interview questions.
GenAI Crafting Effective Prompts for Large Language Models Large language models have impressive language abilities, but prompt engineering is key for quality responses. This post shares tips to craft effective prompts that tap the full potential of LLMs for useful, relevant answers. Learn techniques to generate insightful responses.