Self-paced course

How Large Language Models (LLMs) Work

Rating 4.6

16 reviews

Course Description

Large language models (LLMs) like ChatGPT, Gemini, and Copilot have transformed how we interact with computers, and understanding how they work is key for anyone curious about modern artificial intelligence (AI).

In this course, we’ll introduce the core concepts that make LLMs so powerful, starting with deep learning and pretrained models. You’ll learn why Transformers have become the dominant architecture in natural language processing (NLP), and how they enable natural, human-like language generation.

Then we’ll dive into the three main components of a Transformer: embeddings, attention, and feedforward neural networks. You’ll see how encoder-only, decoder-only, and encoder-decoder models differ, and explore popular pretrained models like BERT, GPT, and T5.
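To make the attention idea concrete before you start: each token's embedding is scored against every other token's, the scores are turned into weights with a softmax, and the output is a weighted blend of the value vectors. Here is a minimal, illustrative sketch in plain Python (the course itself is no-code; the vectors and dimensions below are made-up toy numbers, not from the course):

```python
import math

def softmax(scores):
    """Convert raw scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Each key is scored against the query, softmax turns the scores
    into weights, and the output is the weighted sum of the values.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors, one output dimension at a time
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy 2-dimensional "embeddings" for three tokens
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = attention([1.0, 0.0], keys, values)
```

Real Transformers run this in parallel across many attention heads and learned projection matrices, but the core mechanism is just this score-weight-blend loop.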

By the end, you’ll have a solid conceptual grasp of how Transformers and large language models actually work, giving you the vocabulary and confidence to keep up with AI’s fast-evolving landscape.

If you want a visual, no-code introduction to the technology powering today’s most advanced language models, this is the course for you.

NOTE: This course is also included in Natural Language Processing in Python, a more comprehensive overview of all the essential concepts for Natural Language Processing (NLP) in Python.

Who should take this course?
  • Anyone curious about how LLMs like ChatGPT and Copilot work behind the scenes

  • Beginners seeking an intuitive, visual explanation of Transformers and attention mechanisms

  • Data professionals or AI enthusiasts wanting a solid foundation before exploring coding or model development

What are the course requirements?
  • No coding experience required

  • Some prior machine learning knowledge is helpful, but not necessary — we explain everything step-by-step with clear visuals

Course Content

2 video hours

2 assignments & solutions

Skills you'll learn in this course

Learn core LLM concepts, like deep learning, pretrained models, and Transformers.

Understand how Transformers work, including embeddings, attention, and feedforward layers.

Compare model types, such as BERT, GPT, and T5 architectures.

Build AI literacy, with the vocabulary to follow today’s LLM landscape.

Meet your instructors

Alice Zhao

Lead Data Science Instructor

Alice Zhao is a seasoned data scientist and author of SQL Pocket Guide, 4th Edition (O'Reilly). She has taught numerous courses in Python, SQL, and R as a data science instructor at Maven Analytics and Metis, and as a co-founder of Best Fit Analytics.

Included learning paths

Course credential

You’ll earn the course credential by completing this course and passing the assessment requirements.

How Large Language Models (LLMs) Work

How Large Language Models (LLMs) Work

CPE Accreditation

CPE Credits: 0

Field of Study: Information Technology

Delivery Method: QAS Self Study

Maven Analytics LLC is registered with the National Association of State Boards of Accountancy (NASBA) as a sponsor of continuing professional education on the National Registry of CPE Sponsors. State boards of accountancy have the final authority on the acceptance of individual courses for CPE credit. Complaints regarding registered sponsors may be submitted to the National Registry of CPE Sponsors through its website: www.nasbaregistry.org.

For more information regarding administrative policies such as complaints or refunds, please contact us at admin@mavenanalytics.io or (857) 256-1765.

*Last Updated: May 25, 2023

More courses you may like

READY TO GET STARTED

Sign Up Today and Start Learning For Free
