Are you curious about how chatbots hold conversations or how ChatGPT generates human-like responses? This course in Natural Language Processing (NLP) is your gateway into the fascinating world where language meets AI. Designed for students and professionals alike, the course blends essential theory with hands-on experience to equip you with the skills needed to build intelligent language systems.

Natural Language Processing

Recommended experience
Intermediate level
Linear Algebra and Optimisation, Probability and Statistics, Introduction to Programming and Introduction to Data Analytics
What you'll learn
Understand and recall core concepts and techniques in Natural Language Processing (NLP).
Analyse and evaluate NLP methods for varied tasks, considering performance, context, and suitability.
Design and develop real-world NLP applications by integrating multiple techniques.
Details to know

Add to your LinkedIn profile
January 2026
140 assignments
There are 12 modules in this course
This module introduces the course and its syllabus, setting the foundation for the learning journey. The introductory video previews the skills and knowledge learners can expect to gain, while the syllabus reading outlines the essential course components: course values, assessment criteria, the grading system, the schedule, details of live sessions, and a recommended reading list. The module also includes a discussion prompt that invites learners to introduce themselves and connect with the wider course community.
What's included
2 videos • 1 reading • 1 discussion prompt
2 videos• Total 5 minutes
- Course Introduction• 3 minutes
- Meet Your Instructor: Prof. Dr. Chetana Gavankar• 2 minutes
1 reading• Total 10 minutes
- Course Overview• 10 minutes
1 discussion prompt• Total 10 minutes
- Meet Your Peers • 10 minutes
This module introduces the fundamental concepts of Natural Language Processing (NLP). It begins with the definition of NLP and explores a variety of real-world applications. You will gain an understanding of Natural Language Understanding (NLU) and Natural Language Generation (NLG). The module also covers key evaluation metrics used to assess NLP systems. Additionally, a hands-on lab session will guide you through the implementation of basic NLP preprocessing techniques.
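The module's intrinsic-versus-extrinsic evaluation theme is easy to make concrete. Below is a minimal sketch of intrinsic evaluation, scoring a hypothetical spam classifier against gold labels; scikit-learn and the toy labels are assumptions for illustration, not materials from the course.

```python
# Intrinsic evaluation sketch: compare system output against gold labels.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["spam", "ham", "spam", "ham", "spam"]  # hypothetical gold labels
y_pred = ["spam", "ham", "ham", "ham", "spam"]   # hypothetical system output

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary", pos_label="spam")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} F1={f1:.2f}")
```

Extrinsic evaluation, by contrast, would measure how the classifier affects a downstream task (say, a user's inbox experience), which is why the module treats the two separately.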
What's included
13 videos • 4 readings • 12 assignments • 1 discussion prompt
13 videos• Total 76 minutes
- NLP Definition• 3 minutes
- NLP Applications• 5 minutes
- Why Is NLP Hard?• 10 minutes
- Natural Language Understanding • 4 minutes
- Levels of Language Understanding• 5 minutes
- Natural Language Generation• 4 minutes
- Organisation of NLP System• 6 minutes
- Intrinsic vs. Extrinsic Evaluation• 4 minutes
- Challenges in Evaluation• 4 minutes
- NLP Tools Overview• 7 minutes
- Demo of NLP Tools• 6 minutes
- Basic NLP Application Development Using NLP Tools• 13 minutes
- Module Wrap-Up• 6 minutes
4 readings• Total 60 minutes
- Recommended Reading: What is NLP?• 15 minutes
- Recommended Reading: NLP Fundamentals• 15 minutes
- Recommended Reading: Evaluation of NLP Systems• 15 minutes
- Recommended Reading: NLP Tools Introduction• 15 minutes
12 assignments• Total 45 minutes
- NLP Definition• 6 minutes
- NLP Applications• 3 minutes
- Why NLP is a Hard Problem• 3 minutes
- Natural Language Understanding • 3 minutes
- Levels of Language Understanding• 3 minutes
- Natural Language Generation• 3 minutes
- Organisation of NLP System• 3 minutes
- Intrinsic vs. Extrinsic Evaluation• 6 minutes
- Challenges in Evaluation• 3 minutes
- NLP Tools Overview• 6 minutes
- Demo of NLP Tools• 3 minutes
- Basic NLP Application Development Using NLP Tools• 3 minutes
1 discussion prompt• Total 30 minutes
- Real-World Challenges and Tools in Natural Language Processing• 30 minutes
This module introduces essential NLP preprocessing techniques. It begins with regular expressions for text pattern matching, followed by an overview of words and corpora as foundational data sources. Sentence segmentation and tokenization are then covered through practical demonstrations. Finally, the module explores normalization, lemmatization, and stemming as methods to standardise text, with a demo highlighting their differences and effects.
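As a taste of what the demos cover, here is a minimal preprocessing pipeline sketched with NLTK (an assumed toolkit; the module's own demos may differ), run on an invented sample text:

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Tokenizer models and the WordNet lexicon; resource names vary slightly
# across NLTK versions, so both punkt identifiers are requested.
for pkg in ("punkt", "punkt_tab", "wordnet"):
    nltk.download(pkg, quiet=True)

text = "The striped bats were hanging on their feet. They flew away."

sentences = nltk.sent_tokenize(text)        # sentence segmentation
tokens = nltk.word_tokenize(sentences[0])   # tokenization
normalized = [t.lower() for t in tokens]    # normalization: case folding

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
for tok in normalized:
    # Stemming chops suffixes heuristically ("hanging" -> "hang");
    # lemmatization maps each word to a dictionary form ("bats" -> "bat").
    print(f"{tok:8} stem={stemmer.stem(tok):8} lemma={lemmatizer.lemmatize(tok)}")
```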
What's included
14 videos • 5 readings • 14 assignments • 1 discussion prompt
14 videos• Total 79 minutes
- Regular Expressions• 8 minutes
- Words and Corpora• 5 minutes
- Sentence Segmentation• 3 minutes
- Code Demo Segmentation• 5 minutes
- Tokenization• 5 minutes
- Tokenization Methods• 7 minutes
- Code Demo Tokenization• 14 minutes
- Normalization • 4 minutes
- Code Demo Normalization • 4 minutes
- Stemming• 6 minutes
- Code Demo Stemming• 5 minutes
- Lemmatization • 3 minutes
- Code Demo Lemmatization• 6 minutes
- Module Wrap-Up• 4 minutes
5 readings• Total 130 minutes
- Recommended Reading: Basic Text Preprocessing• 35 minutes
- Recommended Reading: Segmentation and Tokenization • 30 minutes
- Recommended Reading: Normalization• 20 minutes
- Recommended Reading: Stemming and Lemmatization• 30 minutes
- Instructional Document: Staff-Graded Assignment-1• 15 minutes
14 assignments• Total 99 minutes
- Regular Expressions• 3 minutes
- Words and Corpora• 3 minutes
- Sentence Segmentation• 3 minutes
- Code Demo Segmentation• 3 minutes
- Tokenization• 3 minutes
- Tokenization Methods• 3 minutes
- Code Demo Tokenization• 3 minutes
- Normalization • 3 minutes
- Code Demo Normalization• 3 minutes
- Stemming• 3 minutes
- Code Demo Stemming• 3 minutes
- Lemmatization• 3 minutes
- Code Demo Lemmatization• 3 minutes
- Graded Quiz: Modules 1 and 2• 60 minutes
1 discussion prompt• Total 30 minutes
- Building a Preprocessing Pipeline: Challenges and Solutions• 30 minutes
This module explores lexical and vector semantics, focusing on computational representations of word meaning. It covers word vectors, Bag of Words, and co-occurrence matrices to capture contextual relationships. Techniques such as TF-IDF are introduced to measure word importance, along with methods for computing word similarity. Practical examples and mathematical exercises on TF-IDF help reinforce these core NLP concepts.
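To make these ideas concrete, here is a small sketch using scikit-learn (an assumption; the course may use other tooling) that builds TF-IDF vectors for three invented documents and compares them with cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "transformers changed natural language processing",
]

vectorizer = TfidfVectorizer()         # TF-IDF weighting of a bag of words
X = vectorizer.fit_transform(docs)     # sparse document-term matrix

# Cosine similarity between document vectors: docs 0 and 1 share words,
# so their similarity is much higher than either's with doc 2.
sim = cosine_similarity(X)
print(sim.round(2))
```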
What's included
13 videos • 3 readings • 10 assignments • 1 discussion prompt
13 videos• Total 72 minutes
- Lexical Semantics • 3 minutes
- Why Vectors?• 7 minutes
- Words and Vectors• 8 minutes
- Bag of Words• 4 minutes
- Computing Word Similarity• 3 minutes
- Cosine Similarity• 4 minutes
- Cosine Similarity Example• 7 minutes
- Term Frequency• 4 minutes
- Inverse Document Frequency• 11 minutes
- TF-IDF• 7 minutes
- Demo of Words as Vectors• 4 minutes
- Demo of TF-IDF• 8 minutes
- Module Wrap-Up• 4 minutes
3 readings• Total 45 minutes
- Recommended Reading: Foundations of Lexical and Vector Semantics • 15 minutes
- Recommended Reading: Representing Text Using Vectors • 15 minutes
- Recommended Reading: Term and Inverse Document Frequency • 15 minutes
10 assignments• Total 30 minutes
- Lexical Semantics • 3 minutes
- Why Vectors? • 3 minutes
- Words and Vectors • 3 minutes
- Bag of Words• 3 minutes
- Computing Word Similarity • 3 minutes
- Cosine Similarity • 3 minutes
- Cosine Similarity Example • 3 minutes
- Term Frequency • 3 minutes
- Inverse Document Frequency • 3 minutes
- TF-IDF • 3 minutes
1 discussion prompt• Total 20 minutes
- Applying Vector Semantics in a Real-World Scenario• 20 minutes
This module introduces Word Embeddings, focusing on the transition from sparse to dense vector representations of words. It covers Word2Vec models, including Skip-gram and CBOW, explained with simple, intuitive examples. The module also explores GloVe embeddings, which capture global word co-occurrence statistics for improved semantic understanding. Learners will visualise word embeddings to gain insights into how words relate in vector space. Finally, the module highlights real-world applications of word embeddings in NLP tasks like sentiment analysis, machine translation, and question answering.
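Here is a toy sketch of the Word2Vec API using gensim (an assumed library); the three-sentence corpus is invented and far too small for meaningful embeddings, but it shows the skip-gram/CBOW switch and similarity queries:

```python
from gensim.models import Word2Vec

corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chased", "the", "cat"],
]

# sg=1 selects skip-gram (predict context from target); sg=0 selects CBOW
# (predict target from context). vector_size is the embedding dimension.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv["king"][:5])                  # first 5 dims of a dense vector
print(model.wv.similarity("king", "queen"))  # cosine similarity of embeddings
```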
What's included
13 videos • 3 readings • 14 assignments • 1 discussion prompt
13 videos• Total 79 minutes
- Word2Vec • 4 minutes
- Basic 1-Hot Word Representation• 4 minutes
- Feature Based Word Representations• 3 minutes
- Skip-Gram Algorithm Introduction• 6 minutes
- Skip-Gram Probabilities• 8 minutes
- Skip-Gram Negative Sampling (SGNS) Approach• 7 minutes
- Skip-Gram Negative Training Data Example• 7 minutes
- SGNS Log Loss Function• 7 minutes
- Derivative of SGNS Loss Function• 6 minutes
- SGNS Example Part 1• 12 minutes
- SGNS Example Part 2• 8 minutes
- Continuous Bag of Words (CBOW)• 5 minutes
- Module Wrap Up • 4 minutes
3 readings• Total 45 minutes
- Recommended Reading: Basics of Word2Vec • 15 minutes
- Recommended Reading: Skip-Gram Word Embedding • 15 minutes
- Recommended Reading: Other Word2Vec Approaches – CBOW and GloVe • 15 minutes
14 assignments• Total 396 minutes
- Word2Vec• 3 minutes
- Basic 1-Hot Word Representation• 3 minutes
- Feature Based Word Representations• 3 minutes
- Skip-Gram Algorithm Introduction• 3 minutes
- Skip-Gram Probabilities• 3 minutes
- Skip-Gram Negative Sampling (SGNS) Approach• 3 minutes
- Skip-Gram Negative Training Data Example• 3 minutes
- SGNS Log Loss Function• 3 minutes
- Derivative of SGNS Loss Function• 3 minutes
- SGNS Example Part 1• 3 minutes
- SGNS Example Part 2• 3 minutes
- Continuous Bag of Words (CBOW)• 3 minutes
- Graded Quiz - Modules 3 and 4• 60 minutes
- SGA-1 Submission: Word Embedding• 300 minutes
1 discussion prompt• Total 20 minutes
- The Power of Dense Vectors: Choosing an Embedding Model• 20 minutes
This module introduces Language Modeling (LM) and its role in predicting word sequences in natural language. It explores practical applications of LMs and explains N-gram models, including challenges like generalization and handling zero probabilities. Techniques such as smoothing and stupid backoff are covered to improve model robustness. The module concludes with methods for evaluating language models using standard metrics.
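The whole pipeline, counting N-grams, smoothing, and scoring with perplexity, fits in a few lines. Below is a from-scratch sketch of a bigram model with add-one (Laplace) smoothing on an invented two-sentence corpus:

```python
import math
from collections import Counter

train = [["<s>", "the", "cat", "sat", "</s>"],
         ["<s>", "the", "dog", "sat", "</s>"]]

unigrams = Counter(w for sent in train for w in sent)
bigrams = Counter((u, v) for sent in train for u, v in zip(sent, sent[1:]))
V = len(unigrams)   # vocabulary size

def prob(u, v):
    # Add-one smoothing: no bigram ever gets probability zero.
    return (bigrams[(u, v)] + 1) / (unigrams[u] + V)

def perplexity(sent):
    # Perplexity = exp of the average negative log-probability per prediction.
    logp = sum(math.log(prob(u, v)) for u, v in zip(sent, sent[1:]))
    return math.exp(-logp / (len(sent) - 1))

print(perplexity(["<s>", "the", "cat", "sat", "</s>"]))  # seen bigrams: lower
print(perplexity(["<s>", "the", "sat", "</s>"]))         # unseen bigram: higher
```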
What's included
15 videos • 4 readings • 13 assignments • 1 discussion prompt
15 videos• Total 96 minutes
- What is Language Modeling?• 3 minutes
- Language Modelling Applications • 3 minutes
- How to Build a Language Model • 5 minutes
- Markov Assumption • 2 minutes
- N-gram Language Models• 4 minutes
- Bi-gram Computation• 10 minutes
- Raw Probabilities• 10 minutes
- Perils of Overfitting• 3 minutes
- Laplace Smoothing• 14 minutes
- Interpolation & Backoff• 10 minutes
- How Good is the Model?• 3 minutes
- Extrinsic Evaluation• 5 minutes
- Perplexity & Its Example• 9 minutes
- Module Demo• 10 minutes
- Module Wrap-Up• 5 minutes
4 readings• Total 60 minutes
- Recommended Reading: Language Modelling Introduction• 15 minutes
- Recommended Reading: N-grams • 15 minutes
- Recommended Reading: Smoothing • 15 minutes
- Recommended Reading: Language Modelling Evaluation • 15 minutes
13 assignments• Total 39 minutes
- What is Language Modeling? • 3 minutes
- Language Modelling Applications • 3 minutes
- How to Build a Language Model • 3 minutes
- Markov Assumption• 3 minutes
- N-gram Language Models • 3 minutes
- Bi-gram Computation • 3 minutes
- Raw Probabilities • 3 minutes
- Perils of Overfitting • 3 minutes
- Laplace Smoothing• 3 minutes
- Interpolation & Backoff• 3 minutes
- How Good is the Model?• 3 minutes
- Extrinsic Evaluation • 3 minutes
- Perplexity & Its Example• 3 minutes
1 discussion prompt• Total 20 minutes
- Balancing Simplicity and Performance in Language Modelling• 20 minutes
This module explores the use of Neural Networks in Language Modelling, starting with the fundamentals of Feed-Forward Neural Networks and their training process for language tasks. It introduces Neural Language Models, which capture complex patterns in text beyond traditional statistical methods. The module also provides a foundational understanding of Large Language Models (LLMs) and their capabilities. Finally, it introduces Prompt Engineering as a technique to effectively interact with and guide LLMs for various NLP applications.
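As a bridge from N-grams to neural LMs, here is a minimal feed-forward language model sketched in PyTorch (an assumed framework): it embeds a two-word context, passes it through one hidden layer, and predicts the next word on a toy corpus. It illustrates the architecture, not a practical model.

```python
import torch
import torch.nn as nn

corpus = "the cat sat on the mat".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# (context of 2 words -> next word) training pairs
data = [([idx[corpus[i]], idx[corpus[i + 1]]], idx[corpus[i + 2]])
        for i in range(len(corpus) - 2)]

class FFLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=16, hidden=32, context=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # dense word vectors
        self.ff = nn.Sequential(
            nn.Linear(context * emb_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, vocab_size))             # scores over vocabulary
    def forward(self, ctx):                            # ctx: (batch, context)
        return self.ff(self.emb(ctx).flatten(1))

model = FFLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

ctxs = torch.tensor([c for c, _ in data])
targets = torch.tensor([t for _, t in data])
for _ in range(200):                                   # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(ctxs), targets)
    loss.backward()
    opt.step()

# After training, the context "the cat" should strongly predict "sat".
probs = model(torch.tensor([[idx["the"], idx["cat"]]])).softmax(-1)
print(vocab[probs.argmax().item()])
```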
What's included
17 videos • 6 readings • 16 assignments • 1 discussion prompt
17 videos• Total 98 minutes
- Neural Network Unit• 3 minutes
- Non-Linear Activation Functions• 5 minutes
- Perceptron with Examples• 4 minutes
- Multi-Layer Perceptron• 8 minutes
- Softmax Function with Example• 4 minutes
- Fully Connected Neural Network• 4 minutes
- Feedforward Network• 5 minutes
- Forward Algorithm• 4 minutes
- Backpropagation Algorithm• 5 minutes
- Training Neural Network• 12 minutes
- Neural Language Modeling• 6 minutes
- Training Neural Language Model• 9 minutes
- N-gram Versus Neural Language Model• 4 minutes
- Neural LM Demo• 10 minutes
- What is an LLM?• 6 minutes
- LLM Use Cases• 5 minutes
- Module Wrap Up• 3 minutes
6 readings• Total 105 minutes
- Recommended Reading: Introduction to Neural Network• 15 minutes
- Recommended Reading: Feed Forward Neural Network • 15 minutes
- Recommended Reading: Training Neural Network • 15 minutes
- Recommended Reading: Neural Language Models • 15 minutes
- Recommended Reading: Introduction to Large Language Models • 30 minutes
- Instructional Document: Staff-Graded Assignment-2• 15 minutes
16 assignments• Total 105 minutes
- Neural Network Unit• 3 minutes
- Non-Linear Activation Functions• 3 minutes
- Perceptron with Examples• 3 minutes
- Multi-Layer Perceptron• 3 minutes
- Softmax Function with Example• 3 minutes
- Fully Connected Neural Network• 3 minutes
- Feedforward Network• 3 minutes
- Forward Algorithm• 3 minutes
- Backpropagation Algorithm• 3 minutes
- Training Neural Network• 3 minutes
- Neural Language Modeling• 3 minutes
- Training Neural Language Model• 3 minutes
- N-gram Versus Neural Language Model• 3 minutes
- What is an LLM?• 3 minutes
- LLM Use Cases• 3 minutes
- Graded Quiz - Modules 5 and 6• 60 minutes
1 discussion prompt• Total 20 minutes
- The Next Generation of Language Modelling: From N-grams to LLMs• 20 minutes
This module introduces Part-of-Speech (POS) tagging, the techniques used to perform it, and its applications in NLP. POS tagging is a fundamental NLP task that assigns a grammatical category (such as noun, verb, or adjective) to each word in a text. Starting from basic linguistic foundations and real-world applications, the module traces the evolution of POS tagging techniques, from statistical models such as Hidden Markov Models (HMMs) and Maximum Entropy classifiers to modern deep learning approaches using Recurrent Neural Networks (RNNs). Learners will gain a strong theoretical understanding and insight into how POS tagging supports downstream tasks such as parsing, named entity recognition, and machine translation. The module includes a hands-on coding demonstration of POS tagging.
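To preview the HMM-plus-Viterbi core of the module, here is a hand-built sketch: the tag set, transition, and emission probabilities are all invented for illustration, where a real tagger would estimate them from a tagged corpus.

```python
import math

states = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}               # P(first tag)
trans = {"DET":  {"DET": 0.05, "NOUN": 0.85, "VERB": 0.10},  # P(tag | prev tag)
         "NOUN": {"DET": 0.10, "NOUN": 0.20, "VERB": 0.70},
         "VERB": {"DET": 0.50, "NOUN": 0.30, "VERB": 0.20}}
emit = {"DET":  {"the": 0.9, "a": 0.1},                      # P(word | tag)
        "NOUN": {"dog": 0.6, "cat": 0.4},
        "VERB": {"barks": 0.7, "runs": 0.3}}
FLOOR = 1e-6   # stand-in probability for unseen (tag, word) pairs

def viterbi(words):
    # v[s] = best log-probability of any tag sequence ending in state s
    v = {s: math.log(start[s]) + math.log(emit[s].get(words[0], FLOOR))
         for s in states}
    back = []
    for w in words[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[p] + math.log(trans[p][s]))
            col[s] = (v[prev] + math.log(trans[prev][s])
                      + math.log(emit[s].get(w, FLOOR)))
            ptr[s] = prev
        back.append(ptr)
        v = col
    # Recover the best path by following back-pointers from the best final state.
    tag = max(states, key=lambda s: v[s])
    path = [tag]
    for ptr in reversed(back):
        tag = ptr[tag]
        path.append(tag)
    return list(reversed(path))

print(viterbi(["the", "dog", "barks"]))   # -> ['DET', 'NOUN', 'VERB']
```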
What's included
13 videos • 5 readings • 11 assignments • 1 discussion prompt
13 videos• Total 74 minutes
- Outline of the Module • 2 minutes
- What is POS Tagging? • 6 minutes
- Challenges in POS Tagging• 4 minutes
- POS Tagsets • 6 minutes
- Markov Chain• 5 minutes
- Hidden Markov Model• 5 minutes
- Hidden Markov Model as POS Tagger • 6 minutes
- Viterbi Algorithm • 8 minutes
- Viterbi Algorithm - Example• 8 minutes
- Logistic Regression - Overview• 9 minutes
- Multinomial Logistic Regression - Overview• 6 minutes
- Maximum Entropy Markov Models (MEMM)• 7 minutes
- Module Wrap Up• 2 minutes
5 readings• Total 110 minutes
- Code Document: POS tagging using NLTK / spaCy • 10 minutes
- Recommended Reading: Introduction to POS Tagging and Applications • 30 minutes
- Code Document: Demonstrating HMM Based POS Tagger• 10 minutes
- Recommended Reading: HMM for POS Tagging • 30 minutes
- Recommended Reading: Maximum Entropy Markov Models• 30 minutes
11 assignments• Total 33 minutes
- What is POS Tagging?• 3 minutes
- Challenges in POS Tagging• 3 minutes
- POS Tagsets • 3 minutes
- Markov Chain• 3 minutes
- Hidden Markov Model• 3 minutes
- Hidden Markov Model as POS Tagger • 3 minutes
- Viterbi Algorithm • 3 minutes
- Viterbi Algorithm - Example• 3 minutes
- Logistic Regression - Overview• 3 minutes
- Multinomial Logistic Regression - Overview• 3 minutes
- Maximum Entropy Markov Models (MEMM)• 3 minutes
1 discussion prompt• Total 30 minutes
- POS Tagging: The Right Tool for the Job• 30 minutes
This module introduces students to the syntactic structure of natural language and its critical role in Natural Language Processing (NLP) applications. Parsing is the task of assigning a structured representation—typically a tree—to a sentence, revealing the grammatical relationships between its components. The module begins by revisiting Context-Free Grammars (CFGs) and how they form the foundation for syntactic parsing. We explore Constituent Parsing, introducing classical parsing techniques such as the CKY (Cocke-Kasami-Younger) algorithm. The module then transitions to modern span-based neural parsing approaches that use neural networks to score and predict parse trees. A significant portion of the module is dedicated to Dependency Parsing, where syntactic structure is represented through direct relationships between words rather than phrases. Students will study both transition-based and graph-based dependency parsers, gaining insight into their strengths, algorithmic designs, and practical performance. Throughout the module, we emphasise real-world NLP applications.
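A quick way to see dependency structure is with spaCy, whose parser is itself transition-based (the library, model name, and sentence are assumptions; the model must first be installed with `python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The cat chased the mouse into the garden.")

# Each token points to its syntactic head with a typed relation -- the
# word-to-word arcs that dependency formalisms use instead of phrases.
for tok in doc:
    print(f"{tok.text:8} --{tok.dep_:8}--> {tok.head.text}")
```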
What's included
18 videos • 4 readings • 18 assignments • 1 discussion prompt
18 videos• Total 88 minutes
- Outline of the Module • 2 minutes
- Introduction to Context-Free Grammars (CFGs)• 8 minutes
- Constituency and Phrase Structure• 5 minutes
- Ambiguity in Grammar• 4 minutes
- Chomsky Normal Form (CNF) and Grammar Normalisation• 5 minutes
- Treebanks and Empirical Grammar• 3 minutes
- CKY Algorithm• 7 minutes
- CKY Algorithm - Walkthrough• 8 minutes
- Parse Tree Recovery From CKY Table• 5 minutes
- Neural Span-based Constituency Parsing• 5 minutes
- What is Dependency Parsing?• 5 minutes
- Dependency Formalism• 5 minutes
- Universal Dependency Relations• 4 minutes
- Transition-Based Dependency Parsing • 6 minutes
- Transition-Based Dependency Parsing - Walkthrough• 5 minutes
- Creating an Oracle • 4 minutes
- Graph-Based Dependency Parsing• 5 minutes
- Module Wrap Up• 2 minutes
4 readings• Total 120 minutes
- Recommended Reading: Review of Context-Free Grammars and Parsing in NLP • 30 minutes
- Recommended Reading: Constituency Parsing and CKY Algorithm • 30 minutes
- Recommended Reading: Dependency Parsing – Theory and Representations • 30 minutes
- Recommended Reading: Dependency Parsing Algorithms and Modern Applications • 30 minutes
18 assignments• Total 411 minutes
- Introduction to Context-Free Grammars (CFGs)• 3 minutes
- Constituency and Phrase Structure• 3 minutes
- Ambiguity in Grammar• 3 minutes
- Chomsky Normal Form (CNF) and Grammar Normalisation• 3 minutes
- Treebanks and Empirical Grammar• 3 minutes
- CKY Algorithm• 3 minutes
- CKY Algorithm - Walkthrough• 3 minutes
- Parse Tree Recovery From CKY Table• 3 minutes
- Neural Span-based Constituency Parsing• 3 minutes
- What is Dependency Parsing?• 3 minutes
- Dependency Formalism• 3 minutes
- Universal Dependency Relations• 3 minutes
- Transition-Based Dependency Parsing • 3 minutes
- Transition-Based Dependency Parsing - Walkthrough• 6 minutes
- Creating an Oracle• 3 minutes
- Graph-Based Dependency Parsing• 3 minutes
- Graded Quiz: Modules 7 and 8• 60 minutes
- SGA-2: POS Tagging and Parsing• 300 minutes
1 discussion prompt• Total 30 minutes
- Parsing Frameworks: Constituent vs. Dependency• 30 minutes
This module explores the semantic dimension of natural language by covering both lexical semantics—including word senses, ambiguity, and disambiguation techniques—and the semantic web—a framework for enabling machine-readable, structured understanding of web data. The module starts with foundational concepts in lexical semantics and WordNet, then proceeds to classical and modern word sense disambiguation (WSD) methods. The second part focuses on Semantic Web technologies, covering ontologies, knowledge graphs, RDF/OWL, and their role in enabling intelligent systems and knowledge-driven NLP applications.
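Both halves of the lexical-semantics story can be previewed with NLTK's WordNet interface and its built-in simplified Lesk implementation (NLTK is assumed; the example sentence is invented):

```python
import nltk
for pkg in ("wordnet", "omw-1.4"):
    nltk.download(pkg, quiet=True)

from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

# Polysemy: "bank" maps to several synsets (senses) in WordNet.
for syn in wn.synsets("bank")[:3]:
    print(syn.name(), "-", syn.definition())

# Knowledge-based WSD: simplified Lesk picks the sense whose dictionary
# gloss overlaps most with the surrounding context words.
context = "I deposited my salary at the bank near the river".split()
print(lesk(context, "bank", "n"))   # returns a Synset (or None)
```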
What's included
17 videos • 5 readings • 14 assignments • 1 discussion prompt
17 videos• Total 85 minutes
- Outline of the Module• 1 minute
- What is a Word Sense?• 3 minutes
- Homonymy vs Polysemy• 7 minutes
- Sense Relations• 7 minutes
- Introduction to WordNet and Synsets• 7 minutes
- Relations in WordNet• 5 minutes
- Navigating WordNet Hierarchies and Graph Structures• 5 minutes
- What is Word Sense Disambiguation? • 4 minutes
- Supervised WSD• 8 minutes
- Knowledge-Based WSD: Lesk Algorithm• 5 minutes
- From Syntactic Web to Semantic Web: What's the Problem?• 6 minutes
- Semantic Web Vision: Data Integration and Automation• 3 minutes
- Ontologies• 4 minutes
- Ontology Languages and Their Layers• 9 minutes
- What is a Knowledge Graph? • 3 minutes
- Applications in NLP• 6 minutes
- Module Wrap Up• 1 minute
5 readings• Total 130 minutes
- Recommended Reading: Word Senses and Lexical Semantics• 30 minutes
- Code Document: Querying WordNet in Python (using nltk.corpus.wordnet)• 10 minutes
- Recommended Reading: WordNet and Semantic Lexicons• 30 minutes
- Recommended Reading: Word Sense Disambiguation (WSD)• 30 minutes
- Recommended Reading: Introduction to the Semantic Web and Ontologies• 30 minutes
14 assignments• Total 42 minutes
- What is a Word Sense? • 3 minutes
- Homonymy vs Polysemy• 3 minutes
- Sense Relations• 3 minutes
- Introduction to WordNet and Synsets• 3 minutes
- Relations in WordNet• 3 minutes
- Navigating WordNet Hierarchies and Graph Structures• 3 minutes
- What is Word Sense Disambiguation?• 3 minutes
- Supervised WSD• 3 minutes
- Knowledge-Based WSD: Lesk Algorithm• 3 minutes
- Semantic Web Vision: Data Integration and Automation• 3 minutes
- Ontologies• 3 minutes
- Ontology Languages and Their Layers• 3 minutes
- What is a Knowledge Graph? • 3 minutes
- Applications in NLP• 3 minutes
1 discussion prompt• Total 30 minutes
- Disambiguating the Future: WSD and the Semantic Web• 30 minutes
This module introduces students to the evolution of neural network architectures in NLP, beginning with recurrent models (RNNs), progressing through attention mechanisms, and culminating in Transformer-based models that have revolutionised natural language processing. Through hands-on coding and application-driven lessons, students will explore how Transformers power state-of-the-art systems in sentiment analysis (text classification), machine translation, and question answering. The module emphasises both theoretical foundations and practical implementation using modern deep learning frameworks.
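Much of this ends up as a few lines in practice. The sketch below runs sentiment analysis with the Hugging Face Transformers pipeline; the library and its default fine-tuned model are assumptions here, not course requirements:

```python
from transformers import pipeline

# Downloads a small pre-trained Transformer fine-tuned for sentiment analysis.
classifier = pipeline("sentiment-analysis")
print(classifier("The lectures on self-attention were remarkably clear."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```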
What's included
16 videos • 5 readings • 17 assignments • 1 discussion prompt
16 videos• Total 97 minutes
- What RNNs Are and Why They Fall Short• 7 minutes
- Why Do We Need Attention• 5 minutes
- The Attention Mechanism Explained• 6 minutes
- From Attention to Transformer Architecture • 6 minutes
- High-Level Structure of the Transformer• 4 minutes
- Self-Attention in Detail• 6 minutes
- Multi-Head Attention• 4 minutes
- Positional Encodings• 4 minutes
- Popular Transformer Variants• 5 minutes
- What Text Summarisation is and its Uses • 2 minutes
- Types of Text Summarisation• 5 minutes
- Natural Text Summarisation • 11 minutes
- Stages of Text Summarisation • 6 minutes
- Demo of Text Summarisation • 9 minutes
- Ethical Issues in NLP • 10 minutes
- Ethical Design of NLP Applications • 6 minutes
5 readings• Total 130 minutes
- Recommended Reading: From RNNs to Attention• 30 minutes
- Recommended Reading: Transformer Architecture• 30 minutes
- Code Document: Transformer Demonstration with Classification• 10 minutes
- NLP Application - Text Summarisation• 30 minutes
- Recommended Reading: Ethics in NLP• 30 minutes
17 assignments• Total 108 minutes
- What RNNs Are and Why They Fall Short• 3 minutes
- Why Do We Need Attention• 3 minutes
- The Attention Mechanism Explained• 3 minutes
- From Attention to Transformer Architecture • 3 minutes
- High-Level Structure of the Transformer• 3 minutes
- Self-Attention in Detail• 3 minutes
- Multi-Head Attention• 3 minutes
- Positional Encodings• 3 minutes
- Popular Transformer Variants• 3 minutes
- What Text Summarisation is and its Uses • 3 minutes
- Types of Text Summarisation • 3 minutes
- Natural Text Summarisation • 3 minutes
- Stages of Text Summarisation • 3 minutes
- Demo of Text Summarisation • 3 minutes
- Ethical Issues in NLP • 3 minutes
- Ethical Design of NLP Applications • 3 minutes
- Graded Quiz - Modules 9 and 10• 60 minutes
1 discussion prompt• Total 30 minutes
- The Power and Peril of Large Language Models• 30 minutes
End Term Examination
What's included
1 assignment
1 assignment• Total 30 minutes
- End Term Examination • 30 minutes
Instructors
Prof. Dr. Chetana Gavankar

Offered by
Birla Institute of Technology & Science, Pilani (BITS Pilani) is one of only ten private universities in India to be recognised as an Institute of Eminence by the Ministry of Human Resource Development, Government of India. It has been consistently ranked highly by both governmental and private ranking agencies for the innovative processes and capabilities that have enabled it to impart quality education and emerge as the best private science and engineering institute in India. BITS Pilani has four campuses, in Pilani, Goa, Hyderabad, and Dubai, and has been offering bachelor's, master's, and certificate programmes for over 58 years, helping to launch the careers of more than 100,000 professionals.
Frequently asked questions
To access the course materials and assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll. You can try a Free Trial instead, or apply for Financial Aid. The course may also offer a 'Full Course, No Certificate' option, which lets you see all course materials, submit required assessments, and get a final grade, but without earning a Certificate.
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you'll find a link to apply on the description page.