Birla Institute of Technology & Science, Pilani

Natural Language Processing

Instructors: MD Husnain, Prof. S. P. Vimal

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Intermediate level

Recommended experience

6 weeks to complete at 10 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Understand and recall core concepts and techniques in Natural Language Processing (NLP).

  • Analyse and evaluate NLP methods for varied tasks, considering performance, context, and suitability.

  • Design and develop real-world NLP applications by integrating multiple techniques.

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

January 2026

Assessments

140 assignments

Taught in English

There are 12 modules in this course

This module introduces the course and its syllabus, setting the foundation for the learning journey ahead. The introductory video outlines the skills and knowledge learners can expect to gain, while the syllabus reading covers the essential course components: course values, assessment criteria, the grading system, the schedule, details of live sessions, and a recommended reading list. A discussion prompt then invites learners to introduce themselves and connect with the course community.

What's included

2 videos, 1 reading, 1 discussion prompt

This module introduces the fundamental concepts of Natural Language Processing (NLP). It begins with the definition of NLP and explores a variety of real-world applications. You will gain an understanding of Natural Language Understanding (NLU) and Natural Language Generation (NLG). The module also covers key evaluation metrics used to assess NLP systems. Additionally, a hands-on lab session will guide you through the implementation of basic NLP preprocessing techniques.
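
The evaluation metrics referred to here typically include precision, recall, and F1. As a quick illustration (the error counts below are made up, not from the course):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard precision/recall/F1 from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# e.g. a tagger with 8 true positives, 2 false positives, 4 false negatives
p, r, f = precision_recall_f1(8, 2, 4)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.8 0.667 0.727
```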

What's included

13 videos, 4 readings, 12 assignments, 1 discussion prompt

This module introduces essential NLP preprocessing techniques. It begins with regular expressions for text pattern matching, followed by an overview of words and corpora as foundational data sources. Sentence segmentation and tokenization are then covered through practical demonstrations. Finally, the module explores normalization, lemmatization, and stemming as methods to standardise text, with a demo highlighting their differences and effects.
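
The preprocessing pipeline described here can be sketched in a few lines of plain Python; the regex tokenizer and the suffix-stripping "stemmer" below are deliberately naive stand-ins for the tools demonstrated in the module (real systems use a Porter stemmer or a lemmatizer):

```python
import re

def tokenize(text):
    """Regex tokenizer: words (with optional apostrophes) or single punctuation marks."""
    return re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)

def normalize(tokens):
    """Case-fold tokens as a simple normalisation step."""
    return [t.lower() for t in tokens]

def naive_stem(token):
    """Toy suffix-stripping stemmer, for illustration only."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = normalize(tokenize("The cats were chasing mice."))
print(tokens)                            # ['the', 'cats', 'were', 'chasing', 'mice', '.']
print([naive_stem(t) for t in tokens])   # ['the', 'cat', 'were', 'chas', 'mice', '.']
```

Note how "chasing" stems to the non-word "chas": stemming trades linguistic accuracy for speed, which is exactly the contrast with lemmatization highlighted in the demo.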

What's included

14 videos, 5 readings, 14 assignments, 1 discussion prompt

This module explores lexical and vector semantics, focusing on computational representations of word meaning. It covers word vectors, Bag of Words, and co-occurrence matrices to capture contextual relationships. Techniques such as TF-IDF are introduced to measure word importance, along with methods for computing word similarity. Practical examples and mathematical exercises on TF-IDF help reinforce these core NLP concepts.
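
TF-IDF as covered here can be computed from first principles; the three-document corpus below is a made-up toy example, not course data:

```python
import math

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def tf(term, doc):
    """Term frequency: relative count of the term within one document."""
    return doc.count(term) / len(doc)

def idf(term):
    """Inverse document frequency (assumes the term occurs in at least one document)."""
    df = sum(1 for doc in tokenized if term in doc)
    return math.log(N / df)

def tf_idf(term, doc):
    return tf(term, doc) * idf(term)

# "the" occurs in two of three documents, so it is down-weighted relative to "cat"
print(tf_idf("the", tokenized[0]))
print(tf_idf("cat", tokenized[0]))
```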

What's included

13 videos, 3 readings, 10 assignments, 1 discussion prompt

This module introduces Word Embeddings, focusing on the transition from sparse to dense vector representations of words. It covers Word2Vec models, including Skip-gram and CBOW, explained with simple, intuitive examples. The module also explores GloVe embeddings, which capture global word co-occurrence statistics for improved semantic understanding. Learners will visualise word embeddings to gain insights into how words relate in vector space. Finally, the module highlights real-world applications of word embeddings in NLP tasks like sentiment analysis, machine translation, and question answering.
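
Similarity between dense word vectors is usually measured with cosine similarity; the 4-dimensional vectors below are invented toys, not trained Word2Vec or GloVe embeddings (which typically have 100 to 300 dimensions):

```python
import numpy as np

# Toy "embeddings": similar words are given deliberately similar vectors
emb = {
    "king":  np.array([0.8, 0.7, 0.1, 0.0]),
    "queen": np.array([0.7, 0.8, 0.1, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(u, v):
    """Cosine similarity: dot product of the vectors divided by the product of their norms."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["king"], emb["queen"]))  # close to 1: similar directions
print(cosine(emb["king"], emb["apple"]))  # much lower: dissimilar directions
```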

What's included

13 videos, 3 readings, 14 assignments, 1 discussion prompt

This module introduces Language Modelling (LM) and its role in predicting word sequences in natural language. It explores practical applications of LMs and explains N-gram models, including challenges like generalisation and handling zero probabilities. Techniques such as smoothing and stupid backoff are covered to improve model robustness. The module concludes with methods for evaluating language models using standard metrics.
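
A bigram model with add-one (Laplace) smoothing and a perplexity calculation can be sketched as follows, on a tiny made-up corpus:

```python
import math
from collections import Counter

corpus = ["<s> i like nlp </s>", "<s> i like deep learning </s>", "<s> nlp is fun </s>"]
sentences = [s.split() for s in corpus]

unigrams = Counter(w for sent in sentences for w in sent)
bigrams = Counter((a, b) for sent in sentences for a, b in zip(sent, sent[1:]))
V = len(unigrams)  # vocabulary size, used by add-one smoothing

def p_laplace(w, prev):
    """Add-one smoothed bigram probability P(w | prev): never zero, even for unseen pairs."""
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

def perplexity(sentence):
    """Perplexity = exp of the average negative log probability per predicted word."""
    words = sentence.split()
    log_p = sum(math.log(p_laplace(b, a)) for a, b in zip(words, words[1:]))
    return math.exp(-log_p / (len(words) - 1))

print(p_laplace("like", "i"))   # seen bigram: relatively high
print(p_laplace("fun", "i"))    # unseen bigram: small but non-zero thanks to smoothing
print(perplexity("<s> i like nlp </s>"))
```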

What's included

15 videos, 4 readings, 13 assignments, 1 discussion prompt

This module explores the use of Neural Networks in Language Modelling, starting with the fundamentals of Feed-Forward Neural Networks and their training process for language tasks. It introduces Neural Language Models, which capture complex patterns in text beyond traditional statistical methods. The module also provides a foundational understanding of Large Language Models (LLMs) and their capabilities. Finally, it introduces Prompt Engineering as a technique to effectively interact with and guide LLMs for various NLP applications.
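
The forward pass of a feed-forward neural language model over a fixed context window can be sketched with NumPy. The weights below are randomly initialised and untrained, so the output distribution is meaningless except as a shape and structure check; a real model would learn these parameters by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "i", "like", "nlp", "</s>"]
V, d, h = len(vocab), 8, 16  # vocabulary size, embedding dim, hidden dim

# Untrained parameters (a real model learns these from data)
E = rng.normal(size=(V, d))        # embedding matrix
W1 = rng.normal(size=(2 * d, h))   # hidden layer: concatenated 2-word context -> h
W2 = rng.normal(size=(h, V))       # output layer: hidden -> vocabulary scores

def next_word_probs(w1, w2):
    """Forward pass: embed a 2-word context, apply tanh hidden layer, softmax over the vocab."""
    x = np.concatenate([E[vocab.index(w1)], E[vocab.index(w2)]])
    hidden = np.tanh(x @ W1)
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = next_word_probs("<s>", "i")
print(dict(zip(vocab, probs.round(3))))  # a probability distribution over the next word
```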

What's included

17 videos, 6 readings, 16 assignments, 1 discussion prompt

This module introduces Part-of-Speech (POS) tagging, the techniques used to perform it, and its applications in NLP. POS tagging is a fundamental NLP task that involves assigning grammatical categories (such as noun, verb, or adjective) to words in text. Starting from basic linguistic foundations and real-world applications, the module traces the evolution of POS tagging techniques, from statistical models like Hidden Markov Models (HMMs) and Maximum Entropy classifiers to modern deep learning approaches using Recurrent Neural Networks (RNNs). Learners will gain a strong theoretical understanding of how POS tagging supports downstream tasks like parsing, named entity recognition, and machine translation. The module includes a hands-on coding demonstration of POS tagging.
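
The classic decoding step for HMM taggers is the Viterbi algorithm, illustrated here on a two-tag toy model; all probabilities are invented for the example:

```python
# Viterbi decoding for a toy HMM POS tagger (made-up probabilities).
tags = ["NOUN", "VERB"]
start = {"NOUN": 0.6, "VERB": 0.4}
trans = {("NOUN", "NOUN"): 0.3, ("NOUN", "VERB"): 0.7,
         ("VERB", "NOUN"): 0.6, ("VERB", "VERB"): 0.4}
emit = {("NOUN", "dogs"): 0.4, ("VERB", "dogs"): 0.05,
        ("NOUN", "bark"): 0.1, ("VERB", "bark"): 0.5}

def viterbi(words):
    # v[t][tag] = probability of the best tag path ending in `tag` at position t
    v = [{t: start[t] * emit.get((t, words[0]), 1e-6) for t in tags}]
    back = [{}]
    for w in words[1:]:
        scores, ptrs = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: v[-1][p] * trans[(p, t)])
            scores[t] = v[-1][best_prev] * trans[(best_prev, t)] * emit.get((t, w), 1e-6)
            ptrs[t] = best_prev
        v.append(scores)
        back.append(ptrs)
    # Backtrace from the best final tag
    last = max(tags, key=lambda t: v[-1][t])
    path = [last]
    for ptrs in reversed(back[1:]):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

print(viterbi(["dogs", "bark"]))  # ['NOUN', 'VERB']
```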

What's included

13 videos, 5 readings, 11 assignments, 1 discussion prompt

This module introduces students to the syntactic structure of natural language and its critical role in Natural Language Processing (NLP) applications. Parsing is the task of assigning a structured representation—typically a tree—to a sentence, revealing the grammatical relationships between its components. The module begins by revisiting Context-Free Grammars (CFGs) and how they form the foundation for syntactic parsing. We explore Constituent Parsing, introducing classical parsing techniques such as the CKY (Cocke-Kasami-Younger) algorithm. The module then transitions to modern span-based neural parsing approaches that use neural networks to score and predict parse trees. A significant portion of the module is dedicated to Dependency Parsing, where syntactic structure is represented through direct relationships between words rather than phrases. Students will study both transition-based and graph-based dependency parsers, gaining insight into their strengths, algorithmic designs, and practical performance. Throughout the module, we emphasise real-world NLP applications.
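
The CKY algorithm mentioned above can be shown as a recogniser over a tiny grammar in Chomsky Normal Form; the grammar and lexicon below are toy examples, not course materials:

```python
# CKY recognition with a tiny CNF grammar (toy example).
grammar = {  # binary RHS -> set of LHS symbols
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
lexicon = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}

def cky_recognise(words):
    n = len(words)
    # table[i][j] = set of non-terminals that derive the span words[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(lexicon.get(w, set()))
    for span in range(2, n + 1):          # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # try every split point
                for b in table[i][k]:
                    for c in table[k][j]:
                        table[i][j] |= grammar.get((b, c), set())
    return "S" in table[0][n]

print(cky_recognise("the dog saw the cat".split()))  # True
print(cky_recognise("saw the the dog".split()))      # False
```

Extending each table cell to store backpointers instead of bare symbols turns this recogniser into a parser that recovers the tree itself.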

What's included

18 videos, 4 readings, 18 assignments, 1 discussion prompt

This module explores the semantic dimension of natural language by covering both lexical semantics—including word senses, ambiguity, and disambiguation techniques—and the semantic web—a framework for enabling machine-readable, structured understanding of web data. The module starts with foundational concepts in lexical semantics and WordNet, then proceeds to classical and modern word sense disambiguation (WSD) methods. The second part focuses on Semantic Web technologies, covering ontologies, knowledge graphs, RDF/OWL, and their role in enabling intelligent systems and knowledge-driven NLP applications.
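
A classic WSD baseline in this area is the simplified Lesk algorithm, which picks the sense whose dictionary gloss shares the most words with the context; the glosses below are toy stand-ins for real WordNet definitions:

```python
# Simplified Lesk: choose the sense whose gloss overlaps most with the context.
senses = {
    "bank_financial": "institution that accepts deposits and lends money",
    "bank_river": "sloping land beside a body of water such as a river",
}

def simplified_lesk(context_sentence):
    context = set(context_sentence.lower().split())
    def overlap(sense):
        # Count gloss words that also appear in the context
        return len(context & set(senses[sense].split()))
    return max(senses, key=overlap)

print(simplified_lesk("he sat on the bank of the river and watched the water"))
# -> 'bank_river'
print(simplified_lesk("she went to the bank to deposit money"))
# -> 'bank_financial'
```

Real implementations filter stopwords and use stemmed glosses; this bare version only shows the overlap-counting idea.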

What's included

17 videos, 5 readings, 14 assignments, 1 discussion prompt

This module introduces students to the evolution of neural network architectures in NLP, beginning with recurrent models (RNNs), progressing through attention mechanisms, and culminating in Transformer-based models that have revolutionised natural language processing. Through hands-on coding and application-driven lessons, students will explore how Transformers power state-of-the-art systems in sentiment analysis (text classification), machine translation, and question answering. The module emphasises both theoretical foundations and practical implementation using modern deep learning frameworks.
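
The core operation of the Transformer is scaled dot-product attention; here is a minimal single-head NumPy version (no masking, random inputs, shapes chosen arbitrarily for the demo):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V   (single head, no mask)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # row-wise softmax
    return weights @ V, weights                              # weighted mix of values

rng = np.random.default_rng(1)
Q = rng.normal(size=(3, 4))  # 3 query positions, d_k = 4
K = rng.normal(size=(5, 4))  # 5 key positions
V = rng.normal(size=(5, 4))  # one value vector per key

out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)        # (3, 4): one output vector per query
print(w.sum(axis=-1))   # each row of attention weights sums to 1
```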

What's included

16 videos, 5 readings, 17 assignments, 1 discussion prompt

End Term Examination

What's included

1 assignment

Instructors

MD Husnain
1 Course · 1 learner
Prof. S. P. Vimal
Birla Institute of Technology & Science, Pilani
2 Courses · 675 learners
