NLP University has seven main modules and two appendices. Let me tell you about them one by one, but first, a bit about the structure of the course.
The material we’ve put together can be used in two ways:
- Sequential: If you want to start from the very beginning, the course will take you through the basics of Large Language Models and their architecture, and then a lot of practice, including coding, and prompt engineering. The course starts with Large Language Models and requires very little background knowledge. However, if you'd like to brush up on the basics of Machine Learning and Natural Language Processing, this material is in Appendix 1.
- Non-Sequential: If you feel like you already know the basics of NLP and LLMs, or you have a particular project in mind, you can skip ahead to the later modules. If you have a specific goal in mind, you are more than welcome to go directly to that particular module.
In these modules you'll learn what LLMs are and how they work, and then get practical hands-on labs where you'll build your own language applications. The first module is focused on theory, while the later modules combine theory with hands-on practice in code labs.
Module 1 gives you the fundamentals of LLMs. This module is geared towards understanding the architecture of transformer models, including embeddings, similarity, and attention, as well as some of their applications, such as semantic search.
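To make the ideas of embeddings and similarity concrete, here is a minimal sketch (not taken from the course material) that compares made-up toy vectors with cosine similarity. Real embedding models produce vectors with hundreds or thousands of dimensions; the 4-dimensional vectors below are invented purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: values near 1.0 mean similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (made up for illustration only).
embeddings = {
    "dog":   [0.9, 0.1, 0.0, 0.2],
    "puppy": [0.8, 0.2, 0.1, 0.3],
    "car":   [0.0, 0.9, 0.8, 0.1],
}

print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))  # high (related words)
print(cosine_similarity(embeddings["dog"], embeddings["car"]))    # low (unrelated words)
```

The same comparison, done over embeddings of whole sentences, is the core of semantic search: the query is embedded, and the documents whose embeddings are most similar to it are returned.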
Module 2 is an introduction to Cohere’s endpoints. It is focused on the endpoints used for text representation, such as embed, classify, and summarize. In this module you’ll find several code labs that guide you through building cool applications such as news clustering, text classification, and semantic search.
Since generation is so important and has become so relevant lately, Module 3 is focused solely on generation. A very important part of generation is prompting: giving the model the right question or command so that it generates what you want. The first part of the module is done entirely in the Cohere playground, so there’s no code involved. The second part then delves into code and teaches you how to use the generate endpoint.
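As a taste of what prompting involves, here is a small sketch (my own illustration, not course code) of how a few-shot prompt is typically assembled: an instruction, one or more worked examples, and then the new input, ending where the model is expected to continue. The summarization task and example texts are invented:

```python
def build_prompt(instruction, examples, user_input):
    """Assemble a few-shot prompt: instruction, worked examples, then the new input."""
    parts = [instruction, ""]
    for text, output in examples:
        parts.append(f"Text: {text}")
        parts.append(f"Summary: {output}")
        parts.append("")
    # End with the new input and a trailing label for the model to complete.
    parts.append(f"Text: {user_input}")
    parts.append("Summary:")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Summarize each text in one short sentence.",
    examples=[("The meeting was moved from 2pm to 4pm on Friday.",
               "The Friday meeting now starts at 4pm.")],
    user_input="Sales grew 12% this quarter, driven mostly by the new product line.",
)
print(prompt)
```

A prompt built this way can be pasted into the playground or sent to a generate endpoint; the worked example steers the model towards the format you want.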
Once you know how to use Cohere's endpoints to build applications, the next step is to learn how to deploy these applications in order for others to use them. In Module 4, you'll learn how to deploy LLM-powered applications on several platforms and frameworks, including AWS Sagemaker, Streamlit, and FastAPI.
Search systems have been crucial in computing since before the birth of the internet, enabling efficient navigation and retrieval of information within vast data sets. In Module 5, you'll learn how to enhance the performance of search systems with LLMs, and also how to use search to improve the results of a generative model, in order to reduce hallucinations and output more exact responses. You'll first learn about basic search systems like keyword search. Then you'll learn about more powerful search systems that retrieve information based on semantic meaning, such as dense retrieval and rerank. Finally, you'll get to put what you learned into practice in our coding labs, where you'll be searching for answers to queries using a large dataset of Wikipedia articles.
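To illustrate the baseline that Module 5 starts from, here is a bare-bones keyword search sketch (my own toy example; the three documents stand in for a real corpus such as the Wikipedia dataset). It ranks documents by how many query words they contain, which also exposes the limitation that motivates semantic methods: a query like "city in France" shares no words with "The Eiffel Tower is in Paris", so keyword matching alone would miss it.

```python
def keyword_search(query, documents):
    """Rank documents by how many query words they contain (a bare-bones keyword search)."""
    query_words = set(query.lower().split())
    scored = []
    for doc in documents:
        doc_words = set(doc.lower().split())
        scored.append((len(query_words & doc_words), doc))
    # Highest word overlap first; drop documents with no overlap at all.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score > 0]

docs = [
    "The Eiffel Tower is in Paris",
    "Python is a programming language",
    "Paris is the capital of France",
]
print(keyword_search("capital of France", docs))  # only the matching document
```

Dense retrieval replaces the word-overlap score with similarity between embeddings, so documents can match on meaning rather than exact words.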
Generative LLMs are powerful and are making a deep impact in many areas. But to extract the best performance from them, it is imperative that you give them the most effective prompts. In Module 6, you'll learn the best practice techniques for constructing an effective prompt to help you get the intended output from a generative model. You'll then learn how to apply these techniques to various use cases, as well as how to chain multiple prompts to unlock even more opportunities to build innovative applications. Finally, you'll learn how to validate and evaluate the outputs generated by an LLM.
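The prompt-chaining idea mentioned above can be sketched in a few lines. In this illustration (my own, not course code), a stub function stands in for a real generate call so the pattern is runnable offline; the product name and review text are invented:

```python
def fake_llm(prompt):
    """Stand-in for a real generate call, so the chaining pattern runs offline."""
    if prompt.startswith("Extract the product name"):
        return "SolarCharge 3000"
    return f"Here is a short ad for {prompt.split(': ', 1)[1]}."

def chain(review):
    # Step 1: pull a structured fact out of unstructured text.
    product = fake_llm(f"Extract the product name from this review: {review}")
    # Step 2: feed the first output into a second prompt.
    return fake_llm(f"Write a one-line ad for this product: {product}")

ad = chain("I love my SolarCharge 3000, it charges my phone in minutes!")
print(ad)
```

The key point is that the output of one prompt becomes the input of the next, which lets you break a complex task into simpler, more reliable steps.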
Cohere provides a fully managed LLM platform that enables teams to leverage the technology without having to worry about building infrastructure and managing deployments. In Module 7, you'll get a solid overview of the Cohere platform, including its serving framework, the types of foundation models it serves, the available endpoints, and the range of applications that can be built on top of it. This module will get you ready to build and deploy LLM-powered applications in the enterprise setting.
We have also gathered some extra material to complement your learning.
If you feel like you need to brush up on the basics of NLP or ML, Appendix 1 has the material to get you up to speed.
This appendix contains an introduction to natural language processing, including its history, language pre-processing techniques, and the machine learning models that have been used since the beginnings of NLP, covering both supervised and unsupervised techniques. It ends with classification: training classification models, evaluating them, and exploring their applications.
And if, after going through the course, you'd like some extra practice building apps, Appendix 2 has links to a series of code labs with a plethora of demos that combine different endpoints to build some really cool applications, such as article recommenders, invoice extractors, lazy writers, name generators, question-answering bots, and more! Some of these have also been deployed as applications using different platforms.
We are so excited to have you learning with us. Happy learning!