Natural Language Processing in TensorFlow

By attending the Natural Language Processing in TensorFlow workshop, participants will:

  • Build natural language processing systems using TensorFlow
  • Process text, including tokenization and representing sentences as vectors
  • Apply RNNs, GRUs, and LSTMs in TensorFlow
  • Train LSTMs on existing text to create original poetry and more
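The tokenization and padding steps listed above can be illustrated with a small dependency-free sketch. In the course itself these are handled by tf.keras utilities (a tokenizer that builds a word index and a padding helper); the functions below only mimic that behavior to show the idea:

```python
# Minimal word-level tokenizer: build a word index, convert sentences to
# integer sequences, and pad them to equal length -- mirroring what the
# tf.keras tokenization and padding utilities do in the course.

def fit_word_index(sentences, oov_token="<OOV>"):
    index = {oov_token: 1}  # reserve index 1 for out-of-vocabulary words
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in index:
                index[word] = len(index) + 1
    return index

def texts_to_sequences(sentences, index):
    oov = index["<OOV>"]
    return [[index.get(w, oov) for w in s.lower().split()] for s in sentences]

def pad_sequences(seqs, maxlen):
    # Pre-pad with zeros and truncate from the front, like the Keras default.
    return [([0] * (maxlen - len(s)) + s)[-maxlen:] for s in seqs]

sentences = ["I love my dog", "I love my cat"]
index = fit_word_index(sentences)
seqs = texts_to_sequences(sentences, index)
padded = pad_sequences(seqs, maxlen=5)
# padded -> [[0, 2, 3, 4, 5], [0, 2, 3, 4, 6]]
```

Equal-length integer sequences like `padded` are exactly what gets fed into a neural network's embedding layer.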

If you are a software developer who wants to build scalable AI-powered algorithms, you need to understand how to use the tools to build them. This Specialization will teach you best practices for using TensorFlow, a popular open-source framework for machine learning.

In this course of the TensorFlow Specialization, you will build natural language processing systems using TensorFlow. You will learn to process text, including tokenizing and representing sentences as vectors, so that they can be input to a neural network. You’ll also learn to apply RNNs, GRUs, and LSTMs in TensorFlow. Finally, you’ll get to train an LSTM on existing text to create original poetry!
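The poetry task is typically framed as next-word prediction: each line of text yields n-gram prefixes, and the token that follows each prefix becomes the label the LSTM learns to predict. The helper below is an illustrative sketch of that data-preparation step, not the course's exact code:

```python
# Prepare next-word training data from tokenized lines: each line becomes
# a set of n-gram prefixes, and the token that follows each prefix is the
# label the network is trained to predict.

def make_ngram_examples(line_sequences):
    xs, labels = [], []
    for seq in line_sequences:
        for i in range(1, len(seq)):
            xs.append(seq[:i])    # input: the first i tokens of the line
            labels.append(seq[i])  # label: the token that follows them
    return xs, labels

xs, labels = make_ngram_examples([[2, 3, 4, 5]])
# xs -> [[2], [2, 3], [2, 3, 4]]; labels -> [3, 4, 5]
```

In the course, the prefixes would then be padded to equal length and the labels one-hot encoded before training.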

The Machine Learning course and Deep Learning Specialization from Andrew Ng teach the most important and foundational principles of Machine Learning and Deep Learning. This new deeplearning.ai TensorFlow Specialization teaches you how to use TensorFlow to implement those principles so that you can start building and applying scalable models to real-world problems. To develop a deeper understanding of how neural networks work, we recommend that you take the Deep Learning Specialization.

COURSE AGENDA

  • Introduction
  • Looking into the code
  • Training the data
  • More on training the data
  • Notebook for lesson 1
  • Finding what the next word should be
  • Example
  • Predicting a word
  • Poetry!
  • Looking into the code
  • Laurence the poet!
  • Your next task
  • Introduction
  • LSTMs
  • Implementing LSTMs in code
  • Accuracy and loss
  • A word from Laurence
  • Looking into the code
  • Using a convolutional network
  • Going back to the IMDB dataset
  • Tips from Laurence
  • Introduction
  • The IMDB dataset
  • Looking into the details
  • How can we use vectors?
  • More into the details
  • Notebook for lesson 1
  • Remember the sarcasm dataset?
  • Building a classifier for the sarcasm dataset
  • Let’s talk about the loss function
  • Pre-tokenized datasets
  • Diving into the code (part 1)
  • Diving into the code (part 2)
  • Notebook for lesson 3
  • Introduction
  • Word based encodings
  • Using APIs
  • Notebook for lesson 1
  • Text to sequence
  • Looking more at the Tokenizer
  • Padding
  • Notebook for lesson 2
  • Sarcasm, really?
  • Working with the Tokenizer
  • Notebook for lesson 3
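The agenda items on finding what the next word should be, predicting a word, and generating poetry all revolve around one generation loop: feed the sequence so far to the trained model, take the most probable next token, append it, and repeat. A minimal sketch, using a toy stand-in (`predict_probs` is a hypothetical placeholder for the trained LSTM's softmax output):

```python
# Greedy text generation loop: repeatedly feed the sequence so far to the
# model, pick the most probable next token (argmax), and append it.
# `predict_probs` stands in for a trained model's softmax over the vocabulary.

def generate(seed, predict_probs, n_words):
    seq = list(seed)
    for _ in range(n_words):
        probs = predict_probs(seq)
        next_token = max(range(len(probs)), key=probs.__getitem__)  # argmax
        seq.append(next_token)
    return seq

# Toy stand-in model: always predicts "the token after the last one" (mod 5).
toy = lambda seq: [1.0 if t == (seq[-1] + 1) % 5 else 0.0 for t in range(5)]
print(generate([0], toy, 3))  # -> [0, 1, 2, 3]
```

Mapping the generated token IDs back through the word index turns the sequence into text, which is how the course's poetry examples are produced.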