Learning Library

← Back to Library

Restricted Boltzmann Machine for Recommendations

Key Points

  • A Restricted Boltzmann Machine (RBM) is a probabilistic graphical model that became popular for collaborative filtering after the Netflix Prize competition, where it was used to forecast user ratings for movies and outperformed most of its rivals.
  • RBMs consist of a visible layer and a hidden layer with full bipartite connections between them, while nodes within the same layer are deliberately **restricted** (no intra‑layer edges).
  • Each edge carries a weight that encodes the probability of activation, and training proceeds in two phases: a feed‑forward pass that identifies positive and negative associations, followed by a feed‑backward pass that adjusts the weights and biases and records a probability for every edge between the layers.
  • By feeding many examples through these two phases, the network learns the underlying probability distribution of the data, effectively uncovering hidden structure.
  • In a video‑recommendation scenario, the visible layer can represent videos a user has watched, and the hidden layer can capture latent categories (e.g., “machine learning,” “cats”), enabling personalized video suggestions.
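The bipartite structure described above can be sketched with a single cross-layer weight matrix. A minimal illustration in NumPy (the layer sizes and variable names are illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 6 "videos" in the visible layer, 3 latent categories hidden.
n_visible, n_hidden = 6, 3

# One weight per visible-hidden edge: a full bipartite connection.
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b_visible = np.zeros(n_visible)  # one bias per visible node
b_hidden = np.zeros(n_hidden)    # one bias per hidden node

# The "restricted" part: these are the only parameters. There is no
# visible-visible or hidden-hidden weight matrix at all.
print(W.shape)  # (6, 3)
```

Because there are no intra-layer edges, all hidden units can be updated in parallel given the visible layer (and vice versa), which is what makes RBM training tractable.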

Full Transcript

# Restricted Boltzmann Machine for Recommendations

**Source:** [https://www.youtube.com/watch?v=L3ynnRgpZwg](https://www.youtube.com/watch?v=L3ynnRgpZwg)
**Duration:** 00:06:01

## Sections

- [00:00:00](https://www.youtube.com/watch?v=L3ynnRgpZwg&t=0s) **RBM-Based Video Recommendation Overview** - The passage explains how Restricted Boltzmann Machines, a probabilistic graphical model with visible and hidden layers fully connected across but not within layers, are employed for collaborative filtering to generate personalized video suggestions.
- [00:03:22](https://www.youtube.com/watch?v=L3ynnRgpZwg&t=202s) **RBM Training and Applications** - The excerpt explains how a Restricted Boltzmann Machine uses a forward pass to detect positive and negative associations and a backward pass to update weights, biases, and edge probabilities, illustrating its role in video recommendation and broader tasks such as feature extraction and pattern recognition.

## Full Transcript
0:00 At this very moment, you've made a decision: to watch this video.
0:08 Thank you! But when we're done, you'll have another decision to make: do you want to watch another one?
0:14 Well, to assist you with that, you'll be presented with a personalized list of videos that might interest you.
0:20 And that's a great use case for something called a Restricted Boltzmann Machine, or RBM.
0:44 In fact, RBMs became increasingly popular after a Netflix competition, when they were used as a collaborative filtering strategy to forecast user ratings for movies, and outperformed most of their rivals.
0:59 A Restricted Boltzmann Machine is a probabilistic graphical model for unsupervised learning that is used to discover hidden structures in data.
1:09 And a video recommendation system is just a perfect application of that.
1:13 RBMs are made up of two parts. So there's the visible layer, which contains some nodes, and then there is the hidden layer.
1:39 Now, every node in the visible layer is connected to every node in the hidden layer. So it's one-to-many: each node here goes to every node in the hidden layer, and so is the case for all of the other nodes in the visible layer.
2:00 The restricted part here comes about because no node is connected to any other node in the same layer.
2:09 So, you can see here the visible nodes are not connected to each other, and nor are the hidden nodes.
2:15 Now all of these nodes are connected by edges that have something called weights associated with them.
2:24 And the weights represent the probability of being active.
2:29 Now this is a very efficient structure for a neural network, because one input layer can be used for many hidden layers for training.
2:39 Now to train the network, we need to provide multiple inputs.
2:47 The nodes in the visible layer will receive the training data. This is multiplied by the weights and added to a bias value at the hidden layer.
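The step just described, multiplying the visible activations by the weights and adding the hidden bias, can be sketched in NumPy. This is a minimal sketch: the sigmoid activation and the sampling step are the usual RBM convention, which the video does not spell out, and the example values are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(v, W, b_hidden, rng):
    """Feed-forward pass: visible activations -> hidden probabilities + sample."""
    p_hidden = sigmoid(v @ W + b_hidden)  # input times weights, plus bias
    # Each hidden unit switches on with its computed probability.
    h_sample = (rng.random(p_hidden.shape) < p_hidden).astype(float)
    return p_hidden, h_sample

# Example: a user has watched videos 0, 2, and 4 out of 6.
rng = np.random.default_rng(0)
v = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
W = rng.normal(0.0, 0.1, size=(6, 3))
p_h, h = feed_forward(v, W, np.zeros(3), rng)
```

With all weights and biases at zero, every hidden unit would activate with probability 0.5; training shifts these probabilities toward the structure in the data.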
2:59 This is the first phase of an RBM, and it's called the Feed Forward Pass.
3:13 Here we're basically identifying the positive associations, meaning the link between the visible unit and the hidden unit is a match. So, maybe this one is a match.
3:25 And we're looking for a negative association when the link between the two nodes is actually negative.
3:33 The second phase is the Feed Backwards Pass.
3:42 And this pass is really used to determine how weightings should be adjusted.
3:51 And that pass does three things. Basically, it adjusts the weights, it adjusts the biases, and it logs a probability for every edge between the layers.
4:02 Putting enough training data through these two phases teaches us the pattern that is responsible for activating the hidden nodes.
4:09 We're basically learning the probability distribution across the dataset.
4:13 Now, in our video recommendation example, our visible layer could consist of videos that a person has watched.
4:21 And then our hidden layer, well, that could consist of a classification for each video, such as "what is the video about?" Machine learning, Python programming, cats.
4:33 Or the hidden layer could be something else, like the style of video. So like a demo video, a vlog, and a talking head video.
4:40 By observing the videos a person is watching, our RBM can adjust the weighting and bias to determine things such as how likely a person who is interested in machine learning videos is also interested in Python videos.
4:56 Now, beyond recommendation engines, which are an example of collaborative filtering, there are many other use cases for RBM.
5:04 For example, feature extraction and pattern recognition.
5:13 And that could be used to understand things like handwritten text, or we can identify structures in data sets, like the hierarchy of what causes events to happen.
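The two phases described above correspond to one step of contrastive divergence (CD-1), the standard training rule for RBMs. The video does not name the algorithm, so this is a hedged sketch of the usual update, with illustrative sizes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_v, b_h, lr, rng):
    """One contrastive-divergence step: feed forward, feed backwards, update."""
    # Feed Forward Pass: hidden probabilities from the training example.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Feed Backwards Pass: reconstruct the visible layer, then re-activate hidden.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # The "three things": adjust weights, adjust biases, using the gap between
    # positive (data-driven) and negative (reconstruction-driven) associations.
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)
    return W, b_v, b_h

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(6, 3))
b_v, b_h = np.zeros(6), np.zeros(3)
v0 = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])  # videos this user watched
W, b_v, b_h = cd1_update(v0, W, b_v, b_h, lr=0.1, rng=rng)
```

Repeating this update over many users nudges the weights so that hidden units come to represent latent categories, which is exactly the probability-distribution learning the transcript describes.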
5:27 Using an RBM can be a very powerful way to learn about your data without having to write code around iterating over every node and adjusting those weights manually.
5:39 And if you do have a bit more time, perhaps the recommendation system can find you another video that suits your interests.
5:47 Hopefully, one from the IBM Technology channel.
5:52 If you have any questions, please drop us a line below. And if you want to see more videos like this in the future, please like and subscribe.
6:00 Thanks for watching.