Abstract
Restricted Boltzmann machines (RBMs) are a powerful class of generative models, but their training requires computing a gradient that, unlike supervised backpropagation on typical loss functions, is notoriously difficult even to approximate. Here, we show that properly combining standard gradient updates with an off-gradient direction, constructed from samples of the RBM ground state (mode), improves training dramatically over traditional gradient methods. This approach, which we call ‘mode-assisted training’, promotes faster training and stability, in addition to a lower converged relative entropy (KL divergence). We demonstrate its efficacy on synthetic datasets, where we can compute KL divergences exactly, as well as on a larger machine learning standard (MNIST). The proposed mode-assisted training can be applied in conjunction with any given gradient method, and is easily extended to more general energy-based neural network structures such as deep, convolutional, and unrestricted Boltzmann machines.
Introduction
Boltzmann machines^{1} and their restricted version (RBMs) are unsupervised generative models applied to a variety of machine learning problems^{2}. They enjoy a universal approximation theorem for discrete probability distributions^{3}, are used as building blocks for deep-belief networks^{4} and, in no small feat, can even represent correlated states in quantum many-body systems^{5,6}. Despite these advantages, the limitations of the most popular training algorithms for RBMs, combined with rapid advances in supervised learning techniques, have led to the sidelining of their unsupervised learning, also known as “pretraining”, in favor of supervised backpropagation from random initial conditions^{4}.
However, supervised learning requires large datasets of labeled examples, and even then, state-of-the-art neural networks have been shown to be vulnerable to what are called adversarial examples^{7}: slight perturbations of the input that ‘fool’ the network. On the other hand, unsupervised pretraining is possible in the absence of labels, and is known to be a strong regularizer^{8}, often resulting in better generalization for supervised models. Since adversarial vulnerability can be seen as a failure of generalization, an improvement in unsupervised training could lead to more robust performance in a downstream task. This motivates the search for better unsupervised training methods.
To set the stage for our training method, we provide a short introduction to RBMs and then describe standard training approaches and their limitations. RBMs are undirected weighted graphs with a bipartite structure that differentiates between n visible nodes, v ∈ {0, 1}^{n}, and m latent, or ‘hidden’, nodes, h ∈ {0, 1}^{m}, not directly constrained by the data^{2}. (Note that an RBM does not allow connections within a layer.) These states are usually taken to be binary but can be generalized. Each state of the machine corresponds to an energy of the form

$$E({\bf{v}},{\bf{h}})=-\sum _{i}{a}_{i}{v}_{i}-\sum _{j}{b}_{j}{h}_{j}-\sum _{i,j}{v}_{i}{w}_{ij}{h}_{j},\qquad (1)$$
where the biases \({\bf{a}}\in {{\mathbb{R}}}^{n}\), \({\bf{b}}\in {{\mathbb{R}}}^{m}\), and weights \({\bf{W}}\in {{\mathbb{R}}}^{n\times m}\) are the learnable parameters. This induces a distribution over states given by a Boltzmann-Gibbs distribution,

$$p({\bf{v}},{\bf{h}})=\frac{{e}^{-E({\bf{v}},{\bf{h}})}}{{\mathcal{Z}}}.\qquad (2)$$
The normalizing factor, \({\mathcal{Z}}={\sum }_{\{{\bf{v}}\}}{\sum }_{\{{\bf{h}}\}}{e}^{-E({\bf{v}},{\bf{h}})}\), is the formidable partition function, which involves a sum over an exponentially scaling number of states, making its exact computation infeasible. Additionally, the bipartite structure of the RBM makes the hidden nodes conditionally independent given the visible nodes (and vice versa), with a closed-form conditional distribution given by^{9} p(h_{j} = 1∣v) = σ(∑_{i}w_{ij}v_{i} + b_{j}), where \(\sigma (x)={(1+{e}^{-x})}^{-1}\).
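Because the conditional factorizes across hidden units, sampling the hidden layer given a visible configuration is trivial. A minimal NumPy sketch (the helper names are ours, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, b, rng):
    # Factorized conditional: p(h_j = 1 | v) = sigma(sum_i w_ij v_i + b_j)
    p_h = sigmoid(v @ W + b)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    return h, p_h

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, (6, 4))   # 6 visible, 4 hidden nodes
v = rng.integers(0, 2, 6).astype(float)
h, p_h = sample_hidden(v, W, np.zeros(4), rng)
```

By symmetry of the bipartite graph, sampling the visible layer given the hidden one uses `h @ W.T + a` instead.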
RBMs can learn to represent some unknown data distribution over the visible layer, q(v), typically specified by a data set of samples. We denote the unique elements of this data set as \({\mathcal{D}}=\{{{\bf{v}}}_{1},\cdots \ ,{{\bf{v}}}_{{n}_{d}}\}\subset \Omega \), where Ω is the space of all binary sequences of length n. Let us assume further that all data points have equal probability over the support, i.e., q(v_{i}) = 1/n_{d}. Since most real-world datasets consist of unique elements with no exact repeats, this class of distributions includes all relevant ones.
The RBM is tasked with matching its marginal distribution over the visible layer, p(v) = ∑_{{h}}p(v, h), to this unknown data distribution. The former can be written as,

$$p({\bf{v}})=\frac{1}{{\mathcal{Z}}}\,{e}^{\sum _{i}{a}_{i}{v}_{i}}\prod _{j}\left(1+{e}^{{b}_{j}+\sum _{i}{w}_{ij}{v}_{i}}\right).\qquad (3)$$
Training an RBM then amounts to a search for the appropriate weights and biases that minimize the Kullback-Leibler (KL) divergence between the two distributions,

$$\,\text{KL}\,(q| | p)=\sum _{{\bf{v}}}q({\bf{v}})\,\mathrm{log}\,\frac{q({\bf{v}})}{p({\bf{v}})}.\qquad (4)$$
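For small RBMs, the KL divergence between a uniform data distribution and the model marginal can be computed exactly by enumerating the visible space, as done for the synthetic datasets later in the paper. A sketch under that assumption (function name and implementation are ours):

```python
import itertools
import numpy as np

def exact_kl(data, W, a, b):
    """Exact KL(q || p) for a small RBM by enumerating all visible states.
    q is uniform over the rows of `data` (assumed unique)."""
    n, m = W.shape
    vs = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    # Unnormalized log p(v): a.v + sum_j log(1 + exp(b_j + (vW)_j)), h summed out.
    logp = vs @ a + np.log1p(np.exp(vs @ W + b)).sum(axis=1)
    logp -= logp.max()                 # numerical stability
    p = np.exp(logp)
    p /= p.sum()                       # exact marginal p(v)
    index = {tuple(int(x) for x in v): i for i, v in enumerate(vs)}
    q = 1.0 / len(data)
    return float(sum(q * np.log(q / p[index[tuple(int(x) for x in d)]]) for d in data))
```

With all parameters zero, p(v) is uniform over the 2^n visible states, so the KL against a uniform q over n_d points is log(2^n/n_d).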
The optimization of Eq. (4) is typically done via stochastic gradient descent with respect to the RBM parameters, which leads to weight updates of the form^{10},

$$\Delta {w}_{ij}=\epsilon \left({\langle {v}_{i}{h}_{j}\rangle }_{\text{D}}-{\langle {v}_{i}{h}_{j}\rangle }_{\text{M}}\right),\qquad (5)$$

where ϵ is the learning rate.
The first term on the rhs of Eq. (5) is an expectation with the hidden nodes driven by the data, and is referred to as the “data term”. Since the conditional distributions across the hidden nodes are factorial and known in closed form, this inference problem is easy in the RBM case. The second term on the rhs of Eq. (5), instead, is an expectation over the model distribution with no nodes fixed, and is called the “model term”. The exact calculation of this term requires computing the partition function of the RBM, which has been proved hard even to estimate^{11}.
This expectation value is popularly approximated by a Markov chain Monte Carlo (MCMC) procedure dubbed ‘contrastive divergence’ (CD)^{9}. Since CD initializes Markov chains from elements of the dataset, difficulties arise when the model distribution represented by the RBM contains modes where the data distribution has negligible probability, sometimes referred to as ‘spurious modes’. The prohibitively slow mixing due to random-walk exploration, together with the typically high dimensionality of the problem, makes it difficult for CD to find and correct the probability of these modes.
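A minimal CD-1 parameter update can be sketched as follows. This is a standard textbook-style implementation, not the authors' code: the model term is approximated by a single Gibbs step ('reconstruction') starting from the data batch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(V, W, a, b, lr, rng):
    """One CD-1 update from a batch V whose rows are data vectors."""
    # Data term: hidden probabilities driven by the data.
    ph = sigmoid(V @ W + b)
    h = (rng.random(ph.shape) < ph).astype(float)
    # One Gibbs step back to the visible layer (the 'reconstruction').
    v1 = (rng.random(V.shape) < sigmoid(h @ W.T + a)).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    # Model term approximated by the reconstruction statistics.
    B = V.shape[0]
    W = W + lr * (V.T @ ph - v1.T @ ph1) / B
    a = a + lr * (V - v1).mean(axis=0)
    b = b + lr * (ph - ph1).mean(axis=0)
    return W, a, b
```

Running the chain for k steps instead of one gives CD-k; keeping the chain state between updates gives PCD.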
In a recent work, a memcomputing-assisted training scheme for RBMs^{12} was proposed to address this training difficulty. Memcomputing^{13} is a novel computing paradigm in which memory (time non-locality) assists in the solution of hard decision and optimization problems^{14,15}. In that previous work^{12}, the difficult model expectation in Eq. (5) was replaced by a sample of the RBM mode obtained by a memcomputing solver, which led to better-quality solutions than CD in a downstream classification task. However, despite demonstrating a significant reduction in the number of pretraining iterations necessary to achieve a given level of classification accuracy, as well as a total performance gain over CD, that algorithm^{12} does not fully exploit samples of the mode. In particular, it does not give rise to training advantages over standard methods in the unsupervised setting.
In the present work, we overcome this limitation. We show that appropriately combining samples of the RBM’s mode (ground state) with data-initiated chains (as in CD) not only improves considerably the model quality over CD and other MCMC procedures, but also improves the stability of the pretraining routine. Specifically, we (i) introduce a principled schedule for incorporating samples of the RBM ground state into pretraining, (ii) derive an appropriate mode-driven learning rate, (iii) compare against other state-of-the-art unsupervised pretraining approaches without the need for supervised fine-tuning, and (iv) prove advantageous properties of the method. To corroborate our method, we show the exact KL divergence/log-likelihood achieved on small synthetic datasets and on the MNIST dataset. In all cases, we find that mode-assisted training learns more accurate models than several other training methods, such as CD, persistent contrastive divergence (PCD), and parallel tempering (PT), used in tandem with enhanced gradient RBMs (ERBMs) and centered RBMs (CRBMs)^{16}.
Results
After these preliminaries, we can now describe our mode-assisted training method. It consists of replacing the average in the model term of Eq. (5) with the mode of p(v) at appropriate steps of the training procedure. However, p(v) is very cumbersome to compute (see Eq. (3)), which would add a considerable computational burden. Instead, we sample the mode of p(v, h), the joint model distribution of the RBM.
The rationale for replacing the mode, v^{+}, of p(v) with the visible configuration, v^{*}, of the mode of p(v, h) is that the two modes coincide with high probability under scenarios typical of the different stages of RBM pretraining. We prove this in Supplementary Note 1; here we provide numerical evidence of this fact.
Mode correspondence of joint and marginal distributions
To illustrate the equivalence between the modes of p(v) and p(v, h), let us begin by expressing the joint probability mass function (PMF) as the product of the marginal PMF over the visible layer and the conditional PMF over the hidden layer, p(v, h) = p(v)p(h∣v). For any given visible configuration v, we then have \({\max }_{{\bf{h}}}p({\bf{v}},{\bf{h}})=p({\bf{v}})\left[{\max }_{{\bf{h}}}p({\bf{h}}| {\bf{v}})\right]\). We can then define the hidden “activation” of v to be

$$r({\bf{v}})={\max }_{{\bf{h}}}p({\bf{h}}| {\bf{v}})=\prod _{j}\max \left(\sigma ({z}_{j}),1-\sigma ({z}_{j})\right),\quad {z}_{j}=\sum _{i}{w}_{ij}{v}_{i}+{b}_{j},\qquad (6)$$
which allows us to write \({\max }_{{\bf{h}}}p({\bf{v}},{\bf{h}})=p({\bf{v}})r({\bf{v}})\). Note that we can interpret r(v) as a measure of the “certainty” that the hidden nodes acquire the value 0 or 1.
It is then clear that we can write the probability amplitude of the mode of the joint PMF as \({\max }_{\{{\bf{v}},{\bf{h}}\}}p({\bf{v}},{\bf{h}})={\max }_{{\bf{v}}}({\max }_{{\bf{h}}}p({\bf{v}},{\bf{h}})) = {\max }_{{\bf{v}}}(p({\bf{v}})r({\bf{v}}))\,\le\, {\max }_{{\bf{v}}}(p({\bf{v}}))=p({{\bf{v}}}^{+})\), where we have used the fact that r(v) ≤ 1 and v^{+} is the mode of the marginal PMF, p(v). If r(v^{+}) = 1 then we have modal equivalence of the joint and marginal PMFs.
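Thanks to the factorized conditional, r(v) is cheap to evaluate: each hidden unit contributes the larger of p_j and 1 − p_j. A sketch (helper name is ours):

```python
import numpy as np

def hidden_activation(v, W, b):
    """r(v) = max_h p(h | v) = prod_j max(p_j, 1 - p_j),
    with p_j = sigma(sum_i w_ij v_i + b_j)."""
    p = 1.0 / (1.0 + np.exp(-(v @ W + b)))
    return float(np.prod(np.maximum(p, 1.0 - p)))
```

With zero weights every p_j = 1/2, so r(v) = 2^{−m} (maximal hidden uncertainty); as the weights grow, r(v) approaches 1 and the joint and marginal modes coincide.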
In Fig. 1, we plot the evolution of r(v^{+}) as a function of the number of CD-1 training iterations for a shifting bar synthetic dataset, which is small enough that we can compute the exact mode of p(v) at any iteration. The figure indeed shows that r(v^{+}) approaches 1 rather quickly as pretraining proceeds. The activation of a random visible configuration is shown for comparison.
In Supplementary Note 1, we also prove that r(v^{+}) being close to 1 is not necessary for modal equivalence. In fact, the two modes can still be equal even when the weights are small (and thus r(v^{+}) is smaller). Additionally, we show in Supplementary Note 1 that mode-assisted training is more effective at exploring the PMF of the model distribution for RBM instances of greater frustration. The latter is a measure of the degeneracy of the low-energy states of an RBM, and thus of the difficulty of finding the ground-state configuration. Since it was shown that the frustration of an RBM increases as pretraining proceeds^{17}, to effectively utilize the power of mode-assisted training, the frequency of mode updates should be higher at the later stages of training than at the earlier stages.
Optimal modetraining schedule
The results from the previous subsection suggest a schedule for the mode-assisted training routine that performs mode updates more frequently the further pretraining has progressed.
To realize this, we use a sigmoid, σ, to calculate the probability of replacing the CD model term with a mode-driven term at iteration step n,

$${P}_{\text{mode}}(n)={P}_{\max }\,\sigma (\alpha n+\beta ).\qquad (7)$$
Here, \(0\, <\, {P}_{\max }\,\le\, 1\) is the maximum probability of employing a mode update, and α and β are parameters that control how the mode updates are introduced into pretraining. They are chosen such that mode updates become dominant only when both conditions of large weights and frustration are met (see the “Methods” section for the values of these parameters). Initially, P_{mode} is small, since the modes of the joint and marginal distributions are generally unequal, and it gradually rises to its maximum value when the modes are of equal magnitude. Note that one may employ different functions to quantify the degree to which the joint- and marginal-distribution modes equalize during training. However, we have found that the sigmoid works well enough in practice.
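The schedule of Eq. (7) is a one-liner; a sketch using the α = 20/N and β = −6 reported in the Methods section:

```python
import numpy as np

def p_mode(n, N, P_max=0.1):
    """Schedule of Eq. (7): P_mode(n) = P_max * sigma(alpha*n + beta),
    with alpha = 20/N and beta = -6 as in the Methods section."""
    alpha, beta = 20.0 / N, -6.0
    return P_max / (1.0 + np.exp(-(alpha * n + beta)))
```

With these values, P_mode starts near P_max·σ(−6) ≈ 0.0025·P_max and saturates at P_max well before the final iteration.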
Combining Markov chains with mode updates
We are now ready to outline the full procedure of mode-assisted training, which combines an MCMC method with mode updates following the schedule (7). Although one may choose any variation of MCMC to train RBMs, for definiteness we consider here the standard training method, CD^{9}. In this case, weight updates follow the approximate KL(q∣∣p) gradient. As discussed in the introduction, it evaluates to a difference of two expectations, called the data term and the model term, which we can write as

$$\Delta {w}_{ij}^{\,\text{CD}\,}={\epsilon }^{\,\text{CD}\,}\left({\langle {v}_{i}{h}_{j}\rangle }_{\text{D}}-{\langle {v}_{i}{h}_{j}\rangle }_{k}\right),\qquad (8)$$
where ϵ^{CD} is the CD learning rate, and the expectation in the second term is taken over the distribution reconstructed by a Markov chain initialized from the dataset after k Gibbs samples (k = 1 in most cases). When driving the weights with samples of the RBM ground state according to the schedule (7), we instead use the following update,

$$\Delta {w}_{ij}^{\,\text{mode}\,}={\epsilon }^{\,\text{mode}\,}\left({\langle {v}_{i}{h}_{j}\rangle }_{\text{D}}-{[{v}_{i}{h}_{j}]}_{p}\right),\qquad (9)$$
where [⋅]_{p} denotes evaluation at the mode of the joint RBM model distribution. Note that the mode-update learning rate, ϵ^{mode}, may differ from the CD learning rate, ϵ^{CD}.
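A sketch of the weight part of Eq. (9), assuming a mode (v*, h*) has already been obtained from some ground-state solver (the function name is ours):

```python
import numpy as np

def mode_update(V, v_star, h_star, W, b, lr_mode):
    """Off-gradient update of Eq. (9): the data term is kept, while the
    model expectation is replaced by the outer product of the sampled
    joint mode (v*, h*)."""
    ph = 1.0 / (1.0 + np.exp(-(V @ W + b)))   # p(h_j = 1 | v) per data row
    data_term = V.T @ ph / V.shape[0]
    model_term = np.outer(v_star, h_star)
    return W + lr_mode * (data_term - model_term)
```

Note that the model term here is a single configuration rather than an average, which is precisely what makes the update deviate from the gradient direction.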
We also stress that the updates in Eq. (9) are in an off-gradient direction. As we show now, this is the reason for the increased stability of the training over MCMC approaches, and for its convergence to arbitrarily small KL divergences.
Stability and convergence
The data term, which is identical in both Eqs. (8) and (9), tends to increase the weights associated with the visible node configurations in the dataset, thereby increasing their relative probabilities compared with states not in the support, \({\bf{v}}\in \Omega \backslash {\mathcal{D}}\). Instead, the model term decreases the weights/probability corresponding to highly probable model states, or equivalently, minimizes the partition function^{2}. CD does this poorly and often diverges, while mode-assisted training achieves this with better stability and faster convergence (see Fig. 2). We provide here an intuitive explanation of this phenomenon; a formal treatment is provided in Supplementary Note 2.
The pretraining routine can be broken down into two phases. In the first phase, the training procedure attempts to discover the support \({\mathcal{D}}\) of the data distribution q(v). We call this the “discovery phase”. To see this, consider a randomly initialized RBM with small weights. These small and uncorrelated weights give rise to RBM energies close to zero for all nodal states, E(v, h) ≈ 0 for all v and h (see Eq. (1)). This results in a model distribution p(v, h) that is almost uniform.
Therefore, we see that in the discovery phase of training, the model term plays little role, as it simply pushes down on the weights in a practically uniform manner, with \({\langle {v}_{i}{h}_{j}\rangle }_{\text{M}}\approx 0.25\). On the other hand, the data term drives the initial phase of the training by increasing the marginal probability of the visible states in the support, \({\bf{v}}\in {\mathcal{D}}\). We can then employ a large learning rate (say, ϵ^{CD} = 1) at the beginning of training, driving the visible layer configurations in the dataset, \({\mathcal{D}}\), to high probability versus configurations outside the support. Empirically, we find that CD performs reasonably well in the discovery phase, and is quickly able to “find” the visible states in the support.
Having discovered the support, we arrive at the second phase of training, where we have to bring the model distribution as close to uniformity as possible over the support in order to minimize the KL divergence. We call this the “matching phase”, since here we bring the model distribution as close to the data distribution as possible. CD usually performs poorly in this phase (see Fig. 2). To see this most directly, consider a visible state with a slightly larger probability than the other states. The model term would need to locate and “push down” on this state to lower its probability and, in turn, increase the uniformity of the distribution over the support. However, for any CD approximation of the model term, this rarely happens in a timely manner, as the mixing of the MCMC chain is far too slow to locate this state before the training diverges.
This is where samples of the mode are most effective, and can assist in the correction of the states’ amplitudes. As we have anticipated, finding the modal state, v^{*}, of the model distribution p(v, h) allows us to immediately locate the mode, v^{+}, of the marginal probability p(v), and decrease the weights corresponding to that state, which in turn “pushes down” on its probability through an iteration of weight updates. This “push” may result in another state “popping” up and becoming the new modal state. However, oftentimes the probability amplitude of this new state will be less than that of the previous mode. The result is a training routine that “cycles” through any dominant state that emerges at each iteration; the probability amplitude of the mode decreases as training proceeds until the probability amplitudes of all the states in the support become equal (see the formal demonstration in Supplementary Note 2), yielding the desired uniform distribution over the support. This can be visualized as a “seesaw” between the dominant states, with the oscillation amplitude of the seesaw decaying to zero in time.
We outline the pseudocode for mode-assisted training in Algorithm 1, and a visual depiction of the training side by side with CD-1 on a small data set is shown in Fig. 2. KL divergences for CD (Fig. 2a) and mode-assisted training (Fig. 2b) are presented in the top row. In Fig. 2c, the length of a bar above or below zero implies an over- or underestimation, respectively, of the probability CD assigns to a given data vector. In contrast, Fig. 2d shows that the mode-assisted training algorithm searches for the state with the highest probability and decreases the weights corresponding to that state, which in turn decreases the probability assigned to it by the model, aligning it closer to the data distribution. This prevents any probability from accumulating far away from the data distribution and eventually achieves a close-to-perfect model. The average height of the individual divergences is exactly the KL divergence plotted in Fig. 2a, b.
As should now be clear, these mode-driven updates are deviations from the gradient direction, since in general the mode of the model distribution differs from its expected value. This makes the mode-training algorithm, which mixes mode-driven samples and data-driven ones, distinct from gradient descent. This is also supported by the fact that our method tends toward a particular class of distributions (uniform), whereas gradient descent would settle in some local minima or saddle points of the KL landscape.
Algorithm 1
Unsupervised learning of a restricted Boltzmann machine with the modeassisted training algorithm
1: procedure MT\(({P}_{\max },\alpha ,\beta ,{\{{\epsilon }_{n}^{\,\text{CD}\,}\}}_{n = 1}^{N},N)\) 
2: \({\theta }_{0} \sim {\mathcal{N}}(0,0.01)\) 
3: for i = 1; i ≤ N; i + + do 
4: p_{mode} ← P_{max}σ(αi + β) 
5: Sample u ~ U[0, 1] 
6: if u ≤ p_{mode} then 
7: v^{*}, h^{*}, E_{0} ← argmin_{v,h}E(v, h) 
8: \(\gamma \leftarrow -\frac{{E}_{0}}{(n+1)(m+1)}\) 
9: \({\theta }_{i}\leftarrow {\theta }_{i-1}+\gamma {\epsilon }_{i}^{\,\text{CD}}\Delta {\theta }^{\text{mode}}\) ⊳ Eq. (9) 
10: else 
11: \({\theta }_{i}\leftarrow {\theta }_{i-1}+{\epsilon }_{i}^{\,\text{CD}}\Delta {\theta }^{\text{CD}}\) ⊳ Eq. (8) 
12: end if 
13: end for 
14: return θ_{N} 
15: end procedure 
The free parameters in this method are the schedule of the mode samples, P_{mode}(n) (defined by \({P}_{\max }\), α, and β in Eq. (7)), and the CD learning rate, ϵ^{CD}. With ϵ^{CD} fixed, we set ϵ^{mode} = γϵ^{CD}, where γ = −E_{0}/[(n + 1)(m + 1)], with E_{0}(<0) the ground-state energy of the corresponding RBM with nodal values in {−1, 1}^{n+m}. This particular choice of γ is an upper bound on the learning rate that minimizes the RBM energy variance over all states (see Supplementary Note 2 for the proof of this statement).
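Algorithm 1 can be sketched end-to-end in NumPy for a tiny RBM, using brute-force enumeration in place of a memcomputing solver (feasible only for small n). Function names are ours; for brevity only the weights are updated, and the {0, 1} ground-state energy stands in for the {−1, 1} one used in the paper:

```python
import itertools
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h, W, a, b):
    return -(a @ v + b @ h + v @ W @ h)

def ground_state(W, a, b):
    """Brute-force joint mode (v*, h*): for fixed v, the optimal h is
    component-wise h_j = 1 iff its field b_j + (vW)_j is positive."""
    n = len(a)
    best = (None, None, np.inf)
    for bits in itertools.product([0.0, 1.0], repeat=n):
        v = np.array(bits)
        h = (v @ W + b > 0).astype(float)
        E = energy(v, h, W, a, b)
        if E < best[2]:
            best = (v, h, E)
    return best

def mode_assisted_training(data, m, N, lr_cd=0.2, P_max=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    W = rng.normal(0.0, 0.01, (n, m))
    a, b = np.zeros(n), np.zeros(m)
    alpha, beta = 20.0 / N, -6.0
    for i in range(1, N + 1):
        ph = sigmoid(data @ W + b)                    # data term
        data_W = data.T @ ph / len(data)
        if rng.random() <= P_max * sigmoid(alpha * i + beta):
            v_s, h_s, E0 = ground_state(W, a, b)      # mode update (Eq. (9))
            gamma = -E0 / ((n + 1) * (m + 1))
            W += gamma * lr_cd * (data_W - np.outer(v_s, h_s))
        else:                                         # CD-1 update (Eq. (8))
            h1 = (rng.random(ph.shape) < ph).astype(float)
            v1 = (rng.random(data.shape) < sigmoid(h1 @ W.T + a)).astype(float)
            W += lr_cd * (data_W - v1.T @ sigmoid(v1 @ W + b) / len(data))
    return W, a, b
```

On a toy shifting bar set such as the identity matrix (L = 6, B = 1), this loop runs in well under a second.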
We find that mode-assisted training is not very sensitive to the parameters chosen. In fact, as long as the mode samples are incorporated after the joint and marginal modes equilibrate, the training is stabilized and the learned distribution tends to uniformity (see also Supplementary Note 2). This result reinforces the intuitive notion that the pushes on the mode provide a stabilizing quality to the training over CD (or any other MCMC approach), which can otherwise diverge when mixing times grow too long at later stages of training.
To realize this method in practice, one must supplement standard gradient updates with updates constructed from samples of the ground state of the RBM. Finding this ground state is equivalent to a quadratic unconstrained binary optimization (QUBO) problem, known to be NP-hard^{18}. Therefore, although we can compute the ground state of RBMs exactly for small datasets, for efficient mode sampling in realistically sized cases we employ a memcomputing solver, which compares favorably to other state-of-the-art optimizers in efficiently sampling the ground states of similar non-convex landscapes^{19,20}. Utilizing this approach, and accounting for CPU time, we find that mode-assisted training leads to models with lower KL divergences than CD. The details of our implementation, including computational complexity and energy comparisons with MCMC, can be found in Supplementary Note 3. In principle, however, one could use other optimizers for mode-assisted training.
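The QUBO equivalence is direct: stacking x = (v, h) as binary variables, the RBM energy becomes a quadratic form x^T Q x, since x_i^2 = x_i puts linear terms on the diagonal. A sketch of the mapping (the helper is ours, not the paper's):

```python
import numpy as np

def rbm_to_qubo(W, a, b):
    """Build Q over x = (v, h) so that x^T Q x = E(v, h) for binary x.
    Linear terms sit on the diagonal; couplings fill the off-diagonal block."""
    n, m = W.shape
    Q = np.zeros((n + m, n + m))
    Q[np.arange(n), np.arange(n)] = -a          # visible biases
    Q[n + np.arange(m), n + np.arange(m)] = -b  # hidden biases
    Q[:n, n:] = -W                              # visible-hidden couplings
    return Q
```

Minimizing x^T Q x over binary x with any QUBO solver then yields the joint mode (v*, h*).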
Importance of representability
Note that since mode-assisted training is driven to distributions of a particular form, instead of local minima as in the case of CD or other gradient approaches, the representability of the RBM becomes important. The ability of an RBM to represent a given data distribution is determined by the number of hidden nodes: universal representability is guaranteed with n_{d} + 1 hidden nodes^{3}. In other words, one more hidden node than the number of visible configurations with nonzero probability is sufficient (but perhaps not necessary) for general representability. In practice, this bound is found to be very conservative, and typically far fewer nodes are needed for a reasonable solution.
Representability can become an issue in mode-assisted training when the parameter space of the RBM does not include the uniform distribution over the support (or a reasonable approximation of it). Since mode-assisted training generally proceeds in a non-gradient direction, it may then settle on a worse solution than a local optimum obtainable by CD. This is a signal that more hidden nodes are required for an optimal solution.
Since most natural datasets live on a low-dimensional manifold of the total visible phase space, n_{d}/∣Ω∣ ≪ 1, the number of hidden nodes required typically scales polynomially with the size of the problem, versus the exponential scaling of the visible phase space. This makes representability a surmountable problem for mode-assisted training, even in full-size problems. In Supplementary Note 2, we demonstrate that, regardless of representability, a mode-assisted weight update reduces the variance of the energies across the RBM states. To this end, the examples of Figs. 2 and 3 show that mode-assisted training does not necessarily fail if the number of hidden nodes is less than that needed to guarantee representability.
Examples of modeassisted training on datasets
As examples of our method, we have computed the log-likelihoods achieved with mode-assisted training on two synthetic datasets and one realistic dataset (MNIST), and compared the results against the best log-likelihoods achieved with CD-1, PCD-1, and PT on standard RBMs, ERBMs, and CRBMs^{16}. For the small synthetic datasets we could also compute the exact log-likelihoods, providing an even stronger comparison. An implementation of the mode-assisted training algorithm on these synthetic data sets is available in the Supplementary Code that accompanies this paper. For the larger MNIST case, mode sampling was done via simulation of a digital memcomputing machine based on ref. ^{21}. The specific details of our implementation can be found in Supplementary Note 3.
We plot an example of training progress on a moderately large synthetic problem in Fig. 3. Reported is the KL divergence (which differs from the log-likelihood by a constant offset, independent of the RBM parameters^{2}) of a slightly bigger 14 × 10 RBM as a function of the number of parameter updates on an L = 14, B = 7 shifting bar set, for both CD-1 and mode-assisted training. We consider two learning rate schedules: constant (ϵ^{CD} = 0.05) and exponentially decaying (ϵ^{CD}(n) = e^{−cn}, with c = 4 and n ∈ [0, 1] the fraction of completed training iterations). Additionally, every time a mode sample is taken, CD is allowed to run with k = 720, a number scaled to the equivalent computational cost of taking a mode sample. Thus, the x-axes in Fig. 3 can be interpreted as wall time. The computational comparison between a mode sample using memcomputing and iterations of CD is discussed in greater detail in Supplementary Note 3. In both cases, even when computational cost is factored in, mode-assisted training leads to better solutions and proceeds in a much more stable way across runs (lower KL variance at convergence). Importantly, mode-assisted training never diverges, while CD oftentimes does. Following the intuition established in the “Mode correspondence of joint and marginal distributions” section, using larger learning rates in the CD-dominated phase accelerates the convergence of mode-assisted training.
It is known that using CD to train RBMs can result in poor models of the data distribution^{22}, for which PCD and PT are recommended. We note that for the mode-assisted training employed in this paper, CD-1 was used as the gradient approximation (except for MNIST, where PCD-1 was used). Impressively, in all cases tested, the mode samples stabilized the CD algorithm sufficiently to overcome the other, more involved approximations (PT) and model enhancements (centering).
In addition, mode-assisted training exhibits several desirable properties over CD (and other gradient approaches). Most significantly, it performs better with larger learning rates during the gradient-dominated phase, and smaller learning rates when using mode samples. CD and other gradient methods generally perform better with smaller learning rates, as their approximation to the exact gradient improves. Even in this regime, however, mode-assisted training eventually drives the system to the uniform solution, compared with the local optimum of CD. The main advantage is that with mode-assisted training one can (and often should) use larger learning rates, resulting in fewer required iterations.
For further comparison, we report in Table 1 results for the shifting and inverted bar, bars and stripes, and MNIST datasets obtained with mode-assisted training, alongside those reported in past work^{16}. The results show that mode-assisted training with a standard RBM always converges to models with log-likelihoods higher than ERBMs and CRBMs trained with CD-1, PCD-1, or PT. Furthermore, the mode-assisted training log-likelihood increases with an increasing number of hidden nodes (better representability). Empirically, we also find the remarkable result that, with sufficient representability and the proper learning rate, mode-assisted training can find solutions arbitrarily close to the exact distribution.
Discussion
In this paper, we have introduced an unsupervised training method that stabilizes gradient-based approaches by utilizing samples of the ground state of the RBM, and is empirically seen to get as close as desired to a given uniform data distribution. It relies on the realization that, as training proceeds, the RBM becomes increasingly frustrated, leading to the modes of the visible-layer distribution and the joint model distribution becoming effectively equal. As a consequence, by using the mode (or ground state) of the RBM during training, our approach is able to “flatten” all modes in the model distribution that do not correspond to modes in the data distribution, reaching a steady state only when all modes are of equal magnitude. In this sense, the ground state of the RBM can be thought of as ‘supervising’ the gradient approximation during training, preventing any pathological evolution of the model distribution.
Our results are valid if the representability of the RBM is sufficient to include good approximations of the data distribution. Once the representability is sufficient, a properly annealed learning-rate schedule will take the KL divergence as low as desired. Increasing the number of hidden nodes increases the non-convexity of the KL-divergence landscape, easily trapping standard algorithms in suboptimal states. In practice, beyond some point, increasing the number of hidden nodes will not decrease the KL divergence that a pretraining procedure actually converges to, as a trade-off between effective gradient updates and representation quality is reached. We claim that this trade-off point for our mode-assisted procedure is reached at a far greater number of nodes than for standard procedures, thus allowing us to find representations with far smaller KL divergence. The mode-assisted training we suggest then provides an extremely powerful tool for unsupervised learning, which (i) prevents a divergence of the model, (ii) promotes more stable learning, and (iii) for data distributions uniform across some support, can lead to solutions of arbitrarily high quality.
Superficially, this method resembles the ‘mode hopping’ MCMC proposed in recent literature^{23,24}, where local maxima are either found with some optimization method or assumed to be known beforehand (via a dataset). However, a crucial difference between our mode-assisted training for RBMs and mode-hopping algorithms is that we do not use the modal configuration to initiate a new MCMC update to improve the mixing rate. Instead, the mode itself is used to inform the weight updates directly. The difference is substantial. In fact, since higher-energy states are exponentially suppressed, exposing the Markov chain to the mode will most likely get it stuck there, which requires ad hoc constructions to recover detailed balance. Our mode-training method does not suffer from these drawbacks and is thus a more computationally efficient way to utilize the mode to train RBMs. Furthermore, we show that, under a sufficiently large learning rate, sampling the global mode alone is capable of efficiently exploring a multimodal energy landscape.
To scale our approach, one would need an efficient way to sample low-energy configurations of the RBM, a difficult optimization problem in its own right. This is equivalent to a weighted MAX-SAT problem, for which several industrial-scale solvers are available. Also, the recent successes of memcomputing on these kinds of energy landscapes in large cases (millions of variables) are fodder for optimism^{19,20}.
Finally, fitting general discrete distributions (with modes of different height) with this technique also seems within reach. In this respect, we can point to our results on the bars and stripes dataset (a non-uniform q(v)) for inspiration. We found the best log-likelihood on that set with a lower frequency of mode sampling (P_{max} = 0.05 instead of the 0.1 used for the shifting bar, a uniform set). This suggests that a general update, which properly weighs the mode sample in combination with the dataset samples, may extend this technique to general non-uniform probabilities, with the weight acting as a tunable demand for uniformity.
Our method is useful from a number of perspectives. First, from the unsupervised-learning point of view, it opens the door to training RBMs with unprecedented accuracy in a novel, non-gradient approach. Second, many unsupervised models are used as 'feature learners' in a downstream supervised training task (e.g., classification), where the unsupervised learning is referred to as pre-training. We expect that starting backpropagation from an initial condition obtained through mode-assisted training would be highly advantageous. Third, the mode-assisted training we suggest can be applied to models with any kind of pairwise connectivity, including deep, convolutional, and fully connected Boltzmann machines. We leave the analysis of these types of networks for future work.
Methods
For synthetic data, we use the commonly employed binary shifting bar and bars and stripes datasets^{25}. The former is defined by two parameters: the total length of the vector, L, and the number of consecutive elements (with periodic boundary conditions), B < L, set to one, with the rest set to zero. This results in L unique elements in the dataset with uniform probability, giving a maximum log-likelihood of \(L\log(1/L)\). The inverted shifting bar set is obtained by swapping ones and zeros. The bars and stripes dataset is constructed by setting each row of a D × D binary pattern to one with probability 1/2, and then rotating the resulting pattern 90° with probability 1/2. This produces 2^{D+1} elements, with the all-zero and all-one patterns being twice as likely as the others.
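The two synthetic datasets described above can be generated directly from the stated construction. This is an illustrative Python sketch (the supplementary code accompanying the paper is in Matlab), with hypothetical function names:

```python
import numpy as np

def shifting_bar(L, B):
    """All L cyclic shifts of a bar of B consecutive ones in a length-L vector."""
    data = np.zeros((L, L), dtype=int)
    for s in range(L):
        data[s, [(s + j) % L for j in range(B)]] = 1
    return data

def bars_and_stripes(D, n_samples, seed=None):
    """Sample n_samples D x D bars-and-stripes patterns, flattened to length D*D.
    Each row is switched on with probability 1/2, then the whole pattern is
    rotated 90 degrees with probability 1/2 (a transpose for binary patterns)."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, 2, size=(n_samples, D))
    patterns = np.repeat(rows[:, :, None], D, axis=2)   # constant rows (stripes)
    rotate = rng.integers(0, 2, size=n_samples).astype(bool)
    patterns[rotate] = np.transpose(patterns[rotate], (0, 2, 1))  # constant columns (bars)
    return patterns.reshape(n_samples, D * D)
```

Note that the all-zero and all-one patterns are generated by both the row and the rotated branch, which is why they appear with twice the probability of the other 2^{D+1} − 2 patterns.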
For a direct comparison to previous work, we followed the same training setup^{16}. For the data in Table 1, a 9 × 4 RBM was tested on a shifting bar dataset with L = 9, B = 1, and on a D = 3 bars and stripes dataset. Both synthetic sets were trained for 50,000 parameter updates, with no minibatching and a constant ϵ^{CD} = 0.2. For the MNIST dataset, a 784 × 16 model was trained for 100 epochs with batch sizes of 100. The mode samples are slowly incorporated into training in a probabilistic way following Eq. (7), starting at P_{mode} = 0 and driven to \({P}_{\max }=0.1\) for the shifting bar and MNIST datasets, and \({P}_{\max }=0.05\) for the bars and stripes dataset. In all cases, we chose α = 20/N and β = −6, where N is the total number of parameter updates.
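Eq. (7) is not reproduced in this section, but a logistic schedule is one form consistent with the stated endpoints: with α = 20/N and β = −6, the probability starts near zero and saturates near P_{max} after N updates. The sketch below is an assumption about that functional form, not a quotation of Eq. (7):

```python
import numpy as np

def p_mode(n, N, p_max=0.1, alpha=None, beta=-6.0):
    """Hypothetical logistic schedule for the mode-sampling probability.
    Assumed form: p_max * sigmoid(alpha * n + beta), which is near 0 at
    n = 0 and near p_max at n = N for alpha = 20/N, beta = -6."""
    if alpha is None:
        alpha = 20.0 / N
    return p_max / (1.0 + np.exp(-(alpha * n + beta)))
```

At each parameter update n, a mode-driven update would then be taken with probability p_mode(n, N) and an ordinary gradient (e.g., CD) update otherwise.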
Data availability
The MNIST data set used in this paper is publicly available (http://yann.lecun.com/exdb/mnist/). All other data that support the plots within this paper are available from the corresponding author upon request.
Code availability
The Matlab code for the training examples used in this work is provided in the Supplementary Code file which accompanies the paper.
References
1. Ackley, D. H., Hinton, G. E. & Sejnowski, T. J. A learning algorithm for Boltzmann machines. Cogn. Sci. 9, 147–169 (1985).
2. Goodfellow, I., Bengio, Y., Courville, A. & Bengio, Y. Deep Learning 1 (MIT Press, Cambridge, 2016).
3. Le Roux, N. & Bengio, Y. Representational power of restricted Boltzmann machines and deep belief networks. Neural Comput. 20, 1631–1649 (2008).
4. Bengio, Y. et al. Learning deep architectures for AI. Found. Trends Mach. Learn. 2, 1–127 (2009).
5. Carleo, G. & Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 355, 602–606 (2017).
6. Gao, X. & Duan, L.-M. Efficient representation of quantum many-body states with deep neural networks. Nat. Commun. 8, 662 (2017).
7. Szegedy, C. et al. Intriguing properties of neural networks. Preprint at https://arxiv.org/abs/1312.6199 (2013).
8. Erhan, D. et al. Why does unsupervised pre-training help deep learning? J. Mach. Learn. Res. 11, 625–660 (2010).
9. Hinton, G. E. Training products of experts by minimizing contrastive divergence. Neural Comput. 14, 1771–1800 (2002).
10. Fischer, A. & Igel, C. An introduction to restricted Boltzmann machines. In Alvarez, L., Mejail, M., Gomez, L. & Jacobo, J. (eds) Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 14–36 (Springer, Berlin, Heidelberg, 2012).
11. Long, P. M. & Servedio, R. A. Restricted Boltzmann machines are hard to approximately evaluate or simulate. ICML, 703–710 (2010).
12. Manukian, H., Traversa, F. L. & Di Ventra, M. Accelerating deep learning with memcomputing. Neural Netw. 110, 1–7 (2019).
13. Di Ventra, M. & Pershin, Y. V. The parallel approach. Nat. Phys. 9, 200–202 (2013).
14. Traversa, F. L. & Di Ventra, M. Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines. Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 023107 (2017).
15. Di Ventra, M. & Traversa, F. L. Memcomputing: leveraging memory and physics to compute efficiently. J. Appl. Phys. 123, 180901 (2018).
16. Melchior, J., Fischer, A. & Wiskott, L. How to center deep Boltzmann machines. J. Mach. Learn. Res. 17, 3387–3447 (2016).
17. Pei, Y. R., Manukian, H. & Di Ventra, M. Generating weighted MAX-2-SAT instances of tunable difficulty with frustrated loops. Preprint at https://arxiv.org/abs/1905.05334 (2019).
18. Arora, S. & Barak, B. Computational Complexity: A Modern Approach (Cambridge University Press, New York, 2009).
19. Traversa, F. L., Cicotti, P., Sheldon, F. & Di Ventra, M. Evidence of exponential speed-up in the solution of hard optimization problems. Complexity 2018, 798285 (2018).
20. Sheldon, F., Traversa, F. L. & Di Ventra, M. Taming a nonconvex landscape with dynamical long-range order: memcomputing the Ising spin glass. Preprint at https://arxiv.org/abs/1810.03712 (2018).
21. Bearden, S. R. B., Sheldon, F. & Di Ventra, M. Critical branching processes in digital memcomputing machines. EPL (Europhys. Lett.) 127, 30005 (2019).
22. Hinton, G. A practical guide to training restricted Boltzmann machines. Momentum 9, 926 (2010).
23. Sminchisescu, C. & Welling, M. Generalized darting Monte Carlo. In Artificial Intelligence and Statistics, 516–523 (2007).
24. Lan, S., Streets, J. & Shahbaba, B. Wormhole Hamiltonian Monte Carlo. Proc. Conf. AAAI Artif. Intell. 1953–1959 (2014).
25. MacKay, D. J. Information Theory, Inference and Learning Algorithms (Cambridge University Press, New York, 2003).
Acknowledgements
Work supported by DARPA under grant No. HR00111990069. H.M. acknowledges support from a DoD-SMART fellowship. M.D. and Y.R.P. acknowledge partial support from the Center for Memory and Recording Research at the University of California, San Diego. All memcomputing simulations reported in this paper have been done on a single core of an AMD EPYC server.
Author information
Contributions
M.D. suggested and supervised the project. H.M. and M.D. conceived the idea. Y.R.P. established the theoretical framework for analyzing the algorithm. H.M. performed all the simulations reported. S.R.B.B. designed the digital memcomputing machine employed in this work. All authors discussed the results and contributed to the writing of the paper.
Ethics declarations
Competing interests
M.D. is the co-founder of MemComputing, Inc. (https://memcpu.com/), which is attempting to commercialize the memcomputing technology. All other authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Manukian, H., Pei, Y.R., Bearden, S.R.B. et al. Mode-assisted unsupervised learning of restricted Boltzmann machines. Commun. Phys. 3, 105 (2020). https://doi.org/10.1038/s42005-020-0373-8