This package is a Julia implementation of the GPLVMplus model, a variant of the Gaussian process latent variable model (GPLVM).
Apart from cloning the repository, an easy way of using the package is the following:

1. Add the registry AINJuliaRegistry.
2. Switch into "package mode" with `]` and add the package with `add GPLVMplus`.
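A minimal sketch of both steps in the package REPL is shown below. The registry URL is assumed to be the AINJuliaRegistry hosted on GitHub; substitute the correct URL if it differs.

```julia
# Press ] in the Julia REPL to enter package mode.
# NOTE: the registry URL below is an assumption; adjust if needed.
pkg> registry add https://github.com/HITS-AIN/AINJuliaRegistry
pkg> add GPLVMplus
```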
The following functions are of interest to the end user:

- `gplvmplus`, see Experiment 1.
- `inferlatent`, see Inferring latent projections.
- `predictivesampler`, see Sampling from the predictive distribution.
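As a quick orientation, here is a minimal sketch of how the three functions fit together; the calls mirror the experiments that follow, and `X` and `xtest` stand in for your own data matrix and test item.

```julia
using GPLVMplus

# Fit the model: learn a mapping from Q latent dimensions to the data in X
result = gplvmplus(X; Q = 2, H1 = 20, H2 = 20, iterations = 5000);

# Project a new high-dimensional item to the latent space
ztest, ctest = inferlatent(xtest, result);

# Obtain a function that samples from the predictive distribution at ztest
sampler = GPLVMplus.predictivesampler(ztest, result);
xsample = sampler(); # one sample in data space
```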
Below we show two examples of how to use the model. The experiments use a dataset of 72 images made available via the package GPLVMplusData.jl. The images have been taken from the COIL-20 repository, which can be found here, and show a rubber duck photographed from 72 angles; the underlying latent space is therefore intrinsically a one-dimensional circle. The experiments below show that the model successfully discovers this latent space.
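The data can be loaded and inspected as follows. Judging by how the experiments below index `X`, each image is stored as a column; the expected size of `(1024, 72)` (32×32 pixels per column, one column per angle) is an inference from that usage, not a documented guarantee.

```julia
using GPLVMplusData

X = GPLVMplusData.loadducks(; every = 4); # images in 32x32 resolution
size(X) # expected: (1024, 72), i.e. one vectorised 32x32 image per column
```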
## Experiment 1

In this experiment we run the model on the downsampled images, setting the dimension of the low-dimensional latent space to `Q = 2`.
```julia
using GPLVMplus
using GPLVMplusData # must be independently installed
using PyPlot        # must be independently installed;
                    # other plotting packages can be used instead

X = GPLVMplusData.loadducks(;every=4); # load rubber duck images in 32x32 resolution

# Warmup run to trigger compilation
let
    gplvmplus(X; Q = 2, iterations = 1)
end

# Learn mapping from Q=2 latent dimensions to high-dimensional images.
# Use a two-hidden-layer neural network for amortised inference.
result = gplvmplus(X; Q = 2, H1 = 20, H2 = 20, iterations = 5000);

# Plot latent 2-dimensional projections
plot(result[:Z][1,:], result[:Z][2,:], "o")
```
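Because the photographs were taken at consecutive angles, one quick visual check (a suggestion, not part of the original example) is to colour each latent point by its image index; neighbouring indices should appear adjacent along the recovered circle.

```julia
# Colour latent points by image index, i.e. by photograph angle
figure()
scatter(result[:Z][1,:], result[:Z][2,:], c = 1:72, cmap = "hsv")
colorbar()
```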
## Experiment 2

This experiment demonstrates the scale-invariant property of the proposed model: each image is multiplied by a random scaling coefficient, and the model is expected to recover both the same latent structure and the scaling coefficients.
```julia
using GPLVMplus
using GPLVMplusData # must be independently installed
using PyPlot        # must be independently installed;
                    # other plotting packages can be used instead
using Random

X = GPLVMplusData.loadducks(;every=4); # load rubber duck images in 32x32 resolution

# Warmup run to trigger compilation
let
    gplvmplus(X; Q = 2, iterations = 1)
end

# Instantiate random number generator for reproducibility
rng = MersenneTwister(1);

# Sample 72 scaling coefficients between 0.5 and 2.5
C = rand(rng, 72)*2 .+ 0.5;

# Scale each image with its corresponding scaling coefficient
Xscale = reduce(hcat, [x*c for (x, c) in zip(eachcol(X), C)]);

# Learn mapping from Q=2 latent dimensions to the high-dimensional scaled images.
# Use a two-hidden-layer neural network for amortised inference.
result2 = gplvmplus(Xscale; Q = 2, H1 = 20, H2 = 20, iterations = 5000);

# Plot latent 2-dimensional projections
plot(result2[:Z][1,:], result2[:Z][2,:], "o")

# Compare inferred scaling coefficients to the actual coefficients C
figure()
plot(C, label = "scaling coefficients C")
plot(result2[:c], label = "inferred scaling coefficients")
legend()
```
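Beyond the visual comparison, the agreement can be quantified, for instance, with the correlation between the true and the inferred coefficients (an ad-hoc check, not part of the original example):

```julia
using Statistics

# A correlation close to 1 indicates that the inferred coefficients
# track the true scaling coefficients
cor(C, vec(result2[:c]))
```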
## Inferring latent projections

Continuing with the example above, we show how to infer the latent coordinates of a high-dimensional data item. For convenience, we take one of the images used for training and scale it with a new scaling coefficient:
```julia
Xtest = 1.2345 * X[:,1];
```
We infer the latent coordinates and the associated scaling coefficient using:

```julia
Ztest, ctest = inferlatent(Xtest, result2);
```
Barring local minima in the inference of the latent coordinates, the variable `Ztest` should hold approximately the same latent coordinates as the training image `X[:,1]`:

```julia
display(Ztest)
display(result2[:Z][:,1])
```
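A simple numerical check of the agreement (assuming, again, that inference did not get trapped in a local minimum) is the Euclidean distance between the two coordinates:

```julia
using LinearAlgebra

# Should be small if the test item was mapped next to its training counterpart
norm(Ztest - result2[:Z][:,1])
```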
## Sampling from the predictive distribution

The following call returns a function `sampler` for sampling from the predictive distribution at the latent coordinates `Ztest`:

```julia
sampler = GPLVMplus.predictivesampler(Ztest, result2);

# Draw a sample image, reshape it, rotate it and plot it
pcolor(rot180(reshape(sampler(), 32, 32)));
```
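Since `sampler()` returns a new draw on every call, several draws can be averaged to approximate the predictive mean image; the sketch below assumes, as the plotting line above does, that each draw is a vector of 1024 pixel values.

```julia
using Statistics

# Average 100 draws from the predictive distribution and plot the result
samples = [sampler() for _ in 1:100];
figure()
pcolor(rot180(reshape(mean(samples), 32, 32)));
```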