heron package

Submodules

heron.acquisition module

heron.corner module

heron.corner.corner(data_object, figsize=(10, 10))[source]

heron.data module

The data module is designed to load and prepare arbitrary data sets for use in machine learning algorithms.

class heron.data.Data(targets, labels, target_sigma=None, label_sigma=None, target_names=None, label_names=None, test_targets=None, test_labels=None, test_size=0.05)[source]

Bases: object

The data class is designed to hold non-timeseries data, and is capable of automatically selecting test data from the provided dataset.

Future development will include the ability to add pre-selected test and verification data to the object.
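
A minimal usage sketch (the array shapes, column names, and values here are illustrative assumptions, not requirements of heron's API)::

    import numpy as np
    from heron import data

    # 100 two-dimensional target points with one observed label per point
    targets = np.random.uniform(0, 1, (100, 2))
    labels = np.sin(targets[:, 0] * targets[:, 1])

    # 5% of the rows are held back automatically as test data
    training_data = data.Data(targets, labels,
                              target_names=["x0", "x1"],
                              label_names=["y"],
                              test_size=0.05)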

add_data(targets, labels, target_sigma=None, label_sigma=None)[source]

Add new rows into the data object.

targets : array-like

An array of training targets or “x” values which are to be used to train a machine learning algorithm.

labels : array-like

An array of training labels or “y” values which represent the observations made at the target locations of the data set.

target_sigma : array-like

Either an array of the uncertainty for each target point, or an array of the uncertainties, as a float, for each column in the targets.

label_sigma : array-like

Either an array of the uncertainty for each target point, or an array of the uncertainties, as a float, for each column in the labels.

calculate_normalisation(data, name)[source]

Calculate the offsets for the normalisation. We’ll normally want to normalise the training data, and then be able to normalise and denormalise new inputs according to that.

data : array-like

The array of data to use to calculate the normalisations.

name : str

The name to label the constants with.

copy()[source]

Return a copy of this data object.

denormalise(data, name)[source]

Reverse the normalise() method’s effect on the data, and return it to the correct scaling.

data : array-like

The normalised data

scale : array-like

The scale-factors used to normalise the data.

array-like

The denormalised data

get_starting()[source]

Attempts to guess sensible starting values for the hyperparameters.

hyperparameters : ndarray

An array of values for the various hyperparameters.

ix2name(name)[source]

Convert the index of a column to a column name.

name2ix(name)[source]

Convert the name of a column to a column index.

normalise(data, name)[source]

Normalise a given array of data so that the values of the data have a minimum at 0 and a maximum at 1. This improves the computability of the majority of data sets.

data : array-like

The array of data to be normalised.

name : str

The name of the normalisation to be applied, e.g. training or label

norm_data : array-like

An array of normalised data.

scale_factors : array-like

An array of scale factors. The first is the DC offset, while the second is the multiplicative factor.

In order to perform the normalisation, two steps are required:
1) subtract the “DC offset”, which is the minimum of the data;
2) divide by the range of the data.
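
The same two-step min-max arithmetic can be sketched standalone in NumPy (this mirrors the description above, not heron's internal bookkeeping)::

    import numpy as np

    def normalise(data):
        # Step 1: subtract the "DC offset" (the per-column minimum)
        offset = data.min(axis=0)
        # Step 2: divide by the range of the data
        scale = data.max(axis=0) - data.min(axis=0)
        return (data - offset) / scale, (offset, scale)

    def denormalise(norm_data, factors):
        offset, scale = factors
        return norm_data * scale + offset

    x = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 50.0]])
    norm_x, factors = normalise(x)  # each column now spans [0, 1]
    assert np.allclose(denormalise(norm_x, factors), x)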

class heron.data.Timeseries(targets, labels, target_names=None, label_names=None, test_size=0.05)[source]

Bases: object

This is a class designed to hold timeseries data for machine learning algorithms.

Timeseries data needs to be handled differently from other datasets, as it is rarely advantageous to select individual points from a timeseries as test or verification data. Instead, the Timeseries class selects whole timeseries as the test and verification data.

heron.kernels module

class heron.kernels.ExponentialSineSq(period=1, width=15, ax=0)[source]

Bases: heron.kernels.Kernel

An implementation of the exponential sine-squared kernel.

function(data1, data2, period)[source]

The functional form of the kernel inside the exponential.

gradient(data1, data2)[source]
hyper = [1, 1]
matrix(data1, data2)[source]

Produce a Gram matrix based on this kernel.

data : ndarray

An array of data (x)

covar : ndarray

A covariance matrix.

name = 'Exponential sine-squared kernel'
class heron.kernels.Kernel[source]

Bases: object

A generic factory for Kernel classes.

distance(data1, data2, hypers=None)[source]

Calculate the squared distance between two points in parameter space.

matrix(data1, data2)[source]

Produce a Gram matrix based on this kernel.

data : ndarray

An array of data (x)

covar : ndarray

A covariance matrix.

name = 'Generic kernel'
ndim = 1
set_hyperparameters(hypers)[source]
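
As a sketch of what matrix() produces: the Gram matrix is simply the kernel evaluated between every pair of points from the two data sets (hypothetical standalone code, not heron's implementation)::

    import numpy as np

    def gram_matrix(kernel_function, data1, data2):
        # Entry (i, j) is the kernel evaluated between data1[i] and data2[j]
        return np.array([[kernel_function(x1, x2) for x2 in data2]
                         for x1 in data1])

    # Example with a simple squared-exponential kernel
    k = lambda x1, x2: np.exp(-0.5 * np.sum((x1 - x2) ** 2))
    K = gram_matrix(k, np.random.randn(5, 2), np.random.randn(3, 2))
    print(K.shape)  # (5, 3)
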
class heron.kernels.Matern(order=1.5, amplitude=100, width=15)[source]

Bases: heron.kernels.Kernel

An implementation of the Matern Kernel.

function(data1, data2)[source]
name = 'Matern'
order = 1.5
class heron.kernels.SquaredExponential(ndim=1, amplitude=100, width=15)[source]

Bases: heron.kernels.Kernel

An implementation of the squared-exponential kernel.

property flat_hyper
function(data1, data2)[source]

The functional form of the kernel.

gradient(data1, data2)[source]

Calculate the gradient of the kernel.

hyper = [1.0]
name = 'Squared exponential kernel'
set_hyperparameters(hypers)[source]
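
The functional form being implemented is the standard squared-exponential, k(x1, x2) = a * exp(-|x1 - x2|^2 / (2 w^2)); a minimal standalone sketch, assuming amplitude and width play the roles of a and w::

    import numpy as np

    def squared_exponential(x1, x2, amplitude=100.0, width=15.0):
        # k(x1, x2) = a * exp(-|x1 - x2|^2 / (2 * w^2))
        sq_dist = np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
        return amplitude * np.exp(-sq_dist / (2.0 * width ** 2))

    print(squared_exponential([0.0], [0.0]))   # equals the amplitude, 100.0
    print(squared_exponential([0.0], [30.0]))  # decays with distance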

heron.priors module

class heron.priors.Normal(mean, std)[source]

Bases: heron.priors.Prior

A normal prior probability distribution.

logp(x)[source]
transform(x)[source]

Transform from unit normalisation to this prior.

x : float

The position in the normalised hyperparameter space
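
A common convention for such a transform is the inverse CDF (percent-point function) of the distribution; a sketch assuming heron follows that convention::

    from scipy.stats import norm

    def transform(x, mean=0.0, std=1.0):
        # Map a point x in [0, 1] (the unit-normalised space) onto the
        # normal prior via the inverse cumulative distribution function.
        return norm.ppf(x, loc=mean, scale=std)

    print(transform(0.5))    # 0.0 -- the median maps to the mean
    print(transform(0.975))  # roughly +1.96 standard deviations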

class heron.priors.Prior[source]

Bases: object

A prior probability distribution.

heron.regression module

class heron.regression.MultiTaskGP(training_data, kernel, tikh=1e-06, solver=<class 'george.solvers.hodlr.HODLRSolver'>, hyperpriors=None)[source]

Bases: heron.regression.SingleTaskGP

An implementation of a co-trained set of Gaussian processes which share the same hyperparameters, but which model differing data. The training of these models is described in RW (Rasmussen & Williams), pp. 115–116.

A multi-task GPR is capable of acting as a surrogate to a many-to-many function, and is trained by making the assumption that all of the outputs from the function share a common correlation structure.

The principal difference compared to a single-task GP is the presence of multiple Gaussian processes, with one to model each dimension of the output data.

The MultiTask GPR implementation is still very much a work in progress, and not all of the methods implemented in the SingleTask GPR are implemented correctly here yet.

get_hyperparameters()[source]

Return the kernel hyperparameters. Only the hyperparameters of the first GP in the network are returned; the others should all be the same, but there might be something to be said for checking this.

hypers : list

A list of the kernel hyperparameters

ln_likelihood(p)[source]

Provides a wrapper to the ln_likelihood functions for each component Gaussian process in the multi-task system.

This is implemented in a separate function because of the mild peculiarities of how the pickle module needs to serialise functions, which means that instancemethods (which this would become) can’t be serialised.

prediction(new_datum)[source]

Produce a prediction at a new point, or set of points.

new_datum : array

The coordinates of the new point(s) at which the GPR model should be evaluated.

prediction means : array

The mean values of the function drawn from the Gaussian Process.

prediction variances : array

The variance values for the function drawn from the GP.

set_hyperparameters(hypers)[source]

Set the hyperparameters of the kernel function on each Gaussian process.

train(method='MCMC', metric='loglikelihood', sampler='ensemble', **kwargs)[source]

Train the Gaussian process by finding the optimal values for the kernel hyperparameters.

method : str {“MCMC”, “MAP”}

The method to be employed to calculate the hyperparameters.

metric : str

The metric which should be used to assess the model.

hyperpriors : list

The hyperprior distributions for the hyperparameters. Defaults to None, in which case the prior is uniform over all real numbers.

update()[source]

Update the stored matrices.

class heron.regression.Regressor(training_data, kernel, tikh=1e-06, solver=<class 'george.solvers.hodlr.HODLRSolver'>, hyperpriors=None)[source]

Bases: heron.regression.SingleTaskGP

class heron.regression.SingleTaskGP(training_data, kernel, tikh=1e-06, solver=<class 'george.solvers.hodlr.HODLRSolver'>, hyperpriors=None)[source]

Bases: object

This is an implementation of a single-task Gaussian process regressor: a GPR which is capable of acting as a surrogate to a many-to-one function. The single-task GPR is the fundamental building block of the multi-task GPR, which consists of multiple single-task GPs trained in tandem (but which do NOT share correlation information). Components of a multi-output implementation (a Gaussian process regressor with multiple response outputs as well as multiple inputs) exist in this code, but they need more thought before they will work efficiently.

active_learn(afunction, x, y, iters=1, afunc_args={})[source]

Actively train the Gaussian process from a set of provided labels and targets using some acquisition function.

afunction : function

The acquisition function.

x : array-like

The input labels of the data. This can be a multi-dimensional array.

y : array-like

The input targets of the data. This can only be a single-dimensional array at present.

iters : int

The number of times to iterate the learning process: equivalently, the number of training points to digest.

afunc_args : dict

A dictionary of arguments for the acquisition function. Optional.
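
A hypothetical usage sketch; the acquisition function below is an illustrative stand-in, and the signature heron expects it to have is an assumption::

    import numpy as np

    def upper_confidence_bound(mean, variance, kappa=2.0):
        # Favour candidate points with a high predicted mean and high
        # uncertainty (a standard exploration/exploitation trade-off).
        return mean + kappa * np.sqrt(variance)

    # gp is an existing heron Regressor; x and y are the pool of candidate
    # targets and labels to digest (five of them, one per iteration).
    gp.active_learn(upper_confidence_bound, x, y, iters=5,
                    afunc_args={"kappa": 2.0})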

add_data(target, label, label_error=None)[source]

Add data to the Gaussian process.

correlation()[source]

Calculate the correlation between the model and the test data.

corr : float

The correlation squared.

entropy()[source]

Return the entropy of the Gaussian Process distribution. This can be calculated directly from the covariance matrix, making this a nice, quick calculation to perform.

entropy : float

The differential entropy of the GP.
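
For a multivariate Gaussian with an N×N covariance matrix K, the differential entropy is H = ½ ln((2πe)^N det K); a standalone sketch of that calculation (assumed to be the quantity described above)::

    import numpy as np

    def gaussian_entropy(K):
        # H = 0.5 * ln((2*pi*e)^N * det(K)), using slogdet for
        # numerical stability.
        N = K.shape[0]
        sign, logdet = np.linalg.slogdet(K)
        return 0.5 * (N * np.log(2 * np.pi * np.e) + logdet)

    print(gaussian_entropy(np.eye(3)))  # entropy of a unit 3-D Gaussian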

expected_improvement(x)[source]

Returns the expected improvement at the design vector x in the model.

x : array-like

A design vector in real-world coordinates.

EI : float

The expected improvement value at the point x in the model
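
The standard closed form for expected improvement, given a predictive mean μ, standard deviation σ and incumbent best value f*, is EI = (μ − f*)Φ(z) + σφ(z) with z = (μ − f*)/σ; a sketch of that formula (the sign convention and the definition of the best value in heron are assumptions)::

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mean, std, best):
        # z measures how far the prediction sits above the incumbent best
        z = (mean - best) / std
        return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

    print(expected_improvement(mean=1.2, std=0.5, best=1.0))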

get_hyperparameters()[source]

Return the kernel hyperparameters.

grad_neg_ln_likelihood(p)[source]

Return the negative of the gradient of the log likelihood for the GP when its hyperparameters have some specified value.

gp : heron Regressor object

The gaussian process to be evaluated

p : array-like

An array of the hyper-parameters at which the model is to be evaluated.

grad_ln_likelihood : float

The gradient of log-likelihood for the Gaussian process

hyperpriortransform(p)[source]

Return the value in the desired hyperprior space corresponding to an input point in the unit-hypercube prior space.

p : array-like

The point in the unit hypercube space

x : The position in the desired hyperparameter space of the point.

km = None
ln_likelihood(p)[source]

Provides a convenient wrapper to the ln likelihood function.

This is implemented in a separate function because of the mild peculiarities of how the pickle module needs to serialise functions, which means that instancemethods (which this would become) can’t be serialised.

loghyperpriors(p)[source]

Calculate the log of the hyperprior distributions at a given point.

p : ndarray

The location to be tested.

neg_ln_likelihood(p)[source]

Returns the negative of the log-likelihood; designed for use with minimisation algorithms.

gp : heron Regressor object

The gaussian process to be evaluated.

p : array-like

An array of the hyper-parameters at which the model is to be evaluated.

neg_ln_likelihood : float

The negative of the log-likelihood for the Gaussian process

nei(x)[source]

Calculate the negative of the expected improvement at a point x.

prediction(new_datum, normalised=False)[source]

Produce a prediction at a new point, or set of points.

new_datum : array

The coordinates of the new point(s) at which the GPR model should be evaluated.

normalised : bool

A flag to indicate whether the input is already normalised (this might be the case if you’re trying to sample the parameter space efficiently). If False, the input will be normalised to the same range as the training data.

prediction mean : array

The mean values of the function drawn from the Gaussian Process.

prediction variance : array

The variance values for the function drawn from the GP.
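
A usage sketch pulling the documented pieces together; the array shapes and the unpacking of the return value are assumptions::

    import numpy as np
    from heron import data, kernels, regression

    targets = np.random.uniform(0, 1, (50, 1))
    labels = np.sin(10 * targets)

    training_data = data.Data(targets, labels)
    kernel = kernels.SquaredExponential(ndim=1, amplitude=1, width=0.1)
    gp = regression.Regressor(training_data, kernel)

    # Evaluate the model at a new, unnormalised input location
    mean, variance = gp.prediction(np.array([[0.5]]), normalised=False)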

rmse()[source]

Calculate the root mean squared error of the whole model.

rmse : float

The root mean squared error.

save(filename)[source]

Save the Gaussian Process to a file which can be reloaded later.

filename : str

The location at which the Gaussian Process should be written.

In the current implementation the serialisation of the GP is performed by the python pickle library, which isn’t guaranteed to be binary-compatible with all machines.
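
A round-trip sketch using save() together with heron.regression.load(), documented below::

    from heron import regression

    # gp is an existing, trained Regressor
    gp.save("model.gp")                        # serialised with pickle
    gp_restored = regression.load("model.gp")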

set_bmatrix(values)[source]

Set the values of the B matrix from a vector.

set_hyperparameters(hypers)[source]

Set the hyperparameters of the kernel function.

test_predict()[source]

Calculate the value of the GP at the test targets.

train(method='MCMC', metric='loglikelihood', sampler='ensemble', **kwargs)[source]

Train the Gaussian process by finding the optimal values for the kernel hyperparameters.

method : str {“MCMC”, “MAP”, “nested”}

The method to be employed to calculate the hyperparameters.

metric : str

The metric which should be used to assess the model.

hyperpriors : list

The hyperprior distributions for the hyperparameters. Defaults to None, in which case the prior is uniform over all real numbers.
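
A usage sketch restricted to the documented arguments::

    # Maximum a posteriori training
    gp.train(method="MAP", metric="loglikelihood")

    # MCMC training with the ensemble sampler
    gp.train(method="MCMC", metric="loglikelihood", sampler="ensemble")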

update()[source]

Update the stored matrices.

heron.regression.load(filename)[source]

Load a pickled heron Gaussian Process.

heron.training module

These are functions designed to be used for training a Gaussian process made using heron.

heron.training.cross_validation(p, gp)[source]

Calculate the cross-validation factor between the training set and the test set.

gp : heron.Regressor

The Gaussian process object.

p : array, optional

The hyperparameters for the Gaussian process kernel. Defaults to None, which causes the current values for the hyperparameters to be used.

cv : float

The cross validation of the test data and the model.

heron.training.ln_likelihood(p, gp)[source]

Returns the log-likelihood of the Gaussian process, which can be used to learn the hyperparameters of the GP.

gp : heron Regressor object

The gaussian process to be evaluated

p : array-like

An array of the hyper-parameters at which the model is to be evaluated.

ln_likelihood : float

The log-likelihood for the Gaussian process

  • TODO Add the ability to specify the priors on each hyperparameter.
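
The quantity being computed is the standard GP log marginal likelihood, ln p(y|X) = −½ yᵀK⁻¹y − ½ ln det K − (N/2) ln 2π; a standalone numerical sketch (heron itself delegates the linear algebra to the configured george solver)::

    import numpy as np

    def gp_ln_likelihood(K, y):
        # ln p(y|X) = -0.5 y^T K^{-1} y - 0.5 ln|K| - (N/2) ln(2 pi)
        alpha = np.linalg.solve(K, y)
        _, logdet = np.linalg.slogdet(K)
        N = len(y)
        return -0.5 * y @ alpha - 0.5 * logdet - 0.5 * N * np.log(2 * np.pi)

    x = np.linspace(0, 1, 4)
    # A squared-exponential covariance with a small Tikhonov offset on the
    # diagonal (compare the tikh=1e-06 argument to the regressors above)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 1e-6 * np.eye(4)
    print(gp_ln_likelihood(K, np.sin(x)))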

heron.training.logp(x)[source]
heron.training.prior_transform(x)[source]
heron.training.run_nested(gp, metric='loglikelihood', **kwargs)[source]
heron.training.run_sampler(sampler, initial, iterations)[source]

Run the MCMC sampler for some number of iterations, displaying a progress bar so you can keep track of its progress.

heron.training.run_training_map(gp, metric='loglikelihood', repeats=20, **kwargs)[source]

Find the maximum a posteriori training values for the Gaussian Process.

gp : heron.GaussianProcess

The Gaussian process object.

metric : {“loglikelihood”, “cv”}

The metric to be used to train the MCMC. Defaults to log likelihood (loglikelihood), which is the more traditional Bayesian approach; cross-validation (cv) is also available.

repeats : int, optional

The number of times the optimisation should be repeated, to reduce the chance of the optimiser settling on a local rather than the global maximum of the log-likelihood.

The current implementation has no way of specifying the optimisation algorithm.

  • TODO Add an option to change the optimisation algorithm.

heron.training.run_training_mcmc(gp, walkers=200, burn=500, samples=1000, metric='loglikelihood', samplertype='ensemble')[source]

Train a Gaussian process using an MCMC process to find the maximum evidence.

gp : heron.Regressor

The Gaussian process object.

walkers : int

The number of MCMC walkers.

burn : int

The number of samples to be used to evaluate the burn-in for the MCMC.

samples : int

The number of samples to be used for the production sampling.

metric : {“loglikelihood”, “cv”}

The metric to be used to train the MCMC. Defaults to log likelihood (loglikelihood), which is the more traditional Bayesian approach; cross-validation (cv) is also available.

samplertype : str {“ensemble”, “pt”}

The sampler to be used on the model.

probs : array

The log probabilities.

samples : array

The array of samples from the sampling chains.

At present the algorithm assigns the median of the samples to each kernel hyperparameter; this may not ultimately be the best approach, so it should eventually be possible to specify which statistic of the distribution is used.

  • TODO Add ability to change median to other statistics for training
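
The ensemble sampler here follows the emcee pattern; a hedged sketch of the equivalent direct emcee run, using the defaults above (how heron wires this internally may differ)::

    import numpy as np
    import emcee

    # gp is a heron Regressor; its ln_likelihood method takes a
    # hyperparameter vector p.
    ndim = len(gp.get_hyperparameters())
    walkers = 200

    sampler = emcee.EnsembleSampler(walkers, ndim, gp.ln_likelihood)
    p0 = np.random.randn(walkers, ndim) * 0.1  # scattered initial positions
    state = sampler.run_mcmc(p0, 500)          # burn-in
    sampler.reset()
    sampler.run_mcmc(state, 1000)              # production samples

    # Assign the median of the samples to each hyperparameter, as the
    # docstring above describes
    medians = np.median(sampler.get_chain(flat=True), axis=0)
    gp.set_hyperparameters(medians)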

heron.training.run_training_nested(gp, method='multi', maxiter=None, npoints=1000)[source]

Train the Gaussian Process model using nested sampling.

gp : heron.Regressor

The Gaussian Process object.

method : {“single”, “multi”}

The nested sampling method to be used.

maxiter : int

The maximum number of iterations which should be carried out on the marginal likelihood. Optional.

npoints : int

The number of live-points to use in the optimisation.
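
The method, npoints, and maxiter arguments mirror the nestle API; a sketch of the equivalent direct call, assuming heron wraps a nestle-style sampler (an inference from the signature, not confirmed by the source)::

    import nestle

    ndim = len(gp.get_hyperparameters())

    # gp.ln_likelihood maps a hyperparameter vector to its log-likelihood;
    # gp.hyperpriortransform maps the unit hypercube onto the hyperpriors.
    result = nestle.sample(gp.ln_likelihood, gp.hyperpriortransform, ndim,
                           method='multi', npoints=1000)
    print(result.logz)  # log-evidence estimate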

heron.training.train_cv(gp)[source]

Module contents

HERON: A Gaussian Process framework for Python