bayespy.nodes.GaussianMarkovChain

class bayespy.nodes.GaussianMarkovChain(mu, Lambda, A, nu, n=None, inputs=None, **kwargs)[source]

Node for Gaussian Markov chain random variables.

In a simple case, the graphical model can be presented as:

[Figure: graphical model of the Gaussian Markov chain. The latent states form a chain \mathbf{x}_0 \rightarrow \mathbf{x}_1 \rightarrow \cdots \rightarrow \mathbf{x}_{N-1}; \boldsymbol{\mu} and \mathbf{\Lambda} are parents of the initial state \mathbf{x}_0, while \mathbf{A} and \boldsymbol{\nu} are parents of every transition \mathbf{x}_{n-1} \rightarrow \mathbf{x}_n.]

where \boldsymbol{\mu} and \mathbf{\Lambda} are the mean and the precision matrix of the initial state, \mathbf{A} is the state dynamics matrix, and \boldsymbol{\nu} contains the diagonal precisions of the innovation noise. \mathbf{A} and/or \boldsymbol{\nu} may also be different for each transition instead of being constant.

The probability distribution is

p(\mathbf{x}_0, \ldots, \mathbf{x}_{N-1}) = p(\mathbf{x}_0) \prod^{N-1}_{n=1} p(\mathbf{x}_n | \mathbf{x}_{n-1})

where

p(\mathbf{x}_0) = \mathcal{N}(\mathbf{x}_0 | \boldsymbol{\mu}, \mathbf{\Lambda})

p(\mathbf{x}_n|\mathbf{x}_{n-1}) = \mathcal{N}(\mathbf{x}_n | \mathbf{A}_{n-1}\mathbf{x}_{n-1}, \mathrm{diag}(\boldsymbol{\nu}_{n-1})).
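
For intuition, the generative process defined by these distributions can be sampled with plain NumPy. The dimensions and parameter values below are arbitrary choices for illustration, not defaults of the node:

    import numpy as np

    D, N = 3, 100                 # arbitrary state dimension and chain length
    mu = np.zeros(D)              # mean of the initial state
    Lambda = np.identity(D)       # precision matrix of the initial state
    A = 0.9 * np.identity(D)      # constant state dynamics matrix
    nu = np.ones(D)               # diagonal precisions of the innovation noise

    rng = np.random.default_rng(42)
    x = np.empty((N, D))
    x[0] = rng.multivariate_normal(mu, np.linalg.inv(Lambda))
    for n in range(1, N):
        # x_n ~ N(A x_{n-1}, diag(nu)^{-1})
        x[n] = rng.multivariate_normal(A @ x[n - 1], np.diag(1.0 / nu))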

Parameters:

mu : Gaussian-like node or (...,D)-array

\boldsymbol{\mu}, mean of \mathbf{x}_0, D-dimensional with plates (...)

Lambda : Wishart-like node or (...,D,D)-array

\mathbf{\Lambda}, precision matrix of \mathbf{x}_0, D \times D-dimensional with plates (...)

A : Gaussian-like node or (D,D)-array or (...,1,D,D)-array or (...,N-1,D,D)-array

\mathbf{A}, state dynamics matrix, D-dimensional with plates (D,) or (...,1,D) or (...,N-1,D)

nu : gamma-like node or (D,)-array or (...,1,D)-array or (...,N-1,D)-array

\boldsymbol{\nu}, diagonal elements of the precision of the innovation process, plates (D,) or (...,1,D) or (...,N-1,D)

n : int, optional

N, the length of the chain. Must be given if \mathbf{A} and \boldsymbol{\nu} are constant over time.
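
The following is a minimal construction sketch; the prior distributions, hyperparameter values and dimensions are illustrative assumptions, not defaults of the node:

    import numpy as np
    from bayespy.nodes import GaussianMarkovChain, GaussianARD, Wishart, Gamma

    D = 3    # state dimension
    N = 100  # chain length

    # Broad priors for the mean and precision of the initial state
    mu = GaussianARD(0, 1e-3, shape=(D,))
    Lambda = Wishart(D, 1e-3 * np.identity(D))

    # Prior for the state dynamics matrix (one D-dimensional Gaussian per row)
    A = GaussianARD(0, 1e-3, shape=(D,), plates=(D,))
    # Prior for the diagonal precisions of the innovation noise
    nu = Gamma(1e-3, 1e-3, plates=(D,))

    # n must be given because A and nu are constant over time
    X = GaussianMarkovChain(mu, Lambda, A, nu, n=N)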

__init__(mu, Lambda, A, nu, n=None, inputs=None, **kwargs)[source]

Create GaussianMarkovChain node.

Methods

__init__(mu, Lambda, A, nu[, n, inputs]) Create GaussianMarkovChain node.
add_plate_axis(to_plate)
broadcasting_multiplier(plates, *args)
delete() Delete this node and the children
get_gradient(rg) Computes gradient with respect to the natural parameters.
get_mask()
get_moments()
get_parameters() Return parameters of the VB distribution.
get_pdf_nodes()
get_riemannian_gradient() Computes the Riemannian/natural gradient.
get_shape(ind)
has_plotter() Return True if the node has a plotter
initialize_from_parameters(*args)
initialize_from_prior()
initialize_from_random() Set the variable to a random sample from the current distribution.
initialize_from_value(x, *args)
load(filename)
logpdf(X[, mask]) Compute the log probability density function Q(X) of this node.
lower_bound_contribution([gradient, ...]) Compute E[ log p(X|parents) - log q(X) ]
lowerbound()
move_plates(from_plate, to_plate)
observe(x, *args[, mask]) Fix moments, compute f and propagate mask.
pdf(X[, mask]) Compute the probability density function of this node.
plot([fig]) Plot the node distribution using the plotter of the node
random(*phi[, plates])
rotate(R[, inv, logdet])
save(filename)
set_parameters(x) Set the parameters of the VB distribution.
set_plotter(plotter)
show() Print the distribution using standard parameterization.
unobserve()
update([annealing])
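
Continuing the construction sketch above, observe and update are typically used by attaching an observation model to the chain and running variational Bayesian inference. The loading matrix C, the noise precision tau and the synthetic data below are assumptions made only for illustration:

    from bayespy.nodes import Dot
    from bayespy.inference import VB

    M = 10  # dimensionality of each observation (assumed)

    # Observation model: y_mn ~ N(c_m^T x_n, 1 / tau)
    C = GaussianARD(0, 1e-3, shape=(D,), plates=(M, 1))
    tau = Gamma(1e-3, 1e-3)
    Y = GaussianARD(Dot(C, X), tau)

    # Observe (here synthetic) data of shape (M, N) and run VB updates
    Y.observe(np.random.randn(M, N))
    Q = VB(Y, X, C, A, nu, tau, mu, Lambda)
    Q.update(repeat=100)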

Attributes

dims
plates
plates_multiplier Plate multiplier is applied to messages to parents