% Created 2023-11-10 Fri 14:22
% Intended LaTeX compiler: pdflatex
\synctex=1
\documentclass{amsart}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\author{Stéphane Adjemian}
\date{\today}
\title{}
\hypersetup{
pdfauthor={Stéphane Adjemian},
pdftitle={},
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 29.1 (Org mode 9.6.6)},
pdflang={English}}
\usepackage{float,caption,booktabs,dcolumn,geometry,changepage,nopageno,hyperref,fancyvrb}
\usepackage{pgfplots}
\newcolumntype{d}[1]{D{.}{.}{#1}}
\pgfplotsset{compat=newest}
\usetikzlibrary{plotmarks}
\usetikzlibrary{arrows.meta}
\usepgfplotslibrary{patchplots}
% \usepackage{grffile}
% \usepackage{amsmath}
\pgfplotsset{every axis/.append style={
scaled y ticks = false,
scaled x ticks = true,
xticklabel style={/pgf/number format/fixed},
yticklabel style={/pgf/number format/fixed, /pgf/number format/1000 sep={}},
}}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=cyan,
pdfpagemode=FullScreen,
pdfauthor={Stéphane Adjemian},
pdftitle={},
pdfkeywords={},
pdfsubject={},
pdfcreator={Stéphane Adjemian},
pdflang={English}
}
\urlstyle{same}
\VerbatimFootnotes
\setlength{\parindent}{0pt}
\begin{document}
\section{Improve deterministic simulation}
\label{sec:orga51ffcf}
The purpose of the present item is to improve the performance of the
Laffargue-Boucekkine-Juillard algorithm (henceforth LBJ), which can be used for
solving perfect foresight models in Dynare (LBJ is available under the option
\texttt{stack\_solve\_algo=1} of the \texttt{perfect\_foresight\_solver} command).\newline
More precisely, the goal is to improve the performance of LBJ when used in
conjunction with the \texttt{bytecode} and \texttt{block} options of the \texttt{model} block. The
\texttt{bytecode} option instructs Dynare to use a custom bytecode representation of
the model when evaluating the model residuals and Jacobian, supposedly faster
than a pure \texttt{.m} representation. The \texttt{block} option instructs Dynare to
decompose the model into blocks in order to accelerate the computation via a
divide-and-conquer approach.\newline
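For reference, the following \texttt{.mod} file fragment (an illustrative sketch, not an excerpt from \texttt{SMALLMODEL10.dyn}) shows how these options are combined:
\begin{Verbatim}[fontsize=\small]
model(block, bytecode);
  // ... model equations ...
end;

// ... shocks and initial/terminal conditions ...

perfect_foresight_setup(periods=50);
perfect_foresight_solver(stack_solve_algo=1); // use the LBJ algorithm
\end{Verbatim}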
It turns out that, for historical reasons, the implementation of LBJ in Dynare
in combination with the \texttt{bytecode} and/or \texttt{block} options was inefficient;
the present item therefore aims at improving the situation and delivering
better performance in these cases.\newline
The improvements have been implemented in the unstable version of Dynare. The
table below reports a performance comparison between before and after those
changes (respectively corresponding to commits
\href{https://git.dynare.org/Dynare/dynare/-/commit/af205124766be59fe8bfbb053a91167696a89fb4}{af20512} and
\href{https://git.dynare.org/Dynare/dynare/-/commit/666aa46dfb8519db326a0671f75bc300f48fb71f}{666aa46}). The comparison was performed using
a model provided by DG ECFIN (filename \texttt{SMALLMODEL10.dyn}, which has 1364
equations), applying a permanent shock that is simulated with LBJ over 50
periods. The test environment consists of an AMD Ryzen 9 7950X CPU, using
MATLAB R2023b under Debian GNU/Linux. Timings correspond to the full execution
time of the \texttt{.mod} file, and are reported in seconds.\newline
\begin{center}
\begin{tabular}{lrr}
& Before & Now\\[0pt]
\hline
Neither \texttt{bytecode} nor \texttt{block} & 19 & 19\\[0pt]
\texttt{block} (without \texttt{bytecode}) & 72 & 11\\[0pt]
\texttt{bytecode} (without \texttt{block}) & 123 & 24\\[0pt]
\texttt{bytecode} + \texttt{block} & 64 & 18\\[0pt]
\end{tabular}
\end{center}
\bigskip
As one can see, before the changes implemented under the present item, the LBJ
implementation with either \texttt{bytecode} or \texttt{block} was much less efficient than
the one with neither of those options.\newline
The changes that have been implemented lead to a drastic performance
improvement: by a factor of more than 3 for \texttt{bytecode} + \texttt{block}, of more than
5 for \texttt{bytecode} alone, and of more than 6 for \texttt{block} alone.\newline
Before the change, there were 3 implementations of LBJ in Dynare:
\begin{enumerate}
\item a pure \texttt{.m} implementation (\texttt{sim1\_lbj.m}) for the case with neither \texttt{block} nor \texttt{bytecode}
\item a pure \texttt{.m} implementation (\texttt{solve\_two\_boundaries.m}) for the case with \texttt{block} without \texttt{bytecode}
\item a C++ implementation in the \texttt{bytecode} MEX for the case with \texttt{bytecode}
(with or without \texttt{block})
\end{enumerate}
Implementation 1 was unchanged, hence the first line of the table shows no
change in timings.\newline
Implementations 2 and 3 were both inefficient. We decided to drop implementation
3, and to improve implementation 2. As a consequence, implementation 1 is now
used when \texttt{block} is not present (with or without \texttt{bytecode}), and the improved
implementation 2 is now used when \texttt{block} is present (with or without
\texttt{bytecode}).\newline
Under the present situation, one can observe that adding the \texttt{block} option
always improves performance, which is consistent with expectations (without
\texttt{bytecode}, it leads to a drop from 19 to 11 seconds; with \texttt{bytecode}, from 24
to 18 seconds).\newline
More surprisingly, adding the \texttt{bytecode} option now always degrades
performance. As a consequence, the current best option is \texttt{block} without
\texttt{bytecode}.\newline
It remains to be seen why \texttt{bytecode} degrades performance. First, it should be
noted that we tested only one specific simulation on one specific model, with
one specific algorithm (LBJ); results may be different under another scenario.
The most plausible explanation for these results is that the just-in-time (JIT)
compiler in MATLAB has been improved to the point of making our bytecode
representation useless. Another possible explanation is that the way the
Jacobian is returned to the \texttt{.m} routines is less efficient in the \texttt{bytecode}
case, because the latter does not use the same sparse matrix format as without
the \texttt{bytecode} option; but it is unlikely that this explains the whole
difference (actually, this explanation probably only plays a role in the
\texttt{bytecode} + \texttt{block} option, because in \texttt{bytecode} without \texttt{block} the code
currently exploits some sparsity).\newline
As a follow-up to the present project, if further improvements of LBJ were
deemed necessary, one could explore the potential of a dedicated MEX (probably
written in Fortran).
\section{A MEX file for the Kalman filter}
\label{sec:org10b97d6}
In Dynare, the vanilla Kalman filter is currently implemented using MATLAB code
\texttt{kalman\_filter.m} under \texttt{matlab/kalman/likelihood/}. We coded the equivalent routine
in Fortran and compiled it into a MEX to assess the associated computational
gains. The source code is located at
\texttt{mex/sources/kalman\_filter/kalman\_filter.f08}. The efficiency improvements
stem from a single call to an LU decomposition routine, whose output
components are reused both to perform the update step of the Kalman filter
and to compute each period's contribution to the marginal likelihood. In
the MATLAB routine, using similar input arguments, an operation as costly
as the LU decomposition is performed at least twice in each Kalman filter
update step (matrix inversion and reciprocal condition number calculation),
and a full determinant calculation is necessary to compute each period's
contribution to the marginal likelihood.\newline
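As an illustration, a single LU factorisation delivers both the solved linear
system and the log-determinant entering the period-$t$ contribution to the
log-likelihood. The following MATLAB fragment is a minimal sketch of this idea
(variable names are ours, not those of the Fortran source):
\begin{Verbatim}[fontsize=\small]
pp = length(v);                    % number of observed variables
[LL, UU, pv] = lu(F, 'vector');    % single factorisation: F(pv,:) = LL*UU
iFv = UU\(LL\v(pv));               % F\v obtained by two triangular solves
ldetF = sum(log(abs(diag(UU))));   % log(det(F)), F being positive definite
lik_t = -0.5*(pp*log(2*pi) + ldetF + v'*iFv); % period-t log-density
\end{Verbatim}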
As commented at the end of the source code, exploiting the symmetry of some
of the involved matrices using a Cholesky decomposition instead of an LU
decomposition may induce further computational gains. We wrote a first
test in \texttt{tests/kalman/likelihood}. The main file is \texttt{test\_kalman\_mex.m}, while
an auxiliary routine lies in \texttt{compare\_kalman\_mex.m}. The test sets up a
state-space framework with simulated data and allows for different
specifications of the observation equation through inputs \texttt{Z}, \texttt{Zflag} and \texttt{H}.
One may provide indices selecting the observables among the state variables, in
which case \texttt{Zflag} is 0 and \texttt{Z} is a vector of integer indices. Otherwise, one
may declare the observables as linear combinations of the state variables,
in which case \texttt{Z} is a real matrix and \texttt{Zflag} is 1. We test both formulations.
The structure of observation errors can also vary, being either constantly
equal to zero or following a multivariate normal distribution with specified
variance-covariance matrix \texttt{H}. In the latter regard, we test three setups: (i)
no observation errors; (ii) observation errors with a diagonal
variance-covariance matrix; (iii) observation errors with an unrestricted
variance-covariance matrix.\newline
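The following MATLAB lines (an illustrative sketch with arbitrary dimensions)
make the two formulations of the observation equation explicit:
\begin{Verbatim}[fontsize=\small]
n = 100;                           % number of state variables
zidx = [3 10 42];                  % Zflag=0: Z is a vector of indices
Zmat = zeros(numel(zidx), n);      % Zflag=1: Z is a selection matrix
Zmat(sub2ind(size(Zmat), 1:numel(zidx), zidx)) = 1;
% Both encode the same observables, but with Zflag=0 the filter can
% address submatrices directly (e.g. P(zidx,zidx)), whereas with
% Zflag=1 it must form full products such as Zmat*P*Zmat'.
\end{Verbatim}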
Using a machine with an 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz and
32 GB of memory (under Debian GNU/Linux with MATLAB R2023b), we compare the outputs
and the execution speeds of the MATLAB and MEX routines. The results are similar
across routines, and the Fortran implementation provides a small gain, as
the test log shows:
\begin{table}[H]
\begin{tabular}{lccc}
 & No measurement errors & Measurement errors & Correlated measurement errors \\ \hline
\texttt{Zflag=0} & 1.07 & 1.07 & 1.06 \\
\texttt{Zflag=1} & 1.03 & 1.01 & 1.01 \\
\end{tabular}
\caption{Ratio of MATLAB to MEX execution times (a value above one means the MEX is faster).}
\end{table}
In this test the model has 100 state variables, 12 observed variables and 10
structural innovations. The sample contains 300 observations. In this example
the gain is below 10\%, but the measure of the relative advantage of the MEX
heavily depends on the model specification and the sample size (the larger
the proportion of periods before reaching the Kalman steady state, the larger
the gain)\footnote{One can easily change the size of the state-space model in the main test routine.}. With the \texttt{fs2000.mod} model provided in the \texttt{examples}
subfolder, the gain is more substantial (about 20\%). For the Quest-III model,
also used as a benchmark in section \ref{sec:orgacd17cc}, the evaluation of the
posterior kernel is 10\% faster with the MEX. Obviously, we ensured the
consistency of the values returned by the MEX for all the considered models.
The last line of the previous table shows that the advantage of the MEX almost
vanishes with \texttt{Zflag=1}: compared to the MATLAB code, the performance of
the MEX routine degrades when a selection matrix is used in place of a vector
of indices to pick the observed variables.\newline
Note that these execution speed gaps widen when the gain matrix requires more
iterations to converge. Indeed, once the gain matrix no longer evolves
significantly, both the MATLAB and MEX routines call the
\texttt{kalman\_filter\_ss.m} routine.\newline
\section{Initialisation of the Kalman filter}
\label{sec:orgacd17cc}
Computing the likelihood of a linear DSGE model heavily relies on the Kalman
filter. When the DSGE model's steady state is already
available in a closed form, the estimation process allocates a significant
portion of its time to the Kalman filter. In this section, we aim to quantify
the potential benefits that may arise from considering an alternative
initialisation of the Kalman filter.\newline
The reduced form DSGE model is given by:
\[
y_t = \bar y + A \left(y_{t-1}-\bar y\right) + B \varepsilon_t
\]
where $y_t$ is a $n\times 1$ vector of the endogenous variables defined by the
DSGE model, $\bar y$ the steady state of these endogenous variables,
$\varepsilon_t$ a $q\times 1$ vector of Gaussian innovations with zero mean and
covariance matrix $\Sigma_{\varepsilon}$. Matrices $A$ and $B$, respectively of
dimensions $n\times n$ and $n\times q$, as well as the steady state $\bar y$, are
nonlinear functions of the deep parameters we aim to estimate (along with the
covariance matrix $\Sigma_{\varepsilon}$). We only observe $p\leq q$ endogenous
variables; the vector of observed endogenous variables is:
\[
y^{\star}_t = Z y_t
\]
where $Z$ is a $p\times n$ selection matrix\footnote{Here, without loss of
generality, we implicitly exclude measurement errors.}. The likelihood, the
density of the sample, can be expressed as a product of the (Gaussian) conditional
densities of the prediction errors $v_t$, which are determined recursively with
the Kalman filter:
\begin{equation}
\label{eq:kalman:1}
v_t = y_t^{\star} - \bar y^{\star}- Z \hat y_t
\end{equation}
\begin{equation}
\label{eq:kalman:2}
F_t = Z P_t Z'
\end{equation}
\begin{equation}
\label{eq:kalman:3}
K_t = A P_t Z' F_t^{-1}
\end{equation}
\begin{equation}
\label{eq:kalman:4}
\hat y_{t+1} = A \hat y_t + K_t v_t
\end{equation}
\begin{equation}
\label{eq:kalman:5}
P_{t+1} = A P_t \left( A - K_t Z\right)' + B \Sigma_{\varepsilon}B'
\end{equation}
where $F_t$ is the conditional covariance matrix of the prediction errors,
$K_t$ the Kalman gain matrix (which determines how much we learn from
observation $t$), $\hat y_t$ the vector of endogenous variables in deviation
from the steady state, and $P_t$ the conditional covariance matrix of
$\hat y_t$. Substituting (\ref{eq:kalman:2})
and (\ref{eq:kalman:3}) into (\ref{eq:kalman:5}) we obtain the following recurrence
for the conditional covariance matrix of the endogenous variables:
\begin{equation}
\label{eq:ricatti}
P_{t+1} = A P_t A' - A P_t Z' \left(Z P_t Z' \right)^{-1}Z P_t A' + B \Sigma_{\varepsilon}B'
\end{equation}
This is the Riccati equation, which consumes a significant portion of the
computation time during the evaluation of the Kalman filter. Additionally, it is
a critical point where round-off errors can accumulate significantly. As we
iterate through these equations, the sequence $P_t$ will ultimately, given a
sufficiently large sample size, converge to a fixed point denoted $\bar P$,
which satisfies:
\begin{equation}
\label{eq:ricatti:fp}
A \bar P A' - \bar P - A \bar P Z' \left(Z \bar P Z' \right)^{-1}Z \bar P A' + B \Sigma_{\varepsilon}B' = 0
\end{equation}
and the Kalman filter equations collapse into the following system:
\[
\begin{cases}
v_t &= y_t^{\star} - \bar y^{\star}- Z \hat y_t\\
\hat y_{t+1} &= A \hat y_t + \bar K v_t
\end{cases}
\]
referred to as the steady state Kalman filter, with $\bar F = Z \bar P Z'$ and a
constant Kalman gain matrix $\bar K = A \bar P Z' \bar F^{-1}$. Iterations are
then significantly faster compared to the system described by
(\ref{eq:kalman:1})--(\ref{eq:kalman:5}). The number of iterations required to
(approximately) converge to the steady state does not depend on the data but only
on the persistence in the model (through matrix $A$).\newline
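The following MATLAB sketch (hypothetical variable names; in Dynare the
corresponding code lives in \texttt{kalman\_filter\_ss.m}) makes the source of
the speed-up explicit: once $\bar F$ and $\bar K$ are available, each iteration
only involves matrix-vector products.
\begin{Verbatim}[fontsize=\small]
iF = inv(Fbar); ldetF = log(det(Fbar)); % computed once, outside the loop
llik = 0; yhat = zeros(n, 1);
for t = 1:T
    v    = ystar(:,t) - ybarstar - Z*yhat;  % prediction error, eq. (1)
    llik = llik - 0.5*(p*log(2*pi) + ldetF + v'*iF*v);
    yhat = A*yhat + Kbar*v;                 % update with constant gain
end
\end{Verbatim}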
Due to its recursive nature, the Kalman filter requires an initial condition
$\hat y_0$ and $P_0$. Assuming that the model is stationary, a usual choice for
the initial condition is the (stochastic) fixed point of the reduced form model:
$\hat y_0 = 0$ and $P_0=P^{\star}$ solving:
\[
P^{\star} - A P^{\star} A' - B \Sigma_{\varepsilon} B' = 0
\]
which is a discrete Lyapunov equation (a special case of a Sylvester equation) that can be solved easily.\newline
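A standard approach is the doubling algorithm, sketched below in MATLAB
(hypothetical standalone code; Dynare provides \texttt{lyapunov\_symm.m} for
this task):
\begin{Verbatim}[fontsize=\small]
% Solve P = A*P*A' + B*Sigma_e*B' by doubling: after k steps, P
% accumulates 2^k terms of the series sum_j A^j*(B*Sigma_e*B')*A'^j.
P = B*Sigma_e*B'; Ak = A;
for it = 1:60
    Pn = P + Ak*P*Ak';
    if norm(Pn-P, 1) < 1e-12*norm(Pn, 1)
        P = Pn; break
    end
    P = Pn; Ak = Ak*Ak;
end
\end{Verbatim}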
In this section we consider instead the fixed point of the Riccati equation
$\bar P$ as an initial condition for the covariance matrix of the endogenous
variables. This approach offers a clear computational advantage: we directly jump to the
steady state Kalman filter without the need to go through
(\ref{eq:kalman:1})--(\ref{eq:kalman:5}). Clearly this will reduce the time
required to compute the likelihood, but this will also change the evaluation of
the likelihood. To assess the impact of this alternative initial condition, we
conduct experiments using a medium-sized model, quantifying the time savings and
evaluating their implications on parameter estimation. Dynare
provides a MEX file, linked to the subroutine \verb+sb02od+ from the
\href{http://www.slicot.org/}{Slicot library} (version 5.0), to solve
(\ref{eq:ricatti:fp}) for $\bar{P}$. The use of $\bar P$ as an initial condition
for the Kalman filter is triggered by option \verb+lik_init=4+ in the
\verb+estimation+ command.\newline
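In a \texttt{.mod} file, this amounts to something like the following sketch
(the data file name is hypothetical):
\begin{Verbatim}[fontsize=\small]
// Initialise the Kalman filter with the fixed point of the Riccati equation
estimation(datafile=mydata, mode_compute=5, lik_init=4);
\end{Verbatim}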
We consider\footnote{Under the same environment as in section \ref{sec:org10b97d6}.} the estimation of the Quest-III model (113 endogenous variables, 19
innovations, 17 observed variables) as an example. All the codes to reproduce
the results are available in a
\href{https://git.ithaca.fr/stepan/quest-iii-estimation}{Git repository}. The
model is estimated under both settings (initialization with $\bar P$, or
with $P^{\star}$)\footnote{To reproduce the results run \verb+Q3.mod+, provided
in the root of the Git repository, with macro options
\verb+-DUSE_RICATTI_FIXED_POINT=true+ and
\verb+-DUSE_RICATTI_FIXED_POINT=false+.} with \verb+mode_compute=5+. We find
that the estimation of the model with the fixed point of the reduced form model
($P^{\star}$) is 33\% slower than the estimation of the same model with the
fixed point of the Riccati equation ($\bar P$). Because the objective functions
are different, the optimiser probably does not behave identically in the two cases.
If we just evaluate the likelihood at the same point (the prior mean), instead
of running \verb+mode_compute=5+, we find that the evaluation with $P^{\star}$
is just 21\% slower than the evaluation with $\bar P$, indicating that initialising
with the fixed point of the Riccati equation renders the objective more amenable to
optimisation. This is expected since it is widely recognised that
iterations on (\ref{eq:ricatti}) exhibit erratic behaviour. The values of the log
posterior kernel at the estimated posterior mode are close (7235.92 with
$\bar P$ and 7237.68 with $P^{\star}$). But much larger differences can be
observed elsewhere. For instance, at the prior mean the values are -275.58 with
$\bar P$ and 5.13 with $P^{\star}$. Do these differences in the likelihood
functions matter? Table \ref{tab:1} shows the estimated values for the
parameters. Both likelihood functions deliver similar estimates in terms of
point estimates and variances\footnote{Note however that the estimates for the
posterior variance should be taken with care since they rely on an asymptotic
Gaussian approximation. For some parameters, the estimated posterior variance
is larger than the prior variance, which should not be the case.}. Figures
(\ref{fig:1})--(\ref{fig:7}) display cross sections of the likelihood (black
solid curves for the initialisation with $P^{\star}$, red dashed curves for the
initialisation with $\bar P$). The conclusion seems to be that even if the
likelihood functions are different, inference on the parameters is only
marginally affected. At least for a first attempt, we can safely consider the
alternative initialisation of the Kalman filter. We should also compare
posterior densities by running samplers under both settings; this would bring a
more definitive answer (at least in the case of Quest-III). Ideally, we should
also build a Monte Carlo exercise on a smaller model. All this is left for
future research. Note that the gain in terms of computing time is model
\textit{and parameter} dependent. If the standard Kalman filter converges
quickly to the steady state, the gain is small (this depends on the persistence
in the model).
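For completeness, the two settings can be reproduced from the MATLAB prompt
as follows (see the footnote above on the macro options):
\begin{Verbatim}[fontsize=\small]
>> dynare Q3 -DUSE_RICATTI_FIXED_POINT=true   % initialisation with P bar
>> dynare Q3 -DUSE_RICATTI_FIXED_POINT=false  % initialisation with P star
\end{Verbatim}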
\newpage
\begin{longtable}[l]{ l l d{4} d{4} d{4} d{4} d{4} d{4} }
\toprule
\multicolumn{1}{c}{} & \multicolumn{3}{c}{\textbf{Prior}} & \multicolumn{2}{c}{\textbf{Posterior with $P^{\star}$}} & \multicolumn{2}{c}{\textbf{Posterior with $\bar P$}}\\
\cmidrule(rl){2-4}
\cmidrule(rl){5-6}
\cmidrule(rl){7-8}
\textbf{Parameters} & Shape & \multicolumn{1}{c}{Mode} & \multicolumn{1}{c}{SD} & \multicolumn{1}{c}{Mode} & \multicolumn{1}{c}{SD} & \multicolumn{1}{c}{Mode} & \multicolumn{1}{c}{SD} \\
\midrule
\endhead
\\
\midrule
\multicolumn{8}{c}{\tiny{\textbf{Continued on next page}}}\\
\endfoot
\bottomrule
\\
\caption{\textbf{Estimation results of Quest-III.} Without and with the fixed point of the Riccati equation as an initial condition of the Kalman filter. Standard deviation estimates rely on the inverse of the Hessian matrix at the estimated posterior mode.}
\endlastfoot
E\_EPS\_C & Gamma & 0.0320 & 0.0300 & 0.0528 & 0.0109 & 0.0521 & 0.0111\\
E\_EPS\_ETA & Gamma & 0.0640 & 0.0600 & 0.1163 & 0.0340 & 0.1225 & 0.0347\\
E\_EPS\_ETAM & Gamma & 0.0088 & 0.0150 & 0.0183 & 0.0022 & 0.0183 & 0.0023\\
E\_EPS\_ETAX & Gamma & 0.0640 & 0.0600 & 0.0274 & 0.0152 & 0.0279 & 0.0159\\
E\_EPS\_EX & Gamma & 0.0032 & 0.0030 & 0.0044 & 0.0004 & 0.0043 & 0.0004\\
E\_EPS\_G & Gamma & 0.0320 & 0.0300 & 0.0047 & 0.0003 & 0.0047 & 0.0003\\
E\_EPS\_IG & Gamma & 0.0320 & 0.0300 & 0.0054 & 0.0004 & 0.0054 & 0.0004\\
E\_EPS\_L & Gamma & 0.0320 & 0.0300 & 0.0220 & 0.0052 & 0.0213 & 0.0049\\
E\_EPS\_LOL & Gamma & 0.0032 & 0.0030 & 0.0040 & 0.0009 & 0.0045 & 0.0008\\
E\_EPS\_M & Gamma & 0.0016 & 0.0015 & 0.0012 & 0.0001 & 0.0012 & 0.0001\\
E\_EPS\_RPREME & Gamma & 0.0032 & 0.0030 & 0.0015 & 0.0002 & 0.0014 & 0.0002\\
E\_EPS\_RPREMK & Gamma & 0.0032 & 0.0030 & 0.0050 & 0.0016 & 0.0054 & 0.0018\\
E\_EPS\_TR & Gamma & 0.0320 & 0.0300 & 0.0022 & 0.0001 & 0.0022 & 0.0001\\
E\_EPS\_W & Gamma & 0.0320 & 0.0300 & 0.0396 & 0.0129 & 0.0410 & 0.0136\\
E\_EPS\_Y & Gamma & 0.0320 & 0.0300 & 0.0118 & 0.0012 & 0.0120 & 0.0012\\
A2E & Beta & 0.0500 & 0.0240 & 0.0378 & 0.0132 & 0.0370 & 0.0130\\
G1E & Beta & 0.0000 & 0.6000 & -0.0456 & 0.0977 & -0.0455 & 0.0963\\
GAMIE & Gamma & 16.6667 & 20.0000 & 62.7825 & 18.9969 & 66.3819 & 19.6627\\
GAMI2E & Gamma & 8.3333 & 10.0000 & 0.6044 & 0.3537 & 0.6161 & 0.3627\\
GAMLE & Gamma & 16.6667 & 20.0000 & 54.0067 & 10.8176 & 55.4743 & 11.2277\\
GAMPE & Gamma & 16.6667 & 20.0000 & 45.8370 & 13.5341 & 48.2540 & 13.8116\\
GAMPME & Gamma & 16.6667 & 20.0000 & 1.3257 & 0.4455 & 1.3501 & 0.4538\\
GAMPXE & Gamma & 16.6667 & 20.0000 & 8.9856 & 7.5122 & 9.3195 & 7.9101\\
GAMWE & Gamma & 16.6667 & 20.0000 & 0.5913 & 0.3666 & 0.5815 & 0.3625\\
GSLAG & Beta & 0.0000 & 0.4000 & -0.4336 & 0.1045 & -0.4321 & 0.1046\\
GVECM & Beta & -0.5000 & 0.2000 & -0.1552 & 0.0432 & -0.1591 & 0.0439\\
HABE & Beta & 0.7222 & 0.1000 & 0.5381 & 0.0400 & 0.5369 & 0.0398\\
HABLE & Beta & 0.7222 & 0.1000 & 0.8731 & 0.0560 & 0.8637 & 0.0591\\
IG1E & Beta & 0.0000 & 0.6000 & 0.1802 & 0.0943 & 0.1824 & 0.0924\\
IGSLAG & Beta & 0.5000 & 0.2000 & 0.4384 & 0.0819 & 0.4401 & 0.0819\\
IGVECM & Beta & -0.5000 & 0.2000 & -0.1330 & 0.0586 & -0.1354 & 0.0582\\
ILAGE & Beta & 0.8856 & 0.0750 & 0.8887 & 0.0156 & 0.8856 & 0.0160\\
KAPPAE & Gamma & 1.0500 & 0.5000 & 1.6617 & 0.3660 & 1.6443 & 0.3596\\
RHOCE & Beta & 0.8856 & 0.0750 & 0.9305 & 0.0242 & 0.9382 & 0.0221\\
RHOETA & Beta & 0.5000 & 0.2000 & 0.0771 & 0.0596 & 0.0733 & 0.0563\\
RHOETAM & Beta & 0.8856 & 0.0750 & 0.9599 & 0.0138 & 0.9602 & 0.0138\\
RHOETAX & Beta & 0.8856 & 0.0750 & 0.8841 & 0.0592 & 0.8824 & 0.0610\\
RHOGE & Beta & 0.5000 & 0.2000 & 0.2930 & 0.1116 & 0.2978 & 0.1130\\
RHOIG & Beta & 0.8856 & 0.0750 & 0.8789 & 0.0660 & 0.8806 & 0.0645\\
RHOLE & Beta & 0.8856 & 0.0750 & 0.9844 & 0.0081 & 0.9865 & 0.0076\\
RHOL0 & Beta & 0.9578 & 0.0200 & 0.9360 & 0.0233 & 0.9320 & 0.0214\\
RHOPCPM & Beta & 0.5000 & 0.2000 & 0.6987 & 0.1411 & 0.7006 & 0.1403\\
RHOPWPX & Beta & 0.5000 & 0.2000 & 0.1889 & 0.0667 & 0.1895 & 0.0671\\
RHORPE & Beta & 0.8856 & 0.0750 & 0.9908 & 0.0065 & 0.9943 & 0.0037\\
RHORPK & Beta & 0.8856 & 0.0750 & 0.9254 & 0.0270 & 0.9175 & 0.0271\\
RHOUCAP0 & Beta & 0.9578 & 0.0200 & 0.9602 & 0.0160 & 0.9560 & 0.0160\\
RPREME & Beta & 0.0200 & 0.0080 & 0.0205 & 0.0071 & 0.0204 & 0.0068\\
RPREMK & Beta & 0.0200 & 0.0080 & 0.0243 & 0.0030 & 0.0256 & 0.0025\\
SE & Beta & 0.8000 & 0.0800 & 0.8540 & 0.0208 & 0.8561 & 0.0206\\
SFPE & Beta & 0.5000 & 0.2000 & 0.8831 & 0.0672 & 0.8878 & 0.0656\\
SFPME & Beta & 0.5000 & 0.2000 & 0.7211 & 0.1288 & 0.7239 & 0.1277\\
SFPXE & Beta & 0.5000 & 0.2000 & 0.9293 & 0.0508 & 0.9299 & 0.0503\\
SFWE & Beta & 0.5000 & 0.2000 & 0.7997 & 0.1439 & 0.8045 & 0.1413\\
SIGC & Gamma & 1.5000 & 1.0000 & 4.3516 & 1.2195 & 4.3758 & 1.2276\\
SIGEXE & Gamma & 1.0500 & 0.5000 & 2.4811 & 0.3204 & 2.4645 & 0.3212\\
SIGIME & Gamma & 1.0500 & 0.5000 & 1.1520 & 0.1930 & 1.1564 & 0.1931\\
SLC & Beta & 0.5000 & 0.1000 & 0.3293 & 0.0668 & 0.3225 & 0.0668\\
TINFE & Beta & 2.0000 & 0.4000 & 1.9073 & 0.1996 & 1.7677 & 0.1378\\
TR1E & Beta & 0.0000 & 0.6000 & 0.9235 & 0.0913 & 0.9246 & 0.0908\\
RHOTR & Beta & 0.8856 & 0.0750 & 0.8588 & 0.0488 & 0.8576 & 0.0486\\
TYE1 & Beta & 0.1500 & 0.2000 & 0.3427 & 0.0920 & 0.3198 & 0.0893\\
TYE2 & Beta & 0.1500 & 0.2000 & 0.0720 & 0.0258 & 0.0680 & 0.0253\\
WRLAG & Beta & 0.5000 & 0.2000 & 0.2274 & 0.1364 & 0.2258 & 0.1352\\
\label{tab:1}
\end{longtable}
\begin{figure}[H]
\centering
\scalebox{.75}{
\input{cross-posterior-kernel-1.tex}}
\caption{Cross sections of the posterior kernel.}
\label{fig:1}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{.75}{
\input{cross-posterior-kernel-2.tex}}
\caption{Cross sections of the posterior kernel.}
\label{fig:2}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{.75}{
\input{cross-posterior-kernel-3.tex}}
\caption{Cross sections of the posterior kernel.}
\label{fig:3}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{.75}{
\input{cross-posterior-kernel-4.tex}}
\caption{Cross sections of the posterior kernel.}
\label{fig:4}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{.75}{
\input{cross-posterior-kernel-5.tex}}
\caption{Cross sections of the posterior kernel.}
\label{fig:5}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{.75}{
\input{cross-posterior-kernel-6.tex}}
\caption{Cross sections of the posterior kernel.}
\label{fig:6}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{.75}{
\input{cross-posterior-kernel-7.tex}}
\caption{Cross sections of the posterior kernel.}
\label{fig:7}
\end{figure}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End: