It is not without reason that the Gibbs sampler has been nicknamed the "workhorse of the simulation" by Robert and Casella. Indeed, this simulation algorithm has proven extremely efficient in many situations. However, this is not the case for high-dimensional Gaussian random vectors, for which the inversion of the covariance matrix is not numerically tractable. Attempts to resort to an approximate version of the Gibbs sampler that uses only a limited number of components (neighbourhoods) at each iteration have not always been successful. Indeed, a phase transition can take place if those neighbourhoods are not suitably chosen, and divergent outcomes may result.
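To see why the standard Gibbs sampler runs into trouble, recall that for a Gaussian target the full conditionals are read off the precision matrix Q = C^{-1}, so the covariance must be inverted (or factorized) up front. The following sketch, with an illustrative function name, makes this explicit:

```python
import numpy as np

def gibbs_gaussian(C, n_sweeps, rng=None):
    """Standard Gibbs sampler for x ~ N(0, C).

    Each conditional x_i | x_{-i} is Gaussian with mean and variance
    read off the precision matrix Q = C^{-1}; computing Q is the step
    that becomes intractable when the dimension is large.
    """
    rng = np.random.default_rng() if rng is None else rng
    Q = np.linalg.inv(C)            # the costly step for large dimensions
    n = C.shape[0]
    x = np.zeros(n)
    states = np.empty((n_sweeps, n))
    for k in range(n_sweeps):
        for i in range(n):
            # conditional mean uses the off-diagonal precision entries
            mu = -(Q[i] @ x - Q[i, i] * x[i]) / Q[i, i]
            x[i] = rng.normal(mu, 1.0 / np.sqrt(Q[i, i]))
        states[k] = x
    return states
```

The approximate variants mentioned above amount to restricting the sum in the conditional mean to a neighbourhood of component i, which is exactly where an ill-chosen neighbourhood can make the chain diverge.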
In this presentation, an alternative iterative algorithm is proposed for simulating Gaussian random vectors. At each iteration, a component is selected at random, a Gaussian value is assigned to it, and this value is then appropriately "propagated" to all other components. This algorithm does not suffer from the dimensionality problem incurred by the standard Gibbs sampler: it requires neither the inversion nor the factorization of the covariance matrix.
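The iteration described above can be sketched as follows. The exact update rule is an assumption on our part (the presentation's variant may differ): the change made to the selected component is spread to the others through the corresponding column of C, so only matrix columns are ever read, never the inverse or a factorization.

```python
import numpy as np

def propagative_gibbs(C, n_iter, rng=None):
    """Sketch of a propagative simulation step for x ~ N(0, C).

    At each iteration a pivot component i is picked at random, a
    Gaussian value is drawn for it, and the change is propagated to
    every other component through the i-th column of C.  This update
    rule is an illustrative assumption, not the presentation's exact
    algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = C.shape[0]
    x = np.zeros(n)
    states = np.empty((n_iter, n))
    for k in range(n_iter):
        i = rng.integers(n)                       # random pivot component
        y = rng.normal(scale=np.sqrt(C[i, i]))    # Gaussian value for x_i
        x = x + (y - x[i]) * C[:, i] / C[i, i]    # propagate to all components
        states[k] = x
    return states
```

With this rule, if x ~ N(0, C) then the propagated column restores exactly the covariance removed along with the old value of x_i, so N(0, C) is invariant and the chain targets the right distribution while touching only one column of C per iteration.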
The presentation is organized as follows. Starting from the standard Gibbs sampler, a first propagative version of the algorithm is derived. Several variations are then proposed that make it more flexible while keeping it simple to implement. A number of examples provide experimental evidence that the algorithm is fast. Finally, a theoretical result on its rate of convergence is stated.