Introduction to point processes

What is a point process:

A point pattern in the unit square is a finite set of points in the unit square.
Let Ω denote the set of all finite point patterns in the unit square.

A number can be interpreted as a realization of some stochastic variable.
In the same way, a point pattern can be interpreted as a realization of a stochastic point process.
Thus, a point process on the unit square has state space Ω.
[Simulation: Strauss point process in the unit square with torus edge correction.]
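As a minimal illustration of a point pattern as a realization of a point process, the sketch below (assuming patterns are stored as numpy arrays of coordinates) draws a realization of a homogeneous Poisson process in the unit square. This is a stand-in example only: simulating an actual Strauss process requires Markov chain Monte Carlo, as discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

# A point pattern in the unit square: an (n, 2) array of coordinates.
# For illustration we draw one realization of a homogeneous Poisson
# process with intensity beta = 50 (not a Strauss process, which
# would require MCMC to simulate).
beta = 50
n = rng.poisson(beta)                    # random number of points
x = rng.uniform(0.0, 1.0, size=(n, 2))   # uniform positions in [0,1]^2

print(x.shape)  # (n, 2): one realization, i.e. one point pattern
```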

Maximum likelihood estimation for point processes:

Let X be a point process.
A density of X can depend on the positions of the points in a given realization, the number of points, the distances between the points, and much more.
Let the density be parametrized by θ, and denote it f(·;θ).

Suppose that we have observed a point pattern, say x. This could be the positions of trees in a forest.
We want to find the best model for x among the densities f(·;θ) for varying θ, i.e. the model under which x is most likely.
This is done (maximum likelihood estimation) by choosing the value of θ that maximizes f(x;θ). The resulting value is called the maximum likelihood estimate (MLE).
We find the MLE by solving the equation d/dθ log f(x;θ) = 0. This is called the maximum likelihood equation.
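A toy example, using a homogeneous Poisson process rather than a Strauss process: its density with respect to the unit-rate Poisson process is f(x;β) = β^n(x) e^(1−β), so the maximum likelihood equation n(x)/β − 1 = 0 gives the closed-form MLE β = n(x). The sketch below (the observed count n(x) = 42 is made up) verifies this numerically by minimizing the negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Observed number of points in the pattern x (hypothetical data).
n_x = 42

def neg_log_lik(beta):
    # -log f(x; beta) = -(n(x) log beta - beta), constants dropped.
    return -(n_x * np.log(beta) - beta)

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 200.0), method="bounded")
print(res.x)  # numerically close to the closed-form MLE n(x) = 42
```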

Partial (profile) maximum likelihood estimation is when some of the parameters (θ can be higher dimensional) are held fixed and the rest are estimated.

Partial maximum likelihood estimation for the Strauss point process:

For a particular class of densities (the exponential model), the maximum likelihood equation becomes E_θ t(X) = t(x), where t(x) is a vector of statistics and E_θ denotes the mean under the parameter θ.

Two points are said to be R-neighbours if they lie closer than R > 0 to each other.
For the Strauss process, θ = (β, γ, R) and t(x) = (n(x), s(x;R)), where n(x) is the number of points in the point pattern x and s(x;R) is the number of pairs of R-neighbours in x.
The Strauss process is an exponential model if R is fixed.
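The statistics t(x) = (n(x), s(x;R)) are easy to compute from a pattern. The sketch below (assuming patterns are stored as numpy arrays) uses torus edge correction, i.e. distances wrap around the boundary of the unit square, as in the simulation above.

```python
import numpy as np

def strauss_statistics(x, R):
    """Return t(x) = (n(x), s(x; R)) for a point pattern x in the
    unit square, using torus (periodic) edge correction."""
    n = len(x)
    d = np.abs(x[:, None, :] - x[None, :, :])  # coordinate differences
    d = np.minimum(d, 1.0 - d)                 # wrap around the torus
    dist = np.sqrt((d ** 2).sum(axis=-1))      # pairwise torus distances
    iu = np.triu_indices(n, k=1)               # count each pair once
    s = int((dist[iu] < R).sum())              # number of R-neighbour pairs
    return n, s

# Example: three points; two are R-neighbours across the boundary.
x = np.array([[0.02, 0.5], [0.98, 0.5], [0.5, 0.5]])
print(strauss_statistics(x, R=0.1))  # (3, 1)
```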

Thus, the partial (R fixed) maximum likelihood equations for the Strauss process are E_{β,γ,R} n(X) = n(x) and E_{β,γ,R} s(X;R) = s(x;R).

In order to calculate the full MLE, the partial MLE (β̂(R), γ̂(R)) is computed for each value of R in a grid. Then the value of the partially maximized likelihood function is computed (up to a constant) for each R, and maximized in order to find the MLE of R and the corresponding MLE of (β, γ).
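The grid search over R can be sketched as follows. The function `partial_loglik` here is a made-up placeholder so the sketch runs; in practice it would evaluate the partially maximized log-likelihood (up to a constant) at the partial MLE for the given R, which itself requires the Monte Carlo machinery described in the next section.

```python
import numpy as np

def partial_loglik(R):
    # Hypothetical stand-in for the profile log-likelihood of R;
    # this toy function simply peaks at R = 0.07.
    return -(R - 0.07) ** 2

# Evaluate the profile log-likelihood on a grid of R values and
# take the maximizer as the MLE of R.
R_grid = np.linspace(0.01, 0.2, 191)
values = np.array([partial_loglik(R) for R in R_grid])
R_hat = R_grid[np.argmax(values)]
print(R_hat)  # close to 0.07 for this placeholder function
```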

Simulation and approximation:

The density can be written as f(x;θ) = h(x;θ)/c(θ), where h(x;θ) is explicitly known, and c(θ) = ∫ h(x;θ) dx is intractable and referred to as the normalizing constant.
Therefore the density cannot be calculated explicitly. However, Markov chain Monte Carlo methods can be used to simulate realizations drawn from a given distribution. Let x_1, ..., x_m be realizations from the distribution with parameters (β, γ, R). Then the mean values are approximated by the sample means,

E_{β,γ,R} n(X) ≈ (1/m) Σ_{i=1}^{m} n(x_i)    and    E_{β,γ,R} s(X;R) ≈ (1/m) Σ_{i=1}^{m} s(x_i;R).
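The sample-mean approximations above can be sketched as follows. The m patterns here are uniform Poisson patterns used only as a stand-in so the sketch runs; real Strauss realizations x_1, ..., x_m would come from an MCMC sampler.

```python
import numpy as np

rng = np.random.default_rng(1)
R = 0.05

def stats(x, R):
    # n(x) and the number of R-neighbour pairs (torus distance).
    d = np.abs(x[:, None, :] - x[None, :, :])
    d = np.minimum(d, 1.0 - d)
    dist = np.sqrt((d ** 2).sum(axis=-1))
    iu = np.triu_indices(len(x), k=1)
    return len(x), int((dist[iu] < R).sum())

# Stand-in for MCMC output: m simulated patterns x_1, ..., x_m.
m = 200
patterns = [rng.uniform(0.0, 1.0, size=(rng.poisson(50), 2))
            for _ in range(m)]

# Sample means approximating E n(X) and E s(X; R).
t = np.array([stats(x, R) for x in patterns])
mean_n, mean_s = t.mean(axis=0)
print(mean_n, mean_s)
```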
It is not possible to compute the density, but using importance sampling (combined with bridge sampling) the density can be approximated up to a constant. No further details will be given here.



This page was last modified on September 28th 2001