Particle Image Velocimetry (PIV) is a non-intrusive measurement technique for studying the velocity of particles in a flow, most commonly either a gas flow in a wind tunnel or a liquid flow of a viscous fluid.  The medium is seeded with tracer particles and illuminated periodically by a high-power light source, often a laser.  Successive digital images are captured with charge-coupled device (CCD) cameras and then analyzed by a computer, which determines the velocities of the tracer particles and, from them, the velocity of the medium.  The advantage of this technique is that it does not require placing any type of probe in the medium, which could affect the overall flow; this lack of interference is what gives it its non-intrusive quality.  Also, where a probe can only measure the velocity at a single point, PIV can return information about the flow of the entire field.



     In this experiment, under the supervision of Dr. Goldstein, we are interested in understanding the fluid mechanical properties of a jet of salt solution of a specific concentration injected into a tank containing a salt solution with a density gradient.  As the jet descends into the tank a corkscrew effect is observed, caused by the difference in density between the jet and the surrounding fluid.  Of particular interest, there appear to be areas of circulation near where the jet enters the tank.  We also know that the jet pulls in fresh water from the top of the tank because of a no-slip boundary, and that this water must return to the top at some point, but we are not sure what path it takes in returning.  For the most part the basic fluid mechanical properties of this system are not understood.  By observing the overall flow of the fluid we hope to find out just what is going on.

     This phenomenon has been observed in laboratory experiments.  In particular, Wu and Libchaber, studying the movement of micron-sized beads in a bath of the bacterium Escherichia coli, report that the motion is due not to Brownian motion but to “transient formations of coherent structures, swirls and jets, in the bacteria bath” [2].  Thus, since this motion has been observed in a biological process, it becomes applicable to a wide range of fields, particularly those interested in chemotaxis.




     For the beginning of this experiment I first concentrated on creating a simple particle tracker.  I created an animation consisting of a single black sphere traveling across a white background.  The path of the sphere could vary, but to keep things simple I had it follow a sinusoidal path from one side of the image to the other.  To determine the location of the sphere in any given image I used a method known as cross-correlation.  This method involves taking a picture of the sphere, referred to as the kernel, and essentially sliding it around the image until a match is found.  At each image position, the intensity of each kernel pixel is multiplied by the intensity of the image pixel beneath it, and the products are summed.  In this way a value is calculated for every pixel in the image and the cross-correlation matrix is formed.  So, if the kernel has dimensions a and b, then the value of the cross-correlation matrix at a given image pixel (x, y) is:
$$C(x,y) = \sum_{i=1}^{a} \sum_{j=1}^{b} \bigl[K(i,j) - s\bigr]\, I(x+i,\, y+j)$$
where I and K represent the image and kernel intensities, respectively, and s is the average kernel intensity [1].  After calculating the cross-correlation matrix we then make a computation similar to a center-of-mass calculation, only now we can think of it as a center-of-intensity equation.  We obtain the center point (x_c, y_c), where
$$x_c = \frac{\sum_{x,y} x \,\bigl[C(x,y) - T\bigr]}{\sum_{x,y} \bigl[C(x,y) - T\bigr]}, \qquad y_c = \frac{\sum_{x,y} y \,\bigl[C(x,y) - T\bigr]}{\sum_{x,y} \bigl[C(x,y) - T\bigr]}$$
Here T is a threshold value that is subtracted from all the points; any negative values are discarded.  This helps eliminate background noise that does not belong to one of the major peaks [1].
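The direct method just described can be sketched in a few lines of NumPy (the function names and this implementation are my own illustration, not the LabVIEW code used in the project):

```python
import numpy as np

def cross_correlate(image, kernel):
    """Slide the kernel over the image and build the cross-correlation matrix.

    At each position the mean-subtracted kernel intensities are multiplied by
    the overlapping image intensities and summed (the direct O(n^4) method)."""
    a, b = kernel.shape
    h, w = image.shape
    s = kernel.mean()  # average kernel intensity
    corr = np.zeros((h - a + 1, w - b + 1))
    for y in range(corr.shape[0]):
        for x in range(corr.shape[1]):
            corr[y, x] = np.sum((kernel - s) * image[y:y + a, x:x + b])
    return corr

def center_of_intensity(corr, threshold):
    """Subtract the threshold, discard negatives, and take the
    intensity-weighted centroid of what remains."""
    c = corr - threshold
    c[c < 0] = 0.0  # discard negative values (background noise)
    ys, xs = np.indices(c.shape)
    total = c.sum()
    return (xs * c).sum() / total, (ys * c).sum() / total
```

With a synthetic blob placed at a known location, the correlation peak and the thresholded centroid both recover that location.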

     This method of cross-correlation is very similar to a technique known as convolution.  In a convolution the intensity of each pixel is replaced by a linear combination of the intensities of its surrounding pixels.  The number of neighboring pixels to combine, as well as the relative weights to use, is based on the size and values of the kernel.  Convolutions are used in a wide variety of pattern-recognition applications.  They can be used to detect distinct objects, as well as to define their edges, and they can serve as part of various thresholding techniques.  All we have to do is change the values in the kernel to get a different type of filter.  As an example, here is a Laplacian filter that could be used as a convolution kernel to extract the edges of an image:
$$\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{pmatrix}$$
So, letting I(x, y) be the intensity of the pixel at position (x, y) in the image, we now have

$$I'(x,y) = 4\,I(x,y) - I(x-1,\,y) - I(x+1,\,y) - I(x,\,y-1) - I(x,\,y+1)$$
If a calculated pixel value is greater than 255 it is replaced with 255; similarly, if it is less than 0 it is replaced by 0.  So the method of cross-correlation is merely an example of performing a convolution where a subset of the previous image is used as the convolution kernel (strictly speaking, a convolution flips the kernel before sliding it, but for a symmetric kernel the computation is identical).
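A small NumPy sketch of such a convolution with clipping (kernel and function names are my own; the 4-neighbor Laplacian is symmetric, so correlation and convolution coincide here):

```python
import numpy as np

# 4-neighbor Laplacian kernel: responds strongly at edges, zero in flat regions.
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def convolve_clipped(image, kernel):
    """Replace each pixel with a weighted combination of its neighbors,
    clipping the result to the valid intensity range [0, 255]."""
    a, b = kernel.shape
    pad_y, pad_x = a // 2, b // 2
    # zero-pad so border pixels also have a full neighborhood
    padded = np.pad(image.astype(float), ((pad_y, pad_y), (pad_x, pad_x)))
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(kernel * padded[y:y + a, x:x + b])
    return np.clip(out, 0, 255).astype(np.uint8)
```

On a step image the filter responds only at the edge, and a uniform image maps to zero in its interior.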

     This algorithm was programmed in LabVIEW, allowing the user to specify the number of images to be processed as well as which image to use as the kernel.  It was later found that LabVIEW has a predefined correlation function.  This function appears to make use of Fast Fourier Transforms (FFTs) to improve its efficiency: the direct algorithm above runs in O(n^4), while the FFT method runs in O(n^2 log n).  However, Fourier transforms are new to me, so I will be spending some time trying to understand this process.  After using the predefined function I was able to get the accuracy down to about one fifth of a pixel in either the x or y direction.
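I do not know what LabVIEW's built-in function does internally, but the frequency-domain idea can be sketched with NumPy's FFT routines: by the convolution theorem, multiplying the image spectrum by the conjugate of the (zero-padded, mean-subtracted) kernel spectrum and transforming back yields the circular cross-correlation.

```python
import numpy as np

def cross_correlate_fft(image, kernel):
    """Cross-correlation via the convolution theorem, O(n^2 log n).

    The mean-subtracted kernel is zero-padded to the image size; the inverse
    transform of image_spectrum * conj(kernel_spectrum) is the circular
    cross-correlation, whose peak sits at the best-match offset."""
    h, w = image.shape
    K = np.zeros((h, w))
    K[:kernel.shape[0], :kernel.shape[1]] = kernel - kernel.mean()
    spec = np.fft.rfft2(image) * np.conj(np.fft.rfft2(K))
    return np.fft.irfft2(spec, s=(h, w))
```

Away from the image borders this agrees with the direct sliding method, so the same peak search and centroid step apply.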

     The next step in my research will be to apply this simple process to track multiple particles.  This can be done by first dividing the original image into a set of interrogation areas.  We then treat each interrogation area as a kernel and perform the same calculations as above.  There are, however, a couple of assumptions that must be made when we divide the image up.  We have to be able to assume that the motion within an interrogation area is homogeneous.  We also assume minimal motion in the third dimension, as well as limited interaction between the particles, such as touching or overlap.  My hope is that we can then look at two successive predictions of the center location for each interrogation area and from them calculate the displacement and velocity, resulting in a nice vector field.
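A rough sketch of how that division might look (the window size, the search range, and all names are my own assumptions, not a final design):

```python
import numpy as np

def piv_vector_field(frame_a, frame_b, window=16, search=4):
    """Divide frame_a into interrogation areas and, for each one, search a
    small neighborhood of frame_b for the best-matching position.
    Returns integer-pixel displacement grids (dx, dy), one vector per area."""
    h, w = frame_a.shape
    ny, nx = h // window, w // window
    dx = np.zeros((ny, nx))
    dy = np.zeros((ny, nx))
    for gy in range(ny):
        for gx in range(nx):
            y0, x0 = gy * window, gx * window
            ker = frame_a[y0:y0 + window, x0:x0 + window].astype(float)
            ker -= ker.mean()  # mean-subtract, as in the single-particle case
            best_score, best = -np.inf, (0, 0)
            for sy in range(-search, search + 1):
                for sx in range(-search, search + 1):
                    yy, xx = y0 + sy, x0 + sx
                    if yy < 0 or xx < 0 or yy + window > h or xx + window > w:
                        continue  # candidate window falls outside frame_b
                    score = np.sum(ker * frame_b[yy:yy + window, xx:xx + window])
                    if score > best_score:
                        best_score, best = score, (sx, sy)
            dx[gy, gx], dy[gy, gx] = best
    return dx, dy
```

Shifting a random texture by a known amount and recovering that shift per interrogation area is a convenient sanity check.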

     There are a couple of different filters that we can apply to the resulting correlation matrix in order to calculate the center location.  I am currently looking into methods known as “fuzzy filters” that apply a weighted average to the larger peaks in the matrix and use this average to predict the center location.  I believe the weighting is based on an extrapolation of the previous motion of a given interrogation area; thus a peak is more likely to represent the true center location if it lies closer to the path the interrogation area appears to be moving along.  We can use a similar averaging technique to filter out the spurious vectors that tend to appear.  These vectors end up being much larger than, and often pointed in a completely different direction from, their surrounding neighbors, which suggests that they violate conservation-of-flow principles.  We can therefore strongly suspect that they arose from small inconsistencies in the image (most likely noise) and can justify removing them and replacing them with an average of their neighboring vectors.
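The neighbor-averaging idea can be sketched as follows (the median test and tolerance value are my own choices for illustration):

```python
import numpy as np

def remove_spurious(dx, dy, tol=2.0):
    """Replace vectors that differ strongly from the median of their
    neighbors with the average of those neighbors."""
    dx_f, dy_f = dx.copy(), dy.copy()
    ny, nx = dx.shape
    for gy in range(ny):
        for gx in range(nx):
            nb_x, nb_y = [], []
            for yy in range(max(gy - 1, 0), min(gy + 2, ny)):
                for xx in range(max(gx - 1, 0), min(gx + 2, nx)):
                    if (yy, xx) != (gy, gx):  # exclude the vector itself
                        nb_x.append(dx[yy, xx])
                        nb_y.append(dy[yy, xx])
            med_x, med_y = np.median(nb_x), np.median(nb_y)
            # a vector far from its neighborhood median is considered spurious
            if np.hypot(dx[gy, gx] - med_x, dy[gy, gx] - med_y) > tol:
                dx_f[gy, gx] = np.mean(nb_x)
                dy_f[gy, gx] = np.mean(nb_y)
    return dx_f, dy_f
```

The median is used for detection because a single large outlier barely moves it, while the mean of the neighbors gives a smooth replacement value.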

     Below are two sample vector fields calculated using the correlation method from a set of images found on a web site designed to create a standard for comparing results from different tracking algorithms.  The images are from a flow with wall shear at the top, which can be seen in the small-magnitude vectors in the upper right of the field and the larger vectors in the lower portion.






In the image on the left the length of each vector represents the actual displacement of the given kernel.  In some cases, however, this displacement was very small and the resulting vector was hard to see.  I therefore created a second visualization that draws all vectors at the same length but color-codes them according to their magnitude; in the example, warmer colors represent faster vectors.
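The second visualization can be prepared by normalizing each vector to a fixed length and keeping its magnitude as a separate value to map to a color (a NumPy sketch with names of my own; the drawing itself would be done with whatever quiver-style plotting tool is at hand):

```python
import numpy as np

def equal_length_vectors(dx, dy, length=1.0):
    """Scale every vector to the same length; return the scaled components
    plus the original magnitudes for color-coding (warmer = faster)."""
    mag = np.hypot(dx, dy)
    safe = np.where(mag == 0, 1.0, mag)  # avoid dividing zero-length vectors
    return length * dx / safe, length * dy / safe, mag
```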

     While doing research on the correlation method for particle tracking I also came across several other interesting tracking algorithms.  For example, there is a method known as IPAN, a non-iterative, competitive linking algorithm that builds a list of trajectories for a set of feature points and analyzes the probability of each particle lying along one of the already established trajectories.  This method belongs to a family known as feature-based particle tracking, which distinguishes notable features in the image using convolutions and then tracks the motion of those features.  The IPAN tracker is notable because it is non-iterative, meaning it does not need to perform a calculation on every single pixel as the correlation method does.  It can also handle particles that disappear for a few frames and then reappear.  However, this method seems best suited to images with a lower particle density, and, due to the complexity of the algorithm, we decided not to try to encode it.  More information about the IPAN tracker can be found at:


LabVIEW code for the particle tracker can be found here:  Click on the following link for instructions for using this program.




[1] Gelles, J., et al., “Tracking kinesin-driven movements with nanometer-scale precision,” Nature 331: 450–453 (1988).

[2] Wu, X.-L., and Libchaber, A., “Particle Diffusion in a Quasi-Two-Dimensional Bacterial Bath,” Physical Review Letters 84: 3017 (2000).