Vector Maxwell Simulations
For the past year I have been working with the Arizona Center for Mathematical Sciences under Professor Moysey Brio, developing and expanding a set of Vector Maxwell simulators. The software packages MX2D and MX3D are powerful finite difference time domain (FDTD) solvers for Maxwell's equations. This code is being used to study the interaction of light with nanoscale (10^-9 meter) dielectric and metallic features. Specific applications include designing high-density optical data storage media (CD/DVD), studying light reflection from and transmission through photonic Bragg dielectric structures, and modeling local field (surface plasmon) excitations on nanometer-thick metal films and nanospheres. Other potential applications include biosensing and medical diagnostics.
The simulations define a two- or three-dimensional space as a matrix of cells over which the calculations are performed. The size (and therefore number) of cells determines the precision of the simulation. With a very large number of cells a run can take days or weeks to process; with too few cells we get an inaccurate portrayal of what really happens. This was the motivation for both Adaptive Mesh Refinement (AMR) and Non-Uniform Grid Spacing: they allow us to have detailed resolution in the areas of interest and coarse resolution in the less important regions. In the AMR code there is a hierarchy of 'boxes', where the largest box defines the domain (at a low resolution) and smaller, higher-resolution boxes can be placed inside it. The Non-Uniform Grid Spacing approach instead lets the user specify ranges along any axis (x, y, or z) over which to increase the resolution. This allows for a single computational matrix, whereas the AMR approach leads to some fairly complex data structures.
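To give a feel for the kind of calculation performed over this matrix of cells, here is a minimal 2D FDTD (Yee scheme) sketch in Python. This is an illustration of the method only, not the actual MX2D code, which is written in C; the grid sizes, source, and polarization (TM, with field components Ez, Hx, Hy) are chosen arbitrarily for the example.

```python
import numpy as np

# Minimal 2D FDTD (Yee scheme) sketch: TM polarization, uniform grid.
# Illustrative only -- not the MX2D source code.
def fdtd_2d(nx=60, ny=60, steps=40, courant=0.5):
    ez = np.zeros((nx, ny))        # electric field at cell centers
    hx = np.zeros((nx, ny - 1))    # magnetic fields on staggered edges
    hy = np.zeros((nx - 1, ny))
    for n in range(steps):
        # Update magnetic fields from the curl of Ez
        hx -= courant * np.diff(ez, axis=1)
        hy += courant * np.diff(ez, axis=0)
        # Update Ez in the interior from the curl of H
        ez[1:-1, 1:-1] += courant * (np.diff(hy, axis=0)[:, 1:-1]
                                     - np.diff(hx, axis=1)[1:-1, :])
        # Soft point source: a short Gaussian pulse at the domain center
        ez[nx // 2, ny // 2] += np.exp(-((n - 15) / 5.0) ** 2)
    return ez
```

Every cell is updated at every time step, which is why doubling the resolution in each dimension multiplies the work so dramatically — and why concentrating fine cells only where they are needed pays off.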
There are essentially four different simulators; three of them are closely related and one has been largely built from scratch. The former three are all based on computational code designed by Aramis Zakharian and programmed purely in C: a 2D version (MX2D), a 3D version (MX3D), and an early AMR (adaptive mesh refinement) version. Both MX2D and MX3D are capable of Non-Uniform Grid Spacing and are equipped with a Python (http://www.python.org/) interface allowing a simulation to be set up in a simple Python script. This lets us change the simulation variables without recompiling the whole program, which saves researchers huge amounts of time. The Python script is also simple and uncluttered, making it possible for people with no programming experience to set up and run a simulation with only the simulation's user manual in hand. The fourth simulator was written by Colm Dineen in C++ and Fortran using the Chombo framework (designed at Lawrence Berkeley National Laboratory, http://seesar.lbl.gov/anag/chombo/index.html). This version also uses AMR, but it allows for distributed-memory parallel processing, while the C version of the AMR code has only shared-memory parallel processing capabilities due to the complexity of its data structures. Both AMR versions are fairly new and suffer from some high-frequency noise at the mesh refinement boundary, but only on long simulations.
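The following sketch shows the flavor of such a Python setup script. The function and parameter names here are hypothetical, invented for illustration — they are not the solvers' actual interface.

```python
# Illustrative sketch of the kind of Python setup script the MX2D/MX3D
# interface enables. All names and parameters below are hypothetical,
# not the real API.
simulation = {
    "domain":     {"size_um": (4.0, 4.0, 2.0), "resolution": 40},  # cells/micron
    "source":     {"type": "plane_wave", "wavelength_um": 0.65, "direction": "+z"},
    "geometry":   [{"shape": "sphere", "center_um": (2.0, 2.0, 1.0),
                    "radius_um": 0.25, "material": "gold"}],
    "observers":  [{"type": "flux_plane", "normal": "z", "position_um": 1.8}],
    "time_steps": 2000,
}

def run(sim):
    # In the real software this hands the parameters to the compiled C core;
    # changing them means editing only this script, not recompiling.
    ...
```

The point is that everything a researcher might want to vary lives in one short, readable script, while the compiled C core stays untouched.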
My first project was to create a library of functions in C++ that would draw some basic geometric shapes into the matrix of cells for Colm Dineen's Chombo based simulator. These functions allow the user to simply specify the location, size, and material for the object, which is then drawn into the appropriate matrices for each level of mesh refinement. I later expanded on these geometric primitives and designed functions that can automatically draw triangular and square lattice structures. This library of functions is designed to simplify the setup of a simulation and eventually be linked to an interface, which will eliminate the compile step.
An example geometry created using the geometric functions I wrote.
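The core idea of such a primitive can be sketched as follows: rasterize a shape into the matrix of cells by marking every cell whose center lies inside it with a material ID. This is an illustrative Python sketch, not the actual C++ code written for the Chombo-based simulator (which fills the appropriate matrix at each level of mesh refinement).

```python
import numpy as np

# Sketch of a geometric primitive: fill a sphere into a 3D matrix of
# material IDs. Illustrative only -- the real library is C++ and handles
# multiple AMR levels.
def draw_sphere(grid, center, radius, material_id, cell_size=1.0):
    # Coordinates of every cell, scaled by the physical cell size
    x, y, z = np.indices(grid.shape) * cell_size
    inside = ((x - center[0]) ** 2 + (y - center[1]) ** 2
              + (z - center[2]) ** 2) <= radius ** 2
    grid[inside] = material_id
    return grid
```

The lattice functions mentioned above amount to repeating such a primitive over a triangular or square array of center points.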
By using the ChomboVis visualization software (http://seesar.lbl.gov/anag/chombo/chombovis.html) we are able to view the matrix of cells in full 3D, and even view several levels of the AMR at the same time. The second project I have been working on also uses the Chombo library. This time we wanted a standalone converter that could read in binary files dumped by the MX3D software and then use the Chombo library to create an .hdf5 file (http://hdf.ncsa.uiuc.edu/HDF5/) that can be viewed in full 3D using ChomboVis. Before this code was written you could only view a 2D slice of a single level of mesh refinement; now we can view all levels of mesh refinement at the same time in full 3D.
An example of an HDF5 file created by my program. You can see two levels of mesh
refinement at the same time, in real-time 3D.
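The essence of the converter can be sketched like this: read a raw binary field dump into an array and rewrite it as HDF5. This is a simplification — the real converter uses the Chombo library to emit the specific multi-level file layout ChomboVis expects, and the dataset names and shape handling below are invented for illustration.

```python
import numpy as np
import h5py

# Sketch of the converter's core idea: raw binary dump -> HDF5 dataset.
# Dataset path "level_0/Ez" is illustrative, not Chombo's actual layout.
def convert_dump(binary_path, hdf5_path, shape, dtype=np.float64):
    field = np.fromfile(binary_path, dtype=dtype).reshape(shape)
    with h5py.File(hdf5_path, "w") as f:
        f.create_dataset("level_0/Ez", data=field)
    return field
```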
This project has proven useful to the researchers, so I did some work to make it a more complete and polished application. I added command-line options that let the user select the range of time steps to convert and the components to extract (the various fields or the geometry layout), along with several other conveniences.
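A sketch of such a command-line interface, using Python's standard argparse module; the option names and the step-range syntax here are illustrative, not the converter's real flags.

```python
import argparse

# Illustrative CLI for a dump converter; flag names are hypothetical.
def build_parser():
    p = argparse.ArgumentParser(description="Convert MX3D binary dumps to HDF5")
    p.add_argument("--steps", default="0:100",
                   help="range of time steps to convert, e.g. 10:50")
    p.add_argument("--component", action="append",
                   choices=["Ex", "Ey", "Ez", "Hx", "Hy", "Hz", "geometry"],
                   help="field component(s) or geometry layout to extract")
    return p

def parse_steps(spec):
    # "5:8" -> the inclusive range of time steps 5, 6, 7, 8
    start, stop = (int(s) for s in spec.split(":"))
    return range(start, stop + 1)
```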
The rest of my time has been spent updating the MX3D simulation (the non-AMR C version) to Version 1.5, and eventually 2.0. The first priority was to tidy up the interface as well as the general code. At the same time I have been implementing new geometries and features for researchers using the software, eliminating some bugs and inefficiencies, and updating the user manual to reflect these changes (available on the ACMS website, www.acms.arizona.edu). The goal has been to produce a more complete, efficient, and logically coded piece of software that we can distribute to the researchers.
Much of the code for the simulations was written by people with more physics experience than programming experience, and in places it is confusing and very difficult to maintain. I have been going through this code file by file, trying to make it more logical, readable, and efficient. I have reorganized and simplified the observer and source functionality, the geometric functions, the interface functions, and various helper functions. In particular I have minimized the function calls that take place inside the simulation loop, as well as the unnecessary logic operations. I have also worked on making the Python interface more consistent, both internally and with the other Vector Maxwell solvers. Finally, I have been deleting the clutter of unused and outdated code from the core of the simulation.
I have also been adding a lot of new functionality to the code. One of the first things I did was rework the output function so that each resulting time frame can be viewed as it is output, rather than having to wait for the simulation to finish in its entirety. This allows researchers to look at preliminary results as the simulation runs to make sure it is set up correctly, which is especially helpful when a simulation can take over 12 hours to complete. I changed the output model so that the output function is now accessed through a 'Domain_Observer'. The user can create multiple Domain_Observers, which switch on and off at different times and have their own temporal resolution (the number of time frames output over their lifetime). I have also added interface code that lets you choose which fields to output, and whether to show the observers, sources, and processor division inside the layout file. You can also output the magnitude of the field vectors rather than just the components in the x, y, and z directions. In the interface you can now 'name' your observers and sources and then set up the field they work with, the shaping of the time pulse to use, when they turn on and off, and so on.
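The Domain_Observer model described above can be sketched roughly as follows. The class and attribute names are hypothetical stand-ins for the real interface; the sketch only shows the idea of an observer with its own lifetime and temporal resolution.

```python
# Rough sketch of the Domain_Observer idea; names are hypothetical,
# not the actual MX3D interface.
class DomainObserver:
    def __init__(self, name, fields, t_on, t_off, frames):
        self.name = name                     # user-chosen observer name
        self.fields = fields                 # components to output, e.g. ["Ez", "|E|"]
        self.t_on, self.t_off = t_on, t_off  # time steps when it is active
        self.frames = frames                 # frames output over its lifetime

    def output_times(self):
        # Evenly spaced output steps within the observer's lifetime
        span = self.t_off - self.t_on
        return [self.t_on + (i * span) // (self.frames - 1)
                for i in range(self.frames)]
```

Several such observers can coexist, each covering a different window of the run at a different temporal resolution.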
While reorganizing the functions that draw geometries, I have also added some new geometries for the researchers using the simulation. I added a function that draws what is essentially a 2D parabolic curve extruded into 3D; this can be used to simulate a parabolic mirror for optical research, among other things. I also created a function that draws a 3D cone, which can be used in optical simulations as a guide for the light. Finally, I have looked into creating a set of functions that would rotate and translate the geometries so that they could be placed at any angle; while not yet implemented, this is probably something I will do soon.
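A cone primitive along these lines can be sketched as below: the cone's radius grows linearly with distance from the apex, and every cell inside it is stamped with a material ID. Again this is a Python illustration with invented parameters, not the simulator's C code.

```python
import numpy as np

# Sketch of a 3D cone primitive: apex on the z-axis at z = apex_z,
# radius growing linearly to base_radius over `height` cells.
# Illustrative only.
def draw_cone(grid, apex_z, height, base_radius, material_id):
    nx, ny, nz = grid.shape
    x, y, z = np.indices(grid.shape)
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0   # cone axis at the grid center
    t = (z - apex_z) / float(height)          # 0 at the apex, 1 at the base
    local_r = base_radius * t                 # cone radius at each z
    inside = (t >= 0) & (t <= 1) & ((x - cx) ** 2 + (y - cy) ** 2 <= local_r ** 2)
    grid[inside] = material_id
    return grid
```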
A bug was also found in the generation of planes for the observers: for example, when calculating the flux through a certain plane inside the domain, a line of cells would be missing from the plane. As a quick fix I had already created a simpler version in which a plane can be placed parallel to the XY, YZ, or ZX planes.
I just finished implementing and testing an interface to the Non-Uniform Grid Spacing in the MX3D software. It allows the user to specify, inside the Python script, which axis to increase the resolution on (x, y, or z); exactly where to change the resolution of the grid, in real-world coordinates (microns or meters); what resolution is wanted in that specified area; and how large an area to use to transition between the cell sizes (essentially, how smooth a transition is wanted). The transition uses the hyperbolic tangent function to compute a series of ratios between the two cell sizes. The result is a smooth change in cell size going into and coming out of the mesh refinement region. The purpose of this transition is to reduce the high-frequency noise that builds up around sudden changes in cell size. The code automatically maintains the domain size by adding and removing cells as they are resized.
You can see the high-frequency noise around the mesh refinement region in this image.
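The tanh-based transition can be sketched as follows: sample the hyperbolic tangent over a fixed interval, rescale it to run from 0 to 1, and use it to blend the coarse cell size into the fine one. The sampling interval and number of transition cells here are illustrative choices, not the values MX3D uses.

```python
import numpy as np

# Sketch of the tanh transition between two cell sizes: a smooth series
# of cell widths from coarse to fine over n transition cells.
# Illustrative of the scheme, not the MX3D source.
def transition_cells(coarse, fine, n):
    # tanh sweeps smoothly from about -1 to 1 over this interval
    s = np.tanh(np.linspace(-2.5, 2.5, n))
    w = (s - s[0]) / (s[-1] - s[0])        # rescaled to exactly 0 -> 1
    return coarse + (fine - coarse) * w
```

Because the widths change gradually rather than jumping, reflections (the high-frequency noise) at the resolution boundary are reduced.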
Finally, I set up a framework to 'parallelize' the observers; that is, I set up a new 'communicator' within MPI for the observers to use. We use observers to calculate certain quantities (such as the electromagnetic flux going through a plane), but when an observer was split over multiple processors, each processor would write a separate file, which then had to be post-processed and merged into a single file. I improved this so the pieces use MPI to communicate with each other and write their data to a single file as the simulation runs. So far I have implemented this for the 'Flux Plane' observers and the 'Source Record' and 'Playback' observers. Each observer has a unique ID that is used as a message tag on the new observer communicator, preventing interference among the messages passed between observers and the core of the simulation. These new observer functionalities have been rigorously tested for bugs, accuracy, and performance.
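The merge-at-a-single-writer scheme can be sketched like this. Each rank holds the part of the observer's data in its own subdomain; messages carry (observer ID, rank, data), with the observer's unique ID doubling as the MPI tag so different observers' traffic cannot interfere. The actual MPI send/receive calls on the dedicated communicator are abstracted into a plain list here for illustration.

```python
import numpy as np

# Sketch of merged observer output. Each message is (observer_id, rank,
# data); observer_id plays the role of the MPI tag on the observer
# communicator. MPI calls themselves are abstracted away.
def merge_observer_output(messages, observer_id):
    # Keep only messages tagged for this observer, order them by rank,
    # and concatenate into the single array written to one output file.
    parts = sorted((m for m in messages if m[0] == observer_id),
                   key=lambda m: m[1])
    return np.concatenate([m[2] for m in parts])
```

In the real code the filtering happens implicitly: a receive posted with a given tag on the observer communicator only matches messages from that observer.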
Research and Publications:
The code I have written is currently being used and adapted by graduate students doing Ph.D. research. The MX2D code has been used by Optical Sciences undergraduate students, and the MX3D code will most likely be used in this manner as well. The MX2D and MX3D software was used for the research behind a general article entitled "Interaction of Light with Subwavelength Structures" by Mansuripur, Zakharian, and J. Moloney, illustrating the application of the code to high-density optical data storage, which appeared in the March 2003 issue of the Optical Society of America magazine, Optics and Photonics News (OPN). Two more back-to-back articles entitled "Transmission of Light Through Small Elliptical Apertures (Part I and II)," by the same authors, will appear in the upcoming March and April issues of OPN.