Concept Study Report


T. W. Hill; Rice University

F. R. Toffoletto; Rice University

M. A. Heinemann; Phillips Laboratory

G. M. Erickson; Boston University



Final Project Report for NSF Grant ATM-9704563. May 4, 1998





Background

Three-Phase Implementation Plan

Summary Results of Community Survey


Underlying GEM Objectives

Science Motivations for a Modular-Progressive GGCM

Operational Advantages of a Modular-Progressive GGCM


Concept Definition

Code Structure


Boundaries

Numerical Considerations

A Prototype GGCM


Development/Operations Center

Module Development Grants

Oversight

Rules of the Road


Development Time

Development and Operations Cost


Conclusions

Recommendations


APPENDIX: Detailed Results of Community Survey





The Geospace Environment Modeling (GEM) Program, supported by the Magnetospheric Physics Program office of the National Science Foundation (NSF), is dedicated to the construction of a Geospace General Circulation Model (GGCM), a numerical research model that will test and advance our understanding of geospace dynamics, and will ultimately codify that understanding for the purpose of operational space environment forecasting. Three concept studies were commissioned in May 1997 to examine three possible approaches to future GGCM development and implementation. The study reported here focuses on a fully modular programming approach that we call modular-progressive: "modular" because the geospace system is divided for numerical modeling purposes into a set of discrete but mutually interacting numerical modules, each representing either a physical domain or a boundary between domains, and "progressive" because the system is designed to be adaptable to new physical insights, numerical techniques, and computer architectures as they become available.

The first step in our study was to conduct a survey of the GEM research community. The results of this survey confirmed the guidance received from the founders of the GEM program in two important respects: the GGCM must be global in scope, extending from the upstream solar wind to the conducting layers of the atmosphere, and it must be flexible, able to incorporate a wide variety of physical hypotheses as well as empirical data. Both of these attributes are included, by design, in the modular-progressive approach. A review of the fundamental scientific questions that a GGCM is expected to address reveals that most are amenable to a modular-progressive approach, and many appear to require such an approach.

A modular-progressive GGCM code requires three program elements: a collection of independently developed science modules, a control program to provide coupling among the modules and to orchestrate their simultaneous operation, and a user interface program to provide easy GGCM access to the GEM research community. We have provisionally identified an optimum set of five regional "domain" modules (magnetosheath, tail lobes, plasma sheet, ring current, and ionosphere) and a corresponding set of seven "boundary" modules (magnetopause, plasma-sheet boundary layer, tail-dipole transition region, ring-current field lines, cusp/boundary layer, polar-cap field lines, and plasma sheet field lines). Thus each domain module is coupled through a boundary module to each of its immediate neighbors as well as to the ionosphere. The boundary modules provide the mechanism not only for numerical coupling between adjacent domains but also for explicit simulation of boundary physics that is not resolved spatially or temporally by the domain modules or by the control program. For each module type we have identified at least two, and in some cases many, candidates from the published literature – existing independent models that could, with some further work, be cast in the form of modules suitable for inclusion in the modular GGCM. Thus a modular-progressive GGCM can generate a large variety of physically distinct, globally coupled geospace models within a single computational framework. This opens the possibility of controlled numerical experiments in which a single algorithm or a single parameter can be varied in a controlled fashion from one computer run to the next. This would provide an invaluable tool for unraveling the chain of cause and effect in the complex, nonlinear geospace system.
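The relationship among domain modules, boundary modules, and the control program can be sketched in code. The classes and method names below are purely illustrative assumptions of ours, not a proposed GGCM interface; the sketch shows only how a control program might gather boundary conditions from boundary modules and advance every domain module in a synchronized step.

```python
# Illustrative sketch of a modular control loop: domain modules advance
# their own internal physics; boundary modules translate state between
# neighboring domains. All names here are hypothetical.

class DomainModule:
    """A region of geospace (e.g. the plasma sheet) with its own solver."""
    def __init__(self, name):
        self.name = name
        self.state = {}          # region-internal variables

    def advance(self, dt, boundary_conditions):
        # A real module would integrate its governing equations here,
        # driven by the supplied boundary conditions.
        self.state["t"] = self.state.get("t", 0.0) + dt

    def export_boundary(self):
        # Quantities offered to the adjacent boundary modules.
        return dict(self.state)

class BoundaryModule:
    """Couples two domains and may carry its own boundary physics."""
    def __init__(self, side_a, side_b):
        self.side_a, self.side_b = side_a, side_b

    def conditions_for(self, domain):
        other = self.side_b if domain is self.side_a else self.side_a
        # A real boundary module could add physics unresolved by the
        # domains themselves (e.g. a prescribed merging rate).
        return other.export_boundary()

def control_step(domains, boundaries, dt):
    """One synchronized step: gather BCs, then advance every domain."""
    bcs = {d.name: [b.conditions_for(d) for b in boundaries
                    if d in (b.side_a, b.side_b)] for d in domains}
    for d in domains:
        d.advance(dt, bcs[d.name])

lobes, sheet = DomainModule("tail lobes"), DomainModule("plasma sheet")
psbl = BoundaryModule(lobes, sheet)   # plasma-sheet boundary layer
for _ in range(10):
    control_step([lobes, sheet], [psbl], dt=1.0)
```

The essential design point is that each domain module sees its neighbors only through the boundary modules, so any module can be swapped for a physically different candidate without touching the rest of the code.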

We recommend that a GGCM Development/Operations Center be set up to carry out the work of developing a control program and user-interface program, and establishing a uniform set of boundary-condition protocols for each module type. An early step in this process should be a series of workshops open to the community of potential users, to choose among various options that will affect code design. These options include the choice of an optimum set of domain and boundary module types, the content and protocol of boundary conditions for each module type, the choice of a numerical storage strategy for the control program (basis-function representations versus grids of various types), and the desired format of input and output data streams. The result of these workshops should be a set of specifications to guide the code development effort.

We estimate that the development and validation of a working modular GGCM can be accomplished in three years, including the initial workshop-mediated design phase, by a properly staffed and funded development center. We estimate the required staff to be of the order of two full-time scientific programmers, including at least one computational physicist, together with a part-time senior scientist overseeing and managing the development activity. We estimate the required funding for this center to be of the order of $400K/yr if full salaries and 50% overhead are charged. If a cooperating government laboratory were to contribute staff salaries and waive overhead, the incremental cost of GGCM center operation could be as low as the order of $60K/yr. A similar level of effort would probably be required after the development phase to maintain, operate, and upgrade the code and train others in its use.

We also recommend that a similar level of effort be devoted to module development by the GEM research community through peer-reviewed research grants. Broad-based community involvement in GGCM development and use is a good thing, both scientifically and programmatically, and the modular-progressive approach is an ideal vehicle for promoting such involvement.

Acknowledgments –


We are grateful to the GEM program management and the GGCM Steering Committee for their energetic efforts to initiate development of a full-scale GGCM, and for allowing us to be a part of that effort. We thank Dick Wolf for his critical reading of our draft and for his insightful suggestions and comments. This study was supported by the GEM Program under NSF Grant ATM-9704563.

Authors –

Thomas W. Hill and Frank R. Toffoletto, Space Physics and Astronomy Department, Rice University, MS 108, Houston, TX 77251-1892.

Michael A. Heinemann, Air Force Research Laboratory/VSBS, 29 Randolph Road, Hanscom AFB, MA 01731-3010.

Gary M. Erickson, Center for Space Physics, Boston University, 725 Commonwealth Avenue, Boston, MA 02215.




The objective of the NSF Geospace Environment Modeling (GEM) program is to advance our scientific understanding of the dynamics of the complex, globally coupled geospace system comprising the magnetosheath, the magnetosphere, and the ionosphere/thermosphere. The Geospace General Circulation Model (GGCM) is envisioned as a mechanism both for obtaining this advanced understanding and also for codifying that understanding, once obtained, for use in space-weather forecasting and public education purposes. These three functions (research, forecasting and education) are equally important in the long run, but the second and third depend on the first: a model that does not get the physics right will have negative, if any, impact on forecasting and education. Thus, the GGCM must be designed from the outset as a research tool, but with one eye on future applications as a forecasting and education tool.

Research tools exist to test and refine hypotheses. The GGCM must be an unusually versatile research tool because it must simultaneously test and refine a wide variety of different hypotheses applied to different components of a complex, tightly coupled nonlinear system. There are fundamental unanswered questions about the dynamics even of individual geospace regions, let alone the coupled behavior of the whole system. These questions have been enumerated elsewhere (for example, in the GGCM planning document on which this study is based [Wolf et al., 1996]), and will not be repeated here. The important point is that the GGCM must accommodate a wide variety of disparate physical processes and hypotheses, some of which may not even have been formulated yet. No single scientist or co-located research group possesses all the expertise and insight that are needed to achieve a predictive understanding of geospace dynamics. By the same token, no single numerical algorithm, however sophisticated, can be expected to do the whole job. Thus the GGCM enterprise was conceived as a distributed, community-based effort requiring, therefore, a modular numerical-modeling approach.

It was originally envisioned [Roederer, 1988] that the assembly of a GGCM would be undertaken in the final stages of the GEM program, to codify the results of the preceding series of region-specific research campaigns. It soon became apparent, however, that even a rudimentary GGCM could provide a powerful tool to accelerate the progress of ongoing GEM campaigns. Thus, a GGCM working group was set up in late 1992, and was elevated to "campaign" status in late 1995 to run in parallel with ongoing and future region-specific campaigns. This GGCM campaign has fostered exploratory collaborative efforts between individual research groups (e.g., embedding the Rice Convection Model in the Dartmouth/Goddard/NRL global MHD code [Lyon et al., 1995], and coupling the Birn-Hesse 3D magnetotail MHD model to the Rice Convection Model [Toffoletto et al., 1996]), and has also taken on the larger task of guiding the development of a full-scale GGCM during, rather than after, the completion of the region-specific GEM research campaigns. Community-wide discussions at the 1995 and 1996 GEM Summer Workshops at Snowmass, CO, revealed a broad consensus that the time was right to pursue GGCM development in earnest. There was little consensus, however, on the computational form that such a GGCM should take, and in particular, the extent to which it should be modularized. The result of these community-wide discussions was a GGCM planning document [Wolf et al., 1996] that laid out the motivations and requirements for rapid deployment of a research GGCM, and called for rapid concept studies to explore alternative code structures.

Three such concept studies were funded in May, 1997. The present document is the final report for one of these studies (NSF Grant ATM-9704563) conducted at Rice University (T. W. Hill, PI) to examine the fully-modular end of the spectrum of possible code structures. Another study at Dartmouth College (J. G. Lyon, PI) has focused on the opposite end of the spectrum, in which a large-scale magnetohydrodynamic (MHD) code provides a global computational "spine", and non-MHD effects ("modules") are incorporated either as effective transport coefficients in the fluid equations, or as test particles, or as explicit over-rides of the MHD variables. The third concept study, conducted at TRW-Colorado Springs (A. E. Ronn, PI), has addressed the important issue of data-stream management, irrespective of scientific code structure.

Three-Phase Implementation Plan


Further discussions at the Summer 1997 Snowmass GEM workshop produced an apparent consensus in favor of a three-phase GGCM implementation plan, in which the first two phases exploit existing models and the third phase comprises the planned development of a fully-coupled GGCM. Phase 1 will provide a catalog of numerical and graphical results from existing research codes for a prescribed set of representative inputs, available by Internet and/or CD-ROM to the GEM research community. Phase 2 will provide to the GEM research community the capability to run existing research codes for user-specified inputs that are not in the existing catalog. Phase 3 encompasses the development of the fully coupled GGCM. The three-phase plan is further described by Wolf and Hesse [1997].

The present concept study relates to Phase 3 of GGCM development under this three-phase plan. It is worth noting, however, that the modular-progressive approach that we discuss herein is well suited to the three-phase plan that has been adopted. The success of a modular GGCM depends on the availability of reliable stand-alone science modules, and the planned Phase 1 and 2 efforts will, among other things, accelerate the development and validation of suitable science modules.

Summary Results of Community Survey


The first step of our concept study was to survey the needs and opinions of the GEM research community, both to validate and to expand upon the guidance provided in the GGCM planning document [Wolf et al., 1996]. In consultation with the other two study groups, we devised and conducted a World-Wide-Web survey with an announcement in the GEM Messenger email newsletter. The same survey form was circulated in hard copy at the Summer 1997 GEM workshop. We were encouraged by the strong response (63 individuals) and by the fact that the response statistics did not change noticeably when the 27 responses from the Summer workshop were added to the 36 pre-workshop responses. There is, if anything, a benign bias toward the opinions of those who are likely to actually use a GGCM, and who therefore took the time to fill out the form.

The Appendix includes the survey form and a graphical display of complete survey results. The survey results are confirmatory in that none of the GGCM attributes identified in the planning document were viewed as unimportant by a majority of respondents. The survey did, however, draw distinctions among these attributes; some are viewed as more important than others. A glance at the charts in the Appendix reveals that three GGCM attributes stand out in importance in the view of survey respondents:

Global Coverage of Geospace

Flexibility (ease of incorporating new model algorithms and/or data products)

Data assimilation (ability to override computed results with actual data)

Other desirable attributes (e.g., rigorous self-consistency, detailed code documentation, point-and-click user interface) were assigned a lower priority by survey respondents. Of these three paramount attributes, the first (global coverage) is equally attainable by either a modular-progressive or an MHD-spine approach. The other two attributes (flexibility and data assimilation) are included by design in a modular-progressive approach; their inclusion in an MHD-spine approach, though perhaps possible, is by no means automatic.


Underlying GEM Objectives


The GGCM planning document [Wolf et al., 1996] presents an extensive, if not exhaustive, list of 41 hypotheses to be tested and 13 research challenges to be addressed by the GGCM. Most of these hypotheses and challenges fall under one of the following broad objectives of the GEM program:

Understanding solar-wind/magnetosphere coupling;

Understanding magnetospheric substorms;

Understanding magnetic storms; and

Understanding the aurora.

These four general objectives can, in turn, be lumped under the even more general heading of

Learning how to predict space weather.

These overall goals should be kept constantly in mind during the development and operation of a GGCM. In thinking about the actual design of a GGCM code, however, it is equally necessary to look in some detail at the various specific research problems that will have to be addressed along the way to achieving the overall goals. The nature of the specific problem being addressed determines the type of numerical approach best suited to its solution.

Some problems clearly benefit from a global MHD simulation approach. A recent example is the explanation of the "sigmoid" shape of minimum-|B| contours observed near the magnetopause in a magnetotail cross section. This structure can be understood, in retrospect, as the necessary downstream signature of the antiparallel merging process at the high-latitude dayside magnetopause [Crooker, 1979], but it was first successfully simulated by the use of a global MHD code [Siscoe, 1997]. There are, no doubt, many other examples of problems in solar-wind/magnetosphere coupling that are amenable to a global MHD simulation approach. Our task here is not to enumerate them, but simply to acknowledge that they exist. Thus, a modular-progressive GGCM should not be viewed as a replacement for the global MHD simulation procedure, but rather as a complement to that procedure.

There is another class of problems, involving non-MHD effects, that may be addressable equally well through a modified MHD simulation approach or a more fully modular approach. The following list from the GGCM planning document update [Wolf and Hesse, 1997] serves to illustrate this class of problem:

Magnetopause transfer processes


Many-component inner magnetospheric plasma (radiation belts, plasmasphere...)

Ionosphere and thermosphere

Exosphere (controls particle loss by charge exchange)

Pitch-angle scattering (controls particle loss by precipitation)

Acceleration of auroral electrons downward (affects ionospheric conductivity, winds...)

Acceleration of ionospheric ions upward (source of magnetospheric particles)

Many of these effects can plausibly be included in a suitably-modified global MHD code, either by devising effective fluid transport coefficients or source/loss terms (e.g., magnetopause transfer processes, charge-exchange loss, pitch-angle scattering), or by the introduction of test particles that do not, by definition, affect the background electric and magnetic fields (e.g., radiation belts, plasmasphere, acceleration of auroral electrons and ionospheric ions), or by the introduction of appropriate boundary conditions (e.g., ionosphere and thermosphere).
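The test-particle option mentioned above (particles advanced in prescribed fields, with no feedback on those fields) is commonly implemented with the Boris rotation scheme. The sketch below is a minimal, generic example with arbitrary illustrative field values, not code from any of the cited models; by construction the particle state never modifies E or B.

```python
import numpy as np

def boris_push(x, v, q_over_m, E, B, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    E and B are prescribed inputs; the particle never feeds back on them."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new

# Pure gyration: uniform B along z, no E. The Boris rotation conserves
# the particle speed to round-off, which is why it is the standard choice
# for long test-particle integrations.
x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, q_over_m=1.0, E=E, B=B, dt=0.01)
speed_error = abs(np.linalg.norm(v) - 1.0)
```

In a GGCM context the uniform fields above would be replaced by interpolation from the background (e.g. MHD) solution at the particle position.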

There is, however, another class of fundamental magnetospheric research problems that would require explicit overriding of the global MHD variables in certain regions of space. In other words, they require a modular numerical-modeling approach.

Science Motivations for a Modular-Progressive GGCM


Many of the most fundamental unsolved problems in geospace dynamics will benefit from, if indeed they do not require, an explicitly modular computational approach. The following list is illustrative but not exhaustive.

Global consequences of different hypotheses on location, geometry, and microphysics of magnetopause and magnetotail merging. In a global MHD simulation, magnetic merging at the magnetopause results from discretization (numerical noise) or from the use of non-linear switches in the code. Likewise, magnetotail merging in an MHD simulation results either from discretization or from the explicit inclusion of ohmic resistivity. The numerical resistivity cannot simply be turned off without violating conservation laws or sacrificing the numerical stability of the code. Estimates of an effective ohmic resistivity (or viscosity) may be available from independent microphysics calculations, but unless the grid resolution of the MHD simulation approaches the lengthscale of the responsible microphysics, merging and transport will still be dominated by numerical resistivity. Thus an MHD code will inevitably make a particular prediction on the geometry and strength of magnetic merging, a prediction that depends on the numerical method employed rather than on any physical considerations. By contrast, a modular-progressive approach enables one to specify on physical grounds the location, geometry, and rate of magnetic merging, either as a boundary condition or as a computed result from a microphysics module. This permits investigation of the global consequences of different hypotheses about the geometry and microphysics of merging at the magnetopause and in the tail. This capability is essential for testing at least 14 of the 41 sample hypotheses listed by Wolf et al. [1996] in the GGCM planning document (H1-12 and H40-41 on their list), and for addressing at least three of their 13 research challenges (C1-2, C5).

Magnetospheric closure of Birkeland currents. The generation mechanisms and closure paths of the major Birkeland-current systems present another broad class of problems that requires a modular-progressive approach for its solution. In the case of the Birkeland currents that occur on open magnetic field lines (the "cusp", "mantle", and "NBZ" currents and probably the dayside portion of Region 1), the current closure paths and generation mechanisms are intimately related to the geometry of the open magnetosphere. As noted above, the open geometry is a byproduct of the numerical procedure in a global MHD simulation but is a controllable physical input (or output) in a modular-progressive approach. The Birkeland currents that flow on closed field lines (Region 2 and probably the nightside portion of Region 1) are also problematical in global MHD simulations, for different reasons. For example, Region-2 currents are typically unrealistically weak in MHD simulations. It is unclear whether this failure results from inadequate spatial resolution, or from the neglect of non-MHD particle drifts, or from a combination of these factors. Accurate representation of Region-2 Birkeland currents is critical for calculation of near-Earth drift paths, ionospheric potentials, and any scientific or space-weather application that depends on the near-Earth plasma/field configuration. Efforts are underway to incorporate the inner-magnetospheric drift physics of the Rice Convection Model (RCM) within a global MHD code, and these efforts should ultimately improve the code's performance with respect to the generation of Region-2 currents. An explicitly modular approach is likely to be required, however, to investigate and understand the intricacies of Birkeland current generation and closure. This capability is essential for testing at least three of the 41 sample hypotheses of Wolf et al. [1996] (H13-15) and for addressing at least one of their 13 research challenges (C3).

Global consequences of finite gyroradius effects. Finite-gyroradius effects (or finite Larmor-radius (FLR) effects) are thought to be responsible for a significant fraction (~ 1 MA) of the Birkeland current flowing within the plasma-sheet boundary layer (PSBL) [Heinemann and Erickson, 1997] and perhaps also the low-latitude boundary layer (LLBL) [Wei et al., 1996]. The magnitude and global consequences of such effects in other thin magnetospheric boundary regions (e.g., magnetopause, plasma-sheet inner edge, ring-current drift boundaries, auroral arcs, the thinned, pre-onset plasma sheet) are unknown but potentially important. Finite gyroradius effects are excluded by definition in ideal MHD and are not easily incorporated within a global MHD code because they are, at least in part, dispersive rather than diffusive, and highly sensitive to the local magnetic geometry. A modular-progressive approach provides a natural avenue for investigation of these important microphysical effects and of their global consequences. This capability is essential for testing at least four of the 41 hypotheses of Wolf et al. [1996] (H11-12, H27, H41), and for addressing at least two of their 13 research challenges (C2, C5).

Global consequences of different substorm onset hypotheses. There are many competing hypotheses for the physical cause of substorm onset, some involving explicitly non-MHD effects. Many of these hypotheses have been developed in sufficient detail to provide quantitative predictive schemes suitable for inclusion in one or more regional modules of a modular-progressive GGCM. Some (but not all) could also in principle be incorporated within a global MHD framework, but only with considerable programming effort on a case-by-case basis. As noted above, magnetotail merging occurs in global MHD models by virtue of either numerical noise or artificially imposed ohmic resistivity, with no obvious connection to the various physical onset mechanisms that have been proposed. A modular-progressive approach, by contrast, is capable of incorporating any substorm onset scenario that has been reduced to a predictive algorithm or regional simulation, and to investigate its global observable consequences. This capability is essential for testing at least 16 of the 41 sample hypotheses listed by Wolf et al. [1996] (H16-31) and for addressing at least five of their 13 research challenges (C4-8).

Competition between ionospheric and solar-wind sources of magnetospheric plasma. Both the ionosphere and the solar wind are known to provide important sources of magnetospheric plasma. These sources are not independent; for example, the upwelling of ionospheric ions in the dayside "cleft ion fountain" and in the nightside auroral zone responds to both particle and energy inputs from the solar wind. The relative importance of the two sources is clearly a function of position as well as of magnetospheric activity. In addition, there are two potential mechanisms for solar-wind particle entry into the plasma sheet: direct field-aligned flow along open magnetic field lines coupled with E×B drift through the mantle and tail lobes, and diffusion (turbulent or otherwise) across a low-latitude boundary layer onto closed field lines. Both the direct access route along open field lines, and the cross-field diffusion, are excluded by assumption from ideal MHD, and are included in a global MHD simulation only by virtue of numerical noise. The ionospheric source is also difficult to incorporate within a global MHD code because of the long time scale for equilibration along flux tubes. The problem of incorporating realistic plasma sources within the GGCM is greatly facilitated by the modular-progressive approach in which the physics of particle transport across boundaries and along flux tubes can be handled explicitly within regional or boundary modules. This capability is essential for testing at least four of the 41 sample hypotheses listed by Wolf et al. [1996] (H36-39) and for addressing at least four of their 13 research challenges (C10-13).

Of the 41 sample hypotheses listed in the GGCM planning document [Wolf et al., 1996], 37 (90%) have been cited above as either suggesting or requiring an explicitly modular approach to numerical modeling. Similarly, 12 of the 13 "research challenges" (92%) have been so cited. Even if we are only half right in our appraisal of these hypotheses and challenges, it is clear that a significant fraction of GGCM science will not be adequately addressed without an explicitly modular approach.

Operational Advantages of a Modular-Progressive GGCM


Aside from the nature of the specific questions to be addressed, there are three general features of a modular-progressive approach that may be critical to the scientific success of the GGCM program.

Inclusiveness. By definition, the modular-progressive design is an inclusive approach to GGCM development that supports, indeed requires, the participation of a broad segment of the GEM research community. The complexity of the code structure is designed to emulate the complexity of the geospace system itself. Thus, important insights about auroral field-line dynamics may come from one research effort (represented by one module) while important insights about magnetotail dynamics come from another research effort (represented by another module), and further new insight may result from the coupling of the two different modules within a global context. Even for a given region or a given boundary there are typically multiple competing hypotheses, none of which can be ruled out on physical grounds until they have been tested side by side in a modular GGCM environment.

Inclusiveness is important for two reasons: (1) by casting a wider net, one is more likely to find the right answers (and to understand why the wrong answers are wrong), and (2) by enlisting the skills and insights of a greater number of researchers, one is more likely not only to find the right answers but also to retain the enthusiastic support of the GEM research community, without which the GGCM program would quickly stagnate.

Flexibility for numerical experimentation. Extensive computer experimentation is needed to unravel many of the cause-and-effect relationships in the complex geospace system. The modular-progressive approach makes it possible to focus on specific, limited physics questions such as "What is the effect of plasma-sheet temperature on inner-magnetospheric electric fields?", or "How does the density of the plasma mantle affect the plasma sheet?". Such questions are amenable to controlled computer experiments in which a specific part of geospace, comprising perhaps just a few of the available GGCM modules, is probed by repeated computer runs with controlled variations of the input parameters.

It is exceedingly unlikely that any GGCM, regardless of its code structure, will give all the right answers on the first try. A modular progressive structure provides maximum flexibility for diagnosing and fixing the weak points both in the physics and in the numerics.
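The controlled-experiment pattern described above reduces, schematically, to holding everything fixed while sweeping one input parameter across runs. The toy model below is entirely invented (a single "inner field" variable relaxing toward a driver proportional to plasma-sheet temperature) and carries no real geospace physics; it illustrates only the experimental design.

```python
def run_model(sheet_temperature, steps=200, dt=0.05):
    """Toy stand-in for a few coupled GGCM modules: an 'inner field'
    variable relaxes toward 0.3 * sheet_temperature. Entirely invented."""
    inner_field = 0.0
    for _ in range(steps):
        inner_field += dt * (0.3 * sheet_temperature - inner_field)
    return inner_field

# Controlled experiment: vary only the plasma-sheet temperature, keeping
# every other input and every module algorithm fixed between runs.
results = {T: run_model(T) for T in (1.0, 2.0, 4.0)}
for T, field in sorted(results.items()):
    print(f"T = {T:.1f}  ->  inner field = {field:.3f}")
```

In a real modular GGCM, `run_model` would be replaced by a control-program run over the selected subset of modules, with the swept parameter supplied through a module's input interface.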

Capacity for data assimilation. The ability to assimilate observational data and empirical models is essential not only to the scientific use of the GGCM but also to its future applications in an operational forecast setting. Data assimilation implies more than a brute-force overriding of computed results by observed results. Ideally, it involves nudging the code in the right direction without upsetting the flow of the calculation. We identify three promising strategies for accomplishing this objective:

(1) Substitution of an empirical algorithm for a theoretical one in one or more of the regional or boundary modules. This is possible, by design, in the modular-progressive approach defined below. In a global MHD simulation approach, it would require bifurcating the simulation domain and inserting the empirically-determined domain, with newly-formulated boundary conditions on the simulation at the new boundaries thus created. In other words, it would require the modular-progressive approach defined below.

(2) Insertion of key observed parameters at strategic places and times. A classic example is the location of the polar-cap boundary, the ionospheric footprint of the separator surface between open and closed magnetic field lines (give or take a few degrees of geomagnetic latitude depending on one's interpretation). Experience with the Magnetospheric Specification Model [Bales et al., 1993] shows that run-time adjustments of this more-or-less readily observable boundary location go a long way toward keeping a global simulation model in touch with reality. This flexibility would be retained in the modular-progressive approach defined below. Such run-time corrections are problematical in a global MHD simulation model because they launch artificial waves that can affect the results in unpredictable and unphysical ways.

(3) Global incorporation of observational data. For some scientific applications, and for all operational space-weather applications, it is more important for the GGCM to stay close to the right answer, with respect to the global time-dependent plasma-field configuration, than to maintain rigorous mathematical self-consistency. Any numerical simulation of the magnetosphere, no matter how completely it may include the relevant physics, is subject to the inherent sensitivity of nonlinear differential equations to the details of the initial conditions, and the inherent observational uncertainties in the specification of those initial conditions. It is thus appropriate, if not essential, to allow for global-scale empirical corrections of computed results in time-dependent GGCM simulations.

Such empirical "nudging" (rhymes with "fudging") is commonplace in tropospheric weather forecasting models [Ghil and Malanotte-Rizzoli, 1991], but is largely untested in geospace modeling. The only example of which we are aware is the recent attempt to ingest geosynchronous electron flux observations into the Magnetospheric Specification Model [Garner et al., 1998], which propagates given particle distributions through the inner magnetosphere under the influence of given electric and magnetic fields. The consequences of data ingestion will undoubtedly become more profound when it is tried in a more self-consistent context, like that envisioned for the GGCM, where ingested particle data will influence the electric and magnetic field structure (or vice-versa). The task will be to make these consequences profoundly physical rather than profoundly artificial.

Some useful lessons may be learned from recent attempts to embed a particle drift algorithm based on the Rice Convection Model within a global MHD simulation code [Lyon et al., 1995; Toffoletto et al., 1997]. Here, the inner-magnetospheric pressure computed by the MHD simulation is overridden at each time step by the pressure computed in the MHD fields by the more detailed RCM drift algorithm. The "data" being ingested by the MHD code are, in these cases, generated by the embedded RCM code rather than by observations, but their effect on the overlying MHD code is analogous to the effect of observational data assimilation. In the cases cited above, the correction applied at each time step is a small fraction (few percent) of the quantity itself. It remains to be seen how large a correction can be accommodated by a global MHD code without introducing intolerable levels of artificial waves in the MHD solution.
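The fractional-correction scheme described above can be written schematically. The Python sketch below is illustrative only (the variable names and the fixed relaxation coefficient are assumptions; the actual RCM-in-MHD coupling is far more elaborate): a small per-step nudge relaxes a computed quantity toward a target value without a discontinuous override.

```python
import numpy as np

def nudge(p_model, p_target, alpha=0.05):
    """Relax a computed quantity toward a target by a small fractional
    correction per time step (alpha of order a few percent), in the
    spirit of the pressure override described in the text."""
    return p_model + alpha * (p_target - p_model)

# Toy demonstration: repeated small corrections converge geometrically
# toward the target without ever applying a large jump in one step.
p = np.array([1.0, 2.0, 4.0])      # pressure from the overlying code
p_rcm = np.array([1.2, 1.8, 3.5])  # pressure from the embedded algorithm
for _ in range(100):
    p = nudge(p, p_rcm, alpha=0.05)
```

The open question raised in the text translates here into how large alpha can be made before the abrupt per-step changes excite artificial waves in the host code.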

In a modular GGCM code, the artificial waves induced by data assimilation would be limited to, at most, the adjacent regional modules, and could be suppressed altogether by the use of quasistatic regional modules. Moreover, the use of a basis-function representation of global fields in place of a grid-based representation, as described below, may provide a more graceful way of incorporating observational data into a computed solution.



Concept Definition

Geospace has a cellular structure, that is, it contains a number of distinct physical domains of large volume delineated by mutual boundaries having relatively small, albeit finite, volume. The domains are distinguishable by the physical parameters and processes that govern their dynamics. For example, the magnetosheath is largely governed by the equations of ideal MHD, while the ionosphere is governed largely by the collisional coupling between ions and neutrals. If the boundaries between domains were fixed and passive, the problem of geospace environment modeling would be reducible to a small set of much simpler (but still challenging) problems focusing on individual domains, with suitably-chosen boundary conditions representing the coupling to adjacent domains. This is the mode in which geospace modeling has largely progressed in the past; indeed, such studies have laid the foundation for the proposed GGCM effort. Among many other insights, these regional studies have shown that the boundaries between domains are neither fixed nor passive. Not only does the magnetosheath influence the ionosphere, but also vice-versa, and the locus of their mutual boundary (the cusp/boundary layer field lines) influences and is influenced by their mutual interaction. The behavior of the whole system is more than the linear sum of the behavior of its constituent parts. This is why a GGCM is needed for further progress in geospace modeling, and this is the central fact that must be taken into account explicitly and directly in the design of the GGCM.

Thus we have based our study of GGCM development on an approach that we call modular-progressive. It is modular in that the geospace system is divided, for modeling purposes, into a set of discrete but mutually interacting "modules", each of which represents either a distinct physical domain or a boundary between domains. It is progressive in that it is deliberately designed to evolve with time to accommodate new insights and techniques that cannot be foreseen at the time of its initial design. A modular-progressive approach is suggested, if not dictated, by the ambitious design requirements of precision, reliability, efficiency, flexibility, and user-friendliness that must characterize a successful GGCM.

Thus, the modular-progressive GGCM approach is based on the explicit division of geospace into a number of separate physical domains bounded by a suitable number of inter-domain boundaries. If there are n domains, then there are in principle n(n-1)/2 inter-domain boundaries, although some of these can perhaps be eliminated from consideration on topological or physical grounds. The key to the modular-progressive approach is to treat each physical domain and each inter-domain boundary as a separate program module within the GGCM numerical framework. The various modules are not required (nor expected) to be similar in size or structure, but they are logically equivalent program "objects" insofar as the GGCM control program is concerned. The control program is essentially a network that transfers data among the various modules and controls their execution.

The concept is illustrated in Figure 1, where we have divided geospace into five spatial domains: (1) magnetosheath, (2) magnetotail lobes, (3) magnetotail plasma sheet, (4) inner magnetosphere/ring current, and (5) ionosphere/thermosphere. Corresponding to these five domains are seven physically relevant inter-domain boundaries: (a) the magnetopause, connecting domains 1 and 2, (b) the plasma-sheet boundary layer, connecting domains 2 and 3, (c) the magnetotail-dipole transition region, connecting domains 3 and 4, (d) the ring-current field lines, connecting domains 4 and 5, (e) the cusp and boundary layer, connecting domains 1 and 5, (f) the polar-cap field lines, connecting domains 2 and 5, and (g) the plasma-sheet field lines, connecting domains 3 and 5. Note that, of the ten boundaries that are possible in principle among these five domains, three have been eliminated on physical grounds, i.e., it has been assumed that the magnetosheath domain (1) interacts with the plasma sheet (3) and ring current (4) domains only through their mutual interactions with the intervening open field-line domain (2), and that the open-field domain (2) interacts with the inner-magnetosphere domain (4) only through the intervening plasma-sheet domain (3). The rule of thumb is that each domain interacts with its immediate neighbor(s) and with the ionosphere/thermosphere domain (5), which is the "glue" that binds the whole system together [Siscoe, 1991]. The parameters of the upstream solar wind and of the lower atmosphere are treated as input parameters, determined outside the scope of the GGCM model.
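This compartmentalization can be captured in a simple data structure. The Python sketch below is a hypothetical encoding (names and the dictionary layout are illustrative, not part of the report's design): it records the five domains and the seven retained boundaries, and makes the three eliminated pairings queryable.

```python
# Hypothetical encoding of the Figure 1 compartmentalization: five
# domains and the seven physically relevant boundaries that connect them.
DOMAINS = {
    1: "magnetosheath",
    2: "magnetotail lobes",
    3: "magnetotail plasma sheet",
    4: "inner magnetosphere/ring current",
    5: "ionosphere/thermosphere",
}

BOUNDARIES = {
    "a": ("magnetopause", 1, 2),
    "b": ("plasma-sheet boundary layer", 2, 3),
    "c": ("magnetotail-dipole transition region", 3, 4),
    "d": ("ring-current field lines", 4, 5),
    "e": ("cusp and boundary layer", 1, 5),
    "f": ("polar-cap field lines", 2, 5),
    "g": ("plasma-sheet field lines", 3, 5),
}

def connected(d1, d2):
    """True if a boundary module links the two domains directly."""
    return any({d1, d2} == {i, j} for _, i, j in BOUNDARIES.values())
```

Of the n(n-1)/2 = 10 possible pairings among n = 5 domains, only the seven listed above survive; the pairs (1,3), (1,4), and (2,4) are the ones eliminated on physical grounds.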

Code Structure


A modular-progressive GGCM code requires three types of program element: a control program, a set of science modules (perhaps a dozen, as in the above example), and a user interface program. The bottom half of Figure 1 illustrates the logical GGCM code structure associated with the particular compartmentalization of geospace shown in the top half. Spatial regions ("domain modules") are indicated by numbers and their physically relevant mutual boundaries ("boundary modules") are indicated by letters, corresponding to the above designation. There are, in all, twelve program modules, including five domain modules and seven boundary modules. It is essential that the inter-domain boundaries be treated as logically equivalent to the domains themselves. These boundary modules provide not only a mechanism for explicit modeling of boundary physics, but also an orderly protocol for the transfer of boundary conditions between physical domains. The particular compartmentalization shown in Figure 1 is preliminary and is subject to revision during the course of the code development. The important point is that both the domains and their mutual boundaries are treated as independent modules.

The GGCM control program is not illustrated in Figure 1 because it lies in the third dimension. It is, however, easy to describe; it has direct links to each of the twelve science modules and it is responsible for the simultaneous execution of these modules and for the interchange of data among them. As noted above, it is neither required nor expected that each module will be of similar size in terms of physics content or in terms of computer processing requirements. Some modules, for example, may contain empirical rather than theoretical algorithms. It is, however, a critical feature of the modular-progressive GGCM approach that each module, whether it represents a domain or a boundary, be treated logically as a separate and hence interchangeable unit.

Control Program. The first thing the control program must do is to activate the user interface program to obtain needed inputs and options. It must then assemble the appropriate subroutines (science modules) from an established library, initialize the appropriate data arrays, and execute the appropriate science modules in parallel for the designated total time interval, while returning the requested output data to the user interface program at designated intervals. The data arrays will include the (time variable) boundary conditions that are shared by adjacent science modules. These boundary condition arrays are the only means of communication between different science modules, and they must be designed with careful attention to the direction of information flow as dictated by the form of the equations that are being solved within each adjacent module. In most cases, we must allow for information flow toward a boundary from both sides. In some cases a boundary science module will explicitly accommodate the information flowing toward it from both sides; for example, a magnetopause module would logically adjust the magnetopause position to equalize the pressures on the two sides, as provided by the magnetosheath module on one side and the tail lobe module on the other. In other cases it will be necessary to devise an interpolation scheme to accommodate the information flowing toward a boundary from both sides. The control program will keep track of the positions of all boundaries as well as the values of all relevant physical parameters on those boundaries.
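The control program's execution cycle can be sketched schematically. The Python skeleton below is an assumption-laden illustration, not the report's design: the Module interface, the shared-dictionary representation of the boundary-condition arrays, and the reporting callback are all hypothetical, and for simplicity every module is stepped at the control time step (the report's "intelligent" refinement, which skips unchanged arrays, is omitted here).

```python
# Hypothetical skeleton of the GGCM control program's main cycle.

class Module:
    """Common interface every domain or boundary module must satisfy."""
    dt = 1.0  # internal time step (s); modules may differ widely

    def step(self, t, boundaries):
        """Advance internal state to t + dt, reading and writing only
        the shared boundary-condition arrays passed in."""
        raise NotImplementedError

def run(modules, boundaries, t_end, t_report, report):
    """Execute all science modules over [0, t_end], exchanging data
    solely through the shared boundary arrays, and hand results to
    the user-interface callback every t_report seconds."""
    t = 0.0
    next_report = t_report
    dt_control = min(m.dt for m in modules)  # smallest module time step
    while t < t_end:
        for m in modules:
            m.step(t, boundaries)            # could execute in parallel
        t += dt_control
        if t >= next_report:
            report(t, boundaries)
            next_report += t_report
```

The key structural point survives even in this toy: modules never call one another; the boundary arrays held by the control program are their only channel of communication.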

The control program will update the boundary locations and variable arrays each control time step, namely, the smallest time step utilized by any of the science modules (which must be less than or equal to the requested time resolution of results reported to the user interface program). The time resolution required by various different science modules will probably differ from one to another, possibly by orders of magnitude, so it is important to design an "intelligent" control program that does not waste time updating a given data array unless it has been changed since the previous control time step by one of the science modules to which it is connected. For example, if one is studying MHD wave propagation effects in the cusp or in the auroral zone, having timescales of the order of a few seconds to a few minutes, one clearly does not need to update the Earth's dipole tilt at every time step.
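One simple way to implement the "intelligent" update policy is a change counter on each shared array, so a module refreshes an input only if some neighbor has written it since that module's last step. The sketch below is an assumed mechanism (the class and function names are illustrative), not a prescription from the report:

```python
class BoundaryArray:
    """Boundary-condition array with a change counter, so the control
    program can skip updates when nothing has changed."""
    def __init__(self, value):
        self.value = value
        self.version = 0          # bumped whenever a module writes it

    def write(self, value):
        self.value = value
        self.version += 1

def refresh(module_cache, array):
    """Return True (and record the fact) only if the array has changed
    since this module last saw it; otherwise skip the update."""
    if module_cache.get(id(array)) == array.version:
        return False              # e.g., dipole tilt during an MHD-wave run
    module_cache[id(array)] = array.version
    return True
```

With per-module caches of this kind, a slowly varying quantity such as the dipole tilt costs essentially nothing during a run resolving few-second wave timescales.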

Storage of boundary locations and variable values by the control program will require either a three-dimensional control grid or a set of basis functions with variable coefficients. The choice between these two information-storage techniques is a non-trivial one, and we devote a section to it below. In either case, the GGCM control program should be designed such that it is possible, but not mandatory, for a given science module to adopt the control grid or basis-function set as its internal computational grid or basis-function set. The control grid or basis-function set must continuously span the whole geospace region from the ionosphere to the upstream solar wind.

The control program must also provide for optional modes of GGCM execution (e.g., substitution of science modules, lumping together of adjacent modules, or deactivation of un-needed modules), as described further in a following section.

Science Modules. The actual physics calculations are carried out within the module subroutines. The design of these module algorithms is the responsibility of the individual module developer, but the GGCM design must provide adequate guidance and testing to ensure that each candidate module subroutine will actually work as intended within the overall GGCM structure. The physical variables that are calculated within each science module, and hence passed back and forth between that module and the control program, will depend on the module type. For example, a magnetosheath module would typically calculate, and hence require as input and generate as output, the first three moments of the plasma velocity distribution (density, flow velocity, and pressure) and the vector components of the magnetic field. (The electric field in this region would typically not be an independent variable but would be obtained from the ideal MHD approximation E + v×B = 0.) A plasma-sheet boundary layer module would probably require, in addition to these, a separation between parallel and perpendicular pressures, and an inner magnetosphere/ring current module would probably divide the velocity distribution into several discrete energy steps, as in the Rice Convection Model [Wolf et al., 1991].

Thus, the boundary condition protocol may differ from one module type to another, but it must be standardized for each given module type. Each candidate magnetopause module, for example, must conform to a predetermined protocol for input and output of those parameters that are generally deemed important to magnetopause physics. It need not actually use or alter all of the supplied parameters, but it may not require or produce parameters that are not within the standardized set for that module type. A critical part of GGCM development will be to examine the physics of each domain and of each inter-domain boundary to determine a standard set of type-specific boundary conditions that is neither too restrictive (thus constraining the range of physics questions that the module type may reasonably be expected to address) nor too general (thus squandering the finite computing resources available). This will impose a burden both on the developers of the GGCM control program and on the developers of the candidate science modules. An important goal of GGCM development will be to maximize the share of this burden that falls on the control program development, which will only be done once (if it is done right), and to minimize the share that falls on the science module development, which will be done many times by many people (if the GGCM concept is to be successful).
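The conformance rule just stated — a module may use any subset of its type's standard parameter set, but may not require or produce anything outside it — reduces to a pair of subset tests. The Python sketch below is illustrative (the standard set shown for the magnetopause type is an assumption, not the protocol the report leaves to be negotiated):

```python
# Hypothetical per-type standard parameter sets; the real sets would be
# fixed by the protocol-design workshop described in the report.
STANDARD = {
    "magnetopause": {"density", "velocity", "magnetic_field",
                     "p_parallel", "p_perpendicular"},
}

def conforms(module_type, requires, produces):
    """A candidate module may require/produce any subset of the standard
    set for its type, but nothing outside that set."""
    std = STANDARD[module_type]
    return requires <= std and produces <= std
```

A check of this kind could run automatically when a candidate module is submitted to the library, shifting the validation burden onto the control-program side, as the report recommends.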

The question of boundaries and boundary-condition protocols is further discussed in a separate section below.

User Interface Program. The GGCM is intended as a research tool available to all interested researchers. Thus a user-friendly interface is a critical element. The user interface program must perform three functions: (1) allow the user to specify which of several operational modes is desired, (2) prompt the user for required input data, and (3) provide the user with the requested output data in the requested format with the requested time resolution.

A separate section below deals with the menu of operational modes that should be made available in order to provide efficient, open use of the GGCM by the research community. The input data format obviously depends on the mode of operation, ranging from a single set of solar-wind and geomagnetic parameters for the specification of a static quasi-steady magnetospheric configuration, to a time series of such parameters, and perhaps also of empirical over-rides, for a time-dependent simulation. The format and frequency of output data should be left as flexible as possible, consistent with resource constraints, and should be established by the GGCM development/operation center with frequent input from the user community.

The World-Wide Web is the most obvious mechanism for user access to the GGCM, at least for "standard" access in which a quasi-static configuration is needed for a constant set of inputs. For time simulations, and especially for model runs involving non-standard modules, it may be more efficient to include human intervention in the process, which is one reason that a permanent GGCM development/operation center is needed, as described below.




One of the primary goals of GGCM development, strongly supported by the community survey (see Appendix), is to provide a community resource that enables model developers and users to easily incorporate new model algorithms into a global model. This means that users should be able to access and interface with the global model without having to first learn and conform to a predetermined numerical grid and computational scheme. Properly designed, a modular-progressive construction is ideally suited to meet this "flexibility" requirement.

The modular-progressive approach identifies physically distinct regions of geospace ("domain modules") and provides the mechanism for coupling them together. The coupling takes place within the "boundary modules". The boundary modules serve several purposes. At the simplest level, they serve as well defined transitions between distinct macroscopic physical domains. They also define the loci for information transfer between adjacent domains, thus providing for two-way feedback between regions. In addition, they provide sites for the insertion of microphysical boundary layer models. The boundaries themselves are not to be regarded as fixed, but as movable entities whose locations are determined by global constraints such as stress balance.

Communication across boundaries will be accomplished by stating boundary values on both sides of each boundary. For each boundary type, there will be a prescribed standard set of variables, with others available as a user option, and standardized protocols. The minimum set of variables to be specified at most boundaries would be the gyrotropic MHD variables: density, velocity, magnetic field, and parallel and perpendicular pressure. Some boundary types may require more information; for example, a ring-current module, which physically occupies the inner magnetosphere, may need to receive energy spectral information from the plasma sheet and provide it to the ionosphere. Dependent variables derivable from the minimum set can be provided as required, for example, the electric field from the ideal Ohm's law or the current density from Ampere's law (or from the Vasyliunas equation). In the case of passive boundary modules (those not occupied by actual microphysics algorithms), the GGCM control program must also be responsible for enforcing the continuity of the normal components of the magnetic field, mass flux, and current density and of the tangential components of the electric field across the boundary.
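For a passive boundary, one simple (assumed, not prescribed by the report) way to enforce continuity of a normal component is to replace the normal components on the two sides with their average while leaving the tangential components untouched. The Python sketch below illustrates the idea for a single vector quantity:

```python
import numpy as np

def enforce_normal_continuity(v1, v2, n):
    """For a passive boundary module: force the normal components of a
    vector field on the two sides to agree (their average), leaving the
    tangential components unchanged. Applicable in spirit to B·n, mass
    flux, or J·n; the averaging rule itself is an illustrative choice."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                     # unit normal
    mean_normal = 0.5 * (np.dot(v1, n) + np.dot(v2, n))
    v1c = v1 + (mean_normal - np.dot(v1, n)) * n  # adjust side 1
    v2c = v2 + (mean_normal - np.dot(v2, n)) * n  # adjust side 2
    return v1c, v2c
```

A production scheme would weight the average by the direction of information flow at that boundary; the simple mean here is only the minimal self-consistent choice.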

The location of each boundary, and the values of each variable along each boundary, will be tracked as functions of time by the control program and stored either in terms of basis functions or at points on a grid, as discussed in the following section. Irrespective of the numerical storage scheme that is adopted, it is first necessary to provide a precise physical definition of each boundary that can be translated unambiguously (and hence automatically) into a set of numbers. For some boundary types the definition is relatively simple; for example, the "top" of the ionosphere can be defined, for most geospace purposes, as a spherical shell at a fixed altitude (~500 km) above the surface. The magnetopause is somewhat more complicated (for example, it can move about in response to solar-wind pressure variations), but is still generally recognizable by a discontinuous change (on the global scale) of magnetic-field direction. The plasma-sheet boundary layer can be identified with the nightside topological boundary (separator surface) between open and closed magnetic field lines, which is straightforward to evaluate for a given field model. The plasma-sheet/ring-current boundary (called "tail-dipole transition region" in Figure 1) can be identified with the last closed drift shell circling the Earth, although the location of this shell depends on energy and pitch angle, so it will be necessary to assign representative values for these variables unless a given local boundary module tracks them explicitly. In each of the above cases, the two-dimensional surface so defined can be extended to a surrounding three-dimensional slab volume as required to accommodate explicit boundary physics modules. The remaining boundary modules labelled in Figure 1 (d, e, f, and g) are finite flux bundles of magnetic field lines that are readily defined for a given field model once the bounding regional modules are defined.

The existence of distinct regional domains, and of the corresponding network of inter-domain boundaries, is established by observations. However, the attempt to define these boundaries precisely, and to represent them numerically, is an enterprise of considerable subtlety and perhaps also some controversy. We therefore recommend that, if the decision is made to proceed with development of a modular-progressive GGCM, an early step in this process should be an open workshop where potential module developers could combine their expertise to devise a comprehensive, flexible, and as nearly as possible foolproof set of specifications and protocols for each boundary type (and indeed, as a first step, to decide if the set of boundaries listed in Figure 1 is the most appropriate set). The research community that will ultimately utilize the GGCM should thus play an active role in the earliest design phase, to insure that the GGCM will serve the needs for which it was intended.

Numerical Considerations


The GGCM will be a large code. Although the modular-progressive approach makes it feasible to contemplate stripped-down versions for workstation research applications, it is still true that a fully global, time-dependent magnetospheric simulation is going to require a large numerical code regardless of the technique employed. The science computations will be performed within the modules, and will be the responsibility of the module developers. A variety of powerful numerical techniques have been developed by individual research groups over the past decade or two, and will no doubt be exploited in the development of GGCM modules. The linking together of these modules, however, is the responsibility of the central GGCM control program, and is itself a problem of non-trivial numerical proportions.

Computing speed is not a serious issue for the GGCM control program because the time-consuming calculations (e.g., solution of differential equations) take place within the modules. Speed will be an issue for module developers, and may well be a decisive discriminating factor between competing algorithms for a given module, but it is unlikely to be a decisive factor in the development of the control program, which is essentially a smart data network.

Utilization of computer memory resources is, however, a serious issue for the GGCM control program because it must keep track of boundary positions, boundary values, and at least a summary form of global information on several vector and scalar fields, while leaving sufficient memory available for the various modules to do their work. A modular-progressive approach does provide some efficiency in memory utilization because the high-resolution calculations are performed in restricted regions rather than globally, but in order to exploit this efficiency it is important for the control program to make the most efficient possible use of its computer memory allocation.

The basic problem is to represent complicated three-dimensional continuous functions as faithfully as possible in terms of a finite sequence of numbers. There are two traditional approaches to this problem: (1) to store the values of the function itself at a finite set of predetermined points in space (i.e., a grid), and (2) to represent the function as a finite linear combination of predetermined basis functions and store the coefficients. Each approach has its advantages and disadvantages.

Grid-Based Approach. Storage of variables on a grid is the most familiar approach in global-scale magnetospheric modeling; for example, global MHD codes use this approach exclusively. The spatial resolution of the variables is determined both by the resolution of the grid and by the order of the finite differencing scheme. If a grid is used by the GGCM control program to store boundary locations and boundary values, it will be necessary to interpolate from this control grid to the numerical grids used by the various science modules, because the module grids will typically have much finer resolution than that of the control grid. (The opposite translation will be equally necessary but less problematic when going from a finer to a coarser grid.) Interpolating from a coarse grid to a fine grid can produce discontinuities of derivatives at the cell boundaries of the original grid, and can also cause violations of the governing equations. For example, a divergence-free magnetic field on the coarser grid may interpolate to a non-divergence-free field on the finer grid, and a plasma-field configuration that satisfies mass conservation, force balance, etc. on the coarse grid may fail to do so on the finer grid. The computational fluid dynamics (CFD) community has experience in the use of overlapping (Chimera) grids [e.g., Thompson et al., 1985, p. 69]. The use of such grids allows complex geometries to be modeled, but requires interpolating back and forth between grids. The consistency and conservation problems noted above are usually dealt with by iterative techniques [e.g., Zang and Street, 1995]. This is a complex and time-consuming technique, which may explain why the overlapping grid approach is not used very often [Ferziger and Peric, 1996, p. 28 and 206].
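The divergence problem is easy to demonstrate in a toy two-dimensional setting. In the Python sketch below (illustrative only; the field, grids, and refinement factor are arbitrary choices), an analytically divergence-free field is sampled on a coarse grid and bilinearly interpolated to a four-times-finer grid; the discrete divergence of the interpolated field is substantially nonzero, while for this symmetric example the directly sampled fine-grid field is divergence-free to machine precision.

```python
import numpy as np

def bilinear(f, xc, yc, xf, yf):
    """Separable linear interpolation of f[y, x] from coarse (xc, yc)
    to fine (xf, yf) grid points."""
    tmp = np.array([np.interp(xf, xc, row) for row in f])         # along x
    return np.array([np.interp(yf, yc, col) for col in tmp.T]).T  # along y

xc = np.linspace(0.0, 2.0 * np.pi, 9)    # coarse grid
xf = np.linspace(0.0, 2.0 * np.pi, 33)   # fine grid (4x refinement)
Xc, Yc = np.meshgrid(xc, xc)
Xf, Yf = np.meshgrid(xf, xf)

# Analytically divergence-free test field: B = (sin x cos y, -cos x sin y).
Bx_f = bilinear(np.sin(Xc) * np.cos(Yc), xc, xc, xf, xf)
By_f = bilinear(-np.cos(Xc) * np.sin(Yc), xc, xc, xf, xf)

h = xf[1] - xf[0]
def div(bx, by):
    """Centered-difference divergence on the fine grid (arrays are [y, x])."""
    return np.gradient(bx, h, axis=1) + np.gradient(by, h, axis=0)

# Compare interior points only, to avoid one-sided edge stencils.
div_interp = div(Bx_f, By_f)[1:-1, 1:-1]
div_direct = div(np.sin(Xf) * np.cos(Yf),
                 -np.cos(Xf) * np.sin(Yf))[1:-1, 1:-1]
```

The interpolated field's discrete divergence here reaches a few tenths of the field amplitude, which is exactly the kind of violation that the iterative correction techniques cited above are designed to repair.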

The text box on the next page provides an annotated glossary of various styles of numerical grids that are potentially applicable to a GGCM control grid, each with its own advantages and disadvantages. Regardless of the style of grid that is adopted, the overriding issue with any grid-based approach to GGCM bookkeeping will be interpolation between the disparate grids of the control program and the various modules.

Basis-Function Approach. The alternative to the use of a control grid is to express the functional form of all physical variables as a linear sum over a set of pre-selected basis functions that are designed to conform to the known geometrical properties of the system. These basis functions are analogous to eigenfunctions, but unlike eigenfunctions, they are not required to precisely satisfy a predetermined set of boundary conditions, and they do not necessarily form a complete orthogonal set. There are no existence proofs for this approach, which may require a great deal of trial and error to find the optimum form of the basis functions. Nevertheless, it provides a viable, tested alternative to a grid-based approach to the problem of efficient storage of global information on geospace variables.
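The coefficient-fitting step at the heart of this approach can be illustrated with a least-squares fit. In the Python sketch below (the basis functions and sampled profile are invented for illustration; a real GGCM basis would be tailored to magnetospheric geometry), a sampled one-dimensional profile is compressed from 50 stored values to 3 coefficients over a small, non-orthogonal, incomplete basis:

```python
import numpy as np

def fit_coefficients(basis, points, samples):
    """Least-squares coefficients c minimizing ||A c - samples||,
    where A[i, k] = basis[k](points[i])."""
    A = np.column_stack([b(points) for b in basis])
    c, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return c

def evaluate(basis, c, points):
    """Reconstruct the field from its stored coefficients."""
    return sum(ck * b(points) for ck, b in zip(c, basis))

# Illustrative basis: constant plus two decaying profiles (not orthogonal,
# not complete -- chosen to conform to the assumed shape of the field).
basis = [lambda x: np.ones_like(x),
         lambda x: np.exp(-x),
         lambda x: x * np.exp(-x)]

x = np.linspace(0.0, 5.0, 50)
f = 2.0 + 0.5 * np.exp(-x) - 1.5 * x * np.exp(-x)   # "observed" profile
c = fit_coefficients(basis, x, f)                   # 3 numbers replace 50
```

The same least-squares machinery also suggests how observational data might be folded in gracefully: observed values simply join the sample set that constrains the coefficients.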

A true eigenfunction expansion, using spherical harmonics, is practical for representing two-dimensional functions (e.g., electrostatic potential) on a spherical shell representing the ionosphere [e.g., Richmond, 1992; Weimer, 1995]. These functions, to the extent that they can be assumed to be constant along magnetic field lines (e.g., electrostatic potential in ideal MHD), can then be mapped upward into three-dimensional geospace if one has an independent 3-D model of the magnetic field configuration. This approach, though widely used, is not suitable for a GGCM control structure because (a) some variables cannot be assumed constant along B (e.g., electrostatic potential in the auroral acceleration region, or plasma density and pressure when the latter is not isotropic), and because (b) the 3-D magnetic field structure is not independently given but is itself one of the vector fields that must be represented numerically throughout geospace.

Glossary of Grid Styles (in Increasing Order of Complexity)

Grid Scheme



Cartesian grid

Orthogonal with constant spacing; probably the simplest grid one can construct

Interpolation is simple and computationally inexpensive. The constant grid spacing prevents possible low-order truncation errors that can occur on a variable grid when the grid spacing changes rapidly between grid cells. However, this problem of rapidly changing grid spacing can be circumvented with the use of integral methods, such as finite element or finite volume methods [Fletcher, 1991]

Enormous storage requirements to achieve moderate resolution. Grid boundaries do not align with any natural geospace boundary. This approach is not widely used

Rectilinear grid

Orthogonal with variable spacing in one or more of the coordinate directions. Utilized in the MHD codes of Ogino et al. [1994), Winglee [1994], and Raeder [1995].

Almost as simple and inexpensive to work with as a Cartesian grid, and can provide increased resolution in regions of interest and lower resolution in other regions, resulting in a moderate saving on storage requirements.

Large storage requirements, grid boundaries do not align with any natural geospace boundary. This is a popular approach, its main appeal being simplicity.

Curvilinear grid

A structured grid that is topologically rectangular but is continuously distorted to match certain boundaries. This grid structure also allows the use of composite grids [Thompson et al., 1985, p. 56] so that the entire magnetosphere can be covered by several intersecting grids.

Grid lines can be made to approximately line up with natural boundaries (e.g., magnetopause, ionosphere) and grid spacing can be varied to concentrate on regions of interest [e.g., Fedder and Lyon, 1995, Toffoletto et al., 1994].

With the increased sophistication comes increased awkwardness and complexity of interpolation. Transformations from computational (grid-aligned) space to physical (x,y,z) space can be difficult and expensive. Metric singularities, where the coordinate system breaks down (e.g., the polar axis of a spherical grid) must be dealt with carefully. Interpolation on a curvilinear grid may not be as accurate as on a simple Cartesian grid. Curvilinear grids are also typically non-orthogonal, which requires the computation of a full metric. Highly skewed grids can lead to resolution difficulties in critical regions [e.g., Hoffman and Chang, 1993, p. 347]. Metric identities may not be exactly satisfied in three dimensions, resulting in spurious values when a gradient of a constant quantity is taken [e.g., Thompson et al., 1985, p. 158]. Nevertheless, this is a powerful approach when properly implemented.

Moving (Lagrangian) grid

A system in which the grid boundaries move with one or more of the physical boundaries.

This approach could be ideally suited for a module that has a dynamic boundary, or a GGCM control grid that changes rapidly.

The computational overhead is likely to be considerable. Work is still in progress on the development of these methods [e.g., Oran and Boris, 1987; Shyy et al., 1996].

Cartesian-based hierarchical grid

Cartesian grid whose cells can be subdivided indefinitely in regions of interest; utilized in the Univ. of Michigan MHD code [Gombosi et al., 1994].

Can be made arbitrarily fine in regions of physical interest and coarse in other regions, thus providing substantial savings in storage. The hierarchical structure allows the grid to change dynamically as the solution evolves.

The inherently rectangular boundaries do not generally line up with physical boundaries. The bookkeeping can be complex and expensive. Interpolation is necessary when refining the grid, with associated issues of consistency and conservation.
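The cell-subdivision bookkeeping can be illustrated with a minimal quadtree in two dimensions. This is a toy sketch of the general idea, not the actual data structures of the Michigan code; the refinement criterion and all sizes are arbitrary:

```python
import math

class Cell:
    """One square cell of a hierarchical Cartesian (quadtree) grid."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, needs_refining, max_level):
        """Recursively subdivide wherever the supplied criterion asks for it."""
        if self.level < max_level and needs_refining(self):
            half = self.size / 2
            for dx in (0, half):
                for dy in (0, half):
                    self.children.append(
                        Cell(self.x + dx, self.y + dy, half, self.level + 1))
            for child in self.children:
                child.refine(needs_refining, max_level)

    def leaves(self):
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

# Refine near the origin (standing in for a region of physical interest);
# the criterion compares distance from the origin with the cell size.
def near_origin(cell):
    cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
    return math.hypot(cx, cy) < cell.size * 2

root = Cell(-16.0, -16.0, 32.0)
root.refine(near_origin, max_level=5)
sizes = [leaf.size for leaf in root.leaves()]
print(len(sizes), min(sizes), max(sizes))
```

The resulting leaf cells span a wide range of sizes, fine near the origin and coarse far away; the interpolation and conservation issues mentioned above arise at the interfaces where leaves of different sizes meet.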

Unstructured grid

Usually used with finite-element or finite-volume methods, this approach is becoming more commonplace in the computational fluid dynamics (CFD) community [e.g., Ferziger and Peric, 1996, p. 29].

The grid can be made arbitrarily fine in regions of interest and coarse in others. The inherently unstructured nature can be exploited to adjust and dynamically adapt to numerous different coordinate boundaries.

The price to be paid is that bookkeeping and grid generation can be extremely complex and expensive. The potential power of this approach for magnetospheric modeling is the subject of ongoing research [e.g., Klouček and Toffoletto, 1998].

It is possible to construct an almost-pure eigenfunction representation of the 3-D magnetosphere if the magnetopause is assumed to have a particularly simple form. For example, Voigt [1981] represents the magnetopause as a hemisphere on the dayside joined continuously to a semi-infinite circular cylinder on the nightside, and develops independent eigenfunction representations of the dayside magnetic field in terms of spherical harmonics and of the tail field in terms of cylindrical harmonics, with an interpolation scheme to smooth out the transition between the two representations. Alternatively, the magnetopause can be represented as a paraboloid of revolution and the internal field represented in terms of paraboloidal harmonics [Stern, 1985]. Although the pure eigenfunction approach is numerically efficient, it has a major shortcoming insofar as a GGCM control structure is concerned: the magnetopause shape cannot be adjusted to maintain pressure balance across it.

It is possible to eliminate this shortcoming by replacing the rigorously constrained eigenfunctions with a set of basis functions whose form still contains information about the global geometry of the system, but with less mathematical rigor (and hence more flexibility) than a pure eigenfunction expansion. This approach has been exploited by N. Tsyganenko in his highly successful series of empirical magnetospheric magnetic-field models [Tsyganenko, 1987, 1989, 1993, 1995]. The basis functions are chosen to represent the magnetic-field contributions associated with known magnetospheric current distributions (e.g., dipole, magnetopause, ring currents, and tail currents), and the expansion coefficients are determined by a multivariate least-squares fitting procedure to a very large set of satellite magnetometer data. The procedure could easily be adapted to a GGCM control structure, with the magnetic-field outputs of the various regional modules replacing the satellite magnetometer data. The result would be a smooth, analytical representation of the global magnetic field that has sufficient accuracy for global-level field-line tracing and for linkage between modules, while providing interpolation-free translation to an arbitrarily fine module grid. Although the technique has thus far only been applied to magnetic-field modeling, there is nothing in the procedure that precludes its application to other scalar and vector fields of interest [N. Tsyganenko, private communication, 1997].
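The fitting procedure itself is ordinary linear least squares. The sketch below is a toy version of the idea, and none of it is Tsyganenko's actual model: the basis functions, sample distribution, and coefficient values are invented for illustration. A scalar field sampled at scattered points (standing in for module outputs or magnetometer data) is represented as a linear combination of geometry-aware basis functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered sample points in (r, theta), standing in for module outputs.
r = rng.uniform(2.0, 15.0, 400)
theta = rng.uniform(0.0, np.pi, 400)

# A small set of global basis functions chosen to reflect the geometry:
# dipole-like and uniform-field-like terms.  Purely illustrative.
def basis(r, theta):
    return np.column_stack([
        np.cos(theta) / r**3,   # dipole-like term
        np.cos(theta),          # uniform-field-like term
        np.cos(theta) / r,      # slower fall-off term
    ])

# Synthetic "truth": a dipole plus a weak uniform contribution, plus noise.
truth = 30.0 * np.cos(theta) / r**3 + 0.1 * np.cos(theta)
samples = truth + 0.01 * rng.standard_normal(r.size)

# Least-squares fit of the expansion coefficients, as in an empirical model.
coeffs, *_ = np.linalg.lstsq(basis(r, theta), samples, rcond=None)
print(coeffs)  # roughly [30.0, 0.1, 0.0]

# The fitted representation is smooth and analytic: evaluate it anywhere.
fit = basis(r, theta) @ coeffs
```

The fitted coefficients recover the synthetic field, and the resulting representation can be evaluated, differentiated, or integrated at any point without interpolation.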

The obvious advantage of the basis-function approach is that it provides continuous, analytic, differentiable and integrable functions on a global scale, while a grid-based approach provides these only piecewise within each grid cell by the use of sometimes awkward interpolation schemes. Corollary advantages include more precise enforcement of div(B) = 0, more efficient field-line tracing and field-line integrals, and more reliable specification of high-order derivatives, as may be required for the calculation of finite gyroradius effects in a low-latitude boundary-layer (LLBL) or a plasma-sheet boundary-layer (PSBL) module. However, the success of the basis-function representation depends sensitively on the right choice of basis functions for a given geometry, and the procedure for obtaining the best-fit expansion coefficients can be time-consuming. Also, just as in a grid-based approach, issues of consistency would have to be resolved, e.g., if module-generated fields satisfy force balance, to what degree do the fitted fields also satisfy force balance? Another problem that can, in principle, arise with a basis-function approach is the Gibbs phenomenon, where the basis-function representation has "overshoots" in the neighborhood of discontinuities in the original function that is being represented. This should not be a serious problem for the GGCM control program because geospace has no true discontinuities, and the regions where gradients are sharp enough to appear discontinuous on the global scale are encompassed, by design, within boundary modules that do not utilize the global representation except as a boundary condition. Thus the Gibbs phenomenon, while a potential pitfall to watch for, is not likely to be insurmountable.
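The Gibbs overshoot is easy to demonstrate with the textbook square-wave example (a generic illustration, unrelated to any GGCM code): the partial Fourier sums overshoot the jump by roughly 9% of the jump height no matter how many terms are retained; adding terms only narrows the region of overshoot.

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of a unit square wave (odd harmonics only)."""
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):
        s += (4.0 / np.pi) * np.sin(k * x) / k
    return s

x = np.linspace(-np.pi, np.pi, 20001)
overshoots = {}
for n in (10, 100):
    # Overshoot above the square wave's value of +1 near the jump at x = 0.
    overshoots[n] = square_wave_partial_sum(x, n).max() - 1.0
print(overshoots)  # both near 0.18: the overshoot narrows but does not shrink
```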

Comparison of Approaches. The numerical storage requirements for a basis-function approach are, in principle, about the same as for a grid-based approach, for a given degree of spatial resolution, provided that the basis functions are chosen appropriately. The grid-based approach has been more extensively utilized in global geospace modeling, partly, no doubt, because it has received more generous institutional support, particularly within the NASA ISTP, SPTP, and HPCC programs. The basis-function approach has been utilized less extensively, but quite effectively, in the Tsyganenko empirical models. We recommend that both approaches be considered on an equal footing in the selection of a numerical storage mechanism for a GGCM control program.

A Prototype GGCM


The preceding description of the modular-progressive approach has been rather abstract, emphasizing the general characteristics that distinguish it from the alternative global-MHD-spine approach. One should not, however, infer from this discussion that the modular-progressive approach is a pure abstraction, an elaborate form without substance. There are already a number of working research models that employ the modular-progressive approach to a greater or lesser degree. A familiar example is the Rice Convection Model (RCM), which contains two of the domain modules in Figure 1 (the ring current and the ionosphere) and two of the boundary modules (the ring-current field lines and the tail-dipole transition region). Thus, the approach that we advocate is not revolutionary, but evolutionary: it would build upon the significant foundation that has already been laid by existing research models.

As a concrete example we offer the "prototype" GGCM pictured in Figure 2. The format is identical to that of Figure 1 with the important exception that each of the generic module designations of Figure 1 has been replaced by a specific, existing research model that could, with minimal effort, be cast in the form of a program module if it is not already in that form. We do not mean to imply, nor should the reader infer, that the examples listed in Figure 2 are the only viable candidates, nor necessarily even the most viable candidates, for each module. Tables 1 and 2 show a longer list of existing research models that could reasonably be considered as candidates for each category of GGCM module. The specific subset listed in Figure 2 is based largely on our familiarity with these models, and our resulting conviction that a working global GGCM could readily be constructed from these existing elements with nominal additional programming effort. Most of the other models listed in Tables 1 and 2 are probably equally viable candidates, but their details are less familiar to the authors. And there are probably other, equally viable, candidates that are not listed in Tables 1 and 2 owing to our ignorance or oversight.

The purpose of Figure 2 and Tables 1 and 2 is to convince the reader that viable candidates already exist for each of the generic module elements listed in Figure 1. The lists are intended to be illustrative rather than exclusive. Indeed, the greatest strength of the modular-progressive approach is its inclusiveness; it is designed to accommodate any idea, however unconventional, provided that the idea can be cast into the form of a quantitative predictive algorithm subject to a standardized set of boundary conditions appropriate to each domain or boundary module. As discussed above, the appropriate set of boundary conditions for each module type should be established, not by fiat, but by an open workshop attended by all interested participants.

Table 1: Some existing candidates for DOMAIN modules.

Magnetosheath
  Approaches: Exterior Gasdynamics with Convected B; Exterior MHD
  Candidates: Spreiter & Stahara [1980]; Grabbe [1996]; Wu [1992]; Cairns & Lyon [1995]; Spreiter & Stahara [1992]

Tail Lobes
  Siscoe-Sanchez Expansion Fan: Siscoe & Sanchez [1987]; Toffoletto & Hill [1993]
  Test Particles with Mantle Source: Pilipp & Morfill [1978]; Ashour-Abdalla et al. [1993]

Plasma Sheet
  Magnetotail MHD: Hesse & Birn [1994]; Cai et al. [1994]; Wiegelmann & Schindler [1995]; Lee et al. [1995]; Hesse et al. [1996a, 1996b]; Ma & Bhattacharjee [1996]
  Hybrid simulations: Burkhart et al. [1993]; Krauss-Varban & Omidi [1995]; Kuznetsova et al. [1998]
  PIC simulations: Pritchett & Coroniti [1995, 1997]; Dreher et al. [1996]
  Test Particles: Onsager & Mukai [1996]; Ashour-Abdalla et al. [1993]
  Empirical Model: Tsyganenko [1989, 1995]; Hilmer & Voigt [1995]
  Substorm Trigger Algorithm: Klimas et al. [1991]; Vassiliadis et al. [1994]

Ring Current
  Rice Convection Model: Harel et al. [1981a,b]
  Magnetospheric Specification Model: Bales et al. [1993]; Wolf et al. [1996]
  Fluid Convection Model: Peymirat & Fontaine [1994]
  Static Equilibrium Model: Heinemann & Pontius [1990]; Hesse & Birn [1993]; Heinemann et al. [1994]; Cheng [1995]
  Ring-Current Transport/Loss: Sheldon & Hamilton [1993]; Jordanova et al. [1994, 1997]; Chen et al. [1994]; Fok et al. [1995]; Kozyra et al. [1995]; Bishop [1996]; Thorne et al. [1996]; Noël [1997]; Sheldon & Eastman [1997]
  Radiation-Belt Model: Beutier et al. [1995]; Rodgers [1995]; Boscher et al. [1996]; Albert [1996]; Gussenhoven et al. [1996]; Huston et al. [1996]; Li et al. [1996]; Hudson et al. [1997]; Kim & Chan [1997]
  Plasmasphere Model: Gallagher et al. [1988]; Rasmussen & Schunk [1990]; Rasmussen et al. [1993]; Weiss et al. [1997]
  Empirical Model: Tsyganenko [1989, 1993, 1995]; Hilmer & Voigt [1995]

Ionosphere
  Approaches: 2-D Conductivity Tensor; M-I Coupling Model; Empirical Conductivity Model; Empirical Potential Patterns
  Candidates: Wolf et al. [1991]; Roble et al. [1988]; Richmond et al. [1992]; Kan [1993]; Emery et al. [1996]; Papitashvili et al. [1994]; Spiro et al. [1982]; Hardy et al. [1985]; Heppner & Maynard [1987]; Weimer [1996]

Table 2: Some existing candidates for BOUNDARY modules.

Magnetopause
  Empirical Boundary: Roelof & Sibeck [1993, 1994]; Petrinec & Russell [1993, 1996]; Shue et al. [1997]
  Self-consistent Boundary: Sotirelis [1996]
  LLBL Model: Drakou et al. [1994]; Wei et al. [1996]
  Magnetopause Reconnection: Ding et al. [1992]; Fu et al. [1995]; Otto [1995]; Lakhina & Schindler [1996]
  Specified Boundary Bn: Toffoletto & Hill [1989, 1993]
  Kelvin-Helmholtz Instability: Wu [1986]; Miura [1992]; Thomas & Winske [1993]

Plasma-Sheet Boundary Layer
  Test Particles with Lobe Source: Onsager et al. [1991]; Onsager & Mukai [1995]; Schriver & Ashour-Abdalla [1990]; Ashour-Abdalla et al. [1993]
  FLR Theory: Heinemann & Erickson [1997]

Tail-Dipole Transition Region
  RCM with Magneto-Relaxation: Toffoletto et al. [1996]
  Current-Disruption Model: Lui [1994]

Ring-Current Field Lines
  RCM with Magneto-Relaxation: Toffoletto et al. [1996]
  Magnetic Mirror Effect: Knight [1973]

LLBL
  Test Particles: Onsager et al. [1993]
  LLBL Model: Wei et al. [1996]

Polar-Cap Field Lines
  Controlled Penetration of IMF: Toffoletto & Hill [1993]; Ding et al. [1996]
  Source-Surface Model: Schulz & McNab [1987]
  Empirical Model: Tsyganenko [1995]

Plasma-Sheet Field Lines
  Magnetic-Mirror Effect: Knight [1973]
  Empirical Model: Tsyganenko [1989, 1995]; Hilmer & Voigt [1995]

In order to succeed, the GGCM must be a broadly based community effort. This is reflected in the results of the community survey (Appendix), which show a strong sentiment in favor of a flexible, transparent code structure and an open-use policy. It is equally essential, however, that this widely distributed effort be closely coordinated, and coordination implies a certain amount of institutional structure. Essential elements of this institutional structure include a center for GGCM development and operations, a grant-supported program of module development, a chain of command for agency/community oversight, and a set of "rules of the road" for GGCM use.

Development/Operations Center


Responsibilities. The initial development and validation of a modular-progressive GGCM control program and user-interface program will require the dedicated effort of a small group of individuals with expertise both in geospace physics and in numerical modeling. Expert advice can and should be provided from other sources, including in particular the GGCM steering committee, but a single group must have the ultimate responsibility for creating a working GGCM, and sufficient funding to support the time commitment needed to carry out that responsibility. This "GGCM Development Center" should consist of a part-time senior geospace scientist (the "center director") managing the work of one or two full-time computational physicists, as well as the accumulated software assets developed or acquired in the course of GGCM development. The center director should be responsible not only for the development of these software assets but also for their secure storage and documentation, and ultimately for their on-line accessibility during the operations phase. A high-performance computer system suitable for GGCM development and execution must be continuously accessible for a nominal user fee, even if it is not geographically co-located with the development center – the GEM program has neither the resources nor the need to own and operate its own computer system. The center personnel will, of course, need access to modern workstations, Internet servers, and peripheral devices, but it is to be expected that these are already available at the host institution for GGCM use at nominal charge. Bulk storage devices (e.g., hard disks, CDs) are the only equipment purchases that we foresee as being needed for GGCM development.

After the initial GGCM development and validation is complete, the GGCM Development Center should metamorphose into a GGCM Operations Center with responsibility for maintaining and upgrading the code and training others in its use. Although the nature of the day-to-day duties will change, the professional time commitment required during the operational phase will probably be about the same as during the development phase. And, although the success of the GGCM venture should never be tied too closely to a single individual or group, it is both natural and desirable that the same person(s) and institution(s) that are responsible for GGCM code development should also be responsible, at least initially, for GGCM operations. In addition to technical code maintenance and operation, the Operations Center should be responsible for providing user services and for implementing the rules of the road as established by the GGCM Steering Committee.

A working GGCM is capable of consuming enormous volumes of data as input and generating still larger volumes of data as output. The content and format of these data streams and other restrictions on data flow are issues that must be dealt with by the GGCM Steering Committee in consultation with both the user community and the GGCM Center. As a general rule, the GGCM operation should utilize existing graphics software wherever possible, software that is either in the public domain or is already widely accessible in the user community. The GGCM Development/Operations Center should focus its attention on geospace physics and numerical modeling, not on data-stream management. The Center will, however, be responsible for implementing the protocols and restrictions on data flow that are directed by the GGCM Steering Committee. This responsibility includes programmed warning messages to the user whenever the user's request would violate established data protocols or exceed established limits on data volume. It is neither feasible nor desirable for the Center to attempt to archive all outputs from all GGCM runs. The responsibility for storage and analysis of detailed GGCM outputs should ordinarily rest on the individual user. Certain standardized model outputs should, of course, be archived and made accessible on-line by the Development/Operations Center, to serve both as benchmark comparisons and as demonstrations of the system's functionality. (These standardized outputs would be a natural outgrowth of the present Phase 1 of the GGCM Implementation Plan [Wolf and Hesse, 1997].) The transfer of input and output data between user and Center should ordinarily take place through the Internet, although the Center should have the capability to read and write CD-ROMs when necessary to accommodate unusually large data streams or unusual user circumstances.

Operating Modes. Just as there are a variety of different hypotheses that can be tested by a modular-progressive GGCM, there are also a variety of ways in which a modular-progressive GGCM can be utilized. We focus here on the operating modes that are essential to the use of the GGCM as a research tool, which is its primary purpose, although we also note that the possible future applications in the realms of operational forecasting and educational outreach should always be kept in mind, and should be built into the system to the extent that is feasible without sacrificing its essential research function. We identify four operating modes that are essential to the research function:

Default Static Configuration. For many purposes, especially in the coordinated analysis of ground-based and satellite data, it is necessary and sufficient to have a best-guess specification of the global, static plasma-field configuration corresponding to a given set of static inputs. In this mode, the GGCM would provide the type of mapping tool that has traditionally been provided, in part, by the empirical Tsyganenko magnetic-field models. For this purpose, the GGCM Development/Operations center should maintain a default GGCM model that represents, at any given time, the best available consensus view, as determined by the GGCM Steering Committee. (Thus, the work of this steering committee does not end with the development of the first-generation GGCM; it only gets more interesting!) This mode of GGCM operation represents the natural culmination of the present Phase-1 part of the GGCM Implementation Plan [Wolf and Hesse, 1997], in that it would provide outputs of existing (but now coupled) models to a wide community of users who need have no knowledge of (or interest in) the computational details of the models.

Default Time-Dependent Execution. For some data-analysis purposes (e.g., event studies), one needs not just a best-guess static configuration, but a best-guess quasi-static dynamic evolution of the geospace system in response to given time-dependent inputs. This is another application of the default GGCM described above, and is the natural culmination of the Phase-2 part of the present GGCM Implementation Plan [Wolf and Hesse, 1997] in that it provides output of existing (but now coupled) models for any given time-dependent input data stream. This is also the mode of operation that most closely resembles the operational forecasting mode that is the ultimate, though not the immediate, goal of the GGCM effort. Widespread community use of the default GGCM in conjunction with observational campaigns and event studies will quickly reveal any shortcomings in "conventional wisdom" as codified in the default GGCM. We recommend that access to the "default" GGCM be unrestricted to legitimate scientific users, at least initially. If, at some future time, the usage level becomes so high that restrictions on use must be considered, this is an issue that the GGCM Steering Committee can deal with at that time. (It will also be a sure sign of the success of the GGCM enterprise.)

New Module Development. A separate mode of GGCM operation is needed for the development and testing of new modules, whether they consist of theoretical algorithms or empirical models. Once the default GGCM is up and running, it should be possible to "unplug" any of the default modules and replace it with a customized module of the user's choice – this ability is the essence of the modular-progressive approach. This operation requires a different set of diagnostics compared to a production run; one is interested not so much in the detailed physical results as in the behavior of the modified code itself, including the adherence of the new module to the standardized boundary-condition protocols, and its influence on the efficiency and numerical stability of the globally-coupled code. This de-bugging mode of operation will probably require the greatest level of user support services and consultation from the GGCM Center staff. On the other hand, it may require only a truncated version of the GGCM code itself, suitable for execution on the user's home workstation. A successful outcome in this de-bugging mode should be a pre-requisite for inclusion of the new module in a full production run of the GGCM.

Customized GGCM Execution. Once a new module has been developed and tested, it should be a simple matter for its author to assemble and execute a customized GGCM with the new module(s) replacing the corresponding default module(s) in an otherwise standard GGCM. This stage represents the culmination of the whole GGCM effort; it is in this stage that alternative hypotheses get tested against each other and against observations. Full production runs of the GGCM, either customized or default versions, would ordinarily be done on the high-performance computing system that is either located at, or continuously accessible to, the GGCM Operations Center. A password system is probably all that is required to restrict use to qualified users. If the level of use begins to tax the system resources, then it will become an issue for the GGCM Steering Committee to seek ways to increase the system capacity or, as a last resort, to ration the existing resources democratically among qualified users. (And, again, it will be a sure sign of the success of the GGCM effort.) Users who have the capacity to download and execute the full GGCM elsewhere should be encouraged to do so, in order to relieve the demand on the Center system.

Options. In each of the operating modes described above, the user should have the option to select either the maximum-resolution GGCM with all "bells and whistles", or a stripped-down, lower resolution version that is designed to be downloaded and experimented with at the individual user's workstation. The Development/Operations Center should be responsible for developing and maintaining on-line both the full version and the "smaller, faster" version of the default code. The latter version will be more efficient, and probably quite adequate for the majority of the day-to-day research use of the GGCM.

In the module de-bugging and custom-execution modes of operation, it should also be an option for the user to specify one or more "passive" modules that are not executed at each time step but are instead replaced by static algorithms or data arrays. Thus, for example, an investigation of magnetopause physics or magnetotail current-sheet physics can benefit from the global constraints provided by the coupled GGCM structure without having to tolerate the computational overhead of doing a ring-current or ionosphere-thermosphere calculation at each time step (and vice-versa). Similarly, in the module-debugging and custom-execution modes, it should be possible for the user to lump together two or more adjacent modules in case the user's algorithm already encompasses more than one of the designated domain or boundary modules. For example, as noted above, the Rice Convection Model already encompasses two domain modules and two boundary modules. As another example, a global MHD code could provide many, if not most, of the domain and boundary modules, with non-MHD effects being incorporated in just one or a few of the domain or boundary modules. Thus, the "MHD-spine" approach can be viewed, not as a separate alternative to the modular-progressive approach, but rather as a subset of a larger class of model-coupling problems that are facilitated by a modular-progressive GGCM approach. In each case, the guiding principle of the GGCM Development/Operations Center should be to facilitate, rather than to obstruct, the inclusion of pre-existing model algorithms.
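The plug-compatibility idea described above can be sketched in a few lines of code. Everything below is hypothetical: the interface name, the dictionary-based boundary exchange, and the ring-current example are our invention, not an established GGCM protocol. The point is only that, once modules share a standardized calling convention, swapping a default module for a custom or passive one reduces to a single reassignment:

```python
from typing import Protocol

class DomainModule(Protocol):
    """Hypothetical standardized module interface: each module advances its
    state given a boundary-condition dictionary and returns its outputs."""
    def advance(self, t: float, dt: float, boundary_in: dict) -> dict: ...

class DefaultRingCurrent:
    def advance(self, t, dt, boundary_in):
        # ... real physics would live here ...
        return {"pressure": 1.0 + 0.1 * t}

class PassiveRingCurrent:
    """A 'passive' module: not executed each step, just replays a frozen state."""
    def __init__(self, frozen):
        self.frozen = frozen
    def advance(self, t, dt, boundary_in):
        return self.frozen

def run(control_modules, t_end, dt):
    """Minimal control-program loop: hand each module its latest state."""
    t, boundaries = 0.0, {name: {} for name in control_modules}
    while t < t_end:
        for name, module in control_modules.items():
            boundaries[name] = module.advance(t, dt, boundaries[name])
        t += dt
    return boundaries

# "Unplugging" the default module and substituting a passive one is just a
# dictionary assignment -- no other part of the control program changes.
modules = {"ring_current": DefaultRingCurrent()}
out_active = run(modules, t_end=1.0, dt=0.5)
modules["ring_current"] = PassiveRingCurrent({"pressure": 1.0})
out_passive = run(modules, t_end=1.0, dt=0.5)
print(out_active["ring_current"], out_passive["ring_current"])
```

In a real control program the boundary-condition dictionaries would follow the community-standardized protocols discussed above, and the loop would route each module's outputs to its neighboring modules rather than back to itself.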

Module Development Grants


It is implicit in any GGCM effort (and quite explicit in the modular-progressive approach) that the success of a globally-coupled GGCM model depends entirely on the availability of GGCM-compatible module algorithms for each regional domain and for each inter-domain boundary. As indicated in Figure 2 above, it is already possible to identify at least one candidate for each domain and boundary module from the existing suite of theoretical models, although many of these will need some further work to make them GGCM-compatible. Moreover, as suggested by Tables 1 and 2 above, there is a much larger class of theoretical models that could, and should, be incorporated into a global GGCM structure in order to test their observable global consequences. There will also, undoubtedly, be new ideas and hypotheses, not foreseen at present, that will become logical candidates for inclusion in a global GGCM. Thus it is essential that the development and implementation of the GGCM control program by the Development/Operations Center should be accompanied by an equally ambitious program of module development and implementation, supported by individual research grants.

As is the case for the GEM program generally, this goal does not pre-suppose any infusion of new funds into the NSF Magnetospheric Physics Program, but merely an institutional policy of encouraging future NSF-supported geospace modellers to seriously consider the scientific advantages of casting their results into the form of a GGCM plug-in module. This should not become a prerequisite for research support, but it can and should become a compelling programmatic factor in choosing among equally meritorious research proposals. Lacking any historical precedent, we provisionally recommend that the funding level designated for GGCM module development should equal that designated for GGCM control program development and operations.

To qualify as a GGCM module, an algorithm should be required to pass four tests:

Robustness: The code should perform reliably over the full range of boundary and initial conditions that it might encounter in the GGCM environment. It should have internal safeguards against division by zero, illegal function arguments, out-of-range inputs, and other errors that could bring down the module and hence the GGCM program.

Compatibility: The module must adhere precisely to the established protocols for the number and type of boundary conditions and input/output quantities for its given module type (except that, as noted above, the option should be provided for a single algorithm to encompass more than one of the designated domain/boundary modules). If other inputs are required beyond those provided routinely by the GGCM control program, these must be provided internally in the code, with appropriate documentation (or a built-in set-up program) for changing them.

Documentation: There should be sufficient comments, with appropriate references, so that the interested user (and the system manager) can ascertain what the code is doing.

Sample Outputs: A set of outputs for standard inputs should be provided to demonstrate the working of the module. Detailed validation studies or comparisons with observational data should not be required as entrance criteria – this is appropriately done after an algorithm qualifies as a GGCM module, not before – a simple test run should suffice to show that the code does what it claims to do.

To help ensure robustness of module codes, the GGCM program may want to consider adopting some of the protocols used by the Air Force in the development of its operational space weather codes. For example, in the Magnetospheric Specification Model [R. A. Wolf, private communication, 1998], the code developers were required to eliminate essentially all "stop" statements except for the one at the end of the program, and to provide an escape route to allow the program to continue execution, with appropriate error messages, after an otherwise fatal error.
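A sketch of such an escape route is given below. This is our illustration only; the function names and state dictionaries are invented, and the actual MSM protocol is surely more elaborate. The essential point is that a module failure degrades the run, with a logged warning, instead of terminating it:

```python
import math

def safe_advance(module_step, state, fallback, log):
    """Escape-route wrapper: never stop the whole model.  On an
    otherwise-fatal error, emit a message and continue with a fallback
    state instead.  (Illustrative sketch only.)"""
    try:
        result = module_step(state)
        # Out-of-range safeguard: reject NaN/inf rather than propagate them.
        if not all(math.isfinite(v) for v in result.values()):
            raise ValueError("non-finite output")
        return result
    except (ArithmeticError, ValueError) as err:
        log.append(f"module error ({err}); continuing with fallback state")
        return fallback

def fragile_step(state):
    return {"density": 1.0 / state["n"]}   # blows up when n == 0

log = []
ok = safe_advance(fragile_step, {"n": 4}, {"density": 0.0}, log)
bad = safe_advance(fragile_step, {"n": 0}, {"density": 0.0}, log)
print(ok, bad, log)
```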


Oversight

A working mechanism already exists for agency/community oversight of the GGCM effort, namely, the GGCM Steering Committee, which reports to the GEM Steering Committee, which in turn answers both to the GEM research community and to the NSF program management. We do not anticipate any need to re-invent the existing agency/community oversight structure. Rather, we endorse the existing structure and, as noted already, we anticipate that the job of the GGCM Steering Committee will become more interesting and more important as time goes on.

For example, the GGCM Steering Committee will soon have to assess the present round of concept studies, and to formulate a recommendation to NSF program management to pursue either the modular-progressive approach described here, or the MHD-spine approach described in the Dartmouth study, or perhaps elements of both. If the modular-progressive approach is adopted, it would then be appropriate for the GGCM Steering Committee to take on the further task of establishing, with community input, the most appropriate set of domain and boundary modules (i.e., an officially sanctioned version of Figure 1 above), and the most appropriate set of protocols for transferring boundary conditions between modules. In addition, regardless of the computational approach that is adopted, the GGCM Steering Committee will have to establish the most appropriate set of "rules of the road" for operation of the GGCM, and the most appropriate set of options and protocols for data transfer between GGCM users and the GGCM Development/Operations Center. Each of these questions requires open community input, and each could plausibly be the subject of a special GEM-sponsored workshop open to all interested parties. Some of these discussions could perhaps be worked into the agenda of pre-planned GEM workshops (e.g., the Summer Snowmass Workshops), but others may require dedicated venues because of their complexity. In particular, the questions of module definition and inter-module communication are sufficiently complex and subtle to justify, in our opinion, a dedicated two-day workshop.

Rules of the Road


The GGCM Steering Committee has the logical responsibility for establishing "rules of the road" for fair and appropriate use of the GGCM, once it exists. Our comments here are restricted to very general guidelines that we feel are important on the basis of our prior experience with collaborative scientific efforts and our time spent thinking about the GGCM problem.

The GGCM will be a scientific enterprise, an effort "of, by, and for" the scientific research community. Intellectual property rights should not be an issue. Irrespective of the scientific structure of the code (modular-progressive, MHD-spine, or whatever), it should be understood that any computer code that is included as part of the GGCM thereby enters the public domain. The only exception that should be made to this rule is in the case of codes that were developed by other government agencies (e.g., DoD) that have pre-existing legal restrictions on their dissemination and use. For example, the Magnetospheric Specification Model [Wolf et al., 1996], which might logically provide some elements of a GGCM, was developed under contract with the Air Force, and has not been released for public use with real-time input data (although its scientific use with non-real-time data is unrestricted). Restrictions on GGCM use may well become necessary for efficiency and security reasons (to prevent waste and vandalism), but should not be motivated by proprietary considerations.

Insofar as credit and co-authorship are concerned, an official "default" GGCM module should be treated exactly like a spacecraft instrument. Thus, the module developer(s) should naturally be entitled to co-author any paper that describes the module itself, and the first paper that describes the results of its use within the GGCM structure. Any subsequent paper that utilizes this module as part of the default GGCM should be authored by the person(s) responsible for the work, which may or may not include the module developer(s), depending on the extent to which their intervention is critical to the completion of the work. Any paper that includes or depends upon the results of a GGCM computer run should give an appropriate acknowledgment to the NSF in general and to the GGCM program in particular, but it should be understood that providing a module for the GGCM is like providing an instrument for a NASA spacecraft; it does not entitle the provider to be listed as a co-author on all subsequent papers that utilize the data (or numerical results) from the instrument (or module). The rules of the road, insofar as co-authorship is concerned, should preserve existing rights and protocols that are based on reason, but should not create new rights and protocols that are not.


Development Time


The objectives are clear, and the techniques are available. Given an adequate level of funding, as estimated below, we believe that a working first-generation modular-progressive GGCM could be assembled in 30 - 36 months. The first 6 - 9 months would be needed to select, with community input, a consensus set of module types and boundary-condition protocols and an optimal numerical storage scheme. An additional year should suffice to create and implement the control program, and the remaining 12 - 15 months would be devoted to debugging and bullet-proofing the control program and creating the kind of user interface program that would make the GGCM readily accessible to all scientific users, not just to its developers. Streamlining the code, and adapting it to new hardware platforms, are activities that can and should continue into the operations phase, following the initial 3-year development effort. In the meantime, the GEM research community at large can become engaged in the ongoing process of testing and improving the physics content of the code.

Experience has shown that large-scale computer programming projects can take twice as long, and cost twice as much, as they should. Applying this rule of thumb, one should be prepared to wait 5 - 6 years to see a fully functional, all-inclusive modular GGCM at the level of effort assumed here. On the other hand, with a modular-progressive approach it is likely that incremental progress in the control-program development will produce incremental science results. Many critical questions can be addressed with just two or three coupled modules. Thus, although a complete GGCM may well take ~5 years to develop, interesting physics results should be forthcoming from the effort after just 2 - 3 years.

The budget estimates given below assume a level of activity that we consider to be close to the minimum required to produce a working GGCM within a useful time frame (~ 3 - 5 years). There is, within limits, an inverse relationship between funding level and development time. If the funding level were doubled, the development time could be reduced significantly, but probably not by half. A minimum time ~ 1 year would probably be required for mere humans to accomplish an organizational and technical task of this magnitude even with unlimited funding. At the other end of the spectrum, there is a minimum "critical-mass" level of effort, below which a GGCM development program would probably have no visible impact on the progress of geospace research, regardless of how long one is willing to wait. This relationship between funding level and research impact is illustrated schematically in Figure 3. Funding levels less than "A" produce negligible impact, while funding levels exceeding "B" produce diminishing returns on additional investment. The level of effort assumed in the following budget estimates is probably closer to A than to B.

Development and Operations Cost


Table 3 gives our estimate of the cost per year of establishing and maintaining a GGCM Development Center based on the 30 - 36 month development time discussed above. The salary amounts are representative only, and are meant to include fringe benefits but not overhead, which is listed separately. The travel budget is to support travel by center personnel to workshops dedicated to GGCM development, as well as travel by external consultants to the center. It is assumed that basic equipment (workstations, web server, I/O devices) is available at nominal charge at the host institution, and that community/agency oversight will continue to be provided pro bono.

The first amount column in the table indicates the likely cost if the Center is located at a university or private laboratory, with fully-charged salaries and full overhead (assumed to be 50% of direct costs for the sake of illustration). The second amount column shows a "best-case" scenario in which the Center is located at a cooperating government laboratory and staffed by civil-service employees. An obvious candidate would be the Laboratory for Extraterrestrial Physics at the NASA Goddard Space Flight Center, whose mission already overlaps strongly with the goals of the GGCM effort. Comparison of the bottom lines of these two amount columns indicates that there is a strong financial motivation for the NSF to seek a cooperative agreement with NASA for the development of a GGCM Center. Without such cooperation, the cost to NSF of developing a GGCM (exclusive of modules) is estimated to be of the order of $400K/yr; with such cooperation, the cost to NSF could be reduced by a large factor in the best-case scenario, without impacting the NASA research grants program.

As noted earlier, we believe that a similar amount (~$400K/yr) should be invested in module development grants. In the likely event that new funds of this magnitude are not injected into the GEM program, a portion of existing research funding would have to be directed toward GGCM module development. (Some existing GEM-funded research grants could already be so designated with only minor change of emphasis.) The module-development funds could be favorably leveraged if NASA could be persuaded to make GGCM module development a theme of its Sun-Earth Connections Theory Program (formerly Space Physics Theory Program), in support of its ISTP/GGS and SEC Programs.

Table 3: Annual Cost Estimate for a GGCM Development/Operations Center

                                                                Worst Case (1)    Best Case (2)
  Salary & Fringe Benefits for Center Director, 1/3 time
  Salary & Fringe Benefits for Computational Physicist, full time
  Salary & Fringe Benefits for Scientific Programmer, full time
  Travel
  Hardware & Software Fees & Upgrades
  Materials, Supplies, & Miscellaneous
  Overhead @ 50%
  Total (excluding Module-Development Grants)

(1) Assumes university or private lab, with fully-charged salaries and full overhead @ 50% of direct costs.
(2) Assumes cooperating government lab with no salary burden, no overhead, and subsidized computer rates.

Thus the total cost to NSF of a viable modular-progressive GGCM effort is estimated to be of the order of $0.8M/yr (or $0.5M/yr with NASA cooperation under a best-case scenario). Of this total, at least half would be distributed throughout the research community for module development. Less than half (and much less in the best-case scenario) would be devoted to the GGCM Development/Operations Center. We believe that the GEM Program would be ill-served, both scientifically and programmatically, if more than half of the designated GGCM funding were to be devoted to a central development/operations facility. As the Development Center transitions into an Operations Center, the professional salary costs might be reduced somewhat, but this is likely to be offset by increases in other cost categories, so the total cost per year will probably stay about the same.



Conclusions

Our study of the modular-progressive GGCM concept supports the following conclusions:

1. A modular GGCM is needed. A fully modular programming approach is well suited to many if not most of the outstanding problems of geospace dynamics that a GGCM will be expected to address. Indeed, many of these problems appear to require such an approach.

2. A modular GGCM is flexible and inclusive. A modular-progressive approach offers the surest route to achieving two key attributes of a successful GGCM, flexibility and inclusiveness. Flexibility includes the ability to accommodate a wide variety of physical hypotheses, numerical techniques, and empirical data, and to perform controlled computer experiments to investigate cause-and-effect relationships. Inclusiveness means engaging the efforts and talents of a broad segment of the theoretical and experimental GEM research community in both the development and the use of the GGCM.

3. A modular GGCM is feasible. The general concept is clearly defined and requires no new breakthroughs in hardware or software design. Most of the required modules already exist in stand-alone form; what is needed is standardization of boundary protocols and development of a control/user interface program. We see no technical roadblocks ahead, only dedicated, hard work.

4. Community input is needed. We have identified four technical issues that require further community input and consensus before a modular-progressive code design can be finalized:

Identification of an optimum set of domain and boundary modules;

Specification of the content and protocol of boundary conditions for each module type;

Choice of an appropriate set of basis functions and/or an appropriate grid type for storage of information by the control program; and

Specification of the format of input and output data streams.
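Once these four choices are made, the control program's core duty becomes largely mechanical: shuttle boundary conditions across each declared interface, then advance every module one time step. The Python sketch below is purely illustrative — `StubModule`, `run`, and the interface labels are our own inventions, not a proposed design:

```python
class StubModule:
    """Stand-in for a domain module; records the coupling traffic."""
    def __init__(self, name):
        self.name = name
        self.steps = 0          # number of advance() calls so far
        self.received = []      # boundary conditions imported so far

    def advance(self, dt):
        self.steps += 1                 # real module: integrate its domain by dt

    def export_boundary(self, label):
        return (self.name, label)       # real module: package field values here

    def import_boundary(self, bc):
        self.received.append(bc)        # real module: apply as boundary data

def run(modules, interfaces, dt, n_steps):
    """Toy GGCM control loop: exchange boundary conditions, then advance.

    modules:    dict mapping module name -> module object
    interfaces: list of (source, target, interface-label) triples
    """
    for _ in range(n_steps):
        # 1. exchange boundary conditions across every declared interface
        for src, dst, label in interfaces:
            modules[dst].import_boundary(modules[src].export_boundary(label))
        # 2. advance every domain module one time step
        for module in modules.values():
            module.advance(dt)

# Wire a two-module system: ionosphere <-> inner magnetosphere.
mods = {"iono": StubModule("iono"), "inner": StubModule("inner")}
links = [("iono", "inner", "iono/inner"), ("inner", "iono", "inner/iono")]
run(mods, links, dt=1.0, n_steps=10)
```

Swapping in a better module, or adding a new interface, changes only the `modules` and `interfaces` tables — which is the progressive half of the modular-progressive concept.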

5. A development/operations center is needed. The GGCM control program will not build itself, and it will not occur as a spontaneous outgrowth of loosely-coordinated individual efforts connected only by a web site. Serious funds will have to be invested in a center for GGCM development and operation, with a responsible scientist directing the work of one or more computational physicists. We estimate the funding requirement for center support to be ~$400K/yr if full salaries and overhead are charged, and perhaps as low as $60K/yr if a cooperating government laboratory could be persuaded to contribute salaries and waive overhead. We estimate the time required for initial deployment of a functioning modular-progressive GGCM to be about 30 - 36 months at this funding level.

6. Module development grants are equally needed. Existing regional models need further work to make them conform to GGCM module protocols, once these are formalized, and additional regional/boundary models need to be developed to a quantitative status. Module development is of equal importance to control-program development; neither is of much use without the other.

7. Community/agency oversight is important. We do not, however, view this as a problem: the existing mechanisms for oversight and community involvement are adequate and responsive to the needs of the research community.


Recommendations

On the basis of the above conclusions, we offer the following recommendations to the GGCM Steering Committee and to the GEM Program management:

1. We recommend development of a modular-progressive GGCM. The time is right, both scientifically and programmatically, to begin this important development effort in earnest.

2. We recommend establishment of a GGCM development/operation center. This is the next logical step in the overall GGCM implementation effort, following the review of the present set of concept definition studies. The GEM program should issue (by means of the GEM Messenger email newsletter) a public call for proposals from organizations wishing to host a GGCM development/operation center. The proposal deadline should be short (~ 3 months from the announcement of opportunity) and the proposals themselves should be short (~10 pages of text not counting budget and supporting materials). Electronic submission should be encouraged but not required. The proposals should not be required to repeat the scientific justification for, nor the logical structure of, the proposed GGCM product, except by reference to one or more of the present set of concept studies. They should, however, be allowed to depart from or amplify on these concept studies if desired, through the use of appended material not counted in the page limit. The GGCM Steering Committee should review these proposals and recommend one or more for funding to the GEM program management. If more than one development center is to be funded (e.g., one for a modular approach and another for a global-MHD-spine approach), then one should be designated as the lead center with responsibility for those elements of GGCM development (e.g., user interfaces, input/output data formats) that are common to both approaches. The lead center should assume responsibility for the administration of the ongoing Phase-1 and -2 parts of the three-phase GGCM implementation plan [Wolf and Hesse, 1997] in addition to the development effort that constitutes Phase 3. Proposers should be made aware that the source and amount of funding for the project are still unknown at the time of the solicitation, but should take as guidelines the funding requirements estimated in the concept studies.

3. We recommend workshops to develop a GGCM blueprint. In conclusion (4) above we identify four technical issues that need to be resolved, preferably with broad community input, before a "blueprint" can be drawn for the construction of a modular-progressive GGCM control/user interface program. The alternative MHD-spine approach presumably has unresolved issues of its own. We recommend that each of these issues be the subject of a dedicated workshop with open participation by the prospective user community. A full-day workshop on each topic is justified, with each hour of workshop time being backed up by many hours of pre-workshop preparation. Logistically, it is probably most efficient to conduct these workshops back-to-back during a single week at a single location, because most participants will probably want to participate in more than one (and because the topics are interrelated). Provision should also be made for remote participation via Internet for those unable to attend in person. The announcement of these workshops should be made by the GGCM steering committee as soon as possible after the selection of a GGCM development/operation center. The scheduling of the workshops should provide ample time (~4-6 months) for prospective participants to do their homework. An email exploder should be created, based on a subset of the GEM Messenger email list, to give prospective workshop participants the opportunity to exchange background information prior to the workshops, either by means of unformatted text or by means of a dedicated web site, with a synopsis and URL being provided through the email exploder to the target group. The product of these workshops should be not a glossy public document but a specific set of instructions from the user community to the code developers.

4. We recommend a dedicated program of module-development grants. As noted in conclusion (6) above, the development of a GGCM control program (or "spine") would be a useless gesture without an equally aggressive program of module development. As a rule of thumb, we suggest that the funding made available for module development should be at least equal to that made available for control-program (or "spine") development. If the GGCM development/operation center is subject to full salary and overhead charges (implying an annual budget ~$400K by our estimate above), then a similar amount should be dedicated to module development. If the GGCM center can be supported much more economically through the cooperation of a government lab, as in the "best-case" scenario described above, it does not follow that the module-development costs should decrease in the same proportion. The 50-50 rule of thumb that we recommend applies to scientific work-hours expended, not dollars expended. We believe that, if more than half of the available GGCM resources were concentrated in a single development/operation center, not only would the effort be compromised scientifically, but it would also lose the enthusiastic support of the GEM research community, which is an indispensable resource.

5. We recommend continuity of oversight. We find no fault with the existing mechanisms for community/agency oversight of the GGCM development effort, and we recommend that these mechanisms be kept in place. In particular, we recommend that the GGCM Steering Committee continue to guide GGCM development with the same level of aggressiveness and inclusiveness as it has exercised in the past. We also recommend that the steering committee devise a set of rules of the road for GGCM use that are as open and inclusive as possible subject to prior legal restraints and physical resource constraints.


References

Albert, J. A., CRRES observations and radial diffusion theory of radiation belt protons, in Radiation Belts: Models and Standards, J. F. Lemaire, D. Heynderickx and D. N. Baker (eds.), p. 69, Geophys. Monogr. 97, Am. Geophys. Un., Washington, D. C., 1996.

Ashour-Abdalla, M., J. P. Berchem, J. Büchner, and L. M. Zelenyi, Shaping of the magnetotail from the mantle: global and local structuring, J. Geophys. Res., 98, 5651, 1993.

Ashour-Abdalla, M., L. M. Zelenyi, V. Peroomian, R. L. Richard, and J. M. Bosqued, The mosaic structure of plasma bulk flows in the Earth's magnetotail, J. Geophys. Res., 100, 19,191, 1995.

Bales, B., J. Freeman, B. Hausman, R. Hilmer, R. Lambour, A. Nagai, R. Spiro, G.-H. Voigt, R. Wolf, W. F. Denig, D. Hardy, M. Heinemann, N. Maynard, F. Rich, R. D. Belian, and T. Cayton, Status of the development of the Magnetospheric Specification and Forecast Model, in Solar-Terrestrial Predictions-IV: Proceedings of a Workshop at Ottawa, Canada, May 18-22, 1992, J. Hruska, M. A. Shea, D. F. Smart, and G. Heckman (eds.), p. 467, Boulder: NOAA, Environmental Res. Lab., 1993.

Beutier, T., D. Boscher, and M. France, SALAMMBO: A three-dimensional simulation of the proton radiation belt, J. Geophys. Res., 100, 17,181, 1995.

Bishop, J., Multiple charge exchange and ionization collisions within the ring current-geocorona-plasmasphere system: generation of a secondary ring current on inner L shells, J. Geophys. Res., 101, 17,325, 1996.

Boscher, D. M., T. Beutier, and S. Bourdarie, A three-dimensional phase space dynamical model of the Earth's radiation belt, in Workshop on the Earth's Trapped Particle Environment, G. D. Reeves (ed.), p. 181, Woodbury, New York: AIP Press, 1996.

Burkhart, G. R., P. B. Dusenbery, T. W. Speiser, and R. E. Lopez, Hybrid simulations of thin current sheets, J. Geophys. Res., 98, 21,373, 1993.

Cai, H. J., D. Q. Ding, and L. C. Lee, Momentum transport near a magnetic X line in collisionless reconnection, J. Geophys. Res., 99, 35, 1994.

Cairns, I. H., and J. G. Lyon, MHD simulations of Earth's bow shock at low Mach numbers: standoff distances, J. Geophys. Res., 100, 17,173, 1995.

Chen, M. W., L. R. Lyons, and M. Schulz, Simulations of phase space distributions of storm time proton ring current, J. Geophys. Res., 99, 5745, 1994.

Cheng, C. Z., Three-dimensional magnetospheric equilibrium with isotropic pressure, Geophys. Res. Lett., 22, 2401, 1995.

Crooker, N. U., Dayside merging and cusp geometry, J. Geophys. Res., 84, 951, 1979.

Ding, C., T. W. Hill, and F. R. Toffoletto, Improvement of the Toffoletto-Hill open magnetosphere model, in Physics of Space Plasmas (1995), T. Chang and J. R. Jasperse (eds.), Cambridge, MA: MIT Center for Theoretical Geo/Cosmo Plasma Physics, 1996.

Ding, D. Q., L. C. Lee, and D. W. Swift, Particle simulations of driven collisionless magnetic reconnection at the dayside magnetopause, J. Geophys. Res., 97, 8453, 1992.

Drakou, E., B. U. O. Sonnerup, and W. Lotko, Self-consistent steady state model of the low-latitude boundary layer, J. Geophys. Res., 99, 2351, 1994.

Dreher, J., U. Arendt, and K. Schindler, Particle simulations of collisionless reconnection in magnetotail configuration including electron dynamics, J. Geophys. Res., 101, 27,375, 1996.

Emery, B. A., G. Lu, E. P. Szuszczewicz, A. D. Richmond, R. G. Roble, P. G. Richards, K. L. Miller, R. Niciejewski, D. S. Evans, F. J. Rich, W. F. Denig, D. L. Chenette, P. Wilkinson, S. Pulinets, K. F. O'Loughlin, R. Hanbaba, M. Abdu, P. Jiao, K. Igarashi, and B. M. Reddy, Assimilative mapping of ionospheric electrodynamics in the thermosphere-ionosphere general circulation model: comparisons with global ionospheric and thermospheric observations during the GEM/SUNDIAL period of March 28-29, 1992, J. Geophys. Res., 101, 26,681, 1996.

Fedder, J. A., and J. G. Lyon, The Earth's magnetosphere is 165 RE long: self-consistent currents, convection, magnetospheric structure, and processes for northward interplanetary magnetic field, J. Geophys. Res., 100, 3623, 1995.

Ferziger, Joel H., and Milovan Peric, Computational Methods for Fluid Dynamics, Springer-Verlag, Berlin Heidelberg, 1996.

Fletcher, C. A. J., Computational Techniques for Fluid Dynamics, Vol. 1, Springer-Verlag, Berlin Heidelberg, 1991.

Fok, M.-C., T. E. Moore, J. U. Kozyra, G. C. Ho, and D. C. Hamilton, Three-dimensional ring current decay model, J. Geophys. Res., 100, 9619, 1995.

Fu, S. Y., Z. Y. Pu, Z. X. Liu, and Q. G. Zong, Simulation study on stochastic reconnection at the magnetopause, J. Geophys. Res., 100, 12,001, 1995.

Gallagher, D. L., P. D. Craven, and R. H. Comfort, An empirical model of the Earth's plasmasphere, Adv. Space Res., 8, 15, 1988.

Gombosi, T. I., K. G. Powell, and D. L. de Zeeuw, Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results, J. Geophys. Res., 99, 21,525, 1994.

Grabbe, C. L., MHD theory of Earth's magnetosheath for an axisymmetric model, Geophys. Res. Lett., 23, 777, 1996.

Gussenhoven, M. S., E. G. Mullen, and D. H. Brautigam, Phillips Laboratory Space Physics Division Radiation Models, in Radiation Belts: Models and Standards, J. F. Lemaire, D. Heynderickx and D. N. Baker (eds.), p. 93, Geophys. Monogr. 97, Am. Geophys. Un., Washington, D. C., 1996.

Hardy, D. A., M. S. Gussenhoven, and E. Holeman, A statistical model of auroral electron precipitation, J. Geophys. Res., 90, 4229, 1985.

Harel, M., R. A. Wolf, P. H. Reiff, R. W. Spiro, W. J. Burke, F. J. Rich, and M. Smiddy, Quantitative simulation of a magnetospheric substorm 1, model logic and overview, J. Geophys. Res., 86, 2217, 1981a.

Harel, M., R. A. Wolf, R. W. Spiro, P. H. Reiff, C.-K. Chen, W. J. Burke, F. J. Rich, and M. Smiddy, Quantitative simulation of a magnetospheric substorm 2, comparison with observations, J. Geophys. Res., 86, 2242, 1981b.

Heinemann, M., and D. H. Pontius, Jr., Representations of currents and magnetic fields in isotropic magnetohydrostatic plasma, J. Geophys. Res., 95, 251, 1990.

Heinemann, M., G. M. Erickson, and D. H. Pontius, Jr., Inertial currents in isotropic plasma, J. Geophys. Res., 99, 8635, 1994.

Heinemann, M., and G. M. Erickson, Field-aligned currents and parallel electric fields in the plasma sheet boundary layer, J. Geophys. Res., (submitted), 1997.

Heppner, J. P., and N. C. Maynard, Empirical high-latitude electric field models, J. Geophys. Res., 92, 4467, 1987.

Hesse, M., and J. Birn, Three-dimensional magnetotail equilibria by numerical relaxation techniques, J. Geophys. Res., 98, 3973, 1993.

Hesse, M., and J. Birn, MHD modeling of magnetotail instability for localized resistivity, J. Geophys. Res., 99, 8565, 1994.

Hesse, M., J. Birn, M. M. Kuznetsova, and J. Dreher, A simple model of core field generation during plasmoid evolution, J. Geophys. Res., 101, 10,797, 1996a.

Hesse, M., J. Birn, D. N. Baker, and J. A. Slavin, MHD simulations of the transition of magnetic reconnection from closed to open field lines, J. Geophys. Res., 101, 10,805, 1996b.

Hilmer, R. V., and G.-H. Voigt, A magnetospheric magnetic field model with flexible current systems driven by independent physical parameters, J. Geophys. Res., 100, 5613, 1995.

Hoffman, K. A., and S. T. Chang, Computational Fluid Dynamics for Engineers - Vol. 1, Engineering Education System, Wichita, KS, 1993.

Hudson, M. K., S. R. Elkington, J. G. Lyon, V. A. Marchenko, I. Roth, M. Temerin, J. B. Blake, M. S. Gussenhoven, and J. R. Wygant, Simulations of radiation belt formation during storm sudden commencements, J. Geophys. Res., 102, 14,087, 1997.

Huston, S. L., G. A. Kuck, and K. A. Pfitzer, Low altitude trapped radiation model using TIROS/NOAA data, in Radiation Belts: Models and Standards, J. F. Lemaire, D. Heynderickx and D. N. Baker (eds.), p. 119, Geophys. Monogr. 97, Am. Geophys. Un., Washington, D. C., 1996.

Jordanova, V. K., J. U. Kozyra, G. V. Khazanov, A. F. Nagy, C. E. Rasmussen, and M.-C. Fok, A bounce-averaged kinetic model of the ring current ion population, Geophys. Res. Lett., 21, 2785, 1994.

Jordanova, V. K., J. U. Kozyra, A. F. Nagy, and G. V. Khazanov, Kinetic model of ring current-atmosphere interactions, J. Geophys. Res., 102, 14,279, 1997.

Kan, J. R., A global magnetosphere-ionosphere coupling model of substorms, J. Geophys. Res., 98, 17,263, 1993.

Kim, H. J., and A. A. Chan, Fully-adiabatic changes in storm-time relativistic electron fluxes, J. Geophys. Res., 102, 22,107, 1997.

Klimas, A. J., D. N. Baker, D. A. Roberts, D. H. Fairfield, and J. Büchner, A nonlinear dynamic analogue model of substorms, in Magnetospheric Substorms, J. R. Kan, T. A. Potemra, S. Kokubun, and T. Iijima (eds.), p. 449, Washington, D. C.: Am. Geophys. Un., 1991.

Klouček, P., and F. R. Toffoletto, Three dimensional finite element modeling of the Earth's magnetosphere, J. Comp. Phys., (submitted), 1998.

Knight, S., Parallel electric fields, Planet. Space Sci., 21, 741, 1973.

Kozyra, J. U., C. E. Rasmussen, R. H. Miller, and E. Villalon, Interaction of ring current and radiation belt protons with ducted plasmaspheric hiss, 2, time evolution of the distribution function, J. Geophys. Res., 100, 21,911, 1995.

Krauss-Varban, D., and N. Omidi, Large-scale hybrid simulations of the magnetotail during reconnection, Geophys. Res. Lett., 22, 3271, 1995.

Lakhina, G. S., and K. Schindler, Tearing modes at the magnetopause, J. Geophys. Res., 101, 2707, 1996.

Lee, L. C., L. Zhang, G. S. Choe, and H. J. Cai, Formation of a very thin current sheet in the near-Earth magnetotail and the explosive growth phase of substorms, Geophys. Res. Lett., 22, 1137, 1995.

Li, X., M. K. Hudson, J. B. Blake, I. Roth, M. Temerin, and J. R. Wygant, Observation and simulation of the rapid formation of a new electron radiation belt during March 24, 1991 SSC, in Workshop on the Earth's Trapped Particle Environment, G. D. Reeves (ed.), p. 109, AIP Conference Proceedings 383, AIP Press, Woodbury, New York, 1996.

Lui, A., Mechanisms for the substorm current wedge, in Substorms 2: Proceedings of the Second International Conference on Substorms, Fairbanks, Alaska, March 7-11, 1994, J. R. Kan, J. D. Craven, and S.-I. Akasofu (eds.), p. 195, Fairbanks, Alaska: University of Alaska, 1994.

Lyon, J. G., M. Hojo, R. W. Spiro, F. R. Toffoletto, and R. A. Wolf, Toward a combined simulation model of the magnetosphere (abstract), EOS, Trans. AGU, 76, F489, 1995.

Ma, Z. W., and A. Bhattacharjee, Fast impulsive reconnection and current sheet intensification due to electron pressure gradients in semicollisional plasmas, Geophys. Res. Lett., 23, 1673, 1996.

Miura, A., Kelvin-Helmholtz instability at the magnetospheric boundary: dependence on the magnetosheath sonic mach number, J. Geophys. Res., 97, 10,655, 1992.

Noël, S., Decay of the magnetospheric ring current: a Monte Carlo simulation, J. Geophys. Res., 102, 2301, 1997.

Ogino, T., R. J. Walker, and M. Ashour-Abdalla, A global magnetohydrodynamic simulation of the response of the magnetosphere to a northward turning of the interplanetary magnetic field, J. Geophys. Res., 99, 11,027, 1994.

Onsager, T. G., and T. Mukai, Low-altitude signature of the plasma sheet boundary layer: observations and model, Geophys. Res. Lett., 22, 855, 1995.

Onsager, T. G., and T. Mukai, The structure of the plasma sheet and its boundary layers, J. Geomagn. Geoelectr., 48, 687, 1996.

Onsager, T. G., M. F. Thomsen, R. C. Elphic, and J. T. Gosling, Model of electron and ion distributions in the plasma sheet boundary layer, J. Geophys. Res., 96, 20,999, 1991.

Onsager, T. G., C. A. Kletzing, J. B. Austin, and H. MacKiernan, Model of magnetosheath plasma in the magnetosphere: cusp and mantle particles at low-altitudes, Geophys. Res. Lett., 20, 479, 1993.

Oran, E. S., and J. P. Boris, Numerical Simulation of Reactive Flow, Elsevier Publ. Co., New York, NY, 1987.

Otto, A., Forced three-dimensional magnetic reconnection due to linkage of magnetic flux tubes, J. Geophys. Res., 100, 11,863, 1995.

Papitashvili, V. O., B. A. Belov, D. S. Faermark, Y. I. Feldstein, S. A. Golyshev, L. I. Gromova, and A. E. Levitin, Electric potential patterns in the northern and southern polar regions parameterized by the interplanetary magnetic field, J. Geophys. Res., 99, 13,251, 1994.

Petrinec, S. M., and C. T. Russell, External and internal influences on the size of the dayside terrestrial magnetosphere, Geophys. Res. Lett., 20, 339, 1993.

Petrinec, S. M., and C. T. Russell, Near-Earth magnetotail shape and size as determined from the magnetopause flaring angle, J. Geophys. Res., 101, 137, 1996.

Peymirat, C., and D. Fontaine, Numerical simulation of magnetospheric convection including the effect of field-aligned currents and electron precipitation, J. Geophys. Res., 99, 11,155, 1994.

Pilipp, W. G., and G. Morfill, The formation of the plasma sheet resulting from plasma mantle dynamics, J. Geophys. Res., 83, 5670, 1978.

Pritchett, P. L., and F. V. Coroniti, Formation of thin current sheets during plasma sheet convection, J. Geophys. Res., 100, 23,551, 1995.

Pritchett, P. L., and F. V. Coroniti, Convection-driven reconnection and interchange in the near-Earth plasma sheet, Geophys. Res. Lett., 24, 873, 1997.

Raeder, J., R. J. Walker, and M. Ashour-Abdalla, The structure of the distant geomagnetic tail during long periods of northward IMF, Geophys. Res. Lett., 22, 349, 1995.

Rasmussen, C. E., and R. W. Schunk, A three-dimensional time-dependent model of the plasmasphere, J. Geophys. Res., 95, 6133, 1990.

Rasmussen, C. E., S. M. Guiter, and S. G. Thomas, A two-dimensional model of the plasmasphere: refilling time constants, Planet. Space Sci., 41, 35, 1993.

Richmond, A. D., Assimilative mapping of ionospheric electrodynamics, Adv. Space Res., 6, 59, 1992.

Richmond, A. D., E. C. Ridley, and R. C. Roble, A thermosphere/ionosphere general circulation model with coupled electrodynamics, Geophys. Res. Lett., 19, 601, 1992.

Roble, R. G., E. C. Ridley, A. D. Richmond, and R. E. Dickinson, A coupled thermosphere/ionosphere general circulation model, Geophys. Res. Lett., 15, 1325, 1988.

Rodgers, D. J., A new empirical electron model, in Radiation Belts: Models and Standards, J. F. Lemaire, D. Heynderickx and D. N. Baker (eds.), p. 103, Geophys. Monogr. 97, Am. Geophys. Un., Washington, D. C., 1996.

Roederer, J. G. (Editor), GEM Geospace Environment Modeling: A Program of Solar-Terrestrial Research in Global Geosciences, University of Alaska, Fairbanks, 1988.

Roelof, E. C., and D. G. Sibeck, Magnetopause shape as a bivariate function of interplanetary magnetic field Bz and solar wind dynamic pressure, J. Geophys. Res., 98, 21,421, 1993.

Roelof, E. C., and D. G. Sibeck, Correction to "Magnetopause shape as a bivariate function of interplanetary magnetic field Bz and solar wind dynamic pressure", J. Geophys. Res., 99, 8787, 1994.

Schriver, D., and M. Ashour-Abdalla, Cold plasma heating in the plasma sheet boundary layer: theory and simulations, J. Geophys. Res., 95, 3987, 1990.

Schulz, M., and M. C. McNab, Source-surface model of the magnetosphere, Geophys. Res. Lett., 14, 182, 1987.

Sheldon, R. B., and T. E. Eastman, Particle transport in the magnetosphere: a new diffusion model, Geophys. Res. Lett., 24, 811, 1997.

Sheldon, R. B., and D. C. Hamilton, Ion transport and loss in the Earth's quiet ring current, 1. data and standard model, J. Geophys. Res., 98, 13,491, 1993.

Shue, J., J. K. Chao, H. C. Fu, C. T. Russell, P. Song, K. K. Khurana, and H. J. Singer, A new functional form to study the solar wind control of the magnetopause size and shape, J. Geophys. Res., 102, 9497, 1997.

Shyy, W., H. S. Udaykumar, M. M. Rao, and R. W. Smith, Computational Fluid Dynamics with Moving Boundaries, Taylor and Francis Series in Computational and Physical Processes in Mechanics and Thermal Sciences, Washington, D. C., 1996.

Siscoe, G. L., The magnetosphere: a union of interdependent parts, EOS, Trans. AGU, 72, 494, 1991.

Siscoe, G. L., Global view of the connection between magnetopause merging and the neutral sheet, presented at GEM Workshop, Snowmass, CO, June 16-20, 1997.

Siscoe, G. L., and E. Sanchez, An MHD model for the complete open magnetotail boundary, J. Geophys. Res., 92, 7405, 1987.

Sotirelis, T., The shape and field of the magnetopause as determined from pressure balance, J. Geophys. Res., 101, 15,255, 1996.

Spiro, R. W., P. H. Reiff, and L. J. Maher, Jr., Precipitating electron energy flux and auroral zone conductances -- an empirical model, J. Geophys. Res., 87, 8215, 1982.

Spreiter, J. R., and S. S. Stahara, A new predictive model for determining solar wind-terrestrial planet interactions, J. Geophys. Res., 85, 6769, 1980.

Spreiter, J. R., and S. S. Stahara, poster presentation, GEM Workshop, Snowmass, CO, June 28-July 3, 1992.

Stern, D. P., Parabolic harmonics in magnetospheric modeling: The main dipole and the ring current. J. Geophys. Res. , 90, 10,851, 1985.

Thomas, V. A., and D. Winske, Kinetic simulations of the Kelvin-Helmholtz instability at the magnetopause, J. Geophys. Res., 98, 11,425, 1993.

Thompson, J. F., Z. U. A. Warsi, and C. W. Mastin, Numerical Grid Generation: Foundations and Applications, Elsevier Publ. Co., New York, NY, 1985.

Thorne, R. M., R. W. Abel, and D. Summers, Numerical simulation of asymmetric particle precipitation by pitch angle diffusion, J. Geophys. Res., 101, 24,847, 1996.

Toffoletto, F. R., and T. W. Hill, Mapping of the solar wind electric field to the Earth's polar caps, J. Geophys. Res., 94, 329, 1989.

Toffoletto, F. R., and T. W. Hill, A nonsingular model of the open magnetosphere, J. Geophys. Res., 98, 1339, 1993.

Toffoletto, F. R., R. V. Hilmer, T. W. Hill, and G.-H. Voigt, Solution of the Chapman-Ferraro problem with an arbitrary magnetopause, Geophys. Res. Lett., 21, 621, 1994.

Toffoletto, F. R., R. W. Spiro, R. A. Wolf, M. Hesse, and J. Birn, Self-consistent modeling of inner magnetospheric convection, in Third International Conference on Substorms (ICS-3), E. J. Rolfe and B. Kaldeich (eds.), p. 223, Noordwijk, The Netherlands: ESA Publications Division, 1996.

Tsyganenko, N. A., A magnetospheric magnetic field model with a warped tail current sheet, Planet. Space Sci., 37, 5, 1989.

Tsyganenko, N. A., A global analytical representation of the magnetic field produced by the region 2 Birkeland currents and the partial ring current, J. Geophys. Res., 98, 5677, 1993.

Tsyganenko, N. A., Modeling the Earth's magnetospheric magnetic field confined within a realistic magnetopause, J. Geophys. Res., 100, 5599, 1995.

Vassiliadis, D., A. J. Klimas, D. N. Baker, and D. A. Roberts, Classification and prediction of substorm conditions with nonlinear filters, in Substorms 2: Proceedings of the Second International Conference on Substorms, Fairbanks, Alaska, March 7-11, 1994, J. R. Kan, J. D. Craven, and S.-I. Akasofu (eds.), p. 473, Fairbanks, Alaska: University of Alaska, 1994.

Wei, C. Q., B. U. O. Sonnerup, and W. Lotko, Model of the low-latitude boundary layer with finite field-aligned potential drops and nonconstant mapping factors, J. Geophys. Res., 101, 21,463, 1996.

Weimer, D. R., Models of high-latitude electric potentials derived with a least error fit of spherical harmonic coefficients. J. Geophys. Res., 100, 19,595, 1995.

Weimer, D. R., A flexible, IMF-dependent model of high-latitude electric potentials having "space weather" applications, Geophys. Res. Lett., 23, 2549, 1996.

Weiss, L. A., R. L. Lambour, R. E. Elphic, and M. F. Thomsen, Study of plasmaspheric evolution using geosynchronous observations and global modeling, Geophys. Res. Lett., 24, 599, 1997.

Wiegelmann, T., and K. Schindler, Formation of thin current sheets in a quasi-static magnetotail model, Geophys. Res. Lett., 22, 2057, 1995.

Winglee, R. M., Non-MHD influences on the magnetospheric current system, J. Geophys. Res., 99, 13,437, 1994.

Wolf, R. A., R. W. Spiro, and F. J. Rich, Extension of the Rice Convection Model into the high-latitude ionosphere, J. Atm. Terrest. Phys., 53, 817, 1991.

Wolf, R. A., J. W. Freeman, Jr., B. A. Hausman, R. W. Spiro, R. V. Hilmer, and R. Lambour, Modeling convection effects in magnetic storms, in Magnetic Storms, B. T. Tsurutani, J. K. Arballo, W. D. Gonzalez, and Y. Kamide (eds.), Washington, D. C.: Am. Geophys. Un., 1996.

Wolf, R. A., J. F. Drake, M. A. Heineman, M. Hesse, J. G. Lyon, N. C. Maynard, and G. L. Siscoe, Plan for a Geospace General Circulation Model, August 1996. Available at the GGCM web site or through the GEM home page.

Wolf, R. A., and M. Hesse, The Geospace General Circulation Model: A Status Report, GEM Messenger, Vol. 7, No. 36, September 3, 1997. Available through the GEM home page.

Wu, C. C., Kelvin-Helmholtz instability at the magnetopause boundary, J. Geophys. Res. , 91, 3042, 1986.

Wu, C. C., MHD flow past an obstacle: large-scale flow in the magnetosheath, Geophys. Res. Lett., 19, 87, 1992.

Zang, Y., and R. L. Street, A composite multigrid method for calculating unsteady incompressible flows in geometrically complex domains, Int. J. Num. Methods in Fluids, 20, 341, 1995.



Results of Community Survey

The following two pages show a replica of the questionnaire form that was used to conduct our survey of the needs and opinions of the GEM research community. The form was available on the web for the month preceding the Summer 1997 GEM Snowmass Workshop, with an announcement in the May 13, 1997 GEM Messenger email newsletter, and was circulated in hard copy at the workshop. There were 36 pre-workshop responses and 27 additional responses at the workshop. The statistical pattern of the responses did not change noticeably when the workshop responses were added to the pre-workshop responses; all responses are included in the statistical results shown below.

On the pages following the questionnaire, we show in graphical form the statistical summary of all responses and we reproduce (anonymously) all written comments received.



Three research groups have been selected to conduct feasibility studies for the development of a Geospace General Circulation Model (GGCM), as called for in the GGCM planning document, based on discussions at last year's Snowmass workshop. The three groups are at Dartmouth (John Lyon, PI), Rice (Tom Hill, PI) and TRW/Colorado Springs (Al Ronn, PI). The three groups have quite different approaches to the problem, but one thing they have in common (apart from the overall scientific objectives) is the need for timely input from the community of potential GGCM users, i.e., the GEM research community.

Thus, we ask each potential GGCM user to respond to the multiple-choice questions below, to provide us with a statistical sample of perceived user needs. We also solicit any comments or suggestions you care to make, general or specific, on any aspect of GGCM development and implementation.

All responses will be shared among the three study groups but will otherwise be held in confidence.


(1) Please indicate the mode(s) in which you would probably use a GGCM if one were available:

O use archived model outputs as a tool for data analysis and interpretation;

O run time-dependent model for event simulation and hypothesis testing;

O contribute new or improved numerical algorithms for inclusion of specific physical effects or regions in the model;

O contribute new or improved data products for inclusion in the model, or methods for their inclusion;

O education/outreach;

O other - Please Specify:

O not at all.

(2) What type of GGCM management structure would you prefer?

a. "GGCM on demand": A customized GGCM could be set up and run remotely by anyone at any time. (Would provide maximum freedom of use, but may make resource allocation and data archiving difficult.)

b. "GGCM by oversight committee": Proposals to use the GGCM would be selected by peer review, and resources would be allocated accordingly.

c. Something in between -- please specify:


(3) Any GGCM implementation will involve tradeoffs between a number of desirable attributes. Please rank each of the following attributes on the following scale:

Your response (circle one; 0 = not important, 5 = most important):

a. Global coverage of geospace  0 1 2 3 4 5

b. Rigorous self-consistency  0 1 2 3 4 5

c. Flexibility (ease of incorporating new model algorithms and/or data products)  0 1 2 3 4 5

d. Data assimilation (ability to override computed results with actual data)  0 1 2 3 4 5

e. Portability to modest-sized computing platforms  0 1 2 3 4 5

f. Web-based access for multiple simultaneous users  0 1 2 3 4 5

g. Detailed code documentation  0 1 2 3 4 5

h. On-line system status and messages  0 1 2 3 4 5

i. User consultation services  0 1 2 3 4 5

j. Graphics/visualization tools  0 1 2 3 4 5

k. Statistical analysis tools  0 1 2 3 4 5

l. Simple (e.g., point-and-click) user interface  0 1 2 3 4 5

m. Easy access to archived data sets for GGCM input  0 1 2 3 4 5

n. Other -- please specify  0 1 2 3 4 5

Please provide any further comments/suggestions you care to make in the space below:

Name: E-mail Address:

Thank you for your time. Your input is important to the GEM research community. Please give your completed questionnaire to Frank Toffoletto or Tom Hill, or mail it to the

Department of Space Physics and Astronomy

MS 108, Rice University

Houston, TX 77005-1892.

A table of the survey results can be found here.


1. Please indicate the mode(s) in which you would probably use a GGCM if one were available:

Additional write-in response to Question 1:

Use as a technology transition product candidate suitable for use by commercial or military end users.

2. What type of GGCM management structure would you prefer?

Additional write-in responses to Question 2:

Archived results of the "default" GGCM should be available "on demand" for data analysis, etc. The "default" time-dependent code should be available "on demand" for testing hypotheses, etc., at remote sites. Inserting new algorithms ("modules") should not be available "on demand" but should be coordinated with the "center", with a standard testing procedure to assure compatibility and reliability. NSF should provide funding (through peer review) for module development, but such funding should not be a prerequisite for participation.

I am for the "GGCM on demand" selection with the additional freedom to run some version of the GGCM at the user's location by the user himself. I don't think that data should be archived -- we have too many archived satellite data which can't be used because of lack of funding. I don't think adding simulation output to that is much useful, except maybe for isolated events/studies.

A simplified version available "on demand", full version, or customized services, or work on including a new module, by peer review.

The option of GGCM by oversight committee is not practical. It must be closer to option 1. An oversight committee needs to exist for maintenance and upgrades. Otherwise how will it improve.

The "big" model managed by committee, a workstation-sized model, if feasible, on demand. Data archiving is important.

I think that things could be set up very much like requesting data. GGCM run requests could be submitted (by e-mail or by web) and run on a time available basis. When the run results are finished the requester can be notified that results are ready to download. In addition the code (or modules within the code) could be made available for those with resources and know-how to run it themselves.

I honestly do not have a good feel for this question. Perhaps a high level version of the model "run by committee and peer review" for work requiring large amounts of computing time, but have a scaled down model (or set of scaled down models based on size or on the region of geospace the user is focusing on) which could be used as a GGCM on demand. This would allow the researchers and students a chance to "play around" with the model before submitting a request to run the higher level model. (This would enable them to zero in on the types of runs they would require *before* they invest the CPU time running the higher level model. In some cases, the lower level model might prove to be all that is required.).

Something like the AE/AP-8 models available at the NSSDC space models page would be nice. Users looking for a simple, limited set of results (requiring some limited amount of computational effort) might go to a remote interface as described above. Those requiring a more extensive set of results could download source code or apply for a peer-reviewed allocation of resources. This assumes it's somehow possible to tell what an 'easy' calculation will be for the physical models proposed (as opposed to the empirical AE/AP-8 models).

Distributed computing. Distribute the source code with 'make' file for the most common platforms. Then each user can use his/her own resources, and customize the program as much as he/she likes.

To be of practical use GGCM must be able to be taken to home institutions and executed in a distributed manner. Although limited central site facilities may be available, today's financial environment require that researcher supported projects fund the resources required and consumed by the project. This distributes costs and the unrealistic ambition of maintaining a central site capable of supporting the user community. A community program like GGCM should not be restricted to limited use by an oversight committee. It should be restricted only by the capacity of this research industry to support legitimate research where it is needed. The technology either exists today or is rapidly becoming available to support wide distributed access to code, e.g. the GGCM, and to the required input parameters. Minimize central site demands and maximize distributed use of researcher home facilities.

It is unrealistic to expect any individual in our community to simply grab a complex code and use it without training. It is essential that potential users have access to training courses (given yearly perhaps) in which they can learn the basics of running the code and manipulating the output data. Users should be required to take such a users' course before they gain access to code. Otherwise a tremendous amount of computer time and data storage space will be simply wasted.

A "GGCM on demand" type code should exist for purposes of validation by the community. Perhaps, ability to mix and match modules, or try different module types for both research and verification. However, as a research code, a GGCM would be an evolving code. There will be need for specialized runs, somewhat like a campaign. A GGCM cannot be everything to everybody. Certain uses will demand additional resources, such as direct involvement of the code builders.

You might consider two versions of the code. One that is "GGCM on demand" and another that may be larger and less portable for "GGCM by oversight committee".

Mixed access to GGCM, i.e. make GGCM available on demand for about 75%, but reserve the remaining 25% for campaigns/peer-reviewed proposals. If the 25% are unused, just add to the on demand pool. This approach might allow for larger projects/longer runs w/o afflicting other users.

Guaranteed access to GGCM, but with priority and archiving policies determined by a central authority or oversight committee. In this scenario perhaps only a small number of runs would be archived, as the GGCM will most likely generate huge volumes of output.

I am interested in running the GGCM simultaneously with the NCAR TIME-GCM to study mutual couplings. This can be done by putting GGCM in the NCAR machines or running models separately on different machines controlled by a flux-coupler for interactions at each model time step.

I would prefer some controlled access (rather like 'b.'), however with a bit more freedom. I think people should be 'approved' for running the GGCM with appropriate training and then be able to run it wherever they can or whenever they need to.

a - is logistically impractical b - is too diff. - need an approach that can allow fairly free use, but with control to allocate resources and prioritize requests based on resource drain, feasibility and need for short term response.

A well-exercised, debugged version with very limited model inputs and probably limited outputs for open, general use on personal workstations. A state-of-the-art version (continually in an alpha or beta state) would require oversight. Both versions should be available. Aspects of the alpha/beta version would be periodically transitioned to the open version.

On demand for "small" jobs. Committee for "major" runs.

as close to "a" as resources allow. John Lyon's "community" and "personal" versions are good idea.

3. Any GGCM implementation will involve trade-offs between a number of desirable attributes. Please rank each of the following attributes on the following scale (0-5):

0 = not important, 5 = most important.

Additional write-in responses to Question 3:

Clear descriptions of limitations of the GGCM such as kinetic effects which are very important in the magnetopause and magnetotail. I hope that the GGCM will be open to a new model when it becomes available.

Robust, fail-safe and foolproof so it will be suitable for transitioning to commercial and military end users.

It is absolutely necessary that the GGCM be based on fully self-consistent dynamics. The MHD based model is the only approach which does this. The MHD model has limitations because it is missing critical physics such as the ring current and the non-ideal processes which describe magnetic reconnection. As computers become more powerful and our understanding of the physics in these critical areas improves, the MHD model can be upgraded to include more physics either through modules or by evolving into some form of hybrid approach (particle ions in key regions). In this sense, the MHD model has growth potential while the other models seem to be very limited. They are too specialized.

Rigorous self-consistency is not always achievable or practical. Rather, code should be consistent with physical principles.

For use by researchers, simple, but clear, comments in the code should suffice. A simple document made up of those comments, and an example of how to run the code, together with user prompts during execution, should be enough to run most codes without having to read source program. Modelers will want to download and modify code for research purposes.

Flexibility in determining which algorithm to use in which regions probably based on current parameters. For instance, U. Michigan's adaptive grid size. May also switch to special shock propagation codes where appropriate.

The GGCM needs (1) entrance criteria, (2) configuration management so people know what version of data or other modules they are interacting with, and (3) some measures of value of the module output so people can evaluate module improvements or comparisons with other modules.

Geographic Center where individuals or groups can work for brief to extended intervals on model development, analysis, science with appropriate hardware and software, visualization tools, etc. NCAR??

Strongly suggest keeping graphics separate from numerical calculations, i.e., don't make it required to have a particular graphics package to run GGCM.

annual/semiannual training sessions. Getting 10 G bytes of data per run is something you need some training to sort through!

Additional write-in Comments:

From the questions I could not tell if you intend this as a research tool, a real time analysis tool or something else. It is important to decide up front if it will be a research tool or 'real time' tool. The difference will come in the form of many trade-offs that must be made, once you chose the path.

My role is predominantly that of data provider, but as the GGCM develops I expect that my students and I would use it increasingly. I would definitely use it for educational purposes.

I think that the future of Space Weather lies in our ability to produce a practical GGCM that can ultimately serve the needs of the world beyond the scientific community. Those needs should be taken into account as early as possible in the GGCM design. 2) I think that for a group project such as this and for maximum flexibility and growth potential, it is important that the GGCM incorporate object oriented or at least modular design. A good deal of thought should go into module interface design early on.

Please make sure that there remains a good information flow and close contact between data analysis and the GCM. The greatest danger to GGCM would be a loss of communication between the modelers and experimenters/data analysis people.

The GGCM, at its best, will be an imperfect model. What compromises are acceptable will depend on the use/user. Flexibility and ruggedness are essential attributes.

I am interested in coupling to GGCM with the NCAR TIME-GCM to study mutual couplings between the Thermosphere/Ionosphere/Magnetosphere system.

Need to have well established protocols for inter module communication as well as the user interface.

If your model is not self-consistent, why bother? As Jeff Hughes suggested, implementing the TRW-style phase-1 previously run results distribution should be done now!

modular, modular, modular!


Table of Contents