Friday, 5 October 2018

A New Automated Solar Feature Recognition Facility: Sheffield Solar Catalogue (SSC)

N. Gyenge, H. Yu (余海东 ), V. Vu, M. K. Griffiths and R. Erdélyi

Regular observations of sunspots (darker regions on the solar surface; see https://en.wikipedia.org/wiki/Sunspot) were established as early as the 16th century. Since then, the revolution in IT techniques and tools has reshaped the daily routine of the solar observatories whose main task was, and still is, to build up various long-term catalogues of a wide range of solar features. The mostly manual workload has gradually been replaced by automated solutions, such as robotic telescopes and automated feature recognition algorithms. Nevertheless, some manual elements still remain part of the normal daily routine of many astrophysical institutes. 

The Sheffield Solar Catalogue (SSC) project (https://github.com/gyengen/SheffieldSolarCatalog) is a free and open-source software package for the analysis of solar data that intends to establish a fully automated solar feature recognition environment, from the raw images of solar observations to a user-friendly and science-ready data source. The underlying core program provides a real-time, comprehensive solar feature data analysis environment, aimed at assisting researchers in the fields of solar physics and astronomy. 

At this stage of development, SSC is suitable for generating sunspot data fully automatically, based on white-light continuum and magnetogram observations by the Solar Dynamics Observatory (SDO) satellite (https://en.wikipedia.org/wiki/Solar_Dynamics_Observatory) [1]. Although the project is currently focused on sunspot-group and sunspot identification, the database will later be extended to other solar features, such as solar pores, faculae, coronal holes, jets, spicules and other solar phenomena.

Figure 1 demonstrates the flowchart of the project, where the rectangles indicate the most important parts of the source code. The source code can be separated into three different layers, as shown in the lower yellow rectangle. The backend (or engine) is responsible for data production, turning the raw solar images into scientific data (i.e., data tables). This program layer is written entirely in the Python 3 programming language.

Figure 1. The flowchart of the SSC project. The main parts of the source code are distinguished by the coloured rectangles.

At the first step, the raw observations are downloaded from the JSOC server (http://jsoc.stanford.edu/), which provides the SDO observations. The data then need to be prepared before any actual scrutiny: the images must be validated and, if necessary, de-rotated. In the case of continuum images, limb darkening is corrected; this is an optical effect seen in solar images, whereby the centre of the image appears significantly brighter than the edges. The magnetogram is corrected in a similar manner. After the necessary corrections, the algorithm begins to identify the physical boundaries of the sunspots. Additional information is, however, required for identifying each sunspot within every active region (AR). The data matrices (i.e. sub-images of the sunspots cut from the full observation) are selected for each AR by using the HARP data (again, see the JSOC server). The HARP data provide an approximate boundary for each AR, which is an appropriate initial condition for the further analysis. The actual physical contours of the umbra and penumbra (almost every sunspot can be decomposed into these two regions, which differ in photon intensity) are then generated by the active contour model algorithm (https://en.wikipedia.org/wiki/Active_contour_model). The output is written to individual PDF and PNG files, as Figure 2 (A/B) demonstrates. Finally, the scientifically valuable data are written into an SQL table, where the engine terminates. The appended SQL table is available to the further services; every few minutes the engine loops back to the first step with a new observation, and so on.

Figure 2A. An example result of the active contour model algorithm. The blue lines show the sunspot boundaries overlaid on the continuum image.
Figure 2B. The same sunspot boundaries projected onto the corresponding magnetogram observation.
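
The contouring step described above can be sketched in a few lines of Python. The snippet below is a minimal illustration using NumPy and scikit-image's active contour implementation; the function name, smoothing parameters and circular initial boundary are assumptions for illustration, not the SSC source code.

# A minimal sketch of the contouring step, assuming NumPy and scikit-image;
# parameter values and the circular initial boundary are illustrative,
# not the settings used by SSC.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def sunspot_contour(continuum_data, n_points=200):
    """Refine a rough initial boundary into a sunspot contour (snake)."""
    # Smooth the intensity image so the snake is driven by the large-scale
    # umbra/penumbra gradients rather than by pixel noise.
    smoothed = gaussian(continuum_data.astype(float), sigma=3, preserve_range=True)

    # Initial boundary: a circle around the cut-out; in the real pipeline this
    # role is played by the approximate HARP boundary of the active region.
    rows, cols = continuum_data.shape
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    init = np.column_stack([rows / 2 + 0.4 * rows * np.sin(theta),
                            cols / 2 + 0.4 * cols * np.cos(theta)])

    # The active contour relaxes onto the intensity boundary of the spot.
    return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)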


The next layer is the data storage, which holds the output of the engine. Here, the raw scientific data are transformed and stored in SQL format. Table 1 shows a few lines of the database; each line represents one sunspot within a sunspot group. A line contains the most important pieces of information about the spot, such as the date and time of the observation, the coordinates in the Carrington heliographic, polar and helioprojective reference systems [2], and the area of the sunspot. The columns on the right-hand side give basic statistics (total, mean, minimum, maximum and standard deviation) of the pixels composing the sunspot. 

The server also stores images of the processed sunspot groups and contours in FITS and PNG format, as demonstrated by Figure 2. The output for one set of observations takes around 50 MB of space on the hard drive. This means that with a 5-minute cadence (the currently chosen default cadence of the project), the program generates about 15 GB of data each day, resulting in more than 5 TB of data per annum. Ultimately, a one-minute cadence (the desired temporal resolution) would write out about 75 GB daily and 25 TB annually; however, this cadence requires massive parallelisation of the source code.
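
As a quick sanity check of these figures, the daily and annual volumes follow directly from the per-observation size and the cadence; a back-of-the-envelope snippet, assuming roughly 50 MB per processed set:

# Back-of-the-envelope check of the storage figures quoted above,
# assuming roughly 50 MB per processed set of observations.
mb_per_set = 50
for cadence_min in (5, 1):
    sets_per_day = 24 * 60 // cadence_min
    daily_gb = mb_per_set * sets_per_day / 1000
    print(f"{cadence_min}-min cadence: ~{daily_gb:.0f} GB/day,"
          f" ~{daily_gb * 365 / 1000:.1f} TB/year")
# 5-min cadence: ~14 GB/day, ~5.3 TB/year
# 1-min cadence: ~72 GB/day, ~26.3 TB/year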

2017-07-13 | 20:31:23 | c | u | 12666 | 1 | 292.34 | 127.06 | 318.75 | 23.49 | 11.7 | 103.06 | 18.35 | 0.18 | 8.63 | 26258672 | 7483371 | 42519 | 19808 | 54286 | 10787
2017-07-13 | 20:31:23 | c | u | 12666 | 2 | 267.61 | 123.16 | 294.59 | 24.71 | 11.5 | 101.45 | 16.73 | 0.01 | 0.53 | 1625085 | 572434 | 52039 | 48777 | 54496 | 1633.
2017-07-13 | 20:31:23 | c | u | 12666 | 3 | 273.33 | 134.4 | 304.58 | 26.18 | 12.17 | 101.86 | 17.15 | 0.5 | 24.2 | 73580522 | 23445947 | 47365 | 19811 | 55608 | 6900.6

Table 1. The columns are, in order: date and time of the observation, type of the data (continuum or magnetogram), type of the sunspot feature (umbra or penumbra), NOAA number of the AR, the serial number of the sunspot within the sunspot group, x and y coordinates in the helioprojective reference system, R and Theta coordinates in the polar system, and Latitude, Longitude and LCM in the heliographic system. The next two columns show the area of the selected feature. The last columns give statistics of the pixels within the defined contours (total, mean, minimum, maximum and standard deviation).
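
To give a feel for how such a table can be consumed, the snippet below builds and queries a small SQLite table laid out like Table 1; the column names and the SQLite backend are illustrative assumptions rather than the actual SSC schema.

# Illustrative only: a table laid out like Table 1 and a simple query.
# Column names and the SQLite backend are assumptions, not the SSC schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sunspots (
        obs_date TEXT, obs_time TEXT,
        data_type TEXT,                  -- 'c' for continuum
        spot_type TEXT,                  -- 'u' for umbra
        noaa INTEGER, spot_id INTEGER,
        hpc_x REAL, hpc_y REAL,          -- helioprojective coordinates
        polar_r REAL, polar_theta REAL,  -- polar coordinates
        latitude REAL, longitude REAL, lcm REAL,
        area_1 REAL, area_2 REAL,        -- the two area columns
        px_total REAL, px_mean REAL,
        px_min REAL, px_max REAL, px_std REAL)""")

# Rows written by the engine would be inserted here; a typical query then
# sums, for example, the umbral areas of AR 12666 in a continuum observation.
total_area = conn.execute("""
    SELECT SUM(area_1) FROM sunspots
    WHERE noaa = 12666 AND data_type = 'c' AND spot_type = 'u'
""").fetchone()[0]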


Finally, the last layer is the web facility, which is the user-friendly online frontend of the project. The frontend is based on a hybrid software solution, where the HTTPS server is supported by the Python Flask framework (templating HTML pages with CSS) and JavaScript. The web service is able to display, visualise and analyse the data received from the engine backend. The user can select, filter and sort the data. The selected data can be downloaded (via the HTML page or the SFTP protocol) or analysed with the built-in plotting tool, powered by the Bokeh engine, which provides elegant and interactive plots (a screenshot is shown in Figure 3).


Figure 3. The user-friendly web interface of the project with fully automatic software solutions based on Python Flask, Bokeh and JavaScript.
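
A minimal sketch of how a Flask route can serve an interactive Bokeh plot is given below; the route, template and plotted values are illustrative placeholders, not the SSC frontend code.

# A minimal Flask + Bokeh sketch, assuming both packages are installed.
# Route, template and plotted values are placeholders, not the SSC frontend.
from flask import Flask, render_template_string
from bokeh.embed import components
from bokeh.plotting import figure
from bokeh.resources import CDN

app = Flask(__name__)

PAGE = """<html>
<head>{{ resources|safe }}{{ script|safe }}</head>
<body><h2>Sunspot area vs. time</h2>{{ div|safe }}</body>
</html>"""

@app.route("/plot")
def plot():
    # In the real service the values would come from the SQL table;
    # here a few placeholder points are plotted instead.
    fig = figure(title="Sunspot area", x_axis_label="observation index",
                 y_axis_label="area")
    fig.line([0, 1, 2, 3], [10.2, 12.5, 9.8, 14.1], line_width=2)
    script, div = components(fig)
    return render_template_string(PAGE, resources=CDN.render(),
                                  script=script, div=div)

if __name__ == "__main__":
    app.run(debug=True)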

The project is going to be extended in the future with additional tools and types of observations. A jet recognition algorithm, based on SDO AIA images, is now under development. Furthermore, parallelisation techniques will be implemented in the source code in the near future, possibly using GPU and/or MPI architectures. 

The project is open-source, therefore, the developing team is constantly looking for researchers who would like to be involved.

[1] Pesnell, W. D. (2015). Solar dynamics observatory (SDO) (pp. 179-196). Springer International Publishing.
[2] Thompson, W. T. (2006). Coordinate systems for solar image data. Astronomy & Astrophysics, 449(2), 791-803.

Friday, 21 September 2018

MHD Code Using Multi Graphical Processing Units: SMAUG+

Numerical simulations have been one of the most important tools for studying astrophysical magnetohydrodynamic (MHD) problems since the birth of computer science. MHD modelling (https://en.wikipedia.org/wiki/Magnetohydrodynamics) of the physical processes behind a complex astrophysical observation frequently requires enormous computational effort and high compute performance. SMAUG+ is a numerical solver addressing the ideal, fully non-linear, three-dimensional MHD equations. The MHD equations are described in detail in Griffiths et al. (2015) [1].

Advances in modern processing unit technology allow us to solve more and more complex physical problems by using faster and more numerous central processing units (CPUs) or accelerators, such as graphical processing units (GPUs). Multi-GPU (mGPU) systems provide further benefits, such as larger computational domains and substantial compute-time savings, the latter translating into lower operational costs. Many studies demonstrate the performance benefits of mGPU architectures for solving various astrophysical problems [2]. mGPU systems allow us to achieve orders-of-magnitude speed-ups compared to CPU cores, and they also make it possible to considerably extend the investigated model size or increase the resolution of the computational domain, so that more detail can be captured.

Figure 1 shows the principles of an example system architecture. The red boxes represent the computational domain of the initial model configuration. The original grid is divided into four equal sub-regions, as indicated by the successive serial numbers, and each sub-region is assigned to a CPU. The CPUs communicate with each other over a communication fabric, such as MPI messaging or Omni-Path technology. Exchanging information between the sub-domains with 'halo' layers is common practice in parallel computation on CPUs. The halo layers are shown as the white rectangles within the computational domains (red rectangles). The data are obtained from (or sent to) the buffer of the top (or bottom) neighbouring process (indicated by the grey rectangles). By sending and receiving only the 'halo' cells, and not the full grid, we reduce the size of the communications. The CUDA platform then gives us access to GPU-accelerated solutions: the actual numerical computations are performed by the GPUs.
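
As a concrete illustration of the halo exchange, the short mpi4py sketch below swaps one halo row between vertically stacked sub-domains. It is a simplified Python/NumPy stand-in for SMAUG+'s actual CUDA/C implementation, and the array sizes and file name are illustrative.

# Hedged sketch of a 1-D halo exchange with mpi4py and NumPy; a stand-in for
# the CUDA/C implementation in SMAUG+, with illustrative sizes.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, ny_local = 16, 16
# One extra row at the top and bottom of each sub-domain for the halo layers.
field = np.full((ny_local + 2, nx), float(rank))

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send the first interior row upward and fill the bottom halo from below,
# then the reverse; only the halo rows travel, never the full grid.
comm.Sendrecv(sendbuf=np.ascontiguousarray(field[1, :]), dest=up, sendtag=0,
              recvbuf=field[-1, :], source=down, recvtag=0)
comm.Sendrecv(sendbuf=np.ascontiguousarray(field[-2, :]), dest=down, sendtag=1,
              recvbuf=field[0, :], source=up, recvtag=1)

Run with, for example, mpiexec -n 4 python halo_exchange.py; each rank then holds its neighbours' boundary rows without ever transferring the full grid.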

We ran the simulations using two different HPC facilities, namely the Wilkes cluster at the University of Cambridge and the ShARC cluster at the University of Sheffield. The Wilkes computer consists of 128 Dell nodes with 256 Tesla K20c GPUs. Each Tesla K20c contains 2496 GPU cores, hence the total number of GPU cores is 638,976 and the total number of CPU cores is 1,536. The ShARC cluster provides 8 NVIDIA Tesla K80 (24 GiB) graphical units, giving a total of 39,936 GPU cores.

Figure 1. Flowchart outlining SMAUG+ with the implemented MPI parallelisation technique. The red boxes demonstrate the initial and distributed model configurations. The configuration is equally divided and spread across the different CPUs and GPUs using MPI and CUDA. The white rectangles show the 'halo' cells and the grey rectangles the buffers used for storing the exchanged information. The red domains within each process show the actual mesh outline, and the blue boundaries mark data stored by other processes. The numerically intensive calculations are sent to, and performed by, the GPUs, which divide the sub-domains further and carry out the actual calculations on thousands of GPU cores (green rectangles). The figure is an example of a 2 x 2 configuration.
Figure 2. Orszag-Tang vortex results. The initial configuration contains a 1000 x 1000 grid and is distributed among 4 GPUs (2 x 2). The figure shows the temporal variation of the density at t1 = 0.04 s and t2 = 0.25 s. The simulations were performed on the ShARC cluster.

The Orszag-Tang vortex is a common validation test employing two-dimensional non-linear MHD. Figure 2 is a snapshot of an example Orszag-Tang simulation. The panel shows the temporal variation of the density on a linear colour map: waves propagate through and interact with each other, and this motion creates turbulent flow on different spatial scales. Figure 2 demonstrates that there is convincing agreement between the results of SMAUG+ and other MHD solvers. We ran a series of simulations with different simulation-box resolutions, as seen in Table 1. 
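
For reference, one common normalisation of the Orszag-Tang initial condition (the one used, for example, in the Athena test suite) is sketched below; the SMAUG+ configuration files may use a different scaling.

# A minimal sketch of one common Orszag-Tang initial condition; the scaling
# used by SMAUG+'s own configuration may differ.
import numpy as np

def orszag_tang_ic(n=1000):
    """Density, pressure, velocity and magnetic field on an n x n periodic grid."""
    x, y = np.meshgrid(np.linspace(0.0, 1.0, n, endpoint=False),
                       np.linspace(0.0, 1.0, n, endpoint=False), indexing="ij")
    B0 = 1.0 / np.sqrt(4.0 * np.pi)
    rho = np.full_like(x, 25.0 / (36.0 * np.pi))   # uniform density (gamma = 5/3)
    p = np.full_like(x, 5.0 / (12.0 * np.pi))      # uniform pressure
    vx, vy = -np.sin(2.0 * np.pi * y), np.sin(2.0 * np.pi * x)
    bx, by = -B0 * np.sin(2.0 * np.pi * y), B0 * np.sin(4.0 * np.pi * x)
    return rho, p, (vx, vy), (bx, by)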


Grid Size     Number of GPUs   Running Time [s]
1000 x 1000   1 x 1            34
1000 x 1000   2 x 2            11
1000 x 1000   4 x 4            13
2044 x 2044   2 x 2            41
2044 x 2044   4 x 4            43
4000 x 4000   4 x 4            77
8000 x 8000   8 x 8            61
8000 x 8000   10 x 10          41

Table 1. Timings for 100 iterations for the Orszag-Tang test. The results are based on the simulations performed at the Wilkes Cluster.

The actual parallel performance of the applied models is determined by various factors: (i) the granularity of the parallelisable tasks, (ii) the communication overhead between the nodes and (iii) the load balancing. Load balancing refers to the distribution of the data among the nodes: if the distribution is not balanced, the lightly loaded GPUs must wait until the heavily loaded GPUs finish their job. We always use equally divided configurations, hence all the GPUs are equally loaded. The granularity of the parallel task represents the amount of work carried out by a certain node. The communication overhead is the cost of sending and receiving information between the different nodes. In our case, the overhead is made up of two components: the MPI node-to-node communication and the CPU-GPU data transfer. Data must first be transferred from GPU memory to system memory; the CPU then sends the information to another CPU node, which finally transfers the data to its own GPU, and so on. This continuous data transfer can significantly degrade the parallel performance, and is a consequence of GPUs that are not kept arithmetically busy. Parallel slowdown can be the result of such a communication bottleneck: more GPUs must spend more time on communication. As shown above, choosing a non-optimal configuration can cause a massive waste of computational power.

To avoid parallel slowdown, the following must be considered: (i) Simply increasing parallelism does not guarantee the best performance [3]; increased parallelism with non-optimal data granularity can easily cause parallel slowdown. (ii) The number of exchanged MPI messages must be reduced as much as possible for the best performance [4]; this also means that a single GPU may give better performance than multiple GPUs if the applied model size is the same. (iii) Tasks must be large enough to outweigh the parallel communication overhead, so the processes must have a higher task granularity as the number of applied GPUs increases; to hide the communication overhead it is advisable to keep the GPUs arithmetically busy. (iv) Communication performance can be improved by using higher-performance communication hardware.

By applying the above principles, parallel speed-up is possible. For instance, the 1000 x 1000 Orszag-Tang test with 4 GPUs is around 3 times faster than the 1-GPU configuration, and the 8000 x 8000 test shows a 1.5-times speed-up between 64 and 100 GPUs (Table 1). However, the primary aim of our approach is to achieve an extended model size. A single GPU only has limited memory, but with our method an mGPU system provides as much memory as the sum of its GPUs. The only disadvantage is the communication overhead; even so, an mGPU system may still be faster and significantly cheaper than a multi-CPU approach, since for the same price a GPU contains orders of magnitude more processing cores than a CPU. Our approach provides affordable desktop high-performance computing.
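
The speed-up and (relative) parallel-efficiency figures quoted above follow directly from the timings in Table 1; a small illustrative calculation:

# Speed-up and relative parallel efficiency implied by Table 1.
# Efficiency is measured relative to the smallest GPU count for each grid.
runs = {  # grid size: {number of GPUs: running time in s}
    "1000 x 1000": {1: 34, 4: 11, 16: 13},
    "8000 x 8000": {64: 61, 100: 41},
}
for grid, times in runs.items():
    base = min(times)   # smallest GPU count available for this grid
    for gpus, t in sorted(times.items()):
        speedup = times[base] / t
        efficiency = speedup * base / gpus
        print(f"{grid}: {gpus:3d} GPUs  speed-up {speedup:4.2f}  "
              f"efficiency {efficiency:5.1%}")

For the 1000 x 1000 grid this gives a speed-up of about 3.1 on 4 GPUs (roughly 77% relative efficiency), while the drop at 16 GPUs illustrates the parallel slowdown discussed above.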

The developed software provides the opportunity to perform large-scale simulations of MHD wave propagation mimicking the strongly magnetised solar atmosphere, in particular the lower solar atmosphere from the photosphere to the low corona. Such an approach is important, as a number of high-resolution ground-based (e.g. SST, the Swedish Solar Telescope on La Palma; DKIST, the Daniel K. Inouye Solar Telescope in the USA, to be commissioned in 2019; or EST, the European Solar Telescope, to be realised in the second half of the next decade) and space-based (e.g. Hinode, SDO, the Solar Dynamics Observatory; IRIS, the Interface Region Imaging Spectrograph) facilities provide a wealth of previously unforeseen observational detail that now needs to be understood.

[1] Griffiths, M., Fedun, V., and Erdelyi, R. (2015). A fast MHD code for gravitationally stratified media using graphical processing units: SMAUG. Journal of Astrophysics and Astronomy, 36(1):197–223. 

[2] Stone, J. M. and Norman, M. L. (1992a). ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests. The Astrophysical Journal Supplement Series, 80:753–790. 

[3] Chen, D.-K., Su, H.-M., and Yew, P.-C. (1990). The impact of synchronization and granularity on parallel systems. SIGARCH Comput. Archit. News, 18(2SI):239–248.

[4] Thakur, R., Gropp, W. D., and Toonen, B. (2004). Minimizing synchronization overhead in the implementation of MPI one-sided communication. PVM/MPI, 3241:57–67.

Monday, 19 March 2018

p-mode oscillations in magnetic solar atmospheres

p-Mode Oscillations in Highly Gravitationally Stratified Magnetic Solar Atmospheres

Introduction

Observational, theoretical and computational studies of the Sun reveal a diversity of structures and complex dynamics. This is demonstrated most clearly by imagery from solar telescopes, such as the AIA 171 Angstrom images from SDO.


The culmination of studies of the dynamics of coronal loop structures at different scales and heights in the solar atmosphere is illustrated by the sketch of the solar chromosphere by Wedemeyer-Böhm.


This diversity of dynamics gives rise to a menagerie of waves, providing powerful diagnostics that aid our understanding and advance our knowledge. One of the most famous oscillations is the p-mode oscillation; we studied these to test our MHD code for gravitationally stratified atmospheres. Our initial models were hydrodynamic simulations of a realistically stratified model of the solar atmosphere, representing its lower region from the photosphere to the low corona. The objective was to model atmospheric perturbations propagating from the photosphere into the chromosphere, transition region and low corona. The perturbations are caused by the photospheric global oscillations, and the simulations use photospheric drivers mimicking the solar p-modes.

The studies revealed that:

  1. there is consistency between the frequency dependence of the energy flux in the numerical simulations and power flux measurements obtained from SDO;
  2. energy propagation into the mid- to upper-atmosphere of the quiet Sun occurs for a range of frequencies and may explain observed intensity oscillations for periods greater than the well-known 3-minute and 5-minute oscillations;
  3. energy flux propagation into the lower solar corona is strongly dependent on the particular wave modes;
  4. agreement between the energy flux predictions of our numerical simulations and those of the two-layer Klein-Gordon model supports our interpretation of the interaction of solar global oscillations with the solar atmosphere.

Structures in the Solar Atmosphere

The 3-minute and 5-minute wave modes are influenced in various ways in the different regions of the solar atmosphere. Our initial studies were relevant for the quiet internetwork regions of the non-magnetic solar chromosphere, i.e. the regions between the magnetic flux concentrations, where the quiet-Sun magnetic flux is typically in the range 5-10 G. Coronal holes are regions of cooler plasma with open field lines that allow solar particles to escape; during solar minima these regions can cause space-weather disturbances.

Regions of the magnetic chromosphere are referred to as the network or plage regions; these are bright areas near sunspots, faculae and pores. Pores are smaller counterparts of sunspots, up to a few Mm across. Faculae are bright spots forming in the canyons between solar granules; they constantly form and dissipate over time scales of several minutes and appear near magnetic field concentrations. The active network regions are plage-like bright areas which extend away from the active regions. The magnetic fields in these areas diffuse away into the quiet-Sun regions and are constrained by the network boundaries.

The internetwork (or inner network) may contain supergranules, which are convective regions about 30 Mm across with strong horizontal flows. The mean photospheric field in the internetwork region is of the order of 100-300 G. Solar active regions contain sunspots, which range in size from 1 to 50 Mm. Solar active region 10652 comprised many features and extended beyond this; this region produced many solar flares and had magnetic fields easily exceeding the normal range of 100-500 G.

e.g. see solar monitor AR10652


Given this variety of solar regions, it is recognised that the 3-minute and 5-minute modes behave in different ways in the network, internetwork, plage and faculae regions of the solar atmosphere. These differences have been summarised nicely in the tables presented by Khomenko et al. in reference 5 below. For each of these regions, the wave periodicities are considered at:
  • centre of magnetic elements
  • close surroundings
  • internetwork beyond the magnetic elements
For the faculae and plage oscillations they consider:
  • the centre of the magnetic elements
  • close surroundings and
  • in the halo areas
These are considered for the photosphere, chromosphere and corona. What is striking is the varied behaviour for different magnetic structures and the influence of reflecting layers, such as the transition region, on upward and downward propagation. In summary, for the network and internetwork regions, short-period (3-minute, 5-8 mHz) waves propagate from the photosphere to the chromosphere only in restricted areas of the network cell interiors; the spatial distribution of 3-minute chromospheric shocks is highly dependent on the local magnetic topology. The long-period (5-minute, 1.2-4 mHz) waves propagate efficiently to the chromosphere in close proximity to the magnetic network elements. These long-period network halos are most prominent in the photosphere, but are also present in the chromosphere, and are observed to be co-spatial with chromospheric "magnetic shadows" for 3-minute waves. Plage and faculae regions possess more complex magnetic structures and exhibit a more complex pattern. Observations show that the power of 5-minute oscillations increases significantly in the chromosphere; for the short-period (3-minute, 5-8 mHz) waves, for example, there is an enhancement both in the photosphere and in the chromosphere. These power enhancements are known as "halos" and have been widely reported.

Motivation

Before attempting to develop a model that we propose as a realistic representation of the solar atmosphere, it is necessary to establish that our modelling tools give consistent behaviour in idealised test cases, bridging our theoretical understanding and our computational tools.

In our earlier studies, reported in reference 9 and in this blog (http://solarwavetheory.blogspot.co.uk/search/label/pmode), we investigated the energy propagation into a model solar atmosphere using perturbed computational MHD for highly gravitationally stratified atmospheres. Simulation drivers were used in the model to represent p-mode oscillations with varying modes and periods. In this study we attempt to generate a more representative atmosphere by introducing uniform vertical magnetic fields. Simulations were run for different magnitudes of the magnetic field, and the energy propagation into the corona was examined.

For all the simulations we used a p-mode driver with a period of 300 s and mode (2,2). The 300 s driver was used because it corresponds to the well-known 5-minute mode, and the (2,2) mode was used because our earlier study demonstrated its effectiveness for energy propagation.
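
One common way to write such a driver is as a vertical-velocity perturbation that is separable in time and in the horizontal coordinates. The sketch below assumes this form, with an illustrative amplitude; it is not necessarily the exact driver used in the simulations.

# Hedged sketch of a (2,2), 300 s p-mode driver as a vertical-velocity
# perturbation; the amplitude and exact spatial profile are illustrative.
import numpy as np

def pmode_driver(x, y, t, Lx, Ly, period=300.0, amplitude=100.0, n=2, m=2):
    """Vertical velocity perturbation applied at the photospheric base."""
    return (amplitude
            * np.sin(2.0 * np.pi * t / period)
            * np.sin(n * np.pi * x / Lx)
            * np.sin(m * np.pi * y / Ly))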

The videos below show the solar plasma velocity in the vertical direction at different layers in the solar atmosphere. The first example shows the case with no magnetic field, in which we observe the pure acoustic modes of oscillation.


[Videos: vertical plasma velocity at different layers of the model atmosphere]
  • Zero magnetic field (the pure acoustic case).
  • Maximum vertical B-field of 50G at the centre of the simulation box.
  • Maximum vertical B-field of 75G at the centre of the simulation box.
  • Maximum vertical B-field of 100G at the centre of the simulation box.

For cases with a non-zero magnetic field we observe oscillation modes that are different in character from the purely acoustic mode. As the magnetic field is increased, the vertical motion of the plasma is enhanced.




The plot above compares the sections through the model at a time of 76s (i.e. 1 quarter of the time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively). The beta equal one isosurface is shown near the base of the model.




The plot above shows a vertical slice taken at the midpoint of the simulation box, it compares the energy flux at a time of 76s (i.e. 1 quarter of the time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively).



The plot above compares the sections through the model at a time of 150s (i.e. half of the time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively). The beta equal one isosurface is shown near the base of the model.






The plot above shows a vertical slice taken at the midpoint of the simulation box, it compares the energy flux at a time of 150s (i.e. half of the time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively).




The plot above compares the sections through the model at a time of 225s (i.e. three quarters of the time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively). The beta equal one isosurface is shown near the base of the model.





The plot above shows a vertical slice taken at the midpoint of the simulation box, it compares the energy flux at a time of 225s (i.e. three quarters of the time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively).





The plot above compares the sections through the model at a time of 330s (i.e. more than one time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively). The beta equal one isosurface is shown near the base of the model.


The plot above shows a vertical slice taken at the midpoint of the simulation box, it compares the energy flux at a time of 330s (i.e. over one time period) for the different field cases (i.e. 50G, 75G and 100G from left to right respectively).

As well as influencing the motion of the plasma, the field enhances the energy flux which is able to pass through the transition layer. After one period there is a reflection of energy from the top boundary.

The diagrams below show distance-time plots taken at different vertical slices through the model. The first shows a section taken through the middle of the box at 2Mm, the second a section taken at 1Mm, and the third a section taken at 0.5Mm. They show the different modes, including the purely acoustic modes for the 0G case and the magneto-acoustic modes for the non-zero B-field cases.

Distance time plot for section at 2Mm

Distance time plot for section at 1Mm

Distance time plot for section at 0.5Mm

From the distance-time plots we computed the slopes and determined propagation speeds; these are tabulated below. The first table gives speeds computed from the trailing edge (the trailing edge is on the left of the plot, i.e. the t=0 side). The results computed using the slope at the leading edge are shown in the second table.
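
A minimal sketch of the slope measurement is given below: the position of the wave front is located at each time step and a straight line is fitted, whose slope gives the propagation speed. The front-detection criterion (a simple velocity threshold) is an assumption for illustration; the leading- and trailing-edge measurements used for the tables may have been made differently.

# Illustrative estimate of a propagation speed from a distance-time plot:
# locate the wave front at each time step and fit a straight line to it.
import numpy as np

def propagation_speed(times, positions, vz, threshold):
    """times [s], positions [Mm], vz[t, x]: velocity on the slice; speed in km/s."""
    # Take the furthest position where |vz| exceeds the threshold as the
    # location of the leading front at each time step.
    front = np.array([positions[np.abs(v) > threshold].max()
                      if np.any(np.abs(v) > threshold) else np.nan
                      for v in vz])
    valid = ~np.isnan(front)
    slope, _ = np.polyfit(np.asarray(times)[valid], front[valid], 1)
    return slope * 1.0e3   # Mm/s -> km/s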

position   0G     50G    75G    100G
2Mm        12.6   96.5   47.7   25.2
1Mm        10.1   64.1   44.4   45.4
0.5Mm      8.7    45.4   37.8   32.3
Trailing Edge Result Table

position   0G     50G    75G    100G
2Mm        13.2   194    15.6   17.0
1Mm        13.8   181.6  18.4   17.2
0.5Mm      12.8   169.2  16.6   9.4
Leading Edge Result Table

What is particularly interesting in these results, especially those from the leading edge, is that the computed propagation speeds are higher for the reduced field value.


Alfven speed computed for different heights and sections for 50G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Alfven speed computed for different heights and sections for 75G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Alfven speed computed for different heights and sections for 100G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Fast mode speed computed for different heights and sections for 50G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Fast mode speed computed for different heights and sections for 75G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Fast mode speed computed for different heights and sections for 100G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Sound speed computed for different heights and sections for 50G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Sound speed computed for different heights and sections for 75G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Sound speed computed for different heights and sections for 100G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Slow mode speed computed for different heights and sections for 50G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Slow mode speed computed for different heights and sections for 75G (blue: 0.5Mm, green: 1Mm, red: 2Mm)

Slow mode speed computed for different heights and sections for 100G (blue: 0.5Mm, green: 1Mm, red: 2Mm)
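
For reference, the characteristic speeds plotted above can be computed from the local plasma state with the standard expressions sketched below (SI units); here the slow mode is represented by the cusp (tube) speed, and the analysis behind the figures may use the full angle-dependent expressions.

# Standard MHD characteristic speeds (SI units); a minimal sketch, not the
# analysis code used for the figures above.
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability [H m^-1]

def characteristic_speeds(rho, p, B, gamma=5.0 / 3.0):
    """Sound, Alfven, fast (perpendicular limit) and cusp (tube) speeds."""
    c_s = np.sqrt(gamma * p / rho)                 # sound speed
    v_a = B / np.sqrt(MU0 * rho)                   # Alfven speed
    v_fast = np.sqrt(c_s**2 + v_a**2)              # fast speed, k perpendicular to B
    c_t = c_s * v_a / np.sqrt(c_s**2 + v_a**2)     # cusp (tube) speed for the slow mode
    return c_s, v_a, v_fast, c_t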



References

  1. The Influence of the Magnetic Field on Running Penumbral Waves in the Solar Chromosphere http://adsabs.harvard.edu/abs/2013ApJ...779..168J
  2. Wave Damping Observed in Upwardly Propagating Sausage-mode Oscillations Contained within a Magnetic Pore, http://adsabs.harvard.edu/abs/2015ApJ...806..132G
  3. On the Source of Propagating Slow Magnetoacoustic Waves in Sunspots, http://adsabs.harvard.edu/abs/2015ApJ...812L..15K
  4. An Inside Look at Sunspot Oscillations with Higher Azimuthal Wavenumbers, http://adsabs.harvard.edu/abs/2017ApJ...842...59J
  5. Magnetohydrodynamic waves driven by p-modes
  6. Magnetohydrodynamic Waves in a Gravitationally Stratified Fluid
  7. The Frequency-dependent Damping of Slow Magnetoacoustic Waves in a Sunspot Umbral Atmosphere, http://adsabs.harvard.edu/abs/2017ApJ...847....5K
  8. High-frequency torsional Alfvén waves as an energy source for coronal heating, http://adsabs.harvard.edu/abs/2017NatSR...743147S
  9. Solar Atmosphere Wave Dynamics Generated by Solar Global Oscillating Eigenmodes, https://doi.org/10.1016/j.asr.2017.10.053