diff --git a/_files/template.md b/_files/template.md
new file mode 100644
index 0000000..3626c77
--- /dev/null
+++ b/_files/template.md
@@ -0,0 +1,87 @@
+---
+jupyter:
+ jupytext:
+ text_representation:
+ extension: .md
+ format_name: markdown
+ format_version: '1.3'
+ jupytext_version: 1.16.0
+ kernelspec:
+ display_name: Python 3 (ipykernel)
+ language: python
+ name: python3
+---
+
+# Descriptive Title of the Content of the Notebook
+---
+Header Section: include the following information.
+
+- **Description:** A template for writing a notebook tutorial on Sciserver.
+- **Level:** Beginner | Intermediate | Advanced.
+- **Data:** Describe what data, if any, will be used. If none, write: NA.
+- **Requirements:** Describe what is needed to run the notebooks. For example: "Run in the (heasoft) conda environment on Sciserver". Or "python packages: [`heasoftpy`, `astropy`, `numpy`]".
+- **Credit:** Who wrote the notebook and when.
+- **Support:** How to get help.
+- **Last verified to run:** (00/00/0000) When was this last tested.
+
+---
+
+
+## 1. Introduction
+Describe the content. It can contain plain text, bullets, and/or images as needed.
+Use `Markdown` when writing.
+
+The following are suggested subsections. Not all are needed:
+- Motivation / Science background.
+- Learning goals.
+- Details about the requirements, and on running the notebook outside Sciserver.
+- Type of outcome or end product.
+
+You may want to include the following section on how to run the notebook outside Sciserver.
+
+In this example, reprocessing the data is not required. Instead, the level 2 data products are sufficient. If you need to reprocess the data, the IXPE tools are available with `from heasoftpy import ixpe`.
+
+
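+## 2. Module Imports
+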
+```python
+import glob                      # for constructing file lists
+import matplotlib.pyplot as plt  # for plotting
+import heasoftpy as hsp          # HEASoft tools
+import xspec                     # PyXSPEC for spectral fitting
+```
-## Finding and exploring data
+## 3. Finding the Data
-All the heasarc data is mounted into the compute under /FTP/, so once we have the path to the data, we can directly access it without the need to download it.
+On Sciserver, all the HEASARC data is mounted locally under `/FTP/`, so once we have the path to the data, we can directly access it without the need to download it.
-For our exploratory data analysis, we will use an observation of the blazar Mrk 501 (ObsID 01004701).
+For our exploratory data analysis, we will use an observation of the blazar **Mrk 501** (ObsID 01004701).
-(For more information on how to locate datasets of interest, see the [data access notebook](data_access.ipynb).)
+You can also see the [Getting Started](getting-started.md), [Data Access](data-access.md) and [Finding and Downloading Data](data-find-download.md) tutorials for examples on how to find data.
```python
-paths_txt = """
-/FTP/ixpe/data/obs/01/01004701
-"""
-paths = paths_txt.split('\n')[1:-1]
+data_path = "/FTP/ixpe/data/obs/01/01004701"
```
Check the contents of this folder
@@ -44,26 +81,22 @@ It should contain the standard IXPE data files, which include:
For a complete description of data formats of the level 1, level 2 and calibration data products, see the support documentation on the [IXPE Website](https://heasarc.gsfc.nasa.gov/docs/ixpe/analysis/#supportdoc)
```python
-import glob
-glob.glob(f'{paths[0]}/*')
+glob.glob(f'{data_path}/*')
```
----
-## Analyzing The Data
+## 4. Exploring the Data
To analyze the data within the notebook, we use `heasoftpy`.
-In the folder for each observation, check for a README file. This file is included with a description of known issues (if any) with the processing for that observation.
+In the folder for each observation, check for a `README` file. This file is included with a description of known issues (if any) with the processing for that observation.
In this *IXPE* example, it is not necessary to reprocess the data. Instead, the level 2 data products can be analysed directly.
```python
-import heasoftpy as hsp
-
# set some input
-indir = paths[0]
+indir = data_path
obsid = indir.split('/')[-1]
-filelist = glob.glob(f'{paths[0]}/event_l2/*')
+# sort so the list order maps to detectors 1, 2, 3 below
+filelist = sorted(glob.glob(f'{indir}/event_l2/*'))
filelist
```
@@ -74,16 +107,15 @@ det1_fits = filelist[0]
det2_fits = filelist[1]
det3_fits = filelist[2]
-#print the file structure for event 1 dectector file
+# print the file structure for the detector 1 event file
out = hsp.fstruct(infile=det1_fits).stdout
print(out)
```
----
-## Extracting the spectro polarimetric data
+## 5. Extracting the Spectro-Polarimetric Data
-### Defining the source and background regions
+### 5.1 Defining the Source and Background Regions
To obtain the source and background spectra from the Level 2 files, we need to define a source region and a background region for the extraction. This can also be done using `ds9`.
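
As a minimal sketch, the two region files can be written directly in `ds9` format. The background annulus uses the coordinates from this notebook; the source circle and its 60" radius are illustrative assumptions:

```python
# write a hypothetical source region: a circle centered on Mrk 501
# (the 60" radius is an assumed value for illustration)
with open('src.reg', 'w') as f:
    f.write('circle(16:53:51.766,+39:45:44.41,60.000")')

# write the background region: the annulus used in this notebook
with open('bkg.reg', 'w') as f:
    f.write('annulus(16:53:51.766,+39:45:44.41,132.000",252.000")')
```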
@@ -101,7 +133,7 @@ f.write('annulus(16:53:51.766,+39:45:44.41,132.000",252.000")')
f.close()
```
-### Running the extractor tools
+### 5.2 Running the Extractor Tools
The `extractor` tool from FTOOLS can now be used to extract I, Q, and U spectra from IXPE Level 2
event lists as shown below.
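
A sketch of one such call for detector 1, reusing the error-checking pattern from this notebook (the parameter set here is partial and assumed; run `hsp.extractor?` for the full interface):

```python
# extract the detector 1 source spectrum using the region file defined above
out = hsp.extractor(filename=det1_fits, phafile='ixpe_det1_src_I.pha',
                    regionfile='src.reg', eventsout='NONE', imgfile='NONE',
                    xcolf='X', ycolf='Y', noprompt=True)
if out.returncode != 0:
    raise Exception('extractor for det1 failed!')
```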
@@ -165,21 +197,20 @@ if out.returncode != 0:
raise Exception('extractor for det3 failed!')
```
----
-### Obtaining the response files
+### 5.3 Obtaining the Response Files
For the I spectra, you will need to include the RMF (Response Matrix File), and
the ARF (Ancillary Response File).
-For the Q and U spectra, you will need to include the RMF and MRF (Modulation Response File). The MRF is defined as defined by the product of the energy-dependent modulation factor, $\mu$(E) and the ARF.
+For the Q and U spectra, you will need to include the RMF and MRF (Modulation Response File). The MRF is defined as the product of the energy-dependent modulation factor $\mu(E)$ and the ARF.
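+
+In other words, $\mathrm{MRF}(E) = \mu(E) \times \mathrm{ARF}(E)$.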
The location of the calibration files can be obtained through the `hsp.quzcif` tool. Type in `hsp.quzcif?` to get more information on this function.
Note that the output of `hsp.quzcif` gives the path to more than one file. This is because there are 3 sets of response files, corresponding to the different weighting schemes.
-For the 'NEFF' weighting, use 'alpha07_02'.
-For the 'SIMPLE' weighting, use 'alpha075simple_02'.
-For the 'UNWEIGHTED' version, use '20170101_02'.
+- For the 'NEFF' weighting, use 'alpha07_02'.
+- For the 'SIMPLE' weighting, use 'alpha075simple_02'.
+- For the 'UNWEIGHTED' version, use '20170101_02'.
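+
+For example, a sketch of fetching the DU1 RMF for the 'NEFF' weighting, mirroring the DU3 call shown further below (the `filter`, `date`, `time`, `expr`, and `codename` values are assumptions):
+
+```python
+# query CALDB for the DU1 response matrix and keep the NEFF ('alpha07_02') file
+res = hsp.quzcif(mission='ixpe', instrument='gpd', detector='DU1',
+                 filter='-', date='-', time='-', expr='-',
+                 codename='MATRIX')
+rmf1 = [x.split()[0] for x in res.output if 'alpha07_02' in x][0]
+```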
```python
# hsp.quzcif?
@@ -233,11 +264,10 @@ res = hsp.quzcif(mission='ixpe', instrument='gpd',detector='DU3',
mrf3 = [x.split()[0] for x in res.output if 'alpha075_02' in x][0]
```
----
-### Load data into PyXSPEC and start fitting
+
+### 5.4 Load Data into PyXSPEC and Start Fitting
```python
-import xspec
rmf_list = [rmf1,rmf2,rmf3]
mrf_list = [mrf1,mrf2,mrf3]
@@ -251,6 +281,6 @@ for (du, rmf_file, mrf_file, arf_file) in zip(du_list, rmf_list, mrf_list, arf_l
    #Load the I data
-    xspec.AllData("%i:%i ixpe_det%i_src_I.pha"%(du, du+x, du))
+    xspec.AllData(f"{du}:{du+x} ixpe_det{du}_src_I.pha")
s = xspec.AllData(du+x)
# #Load response and background files
@@ -318,16 +349,11 @@ xspec.AllModels.show()
xspec.Fit.perform()
```
----
-### Plotting the results
+### 5.5 Plotting the Results
-This is done through matplotlib.
+This is done through `matplotlib`.
```python
-import matplotlib.pyplot as plt
-
-
-%matplotlib inline
xspec.Plot.area=True
xspec.Plot.xAxis='keV'
xspec.Plot('lda')
@@ -366,8 +392,7 @@ ax.set_xlabel('Energy (keV)')
ax.set_ylabel(r'Polangle')
```
----
-## Interpreting the results from XSPEC
+## 6. Interpreting the Results from XSPEC
There are two parameters of interest in our example: the polarization fraction, A,
and the polarization angle, $\psi$. The XSPEC error (or uncertainty) command can be used
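
A minimal sketch of doing this with PyXSPEC (assuming A and $\psi$ are parameters 1 and 2 of the `polconst` model):

```python
# run the XSPEC error command on parameters 1 (A) and 2 (psi)
xspec.Fit.error("1 2")
m1 = xspec.AllModels(1)
print(m1.polconst.A.error, m1.polconst.psi.error)
```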
@@ -414,7 +439,7 @@ plt.xlabel('A')
plt.errorbar(m1.polconst.A.values[0],m1.polconst.psi.values[0],fmt='+')
```
-### Determining the flux and calculating MDP
+### 6.1 Determining the Flux and Calculating MDP
Note that the detection is deemed "highly probable" (confidence C > 99.9%) as
@@ -439,7 +464,7 @@ mean of exposure time of 97243 s gives an MDP99 of 5.70% meaning that, for an un
This is consistent with the highly probable detection deduced here of a polarization fraction of 7.45$\pm$1.8%.
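
For reference, MDP99 follows from the modulation factor, the source and background count rates, and the exposure time; a minimal sketch with hypothetical values (only the 97243 s exposure comes from the text):

```python
import numpy as np

mu = 0.3          # mean modulation factor, an assumed value
rate_src = 1.0    # source count rate (counts/s), an assumed value
rate_bkg = 0.1    # background count rate (counts/s), an assumed value
t_exp = 97243.0   # mean exposure time quoted above (s)

# standard MDP99 formula
mdp99 = 4.29 / (mu * rate_src) * np.sqrt((rate_src + rate_bkg) / t_exp)
print(f"MDP99 = {100 * mdp99:.2f}%")
```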
-## Additional Resources
+## 7. Additional Resources
Visit the IXPE [GOF Website](https://heasarcdev.gsfc.nasa.gov/docs/ixpe/analysis/) and the IXPE [Project Website at MSFC](https://ixpe.msfc.nasa.gov/for_scientists/index.html) for more resources.
diff --git a/nicer-example.md b/analysis-nicer-example.md
similarity index 62%
rename from nicer-example.md
rename to analysis-nicer-example.md
index 5576780..3a900f7 100644
--- a/nicer-example.md
+++ b/analysis-nicer-example.md
@@ -5,38 +5,82 @@ jupyter:
extension: .md
format_name: markdown
format_version: '1.3'
- jupytext_version: 1.15.2
+ jupytext_version: 1.16.0
kernelspec:
display_name: (heasoft)
language: python
name: heasoft
---
-# An Example Analysing NICER Data One Sciserver
+# Analysing NICER Data on Sciserver
+
+The background and response files are set in the header of each spectral file. So before reading a spectrum, we change directory to the location of the file so those files can be read correctly, then move back to the working directory.
+
+We also set the chatter parameter to 0 to reduce the printed text, given the large number of files we are reading.
+
+
+```python
+
+xspec.Xset.chatter = 0
+
+# other xspec settings
+xspec.Plot.area = True
+xspec.Plot.xAxis = "keV"
+xspec.Plot.background = True
+
+# save current working location
+cwd = os.getcwd()
+
+# number of spectra to read. We limit it to 500. Change as desired.
+nspec = 500
+
+# The spectra will be saved in a list
+spectra = []
+for file in filenames[:nspec]:
+ # clear out any previously loaded dataset
+ xspec.AllData.clear()
+ # move to the folder containing the spectrum before loading it
+    os.chdir(os.path.dirname(file))
+ spec = xspec.Spectrum(file)
+ os.chdir(cwd)
+
+ xspec.Plot("data")
+ spectra.append([xspec.Plot.x(), xspec.Plot.xErr(),
+ xspec.Plot.y(), xspec.Plot.yErr()])
+
+```
+
+```python
+# Now we plot the spectra
+
+fig = plt.figure(figsize=(10,6))
+for x,xerr,y,yerr in spectra:
+ plt.loglog(x, y, linewidth=0.2)
+plt.xlabel('Energy (keV)')
+plt.ylabel(r'counts cm$^{-2}$ s$^{-1}$ keV$^{-1}$')
+```
+
+You can at this stage start adding spectral models using `pyxspec`, or model the spectra in other ways, including Machine Learning modeling similar to the [Machine Learning Demo](model-rxte-ml.md).
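+
+For example, a minimal sketch of fitting a simple model to the last-loaded spectrum (the absorbed power law here is an illustrative choice, not part of the original tutorial):
+
+```python
+# fit an absorbed power law to the currently loaded spectrum
+model = xspec.Model("tbabs*powerlaw")
+xspec.Fit.perform()
+# photon index of the best-fit power law
+print(model.powerlaw.PhoIndex.values[0])
+```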
+
+If you prefer to use the Xspec built-in functionality, you can do so by plotting to a file (e.g. GIF as we show below).
+
+```python
+xspec.Plot.splashPage=None
+xspec.Plot.device='spectrum.gif/GIF'
+xspec.Plot.xLog = True
+xspec.Plot.yLog = True
+xspec.Plot.background = False
+xspec.Plot()
+xspec.Plot.device='/null'
+```
+
+```python
+from IPython.display import Image
+with open('spectrum.gif','rb') as f:
+ display(Image(data=f.read(), format='gif',width=500))
+```
+
diff --git a/data-access.md b/data-access.md
new file mode 100644
index 0000000..e2d6a88
--- /dev/null
+++ b/data-access.md
@@ -0,0 +1,203 @@
+---
+jupyter:
+ jupytext:
+ text_representation:
+ extension: .md
+ format_name: markdown
+ format_version: '1.3'
+ jupytext_version: 1.16.0
+ kernelspec:
+ display_name: (heasoft)
+ language: python
+ name: heasoft
+---
+
+# HEASARC Data Access on SciServer
+
+
+Please make sure the HEASARC data drive is mounted when initializing the Sciserver compute container.
+See details here.
+Also, make sure to run the notebooks using the (heasoft) kernel.
@@ -13,7 +13,6 @@ Please make sure the HEASARC data drive is mounted when initializing the sciserv
---
-
◈ [Getting Started](Getting-Started.ipynb): A quick guide on accessing the data and using [heasoftpy](https://github.com/HEASARC/heasoftpy) for analysis.
◈ [Data Access Tutorial](data_access.ipynb): Detailed examples of accessing HEASARC data holdings with the Virtual Observatory protocols using [pyvo](https://pyvo.readthedocs.io/en/latest/).
diff --git a/jdaviz-demo.md b/misc-jdaviz-demo.md
similarity index 69%
rename from jdaviz-demo.md
rename to misc-jdaviz-demo.md
index 527b99f..7acd97f 100644
--- a/jdaviz-demo.md
+++ b/misc-jdaviz-demo.md
@@ -5,35 +5,54 @@ jupyter:
extension: .md
format_name: markdown
format_version: '1.3'
- jupytext_version: 1.15.2
+ jupytext_version: 1.16.0
kernelspec:
display_name: (heasoft)
language: python
name: heasoft
---
+# A Demo for Using jdaviz on Sciserver
+
+
+- **Description:** A demo for using jdaviz for creating region files from an image during data analysis.
+- **Level:** Beginner.
+- **Data:** NuSTAR observation of **3C 382** (ObsID 60001084002).
+- **Requirements:** `heasoftpy`, `jdaviz`, `astropy`
+- **Credit:** Kavitha Arur (Jun 2023).
+- **Support:** Contact the [HEASARC helpdesk](https://heasarc.gsfc.nasa.gov/cgi-bin/Feedback).
+- **Last verified to run:** 02/01/2024.
+
+
+
-## An Demo for using jdaviz on Sciserver
+## 1. Introduction
[jdaviz](https://jdaviz.readthedocs.io/en/latest/) is a package of astronomical data analysis visualization tools based on the Jupyter platform. These GUI-based tools link data visualization and interactive analysis.
`jdaviz` includes several tools. Here, we will focus on using `Imviz`, which is a tool for visualization and quick-look analysis for 2D astronomical images, so it can be used to analyze images, and to create and modify region files such as those needed in many X-ray analysis pipelines.
-We will walk through the simple steps of using `Imviz` on sciserver. For more details on using the tool, please refer to the main [jdaviz site](https://jdaviz.readthedocs.io/en/latest/).
+We will walk through the simple steps of using `Imviz` on Sciserver. For more details on using the tool, please refer to the main [jdaviz site](https://jdaviz.readthedocs.io/en/latest/).
+
+
Running On Sciserver:
+When running this notebook inside Sciserver, make sure the HEASARC data drive is mounted when initializing the Sciserver compute container.
See details here.
+
+Also, this notebook requires heasoftpy and jdaviz, which are available in the (heasoft) conda environment. You should see (heasoft) at the top right of the notebook. If not, click there and select it.
----
+
Running Outside Sciserver:
+If running outside Sciserver, some changes will be needed, including:
+- Make sure heasoftpy and heasoft are installed (Download and Install heasoft).
+- Unlike on Sciserver, where the data is available locally, you will need to download the data to your machine.
+
-
-Say we are analyzing NuSTAR data of some point source and we want to extract the spectra. We typically need to either pass the source and background selection as RA and DEC positions along with selection region information such as the radius, or we can create the region files for the source and backgorund and pass those to the extraction pipeline. In this example, we will use the latter.
-For the purpose of this example, we will copy the cleaned event file for the FMPA detector from the archive. We will use observation `60001084002` of `3C 382`.
-
-Using [`xamin`](https://heasarc.gsfc.nasa.gov/xamin/) to search for NuSTAR observations of `3C 382`, we find that the data for this obsid is located in: `/FTP/nustar/data/obs/00/6//60001084002/`.
+
-First, we use the `extractor` tool from `heasoftpy` to extract an image from the event file
+## 2. Module Imports
+We need the following Python modules:
```python
# import heasoftpy to use for image extraction
@@ -41,10 +60,22 @@ import heasoftpy as hsp
# Imviz for working with the images
from jdaviz import Imviz
+
+# WCS is needed to handle image coordinates
from astropy.wcs import WCS
-%matplotlib inline
+
```
+## 3. Image Extraction
+
+Say we are analyzing NuSTAR data of some point source and we want to extract the spectra. We typically need to either pass the source and background selections as RA and DEC positions along with selection region information such as the radius, or create the region files for the source and background and pass those to the extraction pipeline. In this example, we will use the latter.
+
+For the purpose of this example, we will copy the cleaned event file for the FMPA detector from the archive. We will use observation `60001084002` of `3C 382`.
+
+Using [`xamin`](https://heasarc.gsfc.nasa.gov/xamin/) to search for NuSTAR observations of `3C 382`, we find that the data for this obsid is located in: `/FTP/nustar/data/obs/00/6//60001084002/`.
+
+First, we use the `extractor` tool from `heasoftpy` to extract an image from the event file.
+
```python
evt_file = '/FTP/nustar/data/obs/00/6//60001084002/event_cl/nu60001084002A01_cl.evt.gz'
@@ -55,6 +86,7 @@ inPars = {
'phafile' : 'NONE',
'xcolf' : 'X',
'ycolf' : 'Y',
+ # noprompt is set so the tool does not prompt for additional parameters
'noprompt' : True
}
@@ -62,6 +94,8 @@ inPars = {
res = hsp.extractor(**inPars)
```
+## 4. Create Source and Background Regions
+
After the image is extracted, we use `Imviz` to load the image, so we can create the region files.
We now proceed by creating a source and background region.
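
A minimal sketch of loading the extracted image (named `nu_image.fits`, as in the WCS step below) into `Imviz`:

```python
# create an Imviz instance, load the extracted image, and display the viewer
imviz = Imviz()
imviz.load_data('nu_image.fits')
imviz.show()
```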
@@ -102,6 +136,8 @@ regions = imviz.get_interactive_regions()
print(regions)
```
+### 4.1 Save the Regions in Image Units
+
```python
# The following writes the region files in image units
regions['Subset 1'].write('source.reg', format='ds9', overwrite=True)
@@ -112,6 +148,8 @@ regions['Subset 2'].write('background.reg', format='ds9', overwrite=True)
!cat background.reg
```
+### 4.2 Save the Regions in WCS Coordinates
+
```python
# To save the region files in WCS coordinates, we can use WCS from astropy
wcs = WCS('nu_image.fits')
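# a sketch of the next step (an assumption, not shown in this hunk): convert
# each interactive pixel region to a sky region with this wcs and write it in
# WCS coordinates, using the astropy `regions` package's PixelRegion.to_sky API
regions['Subset 1'].to_sky(wcs).write('source_wcs.reg', format='ds9', overwrite=True)
regions['Subset 2'].to_sky(wcs).write('background_wcs.reg', format='ds9', overwrite=True)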
diff --git a/rxte_example_lightcurves.md b/rxte_example_lightcurves.md
deleted file mode 100644
index 3b329b0..0000000
--- a/rxte_example_lightcurves.md
+++ /dev/null
@@ -1,300 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.15.2
- kernelspec:
- display_name: (heasoft)
- language: python
- name: heasoft
----
-
-# RXTE example
-
-This notebook demonstrates an analysis of 16 years of RXTE data, which would be difficult outside of SciServer. We extract all of the standard product lightcurves, but then we decide that we need different channel boundaries. So we re-exctract light curves following the RXTE documentation and using the heasoftpy wrappers.
-
-```python
-import sys,os, shutil
-import pyvo as vo
-import numpy as np
-from astropy.io import fits
-import matplotlib.pyplot as plt
-%matplotlib inline
-import astropy.io.fits as pyfits
-import datetime
-
-# Ignore unimportant warnings
-import warnings
-warnings.filterwarnings('ignore', '.*Unknown element mirrorURL.*',
- vo.utils.xml.elements.UnknownElementWarning)
-```
-
-```python
-import subprocess as subp
-from packaging import version
-import importlib
-import heasoftpy as hsp
-print(hsp.__file__)
-```
-
-### Step 1: find the data
-
-We can use the Virtual Observatory interfaces to the HEASARC to find the data we're interested in. Specifically, we want to look at the observation tables. So first we get a list of all the tables HEASARC serves and then look for the ones related to RXTE:
-
-```python
-tap_services=vo.regsearch(servicetype='tap',keywords=['heasarc'])
-heasarc_tables=tap_services[0].service.tables
-```
-
-```python
-for tablename in heasarc_tables.keys():
- if "xte" in tablename:
- print(" {:20s} {}".format(tablename,heasarc_tables[tablename].description))
-
-```
-
-The "xtemaster" catalog is the one that we're interested in.
-
-Let's see what this table has in it. Alternatively, we can google it and find the same information here:
-
-https://heasarc.gsfc.nasa.gov/W3Browse/all/xtemaster.html
-
-
-```python
-for c in heasarc_tables['xtemaster'].columns:
- print("{:20s} {}".format(c.name,c.description))
-```
-
-We're interested in Eta Carinae, and we want to get the RXTE cycle, proposal, and observation ID etc. for every observation it took of this source based on its position. (Just in case the name has been entered differently, which can happen.) This constructs a query in the ADQL language to select the columns (target_name, cycle, prnb, obsid, time, exposure, ra, dec) where the point defined by the observation's RA and DEC lies inside a circle defined by our chosen source position. The results will be sorted by time. See the [NAVO website](https://heasarc.gsfc.nasa.gov/vo/summary/python.html) for more information on how to use these services with python and how to construct ADQL queries for catalog searches.
-
-```python
-# Get the coordinate for Eta Car
-import astropy.coordinates as coord
-pos=coord.SkyCoord.from_name("eta car")
-query="""SELECT target_name, cycle, prnb, obsid, time, exposure, ra, dec
- FROM public.xtemaster as cat
- where
- contains(point('ICRS',cat.ra,cat.dec),circle('ICRS',{},{},0.1))=1
- and
- cat.exposure > 0 order by cat.time
- """.format(pos.ra.deg, pos.dec.deg)
-```
-
-```python
-results=tap_services[0].search(query).to_table()
-results
-```
-
-Let's just see how long these observations are:
-
-```python
-plt.plot(results['time'],results['exposure'])
-```
-
-### Step 2: combine standard products and plot
-
-Let's collect all the standard product light curves for RXTE. (These are described on the [RXTE analysis pages](https://heasarc.gsfc.nasa.gov/docs/xte/recipes/cook_book.html).)
-
-```python
-## Need cycle number as well, since after AO9,
-## no longer 1st digit of proposal number
-ids=np.unique( results['cycle','prnb','obsid','time'])
-ids.sort(order='time')
-ids
-```
-
-```python
-## Construct a file list.
-## In this case, the name changes
-import glob
-# Though Jupyter Lab container
-rootdir="/FTP"
-# Through batch it shows up differently:
-#rootdir="/home/idies/workspace/HEASARC\ data"
-rxtedata="rxte/data/archive"
-filenames=[]
-for (k,val) in enumerate(ids['obsid']):
- fname="{}/{}/AO{}/P{}/{}/stdprod/xp{}_n2a.lc.gz".format(
- rootdir,
- rxtedata,
- ids['cycle'][k],
- ids['prnb'][k],
- ids['obsid'][k],
- ids['obsid'][k].replace('-',''))
- #print(fname)
- f=glob.glob(fname)
- if (len(f) > 0):
- filenames.append(f[0])
-print("Found {} out of {} files".format(len(filenames),len(ids)))
-```
-
-Let's collect them all into one light curve:
-
-```python
-hdul = fits.open(filenames.pop(0))
-data = hdul[1].data
-cnt=0
-lcs=[]
-for f in filenames:
- if cnt % 100 == 0:
- print("On file {}".format(f))
- hdul = fits.open(f)
- d = hdul[1].data
- data=np.hstack([data,d])
- plt.plot(d['TIME'],d['RATE'])
- lcs.append(d)
- cnt+=1
-```
-
-```python
-hdul = fits.open(filenames.pop(0))
-data = hdul[1].data
-cnt=0
-for f in filenames:
- hdul = fits.open(f)
- d = hdul[1].data
- data=np.hstack([data,d])
- if cnt % 100 == 0:
- print("On file {}".format(f))
- print(" adding {} rows from TSTART={}".format(d.shape[0],hdul[1].header['TSTARTI']))
- cnt+=1
-## The above LCs are merged per proposal. You can see that some proposals
-## had data added later, after other proposals, so you need to sort:
-data.sort(order='TIME')
-
-```
-
-```python
-plt.plot(data['TIME'],data['RATE'])
-
-```
-
-### Step 3: Re-extract a light-curve
-
-Now we go out and read about how to analyze RXTE data, and we decide that we need different channel boundaries than were used in the standard products. We can write a little function that does the RXTE data analysis steps for every observation to extract a lightcurve and read it into memory to recreate the above dataset. This function may look complicated, but it only calls three RXTE executables:
-
-* pcaprepobsid
-* maketime
-* pcaextlc2
-
-which extracts the Standard mode 2 data (not to be confused with the "standard products") for the channels you're interested in. It has a bit of error checking that'll help when launching a long job.
-
-Note that each call to this function will take 10-20 seconds to complete. So when we run a whole proposal, we'll have to wait a while.
-
-```python
-
-class XlcError( Exception ):
- pass
-
-
-# Define a function that, given an ObsID, does the rxte light curve extraction
-def rxte_lc( obsid=None, ao=None , chmin=None, chmax=None, cleanup=True):
- rootdir="/home/idies/workspace/headata/FTP"
- rxtedata="rxte/data/archive"
- obsdir="{}/{}/AO{}/P{}/{}/".format(
- rootdir,
- rxtedata,
- ao,
- obsid[0:5],
- obsid
- )
- #print("Looking for obsdir={}".format(obsdir))
- outdir="tmp.{}".format(obsid)
- if (not os.path.isdir(outdir)):
- os.mkdir(outdir)
-
- if cleanup and os.path.isdir(outdir):
- shutil.rmtree(outdir,ignore_errors=True)
-
- try:
- #print("Running pcaprepobsid")
- result=hsp.pcaprepobsid(indir=obsdir,
- outdir=outdir
- )
- print(result.stdout)
- # This one doesn't seem to return correctly, so this doesn't trap!
- if result.returncode != 0:
- raise XlcError("pcaprepobsid returned status {}".format(result.returncode))
- except:
- raise
- # Recommended filter from RTE Cookbook pages:
- filt_expr = "(ELV > 4) && (OFFSET < 0.1) && (NUM_PCU_ON > 0) && .NOT. ISNULL(ELV) && (NUM_PCU_ON < 6)"
- try:
- filt_file=glob.glob(outdir+"/FP_*.xfl")[0]
- except:
- raise XlcError("pcaprepobsid doesn't seem to have made a filter file!")
-
- try:
- #print("Running maketime")
- result=hsp.maketime(infile=filt_file,
- outfile=os.path.join(outdir,'rxte_example.gti'),
- expr=filt_expr, name='NAME',
- value='VALUE',
- time='TIME',
- compact='NO')
- #print(result.stdout)
- if result.returncode != 0:
- raise XlcError("maketime returned status {}".format(result.returncode))
- except:
- raise
-
- try:
- #print("Running pcaextlc2")
- result=hsp.pcaextlc2(src_infile="@{}/FP_dtstd2.lis".format(outdir),
- bkg_infile="@{}/FP_dtbkg2.lis".format(outdir),
- outfile=os.path.join(outdir,'rxte_example.lc'),
- gtiandfile=os.path.join(outdir,'rxte_example.gti'),
- chmin=chmin,
- chmax=chmax,
- pculist='ALL', layerlist='ALL', binsz=16)
- #print(result.stdout)
- if result.returncode != 0:
- raise XlcError("pcaextlc2 returned status {}".format(result.returncode))
- except:
- raise
-
- with pyfits.open(os.path.join(outdir,'rxte_example.lc'),memmap=False) as hdul:
- lc=hdul[1].data
- if cleanup:
- shutil.rmtree(outdir,ignore_errors=True)
- return lc
-
-```
-
-Let's look just at a small part of the time range, and look at only the first few for speed:
-
-```python
-break_at=10
-for (k,val) in enumerate(ids):
- if k>break_at: break
- l=rxte_lc(ao=val['cycle'], obsid=val['obsid'], chmin="5",chmax="10")
- try:
- lc=np.hstack([lc,l])
- except:
- lc=l
-
-```
-
-```python
-# Because the obsids won't necessarily be processed in time order
-lc.sort(order='TIME')
-```
-
-```python
-plt.plot(lc['TIME'],lc['RATE'])
-```
-
-```python
-hdu = pyfits.BinTableHDU(lc)
-pyfits.HDUList([pyfits.PrimaryHDU(),hdu]).writeto('eta_car.lc',overwrite=True)
-
-```
-
-You could then remove the break in the above loop and submit this job to the [batch queue](https://apps.sciserver.org/compute/jobs).
-
-```python
-
-```
diff --git a/rxte_example_spectral.md b/rxte_example_spectral.md
deleted file mode 100644
index a370f9e..0000000
--- a/rxte_example_spectral.md
+++ /dev/null
@@ -1,164 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.15.2
- kernelspec:
- display_name: (heasoft)
- language: python
- name: heasoft
----
-
-# A simple RXTE spectral extraction example
-
-Here we just show how to get a list of RXTE observations of a given source, construct a file list to the standard products, and extract spectra in physical units using PyXspec.
-
-```python
-import sys,os,glob
-import pyvo as vo
-import numpy as np
-import matplotlib.pyplot as plt
-%matplotlib inline
-import astropy.io.fits as fits
-import xspec
-xspec.Xset.allowPrompting = False
-# Ignore unimportant warnings
-import warnings
-warnings.filterwarnings('ignore', '.*Unknown element mirrorURL.*',
- vo.utils.xml.elements.UnknownElementWarning)
-```
-
-First query the HEASARC for its catalogs related to XTE. For more on using PyVO to find observations, see [NAVO's collection of notebook tutorials](https://nasa-navo.github.io/navo-workshop/).
-
-```python
-# First query the Registry to get the HEASARC TAP service.
-tap_services=vo.regsearch(servicetype='tap',keywords=['heasarc'])
-# Then query that service for the names of the tables it serves.
-heasarc_tables=tap_services[0].service.tables
-
-for tablename in heasarc_tables.keys():
- if "xte" in tablename:
- print(" {:20s} {}".format(tablename,heasarc_tables[tablename].description))
-
-```
-
-Query the xtemaster catalog for observations of Eta Car
-
-```python
-# Get the coordinate for Eta Car
-import astropy.coordinates as coord
-pos=coord.SkyCoord.from_name("eta car")
-query="""SELECT target_name, cycle, prnb, obsid, time, exposure, ra, dec
- FROM public.xtemaster as cat
- where
- contains(point('ICRS',cat.ra,cat.dec),circle('ICRS',{},{},0.1))=1
- and
- cat.exposure > 0 order by cat.time
- """.format(pos.ra.deg, pos.dec.deg)
-results=tap_services[0].search(query).to_table()
-results
-```
-
-```python
-## Need cycle number as well, since after AO9,
-## no longer 1st digit of proposal number
-ids=np.unique( results['cycle','prnb','obsid'])
-ids
-```
-
-At this point, you need to construct a file list. There are a number of ways to do this, but this one is just using our knowledge of how the RXTE archive is structured. This code block limits the results to a particular proposal ID to make this quick, but you could remove that restriction and wait longer:
-
-```python
-## Construct a file list.
-rootdir="/FTP"
-rxtedata="rxte/data/archive"
-filenames=[]
-for (k,val) in enumerate(ids['obsid']):
- # Skip some for a quicker test case
- if ids['prnb'][k]!=80001:
- continue
- fname="{}/{}/AO{}/P{}/{}/stdprod/xp{}_s2.pha.gz".format(
- rootdir,
- rxtedata,
- ids['cycle'][k],
- ids['prnb'][k],
- ids['obsid'][k],
- ids['obsid'][k].replace('-',''))
- #print(fname)
- f=glob.glob(fname)
- if (len(f) > 0):
- filenames.append(f[0])
-print("Found {} out of {} files".format(len(filenames),len(ids)))
-```
-
-```python
-print(type(ids['obsid'][k]))
-print(type('-'))
-import inspect,astropy
-inspect.getfile(astropy)
-```
-
-Now we have to use our knowledge of [PyXspec](https://heasarc.gsfc.nasa.gov/xanadu/xspec/python/html/quick.html) to convert the spectra into physical units. Then we use Matplotlib to plot, since the Xspec plotter is not available here.
-
-(Note that there will be errors when the code tries to read in the background and response files from the working directory. We then specify them explicitly.)
-
-```python
-dataset=[]
-xref=np.arange(0.,50.,1)
-for f in filenames[0:500]:
- xspec.AllData.clear() # clear out any previously loaded dataset
- ## Ignore the errors it will print about being unable
- ## to find response or background
- s = xspec.Spectrum(f)
- ## Then specify with the correct path.
- s.background=f.replace("_s2.pha","_b2.pha")
- s.response=f.replace("_s2.pha",".rsp")
- xspec.Plot.area=True
- xspec.Plot.xAxis = "keV"
- xspec.Plot.add = True
- xspec.Plot("data")
- xspec.Plot.background = True
- xVals = xspec.Plot.x()
- yVals = xspec.Plot.y()
- yref= np.interp(xref, xVals, yVals)
- dataset.append( yref )
-
-```
-
-```python
-fig, ax = plt.subplots(figsize=(10,6))
-
-for s in dataset:
- ax.plot(xref,s)
-ax.set_xlabel('Energy (keV)')
-ax.set_ylabel(r'counts/cm$^2$/s/keV')
-ax.set_xscale("log")
-ax.set_yscale("log")
-```
-
-And now you can put these into your favorite spectral analysis program like [PyXspec](https://heasarc.gsfc.nasa.gov/xanadu/xspec/python/html/quick.html) or into an AI/ML analysis following [our lightcurve example](rxte_example_lightcurves.ipynb).
-
-If you prefer to use the Xspec plot routines, you can do so but only using an output file. It cannot open a window through a notebook running on SciServer. So here's an example using a GIF output file and then displaying the result in the notebook:
-
-```python
-xspec.Plot.splashPage=None
-xspec.Plot.device='spectrum.gif/GIF'
-xspec.Plot.xLog = True
-xspec.Plot.yLog = True
-xspec.Plot.background = False
-xspec.Plot()
-xspec.Plot.device='/null'
-```
-
-```python
-from IPython.display import Image
-with open('spectrum.gif','rb') as f:
- display(Image(data=f.read(), format='gif',width=500))
-```
-
-```python
-
-```
diff --git a/source_list_querying.md b/source_list_querying.md
deleted file mode 100644
index 2fc37ec..0000000
--- a/source_list_querying.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.15.2
- kernelspec:
- display_name: (heasoft)
- language: python
- name: heasoft
----
-
-# Example of a large catalog exploration with a list of sources
-
-In this example, a user has a catalog of several thousand sources they are interested in. They'd like to find out if they've been observed by HEASARC missions and what the total exposure each sources has for that mission. This can be done in a variety of inefficient ways such as writing a script to call one of the HEASARC APIs for each of the sources. But we encourage users to discover the power of querying databases with SQL.
-
-This tutorial is a HEASARC-specific example of a more general workflow querying astronomy databases with Virtual Observatory protocols as described in our
NASA Astronomical Virtual Observatories (NAVO)
workshop notebook.
-
-The step in this tutorial are:
-1. Prepare the input source list as VO table in XML format.
-2. Find the list of HEASARC missions to be queried.
-3. Submit and SQL query.
-
-```python
-# suppress some specific warnings that are not important
-import warnings
-warnings.filterwarnings("ignore", module="astropy.io.votable.*")
-warnings.filterwarnings("ignore", module="pyvo.utils.xml.*")
-warnings.filterwarnings("ignore", module="astropy.units.format.vounit")
-
-## Generic VO access routines
-import pyvo as vo
-from astropy.table import Table
-from astropy.io.votable import from_table, writeto
-from astropy.io import ascii
-```
-
-As described in the NAVO workshop notebooks linked above, the first step is to create an object that represents a tool to query the HEASARC catalogs.
-
-```python
-# Get HEASARC's TAP service:
-tap_services = vo.regsearch(servicetype='tap',keywords=['heasarc'])
-for s in tap_services:
- if 'heasarc' in s.ivoid:
- heasarc = s
- break
-heasarc.describe()
-```
-
----
-## 1. Prepare the input source list as VO table in XML format:
-
-VO protocols use the VOTable standard for tables, which is both powerful and complicated. But astropy has easy tools to convert to and from this XML format.
-
-Typically, you may start from a list of sources you want to query. In this tutorial, we first create this list in comma-separated value (CSV) format to be used as our input. The file `inlist_10k.csv` contains a list of 10000 RA and DEC values.
-
-We then create a VOTable version that can be used in our query below.
-
-```python
-## This is how I generated my input list in the first place. Comment out and replace with your own:
-result = heasarc.service.run_sync("select ra, dec from xray limit 10000")
-ascii.write(result.to_table(), "inlist_10k.csv", overwrite=True, format='csv')
-
-## Input a list of sources in CSV format
-input_table = Table.read("inlist_10k.csv",format="csv")
-
-# Convert to VOTable
-votable = from_table(input_table)
-writeto(votable,"longlist.xml")
-```
-
-## 2. Find the list of HEASARC missions to be queried.
-
-
-Note that you may also wish to generate a list of all of our master catalogs. In the case of the HEASARC, we have of order a thousand different catalogs, most of which are scientific results rather than mission observation tables. So you don't want to print all of our catalogs but a selection of them. For instance, you can do it this way:
-
-```python
-master_catalogs=[]
-for c in heasarc.service.tables:
- if "master" in c.name or "mastr" in c.name:
- master_catalogs.append(c.name)
-print(master_catalogs)
-```
-
-## 3. Submit and SQL query.
-
-The next step is to construct a query in the SQL language, specifically a dialect created for astronomical queries, the ADQL. This is also described briefly in the
workshop notebook among other places.
-
-Note also that each service can show you examples that its curators have put together to demonstrate, e.g.:
-
-```python
-for e in heasarc.service.examples:
- print(e['QUERY'])
-```
-
-
-
-For our use case, we need to do something a bit more complicated involving a *cross-match* between our source list and the HEASARC master catalog for a given mission. While it may be possible to construct an even more complicated query that does all of the HEASARC master catalogs in one go, that may overload the servers, as does repeating the same query 10 thousand times for individual sources. The recommended approch is to do a 10 thousand sources cross match in a few dozen queries to the master catalogs.
-
-So let's start with the Chandra master catalog `chanmaster`. You can then repeat the exercise for all of the others.
-
-For a cross-match, you can simply upload your catalog with your query as an XML file, and at that point, you tell the service what to name it. In this case, we call it `mytable`. Then in your SQL, the table name is `tap_upload.mytable` and it otherwise behaves like any other table. Our list of sources had two columns named RA and DEC, so they are likewise refered to that way in the SQL.
-
-To compare your source list coordinates with the coordinates in the given master observation table, you an use the special `ADQL` functions `POINT`, `CIRCLE`, and `CONTAINS`, which do basically what they sound like. The query below matches the input source list against `chanmaster` based on a radius of 0.01 degrees. For each source, it sums all `chanmaster` observations' exposures to give the total exposure and counts how many observations that was:
-
-```python
-# Construct a query to chanmaster to total the exposures
-# for all of the uploaded sources in the list:
-query="""
- SELECT cat.name, cat.ra, cat.dec, sum(cat.exposure) as total_exposure, count(*) as num_obs
- FROM chanmaster cat, tap_upload.mytable mt
- WHERE
- CONTAINS(POINT('ICRS',cat.ra,cat.dec),CIRCLE('ICRS',mt.ra,mt.dec,0.01))=1
- GROUP BY cat.name, cat.ra, cat.dec """
-```
-
-```python
-# Send the query to the HEASARC server:
-result = heasarc.service.run_sync(query, uploads={'mytable': 'longlist.xml'})
-# Convert the result to an Astropy Table
-mytable = result.to_table()
-mytable
-```
-
-The above shows that of our 10k sources, roughly a dozen (since the catalogs are updated daily and the row order may change, these numbers will change between runs of this notebook) were observed anywhere from once to over a thousand times.
-
-Lastly, you can convert the results back into CSV if you wish:
-
-```python
-ascii.write(mytable, "results_chanmaster.csv", overwrite=True, format='csv')
-```
-
-
-Note that sources with slightly different coordinates in the catalogs are summed separately here. If you want to group by the **average** `RA` and `DEC`, the query can be modified to the following, which will average the RA and DEC values that are slightly different for the same source.
-
-```python
-query="""
- SELECT cat.name, AVG(cat.ra) as avg_ra, AVG(cat.dec) as avg_dec, sum(cat.exposure) as total_exposure, count(*) as num_obs
- FROM chanmaster cat, tap_upload.mytable mt
- WHERE
- CONTAINS(POINT('ICRS',cat.ra,cat.dec),CIRCLE('ICRS',mt.ra,mt.dec,0.01))=1
- GROUP BY cat.name """
-
-```
-
-
-```python
-
-```