From ada0f0ce9c61b48703ff0d77af32b46666b6da44 Mon Sep 17 00:00:00 2001 From: Sauli Joro Date: Tue, 26 Mar 2019 09:06:48 +0000 Subject: [PATCH 1/7] Modify exercises to use new reader names. Update installation instructions. --- eum-2019-03/0-Installation.ipynb | 97 +++++ eum-2019-03/1-start-msg.ipynb | 246 +++++++++++ eum-2019-03/10-start-sentinel-3.ipynb | 68 +++ eum-2019-03/2-start-metop.ipynb | 221 ++++++++++ eum-2019-03/3-pyorbital-passes.ipynb | 69 ++++ eum-2019-03/4-custom-areas.ipynb | 104 +++++ eum-2019-03/5-custom-reader.ipynb | 390 ++++++++++++++++++ .../6-Metop_and_MSG_on_the_same_grid.ipynb | 167 ++++++++ eum-2019-03/7-A_Time_Loop_Animation.ipynb | 102 +++++ eum-2019-03/8-Saving_to_netcdf.ipynb | 32 ++ eum-2019-03/9-Computing_statistics.ipynb | 161 ++++++++ eum-2019-03/sample_data/.keep | 0 12 files changed, 1657 insertions(+) create mode 100644 eum-2019-03/0-Installation.ipynb create mode 100644 eum-2019-03/1-start-msg.ipynb create mode 100644 eum-2019-03/10-start-sentinel-3.ipynb create mode 100644 eum-2019-03/2-start-metop.ipynb create mode 100644 eum-2019-03/3-pyorbital-passes.ipynb create mode 100644 eum-2019-03/4-custom-areas.ipynb create mode 100644 eum-2019-03/5-custom-reader.ipynb create mode 100644 eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb create mode 100644 eum-2019-03/7-A_Time_Loop_Animation.ipynb create mode 100644 eum-2019-03/8-Saving_to_netcdf.ipynb create mode 100644 eum-2019-03/9-Computing_statistics.ipynb create mode 100644 eum-2019-03/sample_data/.keep diff --git a/eum-2019-03/0-Installation.ipynb b/eum-2019-03/0-Installation.ipynb new file mode 100644 index 0000000..8ff4bff --- /dev/null +++ b/eum-2019-03/0-Installation.ipynb @@ -0,0 +1,97 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laptop configuration for the training\n", + "\n", + "The training will be conducted using X2Go-servers available in the TCE. 
Instructions on how to install X2Go on your laptop can be found in DM Tool document `EUM/RSP/VWG/18/1018393`.\n", + "\n", + "1. You should have X2Go already installed on your laptop, so go ahead and start it up using your TCE credentials\n", + "2. Open a new console from the Start menu -> System Tools -> Konsole (font size can be made bigger with Ctrl +)\n", + "\n", + "\n", + "## Installation and use of `conda`\n", + "\n", + "### Installation of `miniconda3`\n", + "\n", + "Type the following commands into the terminal.\n", + "\n", + "1. `wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh`\n", + "2. `bash ~/miniconda.sh -b -p $HOME/miniconda3`\n", + "3. `echo 'export PATH=\"$PATH:$HOME/miniconda3/bin\"' >> ~/.profile.local`\n", + "4. `source ~/.profile.local`\n", + "\n", + "### Copy a ready-made `conda` environment for the training\n", + "\n", + "A ready-made environment with all the necessary packages is available in `/tcenas/scratch/pytroll`. The name of the copied environment can be chosen freely; for the sake of simplicity we will use the name `troll` for this training. Copying may take a few minutes...\n", + "\n", + "1. `cd ~/miniconda3/envs`\n", + "2. `cp -R /tcenas/scratch/pytroll/troll_env ./troll`\n", + "\n", + "### Activate the `troll` environment\n", + "\n", + "1. `source activate troll`\n", + "\n", + "You should have an indication in your terminal prompt that the conda environment is active, e.g.:\n", + "`(troll) joro@x2g13:~ $`. When the `conda` environment is active, all installations from this point on will go only into this specific environment. The environment can be deactivated with the `source deactivate` command.\n", + "\n", + "\n", + "## Using `jupyter`-notebook\n", + "\n", + "### Copy the exercises\n", + "\n", + "1. `mkdir ~/pytroll_training`\n", + "2. `cp /tcenas/scratch/pytroll/exercises/* ~/pytroll_training`\n", + "3. 
`cd ~/pytroll_training`\n", + "\n", + "### Starting headless `jupyter notebook` server\n", + "In order to get the best performance for the exercises, we'll run the `jupyter notebook` server headless, i.e., the notebook runs on the X2Go-server, and you connect to it from your own laptop.\n", + "\n", + "Find out the IP-address of your server, and start `jupyter notebook` with the `--ip` and `--no-browser` options\n", + "1. `/sbin/ifconfig`\n", + "2. `jupyter notebook --ip= --no-browser`\n", + "\n", + "Copy and paste the given URL into your laptop's browser\n", + "\n", + "## Information about conda installations\n", + "\n", + "Though further `conda` installations are not necessary for this training, any missing modules and packages can be installed into your environment. Usually it's best to search for the package, e.g., \"conda install pycoast\", and follow the instructions. In the case of `pycoast`, the installation would be\n", + "\n", + "1. `conda install -c conda-forge pycoast`\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import satpy\n", + "print(satpy.__file__)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.2" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/1-start-msg.ipynb b/eum-2019-03/1-start-msg.ipynb new file mode 100644 index 0000000..5747fc0 --- /dev/null +++ b/eum-2019-03/1-start-msg.ipynb @@ -0,0 +1,246 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Getting started with MSG level 1.5 data, native format" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Summary\n", + "In this tutorial, 
we will read MSG data and display a few composites and channels, in satellite projection and on predefined areas." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Installation and setup\n", + "If you went through the installation procedure described in tutorial 0, you should be all set up :)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Loading some data\n", + "\n", + "The first step is to find the files to load, and to create a `Scene` instance" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "from satpy import Scene, find_files_and_readers\n", + "\n", + "files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex1/',\n", + " reader='seviri_l1b_native')\n", + "\n", + "scn = Scene(filenames=files)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To list the available datasets, run the `available_dataset_names` method" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "scn.available_dataset_names()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Then we decide on a composite to load and display it on screen. To get the list of available composites, use the `available_composite_names` method." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(scn.available_composite_names())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "\n", + "composite = 'natural_color'\n", + "scn.load([composite])\n", + "scn.show(composite)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The loaded composite is a `DataArray` from the `xarray` library, containing the data, the projection coordinates of the pixels and some metadata."
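To see what such an object looks like without loading satellite data, here is a standalone sketch using plain `xarray`; the shape, coordinate values and attributes below are invented for illustration and are much smaller than a real SEVIRI composite.

```python
import numpy as np
import xarray as xr

# A toy stand-in for a satpy composite: 3 bands of 2x2 pixels,
# with projection coordinates and a couple of metadata attributes.
composite = xr.DataArray(
    np.zeros((3, 2, 2)),
    dims=('bands', 'y', 'x'),
    coords={'bands': ['R', 'G', 'B'],
            'y': [5568748.0, 5565748.0],
            'x': [-5568748.0, -5565748.0]},
    attrs={'sensor': 'seviri', 'units': '%'},
)

print(composite.dims)             # the dimension names
print(composite.attrs['sensor'])  # metadata travels with the data
print(composite.sel(bands='R'))   # select a single band by coordinate
```

Real composites returned by `scn[composite]` carry richer metadata (area, start time, calibration), but they are inspected the same way.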
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "scn[composite]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 1.1\n", + "Load and display other composite, like `fog` and `airmass`" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Resampling the data\n", + "\n", + "A simple call to the `resample` method is enough to resample the data to another area. Here, we use the predefined area `euro4`, but we will see in another tutorial how to customize the areas. Some other predifined areas are `euron1`, `eurol`, `germ`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn = scn.resample('euro4', radius_of_influence=20000)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn.show(composite)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Saving the image to disk\n", + "The `save_dataset` method can be used to save the image to disk. Formats supported for writing are some of the standard image formats (png, jpeg), GeoTIFF, NetCDF. The default will be to save to GeoTIFF with a filename based on the composite name and the start time of the scene." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn.save_dataset(composite)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Alternatively, one can specify the filename to save to (and using the proper extension, the format)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn.save_dataset(composite, filename='really_cool_pic.png')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Working with the individual channels\n", + "It is also possible to load the seviri channels individually (using their names or approximate wavelengths in µm) and work with them. To get the list of available channels, use the `available_dataset_names` method." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "scn.load([10.8, 12.0])\n", + "scn['t11-t12'] = scn[10.8] - scn[12.0]\n", + "scn.show('t11-t12')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 1.2\n", + "Try to resample `t11-t12`, and if it doesn't work, try to fix it :)\n", + "\n", + "(Hint: look at the metadata)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/10-start-sentinel-3.ipynb b/eum-2019-03/10-start-sentinel-3.ipynb new file mode 100644 index 0000000..c0f2849 --- /dev/null +++ b/eum-2019-03/10-start-sentinel-3.ipynb @@ -0,0 +1,68 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Sentinel 3 fun" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 10.1: Reading 
the OLCI data\n", + "Find out which reader to use from the satpy reader table on this page: https://satpy.readthedocs.io/en/latest/index.html\n", + "\n", + "Then create a True Color composite from it." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 10.2: Reading the SLSTR data\n", + "Find out which reader to use from the satpy reader table on this page: https://satpy.readthedocs.io/en/latest/index.html\n", + "\n", + "Then create an Overview Sun composite from it." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 10.3: Colorize\n", + "Choose a nice colormap and colorize the SLSTR thermal channel at ~11µm" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 10.4\n", + "Combine the results of exercises 10.1 and 10.3 to create a sandwich composite" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/2-start-metop.ipynb b/eum-2019-03/2-start-metop.ipynb new file mode 100644 index 0000000..c7455c5 --- /dev/null +++ b/eum-2019-03/2-start-metop.ipynb @@ -0,0 +1,221 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Playing with Metop AVHRR data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Summary\n", + "In this tutorial, we will read Metop AVHRR data and display a few composites, in satellite projection and on a couple of areas, and add coastlines to some images."
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Finding the files, filtering with time\n", + "The data for this tutorial is comprised of a handfull of Metop-B AVHRR/3 granules, that span over a couple of hours time. Since we just want work with a fraction of all that data, one solution is to define a start and end time and filter the files we provide to `Scene`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "from satpy import Scene, find_files_and_readers\n", + "from datetime import datetime\n", + "import pprint\n", + "\n", + "files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex2',\n", + " reader='avhrr_l1b_eps',\n", + " start_time=datetime(2018, 10, 7 ,9, 25),\n", + " end_time=datetime(2018, 10, 7 , 9, 30))\n", + "\n", + "#files=sorted(files)\n", + "scn = Scene(filenames=files)\n", + "pprint.pprint(files)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Then we decide on a composite to load and display it on screen" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(scn.available_composite_names())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "composite = 'overview'\n", + "scn.load([composite])\n", + "scn.show(composite)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Resampling and displaying with coastlines\n", + "Resampling is done exactly as in the previous tutorial. However this time we will display the data with coastlines on top." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn = scn.resample('euro1')\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": false + }, + "outputs": [], + "source": [ + "newscn.show(composite, overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For more information on which parameters you can pass for the coastline burning, check the pycoast documentation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Dynamic area resampling\n", + "For polar data, it can be convenient do resample the data in a given projection, but adapt the area extent to the actual data being resampled. For this, we have at the moment two dynamic areas we can use: `omerc_bb` for the oblique mercator projection and `laea_bb` for the lambert azimuthal equal-area projection. _Note_: `laea_bb` here will not yield a sensible result for the used dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Resample and show the image\n", + "newscn = scn.resample('omerc_bb')\n", + "newscn.show(composite, overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Filtering the data by area\n", + "When we have many data granules, it can be convenient to load only those covering a given area. 
We can achieve this by passing `filter_parameters` when instantiating the `Scene`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex2',\n", + " reader='avhrr_l1b_eps')\n", + "scn = Scene(filenames=files, filter_parameters={'area':'germ'})" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "composite = 'natural_color'\n", + "scn.load([composite])\n", + "newscn = scn.resample('eurol')\n", + "newscn.show(composite, overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Saving the data\n", + "Satpy allows you to save the data in various formats, among them GeoTIFF and NetCDF" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn.save_dataset(composite, filename='my_fantastic_image.tiff')\n", + "newscn.save_dataset(composite, filename='my_fantastic_image.nc')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 2.1\n", + "Generate an `ndvi` product using the individual channels and display the result on the `euron1` area. 
Then save your work as PNG.\n", + "\n", + "NDVI formula: ndvi = (vis08 - vis06) / (vis08 + vis06)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/3-pyorbital-passes.ipynb b/eum-2019-03/3-pyorbital-passes.ipynb new file mode 100644 index 0000000..88462c5 --- /dev/null +++ b/eum-2019-03/3-pyorbital-passes.ipynb @@ -0,0 +1,69 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Predicting polar orbiter passes\n", + "\n", + "In this tutorial, we look at how to predict passes of polar satellites using pyorbital" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 3.1\n", + "Look at the pyorbital documentation, compute the current position of NOAA-20, and predict the passes of NOAA-20 over EUMETSAT HQ for the next 24 hours.\n", + "https://pyorbital.readthedocs.io/en/latest/\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 3.2 (Advanced)\n", + "Plot the first of the computed passes on a map. This can be achieved by using the pytroll-schedule package, which you will need to install from the `feature-oo` branch. You also need to have basemap installed.\n", + "Use the `trollsched.satpass.Pass` class. Documentation on this one is scarce, so look at the source code to see how it works.\n", + "\n", + "And if you feel very advanced, please convert this plotting facility to cartopy (as basemap is deprecated)."
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 3.3\n", + "Compute the sun zenith angle at the previously computed sub-satellite coordinates." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.6" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/4-custom-areas.ipynb b/eum-2019-03/4-custom-areas.ipynb new file mode 100644 index 0000000..5629a7a --- /dev/null +++ b/eum-2019-03/4-custom-areas.ipynb @@ -0,0 +1,104 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# More about areas and pyresample\n", + "\n", + "## Summary\n", + "\n", + "In this tutorial, we will learn more about defining areas to work with in pytroll/satpy, and how to save them for later use.\n", + "\n", + "## Defining Area Definitions\n", + "\n", + "The areas in pytroll/satpy are defined through the pyresample package.\n", + "\n", + "To define an area, have a look at the pyresample documentation:\n", + "https://pyresample.readthedocs.io/en/latest/geo_def.html\n", + "\n", + "In this tutorial, we will create an area definition that covers Sweden, following the official SWEREF99TM.\n", + "\n", + "The projection to use is transverse mercator, with the following parameter:\n", + "- reference longitude is 15\n", + "- scale factor is 0.9996\n", + "\n", + "Sweden is included in a lon/lat box spanning approximately 10 to 25 degrees east and 55 to 70 degrees north. 
(If you're curious, look here: https://en.wikipedia.org/wiki/Geography_of_Sweden#Latitude_and_longitude)\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 4.1\n", + "Use pyproj to determine the area extent in projection coordinates of the bounding box.\n", + "\n", + "Pyproj documentation: https://jswhit.github.io/pyproj/\n", + "\n", + "The transverse Mercator projection parameters: https://proj4.org/operations/projections/tmerc.html" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 4.2\n", + "Create a pyresample AreaDefinition object using your computed parameters, with a resolution of ~1km" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 4.3\n", + "Resample your Metop data onto your newly defined area definition." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Saving the area\n", + "The area definitions can be saved in YAML format. An `areas.yaml` file is provided in the satpy installation (https://github.com/pytroll/satpy/blob/feature-areas-yaml/satpy/etc/areas.yaml), but for areas you want to save, it is best to have an `areas.yaml` file in your working directory that will complement or replace the standard one. Alternatively, you can place your areas file in a directory pointed to by the `PPP_CONFIG_DIR` environment variable.\n", + "To export the area you created, use the area's `create_areas_def` method." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 4.4\n", + "Save the area you just defined into your own `areas.yaml` file. Resample your Metop data using the saved area definition, i.e., by referring to it by name in the `scn.resample` call."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.6" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/5-custom-reader.ipynb b/eum-2019-03/5-custom-reader.ipynb new file mode 100644 index 0000000..b53dd1b --- /dev/null +++ b/eum-2019-03/5-custom-reader.ipynb @@ -0,0 +1,390 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Adding a custom reader to satpy\n", + "\n", + "\n", + "In order to add a reader to satpy, you will need to create two files:\n", + " - a YAML file for describing the files to read and the datasets that are available\n", + " - a python file implementing the actual reading of the datasets and metadata\n", + "\n", + "For this tutorial, we will implement a reader for the Eumetsat NetCDF format for SEVIRI data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## The YAML file\n", + "\n", + "The yaml file is composed of three sections:\n", + "- the `reader` section, that provides basic parameters for the reader\n", + "- the `file_types` section, which gives the patterns of the files this reader can handle\n", + "- the `datasets` section, describing the datasets available from this reader \n", + "\n", + "### The `reader` section\n", + "The `reader` section, that provides basic parameters for the reader.\n", + "\n", + "The parameters to provide in this section are:\n", + "- description: General description of the reader\n", + "- name: this is the name of the reader, it should be the same as the filename (without the .yaml extension obviously). 
This is the name used interactively in satpy, so choose it well! A loose convention is to use `<format>_<sensor>_<level>` as a template for the name\n", + "- sensors: the list of sensors this reader will support\n", + "- reader: the metareader to use; in most cases the `FileYAMLReader` is a good choice." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "```yaml\n", + "reader:\n", + " description: NetCDF4 reader for the Eumetsat MSG format\n", + " name: nc_seviri_l1b\n", + " sensors: [seviri]\n", + " reader: !!python/name:satpy.readers.yaml_reader.FileYAMLReader\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The `file_types` section\n", + "\n", + "Each file type needs to provide:\n", + " - `file_reader`, the class that will handle the files for this reader, which you will implement in the corresponding Python file (see next section)\n", + " - `file_patterns`, the patterns to match to find files this reader can handle. The syntax to use is basically the same as Python's `format`, with the addition of time formatting. See the [trollsift package documentation](https://trollsift.readthedocs.io/en/latest/usage.html) for more details.\n", + " - Optionally, a file type can have a `requires` field: a list of file types that the current file type needs in order to function. 
For example, the HRIT MSG format segment files each need a prologue and an epilogue file to be read properly, hence in that case we have added `requires: [HRIT_PRO, HRIT_EPI]` to the file type definition.\n", + " \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "```yaml\n", + "file_types:\n", + " nc_seviri_l1b:\n", + " file_reader: !!python/name:satpy.readers.nc_seviri_l1b.NCSEVIRIFileHandler\n", + " file_patterns: ['W_XX-EUMETSAT-Darmstadt,VIS+IR+IMAGERY,{satid:4s}+SEVIRI_C_EUMG_{processing_time:%Y%m%d%H%M%S}.nc']\n", + " nc_seviri_l1b_hrv:\n", + " file_reader: !!python/name:satpy.readers.nc_seviri_l1b.NCSEVIRIHRVFileHandler\n", + " file_patterns: ['W_XX-EUMETSAT-Darmstadt,HRV+IMAGERY,{satid:4s}+SEVIRI_C_EUMG_{processing_time:%Y%m%d%H%M%S}.nc']\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The `datasets` section\n", + "\n", + "The datasets section describes each dataset available in the files. The parameters provided are made available to the methods of the implementing class.\n", + "\n", + "Parameters you can define for example are:\n", + " - name\n", + " - sensor\n", + " - resolution\n", + " - wavelength\n", + " - polarization\n", + " - standard_name: the name used for the dataset, which will be used to determine what kind of data it is and to handle it appropriately\n", + " - units: the units of the data, important to get consistent processing across multiple platforms/instruments\n", + " - modifiers: which modifications have already been applied to the data, e.g. `sunz_corrected`\n", + " - file_type\n", + " - coordinates: this tells which datasets to load to navigate the current dataset\n", + " - and any other field that is relevant for the reader\n", + "\n", + "This section can simply be copied and adapted from an existing SEVIRI reader, for example the `seviri_l1b_native` reader."
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "```yaml\n", + "\n", + "datasets:\n", + " HRV:\n", + " name: HRV\n", + " resolution: 1000.134348869\n", + " wavelength: [0.5, 0.7, 0.9]\n", + " calibration:\n", + " reflectance:\n", + " standard_name: toa_bidirectional_reflectance\n", + " units: \"%\"\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b_hrv\n", + "\n", + " IR_016:\n", + " name: IR_016\n", + " resolution: 3000.403165817\n", + " wavelength: [1.5, 1.64, 1.78]\n", + " calibration:\n", + " reflectance:\n", + " standard_name: toa_bidirectional_reflectance\n", + " units: \"%\"\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + " nc_key: 'ch3'\n", + "\n", + " IR_039:\n", + " name: IR_039\n", + " resolution: 3000.403165817\n", + " wavelength: [3.48, 3.92, 4.36]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: K\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + " nc_key: 'ch4'\n", + "\n", + " IR_087:\n", + " name: IR_087\n", + " resolution: 3000.403165817\n", + " wavelength: [8.3, 8.7, 9.1]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: K\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " IR_097:\n", + " name: IR_097\n", + " 
resolution: 3000.403165817\n", + " wavelength: [9.38, 9.66, 9.94]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: K\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " IR_108:\n", + " name: IR_108\n", + " resolution: 3000.403165817\n", + " wavelength: [9.8, 10.8, 11.8]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: K\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " IR_120:\n", + " name: IR_120\n", + " resolution: 3000.403165817\n", + " wavelength: [11.0, 12.0, 13.0]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: K\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " IR_134:\n", + " name: IR_134\n", + " resolution: 3000.403165817\n", + " wavelength: [12.4, 13.4, 14.4]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: K\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " VIS006:\n", + " name: VIS006\n", + " resolution: 3000.403165817\n", + " wavelength: [0.56, 0.635, 0.71]\n", + " calibration:\n", + " reflectance:\n", + " standard_name: toa_bidirectional_reflectance\n", + " units: \"%\"\n", + 
" radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " VIS008:\n", + " name: VIS008\n", + " resolution: 3000.403165817\n", + " wavelength: [0.74, 0.81, 0.88]\n", + " calibration:\n", + " reflectance:\n", + " standard_name: toa_bidirectional_reflectance\n", + " units: \"%\"\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " WV_062:\n", + " name: WV_062\n", + " resolution: 3000.403165817\n", + " wavelength: [5.35, 6.25, 7.15]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: \"K\"\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " WV_073:\n", + " name: WV_073\n", + " resolution: 3000.403165817\n", + " wavelength: [6.85, 7.35, 7.85]\n", + " calibration:\n", + " brightness_temperature:\n", + " standard_name: toa_brightness_temperature\n", + " units: \"K\"\n", + " radiance:\n", + " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", + " units: W m-2 um-1 sr-1\n", + " counts:\n", + " standard_name: counts\n", + " units: count\n", + " file_type: nc_seviri_l1b\n", + "\n", + " \n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The YAML file is now ready, let's go on with the corresponding python file.\n", + "## The python file\n", + "The python files needs to implement a file handler class for each file type that we want to read.\n", + "Such a class needs to implement a few methods:\n", + "- the `__init__` method, that takes as arguments \n", + 
" - the filename (string)\n", + " - the filename info (dict) that we get by parsing the filename using the pattern defined in the yaml file\n", + " - the filetype info that we get from the filetype definition in the yaml file\n", + " \n", + " This method can also recieve other file handler instances as parameter if the filetype at hand has requirements. (See the explanation in the YAML file filetype section above)\n", + "- the `get_dataset` method, which takes as arguments \n", + " - the dataset ID of the dataset to load\n", + " - the dataset info that is the description of the channel in the YAML file\n", + " \n", + " This method has to return an xarray.DataArray instance if the loading is successful, containing the data and metadata of the loaded dataset, or return None if the loading was unsuccessful.\n", + "- the `get_area_def` method, that takes as single argument the dataset ID for which we want the area. For the data that cannot be geolocated with an area definition, the pixel coordinates need to be loadable from `get_dataset` for the resulting scene to be navigated. That is, if the data cannot be geolocated with an area definition then the dataset section should specify `coordinates: [longitude_dataset, latitude_dataset]`\n", + "- Optionally, the `get_bounding_box` method can be implemented if filtering files by area is desirable for this data type \n", + "\n", + "On top of that, two attributes need to be defined: `start_time` and `end_time`, that define the start and end times of the sensing." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# this is nc_seviri_l1b.py\n",
+    "import xarray as xr\n",
+    "\n",
+    "from satpy import CHUNK_SIZE\n",
+    "from satpy.readers.file_handlers import BaseFileHandler\n",
+    "\n",
+    "\n",
+    "class NCSEVIRIFileHandler(BaseFileHandler):\n",
+    "    def __init__(self, filename, filename_info, filetype_info):\n",
+    "        super(NCSEVIRIFileHandler, self).__init__(filename, filename_info, filetype_info)\n",
+    "        self.nc = None\n",
+    "\n",
+    "    def get_dataset(self, dataset_id, dataset_info):\n",
+    "        if dataset_id.calibration != 'radiance':\n",
+    "            # TODO: implement calibration to reflectance or brightness temperature\n",
+    "            return\n",
+    "        if self.nc is None:\n",
+    "            self.nc = xr.open_dataset(self.filename,\n",
+    "                                      decode_cf=True,\n",
+    "                                      mask_and_scale=True,\n",
+    "                                      chunks={'num_columns_vis_ir': CHUNK_SIZE,\n",
+    "                                              'num_rows_vis_ir': CHUNK_SIZE})\n",
+    "            self.nc = self.nc.rename({'num_columns_vis_ir': 'x', 'num_rows_vis_ir': 'y'})\n",
+    "        dataset = self.nc[dataset_info['nc_key']]\n",
+    "        dataset.attrs.update(dataset_info)\n",
+    "        return dataset\n",
+    "\n",
+    "    def get_area_def(self, dataset_id):\n",
+    "        # TODO\n",
+    "        pass\n",
+    "\n",
+    "\n",
+    "class NCSEVIRIHRVFileHandler(BaseFileHandler):\n",
+    "    # left as an exercise to the reader :)\n",
+    "    pass"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.6"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb b/eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb
new file mode 100644
index 0000000..3179ff5
--- /dev/null
+++ b/eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb
@@ -0,0 +1,167 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Visualizing Metop and MSG data on the same grid\n",
+    "## Summary\n",
"In this tutorial, we well look at how we can put both Metop and MSG data on the same grid\n", + "\n", + "## First step, loading the data\n", + "Loading the data is done in the same way as presented in earlier tutorial\n", + "\n", + "## Exercise 6.1: Getting the data on the same grid\n", + "Create a `Scene` object for the Metop data and another one for the MSG data. Load the `day_microphysics` RGB composite from MSG and the 10.8µm channel from Metop. Resample both to the EastEurope area." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from satpy import Scene, find_files_and_readers\n", + "from datetime import datetime\n", + "\n", + "metop_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", + " reader='avhrr_l1b_eps',\n", + " start_time=datetime(2018, 10, 7 ,9, 25),\n", + " end_time=datetime(2018, 10, 7 ,9, 30))\n", + "scn_metop = Scene(filenames=metop_files)\n", + "scn_metop.load([10.8])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "msg_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", + " reader='seviri_l1b_native')\n", + "\n", + "scn = Scene(filenames=msg_files)\n", + "scn_msg = Scene(filenames=msg_files)\n", + "scn_msg.load(['day_microphysics','IR_108'])\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn_metop = scn_metop.resample('eurol')\n", + "newscn_metop.show(10.8)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "newscn_msg = scn_msg.resample('eurol')\n", + "newscn_msg.show('IR_108')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 6.2: Manipulating images\n", + "Now we want to put both datasets onto one image." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from satpy.writers import get_enhanced_image\n",
+    "import xarray as xr\n",
+    "from trollimage.xrimage import XRImage\n",
+    "\n",
+    "# Let's compare MSG and Metop channel 10.8 data\n",
+    "msg_image = get_enhanced_image(newscn_msg['IR_108'])\n",
+    "\n",
+    "metop_image = get_enhanced_image(newscn_metop[10.8])\n",
+    "\n",
+    "array1 = msg_image.data.where(metop_image.data.isnull(), metop_image.data)\n",
+    "\n",
+    "XRImage(array1)\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Now using the MSG composite\n",
+    "msg_image = get_enhanced_image(newscn_msg['day_microphysics'])\n",
+    "color_array = xr.concat((metop_image.data, metop_image.data, metop_image.data), 'bands')\n",
+    "color_array['bands'] = ['R', 'G', 'B']\n",
+    "final_array = msg_image.data.where(color_array.isnull(), color_array.data)\n",
+    "XRImage(final_array)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## The `MultiScene` class\n",
+    "In order to manipulate multiple scenes at once, in some circumstances, it may be advantageous to make use of Satpy's `MultiScene` class. 
Let's create a `MultiScene` instance, load some data and resample the data to the `EastEurope` area."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from satpy import MultiScene\n",
+    "mscn = MultiScene([scn_msg, scn_metop])\n",
+    "mscn.load(['overview'])\n",
+    "new_mscn = mscn.resample('eurol')\n",
+    "blended_scene = new_mscn.blend()\n",
+    "blended_scene.show('overview', overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/eum-2019-03/7-A_Time_Loop_Animation.ipynb b/eum-2019-03/7-A_Time_Loop_Animation.ipynb
new file mode 100644
index 0000000..ce93b56
--- /dev/null
+++ b/eum-2019-03/7-A_Time_Loop_Animation.ipynb
@@ -0,0 +1,102 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## MPEF MSG Animation"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Summary\n",
+    "In this tutorial we will load 6 hours' worth of MSG Level 1.5 data and create an animation, saved in MP4 format."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import sys\n",
+    "import satpy\n",
+    "from satpy import Scene\n",
+    "from satpy.multiscene import MultiScene\n",
+    "import imageio\n",
+    "\n",
+    "import glob"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Locate the image files and store them in a list\n",
+    "base_dir = '/tcenas/scratch/pytroll/ex7/'\n",
+    "files = glob.glob('{}*'.format(base_dir))\n",
+    "files = sorted(files)\n",
+    "\n",
+    "# Create a scene for each file and store the scenes in a list\n",
+    "scenes = [Scene(sensor=\"seviri\", reader='seviri_l1b_native', filenames=[filename]) for filename in files]\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "mscn = MultiScene(scenes)\n",
+    "# Load a channel\n",
+    "mscn.load(['VIS006'])\n",
+    "mscn.save_animation('full_msg4_test.mp4', fps=2)\n",
+    "# View the animation in your browser\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Exercise 7.1: Create an animated composite resampled over Europe"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/eum-2019-03/8-Saving_to_netcdf.ipynb b/eum-2019-03/8-Saving_to_netcdf.ipynb
new file mode 100644
index 0000000..3f8415b
--- /dev/null
+++ b/eum-2019-03/8-Saving_to_netcdf.ipynb
@@ -0,0 +1,32 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
"nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/9-Computing_statistics.ipynb b/eum-2019-03/9-Computing_statistics.ipynb new file mode 100644 index 0000000..d057e99 --- /dev/null +++ b/eum-2019-03/9-Computing_statistics.ipynb @@ -0,0 +1,161 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 9: Analysing data for an area of interest\n", + "\n", + "In previous exercises we've learned how to load and display satellite data using a `Scene`-object. In this exercise we will extract a piece of data from that scene and do simple analysis on it. Here we learn how to play with `dask`-arrays. Furthermore, we will touch some Python basics, i.e., string formatting, use of `glob`-module, and using slices." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 9.1: Find the file\n", + "\n", + "Use string formatting and `glob`-module ot get get a list of files in directory `/tcenas/scratch/pytroll/ex9`. Example of string formatting is given below" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "import glob\n", + "from satpy import Scene\n", + "\n", + "# String formatting example\n", + "str = '{} world!'\n", + "print (str.format('Hello'))\n", + "\n", + "# Using glob, find *.nat files in the given directory\n", + "base_dir = '/tcenas/scratch/pytroll/ex9'\n", + "files = \n", + "\n", + "# print found files\n", + "print (files)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 9.2: Create a `Scene`-object and load some data\n", + "\n", + "The resulting `files`-variable should only contain one file. Use the variable as a list to initiate a `Scene`-object and load `day_microphysics`-composite and channel data for 0.6um and 1.6um. 
_Hint:_ `Scene`-object has a convenience method `available_dataset_names` to check how the different channels are named in the reader."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": false
+   },
+   "outputs": [],
+   "source": [
+    "# Create a Scene-object\n",
+    "scn = \n",
+    "# Let's define a composite variable for convenience\n",
+    "composite = 'day_microphysics'\n",
+    "# Make a list with the composite, and 0.6um and 1.6um channel data\n",
+    "datasets = \n",
+    "# Load datasets to the Scene-object\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Exercise 9.3: Resampling the `Scene`\n",
+    "\n",
+    "Resample the created `Scene`-object to `EastEurope` area. Show the `day_microphysics`-composite."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Resample the scene to the new area\n",
+    "ee = \n",
+    "# Show composite\n",
+    "ee.show(composite)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Exercise 9.4: Data slicing and statistics\n",
+    "\n",
+    "Now we're going to work with the resampled image. The left hand side shows a big thunderstorm system over East Europe. Take a data slice of `[245:275, 80:120]` from both 0.6um and 1.6um channels and calculate the mean reflectance. _Hint_: The extracted data slice is a `dask`-array. These arrays are _lazy_. Also, we're using `slice`-objects here for convenient ROI definition."
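The slicing pattern of this exercise can be sketched with NumPy (dask arrays expose the same indexing API, they just defer the computation until `.compute()` is called); the synthetic reflectance field below is an assumption for illustration, standing in for a resampled 0.6um channel:

```python
import numpy as np

# Synthetic "reflectance" field standing in for a resampled 0.6um channel
rng = np.random.default_rng(0)
refl = rng.uniform(0.0, 100.0, size=(512, 512))

# Region of interest expressed as slice objects, as in the exercise
rows = slice(245, 275)
cols = slice(80, 120)

roi = refl[rows, cols]  # 30 rows x 40 columns
print(roi.shape)        # (30, 40)
print(float(roi.mean()))
```

With a real dask-backed channel, `roi.mean()` would only build a task graph; the number comes out when the result is computed.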
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Area of interest\n",
+    "rows = slice()\n",
+    "cols = slice()\n",
+    "# Slice the area of interest to a variable\n",
+    "vis06 = \n",
+    "ir16 = \n",
+    "# Compute and print the mean reflectances\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Exercise 9.5: Utilizing the data loaded for the composite\n",
+    "\n",
+    "The `day_microphysics` composite is a three-component data array. Here's how the composite is built: http://www.eumetrain.org/rgb_quick_guides/quick_guides/DaymicroRGB.pdf\n",
+    "Calculate and print the mean of the 0.8um channel using the data loaded for the `day_microphysics` composite. Furthermore, calculate and print the minimum, maximum, and standard deviation of the 10.8um channel."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Slice the area of interest to variables\n",
+    "vis08 = \n",
+    "ir108 = \n",
+    "# Compute and print the required statistics\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.6"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/eum-2019-03/sample_data/.keep b/eum-2019-03/sample_data/.keep
new file mode 100644
index 0000000..e69de29

From 6bfa437db2c3bbf3c9a16986b94bee2e277951cc Mon Sep 17 00:00:00 2001
From: Sauli Joro
Date: Tue, 26 Mar 2019 11:36:19 +0000
Subject: [PATCH 2/7] Update installation instructions.
--- eum-2019-03/0-Installation.ipynb | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/eum-2019-03/0-Installation.ipynb b/eum-2019-03/0-Installation.ipynb index 8ff4bff..3fa6f4d 100644 --- a/eum-2019-03/0-Installation.ipynb +++ b/eum-2019-03/0-Installation.ipynb @@ -23,20 +23,25 @@ "3. `echo 'export PATH=\"$PATH:$HOME/miniconda3/bin\"' >> ~/.profile.local`\n", "4. `source ~/.profile.local`\n", "\n", - "### Copy a ready-made `conda` environment for the training\n", + "### Create a new `conda` environment for the training\n", "\n", - "A ready made environment with all the necessary installations is available in `/tcenas/scratch/pytroll`. The name of the copied environment can be chosen freely, for the sake of simplicity we will use name `troll`for this training. Copying may take a few minutes...\n", + "The name of the environment can be chosen freely, for the sake of simplicity we will use name `troll` for this training.\n", "\n", - "1. `cd ~/miniconda3/envs`\n", - "2. `cp -R /tcenas/scratch/pytroll/troll_env ./troll`\n", + "1. `conda create -n troll python=3.6`\n", "\n", - "### Activate the `troll` environment\n", + "Activate the `troll` environment with a command\n", "\n", - "1. `source activate troll`\n", + "2. `source activate troll`\n", "\n", "You should have an indication in your terminal prompt that the conda environment is active, e.g.:\n", "`(troll) joro@x2g13:~ $`. When the `conda` environment is active, all the installations after this point will go only to this specific environment. The environment can be deactivated with `source deactivate`-command.\n", "\n", + "### Installation of required libraries and modules\n", + "\n", + "Install the required libraries and modules with this monster one-liner:\n", + "\n", + "1. 
`conda install -c conda-forge gdal satpy jupyter ffmpeg imageio imageio-ffmpeg shapely cartopy scipy rasterio mysql-connector-python matplotlib pycoast pyorbital`\n",
+    "\n",
     "\n",
     "## Using `jupyter`-notebook\n",
     "\n",
@@ -53,13 +58,7 @@
     "1. `/sbin/ifconfig`\n",
     "2. `jupyter notebook --ip= --no-browser`\n",
     "\n",
-    "Copy and paste the given URL to your laptops browser\n",
-    "\n",
-    "## Information about conda installations\n",
-    "\n",
-    "Though `conda` installations are not necessary for this training, any missing modules and packages can be installed to your environment. Usually it's best to google the module, e.g., \"conda install pycoast\" and follow the instructions. In case of `pycoast`, the installation would be\n",
-    "\n",
-    "1. `conda install -c conda-forge pycoast`\n"
+    "Copy and paste the given URL to your laptop's Google Chrome browser\n"
    ]
   },
   {
@@ -89,7 +88,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.2"
+   "version": "3.6.7"
  }
 },
 "nbformat": 4,

From b2ae11bb4d67bdbb6bcc64c2b56527cbb5373b10 Mon Sep 17 00:00:00 2001
From: Sauli Joro
Date: Tue, 26 Mar 2019 11:55:34 +0000
Subject: [PATCH 3/7] Unify exercise naming. Update installation instructions.
--- eum-2019-03/0-Installation.ipynb | 96 ---------- .../6-Metop_and_MSG_on_the_same_grid.ipynb | 167 ------------------ eum-2019-03/7-A_Time_Loop_Animation.ipynb | 102 ----------- eum-2019-03/8-Saving_to_netcdf.ipynb | 32 ---- eum-2019-03/9-Computing_statistics.ipynb | 161 ----------------- 5 files changed, 558 deletions(-) delete mode 100644 eum-2019-03/0-Installation.ipynb delete mode 100644 eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb delete mode 100644 eum-2019-03/7-A_Time_Loop_Animation.ipynb delete mode 100644 eum-2019-03/8-Saving_to_netcdf.ipynb delete mode 100644 eum-2019-03/9-Computing_statistics.ipynb diff --git a/eum-2019-03/0-Installation.ipynb b/eum-2019-03/0-Installation.ipynb deleted file mode 100644 index 3fa6f4d..0000000 --- a/eum-2019-03/0-Installation.ipynb +++ /dev/null @@ -1,96 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Laptop configuration for the training\n", - "\n", - "The training will be conducted using X2Go-servers available in the TCE. Instructions on how to install X2Go on your laptop can be found in DM Tool document `EUM/RSP/VWG/18/1018393`.\n", - "\n", - "1. You should have X2Go already installed on your laptop, so go ahead and start it up using your TCE credentials\n", - "2. Open a new console from the Start menu -> System Tools -> Konsole (font size can be made bigger with Ctrl +)\n", - "\n", - "\n", - "## Installation and use of `conda`\n", - "\n", - "### Installation of `miniconda3`\n", - "\n", - "Type the following commands to the terminal.\n", - "\n", - "1. `wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh`\n", - "2. `bash ~/miniconda.sh -b -p $HOME/miniconda3`\n", - "3. `echo 'export PATH=\"$PATH:$HOME/miniconda3/bin\"' >> ~/.profile.local`\n", - "4. 
`source ~/.profile.local`\n",
-    "\n",
-    "### Create a new `conda` environment for the training\n",
-    "\n",
-    "The name of the environment can be chosen freely, for the sake of simplicity we will use name `troll` for this training.\n",
-    "\n",
-    "1. `conda create -n troll python=3.6`\n",
-    "\n",
-    "Activate the `troll` environment with a command\n",
-    "\n",
-    "2. `source activate troll`\n",
-    "\n",
-    "You should have an indication in your terminal prompt that the conda environment is active, e.g.:\n",
-    "`(troll) joro@x2g13:~ $`. When the `conda` environment is active, all the installations after this point will go only to this specific environment. The environment can be deactivated with `source deactivate`-command.\n",
-    "\n",
-    "### Installation of required libraries and modules\n",
-    "\n",
-    "Install the required libraries and modules with this monster one-liner:\n",
-    "\n",
-    "1. `conda install -c conda-forge gdal satpy jupyter ffmpeg imageio imageio-ffmpeg shapely cartopy scipy rasterio mysql-connector-python matplotlib pycoast pyorbital`\n",
-    "\n",
-    "\n",
-    "## Using `jupyter`-notebook\n",
-    "\n",
-    "### Copy the exercises\n",
-    "\n",
-    "1. `mkdir ~/pytroll_training`\n",
-    "2. `cp /tcenas/scratch/pytroll/exercises/* ~/pytroll_training`\n",
-    "3. `cd ~/pytroll_training`\n",
-    "\n",
-    "### Starting headless `jupyter notebook` server\n",
-    "In order to get the best performance for the exercises, we'll start `jupyter notebook` with a headless server, i.e., we run the notebook on the X2Go-server, but connect to it from your own laptops.\n",
-    "\n",
-    "Find out the IP-address of your server, and start `jupyter notebook` with `--ip` and `--no-browser` options\n",
-    "1. `/sbin/ifconfig`\n",
-    "2. 
`jupyter notebook --ip= --no-browser`\n",
-    "\n",
-    "Copy and paste the given URL to your laptop's Google Chrome browser\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "import satpy\n",
-    "print (satpy.__file__)"
-   ]
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 3",
-   "language": "python",
-   "name": "python3"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.6.7"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb b/eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb
deleted file mode 100644
index 3179ff5..0000000
--- a/eum-2019-03/6-Metop_and_MSG_on_the_same_grid.ipynb
+++ /dev/null
@@ -1,167 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# Visualizing Metop and MSG data on the same grid\n",
-    "## Summary\n",
-    "In this tutorial, we will look at how we can put both Metop and MSG data on the same grid.\n",
-    "\n",
-    "## First step, loading the data\n",
-    "Loading the data is done in the same way as presented in the earlier tutorials.\n",
-    "\n",
-    "## Exercise 6.1: Getting the data on the same grid\n",
-    "Create a `Scene` object for the Metop data and another one for the MSG data. Load the `day_microphysics` RGB composite from MSG and the 10.8µm channel from Metop. Resample both to the EastEurope area."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from satpy import Scene, find_files_and_readers\n", - "from datetime import datetime\n", - "\n", - "metop_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", - " reader='avhrr_l1b_eps',\n", - " start_time=datetime(2018, 10, 7 ,9, 25),\n", - " end_time=datetime(2018, 10, 7 ,9, 30))\n", - "scn_metop = Scene(filenames=metop_files)\n", - "scn_metop.load([10.8])" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "msg_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", - " reader='seviri_l1b_native')\n", - "\n", - "scn = Scene(filenames=msg_files)\n", - "scn_msg = Scene(filenames=msg_files)\n", - "scn_msg.load(['day_microphysics','IR_108'])\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn_metop = scn_metop.resample('eurol')\n", - "newscn_metop.show(10.8)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "newscn_msg = scn_msg.resample('eurol')\n", - "newscn_msg.show('IR_108')\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 6.2: Manipulating images\n", - "Now we want to put both datasets onto one image." 
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from satpy.writers import get_enhanced_image\n",
-    "import xarray as xr\n",
-    "from trollimage.xrimage import XRImage\n",
-    "\n",
-    "# Let's compare MSG and Metop channel 10.8 data\n",
-    "msg_image = get_enhanced_image(newscn_msg['IR_108'])\n",
-    "\n",
-    "metop_image = get_enhanced_image(newscn_metop[10.8])\n",
-    "\n",
-    "array1 = msg_image.data.where(metop_image.data.isnull(), metop_image.data)\n",
-    "\n",
-    "XRImage(array1)\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Now using the MSG composite\n",
-    "msg_image = get_enhanced_image(newscn_msg['day_microphysics'])\n",
-    "color_array = xr.concat((metop_image.data, metop_image.data, metop_image.data), 'bands')\n",
-    "color_array['bands'] = ['R', 'G', 'B']\n",
-    "final_array = msg_image.data.where(color_array.isnull(), color_array.data)\n",
-    "XRImage(final_array)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## The `MultiScene` class\n",
-    "In order to manipulate multiple scenes at once, in some circumstances, it may be advantageous to make use of Satpy's `MultiScene` class. 
Let's create a `MultiScene` instance, load some data and resample the data to the `EastEurope` area."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from satpy import MultiScene\n",
-    "mscn = MultiScene([scn_msg, scn_metop])\n",
-    "mscn.load(['overview'])\n",
-    "new_mscn = mscn.resample('eurol')\n",
-    "blended_scene = new_mscn.blend()\n",
-    "blended_scene.show('overview', overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 3",
-   "language": "python",
-   "name": "python3"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.6.7"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/eum-2019-03/7-A_Time_Loop_Animation.ipynb b/eum-2019-03/7-A_Time_Loop_Animation.ipynb
deleted file mode 100644
index ce93b56..0000000
--- a/eum-2019-03/7-A_Time_Loop_Animation.ipynb
+++ /dev/null
@@ -1,102 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## MPEF MSG Animation"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Summary\n",
-    "In this tutorial we will load 6 hours' worth of MSG Level 1.5 data and create an animation, saved in MP4 format."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "import os\n",
-    "import sys\n",
-    "import satpy\n",
-    "from satpy import Scene\n",
-    "from satpy.multiscene import MultiScene\n",
-    "import imageio\n",
-    "\n",
-    "import glob"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "scrolled": 
true
-   },
-   "outputs": [],
-   "source": [
-    "# Locate the image files and store them in a list\n",
-    "base_dir = '/tcenas/scratch/pytroll/ex7/'\n",
-    "files = glob.glob('{}*'.format(base_dir))\n",
-    "files = sorted(files)\n",
-    "\n",
-    "# Create a scene for each file and store the scenes in a list\n",
-    "scenes = [Scene(sensor=\"seviri\", reader='seviri_l1b_native', filenames=[filename]) for filename in files]\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "mscn = MultiScene(scenes)\n",
-    "# Load a channel\n",
-    "mscn.load(['VIS006'])\n",
-    "mscn.save_animation('full_msg4_test.mp4', fps=2)\n",
-    "# View the animation in your browser\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Exercise 7.1: Create an animated composite resampled over Europe"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 3",
-   "language": "python",
-   "name": "python3"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.6.7"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/eum-2019-03/8-Saving_to_netcdf.ipynb b/eum-2019-03/8-Saving_to_netcdf.ipynb
deleted file mode 100644
index 3f8415b..0000000
--- a/eum-2019-03/8-Saving_to_netcdf.ipynb
+++ /dev/null
@@ -1,32 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 3",
-   "language": "python",
-   "name": "python3"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": 
"python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.3" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/eum-2019-03/9-Computing_statistics.ipynb b/eum-2019-03/9-Computing_statistics.ipynb deleted file mode 100644 index d057e99..0000000 --- a/eum-2019-03/9-Computing_statistics.ipynb +++ /dev/null @@ -1,161 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 9: Analysing data for an area of interest\n", - "\n", - "In previous exercises we've learned how to load and display satellite data using a `Scene`-object. In this exercise we will extract a piece of data from that scene and do simple analysis on it. Here we learn how to play with `dask`-arrays. Furthermore, we will touch some Python basics, i.e., string formatting, use of `glob`-module, and using slices." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.1: Find the file\n", - "\n", - "Use string formatting and `glob`-module ot get get a list of files in directory `/tcenas/scratch/pytroll/ex9`. Example of string formatting is given below" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "import glob\n", - "from satpy import Scene\n", - "\n", - "# String formatting example\n", - "str = '{} world!'\n", - "print (str.format('Hello'))\n", - "\n", - "# Using glob, find *.nat files in the given directory\n", - "base_dir = '/tcenas/scratch/pytroll/ex9'\n", - "files = \n", - "\n", - "# print found files\n", - "print (files)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.2: Create a `Scene`-object and load some data\n", - "\n", - "The resulting `files`-variable should only contain one file. Use the variable as a list to initiate a `Scene`-object and load `day_microphysics`-composite and channel data for 0.6um and 1.6um. 
_Hint:_ `Scene`-object has a convenience method `available_dataset_names` to check how the different channels are named in the reader." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": false - }, - "outputs": [], - "source": [ - "# Create a Scene-object\n", - "scn = \n", - "# Let's define a composite variable for convenience\n", - "composite = 'day_microphysics'\n", - "# Make a list with the composite, and 0.6um and 1.6um channel data\n", - "datasets = \n", - "# Load datasets to the Scene-object\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.3: Resampling the `Scene`\n", - "\n", - "Resample the created `Scene`-object to `EastEurope` area. Show the `day_microphysics`-composite." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "# Resample the scene to the new area\n", - "ee = \n", - "# Show composite\n", - "ee.show(composite)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.4: Data slicing and statistics\n", - "\n", - "Ńow we're goint to work with the resampled image. The left hand side shows a big thunderstorm system over East Europe. Take a data slice of `[245:275, 80:120]` from both 0.6um and 1.6um channels and calculate the mean reflectance. _Hint_: The extracted data slice is a `dask`-array. These arrays are _lazy_. Also, we're using `slice`-objects here for convenient ROI definition." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Area of interest\n", - "rows = slice()\n", - "cols = slice()\n", - "# Slice the area of interest to a variable\n", - "vis06 = \n", - "ir16 = \n", - "# Compute and print the mean reflectances\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.5: Utilizing the data loaded for the composite\n", - "\n", - "The `day_microphysics` composite is a three component data array. Here's how the composite is build: http://www.eumetrain.org/rgb_quick_guides/quick_guides/DaymicroRGB.pdf\n", - "Calculate and print the mean of the 0.8um channel using the data loaded for the `day_microphysics` composite. Furthermore, calculate and print the minimum, maximum, and standard deviation of the 10.8um channel." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Slice the area of interest to variables\n", - "vis08 = \n", - "ir108 =\n", - "# Compute and print the required statistics\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} From 4ae45b89b6b6f0e82c5add546525692414095665 Mon Sep 17 00:00:00 2001 From: Sauli Joro Date: Tue, 26 Mar 2019 11:56:04 +0000 Subject: [PATCH 4/7] Unify exercise naming. Update installation instructions. 
--- eum-2019-03/0-installation.ipynb | 102 ++++++++++++ eum-2019-03/6-metop-and-msg-same-grid.ipynb | 167 ++++++++++++++++++++ eum-2019-03/7-animation.ipynb | 102 ++++++++++++ eum-2019-03/8-saving-to-netcdf.ipynb | 32 ++++ eum-2019-03/9-hands-on-data.ipynb | 161 +++++++++++++++++++ 5 files changed, 564 insertions(+) create mode 100644 eum-2019-03/0-installation.ipynb create mode 100644 eum-2019-03/6-metop-and-msg-same-grid.ipynb create mode 100644 eum-2019-03/7-animation.ipynb create mode 100644 eum-2019-03/8-saving-to-netcdf.ipynb create mode 100644 eum-2019-03/9-hands-on-data.ipynb diff --git a/eum-2019-03/0-installation.ipynb b/eum-2019-03/0-installation.ipynb new file mode 100644 index 0000000..e6b36b5 --- /dev/null +++ b/eum-2019-03/0-installation.ipynb @@ -0,0 +1,102 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laptop configuration for the training\n", + "\n", + "The training will be conducted using X2Go-servers available in the TCE. Instructions on how to install X2Go on your laptop can be found in DM Tool document `EUM/RSP/VWG/18/1018393`.\n", + "\n", + "1. You should have X2Go already installed on your laptop, so go ahead and start it up using your TCE credentials\n", + "2. Open a new console from the Start menu -> System Tools -> Konsole (font size can be made bigger with Ctrl +)\n", + "\n", + "\n", + "## Installation and use of `conda`\n", + "\n", + "### Installation of `miniconda3`\n", + "\n", + "Type the following commands to the terminal.\n", + "\n", + "1. `wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh`\n", + "2. `bash ~/miniconda.sh -b -p $HOME/miniconda3`\n", + "3. `echo 'export PATH=\"$PATH:$HOME/miniconda3/bin\"' >> ~/.profile.local`\n", + "4. 
`source ~/.profile.local`\n",
+    "\n",
+    "### Create a new `conda` environment for the training\n",
+    "\n",
+    "The name of the environment can be chosen freely; for the sake of simplicity we will use the name `troll` for this training.\n",
+    "\n",
+    "1. `conda create -n troll python=3.6`\n",
+    "\n",
+    "Activate the `troll` environment with the command\n",
+    "\n",
+    "2. `source activate troll`\n",
+    "\n",
+    "You should have an indication in your terminal prompt that the conda environment is active, e.g.:\n",
+    "`(troll) joro@x2g13:~ $`. When the `conda` environment is active, all the installations after this point will go only to this specific environment. The environment can be deactivated with the `source deactivate`-command.\n",
+    "\n",
+    "### Installation of required libraries and modules\n",
+    "\n",
+    "Install the required libraries and modules with this monster one-liner:\n",
+    "\n",
+    "1. `conda install -c conda-forge gdal satpy jupyter ffmpeg imageio imageio-ffmpeg shapely cartopy scipy rasterio mysql-connector-python matplotlib cartopy scipy pycoast pyorbital`\n",
+    "\n",
+    "### Update SEVIRI l1b-reader\n",
+    "\n",
+    "PyTroll packages are constantly being developed. In order to carry out some of the exercises, we need to get an updated version of one of the readers.\n",
+    "\n",
+    "1. `cp /tcenas/scratch/pytroll/seviri_l1b_native.py ~/miniconda3/envs/troll/lib/python3.6/site-packages/satpy/readers/`\n",
+    "\n",
+    "\n",
+    "## Using `jupyter`-notebook\n",
+    "\n",
+    "### Copy the exercises\n",
+    "\n",
+    "1. `mkdir ~/pytroll_training`\n",
+    "2. `cp /tcenas/scratch/pytroll/exercises/* ~/pytroll_training`\n",
+    "3. 
`cd ~/pytroll_training`\n",
+    "\n",
+    "### Starting headless `jupyter notebook` server\n",
+    "In order to get the best performance for the exercises, we'll start `jupyter notebook` with a headless server, i.e., we run the notebook on the X2Go-server, but connect to it from your own laptops.\n",
+    "\n",
+    "Find out the IP-address of your server, and start `jupyter notebook` with the `--ip` and `--no-browser` options:\n",
+    "1. `/sbin/ifconfig`\n",
+    "2. `jupyter notebook --ip= --no-browser`\n",
+    "\n",
+    "Copy and paste the given URL to your laptop's Google Chrome browser\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import satpy\n",
+    "print (satpy.__file__)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/eum-2019-03/6-metop-and-msg-same-grid.ipynb b/eum-2019-03/6-metop-and-msg-same-grid.ipynb
new file mode 100644
index 0000000..3179ff5
--- /dev/null
+++ b/eum-2019-03/6-metop-and-msg-same-grid.ipynb
@@ -0,0 +1,167 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Visualizing Metop and MSG data on the same grid\n",
+    "## Summary\n",
+    "In this tutorial, we will look at how we can put both Metop and MSG data on the same grid.\n",
+    "\n",
+    "## First step, loading the data\n",
+    "Loading the data is done in the same way as presented in earlier tutorials.\n",
+    "\n",
+    "## Exercise 6.1: Getting the data on the same grid\n",
+    "Create a `Scene` object for the Metop data and another one for the MSG data. 
Load the `day_microphysics` RGB composite from MSG and the 10.8µm channel from Metop. Resample both to the EastEurope area." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from satpy import Scene, find_files_and_readers\n", + "from datetime import datetime\n", + "\n", + "metop_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", + " reader='avhrr_l1b_eps',\n", + " start_time=datetime(2018, 10, 7 ,9, 25),\n", + " end_time=datetime(2018, 10, 7 ,9, 30))\n", + "scn_metop = Scene(filenames=metop_files)\n", + "scn_metop.load([10.8])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "msg_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", + " reader='seviri_l1b_native')\n", + "\n", + "scn = Scene(filenames=msg_files)\n", + "scn_msg = Scene(filenames=msg_files)\n", + "scn_msg.load(['day_microphysics','IR_108'])\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "newscn_metop = scn_metop.resample('eurol')\n", + "newscn_metop.show(10.8)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "newscn_msg = scn_msg.resample('eurol')\n", + "newscn_msg.show('IR_108')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 6.2: Manipulating images\n", + "Now we want to put both datasets onto one image." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from satpy.writers import get_enhanced_image\n",
+    "import xarray as xr\n",
+    "from trollimage.xrimage import XRImage\n",
+    "\n",
+    "# Let's compare MSG and Metop channel 10.8 data\n",
+    "msg_image = get_enhanced_image(newscn_msg['IR_108'])\n",
+    "\n",
+    "metop_image = get_enhanced_image(newscn_metop[10.8])\n",
+    "\n",
+    "array1 = msg_image.data.where(metop_image.data.isnull(), metop_image.data)\n",
+    "\n",
+    "XRImage(array1)\n",
+    "\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Now using the MSG composite\n",
+    "msg_image = get_enhanced_image(newscn_msg['day_microphysics'])\n",
+    "color_array = xr.concat((metop_image.data, metop_image.data, metop_image.data), 'bands')\n",
+    "color_array['bands'] = ['R', 'G', 'B']\n",
+    "final_array = msg_image.data.where(color_array.isnull(), color_array.data)\n",
+    "XRImage(final_array)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## The `Multiscene` class\n",
+    "In order to manipulate multiple scenes at once, in some circumstances, it may be advantageous to make use of satpy's `Multiscene` class. 
Let's create a multiscene instance, load some data and resample the data to a common area."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from satpy import MultiScene\n",
+    "mscn = MultiScene([scn_msg, scn_metop])\n",
+    "mscn.load(['overview'])\n",
+    "new_mscn = mscn.resample('eurol')\n",
+    "blended_scene = new_mscn.blend()\n",
+    "blended_scene.show('overview', overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/eum-2019-03/7-animation.ipynb b/eum-2019-03/7-animation.ipynb
new file mode 100644
index 0000000..ce93b56
--- /dev/null
+++ b/eum-2019-03/7-animation.ipynb
@@ -0,0 +1,102 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## MPEF MSG Animation"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Summary\n",
+    "In this tutorial we will load 6 hours' worth of MSG Level 1.5 data and create an animation, saved in mp4 format"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import sys\n",
+    "import satpy\n",
+    "from satpy import Scene\n",
+    "from satpy.multiscene import MultiScene\n",
+    "import imageio\n",
+    "\n",
+    "import glob"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Locate the image files and store them in a list\n",
+    "base_dir = '/tcenas/scratch/pytroll/ex7/'\n",
+    "files = glob.glob('{}*'.format(base_dir))\n",
+    "files = sorted(files)\n",
+    "\n",
+    "# Create a scene for each file and store it in the scenes list\n",
+    "scenes = [Scene(sensor=\"seviri\", reader='seviri_l1b_native', filenames=[filename]) for filename in files]\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "mscn = MultiScene(scenes)\n",
+    "# load a channel\n",
+    "mscn.load(['VIS006'])\n",
+    "mscn.save_animation('full_msg4_test.mp4', fps=2)\n",
+    "# View the animation in your browser\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Exercise 7.1: Create an animated composite resampled over Europe"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/eum-2019-03/8-saving-to-netcdf.ipynb b/eum-2019-03/8-saving-to-netcdf.ipynb
new file mode 100644
index 0000000..3f8415b
--- /dev/null
+++ b/eum-2019-03/8-saving-to-netcdf.ipynb
@@ -0,0 +1,32 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   
"pygments_lexer": "ipython3", + "version": "3.6.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/eum-2019-03/9-hands-on-data.ipynb b/eum-2019-03/9-hands-on-data.ipynb new file mode 100644 index 0000000..d057e99 --- /dev/null +++ b/eum-2019-03/9-hands-on-data.ipynb @@ -0,0 +1,161 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Exercise 9: Analysing data for an area of interest\n", + "\n", + "In previous exercises we've learned how to load and display satellite data using a `Scene`-object. In this exercise we will extract a piece of data from that scene and do simple analysis on it. Here we learn how to play with `dask`-arrays. Furthermore, we will touch some Python basics, i.e., string formatting, use of `glob`-module, and using slices." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 9.1: Find the file\n", + "\n", + "Use string formatting and `glob`-module ot get get a list of files in directory `/tcenas/scratch/pytroll/ex9`. Example of string formatting is given below" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "import glob\n", + "from satpy import Scene\n", + "\n", + "# String formatting example\n", + "str = '{} world!'\n", + "print (str.format('Hello'))\n", + "\n", + "# Using glob, find *.nat files in the given directory\n", + "base_dir = '/tcenas/scratch/pytroll/ex9'\n", + "files = \n", + "\n", + "# print found files\n", + "print (files)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 9.2: Create a `Scene`-object and load some data\n", + "\n", + "The resulting `files`-variable should only contain one file. Use the variable as a list to initiate a `Scene`-object and load `day_microphysics`-composite and channel data for 0.6um and 1.6um. 
_Hint:_ The `Scene`-object has a convenience method `available_dataset_names` to check how the different channels are named in the reader."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": false
+   },
+   "outputs": [],
+   "source": [
+    "# Create a Scene-object\n",
+    "scn = \n",
+    "# Let's define a composite variable for convenience\n",
+    "composite = 'day_microphysics'\n",
+    "# Make a list with the composite, and 0.6um and 1.6um channel data\n",
+    "datasets = \n",
+    "# Load datasets to the Scene-object\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Exercise 9.3: Resampling the `Scene`\n",
+    "\n",
+    "Resample the created `Scene`-object to the `EastEurope` area. Show the `day_microphysics`-composite."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Resample the scene to the new area\n",
+    "ee = \n",
+    "# Show composite\n",
+    "ee.show(composite)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Exercise 9.4: Data slicing and statistics\n",
+    "\n",
+    "Now we're going to work with the resampled image. The left-hand side shows a big thunderstorm system over East Europe. Take a data slice of `[245:275, 80:120]` from both the 0.6um and 1.6um channels and calculate the mean reflectance. _Hint_: The extracted data slice is a `dask`-array. These arrays are _lazy_. Also, we're using `slice`-objects here for convenient ROI definition."
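The `slice`-objects mentioned in the hint above are plain Python, so their behaviour can be tried without satpy or dask. A minimal sketch with made-up stand-in data (the real exercise slices a dask array, which accepts `data[rows, cols]` directly):

```python
# Slice objects reproduce literal [245:275, 80:120] indexing.
rows = slice(245, 275)   # same as 245:275
cols = slice(80, 120)    # same as 80:120

# Stand-in 2-D data: a 300x200 nested list with hypothetical values.
data = [[r * 1000 + c for c in range(200)] for r in range(300)]

# Nested lists need per-row slicing; a numpy/dask array would take data[rows, cols].
roi = [row[cols] for row in data[rows]]
print(len(roi), len(roi[0]))  # 30 40
```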
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Area of interest\n",
+    "rows = slice()\n",
+    "cols = slice()\n",
+    "# Slice the area of interest to a variable\n",
+    "vis06 = \n",
+    "ir16 = \n",
+    "# Compute and print the mean reflectances\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Exercise 9.5: Utilizing the data loaded for the composite\n",
+    "\n",
+    "The `day_microphysics` composite is a three-component data array. Here's how the composite is built: http://www.eumetrain.org/rgb_quick_guides/quick_guides/DaymicroRGB.pdf\n",
+    "Calculate and print the mean of the 0.8um channel using the data loaded for the `day_microphysics` composite. Furthermore, calculate and print the minimum, maximum, and standard deviation of the 10.8um channel."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Slice the area of interest to variables\n",
+    "vis08 = \n",
+    "ir108 =\n",
+    "# Compute and print the required statistics\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.6"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

From aced934ce0e043c7cc47f1c0ad991ed5615f81bf Mon Sep 17 00:00:00 2001
From: Sauli Joro
Date: Wed, 27 Mar 2019 09:41:16 +0000
Subject: [PATCH 5/7] Update installation instructions. Comment out netCDF
 saving in ex2.
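Exercise 9.1 in the notebook above combines `str.format` with `glob`; a self-contained sketch of the intended pattern (the `.nat` suffix and directory come from the exercise text; outside the TCE the glob simply returns an empty list):

```python
import glob

# Build the search pattern with str.format, as in the exercise
base_dir = '/tcenas/scratch/pytroll/ex9'
pattern = '{}/*.nat'.format(base_dir)
print(pattern)  # /tcenas/scratch/pytroll/ex9/*.nat

# sorted() gives a deterministic order; the result is [] if the directory is absent
files = sorted(glob.glob(pattern))
```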
---
 eum-2019-03/0-installation.ipynb | 7 ++++---
 eum-2019-03/2-start-metop.ipynb  | 8 ++++----
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/eum-2019-03/0-installation.ipynb b/eum-2019-03/0-installation.ipynb
index e6b36b5..6ce4276 100644
--- a/eum-2019-03/0-installation.ipynb
+++ b/eum-2019-03/0-installation.ipynb
@@ -40,13 +40,14 @@
     "\n",
     "Install the required libraries and modules with this monster one-liner:\n",
     "\n",
-    "1. `conda install -c conda-forge gdal satpy jupyter ffmpeg imageio imageio-ffmpeg shapely cartopy scipy rasterio mysql-connector-python matplotlib cartopy scipy pycoast pyorbital`\n",
+    "1. `conda install -c conda-forge gdal satpy jupyter ffmpeg \"imageio<2.5\" imageio-ffmpeg shapely cartopy scipy rasterio mysql-connector-python matplotlib cartopy scipy pycoast pyorbital`\n",
     "\n",
-    "### Update SEVIRI l1b-reader\n",
+    "### Apply updates to packages\n",
     "\n",
     "PyTroll packages are constantly being developed. In order to carry out some of the exercises, we need to get an updated version of one of the readers.\n",
     "\n",
-    "1. `cp /tcenas/scratch/pytroll/seviri_l1b_native.py ~/miniconda3/envs/troll/lib/python3.6/site-packages/satpy/readers/`\n",
+    "1. `pip install git+https://github.com/pytroll/satpy.git`\n",
+    "2. `cp /tcenas/scratch/pytroll/seviri_l1b_native.py ~/miniconda3/envs/troll/lib/python3.6/site-packages/satpy/readers/`\n",
     "\n",
     "\n",
     "## Using `jupyter`-notebook\n",
diff --git a/eum-2019-03/2-start-metop.ipynb b/eum-2019-03/2-start-metop.ipynb
index c7455c5..9346703 100644
--- a/eum-2019-03/2-start-metop.ipynb
+++ b/eum-2019-03/2-start-metop.ipynb
@@ -166,7 +166,7 @@
    "metadata": {},
    "source": [
     "## Saving the data\n",
-    "Satpy allows you to save the data in various formats, among which GeoTIFF and NetCDF"
+    "Satpy allows you to save the data in various formats, e.g. 
GeoTIFF" ] }, { @@ -175,8 +175,8 @@ "metadata": {}, "outputs": [], "source": [ - "newscn.save_dataset('my_fantastic_image.tiff')\n", - "newscn.save_dataset('my_fantastic_image.nc')" + "newscn.save_dataset(composite,filename='my_fantastic_image.tiff')\n", + "#newscn.save_dataset(composite,filename='my_fantastic_image.nc')" ] }, { @@ -213,7 +213,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.7" + "version": "3.6.8" } }, "nbformat": 4, From ff9fec31c7d2276f42d6c583deb82088310779ce Mon Sep 17 00:00:00 2001 From: Martin Raspaud Date: Wed, 27 Mar 2019 16:14:32 +0100 Subject: [PATCH 6/7] Fix exercise 3 --- eum-2019-03/3-pyorbital-passes.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/eum-2019-03/3-pyorbital-passes.ipynb b/eum-2019-03/3-pyorbital-passes.ipynb index 88462c5..b66fe5d 100644 --- a/eum-2019-03/3-pyorbital-passes.ipynb +++ b/eum-2019-03/3-pyorbital-passes.ipynb @@ -23,7 +23,7 @@ "metadata": {}, "source": [ "## Exercise 3.2 (Advanced)\n", - "Plot the first of the computed passes on a map. This can be achieved by using the pytroll-schedule package, that you will need to install, using the `feature-oo` branch. You also need to have basemap installed.\n", + "Plot the first of the computed passes on a map. This can be achieved by using the pytroll-schedule package, that you will need to install with pip. You also need to have basemap installed.\n", "Use the `trollsched.satpass.Pass` class. Documentation on this one is scarce, so look in the source code to see how it works.\n", "\n", "And if you feel very advanced, please convert this plotting facility to cartopy (as basemap is deprecated)." 
@@ -61,7 +61,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.6" + "version": "3.6.7" } }, "nbformat": 4, From d382603638fea568c41ff8bc8c6027b0cec11d63 Mon Sep 17 00:00:00 2001 From: David Hoese Date: Tue, 21 May 2019 11:52:16 -0500 Subject: [PATCH 7/7] Move eum exercises to pytroll-exercises-two-day directory --- pytroll-exercise-2018-10/0-Installation.ipynb | 91 ---- pytroll-exercise-2018-10/1-start-msg.ipynb | 246 ----------- .../10-start-sentinel-3.ipynb | 68 --- pytroll-exercise-2018-10/2-start-metop.ipynb | 221 ---------- .../3-pyorbital-passes.ipynb | 69 ---- pytroll-exercise-2018-10/4-custom-areas.ipynb | 104 ----- .../5-custom-reader.ipynb | 390 ------------------ .../6-Metop_and_MSG_on_the_same_grid.ipynb | 167 -------- .../7-A_Time_Loop_Animation.ipynb | 102 ----- .../8-Saving_to_netcdf.ipynb | 32 -- .../9-Computing_statistics.ipynb | 161 -------- pytroll-exercise-2018-10/sample_data/.keep | 0 .../0-installation.ipynb | 0 .../1-start-msg.ipynb | 0 .../10-start-sentinel-3.ipynb | 0 .../2-start-metop.ipynb | 0 .../3-pyorbital-passes.ipynb | 0 .../4-custom-areas.ipynb | 0 .../5-custom-reader.ipynb | 0 .../6-metop-and-msg-same-grid.ipynb | 0 .../7-animation.ipynb | 0 .../8-saving-to-netcdf.ipynb | 0 .../9-hands-on-data.ipynb | 0 .../sample_data/.keep | 0 24 files changed, 1651 deletions(-) delete mode 100644 pytroll-exercise-2018-10/0-Installation.ipynb delete mode 100644 pytroll-exercise-2018-10/1-start-msg.ipynb delete mode 100644 pytroll-exercise-2018-10/10-start-sentinel-3.ipynb delete mode 100644 pytroll-exercise-2018-10/2-start-metop.ipynb delete mode 100644 pytroll-exercise-2018-10/3-pyorbital-passes.ipynb delete mode 100644 pytroll-exercise-2018-10/4-custom-areas.ipynb delete mode 100644 pytroll-exercise-2018-10/5-custom-reader.ipynb delete mode 100644 pytroll-exercise-2018-10/6-Metop_and_MSG_on_the_same_grid.ipynb delete mode 100644 pytroll-exercise-2018-10/7-A_Time_Loop_Animation.ipynb delete 
mode 100644 pytroll-exercise-2018-10/8-Saving_to_netcdf.ipynb delete mode 100644 pytroll-exercise-2018-10/9-Computing_statistics.ipynb delete mode 100644 pytroll-exercise-2018-10/sample_data/.keep rename {eum-2019-03 => pytroll-exercises-two-day}/0-installation.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/1-start-msg.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/10-start-sentinel-3.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/2-start-metop.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/3-pyorbital-passes.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/4-custom-areas.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/5-custom-reader.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/6-metop-and-msg-same-grid.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/7-animation.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/8-saving-to-netcdf.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/9-hands-on-data.ipynb (100%) rename {eum-2019-03 => pytroll-exercises-two-day}/sample_data/.keep (100%) diff --git a/pytroll-exercise-2018-10/0-Installation.ipynb b/pytroll-exercise-2018-10/0-Installation.ipynb deleted file mode 100644 index e23c2b4..0000000 --- a/pytroll-exercise-2018-10/0-Installation.ipynb +++ /dev/null @@ -1,91 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Laptop configuration for the training\n", - "\n", - "The training will be conducted using X2Go-servers available in the TCE. Instructions on how to install X2Go on your laptop can be found in DM Tool document `EUM/RSP/VWG/18/1018393`.\n", - "\n", - "1. You should have X2Go already installed on your laptop, so go ahead and start it up using your TCE credentials\n", - "2. 
Open a new console from the Start menu -> System Tools -> Konsole (font size can be made bigger with Ctrl +)\n", - "\n", - "\n", - "## Installation of `miniconda3`\n", - "\n", - "Type the following commands to the terminal\n", - "\n", - "1. `wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh`\n", - "2. `bash ~/miniconda.sh -b -p $HOME/miniconda3`\n", - "3. `echo 'export PATH=\"$PATH:$HOME/miniconda3/bin\"' >> ~/.profile.local`\n", - "4. `source ~/.profile.local`\n", - "\n", - "## Create a new `conda` environment for the training\n", - "\n", - "The name of the environment can be chosen freely, for the sake of simplicity we will use name `troll`for this training.\n", - "\n", - "1. `conda create -n troll python=3.6`\n", - "2. `source activate troll`\n", - "\n", - "You should have an indication in your terminal prompt that the conda environment is active, e.g.:\n", - "`(troll) joro@x2g13:~ $`. When the `conda` environment is active, all the installations after this point will go only to this specific environment. The environment can be deactivated with `source deactivate`-command.\n", - "\n", - "\n", - "## Installation of 3rd party libraries\n", - "\n", - "1. `conda install -c conda-forge gdal`\n", - "\n", - "\n", - "## Installation of necessary Python modules\n", - "\n", - "1. `conda install -c anaconda jupyter`\n", - "2. `conda install -c menpo imageio`\n", - "3. `conda install -c menpo ffmpeg`\n", - "\n", - "## Installation of PyTroll packages\n", - "\n", - "PyTroll packages are constantly being developed. In order to carry out some the exercises, we need to use the latest versions of `satpy` and `geotiepoints`. Hence, the installation of these packages will be done directly from GitHub.\n", - "\n", - "1. `conda install -c conda-forge pyorbital`\n", - "2. `conda install -c conda-forge pycoast`\n", - "3. `conda install -c conda-forge satpy`\n", - "4. `conda uninstall satpy python-geotiepoints`\n", - "5. 
`pip install git+https://github.com/pytroll/python-geotiepoints.git`\n", - "6. `pip install git+https://github.com/pytroll/satpy.git`\n", - "7. `pip install git+https://github.com/pytroll/pyspectral.git`\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import satpy\n", - "print (satpy.__file__)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/1-start-msg.ipynb b/pytroll-exercise-2018-10/1-start-msg.ipynb deleted file mode 100644 index 07a1c50..0000000 --- a/pytroll-exercise-2018-10/1-start-msg.ipynb +++ /dev/null @@ -1,246 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Getting started with MSG level 1.5 data, native format" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Summary\n", - "In this tutorial, we will read MSG data and display a few composites and channels, in satellite projection and on predefined areas." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Installation and setup\n", - "If you went through the installation procedure described in tutorial 0, you should be all set up :)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Loading some data\n", - "\n", - "The first step is to find the files to load and to create a `Scene` instance." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "from satpy import Scene, find_files_and_readers\n", - "\n", - "files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex1/',\n", - " reader='native_msg')\n", - "\n", - "scn = Scene(filenames=files)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To list the available datasets, run the `available_dataset_names` method." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "scn.available_dataset_names()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Then we decide on a composite to load and display it on screen. To get the list of available composites, use the `available_composite_names` method." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(scn.available_composite_names())" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "\n", - "composite = 'natural_color'\n", - "scn.load([composite])\n", - "scn.show(composite)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The loaded composite is a `DataArray` from the `xarray` library, containing the data, the projection coordinates of the pixels and some metadata."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "scn[composite]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 1.1\n", - "Load and display other composites, like `fog` and `airmass`" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Resampling the data\n", - "\n", - "A simple call to the `resample` method is enough to resample the data to another area. Here, we use the predefined area `euro4`, but we will see in another tutorial how to customize the areas. Some other predefined areas are `euron1`, `eurol`, `germ`" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn = scn.resample('euro4', radius_of_influence=20000)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn.show(composite)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Saving the image to disk\n", - "The `save_dataset` method can be used to save the image to disk. Formats supported for writing are some of the standard image formats (png, jpeg), GeoTIFF, NetCDF. The default will be to save to GeoTIFF with a filename based on the composite name and the start time of the scene." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn.save_dataset(composite)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Alternatively, one can specify the filename to save to (and, via the file extension, the format)."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn.save_dataset(composite, filename='really_cool_pic.png')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Working with the individual channels\n", - "It is also possible to load the seviri channels individually (using their names or approximate wavelengths in µm) and work with them. To get the list of available channels, use the `available_dataset_names` method." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "scn.load([10.8, 12.0])\n", - "scn['t11-t12'] = scn[10.8] - scn[12.0]\n", - "scn.show('t11-t12')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 1.2\n", - "Try to resample `t11-t12`, and if it doesn't work, try to fix it :)\n", - "\n", - "(Hint: look at the metadata)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/10-start-sentinel-3.ipynb b/pytroll-exercise-2018-10/10-start-sentinel-3.ipynb deleted file mode 100644 index c0f2849..0000000 --- a/pytroll-exercise-2018-10/10-start-sentinel-3.ipynb +++ /dev/null @@ -1,68 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Sentinel 3 fun" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - 
"source": [ - "## Exercise 10.1: Reading the OLCI data\n", - "Find out which reader to use from the satpy reader table on this page: https://satpy.readthedocs.io/en/latest/index.html\n", - "\n", - "Then create a True Color composite from it." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 10.2: Reading the SLSTR data\n", - "Find out which reader to use from the satpy reader table on this page: https://satpy.readthedocs.io/en/latest/index.html\n", - "\n", - "Then create an Overview Sun composite from it." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 10.3: Colorize\n", - "Choose a nice colormap and colorize the SLSTR thermal channel at ~11µm" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 10.4\n", - "Combine the results of exercises 10.1 and 10.3 to create a sandwich composite" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.3" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/2-start-metop.ipynb b/pytroll-exercise-2018-10/2-start-metop.ipynb deleted file mode 100644 index b86a943..0000000 --- a/pytroll-exercise-2018-10/2-start-metop.ipynb +++ /dev/null @@ -1,221 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Playing with Metop AVHRR data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Summary\n", - "In this tutorial, we will read Metop AVHRR data and display a few composites, in satellite projection and on a couple of areas, and add coastlines to some images."
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Finding the files, filtering with time\n", - "The data for this tutorial consists of a handful of Metop-B AVHRR/3 granules spanning a couple of hours. Since we only want to work with a fraction of all that data, one solution is to define a start and end time and filter the files we provide to `Scene`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "from satpy import Scene, find_files_and_readers\n", - "from datetime import datetime\n", - "import pprint\n", - "\n", - "files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex2',\n", - " reader='avhrr_eps_l1b',\n", - " start_time=datetime(2018, 10, 7, 9, 25),\n", - " end_time=datetime(2018, 10, 7, 9, 30))\n", - "\n", - "#files=sorted(files)\n", - "scn = Scene(filenames=files)\n", - "pprint.pprint(files)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Then we decide on a composite to load and display it on screen" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(scn.available_composite_names())" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "composite = 'overview'\n", - "scn.load([composite])\n", - "scn.show(composite)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Resampling and displaying with coastlines\n", - "Resampling is done exactly as in the previous tutorial. However, this time we will display the data with coastlines on top."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn = scn.resample('euro1')\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": false - }, - "outputs": [], - "source": [ - "newscn.show(composite, overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For more information on which parameters you can pass for the coastline burning, check the pycoast documentation." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Dynamic area resampling\n", - "For polar data, it can be convenient to resample the data in a given projection, but adapt the area extent to the actual data being resampled. For this, we have at the moment two dynamic areas we can use: `omerc_bb` for the Oblique Mercator projection and `laea_bb` for the Lambert azimuthal equal-area projection. _Note_: `laea_bb` here will not yield a sensible result for the dataset used." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Resample and show the image\n", - "newscn = scn.resample('omerc_bb')\n", - "newscn.show(composite, overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Filtering the data by area\n", - "When we have many data granules, it can be convenient to load only those covering a given area. 
We can achieve this by using parameter filtering during the `Scene` instantiation" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex2',\n", - " reader='avhrr_eps_l1b')\n", - "scn = Scene(filenames=files, filter_parameters={'area': 'germ'})" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "composite = 'natural_color'\n", - "scn.load([composite])\n", - "newscn = scn.resample('eurol')\n", - "newscn.show(composite, overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Saving the data\n", - "Satpy allows you to save the data in various formats, among which GeoTIFF and NetCDF" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn.save_dataset(composite, filename='my_fantastic_image.tiff')\n", - "newscn.save_dataset(composite, filename='my_fantastic_image.nc')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 2.1\n", - "Generate an `ndvi` product using the individual channels and display the result on the `euron1` area. 
Then save your work as PNG.\n", - "\n", - "NDVI formula: ndvi = (vis08 - vis06) / (vis08 + vis06)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/3-pyorbital-passes.ipynb b/pytroll-exercise-2018-10/3-pyorbital-passes.ipynb deleted file mode 100644 index 88462c5..0000000 --- a/pytroll-exercise-2018-10/3-pyorbital-passes.ipynb +++ /dev/null @@ -1,69 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Predicting polar orbiter passes\n", - "\n", - "In this tutorial, we look at how to predict passes of polar satellites using pyorbital" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 3.1\n", - "Look at the pyorbital documentation, compute the current position of NOAA-20, and predict the passes for NOAA-20 for the next 24 hours over EUMETSAT HQ.\n", - "https://pyorbital.readthedocs.io/en/latest/\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 3.2 (Advanced)\n", - "Plot the first of the computed passes on a map. This can be achieved by using the pytroll-schedule package, that you will need to install, using the `feature-oo` branch. You also need to have basemap installed.\n", - "Use the `trollsched.satpass.Pass` class. Documentation on this one is scarce, so look in the source code to see how it works.\n", - "\n", - "And if you feel very advanced, please convert this plotting facility to cartopy (as basemap is deprecated)." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 3.3\n", - "Compute the sun zenith angle at the previously computed sub-satellite coordinates." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/4-custom-areas.ipynb b/pytroll-exercise-2018-10/4-custom-areas.ipynb deleted file mode 100644 index 5629a7a..0000000 --- a/pytroll-exercise-2018-10/4-custom-areas.ipynb +++ /dev/null @@ -1,104 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# More about areas and pyresample\n", - "\n", - "## Summary\n", - "\n", - "In this tutorial, we will learn more about defining areas to work with in pytroll/satpy, and how to save them for later use.\n", - "\n", - "## Defining Area Definitions\n", - "\n", - "The areas in pytroll/satpy are defined through the pyresample package.\n", - "\n", - "To define an area, have a look at the pyresample documentation:\n", - "https://pyresample.readthedocs.io/en/latest/geo_def.html\n", - "\n", - "In this tutorial, we will create an area definition that covers Sweden, following the official SWEREF99TM projection.\n", - "\n", - "The projection to use is Transverse Mercator, with the following parameters:\n", - "- reference longitude is 15\n", - "- scale factor is 0.9996\n", - "\n", - "Sweden is included in a lon/lat box spanning approximately 10 to 25 degrees east and 55 to 70 degrees north. 
(If you're curious, look here: https://en.wikipedia.org/wiki/Geography_of_Sweden#Latitude_and_longitude)\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 4.1\n", - "Use pyproj to determine the area extent in projection coordinates of the bounding box.\n", - "\n", - "Pyproj documentation: https://jswhit.github.io/pyproj/\n", - "\n", - "The transverse mercator projection parameters https://proj4.org/operations/projections/tmerc.html" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 4.2\n", - "Create a pyresample AreaDefinition object using your computed parameters, with a resolution of ~1km" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 4.3\n", - "Resample your metop data onto your newly defined area definition." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Saving the area\n", - "The area definitions can be saved in YAML format. An `areas.yaml` file is provided in the satpy installation (https://github.com/pytroll/satpy/blob/feature-areas-yaml/satpy/etc/areas.yaml), but for areas you want to save, it is best to have an `areas.yaml` file in your working directory that will complete or replace the standard one. Alternatively, you can place your areas file in a directory pointed to by the `PPP_CONFIG_DIR` environment variable.\n", - "To export the area you created, use the area's `create_areas_def` method." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 4.4\n", - "Save the area you just defined into your own `areas.yaml` file. Resample your metop data using the saved area definition, i.e. by referring to it by name in the `scn.resample` call."
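One possible starting point for exercise 4.1 — a sketch only. It assumes the GRS80 ellipsoid and no false easting (the official SWEREF99 TM definition adds a 500 km false easting), and approximates the extent from the four corners of the lon/lat box:

```python
from pyproj import Proj

# Transverse Mercator with the parameters given above; the ellipsoid is an assumption
sweref_like = Proj(proj='tmerc', lon_0=15.0, k=0.9996, ellps='GRS80')

# Project the four corners of the 10-25E / 55-70N box and take the envelope
corners = [(10.0, 55.0), (25.0, 55.0), (10.0, 70.0), (25.0, 70.0)]
xs, ys = zip(*(sweref_like(lon, lat) for lon, lat in corners))
area_extent = (min(xs), min(ys), max(xs), max(ys))
print(area_extent)
```

From `area_extent`, the ~1 km `AreaDefinition` of exercise 4.2 follows by dividing the extent width and height by 1000 m to obtain the pixel counts.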
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/5-custom-reader.ipynb b/pytroll-exercise-2018-10/5-custom-reader.ipynb deleted file mode 100644 index b53dd1b..0000000 --- a/pytroll-exercise-2018-10/5-custom-reader.ipynb +++ /dev/null @@ -1,390 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Adding a custom reader to satpy\n", - "\n", - "\n", - "In order to add a reader to satpy, you will need to create two files:\n", - " - a YAML file for describing the files to read and the datasets that are available\n", - " - a python file implementing the actual reading of the datasets and metadata\n", - "\n", - "For this tutorial, we will implement a reader for the Eumetsat NetCDF format for SEVIRI data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## The YAML file\n", - "\n", - "The yaml file is composed of three sections:\n", - "- the `reader` section, that provides basic parameters for the reader\n", - "- the `file_types` section, which gives the patterns of the files this reader can handle\n", - "- the `datasets` section, describing the datasets available from this reader \n", - "\n", - "### The `reader` section\n", - "The `reader` section, that provides basic parameters for the reader.\n", - "\n", - "The parameters to provide in this section are:\n", - "- description: General description of the reader\n", - "- name: this is the name of the reader, it should be the same as the 
filename (without the .yaml extension obviously). This is the name used interactively in satpy, so choose it well! A loose convention is to use `__` as a template for the name\n", - "- sensors: the list of sensors this reader will support\n", - "- reader: the metareader to use, in most cases the `FileYAMLReader` is a good choice." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "```yaml\n", - "reader:\n", - " description: NetCDF4 reader for the Eumetsat MSG format\n", - " name: nc_seviri_l1b\n", - " sensors: [seviri]\n", - " reader: !!python/name:satpy.readers.yaml_reader.FileYAMLReader\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### The `file_types` section\n", - "\n", - "Each file type needs to provide:\n", - " - `file_reader`, the class that will handle the files for this reader, that you will implement in the corresponding python file (see next section)\n", - " - `file_patterns`, the patterns to match to find files this reader can handle. The syntax to use is basically the same as `format` with the addition of time. See the [trollsift package documentation](https://trollsift.readthedocs.io/en/latest/usage.html) for more details.\n", - " - Optionally, a file type can have a `requires` field: it is a list of file types that the current file types needs to function. 
For example, the HRIT MSG format segment files each need a prologue and epilogue file to be read properly, hence in this case we have added `requires: [HRIT_PRO, HRIT_EPI]` to the file type definition.\n", - " \n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "```yaml\n", - "file_types:\n", - " nc_seviri_l1b:\n", - " file_reader: !!python/name:satpy.readers.nc_seviri_l1b.NCSEVIRIFileHandler\n", - " file_patterns: ['W_XX-EUMETSAT-Darmstadt,VIS+IR+IMAGERY,{satid:4s}+SEVIRI_C_EUMG_{processing_time:%Y%m%d%H%M%S}.nc']\n", - " nc_seviri_l1b_hrv:\n", - " file_reader: !!python/name:satpy.readers.nc_seviri_l1b.NCSEVIRIHRVFileHandler\n", - " file_patterns: ['W_XX-EUMETSAT-Darmstadt,HRV+IMAGERY,{satid:4s}+SEVIRI_C_EUMG_{processing_time:%Y%m%d%H%M%S}.nc']\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### The `datasets` section\n", - "\n", - "The datasets section describes each dataset available in the files. The parameters provided are made available to the methods of the implementing class.\n", - "\n", - "Parameters you can define for example are:\n", - " - name\n", - " - sensor\n", - " - resolution\n", - " - wavelength\n", - " - polarization\n", - " - standard_name: the name used for the dataset, that will be used for knowing what kind of data it is and handle it appropriately\n", - " - units: the units of the data, important to get consistent processing across multiple platforms/instruments\n", - " - modifiers: what modification have already been applied to the data, eg `sunz_corrected`\n", - " - file_type\n", - " - coordinates: this tells which datasets to load to navigate the current dataset\n", - " - and any other field that is relevant for the reader\n", - "\n", - "This section can be copied and adapted simply from existing seviri readers, like for example the `msg_native` reader." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "```yaml\n", - "\n", - "datasets:\n", - " HRV:\n", - " name: HRV\n", - " resolution: 1000.134348869\n", - " wavelength: [0.5, 0.7, 0.9]\n", - " calibration:\n", - " reflectance:\n", - " standard_name: toa_bidirectional_reflectance\n", - " units: \"%\"\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b_hrv\n", - "\n", - " IR_016:\n", - " name: IR_016\n", - " resolution: 3000.403165817\n", - " wavelength: [1.5, 1.64, 1.78]\n", - " calibration:\n", - " reflectance:\n", - " standard_name: toa_bidirectional_reflectance\n", - " units: \"%\"\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - " nc_key: 'ch3'\n", - "\n", - " IR_039:\n", - " name: IR_039\n", - " resolution: 3000.403165817\n", - " wavelength: [3.48, 3.92, 4.36]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: K\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - " nc_key: 'ch4'\n", - "\n", - " IR_087:\n", - " name: IR_087\n", - " resolution: 3000.403165817\n", - " wavelength: [8.3, 8.7, 9.1]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: K\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " IR_097:\n", - " name: IR_097\n", - " 
resolution: 3000.403165817\n", - " wavelength: [9.38, 9.66, 9.94]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: K\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " IR_108:\n", - " name: IR_108\n", - " resolution: 3000.403165817\n", - " wavelength: [9.8, 10.8, 11.8]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: K\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " IR_120:\n", - " name: IR_120\n", - " resolution: 3000.403165817\n", - " wavelength: [11.0, 12.0, 13.0]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: K\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " IR_134:\n", - " name: IR_134\n", - " resolution: 3000.403165817\n", - " wavelength: [12.4, 13.4, 14.4]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: K\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " VIS006:\n", - " name: VIS006\n", - " resolution: 3000.403165817\n", - " wavelength: [0.56, 0.635, 0.71]\n", - " calibration:\n", - " reflectance:\n", - " standard_name: toa_bidirectional_reflectance\n", - " units: \"%\"\n", - 
" radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " VIS008:\n", - " name: VIS008\n", - " resolution: 3000.403165817\n", - " wavelength: [0.74, 0.81, 0.88]\n", - " calibration:\n", - " reflectance:\n", - " standard_name: toa_bidirectional_reflectance\n", - " units: \"%\"\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " WV_062:\n", - " name: WV_062\n", - " resolution: 3000.403165817\n", - " wavelength: [5.35, 6.25, 7.15]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: \"K\"\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " WV_073:\n", - " name: WV_073\n", - " resolution: 3000.403165817\n", - " wavelength: [6.85, 7.35, 7.85]\n", - " calibration:\n", - " brightness_temperature:\n", - " standard_name: toa_brightness_temperature\n", - " units: \"K\"\n", - " radiance:\n", - " standard_name: toa_outgoing_radiance_per_unit_wavelength\n", - " units: W m-2 um-1 sr-1\n", - " counts:\n", - " standard_name: counts\n", - " units: count\n", - " file_type: nc_seviri_l1b\n", - "\n", - " \n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The YAML file is now ready, let's go on with the corresponding python file.\n", - "## The python file\n", - "The python files needs to implement a file handler class for each file type that we want to read.\n", - "Such a class needs to implement a few methods:\n", - "- the `__init__` method, that takes as arguments \n", - 
" - the filename (string)\n", - " - the filename info (dict) that we get by parsing the filename using the pattern defined in the yaml file\n", - " - the filetype info that we get from the filetype definition in the yaml file\n", - " \n", - " This method can also receive other file handler instances as parameters if the filetype at hand has requirements. (See the explanation in the YAML file filetype section above)\n", - "- the `get_dataset` method, which takes as arguments \n", - " - the dataset ID of the dataset to load\n", - " - the dataset info that is the description of the channel in the YAML file\n", - " \n", - " This method has to return an xarray.DataArray instance if the loading is successful, containing the data and metadata of the loaded dataset, or return None if the loading was unsuccessful.\n", - "- the `get_area_def` method, which takes as its single argument the dataset ID for which we want the area. For the data that cannot be geolocated with an area definition, the pixel coordinates need to be loadable from `get_dataset` for the resulting scene to be navigated. That is, if the data cannot be geolocated with an area definition then the dataset section should specify `coordinates: [longitude_dataset, latitude_dataset]`\n", - "- Optionally, the `get_bounding_box` method can be implemented if filtering files by area is desirable for this data type \n", - "\n", - "On top of that, two attributes need to be defined: `start_time` and `end_time`, that define the start and end times of the sensing."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# this is nc_seviri_l1b.py\n", - "import xarray as xr\n", - "\n", - "from satpy import CHUNK_SIZE  # dask chunk size used by satpy readers\n", - "from satpy.readers.file_handlers import BaseFileHandler\n", - "\n", - "\n", - "class NCSEVIRIFileHandler(BaseFileHandler):\n", - " def __init__(self, filename, filename_info, filetype_info):\n", - " super(NCSEVIRIFileHandler, self).__init__(filename, filename_info, filetype_info)\n", - " self.nc = None\n", - " \n", - " def get_dataset(self, dataset_id, dataset_info):\n", - " if dataset_id.calibration != 'radiance':\n", - " # TODO: implement calibration to reflectance or brightness temperature\n", - " return\n", - " if self.nc is None:\n", - " self.nc = xr.open_dataset(self.filename,\n", - " decode_cf=True,\n", - " mask_and_scale=True,\n", - " chunks={'num_columns_vis_ir': CHUNK_SIZE,\n", - " 'num_rows_vis_ir': CHUNK_SIZE})\n", - " # rename the NetCDF dimensions to the x/y names satpy expects\n", - " self.nc = self.nc.rename({'num_columns_vis_ir': 'x', 'num_rows_vis_ir': 'y'})\n", - " dataset = self.nc[dataset_info['nc_key']]\n", - " dataset.attrs.update(dataset_info)\n", - " return dataset\n", - " \n", - " def get_area_def(self, dataset_id):\n", - " # TODO\n", - " pass\n", - " \n", - "class NCSEVIRIHRVFileHandler(BaseFileHandler):\n", - " pass  # left as an exercise to the reader :)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/6-Metop_and_MSG_on_the_same_grid.ipynb b/pytroll-exercise-2018-10/6-Metop_and_MSG_on_the_same_grid.ipynb deleted file mode 100644 index 8e349f0..0000000 --- a/pytroll-exercise-2018-10/6-Metop_and_MSG_on_the_same_grid.ipynb +++ /dev/null @@ -1,167 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Visualizing Metop and MSG 
data on the same grid\n", - "## Summary\n", - "In this tutorial, we will look at how we can put both Metop and MSG data on the same grid.\n", - "\n", - "## First step, loading the data\n", - "Loading the data is done in the same way as presented in the earlier tutorials.\n", - "\n", - "## Exercise 6.1: Getting the data on the same grid\n", - "Create a `Scene` object for the Metop data and another one for the MSG data. Load the `day_microphysics` RGB composite from MSG and the 10.8µm channel from Metop. Resample both to the EastEurope area." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from satpy import Scene, find_files_and_readers\n", - "from datetime import datetime\n", - "\n", - "metop_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", - " reader='avhrr_eps_l1b',\n", - " start_time=datetime(2018, 10, 7, 9, 25),\n", - " end_time=datetime(2018, 10, 7, 9, 30))\n", - "scn_metop = Scene(filenames=metop_files)\n", - "scn_metop.load([10.8])" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "msg_files = find_files_and_readers(base_dir='/tcenas/scratch/pytroll/ex6',\n", - " reader='native_msg')\n", - "\n", - "scn_msg = Scene(filenames=msg_files)\n", - "scn_msg.load(['day_microphysics', 'IR_108'])\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "newscn_metop = scn_metop.resample('eurol')\n", - "newscn_metop.show(10.8)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "newscn_msg = scn_msg.resample('eurol')\n", - "newscn_msg.show('IR_108')\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 6.2: Manipulating images\n", - "Now we want to put both datasets onto one image."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from satpy.writers import get_enhanced_image\n", - "import xarray as xr\n", - "from trollimage.xrimage import XRImage\n", - "\n", - "# Let's compare the MSG and Metop channel 10.8 data\n", - "msg_image = get_enhanced_image(newscn_msg['IR_108'])\n", - "\n", - "metop_image = get_enhanced_image(newscn_metop[10.8])\n", - "\n", - "# Use the Metop data where it is valid, MSG data elsewhere\n", - "array1 = msg_image.data.where(metop_image.data.isnull(), metop_image.data)\n", - "\n", - "XRImage(array1)\n", - "\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Now using the MSG composite instead of a single channel\n", - "msg_image = get_enhanced_image(newscn_msg['day_microphysics'])\n", - "color_array = xr.concat((metop_image.data, metop_image.data, metop_image.data), 'bands')\n", - "color_array['bands'] = ['R', 'G', 'B']\n", - "final_array = msg_image.data.where(color_array.isnull(), color_array.data)\n", - "XRImage(final_array)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## The `MultiScene` class\n", - "In order to manipulate multiple scenes at once, it may in some circumstances be advantageous to make use of satpy's `MultiScene` class.
Let's create a `MultiScene` instance, load some data and resample the data to the `EastEurope` area." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from satpy import MultiScene\n", - "mscn = MultiScene([scn_msg, scn_metop])\n", - "mscn.load(['overview'])\n", - "new_mscn = mscn.resample('eurol')\n", - "blended_scene = new_mscn.blend()\n", - "blended_scene.show('overview', overlay={'coast_dir': '/tcenas/scratch/pytroll/shapes/', 'color': (255, 0, 0), 'resolution': 'i'})" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/7-A_Time_Loop_Animation.ipynb b/pytroll-exercise-2018-10/7-A_Time_Loop_Animation.ipynb deleted file mode 100644 index abdd649..0000000 --- a/pytroll-exercise-2018-10/7-A_Time_Loop_Animation.ipynb +++ /dev/null @@ -1,102 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## MPEF MSG Animation" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Summary\n", - "In this tutorial we will load 6 hours' worth of MSG Level 1.5 data and create an animation, saved in MP4 format." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import sys\n", - "import satpy\n", - "from satpy import Scene\n", - "from satpy.multiscene import MultiScene\n", - "import imageio\n", - "\n", - "import glob" - ] - }, - { - "cell_type": "code", - "execution_count":
null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "# locate and store then Image files in a list\n", - "base_dir='/tcenas/scratch/pytroll/ex7/'\n", - "files = glob.glob('{}*'.format(base_dir))\n", - "files=sorted(files)\n", - "\n", - "# Create a scene for each file and store in scenes list\n", - "scenes = [Scene(sensor=\"seviri\",reader='native_msg', filenames=[filenames]) for filenames in files]\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "mscn = MultiScene(scenes)\n", - "# load a channel\n", - "mscn.load(['VIS006'])\n", - "mscn.save_animation('full_msg4_test.mp4', fps=2)\n", - "# View the animation in your browser\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 1 Create an animated composite resampled over Europe" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/8-Saving_to_netcdf.ipynb b/pytroll-exercise-2018-10/8-Saving_to_netcdf.ipynb deleted file mode 100644 index 3f8415b..0000000 --- a/pytroll-exercise-2018-10/8-Saving_to_netcdf.ipynb +++ /dev/null @@ -1,32 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, 
- "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.3" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/9-Computing_statistics.ipynb b/pytroll-exercise-2018-10/9-Computing_statistics.ipynb deleted file mode 100644 index d057e99..0000000 --- a/pytroll-exercise-2018-10/9-Computing_statistics.ipynb +++ /dev/null @@ -1,161 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Exercise 9: Analysing data for an area of interest\n", - "\n", - "In previous exercises we've learned how to load and display satellite data using a `Scene`-object. In this exercise we will extract a piece of data from that scene and do simple analysis on it. Here we learn how to play with `dask`-arrays. Furthermore, we will touch some Python basics, i.e., string formatting, use of `glob`-module, and using slices." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.1: Find the file\n", - "\n", - "Use string formatting and `glob`-module ot get get a list of files in directory `/tcenas/scratch/pytroll/ex9`. Example of string formatting is given below" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "import glob\n", - "from satpy import Scene\n", - "\n", - "# String formatting example\n", - "str = '{} world!'\n", - "print (str.format('Hello'))\n", - "\n", - "# Using glob, find *.nat files in the given directory\n", - "base_dir = '/tcenas/scratch/pytroll/ex9'\n", - "files = \n", - "\n", - "# print found files\n", - "print (files)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.2: Create a `Scene`-object and load some data\n", - "\n", - "The resulting `files`-variable should only contain one file. 
Use the variable as a list to instantiate a `Scene`-object and load the `day_microphysics`-composite and channel data for 0.6um and 1.6um. _Hint:_ the `Scene`-object has a convenience method `available_dataset_names` to check how the different channels are named in the reader." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": false - }, - "outputs": [], - "source": [ - "# Create a Scene-object\n", - "scn = \n", - "# Let's define a composite variable for convenience\n", - "composite = 'day_microphysics'\n", - "# Make a list with the composite, and 0.6um and 1.6um channel data\n", - "datasets = \n", - "# Load datasets to the Scene-object\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.3: Resampling the `Scene`\n", - "\n", - "Resample the created `Scene`-object to the `EastEurope` area. Show the `day_microphysics`-composite." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "# Resample the scene to the new area\n", - "ee = \n", - "# Show composite\n", - "ee.show(composite)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.4: Data slicing and statistics\n", - "\n", - "Now we're going to work with the resampled image. The left-hand side shows a big thunderstorm system over East Europe. Take a data slice of `[245:275, 80:120]` from both the 0.6um and 1.6um channels and calculate the mean reflectance. _Hint_: The extracted data slice is a `dask`-array. These arrays are _lazy_. Also, we're using `slice`-objects here for convenient ROI definition."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Area of interest\n", - "rows = slice()\n", - "cols = slice()\n", - "# Slice the area of interest to a variable\n", - "vis06 = \n", - "ir16 = \n", - "# Compute and print the mean reflectances\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 9.5: Utilizing the data loaded for the composite\n", - "\n", - "The `day_microphysics` composite is a three-component data array. Here's how the composite is built: http://www.eumetrain.org/rgb_quick_guides/quick_guides/DaymicroRGB.pdf\n", - "Calculate and print the mean of the 0.8um channel using the data loaded for the `day_microphysics` composite. Furthermore, calculate and print the minimum, maximum, and standard deviation of the 10.8um channel." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Slice the area of interest to variables\n", - "vis08 = \n", - "ir108 = \n", - "# Compute and print the required statistics\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/pytroll-exercise-2018-10/sample_data/.keep b/pytroll-exercise-2018-10/sample_data/.keep deleted file mode 100644 index e69de29..0000000 diff --git a/eum-2019-03/0-installation.ipynb b/pytroll-exercises-two-day/0-installation.ipynb similarity index 100% rename from eum-2019-03/0-installation.ipynb rename to pytroll-exercises-two-day/0-installation.ipynb diff --git a/eum-2019-03/1-start-msg.ipynb b/pytroll-exercises-two-day/1-start-msg.ipynb similarity index
100% rename from eum-2019-03/1-start-msg.ipynb rename to pytroll-exercises-two-day/1-start-msg.ipynb diff --git a/eum-2019-03/10-start-sentinel-3.ipynb b/pytroll-exercises-two-day/10-start-sentinel-3.ipynb similarity index 100% rename from eum-2019-03/10-start-sentinel-3.ipynb rename to pytroll-exercises-two-day/10-start-sentinel-3.ipynb diff --git a/eum-2019-03/2-start-metop.ipynb b/pytroll-exercises-two-day/2-start-metop.ipynb similarity index 100% rename from eum-2019-03/2-start-metop.ipynb rename to pytroll-exercises-two-day/2-start-metop.ipynb diff --git a/eum-2019-03/3-pyorbital-passes.ipynb b/pytroll-exercises-two-day/3-pyorbital-passes.ipynb similarity index 100% rename from eum-2019-03/3-pyorbital-passes.ipynb rename to pytroll-exercises-two-day/3-pyorbital-passes.ipynb diff --git a/eum-2019-03/4-custom-areas.ipynb b/pytroll-exercises-two-day/4-custom-areas.ipynb similarity index 100% rename from eum-2019-03/4-custom-areas.ipynb rename to pytroll-exercises-two-day/4-custom-areas.ipynb diff --git a/eum-2019-03/5-custom-reader.ipynb b/pytroll-exercises-two-day/5-custom-reader.ipynb similarity index 100% rename from eum-2019-03/5-custom-reader.ipynb rename to pytroll-exercises-two-day/5-custom-reader.ipynb diff --git a/eum-2019-03/6-metop-and-msg-same-grid.ipynb b/pytroll-exercises-two-day/6-metop-and-msg-same-grid.ipynb similarity index 100% rename from eum-2019-03/6-metop-and-msg-same-grid.ipynb rename to pytroll-exercises-two-day/6-metop-and-msg-same-grid.ipynb diff --git a/eum-2019-03/7-animation.ipynb b/pytroll-exercises-two-day/7-animation.ipynb similarity index 100% rename from eum-2019-03/7-animation.ipynb rename to pytroll-exercises-two-day/7-animation.ipynb diff --git a/eum-2019-03/8-saving-to-netcdf.ipynb b/pytroll-exercises-two-day/8-saving-to-netcdf.ipynb similarity index 100% rename from eum-2019-03/8-saving-to-netcdf.ipynb rename to pytroll-exercises-two-day/8-saving-to-netcdf.ipynb diff --git a/eum-2019-03/9-hands-on-data.ipynb 
b/pytroll-exercises-two-day/9-hands-on-data.ipynb similarity index 100% rename from eum-2019-03/9-hands-on-data.ipynb rename to pytroll-exercises-two-day/9-hands-on-data.ipynb diff --git a/eum-2019-03/sample_data/.keep b/pytroll-exercises-two-day/sample_data/.keep similarity index 100% rename from eum-2019-03/sample_data/.keep rename to pytroll-exercises-two-day/sample_data/.keep
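For reference, the slicing-and-statistics pattern that Exercise 9 above asks for can be sketched outside satpy as well. This is a minimal illustration using plain numpy in place of the dask-backed channel arrays a `Scene` would provide; the `vis06_full` and `ir16_full` arrays are hypothetical stand-ins for the resampled 0.6um and 1.6um channel data.

```python
import numpy as np

# Hypothetical reflectance fields standing in for the resampled channel data.
# In the real exercise these would be lazy dask arrays taken from the Scene.
rng = np.random.default_rng(seed=0)
vis06_full = rng.random((400, 400))
ir16_full = rng.random((400, 400))

# Region of interest from Exercise 9.4, expressed with slice objects
rows = slice(245, 275)
cols = slice(80, 120)

# Slice the area of interest to variables
vis06 = vis06_full[rows, cols]
ir16 = ir16_full[rows, cols]

# Compute and print the mean reflectances and some extra statistics
print("0.6um mean:", vis06.mean())
print("1.6um mean:", ir16.mean())
print("1.6um min/max/std:", ir16.min(), ir16.max(), ir16.std())
```

With dask-backed arrays the same expressions stay lazy until `.compute()` (or printing) forces evaluation, which is why slicing before reducing keeps the computation cheap.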