{ "cells": [ { "cell_type": "markdown", "id": "77da5fe5-80a7-4952-86e4-f39b3f06ddef", "metadata": {}, "source": [ "## ATMODAT Standard Compliance Checker\n", "\n", "This notebook introduces you to the [atmodat checker](https://github.com/AtMoDat/atmodat_data_checker) which contains checks to ensure compliance with the ATMODAT Standard.\n", "\n", "> Its core functionality is based on the [IOOS compliance checker](https://github.com/ioos/compliance-checker). The ATMODAT Standard Compliance Checker library makes use of [cc-yaml](https://github.com/cedadev/cc-yaml), which provides a plugin for the IOOS compliance checker that generates check suites from YAML descriptions. Furthermore, the Compliance Check Library is used as the basis to define generic, reusable compliance checks.\n", "\n", "In addition, the compliance to the **CF Conventions 1.4 or higher** is verified with the [CF checker](https://github.com/cedadev/cf-checker)." ] }, { "cell_type": "markdown", "id": "edb35c53-dc33-4f1f-a4af-5a8ea69e5dfe", "metadata": {}, "source": [ "In this notebook, you will learn\n", "\n", "- [how to use an environment on DKRZ HPC mistral or levante](#Preparation)\n", "- [how to run checks with the atmodat data checker](#Application)\n", "- [to understand the results of the checker and further analyse it with pandas](#Results)\n", "- [how you could proceed to cure the data with xarray if it does not pass the QC](#Curation)" ] }, { "cell_type": "markdown", "id": "3abf2250-4b78-4043-82fe-189875d692f2", "metadata": { "tags": [] }, "source": [ "### Preparation\n", "\n", "On DKRZ's High-performance computer PC, we provide a `conda` environment which are useful for working with data in DKRZ’s CMIP Data Pool.\n", "\n", "**Option 1: Activate checker libraries for working with a comand-line shell**\n", "\n", "If you like to work with shell commands, you can simply activate the environment. 
Prior to this, you may have\n", "to load a module with a recent Python interpreter:\n", "\n", "```bash\n", "module load python3/unstable\n", "# The following line activates the quality-assurance environment with the checker libraries so that you can execute them with shell commands:\n", "source activate /work/bm0021/conda-envs/quality-assurance\n", "``` " ] }, { "cell_type": "markdown", "id": "dff94c1c-8aa1-42aa-9486-f6d5a6df1884", "metadata": { "tags": [] }, "source": [ "**Option 2: Create a kernel with checker libraries to work with Jupyter notebooks**\n", "\n", "With `ipykernel` you can install a *kernel* which can be used within a Jupyter server like [JupyterHub](https://jupyterhub.dkrz.de). `ipykernel` creates the kernel based on the activated environment.\n", "\n", "```bash\n", "module load python3/unstable\n", "# The following line activates the quality-assurance environment with the checker libraries so that you can execute them with shell commands:\n", "source activate /work/bm0021/conda-envs/quality-assurance\n", "python -m ipykernel install --user --name qualitychecker --display-name=\"qualitychecker\"\n", "```\n", "\n", "If you run this command from within a Jupyter server, you have to restart the Jupyter server afterwards to be able to select the new *qualitychecker* kernel." ] }, { "cell_type": "markdown", "id": "95f9ba22-f84c-42e4-9952-ff6ef4f7b86d", "metadata": { "tags": [] }, "source": [ "**Expert mode**: Running the Jupyter server from a different environment than the environment in which atmodat is installed\n", "\n", "Make sure that you:\n", "\n", "1. Install the `cfunits` package into the Jupyter environment via `conda install cfunits -c conda-forge -p $jupyterenv` and restart the kernel.\n", "1. Add the atmodat environment to the `PATH` environment variable inside the notebook. Otherwise, the notebook's shell does not find the application `run_checks`. You can modify environment variables with the `os` package and its mapping `os.environ`. 
The environment of the kernel can be found with `sys` and `sys.executable`. The following block sets the environment variable `PATH` correctly:" ] }, { "cell_type": "code", "execution_count": null, "id": "955fcaff-3b3f-4e5e-8c56-59ed90a4bca2", "metadata": {}, "outputs": [], "source": [ "import sys\n", "import os\n", "# Append the directory of the kernel's Python executable to PATH:\n", "os.environ[\"PATH\"] += \":\" + os.path.dirname(sys.executable)" ] }, { "cell_type": "code", "execution_count": null, "id": "72c0158e-1fbb-420b-8976-329579e397b9", "metadata": {}, "outputs": [], "source": [ "# As long as there is the installation bug, we have to manually get the AtMoDat CVs:\n", "env_root = os.path.dirname(os.path.dirname(sys.executable))\n", "if \"AtMoDat_CVs\" not in [dirpath.split(os.path.sep)[-1]\n", "                         for (dirpath, dirs, files) in os.walk(env_root)]:\n", "    !git clone https://github.com/AtMoDat/AtMoDat_CVs.git {env_root}/lib/python3.9/site-packages/atmodat_checklib/AtMoDat_CVs" ] }, { "cell_type": "markdown", "id": "3d0c7dc2-4e14-4738-92c5-b8c107916656", "metadata": { "tags": [] }, "source": [ "### Data to be checked\n", "\n", "In this tutorial, we will check a small subset of CMIP6 data which we obtain via `intake`:" ] }, { "cell_type": "code", "execution_count": null, "id": "75e90932-4e2f-478c-b7b5-d82b9fd347c9", "metadata": {}, "outputs": [], "source": [ "import intake\n", "# Path to the master catalog on the DKRZ server\n", "# (https://dkrz.de/s/intake is a short link to it):\n", "col_url = \"https://gitlab.dkrz.de/data-infrastructure-services/intake-esm/-/raw/master/esm-collections/cloud-access/dkrz_catalog.yaml\"\n", "parent_col=intake.open_catalog([col_url])\n", "list(parent_col)\n", "\n", "# Open the catalog with the intake package and name it \"col\" as short for \"collection\"\n", "col=parent_col[\"dkrz_cmip6_disk\"]" ] }, { "cell_type": "code", "execution_count": null, "id": "d30edc41-2561-43b1-879f-5e5d58784e4e", "metadata": {}, "outputs": [], "source": [ "# We just use the first file from 
the CMIP6 catalog because we will run some experiments on it\n", "exp_file=col.df[\"uri\"].values[0]\n", "exp_file" ] }, { "cell_type": "markdown", "id": "f1476f21-6f58-4430-9602-f18d8fa79460", "metadata": {}, "source": [ "### Application\n", "\n", "The command `run_checks` can be executed from any directory from within the atmodat conda environment. \n", "\n", "The atmodat checker contains two modules:\n", "- one that checks the global attributes for compliance with the ATMODAT standard\n", "- another that performs a standard CF check (building upon the cfchecks library)." ] }, { "cell_type": "markdown", "id": "365507aa-33a6-42df-9b35-7ead7da006b6", "metadata": {}, "source": [ "Show usage instructions of `run_checks`:" ] }, { "cell_type": "code", "execution_count": null, "id": "76dabfbf-839b-4dca-844c-514cf82f0b66", "metadata": {}, "outputs": [], "source": [ "!run_checks -h" ] }, { "cell_type": "markdown", "id": "2c04701c-bc27-4460-b80e-d32daf4a7376", "metadata": {}, "source": [ "The results of the performed checks are provided in the `atmodat_checker_output` directory. By default, `run_checks` assumes writing permissions in the path where the atmodat checker is installed. If this is not the case, you must specify an output directory where you possess writing permissions with the option `-op output_path`.\n", "\n", "In the following block, we set the *output path* to the current working directory which we get via the bash command `pwd`. We apply `run_checks` to the `exp_file` which we selected in the chapter before." ] }, { "cell_type": "code", "execution_count": null, "id": "c3ef1468-6ce9-4869-a173-2374eca5bc2c", "metadata": {}, "outputs": [], "source": [ "cwd=!pwd\n", "cwd=cwd[0]\n", "!run_checks -f {exp_file} -op {cwd} -s" ] }, { "cell_type": "markdown", "id": "13e20408-b6fa-4d39-be02-41db2109c980", "metadata": {}, "source": [ "Now, we have a directory `atmodat_checker_output` in the `op`. 
For each run of `run_checks`, a new directory is created inside `op`, named by the timestamp of the run. Additionally, a directory *latest* always shows the output of the most recent run." ] }, { "cell_type": "code", "execution_count": null, "id": "601f3486-91e2-4ff5-9f8e-324f10f799b5", "metadata": {}, "outputs": [], "source": [ "!ls {os.path.sep.join([cwd, \"atmodat_checker_output\"])}" ] }, { "cell_type": "markdown", "id": "fa5ef2a4-a1da-4fa0-873f-902884ea4db6", "metadata": {}, "source": [ "As we ran `run_checks` with the option `-s`, one output is the *short_summary.txt* file which we `cat` in the following:" ] }, { "cell_type": "code", "execution_count": null, "id": "9f6c38fd-199b-413e-9821-6535235be83c", "metadata": {}, "outputs": [], "source": [ "output_dir_string=os.path.sep.join([\"atmodat_checker_output\",\"latest\"])\n", "output_path=os.path.sep.join([cwd, output_dir_string])\n", "!cat {os.path.sep.join([output_path, \"short_summary.txt\"])}" ] }, { "cell_type": "markdown", "id": "99d2ba16-52c2-4cb6-b82b-226e75463aab", "metadata": {}, "source": [ "### Results\n", "\n", "The short summary contains information about versions, the timestamp of execution, the ratio of passed attribute checks, and the errors reported by the CF checker.\n", "\n", "- The cfchecks routine only issues a warning/information message if variable metadata are completely missing.\n", "- Zero errors in the cfchecks routine does not necessarily mean that a data file is CF compliant!\n", "\n", "We can also have a look into the detailed output, including the exact error messages, in the *long_summary_* files which are subdivided into severity levels." 
] }, { "cell_type": "code", "execution_count": null, "id": "9600c713-1203-430b-a4a6-bf70ec441221", "metadata": {}, "outputs": [], "source": [ "!cat {os.path.sep.join([output_path,\"long_summary_recommended.csv\"])}" ] }, { "cell_type": "code", "execution_count": null, "id": "b9fa72d6-6e5f-433a-81f0-40e4cd5a94cd", "metadata": {}, "outputs": [], "source": [ "!cat {os.path.sep.join([output_path,\"long_summary_mandatory.csv\"])}" ] }, { "cell_type": "markdown", "id": "b94a7c75-abc6-4792-aa5f-65467c6522de", "metadata": {}, "source": [ "We can open the *.csv* files with `pandas` to further analyse the output." ] }, { "cell_type": "code", "execution_count": null, "id": "f02ea2c4-7238-4afd-aef0-565aa5a5787f", "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "recommend_df=pd.read_csv(os.path.sep.join([output_path,\"long_summary_recommended.csv\"]))\n", "recommend_df" ] }, { "cell_type": "markdown", "id": "6453b4ca-288e-4c49-8c93-da4524ef5792", "metadata": {}, "source": [ "There may be **missing** global attributes wich are recommended by the *atmodat standard*. We can find them with pandas:" ] }, { "cell_type": "code", "execution_count": null, "id": "f0a7e6db-f79a-448f-8046-bb4bf3bcef9d", "metadata": {}, "outputs": [], "source": [ "missing_recommend_atts=list(\n", " recommend_df.loc[recommend_df[\"Error Message\"]==\"global attribute is not present\"][\"Global Attribute\"]\n", ")\n", "missing_recommend_atts" ] }, { "cell_type": "markdown", "id": "06283c25-c5b6-450f-bfe9-d65e8fe26623", "metadata": {}, "source": [ "### Curation\n", "\n", "Let's try first steps to *cure* the file by adding a missing attribute with `xarray`. 
We can open the file into an *xarray dataset* with:" ] }, { "cell_type": "code", "execution_count": null, "id": "b294cd89-d55c-421f-82e2-4cf42ece7d62", "metadata": {}, "outputs": [], "source": [ "import xarray as xr\n", "exp_file_ds=xr.open_dataset(exp_file)\n", "exp_file_ds" ] }, { "cell_type": "markdown", "id": "f02bc09f-94dc-4e0f-b12f-9798549e90e8", "metadata": {}, "source": [ "We can **handle and add attributes** via the `dict`-type attribute `.attrs`. Applied to the dataset, it shows all *global attributes* of the file:" ] }, { "cell_type": "code", "execution_count": null, "id": "fc0ffe80-4288-4ac3-a599-3239f37f461d", "metadata": {}, "outputs": [], "source": [ "exp_file_ds.attrs" ] }, { "cell_type": "markdown", "id": "6f61190e-49bc-40da-8b33-30f3debd1895", "metadata": {}, "source": [ "We add all missing attributes and set a dummy value for them:" ] }, { "cell_type": "code", "execution_count": null, "id": "3fd18adf-fe43-4d47-b565-d082b80b970d", "metadata": {}, "outputs": [], "source": [ "for att in missing_recommend_atts:\n", " exp_file_ds.attrs[att]=\"Dummy\"" ] }, { "cell_type": "markdown", "id": "56e26094-0ad6-42a9-afaf-5c482ee8ca87", "metadata": {}, "source": [ "We save the modified dataset with the `to_netcdf` function:" ] }, { "cell_type": "code", "execution_count": null, "id": "8050d724-da0d-417a-992e-24bb5aae0c82", "metadata": {}, "outputs": [], "source": [ "exp_file_ds.to_netcdf(\"testfile-modified.nc\")" ] }, { "cell_type": "markdown", "id": "5794c6ce-fff2-4c6e-8c08-aaf5dd342f8d", "metadata": {}, "source": [ "Now, let's run `run_checks` again.\n", "\n", "With the option `-p`, we can also provide a directory instead of a single file as an argument. The checker will find all `.nc` files inside that directory." 
] }, { "cell_type": "code", "execution_count": null, "id": "6c3698f7-62a4-4297-bfbf-d6447a0f006a", "metadata": {}, "outputs": [], "source": [ "!run_checks -p {cwd} -op {cwd} -s" ] }, { "cell_type": "markdown", "id": "c72647ee-7497-42df-ae68-f6a2d4ea87ad", "metadata": {}, "source": [ "Using the *latest* directory, here is the new summary:" ] }, { "cell_type": "code", "execution_count": null, "id": "51d2eff6-2a31-47b7-a706-f2555e03b9c3", "metadata": {}, "outputs": [], "source": [ "!cat {os.path.sep.join([output_path,\"short_summary.txt\"])}" ] }, { "cell_type": "markdown", "id": "1c9205ec-4f5f-4173-bb0d-1896785a9d04", "metadata": {}, "source": [ "You can see that the checks do not fail for the modified file when subtracting the earlier failes from the sum of new passed checks." ] } ], "metadata": { "kernelspec": { "display_name": "python3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 5 }