User guide#


Using Jupyter Notebooks for Model Data Analysis#

Welcome to the DKRZ tutorials and use cases repository!

This repository collects and prepares Jupyter notebooks with coding examples on how to use state-of-the-art processing tools on big data collections. The Jupyter notebooks highlight the optimal usage of High-Performance Computing resources and address data analysts and researchers who are beginning to work with the resources of the German Climate Computing Center (DKRZ).

While Jupyter notebooks with demonstrations are provided in the notebooks/demo directory, we also host notebooks for hands-on sessions in the notebooks/hands-on_* directories.

Getting a DKRZ account:#

  • for model data users working in the EU:

  • for model data users with partners in the German earth systems research community, see here.

Quick start#

To run the notebooks, you need a browser (like Firefox, Chrome, Safari,…) and internet connection.

  1. Open the DKRZ Jupyterhub in your browser.

  2. Log in with your DKRZ account (if you do not have an account yet, see the links above).

  3. Select a preset spawner option.

  4. Choose a job profile that matches your processing requirements. We recommend using at least 10 GB of memory. Find information about the partitions here or in the mouse-hover tooltips. Specify an account (the luv account your user belongs to, e.g. bk1088).

  5. Press “start” and your Jupyter server will start (also known as spawning). The server will run for the specified time, during which you can always come back to it (i.e. reopen the web URL) and continue to work.

  6. In the upper bar, click on Git -> Clone a Repository.

  7. In the alert window, type in the repository URL. When cloning is successful, a new folder containing the cloned repository appears in the data browser.

  8. In the data browser, change to the tutorials-and-use-cases/notebooks directory and open a notebook from this folder.

  9. Make sure you use a recent Python 3 kernel (Kernel -> Change Kernel).


Some notebooks need an individual Jupyter kernel:

  1. Open a terminal.

  2. Run the following lines to create a conda environment and a kernel for the notebook:

module load python3  # works on Levante; otherwise, install conda or mamba
# create the environment:
mamba env create -f environment.yml  # set -p TARGETPATH to install outside of your home directory
# activate the environment:
conda activate nbdemo  # sometimes you need 'source' instead of 'conda' and the full path instead of 'nbdemo'
# create the kernel:
python -m ipykernel install --user --name nbdemokernel
  3. When done, go back to your Jupyter server and choose the new kernel we just created, nbdemokernel.

  4. Now you can also run the summer days notebook.

Content and structure#


  • notebooks/demo/tutorial_*


  • notebooks/demo/use-case_*

Further Infos#

  • Find more in the DKRZ Jupyterhub documentation.

  • See in this video the main features of the DKRZ Jupyterhub and how to use it.

  • Advanced users developing their own notebooks can find in the documentation how to create their own environments that are visible as kernels in the Jupyterhub.


In this hands-on session we will find, analyze, and visualize data from the DKRZ data pool. The goal is to create two maps: one showing the number of tropical nights for 2014 (the most recent year of the historical dataset) and another showing a chosen year in the past. The hands-on is split into two exercises:


  • Search for an appropriate list of data files. The datasets should contain the variable tasmin at daily frequency.

  • Save your selection as a .csv file so it can be used by another notebook.
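Saving the selection and reading it back in a second notebook can be sketched with pandas; the column name and file names here are illustrative assumptions, not the notebook's actual ones:

```python
import pandas as pd

# Hypothetical selection of data files (placeholder names):
selection = pd.DataFrame({"path": ["file_a.nc", "file_b.nc"]})

# First notebook: save the selection as .csv
selection.to_csv("selection.csv", index=False)

# Second notebook: read the saved selection back
restored = pd.read_csv("selection.csv")
print(list(restored["path"]))  # ['file_a.nc', 'file_b.nc']
```
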


  • Read the saved selection and open the two files that are needed.

  • Calculate the number of tropical nights for both years.

  • Visualize the results on a map. You can use your preferred visualization package or stick to the example in the demo use-case_frost_days_intake_xarray_cmip6.ipynb.
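The counting step above can be sketched with plain numpy (the demo notebooks use xarray; the array shape and variable names here are assumptions). A tropical night is a day whose minimum temperature (tasmin) stays above 20 °C, i.e. 293.15 K:

```python
import numpy as np

# 20 degC in kelvin, the usual tropical-night threshold
THRESHOLD_K = 293.15

def tropical_nights(tasmin):
    """Count days per grid cell where the daily minimum temperature
    exceeds the threshold; tasmin has shape (time, lat, lon) in kelvin."""
    return (np.asarray(tasmin) > THRESHOLD_K).sum(axis=0)

# Toy example: 3 days on a 1x2 grid
tasmin = np.array([
    [[294.0, 290.0]],
    [[295.0, 294.0]],
    [[292.0, 291.0]],
])
print(tropical_nights(tasmin))  # [[2 1]]
```

The same boolean-compare-then-sum pattern carries over directly to an xarray DataArray, where the `axis=0` sum becomes a reduction over the time dimension.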

Contact us#

Reach us at