Job request: 15170

Organisation: Bennett Institute
Workspace: winter-pressures
ID: z4i37mrc4pyfpzmv

This page shows the technical details of what happened when the authorised researcher Iain Dillingham requested one or more actions to be run against real patient data in the project, within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer what security level various outputs were written to. Researchers can never directly view outputs marked as highly_sensitive; they can only request that code runs against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher who logs into a highly secure environment, and only moderately_sensitive outputs can be requested for release to the public, via a controlled output review service.

Jobs

Pipeline

project.yaml
version: "3.0"

expectations:
  population_size: 5000

actions:
  # Other data
  # ----------
  # Add actions for other data to this section. Prefix them with a suitable name; place
  # scripts in a similarly named sub-directory of the analysis directory; write outputs
  # to a similarly named sub-directory of the output directory.
  #
  # For example, let's call our other data "metrics". We would prefix our actions
  # "metrics_"; we would place our scripts in analysis/metrics; we would write outputs
  # to output/metrics.

  # Metrics data
  # ------------
  metrics_generate_study_dataset_winter:
    run: cohortextractor:latest generate_cohort --study-definition study_definition
      --index-date-range '2021-12-01 to 2022-03-30 by month' --output-dir=output/metrics
      --output-format=feather
    outputs:
      highly_sensitive:
        extract: output/metrics/input_*.feather

  metrics_generate_study_dataset_summer:
    run: cohortextractor:latest generate_cohort --study-definition study_definition
      --index-date-range '2021-06-01 to 2021-09-30 by month' --output-dir=output/metrics
      --output-format=feather
    outputs:
      highly_sensitive:
        extract: output/metrics/input*.feather

  metrics_generate_measures:
    run: cohortextractor:latest generate_measures --study-definition study_definition
      --output-dir=output/metrics
    needs:
    - metrics_generate_study_dataset_summer
    - metrics_generate_study_dataset_winter
    outputs:
      highly_sensitive:
        measure_csv: output/metrics/measure_*_rate.csv

  # Appointments data
  # -----------------
  appointments_generate_dataset_sql:
    run: >
      sqlrunner:latest
        analysis/appointments/dataset_query.sql
        --output output/appointments/dataset_long.csv.gz
        --dummy-data-file analysis/appointments/dummy_dataset_long.csv.gz
    outputs:
      highly_sensitive:
        dataset: output/appointments/dataset_long.csv.gz

  # appointments_generate_dataset:
  #   run: >
  #     databuilder:v0
  #       generate-dataset
  #       analysis/appointments/dataset_definition.py
  #       --output output/appointments/dataset_wide.arrow
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/appointments/dataset_wide.arrow

  # appointments_get_freq_na_values:
  #   run: >
  #     python:latest
  #       python
  #       -m analysis.appointments.get_freq_na_values
  #   needs: [appointments_generate_dataset]
  #   outputs:
  #     moderately_sensitive:
  #       dataset: output/appointments/freq_na_values.csv

  # appointments_reshape_dataset:
  #   run: >
  #     python:latest
  #       python
  #       -m analysis.appointments.reshape_dataset
  #   needs: [appointments_generate_dataset]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/appointments/dataset_long.arrow

  appointments_generate_measure_by_booked_month:
    run: >
      python:latest
        python
        -m analysis.appointments.generate_measure
        --value-col lead_time_in_days
        --index-cols booked_month practice
    needs: [appointments_generate_dataset_sql]
    outputs:
      moderately_sensitive:
        measure: output/appointments/measure_median_lead_time_in_days_by_booked_month.csv

  appointments_generate_measure_by_start_month:
    run: >
      python:latest
        python
        -m analysis.appointments.generate_measure
        --value-col lead_time_in_days
        --index-cols start_month practice
    needs: [appointments_generate_dataset_sql]
    outputs:
      moderately_sensitive:
        measure: output/appointments/measure_median_lead_time_in_days_by_start_month.csv

  appointments_generate_deciles_charts:
    run: >
      deciles-charts:v0.0.33
        --input-files output/appointments/measure_*.csv
        --output-dir output/appointments
    config:
      show_outer_percentiles: true
    needs:
      - appointments_generate_measure_by_booked_month
      - appointments_generate_measure_by_start_month
    outputs:
      moderately_sensitive:
        deciles_charts: output/appointments/deciles_chart_*.png
        deciles_tables: output/appointments/deciles_table_*.csv
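
The analysis scripts referenced above are not shown on this page, but the flags passed to the two appointments_generate_measure_* actions suggest the shape of the aggregation. The following Python sketch is illustrative only: it assumes the long-format dataset has one row per appointment with columns such as practice, booked_month, start_month and lead_time_in_days, that the measure is a per-group median, and that output filenames follow the pattern seen in the pipeline. The summarise_deciles helper is a hypothetical analogue of the decile tables produced by the deciles-charts reusable action, not its actual implementation.

"""Illustrative sketch of the kind of aggregation the generate_measure actions run."""
import argparse
import pathlib

import pandas as pd


def generate_measure(dataset: pd.DataFrame, value_col: str, index_cols: list[str]) -> pd.DataFrame:
    # Median of the value column within each index group, e.g. the median
    # lead time in days for each (booked_month, practice) pair.
    return (
        dataset.groupby(index_cols, as_index=False)[value_col]
        .median()
        .rename(columns={value_col: f"median_{value_col}"})
    )


def summarise_deciles(measure: pd.DataFrame, value_col: str, month_col: str) -> pd.DataFrame:
    # Hypothetical analogue of a deciles table: deciles of the practice-level
    # medians within each month.
    deciles = [i / 10 for i in range(1, 10)]
    return (
        measure.groupby(month_col)[value_col]
        .quantile(deciles)
        .rename_axis([month_col, "decile"])
        .reset_index(name="value")
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--value-col", required=True)
    parser.add_argument("--index-cols", nargs="+", required=True)
    args = parser.parse_args()

    # Assumed input/output paths, mirroring the pipeline's dataset and measure names.
    df = pd.read_csv("output/appointments/dataset_long.csv.gz")
    measure = generate_measure(df, args.value_col, args.index_cols)
    out_dir = pathlib.Path("output/appointments")
    out_dir.mkdir(parents=True, exist_ok=True)
    by = args.index_cols[0]
    measure.to_csv(out_dir / f"measure_median_{args.value_col}_by_{by}.csv", index=False)

The real scripts in the repository may differ; the sketch is only meant to make the pipeline's data flow concrete.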

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 05:20:39

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job information

Status: Succeeded
Backend: TPP
Workspace: winter-pressures
Requested by: Iain Dillingham
Branch: main
Force run dependencies: Yes
Git commit hash: b032f41
Requested actions:
  • appointments_generate_deciles_charts
