Job request: 15231

Organisation:
Bennett Institute
Workspace:
winter-pressures
ID:
xj7k4lf5l4pxakiu

This page shows the technical details of what happened when the authorised researcher Caroline Walters requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each job's outputs were written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never view these outputs directly
    • Researchers can only request that code be run against them
  • moderately_sensitive
    • Can be viewed by an approved researcher after logging into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service
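Each level is declared per action in the project.yaml shown below. As a minimal illustrative sketch (the action and file names here are hypothetical, not part of this workspace), an action lists each output under the level it should be written to:

```yaml
example_action:
  run: python:latest python -m analysis.example.script
  outputs:
    highly_sensitive:        # raw extract; never directly viewable
      extract: output/example/input.feather
    moderately_sensitive:    # aggregated result; reviewable for release
      summary: output/example/summary.csv
```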

Jobs

Pipeline

project.yaml:
version: "3.0"

expectations:
  population_size: 5000

actions:
  # Other data
  # ----------
  # Add actions for other data to this section. Prefix them with a suitable name; place
  # scripts in a similarly named sub-directory of the analysis directory; write outputs
  # to a similarly named sub-directory of the output directory.
  #
  # For example, let's call our other data "metrics". We would prefix our actions
  # "metrics_"; we would place our scripts in analysis/metrics; we would write outputs
  # to output/metrics.
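  #
  # On disk, the metrics actions below follow exactly this layout:
  #
  #   analysis/metrics/single_metric.R      <- script
  #   output/metrics/measure_*_rate.csv     <- highly_sensitive outputs
  #   output/metrics/summer_winter_*        <- moderately_sensitive outputs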

  # Metrics data
  # ------------
  metrics_generate_study_dataset_winter:
    run: cohortextractor:latest generate_cohort --study-definition study_definition
      --index-date-range '2021-12-01 to 2022-03-30 by month' --output-dir=output/metrics
      --output-format=feather
    outputs:
      highly_sensitive:
        extract: output/metrics/input_*.feather

  metrics_generate_study_dataset_summer:
    run: cohortextractor:latest generate_cohort --study-definition study_definition
      --index-date-range '2021-06-01 to 2021-09-30 by month' --output-dir=output/metrics
      --output-format=feather
    outputs:
      highly_sensitive:
        extract: output/metrics/input_*.feather

  metrics_generate_measures:
    run: cohortextractor:latest generate_measures --study-definition study_definition
      --output-dir=output/metrics
    needs:
    - metrics_generate_study_dataset_summer
    - metrics_generate_study_dataset_winter
    outputs:
      highly_sensitive:
        measure_csv: output/metrics/measure_*_rate.csv

  metrics_generate_single_metric:
    run: r:latest analysis/metrics/single_metric.R
    needs:
    - metrics_generate_measures
    outputs:
      moderately_sensitive:
        png1: output/metrics/summer_winter_difference_histogram.png
        png2: output/metrics/summer_winter_ratio_histogram.png
        csv1: output/metrics/summer_winter_difference_histogram_data.csv
        csv2: output/metrics/summer_winter_ratio_histogram_data.csv


  # Appointments data
  # -----------------
  appointments_generate_dataset_sql:
    run: >
      sqlrunner:latest
        analysis/appointments/dataset_query.sql
        --output output/appointments/dataset_long.csv.gz
        --dummy-data-file analysis/appointments/dummy_dataset_long.csv.gz
    outputs:
      highly_sensitive:
        dataset: output/appointments/dataset_long.csv.gz

  # appointments_generate_dataset:
  #   run: >
  #     databuilder:v0
  #       generate-dataset
  #       analysis/appointments/dataset_definition.py
  #       --output output/appointments/dataset_wide.arrow
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/appointments/dataset_wide.arrow

  # appointments_get_freq_na_values:
  #   run: >
  #     python:latest
  #       python
  #       -m analysis.appointments.get_freq_na_values
  #   needs: [appointments_generate_dataset]
  #   outputs:
  #     moderately_sensitive:
  #       dataset: output/appointments/freq_na_values.csv

  # appointments_reshape_dataset:
  #   run: >
  #     python:latest
  #       python
  #       -m analysis.appointments.reshape_dataset
  #   needs: [appointments_generate_dataset]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/appointments/dataset_long.arrow

  appointments_generate_measure_by_booked_month:
    run: >
      python:latest
        python
        -m analysis.appointments.generate_measure
        --value-col lead_time_in_days
        --index-cols booked_month practice
    needs: [appointments_generate_dataset_sql]
    outputs:
      moderately_sensitive:
        measure: output/appointments/measure_median_lead_time_in_days_by_booked_month.csv

  appointments_generate_measure_by_start_month:
    run: >
      python:latest
        python
        -m analysis.appointments.generate_measure
        --value-col lead_time_in_days
        --index-cols start_month practice
    needs: [appointments_generate_dataset_sql]
    outputs:
      moderately_sensitive:
        measure: output/appointments/measure_median_lead_time_in_days_by_start_month.csv

  appointments_generate_deciles_charts:
    run: >
      deciles-charts:v0.0.33
        --input-files output/appointments/measure_*.csv
        --output-dir output/appointments
    config:
      show_outer_percentiles: true
    needs:
      - appointments_generate_measure_by_booked_month
      - appointments_generate_measure_by_start_month
    outputs:
      moderately_sensitive:
        deciles_charts: output/appointments/deciles_chart_*.png
        deciles_tables: output/appointments/deciles_table_*.csv
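The two appointments measure actions above pass `--value-col` and `--index-cols` to `analysis/appointments/generate_measure`. That module's internals are not shown on this page, but the flags imply a grouped median; a minimal self-contained sketch of that computation (column names taken from the flags above, rows invented for illustration):

```python
import statistics
from collections import defaultdict

def grouped_median(rows, value_col, index_cols):
    """Group rows by index_cols and take the median of value_col per group."""
    groups = defaultdict(list)
    for row in rows:
        key = tuple(row[col] for col in index_cols)
        groups[key].append(row[value_col])
    return {key: statistics.median(values) for key, values in groups.items()}

# Invented dummy rows mirroring the long appointments dataset's columns
rows = [
    {"practice": "A", "booked_month": "2021-12", "lead_time_in_days": 3},
    {"practice": "A", "booked_month": "2021-12", "lead_time_in_days": 7},
    {"practice": "B", "booked_month": "2021-12", "lead_time_in_days": 10},
]
measure = grouped_median(rows, "lead_time_in_days", ["booked_month", "practice"])
# measure == {("2021-12", "A"): 5, ("2021-12", "B"): 10}
```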

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 00:00:12

These timestamps are generated and stored in UTC on the TPP backend.

Job request

Status
Succeeded
Backend
TPP
Workspace
winter-pressures
Requested by
Caroline Walters
Branch
main
Force run dependencies
No
Git commit hash
c419d27
Requested actions
  • metrics_generate_single_metric
