Job request: 4346

Organisation:
Bennett Institute
Workspace:
hba1c-levels-report
ID:
szcvi63w7pgpigy6

This page shows the technical details of what happened when the authorised researcher Robin Park requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each output was written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs
    • Researchers can only request that code be run against them
  • moderately_sensitive
    • Can be viewed by an approved researcher after logging into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service

Jobs

Pipeline

project.yaml:
version: '3.0'

expectations:
  population_size: 10000

actions:

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data 
    outputs:
      highly_sensitive:
        cohort: output/data/input_*.csv

  generate_study_population_median:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_median --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data 
    outputs:
      highly_sensitive:
        cohort: output/data/input_median_*.csv
        
  generate_study_population_ethnicity:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_ethnicity --output-dir=output/data 
    outputs:
      highly_sensitive:
        cohort: output/data/input_ethnicity.csv
        
  join_ethnicity:
    run: python:latest python analysis/join_ethnicity.py 
    needs: [generate_study_population, generate_study_population_median, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input*.csv

  calculate_measures:
    run: cohortextractor:latest generate_measures --study-definition study_definition --output-dir=output/data
    needs: [join_ethnicity]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure_*.csv

  generate_data_description:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/data_description.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [join_ethnicity]
    outputs:
      moderately_sensitive:
        notebook: output/data_description.html

  # generate_charts:
  #   run: python:latest python analysis/charts.py
  #   needs: [join_ethnicity]
  #   outputs:
  #     moderately_sensitive:
  #       charts1: output/gt*.png
  #       charts2: output/med*.png
  #       charts3: output/pct*.png
  #       charts4: output/total*.png
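The `join_ethnicity` action above runs `analysis/join_ethnicity.py`, whose source is not shown on this page. A minimal sketch of what such a script typically does is given below: left-join the one-off ethnicity extract onto every monthly cohort file produced by the first two actions. The column names (`patient_id`, `ethnicity`) and the in-place overwrite are assumptions for illustration, not the authors' actual code; only the file globs come from the project.yaml shown above.

```python
from pathlib import Path

import pandas as pd


def join_ethnicity(data_dir):
    """Left-join the ethnicity extract onto each monthly cohort CSV.

    Assumes every input_*.csv shares a patient_id column with
    input_ethnicity.csv; file patterns match the globs in project.yaml.
    """
    data_dir = Path(data_dir)
    ethnicity = pd.read_csv(data_dir / "input_ethnicity.csv")
    for cohort_path in sorted(data_dir.glob("input_*.csv")):
        if cohort_path.name == "input_ethnicity.csv":
            continue  # don't join the lookup table onto itself
        cohort = pd.read_csv(cohort_path)
        merged = cohort.merge(ethnicity, how="left", on="patient_id")
        # Overwrite in place so the action's highly_sensitive output glob
        # (output/data/input*.csv) still matches every enriched file.
        merged.to_csv(cohort_path, index=False)
```

Downstream actions (`calculate_measures`, `generate_data_description`) then read the enriched files, which is why both declare `needs: [join_ethnicity]`.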

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 25:54:53

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job request

Status
Succeeded
Backend
TPP
Requested by
Robin Park
Branch
master
Force run dependencies
Yes
Git commit hash
18ae89b
Requested actions
  • run_all
