Job request: 4216

Organisation:
Bennett Institute
Workspace:
hba1c-levels
ID:
n44veyxti6dcxaj2

This page shows the technical details of what happened when the authorised researcher Robin Park requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each output was written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs
    • Researchers can only request that code be run against them
  • moderately_sensitive
    • Can be viewed by an approved researcher by logging into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service.

Jobs

Pipeline

project.yaml
version: '3.0'

expectations:
  population_size: 10000

actions:

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --output-dir=output/data --output-format=feather
    outputs:
      highly_sensitive:
        cohort: output/data/input.feather
        
  generate_study_population_ethnicity:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_ethnicity --output-dir=output/data --output-format=feather
    outputs:
      highly_sensitive:
        cohort: output/data/input_ethnicity.feather
        
  join_ethnicity:
    run: python:latest python analysis/join_ethnicity.py --output-format=feather
    needs: [generate_study_population, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_joined.feather

  cohort_report:
    run: cohort-report:v2.0.1 output/data/input_joined.feather
    needs: [join_ethnicity]
    config:
      output_path: output/data
    outputs:
      moderately_sensitive:
        cohort_report: output/data/descriptives_input_joined.html
        
  # calculate_measures:
  #   run: cohortextractor:latest generate_measures --study-definition study_definition --output-dir=output/data
  #   needs: [join_ethnicity]
  #   outputs:
  #     moderately_sensitive:
  #       measure_csv: output/data/measure_*.csv
        
  # generate_abnormal_nb:
  #   run: jupyter:latest jupyter nbconvert /workspace/notebooks/abnormal_results.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
  #   needs: [calculate_measures]
  #   outputs:
  #     moderately_sensitive:
  #       notebook: output/abnormal_results.html
        
  # generate_levels_nb:
  #   run: jupyter:latest jupyter nbconvert /workspace/notebooks/hba1c_levels.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
  #   needs: [join_ethnicity]
  #   outputs:
  #     moderately_sensitive:
  #       notebook: output/hba1c_levels.html
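The join_ethnicity action runs analysis/join_ethnicity.py, whose contents are not shown on this page. As a minimal sketch of what such a join typically looks like, assuming both extracts share a patient_id column and the script left-joins the ethnicity extract onto the main cohort (the column name and join strategy are assumptions, not taken from the actual script):

```python
import pandas as pd


def join_ethnicity(cohort: pd.DataFrame, ethnicity: pd.DataFrame) -> pd.DataFrame:
    """Left-join the ethnicity extract onto the main cohort on patient_id.

    A left join keeps every row of the main cohort; patients missing from
    the ethnicity extract get NaN in the ethnicity column.
    """
    return cohort.merge(ethnicity, on="patient_id", how="left")


if __name__ == "__main__":
    # Small in-memory stand-ins for output/data/input.feather and
    # output/data/input_ethnicity.feather (reading and writing feather
    # files would additionally require pyarrow).
    cohort = pd.DataFrame({"patient_id": [1, 2, 3], "hba1c": [41.0, 52.5, 38.2]})
    ethnicity = pd.DataFrame({"patient_id": [1, 3], "ethnicity": ["A", "C"]})
    joined = join_ethnicity(cohort, ethnicity)
    print(joined.shape)  # every cohort row is kept
```

In the real pipeline the result would be written with to_feather to output/data/input_joined.feather, which the project.yaml declares as highly_sensitive so that only downstream actions, never researchers directly, can read it.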

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 02:16:13

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job request

Status
Succeeded
Backend
TPP
Workspace
hba1c-levels
Requested by
Robin Park
Branch
study-def
Force run dependencies
Yes
Git commit hash
dd6f102
Requested actions
  • generate_study_population
  • generate_study_population_ethnicity
  • join_ethnicity
  • cohort_report
