Job request: 4618

Organisation:
Bennett Institute
Workspace:
hba1c-levels-report
ID:
tdurviviejdm2csj

This page shows the technical details of what happened when the authorised researcher Robin Park requested one or more actions to be run against real patient data in the project, within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer what security level various outputs were written to. Researchers can never directly view outputs marked as highly_sensitive; they can only request that code runs against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher by logging into a highly secure environment. Only outputs marked as moderately_sensitive can be requested for release to the public, via a controlled output review service.

Jobs

  • Action:
    generate_study_population
    Status:
    Failed
    Job identifier:
    nplqzyeokxrl4xpt
    Error:
    Internal error
  • Action:
    calculate_measures
    Status:
    Failed
    Job identifier:
    rbkcs7sdjbvrniqd
    Error:
    dependency_failed: Not starting as dependency failed
  • Action:
    generate_data_description
    Status:
    Failed
    Job identifier:
    itxdtah223nnyxdg
    Error:
    cancelled_by_user: Cancelled by user
  • Action:
    join_ethnicity_all_patients
    Status:
    Failed
    Job identifier:
    bzrh4eay7umtehxx
    Error:
    dependency_failed: Not starting as dependency failed
  • Action:
    redact_measures
    Status:
    Failed
    Job identifier:
    2uc4jhhrn52yrnmk
    Error:
    cancelled_by_user: Cancelled by user
  • Action:
    generate_charts
    Status:
    Failed
    Job identifier:
    svtifeb7a6illhae
    Error:
    cancelled_by_user: Cancelled by user
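
The failure pattern above follows from the dependency graph in the pipeline: generate_study_population hit an internal error, so every action that needs its output was marked dependency_failed without starting (the remaining jobs were cancelled by the user). A minimal sketch of that cascade — not OpenSAFELY's actual scheduler, just an illustration over a hypothetical subset of the graph:

```python
# Hypothetical dependency graph mirroring a subset of the actions above.
DEPS = {
    "generate_study_population": [],
    "join_ethnicity_all_patients": ["generate_study_population"],
    "calculate_measures": ["join_ethnicity_all_patients"],
    "redact_measures": ["calculate_measures"],
}

def resolve_statuses(deps, failed_roots):
    """Propagate failure: a job never starts if any dependency failed."""
    statuses = {}

    def status(job):
        if job in statuses:
            return statuses[job]
        if job in failed_roots:
            statuses[job] = "failed"
        elif any(status(d) != "succeeded" for d in deps[job]):
            statuses[job] = "dependency_failed"
        else:
            statuses[job] = "succeeded"
        return statuses[job]

    for job in deps:
        status(job)
    return statuses

statuses = resolve_statuses(DEPS, failed_roots={"generate_study_population"})
# The root failure marks every downstream action dependency_failed.
```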

Pipeline

project.yaml:
version: '3.0'

expectations:
  population_size: 10000

actions:

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_all_patients --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data 
    outputs:
      highly_sensitive:
        cohort: output/data/input_all_patients_*.csv

  generate_study_population_median:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_median --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data 
    outputs:
      highly_sensitive:
        cohort: output/data/input_median_*.csv
        
  generate_study_population_ethnicity:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_ethnicity --output-dir=output/data 
    outputs:
      highly_sensitive:
        cohort: output/data/input_ethnicity.csv
        
  join_ethnicity_all_patients:
    run: python:latest python analysis/join_ethnicity.py "input_all_patients"
    needs: [generate_study_population, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_all_patients*.csv

  join_ethnicity_median:
    run: python:latest python analysis/join_ethnicity.py "input_median"
    needs: [generate_study_population_median, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_median*.csv

  calculate_measures:
    run: cohortextractor:latest generate_measures --study-definition study_definition_all_patients --output-dir=output/data
    needs: [join_ethnicity_all_patients]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure_*.csv

  redact_measures:
    run: python:latest python analysis/redact_measures.py
    needs: [calculate_measures]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure*.csv

  generate_data_description:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/data_description.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [join_ethnicity_all_patients]
    outputs:
      moderately_sensitive:
        notebook: output/data_description.html

  generate_change_inputs: 
    run: python:latest python analysis/mean_change_input.py
    needs: [join_ethnicity_median]
    outputs:
      moderately_sensitive:
        cohort: output/data/calc_chg_t2dm*.csv

  generate_median_inputs: 
    run: python:latest python analysis/median_input.py
    needs: [join_ethnicity_median]
    outputs:
      moderately_sensitive:
        cohort: output/data/calc_med_t2dm*.csv

  generate_charts:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/charts.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [redact_measures, generate_change_inputs, generate_median_inputs]
    outputs:
      moderately_sensitive:
        notebook: output/charts.html
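
The redact_measures action sits between calculate_measures and the chart notebook so that only disclosure-safe counts reach the moderately_sensitive outputs eligible for release. A sketch of what such a step might do, assuming a small-number suppression rule; the threshold and column names here are illustrative, not taken from the real analysis/redact_measures.py:

```python
import csv
import io

REDACTION_THRESHOLD = 6  # assumed: counts below this are blanked

def redact_rows(rows, count_columns=("numerator", "denominator")):
    """Blank any count below the threshold to prevent disclosure."""
    redacted = []
    for row in rows:
        row = dict(row)
        for col in count_columns:
            if col in row and int(row[col]) < REDACTION_THRESHOLD:
                row[col] = ""  # suppressed small number
        redacted.append(row)
    return redacted

# Toy measure CSV in the shape of output/data/measure_*.csv (illustrative).
measure_csv = "date,numerator,denominator\n2019-01-01,3,120\n2019-02-01,15,118\n"
rows = list(csv.DictReader(io.StringIO(measure_csv)))
redacted = redact_rows(rows)
# The numerator of 3 is blanked; all other counts pass through unchanged.
```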

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 13:13:22

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job request

Status
Failed
Backend
TPP
Requested by
Robin Park
Branch
master
Force run dependencies
No
Git commit hash
f0af5d4
Requested actions
  • generate_study_population
  • join_ethnicity_all_patients
  • calculate_measures
  • redact_measures
  • generate_data_description
  • generate_charts
