
Job request: 9678

Organisation: Bennett Institute
Workspace: covid_mortality_over_time
ID: mbdzp7gpoiwybbdt

This page shows the technical details of what happened when the authorised researcher Linda Nab requested one or more actions to be run against real patient data in the project, within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each output was written to. Researchers can never directly view outputs marked as highly_sensitive; they can only request that code runs against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher after logging into a highly secure environment, and only moderately_sensitive outputs can be requested for release to the public, via a controlled output review service.

Jobs

  • Action: generate_study_population_wave2
    Status: Succeeded
    Job identifier: fa3thxnrwwaykqfu
  • Action: generate_study_population_wave1
    Status: Succeeded
    Job identifier: vymxaarrm5n7tozh
  • Action: generate_study_population_wave3
    Status: Succeeded
    Job identifier: bybcpx2pyrb2sv56
  • Action: generate_study_population_ethnicity
    Status: Succeeded
    Job identifier: t25wnoupwhchweem
  • Action: join_cohorts_waves
    Status: Succeeded
    Job identifier: qcktlzdzneosfa25
  • Action: process_data
    Status: Succeeded
    Job identifier: ubncxvftsz3ok5nb
  • Action: calc_irs
    Status: Succeeded
    Job identifier: b5s4zr4hahz72f7f
  • Action: calc_vax_cov
    Status: Succeeded
    Job identifier: ypvpt44st3tilfvs
  • Action: model_cox_ph
    Status: Failed
    Job identifier: 5wj237p67tfwshz5
    Error: Internal error: this usually means a platform issue rather than a problem for users to fix. The tech team are automatically notified of these errors and will be investigating.
  • Action: skim_data_wave3
    Status: Succeeded
    Job identifier: f262oue5eyubsit6
  • Action: create_kaplan_meier
    Status: Succeeded
    Job identifier: i6dbhene4vtljvuw
  • Action: calc_irs_std
    Status: Succeeded
    Job identifier: wwudsrniprwjh367
  • Action: skim_data_wave2
    Status: Succeeded
    Job identifier: 7pexsswobui6xok2
  • Action: skim_data_wave1
    Status: Succeeded
    Job identifier: 7pzoevpnoqc7yypr
  • Action: create_table_one
    Status: Succeeded
    Job identifier: wzlbswlf5lz3cbwv
  • Action: tidy_absrisks_for_viz
    Status: Succeeded
    Job identifier: gvxtqmgepydjoors
  • Action: create_table_two
    Status: Failed
    Job identifier: amobs4hjvg4lic3o
    Error: dependency_failed: Not starting as dependency failed
  • Action: join_cohorts
    Status: Failed
    Job identifier: tmemwhivw4bxyao6
    Error: cancelled_by_user: Cancelled by user
  • Action: redact_rates
    Status: Failed
    Job identifier: oxq2a4cazf6ciqf4
    Error: cancelled_by_user: Cancelled by user
  • Action: calculate_measures
    Status: Failed
    Job identifier: 4ldzasbav7lnv2yv
    Error: cancelled_by_user: Cancelled by user
  • Action: visualise_crude_rates
    Status: Failed
    Job identifier: x3aev237kieevidb
    Error: cancelled_by_user: Cancelled by user
  • Action: visualise_subgroup_rates
    Status: Failed
    Job identifier: 6sugs3b7v6oiziwm
    Error: cancelled_by_user: Cancelled by user
  • Action: standardise_crude_rates
    Status: Failed
    Job identifier: u4vwyls7cxbqtyhr
    Error: cancelled_by_user: Cancelled by user
  • Action: visualise_subgroup_ratios
    Status: Failed
    Job identifier: plywojacwzdkphrn
    Error: cancelled_by_user: Cancelled by user
  • Action: calculate_rate_ratios
    Status: Failed
    Job identifier: s3a6gainmrb7eh3x
    Error: cancelled_by_user: Cancelled by user
  • Action: generate_study_population
    Status: Failed
    Job identifier: sm7t22udgiithjd7
    Error: cancelled_by_user: Cancelled by user
  • Action: tidy_relrisks_for_viz
    Status: Failed
    Job identifier: 6gh5qom4crkafhpc
    Error: dependency_failed: Not starting as dependency failed
  • Action: standardise_subgroup_rates
    Status: Failed
    Job identifier: xhzvrmgzx3o2o7vp
    Error: cancelled_by_user: Cancelled by user
  • Action: calculate_measures_ckd_rrt
    Status: Failed
    Job identifier: yawk4izzybwlwz2p
    Error: cancelled_by_user: Cancelled by user
  • Action: process_subgroup_rates
    Status: Failed
    Job identifier: vqjbvfmqc2ycllvz
    Error: cancelled_by_user: Cancelled by user

Pipeline

project.yaml
version: '3.0'

expectations:
  population_size: 1000

actions:

# Extract data
# When argument --index-date-range is changed, change has to be made in ./analysis/config.json too
  generate_study_population:
    run: >
      cohortextractor:latest generate_cohort 
        --study-definition study_definition 
        --skip-existing 
        --output-format=csv.gz 
        --index-date-range "2020-03-01 to 2022-02-01 by month"
    outputs:
      highly_sensitive:
        cohort: output/input_*.csv.gz
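
The --index-date-range argument above makes cohortextractor produce one extract per monthly index date, which is why the output glob is output/input_*.csv.gz. A minimal sketch of how such a range expands (illustrative only; the real expansion happens inside cohortextractor):

```python
from datetime import date

def monthly_index_dates(start, end):
    """Expand an inclusive 'by month' index-date range into one date
    per month, e.g. "2020-03-01 to 2022-02-01 by month"."""
    dates = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        dates.append(date(y, m, start.day))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return dates

dates = monthly_index_dates(date(2020, 3, 1), date(2022, 2, 1))
# 24 monthly extracts, one input_*.csv.gz file each
```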

# Extract ethnicity
  generate_study_population_ethnicity:
    run: >
      cohortextractor:latest generate_cohort 
        --study-definition study_definition_ethnicity 
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_ethnicity.csv.gz

# Join data
  join_cohorts:
    run: >
      cohort-joiner:v0.0.7
        --lhs output/input_202*.csv.gz
        --rhs output/input_ethnicity.csv.gz
        --output-dir=output/joined
    needs: [generate_study_population, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_202*.csv.gz
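
cohort-joiner attaches the one-off ethnicity extract to each monthly cohort file by patient ID. Conceptually it is a left join, sketched here with hypothetical in-memory rows (the real tool reads and writes csv.gz files):

```python
# Hypothetical cohort rows and ethnicity lookup, keyed by patient_id.
cohort = [
    {"patient_id": 1, "died": 0},
    {"patient_id": 2, "died": 1},
]
ethnicity = {1: "White", 2: "Asian"}

# Left join: every cohort row is kept; ethnicity is None if missing.
joined = [dict(row, ethnicity=ethnicity.get(row["patient_id"])) for row in cohort]
```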

# Calculate mortality rates (crude + subgroup specific)
  calculate_measures:
    run: >
      cohortextractor:latest generate_measures 
        --study-definition study_definition
        --skip-existing
        --output-dir=output/joined
    needs: [join_cohorts]
    outputs:
      moderately_sensitive:
        measure: output/joined/measure_*_mortality_rate.csv

# Calculate mortality rates for the ckd_rrt subgroup
  calculate_measures_ckd_rrt:
    run: r:latest analysis/measures_calc_ckd_rrt.R
    needs: [join_cohorts]
    outputs:
      moderately_sensitive:
        measure: output/joined/measure_ckd_rrt_mortality_rate.csv

# Redact rates
  redact_rates:
    run: r:latest analysis/utils/redact_rates.R
    needs: [calculate_measures, calculate_measures_ckd_rrt]
    outputs:
      moderately_sensitive:
        csvs: output/rates/redacted/*_redacted.csv       
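
redact_rates suppresses figures derived from small counts before anything can be considered for release. A hedged sketch of small-number suppression (the threshold and exact rule here are assumptions for illustration; the actual logic lives in analysis/utils/redact_rates.R):

```python
def redact_small_counts(rows, key="n", threshold=7):
    """Replace counts in (0, threshold] with None so they cannot be
    disclosed. Threshold value is hypothetical."""
    redacted = []
    for row in rows:
        row = dict(row)  # copy; leave the input untouched
        if row[key] is not None and 0 < row[key] <= threshold:
            row[key] = None  # redacted
        redacted.append(row)
    return redacted
```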

# Standardise crude mortality rate
  standardise_crude_rates:
    run: r:latest analysis/crude_rates_standardise.R
    needs: [redact_rates]
    outputs:
      moderately_sensitive:
        csvs: output/rates/standardised/crude_*std.csv 
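
Direct standardisation reweights stratum-specific rates (for example, by age band) to a fixed reference population, so rates from different periods or groups are comparable. A minimal sketch of the calculation (the strata and weights here are hypothetical; the project's method is in analysis/crude_rates_standardise.R):

```python
def directly_standardised_rate(stratum_rates, ref_weights):
    """Weighted average of stratum-specific rates using reference
    population weights (e.g. a standard population)."""
    total_w = sum(ref_weights.values())
    return sum(stratum_rates[s] * w for s, w in ref_weights.items()) / total_w
```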

# Standardise subgroup specific mortality rates
  standardise_subgroup_rates:
    run: r:latest analysis/subgroups_rates_standardise.R
    needs: [redact_rates]
    outputs:
      moderately_sensitive:
        csvs: output/rates/standardised/*_std.csv

# Process subgroup specific mortality rates
  process_subgroup_rates:
    run: r:latest analysis/utils/process_rates.R
    needs: [standardise_subgroup_rates]
    outputs:
      moderately_sensitive:
        csvs: output/rates/processed/*.csv

# Calculate standardised rate ratios
  calculate_rate_ratios:
    run: r:latest analysis/subgroups_ratios.R
    needs: [standardise_subgroup_rates, process_subgroup_rates]
    outputs:
      moderately_sensitive:
        csvs: output/ratios/*.csv
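
A standardised rate ratio compares a subgroup's standardised rate against a chosen reference subgroup. Sketch of the quantity only, not the implementation in analysis/subgroups_ratios.R:

```python
def rate_ratio(subgroup_rate, reference_rate):
    """Ratio of a subgroup's (standardised) rate to the reference
    subgroup's rate; 1.0 means no difference."""
    return subgroup_rate / reference_rate
```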

# Plot and save graphs depicting the crude rates
  visualise_crude_rates:
    run: r:latest analysis/crude_rates_visualise.R
    needs: [standardise_crude_rates]
    outputs:
      moderately_sensitive:
        pngs: output/figures/rates_crude/*.png

# Plot and save graphs depicting the subgroup specific mortality rates
  visualise_subgroup_rates:
    run: r:latest analysis/subgroups_rates_visualise.R
    needs: [standardise_subgroup_rates, process_subgroup_rates]
    outputs:
      moderately_sensitive:
        pngs: output/figures/rates_subgroups/*.png

# Plot and save graphs depicting the subgroup specific mortality ratios
  visualise_subgroup_ratios:
    run: r:latest analysis/subgroups_ratios_visualise.R
    needs: [calculate_rate_ratios]
    outputs:
      moderately_sensitive:
        pngs: output/figures/ratios_subgroups/*.png

# SECOND PART OF STUDY
  generate_study_population_wave1:
    run: >
      cohortextractor:latest generate_cohort 
        --study-definition study_definition_wave1 
        --skip-existing 
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_wave1.csv.gz

  generate_study_population_wave2:
    run: >
      cohortextractor:latest generate_cohort 
        --study-definition study_definition_wave2
        --skip-existing 
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_wave2.csv.gz

  generate_study_population_wave3:
    run: >
      cohortextractor:latest generate_cohort 
        --study-definition study_definition_wave3
        --skip-existing 
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_wave3.csv.gz

# Join data
  join_cohorts_waves:
    run: >
      cohort-joiner:v0.0.7
        --lhs output/input_wave*.csv.gz
        --rhs output/input_ethnicity.csv.gz
        --output-dir=output/joined
    needs: [generate_study_population_wave1, generate_study_population_wave2, generate_study_population_wave3, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_wave*.csv.gz

# Process data
  process_data:
    run: r:latest analysis/data_process.R
    needs: [join_cohorts_waves]
    outputs:
      highly_sensitive: 
        rds: output/processed/input_wave*.rds

# Skim data
  skim_data_wave1:
    run: r:latest analysis/data_skim.R output/processed/input_wave1.rds output/data_properties
    needs: [process_data]
    outputs: 
      moderately_sensitive:
        txt1: output/data_properties/input_wave1_skim.txt
        txt2: output/data_properties/input_wave1_coltypes.txt
        txt3: output/data_properties/input_wave1_tabulate.txt

  skim_data_wave2:
    run: r:latest analysis/data_skim.R output/processed/input_wave2.rds output/data_properties
    needs: [process_data]
    outputs: 
      moderately_sensitive:
        txt1: output/data_properties/input_wave2_skim.txt
        txt2: output/data_properties/input_wave2_coltypes.txt
        txt3: output/data_properties/input_wave2_tabulate.txt

  skim_data_wave3:
    run: r:latest analysis/data_skim.R output/processed/input_wave3.rds output/data_properties
    needs: [process_data]
    outputs: 
      moderately_sensitive:
        txt1: output/data_properties/input_wave3_skim.txt
        txt2: output/data_properties/input_wave3_coltypes.txt
        txt3: output/data_properties/input_wave3_tabulate.txt
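
The skim actions write simple data-property reports (summaries, column types, tabulations) so extracts can be sanity-checked without viewing patient-level data. A loose, simplified analogue of a column-type report (hypothetical; the real logic is in analysis/data_skim.R):

```python
import csv
import io

def skim_columns(csv_text):
    """Infer a crude type ('int' or 'str') for each column of a CSV,
    roughly analogous to a coltypes report."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    types = {}
    for col in rows[0]:
        vals = [r[col] for r in rows]
        types[col] = "int" if all(v.lstrip("-").isdigit() for v in vals) else "str"
    return types
```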

# Create table one
  create_table_one:
    run: r:latest analysis/table_one.R
    needs: [process_data]
    outputs:
      moderately_sensitive: 
        html: output/tables/table1.html

# Vaccine coverage
  calc_vax_cov:
    run: r:latest analysis/vaccine_coverage_calc.R
    needs: [process_data]
    outputs: 
      moderately_sensitive:
        csvs: output/tables/wave*_vax_coverage.csv

# Incidence rates (crude)
  calc_irs:
    run: r:latest analysis/waves_irs.R
    needs: [process_data]
    outputs: 
      moderately_sensitive:
        csvs: output/tables/wave*_ir.csv
        csv: output/tables/ir_crude.csv
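
A crude incidence rate is events divided by person-time at risk, usually scaled per 100,000 person-years. Sketch of the quantity tabulated here (not the implementation in analysis/waves_irs.R):

```python
def incidence_rate(events, person_years, per=100_000):
    """Crude incidence rate per `per` person-years."""
    return events / person_years * per
```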

# Incidence rates (standardised)
  calc_irs_std:
    run: r:latest analysis/waves_std_irs.R
    needs: [process_data]
    outputs: 
      moderately_sensitive:
        csvs: output/tables/wave*_ir_std.csv

# Kaplan-Meier
  create_kaplan_meier:
    run: r:latest analysis/waves_kaplan_meier.R
    needs: [process_data]
    outputs:
      moderately_sensitive:
        pngs: output/figures/kaplan_meier/wave*_*.png
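
The Kaplan-Meier product-limit estimator steps the survival curve down at each event time, with censored individuals leaving the risk set without causing a step. A minimal, dependency-free sketch (the project's plots come from analysis/waves_kaplan_meier.R):

```python
def kaplan_meier(times, events):
    """Return [(t, S(t)), ...] at each event time, where S multiplies
    by (1 - d/n) for d events among n at risk at time t."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, surv, curve, i = n, 1.0, [], 0
    while i < n:
        t = times[order[i]]
        d = c = 0
        while i < n and times[order[i]] == t:
            if events[order[i]]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        at_risk -= d + c
    return curve
```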

# COX ph models
  model_cox_ph:
    run: r:latest analysis/waves_model_survival.R
    needs: [process_data]
    outputs: 
      moderately_sensitive:
        csvs1: output/tables/wave*_effect_estimates.csv
        csvs2: output/tables/wave*_ph_tests.csv
        csvs3: output/tables/wave*_log_file.csv

# Create table two
  create_table_two:
    run: r:latest analysis/table_two.R
    needs: [model_cox_ph]
    outputs:
      moderately_sensitive: 
        html: output/tables/table2.html

# Tidy absrisks (IRs) for viz
  tidy_absrisks_for_viz:
    run: r:latest analysis/absrisks_tidy_for_viz.R
    needs: [calc_irs, calc_irs_std, calc_vax_cov]
    outputs:
      moderately_sensitive: 
        csv: output/tables/absrisks_for_viz_tidied.csv

# Tidy relrisks (HRs) for viz
  tidy_relrisks_for_viz:
    run: r:latest analysis/relrisks_tidy_for_viz.R
    needs: [model_cox_ph, calc_vax_cov]
    outputs:
      moderately_sensitive: 
        csv: output/tables/relrisks_for_viz_tidied.csv

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 150:17:48

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job information

Status
Failed
Backend
TPP
Requested by
Linda Nab
Branch
main
Force run dependencies
Yes
Git commit hash
d772e26
Requested actions
  • run_all
