Job request: 7487
- Organisation: Bennett Institute
- Workspace: covid_mortality_over_time
- ID: d4mdhsmea6l2yj7t
This page shows the technical details of what happened when the authorised researcher Linda Nab requested one or more actions to be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer what security level the outputs were written to.
The output security levels are (a minimal sketch follows the list):
- highly_sensitive
  - Researchers can never directly view these outputs
  - Researchers can only request that code be run against them
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment
  - These are the only outputs that can be requested for public release via a controlled output review service
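For orientation, here is how an action in a project.yaml assigns its outputs to these two levels. This is a hypothetical sketch: the action name, script, and file paths are illustrative and are not part of this job request.

  my_action:
    run: r:latest analysis/my_script.R
    outputs:
      highly_sensitive:
        data: output/raw_rows.csv.gz    # hypothetical; never directly viewable by researchers
      moderately_sensitive:
        table: output/summary.csv       # hypothetical; viewable in the secure environment, releasable only after output review

The actual pipeline for this job request follows the same pattern and is shown in full below.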
Jobs
- Job identifier: 5vdisu7to6d5g3dz
- Job identifier: fn3nldmj4rfdgcti
- Job identifier: o3m6uaan4f6ccuwu
- Job identifier: leuuqgb5xvatxxxl
Pipeline
project.yaml:
version: '3.0'

expectations:
  population_size: 1000

actions:
  # Extract data
  # When the --index-date-range argument is changed, ./analysis/config.json has to be updated to match
  generate_study_population:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition
        --skip-existing
        --output-format=csv.gz
        --index-date-range "2020-03-01 to 2022-02-01 by month"
    outputs:
      highly_sensitive:
        cohort: output/input_*.csv.gz
  # Extract ethnicity
  generate_study_population_ethnicity:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_ethnicity
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_ethnicity.csv.gz

  # Join data
  join_cohorts:
    run: >
      cohort-joiner:v0.0.7
        --lhs output/input_202*.csv.gz
        --rhs output/input_ethnicity.csv.gz
        --output-dir=output/joined
    needs: [generate_study_population, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_202*.csv.gz
  # Calculate mortality rates (crude + subgroup specific)
  calculate_measures:
    run: >
      cohortextractor:latest generate_measures
        --study-definition study_definition
        --skip-existing
        --output-dir=output/joined
    needs: [join_cohorts]
    outputs:
      moderately_sensitive:
        measure: output/joined/measure_*_mortality_rate.csv

  # Standardise crude mortality rate
  standardise_crude_rates:
    run: r:latest analysis/crude_rates_standardise.R
    needs: [calculate_measures]
    outputs:
      moderately_sensitive:
        csvs: output/rates/crude_*monthly_std.csv

  # Standardise subgroup specific mortality rates
  standardise_subgroup_rates:
    run: r:latest analysis/subgroups_rates_standardise.R
    needs: [calculate_measures]
    outputs:
      moderately_sensitive:
        csvs: output/rates/*_monthly_std.csv
  # Process subgroup specific mortality rates
  process_subgroup_rates:
    run: r:latest analysis/utils/process_rates.R
    needs: [standardise_subgroup_rates]
    outputs:
      moderately_sensitive:
        csvs: output/rates/processed/*_monthly_std.csv
  # Calculate standardised rate ratios
  calculate_rate_ratios:
    run: r:latest analysis/subgroups_ratios.R
    needs: [standardise_subgroup_rates, process_subgroup_rates]
    outputs:
      moderately_sensitive:
        csvs: output/ratios/*.csv

  # Plot and save graphs depicting the crude rates
  visualise_crude_rates:
    run: r:latest analysis/crude_rates_visualise.R
    needs: [standardise_crude_rates]
    outputs:
      moderately_sensitive:
        pngs: output/figures/rates_crude/*.png

  # Plot and save graphs depicting the subgroup specific mortality rates
  visualise_subgroup_rates:
    run: r:latest analysis/subgroups_rates_visualise.R
    needs: [standardise_subgroup_rates, process_subgroup_rates]
    outputs:
      moderately_sensitive:
        pngs: output/figures/rates_subgroups/*.png

  # Plot and save graphs depicting the subgroup specific mortality ratios
  visualise_subgroup_ratios:
    run: r:latest analysis/subgroups_ratios_visualise.R
    needs: [calculate_rate_ratios]
    outputs:
      moderately_sensitive:
        pngs: output/figures/ratios_subgroups/*.png
  # SECOND PART OF STUDY
  generate_study_population_wave1:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_wave1
        --skip-existing
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_wave1.csv.gz

  generate_study_population_wave2:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_wave2
        --skip-existing
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_wave2.csv.gz

  generate_study_population_wave3:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_wave3
        --skip-existing
        --output-format=csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_wave3.csv.gz
  # Join data
  join_cohorts_waves:
    run: >
      cohort-joiner:v0.0.7
        --lhs output/input_wave*.csv.gz
        --rhs output/input_ethnicity.csv.gz
        --output-dir=output/joined
    needs: [generate_study_population_wave1, generate_study_population_wave2, generate_study_population_wave3, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_wave*.csv.gz

  # Process data
  process_data:
    run: r:latest analysis/data_process.R
    needs: [join_cohorts_waves]
    outputs:
      highly_sensitive:
        rds: output/processed/input_wave*.rds
  # Skim data
  skim_data_wave1:
    run: r:latest analysis/data_skim.R output/processed/input_wave1.rds output/data_properties
    needs: [process_data]
    outputs:
      moderately_sensitive:
        txt1: output/data_properties/input_wave1_skim.txt
        txt2: output/data_properties/input_wave1_coltypes.txt
        txt3: output/data_properties/input_wave1_tabulate.txt

  skim_data_wave2:
    run: r:latest analysis/data_skim.R output/processed/input_wave2.rds output/data_properties
    needs: [process_data]
    outputs:
      moderately_sensitive:
        txt1: output/data_properties/input_wave2_skim.txt
        txt2: output/data_properties/input_wave2_coltypes.txt
        txt3: output/data_properties/input_wave2_tabulate.txt

  skim_data_wave3:
    run: r:latest analysis/data_skim.R output/processed/input_wave3.rds output/data_properties
    needs: [process_data]
    outputs:
      moderately_sensitive:
        txt1: output/data_properties/input_wave3_skim.txt
        txt2: output/data_properties/input_wave3_coltypes.txt
        txt3: output/data_properties/input_wave3_tabulate.txt
  # Create table one
  create_table_one:
    run: r:latest analysis/table_one.R
    needs: [process_data]
    outputs:
      moderately_sensitive:
        html: output/tables/table1.html

  # Kaplan-Meier
  create_kaplan_meier:
    run: r:latest analysis/waves_kaplan_meier.R
    needs: [process_data]
    outputs:
      moderately_sensitive:
        pngs: output/figures/kaplan_meier/wave*_*.png

  # Cox PH models
  model_cox_ph:
    run: r:latest analysis/waves_model_survival.R
    needs: [process_data]
    outputs:
      moderately_sensitive:
        csvs1: output/tables/wave*_effect_estimates.csv
        csvs2: output/tables/wave*_ph_tests.csv
        csvs3: output/tables/wave*_log_file.csv

  # Create table two
  create_table_two:
    run: r:latest analysis/table_two.R
    needs: [model_cox_ph]
    outputs:
      moderately_sensitive:
        html: output/tables/table2.html
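Only four of these actions were requested (see the job request details below); the runner resolves each requested action's needs transitively. Read off the pipeline above, the dependency chain behind create_table_two, for example, is:

  create_table_two
    needs model_cox_ph
      needs process_data
        needs join_cohorts_waves
          needs generate_study_population_wave1, generate_study_population_wave2,
                generate_study_population_wave3, generate_study_population_ethnicity

With "Force run dependencies" set to No, dependencies that have already completed successfully are reused rather than re-run.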
Timeline
- Created:
- Started:
- Finished:
- Runtime: 35:14:59
These timestamps are generated and stored using the UTC timezone on the TPP backend.
Job request
- Status: Succeeded
- Backend: TPP
- Workspace: covid_mortality_over_time
- Requested by: Linda Nab
- Branch: main
- Force run dependencies: No
- Git commit hash: ce7c2f0
- Requested actions:
  - create_table_one
  - create_kaplan_meier
  - model_cox_ph
  - create_table_two
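Since "Force run dependencies" was No, only the four requested actions ran as jobs, matching the four job identifiers listed above; the existing outputs of their dependencies were reused. Outside the secure backend, the same actions can be run against dummy data with the OpenSAFELY CLI (assuming the opensafely-cli package is installed and the repository is checked out locally), e.g. opensafely run create_table_two from the repository root.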
Code comparison
Compare the code used in this job request