Job request: 6489
- Organisation: The UK Renal Registry
- Workspace: renal_sdr
- ID: kn5zhvkw6s3sqtdw
This page shows the technical details of what happened when the authorised researcher Louis Fisher requested one or more actions to be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer which security level the outputs were written to (a short sketch of how to do this programmatically follows the list below).
The output security levels are:
- highly_sensitive
  - Researchers can never directly view these outputs
  - Researchers can only request that code be run against them
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment
  - These are the only outputs that can be requested for public release via a controlled output review service.
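As a rough illustration of that cross-referencing, the sketch below reads the project.yaml shown in the Pipeline section and prints each action's declared output patterns alongside their security level. It assumes the file has been saved locally as project.yaml and that PyYAML is installed; neither is part of this page.

# Sketch: list each action's declared outputs by security level.
# Assumes the project.yaml from the Pipeline section has been saved
# locally and that PyYAML is available.
import yaml

with open("project.yaml") as f:
    project = yaml.safe_load(f)

for action, spec in project["actions"].items():
    for level, outputs in spec.get("outputs", {}).items():
        for name, pattern in outputs.items():
            print(f"{action}: {name} = {pattern} [{level}]")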
Jobs
- Job identifier: dfasntpr2fz4nukd
- Job identifier: k5bgvdh62rdwus2o
- Job identifier: w2flcmcgvxnwes2p
- Job identifier: 2dtz5gpkmdf6yjua
Pipeline
project.yaml:
version: "3.0"

expectations:
  population_size: 1000

actions:

  # Monthly extracts of the main study population (Jan 2019 to Feb 2022)
  generate_study_population:
    run: cohortextractor:latest generate_cohort
      --study-definition study_definition
      --index-date-range "2019-01-01 to 2022-02-01 by month"
      --output-format csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input*.csv.gz

  # Monthly extracts for the CKD study definition
  generate_study_population_ckd:
    run: cohortextractor:latest generate_cohort
      --study-definition study_definition_ckd
      --index-date-range "2019-01-01 to 2022-02-01 by month"
      --output-format csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_ckd*.csv.gz

  # One-off ethnicity extract (no index date range)
  generate_study_population_ethnicity:
    run: cohortextractor:latest generate_cohort
      --study-definition study_definition_ethnicity
      --output-dir=output
      --output-format csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input_ethnicity.csv.gz

  # Join the ethnicity extract onto each monthly extract
  join_cohorts:
    run: >
      cohort-joiner:v0.0.9
      --lhs output/input_20*.csv.gz
      --rhs output/input_ethnicity.csv.gz
      --output-dir output/joined
    needs: [generate_study_population, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_20*.csv.gz

  # Join the ethnicity extract onto each monthly CKD extract
  join_cohorts_ckd:
    run: >
      cohort-joiner:v0.0.9
      --lhs output/input_ckd_20*.csv.gz
      --rhs output/input_ethnicity.csv.gz
      --output-dir output/joined
    needs: [generate_study_population_ckd, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_ckd_20*.csv.gz

  # Compute the measures defined in study_definition from the joined extracts
  generate_measures:
    run: cohortextractor:latest generate_measures
      --study-definition study_definition
      --output-dir=output/joined
    needs: [join_cohorts]
    outputs:
      moderately_sensitive:
        measure_csv: output/joined/measure_*_rate.csv

  # Compute the measures defined in study_definition_ckd
  generate_measures_ckd:
    run: cohortextractor:latest generate_measures
      --study-definition study_definition_ckd
      --output-dir=output/joined
    needs: [join_cohorts_ckd]
    outputs:
      moderately_sensitive:
        measure_csv: output/joined/measure*_rate.csv

  # Produce count CSVs from the joined cohort
  get_counts:
    run: python:latest python analysis/combine_operators.py
    needs: [join_cohorts]
    outputs:
      moderately_sensitive:
        counts: output/*_coun*.csv

  # Plot the measures as JPEG figures
  generate_plots:
    run: python:latest python analysis/plot_measures.py
    needs: [generate_measures]
    outputs:
      moderately_sensitive:
        counts: output/figures/plot_*.jpeg

  # Render the analysis notebook to HTML
  generate_notebook:
    run: jupyter:latest jupyter nbconvert /workspace/analysis/report.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [generate_plots, get_counts]
    outputs:
      moderately_sensitive:
        notebook: output/report.html
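To make the glob patterns above concrete: the index date range "2019-01-01 to 2022-02-01 by month" yields one dated extract per month, which join_cohorts picks up via output/input_20*.csv.gz and left-joins against the single ethnicity extract; join_cohorts_ckd does the same for output/input_ckd_20*.csv.gz. The sketch below reproduces that idea with pandas. The dated filename pattern (output/input_2019-01-01.csv.gz and so on) and the patient_id join key are assumptions made for illustration; inside the pipeline this step is performed by cohortextractor and cohort-joiner, not by this code.

# Illustrative sketch only: approximate the monthly extract filenames and
# the join_cohorts step. The dated filename pattern and the patient_id
# join key are assumptions, not taken from this page.
from pathlib import Path

import pandas as pd

# One extract per index date in "2019-01-01 to 2022-02-01 by month"
months = pd.date_range("2019-01-01", "2022-02-01", freq="MS")
monthly_files = [Path(f"output/input_{d:%Y-%m-%d}.csv.gz") for d in months]

Path("output/joined").mkdir(parents=True, exist_ok=True)
ethnicity = pd.read_csv("output/input_ethnicity.csv.gz")

for path in monthly_files:
    cohort = pd.read_csv(path)
    # Left-join the ethnicity columns onto each monthly extract, mirroring
    # what cohort-joiner writes to output/joined/
    joined = cohort.merge(ethnicity, on="patient_id", how="left")
    joined.to_csv(Path("output/joined") / path.name, index=False)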
Timeline
- Created:
- Started:
- Finished:
- Runtime: 17:57:24
These timestamps are generated and stored using the UTC timezone on the TPP backend.