Job request: 5273
- Organisation: Bennett Institute
- Workspace: hba1c-levels-report
- ID: 6lq5lvolabpxvz74
This page shows the technical details of what happened when the authorised researcher Robin Park requested one or more actions to be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can work out which security level each job's outputs were written to. For example, the pipeline shows that the requested generate_elev_predm_inputs action writes only moderately_sensitive outputs.
The output security levels are:
- highly_sensitive
  - Researchers can never directly view these outputs.
  - Researchers can only request that code be run against them.
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment.
  - These are the only outputs that can be requested for public release via a controlled output review service.
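Schematically, each action in a project.yaml pipeline labels every file it produces with one of these two levels (the output names and paths below are illustrative, not taken from this job request):

outputs:
  highly_sensitive:        # stays inside the secure backend
    cohort: output/data/input_*.csv
  moderately_sensitive:    # reviewable in the secure environment
    summary: output/summary.csv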
Jobs
- Job identifier: efcwsfvzae5b2kd5
Pipeline
project.yaml:
version: '3.0'

expectations:
  population_size: 10000

actions:

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_all_patients --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_all_patients_*.csv

  generate_study_population_elev_predm:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_elev_predm --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_elev_predm_*.csv

  generate_study_population_median:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_median --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_median_*.csv

  generate_study_population_ethnicity:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_ethnicity --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_ethnicity.csv

  join_ethnicity_all_patients:
    run: python:latest python analysis/join_ethnicity.py "input_all_patients"
    needs: [generate_study_population, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_all_patients*.csv

  join_ethnicity_elev_predm:
    run: python:latest python analysis/join_ethnicity.py "input_elev_predm"
    needs: [generate_study_population_elev_predm, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_elev_predm*.csv

  join_ethnicity_median:
    run: python:latest python analysis/join_ethnicity.py "input_median"
    needs: [generate_study_population_median, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_median*.csv

  calculate_measures:
    run: cohortextractor:latest generate_measures --study-definition study_definition_all_patients --output-dir=output/data
    needs: [join_ethnicity_all_patients]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure_*.csv

  redact_measures:
    run: python:latest python analysis/redact_measures.py
    needs: [calculate_measures]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure*.csv

  generate_data_description:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/data_description.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [join_ethnicity_all_patients]
    outputs:
      moderately_sensitive:
        notebook: output/data_description.html

  generate_change_inputs:
    run: python:latest python analysis/mean_change_input.py
    needs: [join_ethnicity_median]
    outputs:
      moderately_sensitive:
        cohort: output/data/calc_chg_t2dm*.csv

  generate_median_inputs:
    run: python:latest python analysis/median_input.py
    needs: [join_ethnicity_median]
    outputs:
      moderately_sensitive:
        cohort: output/data/calc_med_t2dm*.csv

  generate_elev_predm_inputs:
    run: python:latest python analysis/elev_predm_input.py
    needs: [join_ethnicity_elev_predm]
    outputs:
      moderately_sensitive:
        cohorts1: output/data/calc_t2dm_elev*.csv
        cohorts2: output/data/calc_predm*.csv

  generate_charts:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/charts.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [redact_measures, generate_change_inputs, generate_median_inputs]
    outputs:
      moderately_sensitive:
        notebook: output/charts.html

  generate_charts_elev_predm:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/charts_elev_predm.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [redact_measures, generate_elev_predm_inputs]
    outputs:
      moderately_sensitive:
        notebook: output/charts_elev_predm.html

  generate_tables:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/tables.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [join_ethnicity_all_patients]
    outputs:
      moderately_sensitive:
        notebook: output/tables.html
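The join_ethnicity_* actions each run analysis/join_ethnicity.py with a filename prefix, merging the single ethnicity extract onto the corresponding monthly cohort files, then overwriting them so the output glob still matches. The script itself is not shown on this page; the following is a minimal sketch of that shape, assuming the files share a patient_id column and the ethnicity extract carries an ethnicity column (both names are assumptions, not confirmed here):

# Hypothetical sketch of analysis/join_ethnicity.py -- the real script is
# not shown on this page; column names are illustrative assumptions.
import sys
from pathlib import Path

import pandas as pd

def main(prefix: str) -> None:
    data_dir = Path("output/data")
    # Single ethnicity extract produced by generate_study_population_ethnicity
    ethnicity = pd.read_csv(data_dir / "input_ethnicity.csv")
    # One cohort file per index date, e.g. input_all_patients_2019-01-01.csv
    for path in sorted(data_dir.glob(f"{prefix}_*.csv")):
        cohort = pd.read_csv(path)
        joined = cohort.merge(
            ethnicity[["patient_id", "ethnicity"]],  # assumed column names
            on="patient_id",
            how="left",
        )
        # Overwrite in place so the action's output glob matches its input glob
        joined.to_csv(path, index=False)

if __name__ == "__main__":
    main(sys.argv[1])

Similarly, redact_measures runs analysis/redact_measures.py over the measure CSVs produced by calculate_measures before they are treated as reviewable, moderately_sensitive outputs. The actual redaction rule is not shown; a common approach in this setting is to suppress small counts before results leave the secure environment. A sketch under that assumption, with an illustrative threshold and column names (in practice, measure files name their count columns after the measure definition):

# Hypothetical sketch of analysis/redact_measures.py -- threshold and
# column names are assumptions, not taken from this job request.
from pathlib import Path

import pandas as pd

REDACTION_THRESHOLD = 5  # assumed cut-off; the real rule is not shown here

def redact_file(path: Path) -> None:
    df = pd.read_csv(path)
    count_cols = [c for c in ("numerator", "denominator") if c in df.columns]
    if not count_cols:
        return
    # Blank out rows whose counts are small enough to be disclosive
    small = df[count_cols].le(REDACTION_THRESHOLD).any(axis=1)
    redact_cols = count_cols + [c for c in ("value",) if c in df.columns]
    df.loc[small, redact_cols] = None
    df.to_csv(path, index=False)

def main() -> None:
    for path in sorted(Path("output/data").glob("measure_*.csv")):
        redact_file(path)

if __name__ == "__main__":
    main()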
Timeline
- Created:
- Started:
- Finished:
- Runtime: 00:16:27
These timestamps are generated and stored using the UTC timezone on the TPP backend.
Job request
- Status: Succeeded
- Backend: TPP
- Workspace: hba1c-levels-report
- Requested by: Robin Park
- Branch: master
- Force run dependencies: No
- Git commit hash: 5d49208
- Requested actions: generate_elev_predm_inputs