Job request: 4728
- Organisation: Bennett Institute
- Workspace: hba1c-levels-report
- ID: xgamhz3nccchrc2p
This page shows the technical details of what happened when the authorised researcher Robin Park requested one or more actions to be run against real patient data in the project, within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer what security level various outputs were written to. Researchers can never directly view outputs marked as highly_sensitive; they can only request that code runs against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher by logging into a highly secure environment. Only outputs marked as moderately_sensitive can be requested for release to the public, via a controlled output review service.
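These levels are declared per output in the project's pipeline definition. Schematically (the action and file names here are illustrative only, not taken from this project):

```yaml
actions:
  example_action:
    run: python:latest python analysis/example.py
    outputs:
      highly_sensitive:        # patient-level data; never directly viewable
        cohort: output/input_example_*.csv
      moderately_sensitive:    # aggregated data; viewable in the secure environment
        table: output/example_counts.csv
```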
Pipeline
project.yaml:
version: '3.0'

expectations:
  population_size: 10000

actions:

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_all_patients --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_all_patients_*.csv

  generate_study_population_median:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_median --index-date-range "2019-01-01 to 2021-06-01 by month" --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_median_*.csv

  generate_study_population_ethnicity:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_ethnicity --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_ethnicity.csv

  join_ethnicity_all_patients:
    run: python:latest python analysis/join_ethnicity.py "input_all_patients"
    needs: [generate_study_population, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_all_patients*.csv

  join_ethnicity_median:
    run: python:latest python analysis/join_ethnicity.py "input_median"
    needs: [generate_study_population_median, generate_study_population_ethnicity]
    outputs:
      highly_sensitive:
        cohort: output/data/input_median*.csv

  calculate_measures:
    run: cohortextractor:latest generate_measures --study-definition study_definition_all_patients --output-dir=output/data
    needs: [join_ethnicity_all_patients]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure_*.csv

  redact_measures:
    run: python:latest python analysis/redact_measures.py
    needs: [calculate_measures]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure*.csv

  generate_data_description:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/data_description.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [join_ethnicity_all_patients]
    outputs:
      moderately_sensitive:
        notebook: output/data_description.html

  generate_change_inputs:
    run: python:latest python analysis/mean_change_input.py
    needs: [join_ethnicity_median]
    outputs:
      moderately_sensitive:
        cohort: output/data/calc_chg_t2dm*.csv

  generate_median_inputs:
    run: python:latest python analysis/median_input.py
    needs: [join_ethnicity_median]
    outputs:
      moderately_sensitive:
        cohort: output/data/calc_med_t2dm*.csv

  generate_charts:
    run: jupyter:latest jupyter nbconvert /workspace/notebooks/charts.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [redact_measures, generate_change_inputs, generate_median_inputs]
    outputs:
      moderately_sensitive:
        notebook: output/charts.html

  generate_validity_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_east --index-date-range "2017-01-01 to 2021-06-01 by month" --output-dir=output/data
    outputs:
      highly_sensitive:
        cohort: output/data/input_east_*.csv

  calculate_validity_measures:
    run: cohortextractor:latest generate_measures --study-definition study_definition_east --output-dir=output/data
    needs: [generate_validity_population]
    outputs:
      moderately_sensitive:
        measure_csv: output/data/measure_avg_*.csv
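The redact_measures action runs analysis/redact_measures.py, whose source is not shown on this page. In OpenSAFELY pipelines a step like this typically applies small-number suppression before measure files are released at the moderately_sensitive level. A minimal sketch of that idea, using only the standard library; the threshold and column handling here are assumptions for illustration, not the project's actual code:

```python
# Hypothetical sketch of a small-number suppression step, in the spirit of
# analysis/redact_measures.py (real source not shown on this page).

REDACTION_THRESHOLD = 6  # assumed cut-off; the project's actual value is unknown

def redact_row(row, count_columns):
    """Blank any count column whose value is a small non-zero number.

    `row` is a dict such as csv.DictReader produces; suppressed cells are
    replaced with an empty string so counts of 1..REDACTION_THRESHOLD
    cannot be recovered from the released file.
    """
    for col in count_columns:
        value = row.get(col)
        if value not in (None, ""):
            if 0 < float(value) <= REDACTION_THRESHOLD:
                row[col] = ""
    return row
```

Applied to every row of each measure_*.csv before it is written back, this keeps aggregate shapes intact while removing disclosive small cells.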
Timeline
- Created:
- Started:
- Finished:
- Runtime:
These timestamps are generated and stored using the UTC timezone on the TPP backend.
Job information
- Status: Failed (JobRequestError: Internal error)
- Backend: TPP
- Workspace: hba1c-levels-report
- Requested by: Robin Park
- Branch: master
- Force run dependencies: No
- Git commit hash: 7e08755
- Requested actions:
  - generate_validity_population
  - calculate_validity_measures
Code comparison
Compare the code used in this Job Request