Job request: 9132
- Organisation: Bennett Institute
- Workspace: bmi-short-data-report-segmented
- ID: 4xqyalr2ls5rwnyr
This page shows the technical details of what happened when the authorised researcher Robin Park requested one or more actions to be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each output was written to.
The output security levels are:
- highly_sensitive
  - Researchers can never directly view these outputs.
  - Researchers can only request that code be run against them.
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment.
  - These are the only outputs that can be requested for public release via a controlled output review service.
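The cross-referencing described above can be done mechanically: each action in project.yaml nests its output files under a security level. A minimal sketch, assuming PyYAML is available; the `output_levels` helper is hypothetical, not part of the OpenSAFELY tooling:

```python
import yaml

# Hypothetical helper: given a parsed project.yaml, list each output
# file alongside the security level it was written to.
def output_levels(project):
    levels = []
    for action, spec in project.get("actions", {}).items():
        for level, files in spec.get("outputs", {}).items():
            for name, path in files.items():
                levels.append((action, level, path))
    return levels

project = yaml.safe_load("""
actions:
  generate_study_population:
    run: cohortextractor:latest generate_cohort
    outputs:
      highly_sensitive:
        cohort: output/data/input.feather
""")
print(output_levels(project))
# → [('generate_study_population', 'highly_sensitive', 'output/data/input.feather')]
```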
Jobs
- Job identifier: wxqx2hpvtcjpzguv
- Job identifier: 4mwp2saxtjapae6c
- Job identifier: oojebibsw3tqevwy
- Job identifier: 6lryvysjrw2gpmqo
- Job identifier: jkz4chl3clqcxbr2
- Job identifier: 4n55xioti6nn3ho7
- Job identifier: jxev6t366pktagtz
- Job identifier: zu73ujdc2zhsxzve
- Job identifier: mwci2enw2gyvqieg
- Job identifier: 3wsjbz2adgq3p5zq
Pipeline
project.yaml:
version: '3.0'

expectations:
  population_size: 1000

actions:

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --output-dir=output/data --output-format feather
    outputs:
      highly_sensitive:
        cohort: output/data/input.feather

  generate_study_population_derived_bmi:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_derived_bmi --output-dir=output/data --output-format feather
    outputs:
      highly_sensitive:
        cohort: output/data/input_derived_bmi.feather

  generate_study_population_recorded_bmi:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_recorded_bmi --output-dir=output/data --output-format feather
    outputs:
      highly_sensitive:
        cohort: output/data/input_recorded_bmi.feather

  generate_study_population_snomed_hw:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_snomed_hw --output-dir=output/data --output-format feather
    outputs:
      highly_sensitive:
        cohort: output/data/input_snomed_hw.feather

  generate_study_population_ctv3_hw:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_ctv3_hw --output-dir=output/data --output-format feather
    outputs:
      highly_sensitive:
        cohort: output/data/input_ctv3_hw.feather

  preprocess_derived_bmi_input:
    run: python:latest python analysis/preprocess_bmi_inputs.py "derived_bmi" --output-format feather
    needs: [generate_study_population_derived_bmi]
    outputs:
      highly_sensitive:
        cohort_with_duration: output/data/input_processed_derived_bmi.feather

  preprocess_recorded_bmi_input:
    run: python:latest python analysis/preprocess_bmi_inputs.py "recorded_bmi" --output-format feather
    needs: [generate_study_population_recorded_bmi]
    outputs:
      highly_sensitive:
        cohort_with_duration: output/data/input_processed_recorded_bmi.feather

  preprocess_computed_bmi_input:
    run: python:latest python analysis/preprocess_hw_inputs.py "height" "weight" "snomed" "computed_bmi" --output-format feather
    needs: [generate_study_population_snomed_hw]
    outputs:
      highly_sensitive:
        cohort_with_duration: output/data/input_processed_computed_bmi.feather

  preprocess_backend_computed_bmi_input:
    run: python:latest python analysis/preprocess_hw_inputs.py "height_backend" "weight_backend" "ctv3" "backend_computed_bmi" --output-format feather
    needs: [generate_study_population_ctv3_hw]
    outputs:
      highly_sensitive:
        cohort_with_duration: output/data/input_processed_backend_computed_bmi.feather

  join_cohorts:
    run: >
      cohort-joiner:v0.0.35
        --lhs output/data/input_processed*.feather
        --rhs output/data/input.feather
        --output-dir output/joined
    needs: [generate_study_population, preprocess_derived_bmi_input, preprocess_recorded_bmi_input, preprocess_computed_bmi_input, preprocess_backend_computed_bmi_input]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_processed*.feather

  # execute_validation_analyses:
  #   run: python:latest python analysis/validation_script.py
  #   needs: [preprocess_inputs]
  #   outputs:
  #     moderately_sensitive:
  #       tables: output/phenotype_validation_bmi/tables/*.csv
  #       figures: output/phenotype_validation_bmi/figures/*.png

  # generate_report_bmi:
  #   run: python:latest jupyter nbconvert /workspace/notebooks/report_bmi.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
  #   needs: [execute_validation_analyses]
  #   outputs:
  #     moderately_sensitive:
  #       notebook: output/report_bmi.html
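The join_cohorts action uses cohort-joiner to attach each processed cohort to the base cohort. As a rough sketch only (not the actual cohort-joiner implementation), and assuming every file carries a patient_id column, each join is equivalent to a pandas left merge; the column names below are illustrative:

```python
import pandas as pd

# Sketch of the per-file join: each left-hand (processed) cohort is
# left-joined to the right-hand base cohort on patient_id, keeping
# every row of the processed cohort.
def join_on_patient_id(lhs: pd.DataFrame, rhs: pd.DataFrame) -> pd.DataFrame:
    return lhs.merge(rhs, on="patient_id", how="left")

# Illustrative data, standing in for the .feather files.
lhs = pd.DataFrame({"patient_id": [1, 2], "derived_bmi": [24.5, 31.0]})
rhs = pd.DataFrame({"patient_id": [1, 2], "sex": ["F", "M"]})
print(join_on_patient_id(lhs, rhs))
```

In the real pipeline this would run once per file matching output/data/input_processed*.feather, writing the result into output/joined/ under the same filename.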
Timeline
- Created:
- Started:
- Finished:
- Runtime: 47:21:52
These timestamps are generated and stored using the UTC timezone on the TPP backend.
Job request
- Status: Succeeded
- Backend: TPP
- Workspace: bmi-short-data-report-segmented
- Requested by: Robin Park
- Branch: separate-study-definitions
- Force run dependencies: Yes
- Git commit hash: 9575068
- Requested actions: run_all