This page shows the technical details of what happened when authorised researcher Alexandra Benson requested one or more actions to be run against real patient data in the project, within a secure environment.
By cross-referencing the indicated Requested Actions with the Pipeline section below, you can infer which security level each output was written to. Outputs marked as highly_sensitive can never be viewed directly by a researcher; researchers can only request that code runs against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher by logging into a highly secure environment. Only outputs marked as moderately_sensitive can be requested for release to the public, via a controlled output review service.
Jobs
- Job identifier: o4oek7xnq6uasu2a
- Job identifier: 5fjfikghncigxp6x
- Job identifier: 56tx5p6vgqzckfta
- Job identifier: dmg6k3frh47q7243
- Job identifier: 4x6ixzx4izdxtzkv
- Job identifier: 6qcn56rnvn4ltw37
- Job identifier: uknysdyir7edjbjd
- Job identifier: wxhkob5uvql4otbl
- Job identifier: lwq2ykvc6e4gfjkt
- Job identifier: cbn5c2dkxk7betso
- Job identifier: hezs2z5cpthjxodz
- Job identifier: nishigofgwlxtaok
- Job identifier: 7ning4ugs6szn7ov
- Job identifier: hy76gw4uv7fnapmn
- Job identifier: ubx46myhuzwsrv2h
- Job identifier: a7joaeifwvh3xgum
- Job identifier: b3ucdgdt74gthhio
- Job identifier: ocfs3bizx7gzsaaz
- Job identifier: idgzz7qtunu2l54m
Pipeline
Show project.yaml
version: "3.0"

expectations:
  population_size: 1000

actions:
  generate_study_definition_static:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_static
    outputs:
      highly_sensitive:
        cohort: output/input_static.csv

  # Oximetry
  generate_study_population_oximetry:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_oximetry --index-date-range "2019-04-01 to 2022-05-12 by week"
    outputs:
      highly_sensitive:
        oximetry_cohort: output/input_oximetry*.csv

  join_cohorts_oximetry:
    run: cohort-joiner:v0.0.30 --lhs output/input_oximetry*.csv --rhs output/input_static.csv --output-dir output/completed
    needs:
      [generate_study_population_oximetry, generate_study_definition_static]
    outputs:
      highly_sensitive:
        cohort: output/completed/input_oximetry*.csv

  # BP
  generate_study_population_bp:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_bp --index-date-range "2019-04-01 to 2022-05-12 by week"
    outputs:
      highly_sensitive:
        bp_cohort: output/input_bp*.csv

  join_cohorts_bp:
    run: cohort-joiner:v0.0.30 --lhs output/input_bp*.csv --rhs output/input_static.csv --output-dir output/completed
    needs: [generate_study_population_bp, generate_study_definition_static]
    outputs:
      highly_sensitive:
        cohort: output/completed/input_bp*.csv

  # Proactive
  generate_study_population_proactive:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_proactive --index-date-range "2019-04-01 to 2022-05-12 by week"
    outputs:
      highly_sensitive:
        proactive_cohort: output/input_proactive*.csv

  join_cohorts_proactive:
    run: cohort-joiner:v0.0.30 --lhs output/input_proactive*.csv --rhs output/input_static.csv --output-dir output/completed
    needs:
      [generate_study_population_proactive, generate_study_definition_static]
    outputs:
      highly_sensitive:
        cohort: output/completed/input_proactive*.csv

  # Analysis
  generate_oximetry_timeseries:
    run: python:latest python analysis/analysis_oximetry_timeseries.py
    needs: [join_cohorts_oximetry]
    outputs:
      moderately_sensitive:
        oximetry_table_counts: output/oximetry_table_counts.csv
        oximetry_plot_timeseries: output/oximetry_plot_timeseries.png

  generate_oximetry_regional_timeseries:
    run: python:latest python analysis/analysis_oximetry_region.py
    needs: [join_cohorts_oximetry]
    outputs:
      moderately_sensitive:
        oximetry_table_counts_region: output/oximetry_table_counts_*.csv
        oximetry_plot_timeseries_region: output/oximetry_plot_timeseries_region_*.png

  generate_oximetry_breakdowns:
    run: python:latest python analysis/analysis_oximetry_breakdowns.py
    needs: [join_cohorts_oximetry]
    outputs:
      moderately_sensitive:
        oximetry_table_breakdowns: output/oximetry_table_code_*.csv
        oximetry_plot_breakdowns: output/oximetry_plot_code_*.png

  generate_oximetry_codes_analysis:
    run: python:latest python analysis/analysis_oximetry_codes.py
    needs: [join_cohorts_oximetry]
    outputs:
      moderately_sensitive:
        oximetry_table_code_counts: output/oximetry_table_code_counts_*.csv
        oximetry_table_code_combinations: output/oximetry_table_code_combinations.csv

  generate_bp_timeseries:
    run: python:latest python analysis/analysis_bp_timeseries.py
    needs: [join_cohorts_bp]
    outputs:
      moderately_sensitive:
        bp_table_counts: output/bp_table_counts.csv
        bp_plot_timeseries: output/bp_plot_timeseries.png

  generate_bp_regional_timeseries:
    run: python:latest python analysis/analysis_bp_region.py
    needs: [join_cohorts_bp]
    outputs:
      moderately_sensitive:
        bp_table_counts_region: output/bp_table_counts_*.csv
        bp_plot_timeseries_region: output/bp_plot_timeseries_region_*.png

  generate_bp_breakdowns:
    run: python:latest python analysis/analysis_bp_breakdowns.py
    needs: [join_cohorts_bp]
    outputs:
      moderately_sensitive:
        bp_table_breakdowns: output/bp_table_code_*.csv
        bp_plot_breakdowns: output/bp_plot_code_*.png

  generate_bp_codes_analysis:
    run: python:latest python analysis/analysis_bp_codes.py
    needs: [join_cohorts_bp]
    outputs:
      moderately_sensitive:
        bp_table_code_counts: output/bp_table_code_counts_*.csv
        bp_table_code_combinations: output/bp_table_code_combinations.csv

  generate_proactive_timeseries:
    run: python:latest python analysis/analysis_proactive_timeseries.py
    needs: [join_cohorts_proactive]
    outputs:
      moderately_sensitive:
        proactive_table_counts: output/proactive_table_counts.csv
        proactive_plot_timeseries: output/proactive_plot_timeseries.png

  generate_proactive_regional_timeseries:
    run: python:latest python analysis/analysis_proactive_region.py
    needs: [join_cohorts_proactive]
    outputs:
      moderately_sensitive:
        proactive_table_counts_region: output/proactive_table_counts_*.csv
        proactive_plot_timeseries_region: output/proactive_plot_timeseries_region_*.png

  generate_proactive_breakdowns:
    run: python:latest python analysis/analysis_proactive_breakdowns.py
    needs: [join_cohorts_proactive]
    outputs:
      moderately_sensitive:
        proactive_table_breakdowns: output/proactive_table_code_*.csv
        proactive_plot_breakdowns: output/proactive_plot_code_*.png

  generate_proactive_codes_analysis:
    run: python:latest python analysis/analysis_proactive_codes.py
    needs: [join_cohorts_proactive]
    outputs:
      moderately_sensitive:
        proactive_table_code_counts: output/proactive_table_code_counts_*.csv
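The needs: entries in the pipeline above form a dependency graph: each join action waits for its two extraction actions, and each analysis action waits for its join. As an illustrative sketch only (not part of the project), the oximetry branch's execution order can be derived with Python's standard-library graphlib; the action names are taken verbatim from the pipeline, everything else is scaffolding:

```python
from graphlib import TopologicalSorter

# "needs" relationships for the oximetry branch of project.yaml
needs = {
    "generate_study_definition_static": [],
    "generate_study_population_oximetry": [],
    "join_cohorts_oximetry": [
        "generate_study_population_oximetry",
        "generate_study_definition_static",
    ],
    "generate_oximetry_timeseries": ["join_cohorts_oximetry"],
}

# A valid execution order: both extractions first, then the join,
# then the analysis action that reads the joined cohort.
order = list(TopologicalSorter(needs).static_order())
print(order)
```

The same construction applies to the bp and proactive branches, which have identical shapes.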
Timeline
- Created:
- Started:
- Finished:
- Runtime: 43:17:35
These timestamps are generated and stored using the UTC timezone on the backend.
Job information
- Status: Failed
  Job exited with error code 137: Ran out of memory (limit for this job was 60 GB)
- Backend: TPP
- Workspace: nhs_at_home_main
- Requested by: Alexandra Benson
- Branch: main
- Force run dependencies: Yes
- Git commit hash: b963e80
- Requested actions: run_all
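Exit code 137 is the conventional Unix status for a process terminated by SIGKILL (128 + signal number 9), which is how out-of-memory enforcement typically stops a job that exceeds its limit. A quick shell sketch of the convention (generic POSIX behaviour, not specific to this backend):

```shell
# A process killed with SIGKILL (signal 9) exits with status 128 + 9 = 137.
sh -c 'kill -KILL $$'
echo $?    # 137
```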