Job request: 20422
- Organisation: The London School of Hygiene & Tropical Medicine
- Workspace: openprompt-cohort-profile
- ID: jahp67m3uypz4xg6
This page shows the technical details of what happened when the authorised researcher Alasdair Henderson requested that one or more actions be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each job's outputs were written to.
The output security levels are:
- highly_sensitive
  - Researchers can never directly view these outputs
  - Researchers can only request that code is run against them
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment
  - These are the only outputs that can be requested for public release via a controlled output review service.
Jobs
- Job identifier: cfkgodfprihvdwdx
- Job identifier: ay5pkcbngo6qrj32
- Job identifier: p63ly7gmzy2jv7lo
- Job identifier: uotog5ms6me7dyrk
- Job identifier: niz3dyipy2vduflb
- Job identifier: irsyqvnw7og3yela
- Job identifier: qswygjhmc5wcjq6g
- Job identifier: cqzzjdab7poprh4m
- Job identifier: adecxxrw63jsvkdg
- Job identifier: d67qlrcdgtl4hagi
- Job identifier: z7s5mdnzm5govjfz
Pipeline
project.yaml:
version: '3.0'
expectations:
  population_size: 1000
actions:
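  # Generate dummy tables from the dataset definition so the pipeline can be
  # developed and tested without access to real patient data.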
  create_dummy_data:
    run: >
      ehrql:v0
      create-dummy-tables
      analysis/dataset_definition.py output/dummydata
      --
      --day=0
    outputs:
      highly_sensitive:
        openprompt_dummy: output/dummydata/open_prompt.csv
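  # Post-process the automatically generated dummy data in R; later actions
  # point their --dummy-tables option at this edited copy.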
  edit_dummy_data:
    run: >
      r:latest
      analysis/dummy_data_editing/edit_automatic_dummy_data.R
    needs: [create_dummy_data]
    outputs:
      highly_sensitive:
        openprompt_dummy_edited: output/dummydata/dummy_edited/open_prompt.csv
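  # Extract survey response dates for all participants
  # (analysis/scrape_data_response_dates.py).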
  scrape_all_data:
    run: >
      ehrql:v0
      generate-dataset
      analysis/scrape_data_response_dates.py
      --output output/openprompt_all.csv
      --dummy-tables output/dummydata/dummy_edited
    needs: [edit_dummy_data]
    outputs:
      highly_sensitive:
        openprompt_all: output/openprompt_all.csv
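  # The four survey extractions below run the same dataset definition with
  # --day set to 0, 30, 60 and 90 and --window set to 5; the arguments after
  # the bare "--" are user-defined parameters passed to the dataset definition.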
  generate_openprompt_survey1:
    run: >
      ehrql:v0
      generate-dataset
      analysis/dataset_definition.py
      --output output/openprompt_survey1.csv
      --dummy-tables output/dummydata/dummy_edited
      --
      --day=0
      --window=5
    needs: [edit_dummy_data]
    outputs:
      highly_sensitive:
        openprompt_survey1: output/openprompt_survey1.csv
  generate_openprompt_survey2:
    run: >
      ehrql:v0
      generate-dataset
      analysis/dataset_definition.py
      --output output/openprompt_survey2.csv
      --dummy-tables output/dummydata/dummy_edited
      --
      --day=30
      --window=5
    needs: [edit_dummy_data]
    outputs:
      highly_sensitive:
        openprompt_survey2: output/openprompt_survey2.csv
  generate_openprompt_survey3:
    run: >
      ehrql:v0
      generate-dataset
      analysis/dataset_definition.py
      --output output/openprompt_survey3.csv
      --dummy-tables output/dummydata/dummy_edited
      --
      --day=60
      --window=5
    needs: [edit_dummy_data]
    outputs:
      highly_sensitive:
        openprompt_survey3: output/openprompt_survey3.csv
  generate_openprompt_survey4:
    run: >
      ehrql:v0
      generate-dataset
      analysis/dataset_definition.py
      --output output/openprompt_survey4.csv
      --dummy-tables output/dummydata/dummy_edited
      --
      --day=90
      --window=5
    needs: [edit_dummy_data]
    outputs:
      highly_sensitive:
        openprompt_survey4: output/openprompt_survey4.csv
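  # Extract linked TPP electronic health record data for the OpenPROMPT cohort.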
  extract_linked_tpp_info:
    run: >
      ehrql:v0
      generate-dataset
      analysis/add_tpp_data.py
      --output output/openprompt_linked_tpp.csv.gz
    outputs:
      highly_sensitive:
        linked_tpp_data: output/openprompt_linked_tpp.csv.gz
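  # Combine the survey extracts into a single raw dataset (highly sensitive) and
  # write data-property summaries and the Figure 1 panels as moderately
  # sensitive outputs.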
  datacombine_and_figure1:
    run: >
      r:latest analysis/data_import/001_data_combine.R
    needs: [scrape_all_data, generate_openprompt_survey1, generate_openprompt_survey2, generate_openprompt_survey3, generate_openprompt_survey4]
    outputs:
      highly_sensitive:
        openprompt_combined: output/openprompt_raw.csv.gz
      moderately_sensitive:
        openprompt_raw_skim: output/data_properties/op_raw_skim.txt
        openprompt_raw_tab: output/data_properties/op_raw_tabulate.txt
        openprompt_mapped_skim: output/data_properties/op_mapped_skim.txt
        openprompt_mapped_tab: output/data_properties/op_mapped_tabulate.txt
        raw_summ_base_s: output/data_properties/op_baseline_skim.txt
        raw_summ_base_t: output/data_properties/op_baseline_tabulate.txt
        raw_summ_survey1_s: output/data_properties/op_survey1_skim.txt
        raw_summ_survey1_t: output/data_properties/op_survey1_tabulate.txt
        raw_summ_survey2_s: output/data_properties/op_survey2_skim.txt
        raw_summ_survey2_t: output/data_properties/op_survey2_tabulate.txt
        raw_summ_survey3_s: output/data_properties/op_survey3_skim.txt
        raw_summ_survey3_t: output/data_properties/op_survey3_tabulate.txt
        raw_summ_survey4_s: output/data_properties/op_survey4_skim.txt
        raw_summ_survey4_t: output/data_properties/op_survey4_tabulate.txt
        check_days_after_baseline: output/data_properties/sample_day_lags.pdf
        p1a_jpeg: output/plots/p1a_index_dates.jpeg
        p1a_tiff: output/plots/p1a_index_dates.tiff
        p1b_jpeg: output/plots/p1b_recorded_question_responses.jpeg
        p1b_tiff: output/plots/p1b_recorded_question_responses.tiff
        p1c_jpeg: output/plots/p1c_anyresponse_hist.jpeg
        p1c_tiff: output/plots/p1c_anyresponse_hist.tiff
        p1d_jpeg: output/plots/p1d_ltfu.jpeg
        p1d_tiff: output/plots/p1d_ltfu.tiff
        p1_jpeg: output/plots/p1_fup.jpeg
        p1_tiff: output/plots/p1_fup.tiff
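  # Import the linked TPP extract, writing an edited copy plus skim/tabulate
  # summaries of its properties.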
  import_linked_tpp:
    run:
      r:latest analysis/data_import/002_import_linked_tpp.R
    needs: [extract_linked_tpp_info]
    outputs:
      highly_sensitive:
        linked_tpp_data_edited: output/openprompt_linked_tpp_edited.csv.gz
      moderately_sensitive:
        openprompt_tpp_skim: output/data_properties/op_tpp_skim.txt
        openprompt_tpp_tab: output/data_properties/op_tpp_tabulate.txt
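  # Produce the baseline characteristics table (Table 1) and its underlying
  # statistics from the combined and linked data.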
  export_table1_stats:
    run:
      r:latest analysis/baseline_descriptive/001-table1stats.R
    needs: [datacombine_and_figure1, import_linked_tpp]
    outputs:
      moderately_sensitive:
        table1: output/tables/table1.html
        table1_stats: output/tables/table1_stats.csv
# Then need to run analysis/baseline_descriptive/002-map-making.R to produce the Figure 2 for the paper
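The dataset definitions referenced by these actions are not shown on this page. As a rough illustration only, the sketch below shows one way a definition like analysis/dataset_definition.py could pick up the --day and --window parameters passed after the bare "--". The import path, table, columns and argument-handling pattern are assumptions for illustration, not the study's actual code.

# Hypothetical sketch, not the study's code: how a parameterised ehrQL dataset
# definition might read the "--day" and "--window" values set in project.yaml.
import argparse
import sys

from ehrql import create_dataset
# Import path for ehrql:v0 is assumed; the standard TPP patients table is used
# purely as a placeholder.
from ehrql.tables.beta.tpp import patients

# How the post-"--" arguments reach the definition is also an assumption here;
# parse_known_args ignores anything else present on the command line.
parser = argparse.ArgumentParser()
parser.add_argument("--day", type=int, default=0)
parser.add_argument("--window", type=int, default=5)
args, _ = parser.parse_known_args(sys.argv[1:])

dataset = create_dataset()

# Placeholder population definition; the real study population, and the survey
# variables selected around day `args.day` within `args.window` days, live in
# the actual analysis/dataset_definition.py.
dataset.define_population(patients.date_of_birth.is_not_null())
dataset.sex = patients.sex

Each generate_openprompt_surveyN action would then write its extract to output/openprompt_surveyN.csv via the --output option shown in the pipeline above.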
Timeline
- Created:
- Started:
- Finished:
- Runtime: 02:28:07
These timestamps are generated and stored using the UTC timezone on the TPP backend.
Job request
- Status: Succeeded
- Backend: TPP
- Workspace: openprompt-cohort-profile
- Requested by: Alasdair Henderson
- Branch: main
- Force run dependencies: Yes
- Git commit hash: 0dcf949
- Requested actions:
  - run_all