Job request: 14100
- Organisation: University of Liverpool
- Workspace: flucats
- ID: lygdlawaakyb7c3h
This page shows the technical details of what happened when the authorised researcher Louis Fisher requested one or more actions to be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each job's outputs were written to.
The output security levels are:
- highly_sensitive
  - Researchers can never directly view these outputs
  - Researchers can only request that code is run against them
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment
  - These are the only outputs that can be requested for public release via a controlled output review service
Jobs
- Job identifier: otgx5myn46jouccf
Pipeline
project.yaml:
version: '3.0'

expectations:
  population_size: 1000

actions:

  # this looks for whether each flucats var is ever recorded for each patient as of 2021-01-01
  generate_study_population_test:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_test --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input_test.csv.gz

  # this looks for whether each flucats var is recorded within a week of template code for each patient as of 2021-01-01
  generate_study_population_long_window:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_long_window --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input_long_window.csv.gz

  generate_study_population_1:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2020-03-01 to 2020-07-01 by month" --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input_*.csv.gz

  generate_study_population_2:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2020-08-01 to 2021-01-01 by month" --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input*.csv.gz

  generate_study_population_3:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2021-02-01 to 2021-03-01 by month" --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/inpu*.csv.gz

  # Gives until 2021-07-01. Only have ONS deaths until 2021-07-01
  generate_study_population_4:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2021-04-01 to 2021-06-01 by month" --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/inp*.csv.gz

  generate_study_population_end:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_end --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input_end.csv.gz

  join_cohorts_monthly:
    run: >
      cohort-joiner:v0.0.44
        --lhs output/input_20*.csv.gz
        --rhs output/input_end.csv.gz
        --output-dir output/joined
    needs: [
      generate_study_population_1,
      generate_study_population_2,
      generate_study_population_3,
      generate_study_population_4,
      generate_study_population_end]
    outputs:
      highly_sensitive:
        cohort: output/joined/input_20*.csv.gz

  generate_dataset_report:
    run: >
      dataset-report:v0.0.24
        --input-files output/joined/input_2021-06-01.csv.gz
        --output-dir output
    needs: [join_cohorts_monthly]
    outputs:
      moderately_sensitive:
        dataset_report: output/input_2021-06-01.html

  combined_input_files:
    run: python:latest python analysis/combine_input_files.py
    needs: [join_cohorts_monthly]
    outputs:
      highly_sensitive:
        attrition: output/joined/input_all_py.csv.gz

  column_counts:
    run: python:latest python analysis/test_column_counts.py
    needs: [generate_study_population_test, combined_input_files, generate_study_population_2]
    outputs:
      moderately_sensitive:
        counts: output/column_counts/combine*.csv

  generate_first_outputs:
    run: r:latest analysis/flucats_descriptive_basic.R
    needs: [combined_input_files]
    outputs:
      moderately_sensitive:
        attrition: output/attrition.csv
        histogram_age: output/age_hist.png
        date_plot: output/weekly_template.png
        flucat_tables: output/flucat*.csv
        sex_table: output/sex_table.csv
        region_table: output/region_table.csv

  compare_short_long_window:
    run: python:latest python analysis/compare_short_long_window.py
    needs: [generate_study_population_2, generate_study_population_long_window]
    outputs:
      moderately_sensitive:
        counts: output/compare_short_long_window.csv

  # check_codes:
  #   run: r:latest analysis/check_codes.R
  #   needs: [generate_study_population_2]
  #   outputs:
  #     moderately_sensitive:
  #       text: output/text.txt
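The analysis scripts referenced above (for example analysis/combine_input_files.py) are not included on this page. As a rough, non-authoritative sketch of what that step plausibly does, inferred only from the combined_input_files action's inputs (the joined monthly extracts) and its single declared output output/joined/input_all_py.csv.gz, it might look like the following; the variable names and the derived index_date column are assumptions for illustration, not taken from the project code.

# Hypothetical sketch only: the real analysis/combine_input_files.py is not shown on this page.
# Assumes the joined monthly extracts follow the pattern output/joined/input_<YYYY-MM-DD>.csv.gz,
# as implied by the pipeline globs above.
import glob
import pathlib

import pandas as pd

INPUT_GLOB = "output/joined/input_20*.csv.gz"                    # joined monthly cohorts
OUTPUT_FILE = pathlib.Path("output/joined/input_all_py.csv.gz")  # single combined file

frames = []
for path in sorted(glob.glob(INPUT_GLOB)):
    monthly = pd.read_csv(path, compression="gzip")
    # Record which monthly extract each row came from, using the date in the filename.
    monthly["index_date"] = pathlib.Path(path).name.removeprefix("input_").removesuffix(".csv.gz")
    frames.append(monthly)

combined = pd.concat(frames, ignore_index=True)
combined.to_csv(OUTPUT_FILE, index=False, compression="gzip")

If this sketch were accurate, the script would be executed inside the secure environment by the combined_input_files action above, and its output would remain highly_sensitive, i.e. never directly viewable by researchers.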
Timeline
- Created:
- Started:
- Finished:
- Runtime: 00:31:12
These timestamps are generated and stored using the UTC timezone on the EMIS backend.