Job request: 19798

Organisation:
Bennett Institute
Workspace:
opioids-covid-research
ID:
r5bkw6lcl6yoaz4d

This page shows the technical details of what happened when authorised researcher Andrea Schaffer requested one or more actions to be run against real patient data in the project, within a secure environment.

By cross-referencing the Requested Actions listed below with the Pipeline section, you can infer which security level each output was written at. Outputs marked as highly_sensitive can never be viewed directly by a researcher; researchers can only request that code be run against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher who logs into a highly secure environment. Only moderately_sensitive outputs can be requested for release to the public, via a controlled output review service.
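The sensitivity levels above are declared in the outputs section of each action in project.yaml. A minimal sketch of how the two levels typically appear (the action names and file paths here are hypothetical, shown only to illustrate the structure):

```yaml
actions:
  # Hypothetical extraction step: raw patient-level data is highly_sensitive,
  # so it can never be viewed directly, only consumed by downstream actions.
  example_extract:
    run: ehrql:v0 generate-dataset analysis/example.py --output output/example.csv.gz
    outputs:
      highly_sensitive:
        cohort: output/example.csv.gz

  # Hypothetical aggregation step: summary tables are moderately_sensitive,
  # so they can be viewed in the secure environment and requested for release.
  example_summary:
    run: r:latest analysis/summarise.R
    needs: [example_extract]
    outputs:
      moderately_sensitive:
        summary_csv: output/summary.csv
```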

Jobs

  • Action: measures_demo
    Status: Failed
    Job identifier: 4n6avkequyyrrno5
  • Action: measures_overall
    Status: Succeeded
    Job identifier: wxurjrgqmwcuoe3b
  • Action: measures_type
    Status: Succeeded
    Job identifier: dn4zvnwsnzovdhdj
  • Action: measures_carehome
    Status: Succeeded
    Job identifier: zh7enlpx2pwefwf2
  • Action: process_data_ts
    Status: Failed
    Job identifier: 6vvtg77fomxwgzwj
  • Action: rounding_ts
    Status: Failed
    Job identifier: 26llhsmjmof64nhi

Pipeline

project.yaml:
######################################
# This file defines the project pipeline -
# it specifies the execution order for all the code in this
# repo using a series of actions.
######################################

version: '3.0'

expectations:
  population_size: 10000

actions:

  generate_dataset_table:
    run: ehrql:v0 generate-dataset analysis/define_dataset_table.py 
      --output output/data/dataset_table.csv.gz
    outputs:
      highly_sensitive:
        cohort: output/data/dataset_table.csv.gz  

  # Measures - overall
  measures_overall:
    run: ehrql:v0 generate-measures analysis/measures_overall.py 
      --output output/measures/measures_overall.csv
      --
      --start-date "2018-01-01"
      --intervals 54
    outputs:
      moderately_sensitive:
        measure_csv: output/measures/measures_overall.csv

  # Measures - by demographic categories
  measures_demo:
    run: ehrql:v0 generate-measures analysis/measures_demo.py 
      --output output/measures/measures_demo.csv
      --
      --start-date "2018-01-01"
      --intervals 54
    outputs:
      moderately_sensitive:
        measure_csv: output/measures/measures_demo.csv

  # Measures - by opioid type
  measures_type:
    run: ehrql:v0 generate-measures analysis/measures_type.py 
      --output output/measures/measures_type.csv
      --
      --start-date "2018-01-01"
      --intervals 54
    outputs:
      moderately_sensitive:
        measure_csv: output/measures/measures_type.csv 

  # Measures - in people in care home
  measures_carehome:
    run: ehrql:v0 generate-measures analysis/measures_carehome.py 
      --output output/measures/measures_carehome.csv
      --
      --start-date "2018-01-01"
      --intervals 54
    outputs:
      moderately_sensitive:
        measure_csv: output/measures/measures_carehome.csv
        
  ## Process data - time series
  process_data_ts:
    run: r:latest analysis/process/process_data_ts.R
    needs: [measures_overall, measures_demo, measures_type, measures_carehome]
    outputs:
      moderately_sensitive:
        timeseries_csv: output/timeseries/ts_*.csv

  ## Time series - rounding
  rounding_ts:
    run: r:latest analysis/process/rounding_ts.R
    needs: [process_data_ts]
    outputs:
      moderately_sensitive:
        timeseries_csv: output/timeseries/ts*.csv

  ## Results table 
  table:
    run: r:latest analysis/descriptive/table_stand.R
    needs: [generate_dataset_table]
    outputs:
      moderately_sensitive:
        table: output/tables/table_*.csv

  
  # OLD COHORTEXTRACTOR CODE

  # ## Cohort data
  # generate_study_population_1:
  #   run: cohortextractor:latest generate_cohort
  #     --study-definition study_definition
  #     --index-date-range "2018-01-01 to 2018-12-01 by month" 
  #     --output-dir=output 
  #     --output-format=csv
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/input_*.csv

  # generate_study_population_2:
  #   run: cohortextractor:latest generate_cohort 
  #     --study-definition study_definition
  #     --index-date-range "2019-01-01 to 2019-12-01 by month" 
  #     --output-dir=output 
  #     --output-format=csv
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/input*.csv

  # generate_study_population_3:
  #   run: cohortextractor:latest generate_cohort 
  #     --study-definition study_definition
  #     --index-date-range "2020-01-01 to 2020-12-01 by month" 
  #     --output-dir=output 
  #     --output-format=csv
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/inpu*.csv

  # generate_study_population_4:
  #   run: cohortextractor:latest generate_cohort 
  #     --study-definition study_definition
  #     --index-date-range "2021-01-01 to 2021-12-01 by month" 
  #     --output-dir=output 
  #     --output-format=csv
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/inp*.csv
  
  # generate_study_population_5:
  #   run: cohortextractor:latest generate_cohort 
  #     --study-definition study_definition
  #     --index-date-range "2022-01-01 to 2022-03-01 by month" 
  #     --output-dir=output 
  #     --output-format=csv
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/in*.csv

  # ## Ethnicity      
  # generate_ethnicity_cohort:
  #   run: >
  #     cohortextractor:latest generate_cohort
  #       --study-definition study_definition_ethnicity
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/input_ethnicity.csv


  # # Data processing ----
  
  # ## Add ethnicity
  # join_cohorts:
  #   run: >
  #     cohort-joiner:v0.0.48
  #       --lhs output/input_*.csv
  #       --rhs output/input_ethnicity.csv
  #       --output-dir output/data
  #   needs: [generate_study_population_1,  generate_study_population_2, 
  #     generate_study_population_5, generate_study_population_3, 
  #     generate_study_population_4, generate_ethnicity_cohort]
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/data/input_*.csv 


  # ## Generate measures - full population
  # generate_measures:
  #   run: >
  #     cohortextractor:latest generate_measures 
  #       --study-definition study_definition
  #       --output-dir output/data
  #   needs: [join_cohorts]
  #   outputs:
  #     moderately_sensitive:
  #       measure_csv: output/data/measure_*.csv

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 129:28:56

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job information

Status
Failed
Backend
TPP
Requested by
Andrea Schaffer
Branch
main
Force run dependencies
No
Git commit hash
1a0879b
Requested actions
  • measures_overall
  • measures_demo
  • measures_type
  • measures_carehome
  • process_data_ts
  • rounding_ts
