Job request: 13847

Organisation:
University of Liverpool
Workspace:
flucats
ID:
2roq64niy3iusp3c

This page shows the technical details of what happened when the authorised researcher Louis Fisher requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer what security level the outputs were written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs
    • Researchers can only request that code is run against them
  • moderately_sensitive
    • Can be viewed by an approved researcher by logging into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service
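In `project.yaml`, each action declares its outputs under one of these two levels; the pipeline section below shows the real declarations for this workspace. A minimal, hypothetical action illustrating both levels (the action name, script, and file names here are invented for illustration):

```yaml
my_action:
  run: python:latest python analysis/my_script.py
  outputs:
    highly_sensitive:        # patient-level data; never viewed directly
      cohort: output/input.csv.gz
    moderately_sensitive:    # aggregated results; reviewable for release
      table: output/counts.csv
```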

Jobs

  • Action:
    combined_input_files
    Status:
    Failed
    Job identifier:
    o6v6iqvlhnl3djhv
    Error:
    nonzero_exit: Job exited with an error: Ran out of memory (limit for this job was 64.00GB)
  • Action:
    generate_first_outputs
    Status:
    Failed
    Job identifier:
    zb7c2bl5ol3fvnxf
    Error:
    dependency_failed: Not starting as dependency failed
  • Action:
    column_counts
    Status:
    Failed
    Job identifier:
    b4vyoxwjvs3s2xpr
    Error:
    dependency_failed: Not starting as dependency failed
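The root failure above is an out-of-memory error in `combined_input_files`: combining eleven monthly `input_*.csv.gz` extracts exceeded the job's 64 GB limit, and the two downstream jobs were skipped because their dependency failed. The actual `analysis/combine_input_files.py` is not shown on this page, so the sketch below is an assumption about its intent: a streaming alternative that appends each monthly file to the combined output one row at a time, keeping peak memory flat regardless of how many extracts there are.

```python
import csv
import glob
import gzip


def combine_input_files(pattern, out_path):
    """Append every gzipped CSV matching `pattern` into one gzipped CSV.

    Rows are streamed one at a time, so only a single row is ever held
    in memory. The header is written once, taken from the first file;
    all files are assumed to share the same columns.
    """
    paths = sorted(glob.glob(pattern))
    with gzip.open(out_path, "wt", newline="") as out_f:
        writer = None
        for path in paths:
            with gzip.open(path, "rt", newline="") as in_f:
                reader = csv.DictReader(in_f)
                if writer is None:
                    writer = csv.DictWriter(out_f, fieldnames=reader.fieldnames)
                    writer.writeheader()
                for row in reader:
                    writer.writerow(row)
    return out_path
```

If the combined frame is only needed for per-column summaries (as in `column_counts`), streaming like this avoids ever materialising the full dataset.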

Pipeline

project.yaml
version: '3.0'

expectations:
  population_size: 1000

actions:

  generate_study_population_test:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_test --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input_test.csv.gz
   
  generate_study_population_1:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2020-03-01 to 2020-07-01 by month" --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input_*.csv.gz

  generate_study_population_2:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2020-08-01 to 2021-01-01 by month" --output-format=csv.gz --with-end-date-fix
    outputs:
      highly_sensitive:
        cohort: output/input*.csv.gz

  # Gives until 2021-07-01. Only have ONS deaths until 2021-07-01
  # generate_study_population_3:
  #   run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2021-02-01 to 2021-06-01 by month" --output-format=csv.gz --with-end-date-fix
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/inpu*.csv.gz
  

  # generate_study_population_end:
  #   run: cohortextractor:latest generate_cohort --study-definition study_definition_end --output-format=csv.gz --with-end-date-fix
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/input_end.csv.gz

  # join_cohorts_monthly:
  #   run: >
  #     cohort-joiner:v0.0.44
  #       --lhs output/input_20*.csv.gz
  #       --rhs output/input_end.csv.gz
  #       --output-dir output/joined
  #   needs: [
  #     generate_study_population_1,
  #     generate_study_population_2,
  #     generate_study_population_3,
  #     generate_study_population_end]
  #   outputs:
  #     highly_sensitive:
  #       cohort: output/joined/input_20*.csv.gz


  generate_dataset_report:
    run: >
      dataset-report:v0.0.24
        --input-files output/input_2021-01-01.csv.gz
        --output-dir output
    needs: [generate_study_population_2]
    outputs:
      moderately_sensitive:
        dataset_report: output/input_2021-01-01.html
  
  combined_input_files:
    run: python:latest python analysis/combine_input_files.py
    needs: [generate_study_population_1, generate_study_population_2]
    outputs:
      highly_sensitive:
        attrition: output/input_all_py.csv.gz

  column_counts:
    run: python:latest python analysis/test_column_counts.py
    needs: [generate_study_population_test, combined_input_files, generate_study_population_2]
    outputs:
      moderately_sensitive:
        counts: output/column_counts/combine*.csv

  generate_first_outputs:
    run: r:latest analysis/flucats_descriptive_basic.R
    needs: [combined_input_files]
    outputs:
      moderately_sensitive:
        attrition: output/attrition.csv
        histogram_age: output/age_hist.png
        date_plot: output/weekly_template.png
        flucat_tables: output/flucat*.csv
        sex_table: output/sex_table.csv
        region_table: output/region_table.csv

  # check_codes:
  #   run: r:latest analysis/check_codes.R
  #   needs: [generate_study_population_2]
  #   outputs:
  #     moderately_sensitive:
  #       text: output/text.txt
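The `column_counts` action never ran because its dependency failed, but its role in the pipeline is a sanity check: per-column counts over the extracts, written to `moderately_sensitive` CSVs. The real `analysis/test_column_counts.py` is not shown here, so the helper below is a hypothetical sketch of that kind of check, streaming the file so it stays within memory limits.

```python
import csv
import gzip


def column_counts(csv_gz_path):
    """Count non-empty values per column in a gzipped CSV.

    Streams the file row by row rather than loading it whole, the same
    constraint that matters for the large monthly extracts above.
    """
    with gzip.open(csv_gz_path, "rt", newline="") as f:
        reader = csv.DictReader(f)
        counts = {name: 0 for name in reader.fieldnames}
        for row in reader:
            for name, value in row.items():
                if value not in (None, ""):
                    counts[name] += 1
    return counts
```

Writing the resulting dictionary out as a small CSV would produce the kind of aggregate file the action labels `moderately_sensitive`.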

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 00:04:57

These timestamps are generated and stored using the UTC timezone on the EMIS backend.

Job request

Status
Failed
Backend
EMIS
Workspace
flucats
Requested by
Louis Fisher
Branch
main
Force run dependencies
No
Git commit hash
c8e8d5c
Requested actions
  • combined_input_files
  • column_counts
  • generate_first_outputs
