
Job request: 8501

Organisation: ISARIC
Workspace: hdruk-os-covid-paeds
ID: xezzt54btr4emwhb

This page shows the technical details of what happened when the authorised researcher James Farrell requested one or more actions to be run against real patient data in the project, within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer what security level various outputs were written to. Researchers can never directly view outputs marked as highly_sensitive; they can only request that code runs against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher by logging into a highly secure environment. Only outputs marked as moderately_sensitive can be requested for release to the public, via a controlled output review service.
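
As a minimal sketch of how these levels are declared (this example is not part of the project's pipeline; the action and file names are placeholders), each action in project.yaml labels every output file with a sensitivity level:

  example_action:
    run: r:latest analysis/example.R
    outputs:
      highly_sensitive:
        # patient-level data: never directly viewable by researchers
        raw_data: output/example_patient_level.rds
      moderately_sensitive:
        # aggregated results: viewable in the secure environment and
        # eligible for release via output review
        summary_table: output/example_summary.csv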

Jobs

  • Action: generate_study_population
    Status: Succeeded
    Job identifier: coijrsbfenh25m5b
  • Action: process_patient_data
    Status: Succeeded
    Job identifier: ciolizs7bibxzmjw
  • Action: generate_dummy_data
    Status: Succeeded
    Job identifier: bwiqv2komsntuj7y
  • Action: generate_covid_tests_positive
    Status: Failed
    Job identifier: a7ruvsn5nlir6vyl
    Error: cancelled_by_user: Cancelled by user
  • Action: generate_admissions
    Status: Failed
    Job identifier: 2istmcwj5ulm3ni3
    Error: cancelled_by_user: Cancelled by user
  • Action: generate_outpatient
    Status: Failed
    Job identifier: 2y54m6nqwoq47wer
    Error: cancelled_by_user: Cancelled by user
  • Action: generate_gp
    Status: Failed
    Job identifier: jqnwbomxbz4klvxt
    Error: cancelled_by_user: Cancelled by user
  • Action: generate_covid_tests_negative
    Status: Failed
    Job identifier: mvewkyzji3ywzrie
    Error: cancelled_by_user: Cancelled by user
  • Action: process_outpatient_data
    Status: Failed
    Job identifier: zgvzycsqdlm5e3ut
    Error: Internal error: this usually means a platform issue rather than a problem for users to fix. The tech team are automatically notified of these errors and will be investigating.
  • Action: process_testing_data
    Status: Failed
    Job identifier: 54owggq2drbg7ipl
    Error: dependency_failed: Not starting as dependency failed
  • Action: process_admissions_data
    Status: Failed
    Job identifier: s67q6kfbr3b5oxzk
    Error: dependency_failed: Not starting as dependency failed
  • Action: process_gp_data
    Status: Failed
    Job identifier: jlpvfs7ytnovcfqj
    Error: dependency_failed: Not starting as dependency failed
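
The three dependency_failed jobs (process_testing_data, process_admissions_data, process_gp_data) could not start because the extract actions they depend on, declared with needs: in the pipeline below, were cancelled. A minimal sketch of the pattern, using placeholder action and file names:

  process_example_data:
    run: r:latest analysis/process_example_data.R
    # if generate_example is cancelled or fails, this job is reported as
    # "dependency_failed: Not starting as dependency failed"
    needs: [generate_example]
    outputs:
      highly_sensitive:
        data_example: output/data/data_example.rds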

Pipeline

project.yaml:
version: '3.0'

expectations:
  population_size: 1000

actions:

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --output-format csv.gz
    outputs:
      highly_sensitive:
        cohort: output/input.csv.gz

  generate_dummy_data:
    run: r:latest analysis/generate_dummy_data.R
    needs: [generate_study_population]
    outputs:
      highly_sensitive:
        dummy_data_admissions: output/dummy_data/dummy_data_admissions.csv.gz
        dummy_data_outpatient: output/dummy_data/dummy_data_outpatient.csv.gz
        dummy_data_gp: output/dummy_data/dummy_data_gp.csv.gz
        dummy_data_testing_negative: output/dummy_data/dummy_data_testing_negative.csv.gz
        dummy_data_testing_positive: output/dummy_data/dummy_data_testing_positive.csv.gz

  generate_admissions:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_admissions
        --index-date-range "2018-10-01 to 2022-01-31 by week"
        --output-format csv.gz
        --skip-existing 
        --output-dir=output/data_weekly
    dummy_data_file: output/dummy_data/dummy_data_admissions.csv.gz
    needs: [generate_dummy_data]
    outputs:
      highly_sensitive:
        data_admissions: output/data_weekly/input_admissions_20*.csv.gz

  generate_outpatient:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_outpatient
        --index-date-range "2019-03-11 to 2022-01-31 by week"
        --output-format csv.gz
        --skip-existing 
        --output-dir=output/data_weekly
    dummy_data_file: output/dummy_data/dummy_data_outpatient.csv.gz
    needs: [generate_dummy_data]
    outputs:
      highly_sensitive:
        data_outpatient: output/data_weekly/input_outpatient_20*.csv.gz

  generate_gp:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_gp
        --index-date-range "2018-12-31 to 2022-01-31 by week"
        --output-format csv.gz
        --skip-existing 
        --output-dir=output/data_weekly
    dummy_data_file: output/dummy_data/dummy_data_gp.csv.gz
    needs: [generate_dummy_data]
    outputs:
      highly_sensitive:
        data_gp_contacts: output/data_weekly/input_gp_20*.csv.gz

  generate_covid_tests_negative:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_covid_tests_negative
        --index-date-range "2019-12-30 to 2022-01-31 by week"
        --output-format csv.gz
        --skip-existing 
        --output-dir=output/data_weekly
    dummy_data_file: output/dummy_data/dummy_data_testing_negative.csv.gz
    needs: [generate_dummy_data]
    outputs:
      highly_sensitive:
        data_negative_tests: output/data_weekly/input_covid_tests_negative_20*.csv.gz

  generate_covid_tests_positive:
    run: >
      cohortextractor:latest generate_cohort
        --study-definition study_definition_covid_tests_positive
        --index-date-range "2019-12-30 to 2022-01-31 by week"
        --output-format csv.gz
        --skip-existing 
        --output-dir=output/data_weekly
    dummy_data_file: output/dummy_data/dummy_data_testing_positive.csv.gz
    needs: [generate_dummy_data]
    outputs:
      highly_sensitive:
        data_positive_tests: output/data_weekly/input_covid_tests_positive_20*.csv.gz

  process_patient_data:
    run: r:latest analysis/01_process_patient_data.R
    needs: [generate_study_population]
    outputs:
      highly_sensitive:
        data_patient: output/data/data_patient.rds
        data_id: output/data/data_id.rds

  process_admissions_data:
    run: r:latest analysis/01_process_admissions_data.R
    needs: [generate_admissions, process_patient_data]
    outputs:
      highly_sensitive:
        data_admissions: output/data/data_admissions.rds
      moderately_sensitive:
        diagnostics_admissions: output/diagnostics/diagnostics_admissions.csv
        plots_admissions: output/descriptives/data_admissions/*.jpeg

  process_outpatient_data:
    run: r:latest analysis/01_process_outpatient_data.R
    needs: [generate_outpatient, process_patient_data]
    outputs:
      highly_sensitive:
        data_outpatient: output/data/data_outpatient.rds
      moderately_sensitive:
        diagnostics_outpatient: output/diagnostics/diagnostics_outpatient.csv
        plots_outpatient: output/descriptives/data_outpatient/*.jpeg

  process_gp_data:
    run: r:latest analysis/01_process_gp_data.R
    needs: [generate_gp, process_patient_data]
    outputs:
      highly_sensitive:
        data_gp: output/data/data_gp.rds
      moderately_sensitive:
        diagnostics_gp: output/diagnostics/diagnostics_gp.csv
        plots_gp: output/descriptives/data_gp/*.jpeg

  process_testing_data:
    run: r:latest analysis/01_process_testing_data.R
    needs: [generate_covid_tests_negative, generate_covid_tests_positive, process_patient_data]
    outputs:
      highly_sensitive:
        data_testing: output/data/data_testing.rds
      moderately_sensitive:
        diagnostics_testing: output/diagnostics/diagnostics_testing.csv
        plots_testing: output/descriptives/data_testing/*.jpeg

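Broadly, the expectations: population_size setting at the top controls how many rows of dummy data cohortextractor generates automatically, while the dummy_data_file entries point the weekly extracts at the bespoke synthetic datasets built by generate_dummy_data instead; real data is only used when the actions run inside the secure backend. A minimal sketch of the pattern (placeholder names, not part of the pipeline above):

  generate_example_extract:
    run: cohortextractor:latest generate_cohort --study-definition study_definition_example --output-format csv.gz
    # bespoke dummy data used in place of the auto-generated dataset
    # when this action is run outside the secure backend
    dummy_data_file: output/dummy_data/dummy_example.csv.gz
    needs: [generate_dummy_data]
    outputs:
      highly_sensitive:
        data_example: output/data_weekly/input_example_20*.csv.gz
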
Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 138:55:03

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job information

Status: Failed
Backend: TPP
Requested by: James Farrell
Branch: main
Force run dependencies: Yes
Git commit hash: 5b7616e
Requested actions:
  • run_all
