
Job request: 25866

Organisation:
University of Bristol
Workspace:
polypharmacy-deprescribing-dementia
ID:
7cpneu6faavdsrdn

This page shows the technical details of what happened when the authorised researcher Robert Porteous requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each job's outputs were written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs
    • Researchers can only request that code be run against them
  • moderately_sensitive
    • Approved researchers can view these outputs by logging into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service
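The cross-referencing described above can be sketched in a few lines. This is a minimal illustration, not part of OpenSAFELY: the `PIPELINE` mapping is a hand-written excerpt of the action-to-security-level declarations from the project.yaml shown later on this page, and `infer_output_levels` is a hypothetical helper name.

```python
# Hypothetical sketch: map each action to the security levels of its
# declared outputs, mirroring the project.yaml pipeline on this page.
PIPELINE = {
    "generate_dataset_prematch": {"highly_sensitive"},
    "clean_dataset_prematch": {"highly_sensitive", "moderately_sensitive"},
    "generate_dataset_desc": {"highly_sensitive"},
    "clean_dataset_desc": {"highly_sensitive", "moderately_sensitive"},
    "create_cum_inc_plot": {"moderately_sensitive"},
    "create_table_desc": {"moderately_sensitive"},
}

def infer_output_levels(action):
    """Return the output security levels declared for an action,
    sorted for stable display."""
    return sorted(PIPELINE.get(action, set()))

print(infer_output_levels("clean_dataset_prematch"))
# → ['highly_sensitive', 'moderately_sensitive']
```

So, for example, clean_dataset_prematch wrote both a highly_sensitive dataset and several moderately_sensitive describe files, while create_table_desc declared only moderately_sensitive outputs.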

Jobs

  • Action:
    generate_dataset_prematch
    Status:
    Succeeded
    Job identifier:
    ta32f5d7lcn5udqy
  • Action:
    clean_dataset_prematch
    Status:
    Succeeded
    Job identifier:
    gjzkgsx6k5zs3jn7
  • Action:
    generate_dataset_desc
    Status:
    Failed
    Job identifier:
    uu2ttvnznjlcatlr
    Error:
    nonzero_exit: Job exited with an error: There was a problem reading your ehrQL code; please confirm that it runs locally
  • Action:
    clean_dataset_desc
    Status:
    Failed
    Job identifier:
    aawircjgu7pssmfs
    Error:
    dependency_failed: Not starting as dependency failed
  • Action:
    create_cum_inc_plot
    Status:
    Failed
    Job identifier:
    xst3gvk3fk4tlah7
    Error:
    dependency_failed: Not starting as dependency failed
  • Action:
    create_table_desc
    Status:
    Failed
    Job identifier:
    6zbh5twjzmqu6ruy
    Error:
    dependency_failed: Not starting as dependency failed
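The cascade of `dependency_failed` statuses follows directly from the `needs` declarations in the pipeline below: once generate_dataset_desc failed, every action downstream of it was never started. A minimal sketch of that propagation, where the `NEEDS` mapping is transcribed from project.yaml and `blocked_by` is a hypothetical helper, not an OpenSAFELY function:

```python
# Dependency graph transcribed from the project.yaml pipeline:
# each action maps to the actions listed in its `needs` key.
NEEDS = {
    "generate_dataset_prematch": [],
    "clean_dataset_prematch": ["generate_dataset_prematch"],
    "generate_dataset_desc": ["clean_dataset_prematch"],
    "clean_dataset_desc": ["generate_dataset_desc"],
    "create_cum_inc_plot": ["clean_dataset_desc"],
    "create_table_desc": ["clean_dataset_desc"],
}

def blocked_by(failed, needs=NEEDS):
    """Return actions that cannot start because `failed` appears,
    directly or transitively, in their dependency chain."""
    blocked = set()
    changed = True
    while changed:  # iterate until no new blocked actions are found
        changed = False
        for action, deps in needs.items():
            if action == failed or action in blocked:
                continue
            if failed in deps or blocked & set(deps):
                blocked.add(action)
                changed = True
    return sorted(blocked)

print(blocked_by("generate_dataset_desc"))
# → ['clean_dataset_desc', 'create_cum_inc_plot', 'create_table_desc']
```

These three blocked actions are exactly the jobs above that ended with `dependency_failed: Not starting as dependency failed`.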

Pipeline

version: '4.0'

actions:
  generate_dataset_prematch:
    run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_prematch.py --output output/dataset/input_prematch.csv.gz --dummy-tables dummy_tables
    outputs:
      highly_sensitive:
        dataset: output/dataset/input_prematch.csv.gz

  clean_dataset_prematch:
    run: r:v2 analysis/dataset_clean/dataset_clean_prematch.R
    needs: [generate_dataset_prematch]
    outputs:
      highly_sensitive:
        dataset: output/dataset_clean/input_clean_prematch.csv
      moderately_sensitive:
        flow_prematch: output/dataset_clean/flow_prematch.csv
        describe_inex_prematch: output/describe/inex-prematch.txt
        describe_preprocessed_prematch: output/describe/preprocessed-prematch.txt
        describe_qa_prematch: output/describe/qa-prematch.txt
        describe_ref_prematch: output/describe/ref-prematch.txt

  generate_dataset_desc:
    run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_desc.py --output output/dataset/input_desc.csv.gz --dummy-tables dummy_tables
    needs: [clean_dataset_prematch]
    outputs:
      highly_sensitive:
        dataset: output/dataset/input_desc.csv.gz

  clean_dataset_desc:
    run: r:v2 analysis/dataset_clean/dataset_clean_desc.R
    needs: [generate_dataset_desc]
    outputs:
      highly_sensitive:
        dataset: output/dataset_clean/input_clean_desc.rds
      moderately_sensitive:
        describe_preprocessed: output/describe/preprocessed-desc.txt
        describe_ref: output/describe/ref-desc.txt

  create_cum_inc_plot:
    run: r:v2 analysis/tables/create_table_cum_inc_med_rev.R
    needs: [clean_dataset_desc]
    outputs:
      moderately_sensitive: 
        medication_review_incidence_table: output/tables/med_rev_cum_inc.csv
        medication_review_incidence_table_midpoint6: output/tables/med_rev_cum_inc_midpoint6.csv
        medication_review_incidence: output/plots/med_rev_cum_inc.png
        medication_review_incidence_midpoint6: output/plots/med_rev_cum_inc_midpoint6.png

  create_table_desc:
    run: r:v2 analysis/tables/create_table_desc.R
    needs: [clean_dataset_desc]
    outputs:
      moderately_sensitive: 
        descptive_measures_table: output/tables/table_desc_region.csv
        descptive_measures_table_midpoint6: output/tables/table_desc_region_midpoint6.csv

  # generate_dataset_hist:
  #   run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_hist.py --output output/dataset/input_hist.csv.gz --dummy-tables dummy_tables
  #   needs: [clean_dataset_prematch]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/dataset/input_hist.csv.gz

  # clean_dataset_hist:
  #   run: r:v2 analysis/dataset_clean/dataset_clean_hist.R
  #   needs: [generate_dataset_hist]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/dataset_clean/input_clean_hist.rds
  #     moderately_sensitive:
  #       describe_preprocessed: output/describe/preprocessed-hist.txt
  #       describe_ref: output/describe/ref-hist.txt

  # create_table1_hist:
  #  run: r:v2 analysis/tables/create_table1_hist.R
  #  needs: [clean_dataset_hist]
  #  outputs:
  #    moderately_sensitive:
  #      table_one: output/tables/table1_hist.csv
  #      table_one_midpoint6: output/tables/table1_hist_midpoint6.csv

  # create_prescription_gaps_table:
  #  run: r:v2 analysis/tables/create_table_prescription_gaps.R
  #  needs: [clean_dataset_hist]
  #  outputs:
  #    moderately_sensitive:
  #      prescription_gaps: output/tables/prescription_gaps.csv
  #      prescription_gaps_midpoint6: output/tables/prescription_gaps_midpoint6.csv
    
  # generate_dataset_match:
  #   run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_match.py --output output/dataset/input_match.csv.gz --dummy-tables dummy_tables
  #   needs: [clean_dataset_prematch]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/dataset/input_match.csv.gz

  # match:
  #   run: r:v2 analysis/dataset_clean/match.R
  #   needs: [generate_dataset_match]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/dataset_clean/input_matched.csv

  # generate_dataset_matched:
  #   run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_matched.py --output output/dataset/input_matched.csv.gz --dummy-tables dummy_tables
  #   needs: [match]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/dataset/input_matched.csv.gz

  # clean_dataset_matched:
  #   run: r:v2 analysis/dataset_clean/dataset_clean_matched.R
  #   needs: [generate_dataset_matched]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/dataset_clean/input_clean_matched.csv
  #     moderately_sensitive:
  #       flow: output/dataset_clean/flow_matched.csv
  #       describe_inex: output/describe/inex-matched.txt
  #       describe_preprocessed: output/describe/preprocessed-matched.txt
  #       describe_qa: output/describe/qa-matched.txt
  #       describe_ref: output/describe/ref-matched.txt

  # generate_dataset_matched_full:
  #   run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_matched_full.py --output output/dataset/input_matched_full.csv.gz --dummy-tables dummy_tables
  #   needs: [clean_dataset_matched]
  #   outputs:
  #     highly_sensitive:
  #       dataset: output/dataset/input_matched_full.csv.gz

  # generate_project_dag:
  #   run: python:v2 python analysis/project_dag.py --yaml-path project.yaml --output-path project.dag.md
  #   outputs:
  #     moderately_sensitive:
  #       counts: project.dag.md

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 00:42:48

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job request

Status
Failed
Backend
TPP
Requested by
Robert Porteous
Branch
main
Force run dependencies
No
Git commit hash
48c014c
Requested actions
  • generate_dataset_prematch
  • clean_dataset_prematch
  • generate_dataset_desc
  • clean_dataset_desc
  • create_cum_inc_plot
  • create_table_desc
  • run_all
