Job request: 25592

Organisation:
University of Bristol
Workspace:
polypharmacy-deprescribing-dementia
ID:
wnrxamdkz66m3yh4

This page shows the technical details of what happened when the authorised researcher Robert Porteous requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer what security level the outputs were written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs
    • Researchers can only request that code be run against them
  • moderately_sensitive
    • Can be viewed by an approved researcher after logging into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service
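To cross-reference jobs against the pipeline, it helps to see every output file grouped by its security level. A minimal sketch of that grouping, using a hand-written dict that mirrors the parsed structure of two actions from the project.yaml below (the dict literal stands in for a real YAML parse, which is an assumption for illustration):

```python
# Two actions transcribed by hand from project.yaml, in the shape a YAML
# parser would produce for the `outputs` sections.
actions = {
    "generate_dataset_prematch": {
        "outputs": {
            "highly_sensitive": {"dataset": "output/dataset/input_prematch.csv.gz"},
        },
    },
    "clean_dataset_prematch": {
        "outputs": {
            "highly_sensitive": {"dataset": "output/dataset_clean/input_clean_prematch.csv"},
            "moderately_sensitive": {"flow_prematch": "output/dataset_clean/flow_prematch.csv"},
        },
    },
}

def outputs_by_level(actions):
    """Collect every output path under its security level."""
    levels = {}
    for spec in actions.values():
        for level, files in spec.get("outputs", {}).items():
            levels.setdefault(level, []).extend(files.values())
    return levels

print(outputs_by_level(actions))
```

Any path listed under highly_sensitive here is one a researcher can never view directly; only the moderately_sensitive paths are candidates for output review.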

Jobs

Pipeline

project.yaml:
version: '4.0'

actions:
  generate_dataset_prematch:
    run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_prematch.py --output output/dataset/input_prematch.csv.gz --dummy-tables dummy_tables
    outputs:
      highly_sensitive:
        dataset: output/dataset/input_prematch.csv.gz

  clean_dataset_prematch:
    run: r:v2 analysis/dataset_clean/dataset_clean_prematch.R
    needs: [generate_dataset_prematch]
    outputs:
      highly_sensitive:
        dataset: output/dataset_clean/input_clean_prematch.csv
      moderately_sensitive:
        flow_prematch: output/dataset_clean/flow_prematch.csv
        describe_inex_prematch: output/describe/inex-prematch.txt
        describe_preprocessed_prematch: output/describe/preprocessed-prematch.txt
        describe_qa_prematch: output/describe/qa-prematch.txt
        describe_ref_prematch: output/describe/ref-prematch.txt

  generate_dataset_hist:
    run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_hist.py --output output/dataset/input_hist.csv.gz --dummy-tables dummy_tables
    needs: [clean_dataset_prematch]
    outputs:
      highly_sensitive:
        dataset: output/dataset/input_hist.csv.gz
    
  generate_dataset_match:
    run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_match.py --output output/dataset/input_match.csv.gz --dummy-tables dummy_tables
    needs: [clean_dataset_prematch]
    outputs:
      highly_sensitive:
        dataset: output/dataset/input_match.csv.gz

  match:
    run: r:v2 analysis/dataset_clean/match.R
    needs: [generate_dataset_match]
    outputs:
      highly_sensitive:
        dataset: output/dataset_clean/input_matched.csv

  generate_dataset_matched:
    run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_matched.py --output output/dataset/input_matched.csv.gz --dummy-tables dummy_tables
    needs: [match]
    outputs:
      highly_sensitive:
        dataset: output/dataset/input_matched.csv.gz

  clean_dataset_matched:
    run: r:v2 analysis/dataset_clean/dataset_clean_matched.R
    needs: [generate_dataset_matched]
    outputs:
      highly_sensitive:
        dataset: output/dataset_clean/input_clean_matched.csv
      moderately_sensitive:
        flow: output/dataset_clean/flow_matched.csv
        describe_inex: output/describe/inex-matched.txt
        describe_preprocessed: output/describe/preprocessed-matched.txt
        describe_qa: output/describe/qa-matched.txt
        describe_ref: output/describe/ref-matched.txt

  generate_dataset_matched_full:
    run: ehrql:v1 generate-dataset analysis/dataset_definition/dataset_definition_matched_full.py --output output/dataset/input_matched_full.csv.gz --dummy-tables dummy_tables
    needs: [clean_dataset_matched]
    outputs:
      highly_sensitive:
        dataset: output/dataset/input_matched_full.csv.gz

  clean_dataset_hist:
    run: r:v2 analysis/dataset_clean/dataset_clean_hist.R
    needs: [generate_dataset_hist]
    outputs:
      highly_sensitive:
        dataset: output/dataset_clean/input_clean_hist.rds
      moderately_sensitive:
        describe_preprocessed: output/describe/preprocessed-hist.txt
        describe_ref: output/describe/ref-hist.txt

  create_table1_hist:
    run: r:v2 analysis/tables/create_table1_hist.R
    needs: [clean_dataset_hist]
    outputs:
      moderately_sensitive:
        table_one: output/tables/table1_hist.csv
        table_one_midpoint6: output/tables/table1_hist_midpoint6.csv

  create_prescription_gaps_table:
    run: r:v2 analysis/tables/create_table_prescription_gaps.R
    needs: [clean_dataset_hist]
    outputs:
      moderately_sensitive:
        prescription_gaps: output/tables/prescription_gaps.csv
        prescription_gaps_midpoint6: output/tables/prescription_gaps_midpoint6.csv

  # generate_project_dag:
  #   run: python:v2 python analysis/project_dag.py --yaml-path project.yaml --output-path project.dag.md
  #   outputs:
  #     moderately_sensitive:
  #       counts: project.dag.md
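The `needs` fields above define a dependency DAG: an action may run only after every action it names has finished. A minimal sketch of deriving a valid execution order with a depth-first topological sort, over a hand-written subset of the dependencies above (this is illustrative, not OpenSAFELY's own scheduler):

```python
# `needs` relations copied by hand from a subset of the project.yaml actions.
needs = {
    "generate_dataset_prematch": [],
    "clean_dataset_prematch": ["generate_dataset_prematch"],
    "generate_dataset_hist": ["clean_dataset_prematch"],
    "clean_dataset_hist": ["generate_dataset_hist"],
    "create_table1_hist": ["clean_dataset_hist"],
}

def run_order(needs):
    """Depth-first topological sort: dependencies are appended before dependents."""
    order, seen = [], set()

    def visit(action):
        if action in seen:
            return
        seen.add(action)
        for dep in needs.get(action, []):
            visit(dep)
        order.append(action)

    for action in needs:
        visit(action)
    return order

print(run_order(needs))
```

Requesting only the last three actions (as this job request did) therefore implicitly relies on the earlier highly_sensitive datasets already existing in the workspace, since "Force run dependencies" was set to No.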

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 00:01:02

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job request

Status
Succeeded
Backend
TPP
Requested by
Robert Porteous
Branch
main
Force run dependencies
No
Git commit hash
ff94e3a
Requested actions
  • clean_dataset_hist
  • create_table1_hist
  • create_prescription_gaps_table

Code comparison

Compare the code used in this job request