
Job request: 24715

Organisation:
UKHSA
Workspace:
main_branch
ID:
az3gpmmqo7jxpplk

This page shows the technical details of what happened when the authorised researcher Megan Griffiths requested one or more actions to be run against real patient data in the project, within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer the security level each output was written to. Researchers can never directly view outputs marked as highly_sensitive; they can only request that code be run against them. Outputs marked as moderately_sensitive can be viewed by an approved researcher after logging into a highly secure environment, and only these outputs can be requested for release to the public, via a controlled output review service.
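Concretely, a small script could scan a parsed project.yaml and collect only the moderately_sensitive outputs, i.e. the only files eligible to enter output review. This is a hypothetical sketch for illustration, not part of the platform; the `actions` dictionary below is a hand-transcribed fragment of the pipeline shown later on this page.

```python
# Sketch: list the output paths that could be requested for release,
# given a parsed project.yaml `outputs` structure (hypothetical data).
actions = {
    "generate_dataset": {"highly_sensitive": {"dataset": "output/dataset.csv.gz"}},
    "raw_data_overview": {"moderately_sensitive": {"txt": "output/overview/dataset*.txt"}},
}

def releasable(actions):
    """Return paths whose sensitivity level permits a release request."""
    return [
        path
        for levels in actions.values()
        for level, outputs in levels.items()
        if level == "moderately_sensitive"
        for path in outputs.values()
    ]

print(releasable(actions))  # ['output/overview/dataset*.txt']
```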

Jobs

Pipeline

version: '4.0'

actions:
  generate_dataset:
    run: ehrql:v1 generate-dataset analysis/dataset_definition.py --output output/dataset.csv.gz
    outputs:
      highly_sensitive:
        dataset: output/dataset.csv.gz

  raw_data_overview:
    run: r:v2 analysis/raw_data_overview.R output/dataset.csv.gz output/overview/ TRUE
    needs:
    - generate_dataset
    outputs:
      moderately_sensitive:
        txt: output/overview/dataset*.txt

  dataset_cleaning:
    run: r:v2 analysis/dataset_cleaning.R 
    needs:
    - generate_dataset
    outputs:
      highly_sensitive:
        df_clean: output/dataset_clean/dataset_clean.csv.gz

  clean_data_overview:
    run: r:v2 analysis/raw_data_overview.R output/dataset_clean/dataset_clean.csv.gz output/dataset_clean/overview/ TRUE
    needs:
    - generate_dataset
    - dataset_cleaning
    outputs:
      moderately_sensitive:
        txt: output/dataset_clean/overview/dataset*.txt

  dataset_processing:
    run: r:v2 analysis/dataset_processing.R 
    needs:
    - generate_dataset
    - dataset_cleaning
    outputs:
      highly_sensitive:
        df_all_counts: output/dataset_processed/processed_counts_data.csv.gz

  dataset_processing_subset:
    run: r:v2 analysis/dataset_processing_subset.R 
    needs:
    - generate_dataset
    - dataset_cleaning
    outputs:
      highly_sensitive:
        df_all_counts: output/dataset_processed/subset_processed_counts_data.csv.gz

  processed_data_overview:
    run: r:v2 analysis/raw_data_overview.R output/dataset_processed/processed_counts_data.csv.gz output/dataset_processed/overview/ FALSE
    needs:
    - generate_dataset
    - dataset_cleaning
    - dataset_processing
    outputs:
      moderately_sensitive:
        txt: output/dataset_processed/overview/dataset*.txt

  post_processing_analysis:
    run: r:v2 analysis/post_processing_analysis.R
    needs:
    - generate_dataset
    - dataset_cleaning
    - dataset_processing
    outputs:
      moderately_sensitive:
        column_hist_png: output/dataset_processed/analysis/plots/all_hist_plot.png
        subset_processed_counts_data: output/dataset_processed/analysis/subset_processed_counts_data.csv
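The `needs` entries in the pipeline above form a dependency graph: an action can only run once every action it names has succeeded. As a sketch (using Python's standard-library `graphlib`, not the platform's own scheduler), the dependencies transcribed from this project.yaml yield a valid execution order like so:

```python
# Sketch: derive one valid run order from the `needs` declarations above.
# The mapping is transcribed by hand from the project.yaml on this page.
from graphlib import TopologicalSorter

needs = {
    "generate_dataset": [],
    "raw_data_overview": ["generate_dataset"],
    "dataset_cleaning": ["generate_dataset"],
    "clean_data_overview": ["generate_dataset", "dataset_cleaning"],
    "dataset_processing": ["generate_dataset", "dataset_cleaning"],
    "dataset_processing_subset": ["generate_dataset", "dataset_cleaning"],
    "processed_data_overview": ["generate_dataset", "dataset_cleaning",
                                "dataset_processing"],
    "post_processing_analysis": ["generate_dataset", "dataset_cleaning",
                                 "dataset_processing"],
}

# static_order() guarantees every action appears after all of its needs.
order = list(TopologicalSorter(needs).static_order())
print(order)
```

Note that `generate_dataset` always runs first, and the two overview/analysis actions on processed data cannot start before `dataset_processing` completes.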

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 00:09:03

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job information

Status
Succeeded
Backend
TPP
Workspace
main_branch
Requested by
Megan Griffiths
Branch
main
Force run dependencies
No
Git commit hash
690605c
Requested actions
  • dataset_cleaning
  • clean_data_overview
