Job request: 24755
- Organisation: UKHSA
- Workspace: main_branch
- ID: 6jnktgvxwmqpxx6a
This page shows the technical details of what happened when the authorised researcher Megan Griffiths requested one or more actions to be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer what security level the outputs were written to.
The output security levels are:
- highly_sensitive
  - Researchers can never directly view these outputs
  - Researchers can only request that code is run against them
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment
  - These are the only outputs that can be requested for public release via a controlled output review service
Jobs
- Job identifier: 57gihjmxbeuallmw
  - Error: cancelled_by_user: Cancelled by user
- Job identifier: n2p2fjb4js4iao52
  - Error: cancelled_by_user: Cancelled by user
Pipeline
project.yaml:
version: '4.0'
actions:
  generate_dataset:
    run: ehrql:v1 generate-dataset analysis/dataset_definition.py --output output/dataset.csv.gz
    outputs:
      highly_sensitive:
        dataset: output/dataset.csv.gz
  raw_data_overview:
    run: r:v2 analysis/raw_data_overview.R output/dataset.csv.gz output/overview/ TRUE
    needs:
    - generate_dataset
    outputs:
      moderately_sensitive:
        txt: output/overview/dataset*.txt
  dataset_cleaning:
    run: r:v2 analysis/dataset_cleaning.R 
    needs:
    - generate_dataset
    outputs:
      highly_sensitive:
        df_clean: output/dataset_clean/dataset_clean.csv.gz
      moderately_sensitive:
        msoas: output/dataset_clean/dataset_msoas.csv
  clean_data_overview:
    run: r:v2 analysis/raw_data_overview.R output/dataset_clean/dataset_clean.csv.gz output/dataset_clean/overview/ TRUE
    needs:
    - generate_dataset
    - dataset_cleaning
    outputs:
      moderately_sensitive:
        txt: output/dataset_clean/overview/dataset*.txt
  dataset_processing:
    run: r:v2 analysis/dataset_processing.R 
    needs:
    - generate_dataset
    - dataset_cleaning
    outputs:
      highly_sensitive:
        df_all_counts1: output/dataset_processed/processed_counts_data_max_district.csv.gz
        df_all_counts2: output/dataset_processed/processed_counts_data_min_district.csv.gz
        df_all_counts3: output/dataset_processed/processed_counts_data_no_e_district.csv.gz
      moderately_sensitive:
        df_all_counts_sub1: output/dataset_processed/subset/processed_counts_data_max_district_sub.csv
        df_all_counts_sub2: output/dataset_processed/subset/processed_counts_data_min_district_sub.csv
        df_all_counts_sub3: output/dataset_processed/subset/processed_counts_data_no_e_district_sub.csv
        df_all_props_sub1: output/dataset_processed/subset/processed_counts_data_max_district_sub_proportions.csv
        df_all_props_sub2: output/dataset_processed/subset/processed_counts_data_min_district_sub_proportions.csv
        df_all_props_sub3: output/dataset_processed/subset/processed_counts_data_no_e_district_sub_proportions.csv
  processed_data_overview1:
    run: r:v2 analysis/raw_data_overview.R output/dataset_processed/processed_counts_data_min_district.csv.gz output/dataset_processed/min_district/overview/ FALSE
    needs:
    - generate_dataset
    - dataset_cleaning
    - dataset_processing
    outputs:
      moderately_sensitive:
        txt: output/dataset_processed/min_district/overview/dataset*.txt
  processed_data_overview2:
    run: r:v2 analysis/raw_data_overview.R output/dataset_processed/processed_counts_data_max_district.csv.gz output/dataset_processed/max_district/overview/ FALSE
    needs:
    - generate_dataset
    - dataset_cleaning
    - dataset_processing
    outputs:
      moderately_sensitive:
        txt: output/dataset_processed/max_district/overview/dataset*.txt
  processed_data_overview3:
    run: r:v2 analysis/raw_data_overview.R output/dataset_processed/processed_counts_data_no_e_district.csv.gz output/dataset_processed/no_e_district/overview/ FALSE
    needs:
    - generate_dataset
    - dataset_cleaning
    - dataset_processing
    outputs:
      moderately_sensitive:
        txt: output/dataset_processed/no_e_district/overview/dataset*.txt
  post_processing_analysis:
    run: r:v2 analysis/post_processing_analysis.R
    needs:
    - generate_dataset
    - dataset_cleaning
    - dataset_processing
    outputs:
      moderately_sensitive:
        column_bar_png: output/dataset_processed/no_e_district/analysis/plots/all_bar_plot.png
        #ethnic_box_png: output/dataset_processed/no_e_district/analysis/plots/ethnicity_total_pop_boxplot.png
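The first action, generate_dataset, runs an ehrQL dataset definition (analysis/dataset_definition.py) to produce the highly_sensitive extract that every later action depends on. That file is not reproduced on this page, so the following is only a minimal sketch of what an ehrQL dataset definition looks like; the index date, population rule, and variables are illustrative assumptions, not the study's real ones.

# Minimal ehrQL sketch; the population rule, index date, and columns below
# are assumptions for illustration, not the contents of the study's real
# analysis/dataset_definition.py.
from ehrql import create_dataset
from ehrql.tables.core import patients

dataset = create_dataset()

index_date = "2023-01-01"  # assumed date, for illustration only

# Restrict the population (here: anyone born on or before the index date).
dataset.define_population(patients.date_of_birth.is_on_or_before(index_date))

# Example columns written to output/dataset.csv.gz when the action runs
# `ehrql:v1 generate-dataset ... --output output/dataset.csv.gz`.
dataset.age = patients.age_on(index_date)
dataset.sex = patients.sex

Because the resulting dataset.csv.gz is declared under highly_sensitive, it never leaves the secure environment and is only read by the downstream R actions listed above.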
Timeline
- Created:
- Finished:
- Runtime:
These timestamps are generated and stored using the UTC timezone on the TPP backend.
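As a loose illustration of that note (assumed code, not the backend's actual implementation), recording and rendering such a UTC timestamp in Python might look like this:

# Illustration only: record a job timestamp in UTC and render it for display.
from datetime import datetime, timezone

created_at = datetime.now(timezone.utc)              # stored in UTC
print(created_at.strftime("%Y-%m-%d %H:%M:%S UTC"))  # shown as UTC wall-clock time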
Job request
- Status: Failed
- Backend: TPP
- Workspace: main_branch
- Requested by: Megan Griffiths
- Branch: main
- Force run dependencies: No
- Git commit hash: b2644d1
- Requested actions:
  - generate_dataset
  - raw_data_overview