Job request: 8141

Organisation: Bennett Institute
Workspace: ethnicity-short-data-report-notebook
ID: lmtbsi4u7ukp5cci

This page shows the technical details of what happened when the authorised researcher Colm Andrews requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer which security level each action's outputs were written to; a short sketch of this cross-referencing follows the list of levels below.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs.
    • Researchers can only request that code be run against them.
  • moderately_sensitive
    • These can be viewed by an approved researcher after logging into a highly secure environment.
    • These are the only outputs that can be requested for public release via a controlled output review service.
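
As an illustration of that cross-referencing, the short Python sketch below reads a local copy of the project.yaml shown in the Pipeline section and prints every declared output pattern alongside its security level. It is a reader's aid only, not part of the job request or of the OpenSAFELY tooling, and it assumes PyYAML is installed and project.yaml sits in the working directory.

# Reader's aid (assumed helper, not OpenSAFELY code): list each output
# pattern declared in project.yaml together with its security level.
import yaml

with open("project.yaml") as f:
    project = yaml.safe_load(f)

for action_name, action in project["actions"].items():
    for level, outputs in action.get("outputs", {}).items():
        for label, pattern in outputs.items():
            print(f"{action_name}: {pattern} ({level})")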

Pipeline

project.yaml:
version: '3.0'

expectations:
  population_size: 1000

actions:
  split_codelist:
    run: r:latest analysis/00_trim_snomed_codelist.r
    outputs:
      highly_sensitive:
        data: codelists/ethnicity_*.csv

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --output-dir=output --output-format feather
    needs: [split_codelist]
    outputs:
      highly_sensitive:
        cohort: output/data/input.feather

  generate_dataset_report:
    run: >
      dataset-report:v0.0.9
        --input-files output/data/input.feather
        --output-dir output/data
    needs: [generate_study_population]
    outputs:
      moderately_sensitive:
        dataset_report: output/data/input.html

  execute_validation_analyses:
    run: python:latest python analysis/validation_script.py
    needs: [generate_study_population]
    outputs:
      moderately_sensitive: 
        tables: output/phenotype_validation_ethnicity/5/tables/*.csv
        figures: output/phenotype_validation_ethnicity/5/figures/*.png
  
  execute_validation_analyses_16:
    run: python:latest python analysis/validation_script_16.py
    needs: [generate_study_population]
    outputs:
      moderately_sensitive: 
        tables: output/phenotype_validation_ethnicity/16/tables/*.csv
        figures: output/phenotype_validation_ethnicity/16/figures/*.png

  generate_report_ethnicity:
    run: python:latest jupyter nbconvert /workspace/notebooks_jupyter/report_ethnicity_rp.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [execute_validation_analyses]
    outputs:
      moderately_sensitive:
        notebook: output/report_ethnicity_rp.html

  generate_report_ethnicity_16:
    run: python:latest jupyter nbconvert /workspace/notebooks_jupyter/report_ethnicity_rp_16.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [execute_validation_analyses_16]
    outputs:
      moderately_sensitive:
        notebook: output/report_ethnicity_rp_16.html
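
The needs entries above define a small dependency graph that fixes the order in which actions can run when run_all is requested. The Python sketch below is a reader's aid only, not the OpenSAFELY job scheduler: it derives one valid execution order from that graph using a standard-library topological sort.

# Reader's aid (assumed helper, not the OpenSAFELY job runner): derive one
# valid run order for run_all from the `needs` graph in project.yaml above.
from graphlib import TopologicalSorter

needs = {
    "split_codelist": [],
    "generate_study_population": ["split_codelist"],
    "generate_dataset_report": ["generate_study_population"],
    "execute_validation_analyses": ["generate_study_population"],
    "execute_validation_analyses_16": ["generate_study_population"],
    "generate_report_ethnicity": ["execute_validation_analyses"],
    "generate_report_ethnicity_16": ["execute_validation_analyses_16"],
}

# static_order() lists each action only after all of its dependencies.
print(list(TopologicalSorter(needs).static_order()))

For a local checkout of the study repository, the OpenSAFELY CLI's opensafely run run_all command typically runs the same set of actions against dummy data; the job request documented on this page ran them against real data on the TPP backend.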

Timeline

  • Created:
  • Started:
  • Finished:
  • Runtime:

These timestamps are generated and stored in UTC on the TPP backend.

Job request

Status: Failed
  JobRequestError: execute_validation_analyses failed on a previous run and must be re-run
Backend: TPP
Requested by: Colm Andrews
Branch: notebook
Force run dependencies: No
Git commit hash: dc1b03b
Requested actions:
  • run_all
