Job request: 8241
- Organisation: Bennett Institute
- Workspace: ethnicity-short-data-report-notebook
- ID: 5hliwbz5pw7aayhj
This page shows the technical details of what happened when the authorised researcher Colm Andrews requested one or more actions to be run against real patient data within a secure environment.
By cross-referencing the list of jobs with the pipeline section below, you can infer what security level the outputs were written to.
The output security levels are:
- highly_sensitive
  - Researchers can never directly view these outputs
  - Researchers can only request that code be run against them
- moderately_sensitive
  - Can be viewed by an approved researcher by logging into a highly secure environment
  - These are the only outputs that can be requested for public release via a controlled output review service
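These levels are set per output in the project's project.yaml: each action lists its output files under either highly_sensitive or moderately_sensitive. A minimal sketch of the convention (the action and file names here are hypothetical, for illustration only; the real pipeline for this job request is shown below):

```yaml
actions:
  extract_cohort:
    run: cohortextractor:latest generate_cohort --study-definition study_definition
    outputs:
      highly_sensitive:        # raw patient-level data; never directly viewable
        cohort: output/input.csv
  summarise:
    run: python:latest python analysis/summarise.py
    needs: [extract_cohort]
    outputs:
      moderately_sensitive:    # aggregated results; eligible for output review
        table: output/summary_table.csv
```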
Jobs
- Job identifier: bb4qsl575dqbelss
- Job identifier: y3kde37zlk7vzp7v
  - Error: cancelled_by_user: Cancelled by user
- Job identifier: zajeqntbj43lm7cf
  - Error: dependency_failed: Not starting as dependency failed
Pipeline
version: '3.0'

expectations:
  population_size: 1000

actions:
  split_codelist:
    run: r:latest analysis/00_trim_snomed_codelist.r
    outputs:
      highly_sensitive:
        data: codelists/ethnicity_*.csv

  generate_study_population:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --output-dir=output --output-format feather
    needs: [split_codelist]
    outputs:
      highly_sensitive:
        cohort: output/data/input.feather

  generate_dataset_report:
    run: >
      dataset-report:v0.0.9
        --input-files output/data/input.feather
        --output-dir output/data
    needs: [generate_study_population]
    outputs:
      moderately_sensitive:
        dataset_report: output/data/input.html

  execute_validation_analyses:
    run: python:latest python analysis/validation_script.py
    needs: [generate_study_population]
    outputs:
      moderately_sensitive:
        tables: output/phenotype_validation_ethnicity/5/tables/*.csv
        figures: output/phenotype_validation_ethnicity/5/figures/*.png

  execute_validation_analyses_16:
    run: python:latest python analysis/validation_script_16.py
    needs: [generate_study_population]
    outputs:
      moderately_sensitive:
        tables: output/phenotype_validation_ethnicity/16/tables/*.csv
        figures: output/phenotype_validation_ethnicity/16/figures/*.png

  generate_report_ethnicity:
    run: python:latest jupyter nbconvert /workspace/notebooks_jupyter/report_ethnicity_rp.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [execute_validation_analyses]
    outputs:
      moderately_sensitive:
        notebook: output/report_ethnicity_rp.html

  generate_report_ethnicity_16:
    run: python:latest jupyter nbconvert /workspace/notebooks_jupyter/report_ethnicity_rp_16.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs: [execute_validation_analyses_16]
    outputs:
      moderately_sensitive:
        notebook: output/report_ethnicity_rp_16.html
Timeline
- Created:
- Started:
- Finished:
- Runtime: 01:34:48
These timestamps are generated and stored using the UTC timezone on the TPP backend.
Job request
- Status: Failed
- Backend: TPP
- Workspace: ethnicity-short-data-report-notebook
- Requested by: Colm Andrews
- Branch: notebook
- Force run dependencies: Yes
- Git commit hash: b034bf5
- Requested actions:
  - split_codelist
  - generate_study_population
  - generate_dataset_report