
Job request: 4828

Organisation: Bennett Institute
Workspace: sro-measures
ID: irs7yn5illfkqbx3

This page shows the technical details of what happened when the authorised researcher Louis Fisher requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer what security level the outputs were written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs
    • Researchers can only request that code be run against them
  • moderately_sensitive
    • These outputs can be viewed by an approved researcher by logging into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service.
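
In the pipeline definition below, each action tags its outputs with one of these levels. As a schematic illustration only (the action name, script and file paths here are invented, not taken from the pipeline on this page), an action can declare outputs at either or both levels:

some_action:
  run: python:latest python analysis/some_script.py
  outputs:
    highly_sensitive:
      cohort: output/extract.feather # patient-level data; never viewed directly
    moderately_sensitive:
      summary: output/summary_counts.csv # aggregated results; eligible for output review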

Jobs

Pipeline

project.yaml
version: "3.0"

expectations:
  population_size: 1000

actions:
  generate_study_population_1:
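    # Extract the monthly study population for 2019; the two actions below do the same for 2020 and for January to June 2021.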
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2019-01-01 to 2019-12-01 by month" --output-dir=output --output-format=feather
    outputs:
      highly_sensitive:
        cohort: output/input_*.feather
  
  generate_study_population_2:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2020-01-01 to 2020-12-01 by month" --output-dir=output --output-format=feather
    outputs:
      highly_sensitive:
        cohort: output/input*.feather

  generate_study_population_3:
    run: cohortextractor:latest generate_cohort --study-definition study_definition --index-date-range "2021-01-01 to 2021-06-01 by month" --output-dir=output --output-format=feather
    outputs:
      highly_sensitive:
        cohort: output/inpu*.feather

  get_patient_count:
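    # Summarise patient counts from the extracted cohorts (writes output/patient_count.json).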
    run: python:latest python analysis/get_patients_counts.py
    needs:
      [
        generate_study_population_1,
        generate_study_population_2,
        generate_study_population_3,
      ]
    outputs:
      moderately_sensitive:
        text: output/patient_count.json

  generate_study_population_ethnicity:
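    # Extract ethnicity once, rather than per month, so it can be joined onto the monthly cohorts.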
    run: cohortextractor:latest generate_cohort --study-definition study_definition_ethnicity --output-dir=output --output-format=feather
    outputs:
      highly_sensitive:
        cohort: output/input_ethnicity.feather

  join_ethnicity:
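    # Join the ethnicity extract onto each monthly cohort file.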
    run: python:latest python analysis/join_ethnicity.py
    needs:
      [
        generate_study_population_1,
        generate_study_population_2,
        generate_study_population_3,
        generate_study_population_ethnicity,
      ]
    outputs:
      highly_sensitive:
        cohort: output/inp*.feather

  get_practice_count:
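    # Count practices in the joined cohorts (writes output/practice_count.json).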
    run: python:latest python analysis/get_practice_count.py
    needs: [join_ethnicity]
    outputs:
      moderately_sensitive:
        text: output/practice_count.json

  generate_measures:
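    # Generate the measures defined in the study definition, one CSV per measure.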
    run: cohortextractor:latest generate_measures --study-definition study_definition --output-dir=output
    needs: [join_ethnicity]
    outputs:
      moderately_sensitive:
        measure_csv: output/measure_*_rate.csv

  generate_measures_cleaned:
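    # Clean the generated measure CSVs (output/measure_cleaned_*.csv).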
    run: python:latest python analysis/clean_measures.py
    needs: [generate_measures]
    outputs:
      moderately_sensitive:
        measure_csv: output/measure_cleaned_*.csv

  generate_measures_demographics:
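    # Calculate combined measures broken down by demographic group.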
    run: python:latest python analysis/calculate_measures.py
    needs: [join_ethnicity]
    outputs:
      moderately_sensitive:
        measure: output/combined_measure_*.csv

  generate_notebook:
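    # Render the sentinel_measures notebook to HTML using the measure and count outputs above.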
    run: jupyter:latest jupyter nbconvert /workspace/analysis/sentinel_measures.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
    needs:
      [
        generate_measures,
        generate_measures_cleaned,
        get_practice_count,
        get_patient_count,
      ]
    outputs:
      moderately_sensitive:
        notebook: output/sentinel_measures.html
        subplots: output/sentinel_measures_subplots.png
        code_tables: output/code_table_*.csv
        events_count: output/event_count.json

  # generate_notebook_demographics:
  #   run: jupyter:latest jupyter nbconvert /workspace/analysis/sentinel_measures_demographics.ipynb --execute --to html --template basic --output-dir=/workspace/output --ExecutePreprocessor.timeout=86400 --no-input
  #   needs: [generate_measures, generate_measures_demographics]
  #   outputs:
  #     moderately_sensitive:
  #       notebook: output/sentinel_measures_demographics.html

  get_population_count:
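    # Produce population count CSVs from the joined cohorts.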
    run: python:latest python analysis/population_counts.py
    needs: [join_ethnicity]
    outputs:
      moderately_sensitive:
        text: output/*_count.csv

  demographic_changes:
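    # Summarise changes in the demographic breakdowns (writes demographics_differences.csv and a sorted version).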
    run: python:latest python analysis/demographic_change.py
    needs: [generate_measures_demographics]
    outputs:
      moderately_sensitive:
        csv: output/demographics_differences.csv
        csv_sorted: output/demographics_differences_sorted.csv
#   run_tests:
#     run: python:latest python -m pytest --junit-xml=output/pytest.xml --verbose
#     outputs:
#       moderately_sensitive:
#         log: output/pytest.xml

Timeline

  • Created:
  • Started:
  • Finished:
  • Runtime: 00:01:54

These timestamps are generated and stored using the UTC timezone on the TPP backend.

Job request

Status: Succeeded
Backend: TPP
Workspace: sro-measures
Requested by: Louis Fisher
Branch: master
Force run dependencies: No
Git commit hash: cc70250
Requested actions:
  • generate_measures_cleaned
  • generate_notebook
