
Job request: 25225

Organisation: Bennett Institute
Workspace: tpp-database-history
ID: nwmjzg7uonj3syde

This page shows the technical details of what happened when the authorised researcher Becky Smith requested one or more actions to be run against real patient data within a secure environment.

By cross-referencing the list of jobs with the pipeline section below, you can infer what security level the outputs were written to.

The output security levels are:

  • highly_sensitive
    • Researchers can never directly view these outputs
    • Researchers can only request that code be run against them
  • moderately_sensitive
    • Can be viewed by an approved researcher who has logged into a highly secure environment
    • These are the only outputs that can be requested for public release via a controlled output review service

Jobs

  • Action: query
    Status: Succeeded
    Job identifier: 3ovb4olv3badixfd
  • Action: aggregate
    Status: Succeeded
    Job identifier: 5fcclv2vnkydklmx
  • Action: plot_from_2016
    Status: Succeeded
    Job identifier: pkalerlvzpd7qnnx
  • Action: plot_from_2020
    Status: Succeeded
    Job identifier: wde7khgu2a6bhklf
  • Action: plot_from_last_30_days
    Status: Succeeded
    Job identifier: iap2ml26kedvcxau
  • Action: render_report
    Status: Succeeded
    Job identifier: abzpond2tr4gbnbn

Pipeline

project.yaml:
version: "3.0"

expectations:
  population_size: 1000

actions:
  query:
    run: >
      sqlrunner:latest
        --dummy-data-file analysis/dummy_rows.csv.gz
        --output output/query/rows.csv.gz
        --log-file output/query/log.json
        analysis/query.sql
    outputs:
      highly_sensitive:
        rows: output/query/rows.csv.gz
      moderately_sensitive:
        log: output/query/log.json

  aggregate:
    needs: [query]
    run: python:latest python -m analysis.aggregate
    outputs:
      moderately_sensitive:
        aggregates: output/aggregate/*_by_*.csv

  plot_from_2016:
    needs: [query, aggregate]
    run: >
      python:latest python -m analysis.plot
        --from-date 2016-01-01
        --output output/plot_from_2016
    outputs:
      moderately_sensitive:
        plots: output/plot_from_2016/*.png
        metadata: output/plot_from_2016/metadata.json

  plot_from_2020:
    needs: [query, aggregate]
    run: >
      python:latest python -m analysis.plot
        --from-date 2020-02-01
        --output output/plot_from_2020
    outputs:
      moderately_sensitive:
        plots: output/plot_from_2020/*.png
        metadata: output/plot_from_2020/metadata.json

  plot_from_last_30_days:
    needs: [query, aggregate]
    run: >
      python:latest python -m analysis.plot
        --from-offset 30
        --output output/plot_from_last_30_days
    outputs:
      moderately_sensitive:
        plots: output/plot_from_last_30_days/*.png
        metadata: output/plot_from_last_30_days/metadata.json

  render_report:
    needs: [query, plot_from_2016, plot_from_2020, plot_from_last_30_days]
    run: python:latest python -m analysis.render_report
    outputs:
      moderately_sensitive:
        report: output/render_report/report.html
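
The aggregate action runs python -m analysis.aggregate with no arguments and declares output/aggregate/*_by_*.csv as moderately_sensitive outputs. The module itself is not shown on this page; the following is only a minimal sketch of what such a module might do, assuming the query output contains event_date, practice and event_count columns (names invented for illustration):

# Hypothetical sketch of analysis/aggregate.py; column names are assumptions.
import pathlib

import pandas as pd

OUTPUT_DIR = pathlib.Path("output/aggregate")


def main():
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

    # Read the highly_sensitive rows produced by the query action.
    rows = pd.read_csv("output/query/rows.csv.gz", parse_dates=["event_date"])

    # Write one aggregate per grouping, matching the *_by_*.csv output pattern.
    by_month = rows.groupby(rows["event_date"].dt.to_period("M"))["event_count"].sum()
    by_month.to_csv(OUTPUT_DIR / "event_count_by_month.csv")

    by_practice = rows.groupby("practice")["event_count"].sum()
    by_practice.to_csv(OUTPUT_DIR / "event_count_by_practice.csv")


if __name__ == "__main__":
    main()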
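
The three plot actions all call python -m analysis.plot, varying only the --from-date or --from-offset argument and the --output directory. The sketch below shows one way such a module could parse those options and write the PNG and metadata.json outputs declared above; the column names and plotting details are assumptions, not taken from the repository:

# Hypothetical sketch of analysis/plot.py; the real module is not shown on this page.
import argparse
import datetime
import json
import pathlib

import matplotlib.pyplot as plt
import pandas as pd


def parse_args():
    parser = argparse.ArgumentParser(description="Plot event counts from a start date.")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--from-date", type=datetime.date.fromisoformat,
                       help="Plot rows on or after this date (YYYY-MM-DD)")
    group.add_argument("--from-offset", type=int,
                       help="Plot rows from the last N days instead of a fixed date")
    parser.add_argument("--output", type=pathlib.Path, required=True,
                        help="Directory for the PNG and metadata outputs")
    return parser.parse_args()


def main():
    args = parse_args()
    args.output.mkdir(parents=True, exist_ok=True)

    # Assumed input: the rows produced by the query action.
    rows = pd.read_csv("output/query/rows.csv.gz", parse_dates=["event_date"])

    from_date = args.from_date or (
        datetime.date.today() - datetime.timedelta(days=args.from_offset)
    )
    subset = rows[rows["event_date"] >= pd.Timestamp(from_date)]

    fig, ax = plt.subplots()
    subset.groupby("event_date")["event_count"].sum().plot(ax=ax)
    ax.set_title(f"Events from {from_date}")
    fig.savefig(args.output / "events.png")
    plt.close(fig)

    # Record what was plotted, matching the metadata.json output declared in project.yaml.
    (args.output / "metadata.json").write_text(
        json.dumps({"from_date": str(from_date), "rows": len(subset)})
    )


if __name__ == "__main__":
    main()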
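
Finally, render_report depends on query and on all three plot actions and writes a single output/render_report/report.html. One plausible sketch, assuming the report simply embeds the generated PNGs; the real module may be structured quite differently:

# Hypothetical sketch of analysis/render_report.py; the real report layout is unknown.
import base64
import pathlib

PLOT_DIRS = [
    pathlib.Path("output/plot_from_2016"),
    pathlib.Path("output/plot_from_2020"),
    pathlib.Path("output/plot_from_last_30_days"),
]
OUTPUT = pathlib.Path("output/render_report/report.html")


def embed(png: pathlib.Path) -> str:
    # Inline each plot as a base64 data URI so the report is a single, self-contained file.
    data = base64.b64encode(png.read_bytes()).decode("ascii")
    return f'<h2>{png.parent.name}</h2><img src="data:image/png;base64,{data}">'


def main():
    OUTPUT.parent.mkdir(parents=True, exist_ok=True)
    sections = [embed(png) for d in PLOT_DIRS for png in sorted(d.glob("*.png"))]
    OUTPUT.write_text(
        "<html><body><h1>Report</h1>" + "\n".join(sections) + "</body></html>"
    )


if __name__ == "__main__":
    main()

Inlining the images as data URIs would keep the report self-contained, which suits a single moderately_sensitive HTML output.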

Timeline

  • Created:

  • Started:

  • Finished:

  • Runtime: 01:42:33

These timestamps are generated and stored in UTC on the TPP backend.

Job request

Status: Succeeded
Backend: TPP
Requested by: Becky Smith
Branch: main
Force run dependencies: No
Git commit hash: a52b951
Requested actions
  • query
  • aggregate
  • plot_from_2016
  • plot_from_2020
  • plot_from_last_30_days
  • render_report
