Securing the AI pipeline: Privacy and security metrics for trustworthy and transparent AI

Transparency is today an important aspect of the trustworthiness that AI algorithms should exhibit. However, making AI more transparent raises privacy and security issues. How can the transparency an AI algorithm offers its users best be balanced against the equally necessary security and privacy of the data it accesses and uses?

In this project, we aim to build an inventory of privacy metrics for AI models that provide insight into the exposure of personal information in data sets, in models, and in contexts such as federated learning, with a specific focus on the maturity and operationalization of those metrics. We will also draft security requirements and measures for establishing and auditing the AI pipeline, from training-data acquisition to model deployment. The focus will be on trust-establishing methods that either enable well-defined, measurable privacy metrics or provide provable evidence about, for example, data origin, data originality, model origin, and other trust-building information along the AI production pipeline. The expected activities are a literature survey, conceptual modelling, and potentially testing deployable metrics and methods.
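To illustrate what an operationalized privacy metric can look like, the sketch below estimates leakage with a loss-threshold membership-inference attack: the AUC of distinguishing training members from non-members by their per-example loss. This is a minimal illustration only; the synthetic loss distributions and all names are assumptions for demonstration, not part of the project description.

import numpy as np

def membership_auc(member_losses, nonmember_losses):
    """AUC of the attack "predict 'member' when the loss is low".

    0.5 means the loss reveals nothing about membership; values near 1.0
    mean training-set membership is easy to infer from the loss alone.
    """
    # Low loss should score high as evidence of membership, so negate.
    scores = np.concatenate([-member_losses, -nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic, rescaled to [0, 1] to give the AUC.
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Illustrative synthetic losses: members fit slightly better than non-members.
rng = np.random.default_rng(0)
members = rng.gamma(shape=2.0, scale=0.4, size=1000)     # training examples
nonmembers = rng.gamma(shape=2.0, scale=0.6, size=1000)  # held-out examples
print(f"membership-inference AUC: {membership_auc(members, nonmembers):.3f}")

A metric like this is "deployable" in the sense the project asks about: it can be computed on any trained model with access to member and non-member losses, and its value is directly comparable across models and pipeline stages.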

Goal

Explore trade-offs between different aspects of Trustworthy AI, in particular performance, explainability and privacy.
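As one concrete, hedged example of such a trade-off, the sketch below releases a dataset mean under epsilon-differential privacy using the Laplace mechanism and measures how utility degrades as the privacy budget shrinks. The data, the epsilon grid, and the sample sizes are illustrative assumptions, not prescribed by the project.

import numpy as np

rng = np.random.default_rng(42)
data = rng.uniform(0.0, 1.0, size=10_000)  # values assumed clipped to [0, 1]
# Replacing one record moves the mean by at most 1/n (global sensitivity).
sensitivity = 1.0 / len(data)

for eps in (0.1, 0.5, 1.0, 5.0):
    # Laplace mechanism: add noise with scale = sensitivity / epsilon.
    noisy = data.mean() + rng.laplace(scale=sensitivity / eps, size=10_000)
    rmse = np.sqrt(np.mean((noisy - data.mean()) ** 2))
    print(f"epsilon={eps:>4}: RMSE of private mean = {rmse:.5f}")
# Smaller epsilon gives stronger privacy but a noisier, less useful release.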

Learning outcome

  • Methods in Explainable AI
  • Security systems

Qualifications

  • The candidate should have knowledge of machine learning, AI, and data science.
  • Background knowledge in information security and information privacy is appreciated but not required.

Supervisors

  • Pedro Lind

Collaboration partners

  • NordSTAR (OsloMet)

Associated contacts

Pedro Lind
Adjunct Chief Research Scientist