HT2ML: An efficient hybrid framework for privacy-preserving Machine Learning using HE and TEE

Qifan Wang, Lei Zhou, Jianli Bai, Yun Sing Koh, Shujie Cui, Giovanni Russello

Research output: Contribution to journal › Article › Research › peer-review

3 Citations (Scopus)

Abstract

Outsourcing Machine Learning (ML) tasks to cloud servers is a cost-effective solution when dealing with distributed data. However, outsourcing these tasks to cloud servers could lead to data breaches. Secure computing methods, such as Homomorphic Encryption (HE) and Trusted Execution Environments (TEE), have been used to protect outsourced data. Nevertheless, HE remains inefficient in processing complicated functions (e.g., non-linear functions), and TEEs (e.g., Intel SGX) are not ideal for directly processing ML tasks due to side-channel attacks and parallel-unfriendly computation. In this paper, we propose a hybrid framework integrating SGX and HE, called HT2ML, to protect users' data and models. In HT2ML, HE-friendly functions are protected with HE and performed outside the enclave, while the remaining operations are performed inside the enclave obliviously. HT2ML leverages optimised HE matrix multiplications to accelerate HE computations outside the enclave while using oblivious blocks inside the enclave to prevent access-pattern-based attacks. We evaluate HT2ML using Linear Regression (LR) training and Convolutional Neural Network (CNN) inference as two instantiations. The performance results show that HT2ML is up to ∼11× faster than the HE-only baseline with 6-dimensional data in LR training. For CNN inference, HT2ML is ∼196× faster than the most recent approach (Xiao et al., ICDCS'21).
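To illustrate the "oblivious blocks" the abstract mentions, here is a minimal sketch, in plain Python rather than SGX enclave code, of branch-free (data-oblivious) evaluation: control flow and memory accesses do not depend on secret values, which is the property needed to resist access-pattern-based side channels. The function names are illustrative, not from the paper.

```python
# Sketch of data-oblivious evaluation (illustrative only, not HT2ML's actual
# enclave code): the same instruction sequence runs regardless of secret input.

def oblivious_select(cond_bit, a, b):
    # Return a if cond_bit == 1 else b, via arithmetic masking instead of a
    # secret-dependent branch.
    return cond_bit * a + (1 - cond_bit) * b

def oblivious_relu(x):
    # ReLU without `if x > 0: ...`; in real enclave code the comparison below
    # would itself be a constant-time primitive.
    positive = int(x > 0)
    return oblivious_select(positive, x, 0)

print([oblivious_relu(v) for v in [-2, -1, 0, 3]])  # [0, 0, 0, 3]
```

In HT2ML's split, HE-friendly linear algebra (e.g., matrix multiplication) stays encrypted outside the enclave, while non-linear steps like the ReLU above run inside the enclave in this oblivious style.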

Original language: English
Article number: 103509
Number of pages: 12
Journal: Computers and Security
Volume: 135
DOIs
Publication status: Published - Dec 2023

Keywords

  • Cloud computing
  • Homomorphic encryption
  • Machine learning
  • Privacy-preserving
  • SGX enclave
