TY - GEN
T1 - Confidential Inference in Decision Trees
T2 - 30th IFIP/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2022
AU - Karn, Rupesh Raj
AU - Elfadel, Ibrahim Abe M.
N1 - Funding Information:
This research has been sponsored by the Cryptography Research Center, Technology Innovation Institute, Abu Dhabi, UAE, under Contract TII/CRP/2036/2020. The authors would like to thank Drs. Ernesto Damiani and Abdulhadi Shoufan from Khalifa University for useful discussions.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In confidential computing, algorithms operate on encrypted inputs to produce encrypted outputs. Specifically, in confidential inference, Alice has the parameters of a machine-learning model but does not want to reveal them to Bob, who has the data. Bob wants to use Alice's model for inference but does not want to reveal his data. Alice and Bob agree to use homomorphic encryption to run the inference engine in full confidence, revealing neither model nor data. They find that fully homomorphic encryption is very time-consuming and challenging to accelerate on hardware. For decision-tree inference, however, homomorphic encryption can be made computationally efficient and can even be readily accelerated on hardware. In this paper, we show how Alice and Bob run the inference engine in full confidence and present an FPGA implementation of the specialized homomorphic computing algorithm they use. We further evaluate the resources needed to implement the encrypted decision tree and compare them with those of a plain decision tree. Confidential inference tests are run on the encrypted FPGA design using the MNIST dataset.
AB - In confidential computing, algorithms operate on encrypted inputs to produce encrypted outputs. Specifically, in confidential inference, Alice has the parameters of a machine-learning model but does not want to reveal them to Bob, who has the data. Bob wants to use Alice's model for inference but does not want to reveal his data. Alice and Bob agree to use homomorphic encryption to run the inference engine in full confidence, revealing neither model nor data. They find that fully homomorphic encryption is very time-consuming and challenging to accelerate on hardware. For decision-tree inference, however, homomorphic encryption can be made computationally efficient and can even be readily accelerated on hardware. In this paper, we show how Alice and Bob run the inference engine in full confidence and present an FPGA implementation of the specialized homomorphic computing algorithm they use. We further evaluate the resources needed to implement the encrypted decision tree and compare them with those of a plain decision tree. Confidential inference tests are run on the encrypted FPGA design using the MNIST dataset.
UR - https://www.scopus.com/pages/publications/85142452478
U2 - 10.1109/VLSI-SoC54400.2022.9939567
DO - 10.1109/VLSI-SoC54400.2022.9939567
M3 - Conference contribution
AN - SCOPUS:85142452478
T3 - IEEE/IFIP International Conference on VLSI and System-on-Chip, VLSI-SoC
BT - Proceedings of the 2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration, VLSI-SoC 2022
PB - IEEE Computer Society
Y2 - 3 October 2022 through 5 October 2022
ER -