PlumX Metrics

FSSiBNN: FSS-Based Secure Binarized Neural Network Inference with Free Bitwidth Conversion

Lecture Notes in Computer Science (LNCS), ISSN: 1611-3349, Vol. 14982, pp. 229–250, 2024
  • Citations: 0
  • Usage: 0
  • Captures: 3
  • Mentions: 0
  • Social Media: 0


Conference Paper Description

Neural network inference as a service enables a cloud server to provide inference services to clients. To ensure the privacy of both the cloud server’s model and the client’s data, secure neural network inference is essential. Binarized neural networks (BNNs), which use binary weights and activations, are often employed to accelerate inference. However, achieving secure BNN inference with secure multi-party computation (MPC) is challenging because MPC protocols cannot directly operate on values of different bitwidths and require bitwidth conversion. Existing bitwidth conversion schemes expand the bitwidths of weights and activations, leading to significant communication overhead. To address these challenges, we propose FSSiBNN, a secure BNN inference framework featuring free bitwidth conversion based on function secret sharing (FSS). By leveraging FSS, which supports arbitrary input and output bitwidths, we introduce a bitwidth-reduced parameter encoding scheme. This scheme seamlessly integrates bitwidth conversion into FSS-based secure binary activation and max pooling protocols, thereby eliminating the additional communication overhead. Additionally, we enhance communication efficiency by combining and converting multiple BNN layers into fewer matrix multiplication and comparison operations. We precompute matrix multiplication tuples for matrix multiplication and FSS keys for comparison during the offline phase, enabling constant-round online inference. In our experiments, we evaluated various datasets and models, comparing our results with state-of-the-art frameworks. Compared with the two-party framework XONN (USENIX Security ’19), FSSiBNN achieves approximately 7× faster inference times and reduces communication overhead by about 577×. Compared with the three-party frameworks SecureBiNN (ESORICS ’22) and FLEXBNN (TIFS ’23), FSSiBNN is approximately 2.5× faster in inference time and reduces communication overhead by 1.3× to 16.4×.
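The closing sentences of the abstract describe two ingredients that are easy to make concrete: matrix-multiplication tuples precomputed offline so the linear layers need only one online round, and a binary activation that squeezes a wide accumulator back down to one bit (the step where bitwidth conversion arises). The Python sketch below illustrates both with classic Beaver-style multiplication triples over Z_{2^16}. It is not the paper's protocol: FSSiBNN evaluates the sign comparison under FSS rather than in the clear as done here, and the ring size, shapes, and all function names are illustrative assumptions.

import numpy as np

RING = 2 ** 16  # shares live in Z_{2^16}; an illustrative choice, not the paper's
rng = np.random.default_rng(0)

def share(x):
    # Additively secret-share an integer array over Z_RING between two parties.
    r = rng.integers(0, RING, size=x.shape, dtype=np.int64)
    return r, (x - r) % RING

def reconstruct(s0, s1):
    return (s0 + s1) % RING

def beaver_matmul(x0, x1, w0, w1, triple):
    # Single-round secure matmul from a precomputed triple (a, b, c = a @ b).
    (a0, a1), (b0, b1), (c0, c1) = triple
    # Only the masked differences e = x - a and f = w - b are ever opened.
    e = reconstruct((x0 - a0) % RING, (x1 - a1) % RING)
    f = reconstruct((w0 - b0) % RING, (w1 - b1) % RING)
    z0 = (e @ f + e @ b0 + a0 @ f + c0) % RING   # party 0's share of x @ w
    z1 = (e @ b1 + a1 @ f + c1) % RING           # party 1's share
    return z0, z1

# Binarized weights and activations in {-1, +1}, encoded as ring elements.
x = rng.choice(np.array([-1, 1], dtype=np.int64), size=(1, 8)) % RING
w = rng.choice(np.array([-1, 1], dtype=np.int64), size=(8, 4)) % RING

# Offline phase: a dealer samples and shares the multiplication triple.
a = rng.integers(0, RING, size=x.shape, dtype=np.int64)
b = rng.integers(0, RING, size=w.shape, dtype=np.int64)
triple = (share(a), share(b), share((a @ b) % RING))

# Online phase: one round of interaction evaluates the linear layer.
x0, x1 = share(x)
w0, w1 = share(w)
z0, z1 = beaver_matmul(x0, x1, w0, w1, triple)

# The pre-activation z is a multi-bit accumulator; the binary activation
# sign(z) maps it back to 1 bit. In FSSiBNN this comparison runs under FSS;
# reconstruction here is only to check the result against plaintext.
z = reconstruct(z0, z1)
z_signed = np.where(z >= RING // 2, z - RING, z)   # lift to signed integers
xs = np.where(x >= RING // 2, x - RING, x)
ws = np.where(w >= RING // 2, w - RING, w)
assert np.array_equal(z_signed, xs @ ws)           # matches the plaintext layer
print(np.where(z_signed >= 0, 1, -1))              # next layer's binary input

The sketch makes the abstract's cost argument visible: the online matmul exchanges only the masked openings e and f regardless of depth, while the sign step is exactly where a naive protocol would pay for bitwidth conversion, which is the step FSSiBNN folds into its FSS-based activation and max-pooling keys for free.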

