TY - JOUR
T1 - GNN-RE: Graph Neural Networks for Reverse Engineering of Gate-Level Netlists
T2 - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
AU - Alrahis, Lilas
AU - Sengupta, Abhrajit
AU - Knechtel, Johann
AU - Patnaik, Satwik
AU - Saleh, Hani
AU - Mohammad, Baker
AU - Al-Qutayri, Mahmoud
AU - Sinanoglu, Ozgur
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2022/8/1
Y1 - 2022/8/1
N2 - This work introduces a generic, machine learning (ML)-based platform for functional reverse engineering (RE) of circuits. Our proposed platform GNN-RE leverages the notion of graph neural networks (GNNs) to: 1) represent and analyze flattened/unstructured gate-level netlists; 2) automatically identify the boundaries between the modules or subcircuits implemented in such netlists; and 3) classify the subcircuits based on their functionalities. For GNNs in general, each graph node is tailored to learn about its own features and its neighboring nodes, which is a powerful approach for the detection of any kind of subgraphs of interest. For GNN-RE, in particular, each node represents a gate and is initialized with a feature vector that reflects on the functional and structural properties of its neighboring gates. GNN-RE also learns the global structure of the circuit, which facilitates identifying the boundaries between subcircuits in a flattened netlist. Initially, to provide high-quality data for training of GNN-RE, we deploy a comprehensive dataset of foundational designs/components with differing functionalities, implementation styles, bit widths, and interconnections. GNN-RE is then tested on the unseen shares of this custom dataset, as well as the EPFL benchmarks, the ISCAS-85 benchmarks, and the 74X series benchmarks. GNN-RE achieves an average accuracy of 98.82% in terms of mapping individual gates to modules, all without any manual intervention or postprocessing. We also release our code and source data.
AB - This work introduces a generic, machine learning (ML)-based platform for functional reverse engineering (RE) of circuits. Our proposed platform GNN-RE leverages the notion of graph neural networks (GNNs) to: 1) represent and analyze flattened/unstructured gate-level netlists; 2) automatically identify the boundaries between the modules or subcircuits implemented in such netlists; and 3) classify the subcircuits based on their functionalities. For GNNs in general, each graph node is tailored to learn about its own features and its neighboring nodes, which is a powerful approach for the detection of any kind of subgraphs of interest. For GNN-RE, in particular, each node represents a gate and is initialized with a feature vector that reflects on the functional and structural properties of its neighboring gates. GNN-RE also learns the global structure of the circuit, which facilitates identifying the boundaries between subcircuits in a flattened netlist. Initially, to provide high-quality data for training of GNN-RE, we deploy a comprehensive dataset of foundational designs/components with differing functionalities, implementation styles, bit widths, and interconnections. GNN-RE is then tested on the unseen shares of this custom dataset, as well as the EPFL benchmarks, the ISCAS-85 benchmarks, and the 74X series benchmarks. GNN-RE achieves an average accuracy of 98.82% in terms of mapping individual gates to modules, all without any manual intervention or postprocessing. We also release our code and source data.
KW - Gate-level netlist
KW - graph neural networks (GNNs)
KW - hardware security
KW - machine learning (ML)
KW - reverse engineering (RE)
UR - http://www.scopus.com/inward/record.url?scp=85114708094&partnerID=8YFLogxK
U2 - 10.1109/TCAD.2021.3110807
DO - 10.1109/TCAD.2021.3110807
M3 - Article
AN - SCOPUS:85114708094
SN - 0278-0070
VL - 41
SP - 2435
EP - 2448
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 8
ER -