GAT · internal_only
Standard Split · Feb 18, 2026 · 8ab380c0fd39416682e3fcc3a42eb0e7
Description
Train GAT on the internal_only dataset to complete the three-model architecture comparison (GIN, GCN, GAT).
Conclusion
GAT performs comparably to GCN, reaching 96.3% accuracy with slightly better recall (96.9% vs. 93.8%). The attention mechanism does not provide a clear advantage over simpler neighborhood aggregation on the internal_only graphs.
Test Metrics
| Metric | Value |
|---|---|
| Accuracy | 96.3% |
| F1 Macro | 95.6% |
| F1 Malware | 93.9% |
| Precision | 91.2% |
| Recall | 96.9% |
| AUROC | 98.5% |
| Best Val Loss | 0.1853 |
| Training Time | 2046.8 s (~34 min) |
Confusion Matrix
| | Pred Benign | Pred Malware |
|---|---|---|
| Actual Benign | 73 | 3 |
| Actual Malware | 1 | 31 |
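The headline metrics can be re-derived from the confusion matrix above (malware = positive class). A minimal sketch in plain Python, with no framework dependencies assumed:

```python
# Confusion matrix counts from the table above.
tn, fp = 73, 3   # actual benign row
fn, tp = 1, 31   # actual malware row

total = tn + fp + fn + tp
accuracy = (tp + tn) / total                 # 104 / 108 ≈ 0.963
precision = tp / (tp + fp)                   # 31 / 34  ≈ 0.912
recall = tp / (tp + fn)                      # 31 / 32  ≈ 0.969
f1_malware = 2 * precision * recall / (precision + recall)  # ≈ 0.939

# F1 for the benign class, needed for the macro average.
precision_b = tn / (tn + fn)
recall_b = tn / (tn + fp)
f1_benign = 2 * precision_b * recall_b / (precision_b + recall_b)
f1_macro = (f1_malware + f1_benign) / 2      # ≈ 0.956

print(f"acc={accuracy:.3f} prec={precision:.3f} rec={recall:.3f} "
      f"f1_malware={f1_malware:.3f} f1_macro={f1_macro:.3f}")
```

All five values round to the reported test metrics, so the reported numbers are internally consistent with the confusion matrix.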
Configuration
| Parameter | Value |
|---|---|
| Hidden Dim | 128 |
| Num Layers | 3 |
| Dropout | 0.5 |
| Batch Size | 4 |
| Learning Rate | 0.001 |
| Weight Decay | 0.0001 |
| Max Epochs | 200 |
| ES Patience | 20 |
| ES Min Epochs | 100 |
| LR Patience | 10 |
| LR Factor | 0.5 |
| Mixed Precision | Yes |
| Random Seed | 42 |
| Epochs Trained | 100 |
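The stopping-related settings above interact: training cannot stop before ES Min Epochs (100), stops once validation loss has been stale for ES Patience (20) epochs, and the learning rate is multiplied by LR Factor (0.5) after every LR Patience (10) stale epochs. A minimal sketch of that control logic, assuming standard reduce-on-plateau semantics (function names and the simulated run are illustrative, not taken from the training code):

```python
def should_stop(epoch, epochs_since_best, min_epochs=100, patience=20):
    """Early stopping: only after min_epochs, and only once the
    validation loss has been stale for `patience` epochs."""
    return epoch >= min_epochs and epochs_since_best >= patience

def step_lr(lr, epochs_since_best, lr_patience=10, factor=0.5):
    """Reduce-on-plateau: scale the LR by `factor` each time the loss
    has been stale for another `lr_patience` epochs."""
    if epochs_since_best > 0 and epochs_since_best % lr_patience == 0:
        return lr * factor
    return lr

# Hypothetical run: best val loss at epoch 80, stale afterwards.
lr, best_epoch = 1e-3, 80
for epoch in range(81, 201):
    stale = epoch - best_epoch
    lr = step_lr(lr, stale)
    if should_stop(epoch, stale):
        print(f"stopped at epoch {epoch}, lr={lr:g}")
        break
```

Under these rules, any run whose best validation loss lands at or before epoch 80 halts exactly at epoch 100, which is consistent with the 100 epochs trained reported above (though the log does not state when the best loss occurred).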