Neural Networks for Finance
The honest case for and against deep learning in trading
Learning Objectives
- Understand when neural networks actually help in trading applications
- Learn the fundamental architecture: layers, activations, and backpropagation
- See why deep learning often loses to tree-based models on financial data
- Know which financial problems genuinely benefit from neural approaches
Explain Like I'm 5
A neural network is a function that learns its own rules. You give it thousands of examples and it figures out the pattern. For images and language, this is magical. For structured financial data — price, volume, indicators — it's often overkill. The question isn't "can neural nets do this?" It's "should they?"
Think of It This Way
Think of neural networks like hiring a genius who needs to learn everything from scratch versus hiring an experienced specialist. The genius (neural net) might eventually surpass the specialist (XGBoost) if given enough data and time. But for a specific, well-defined job with limited training examples, the specialist usually wins. Finance is that job.
1. The Honest Assessment
[Chart: Model Family Performance on Financial Tabular Data]
2. Where Neural Nets Actually Help
3. Architecture Fundamentals
[Chart: Effect of Dropout Rate on Overfitting]
4. The Practical Recommendation
Key Formulas
Neuron Activation
Each neuron computes a weighted sum of inputs, adds a bias, and passes the result through an activation function σ. For ReLU: σ(z) = max(0, z). For sigmoid: σ(z) = 1/(1+e^{-z}).
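Written out in full, the activation of a single neuron with inputs x, weights w, and bias b is (a standard statement of the computation described above):

a = \sigma\left(\sum_{i} w_i x_i + b\right)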
Binary Cross-Entropy Loss
Standard loss function for binary classification (win/loss prediction). Penalizes confident wrong predictions more heavily than uncertain ones. Lower is better.
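For N examples with labels y_i \in \{0, 1\} and predicted win probabilities \hat{y}_i, the standard form is:

\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]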
Hands-On Code
Simple MLP for Signal Classification
import torch
import torch.nn as nn

class SignalClassifier(nn.Module):
    """MLP for L1 signal detection (when you want to try neural)."""

    def __init__(self, n_features, hidden_dim=128, dropout=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.BatchNorm1d(hidden_dim // 2),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim // 2, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Honest comparison workflow:
# 1. Train this MLP on your data
# 2. Train XGBoost on the same data with same splits
# 3. Compare validation AUC
# 4. XGBoost almost certainly wins
# 5. Use the MLP only if it consistently beats XGBoost
#    across multiple walk-forward windows

This is a fair starting point. BatchNorm and dropout provide regularization. But the real test is whether this beats XGBoost on your specific data. Usually it doesn't. Be empirical, not ideological.
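To make steps 1–3 of that workflow concrete, here is a minimal sketch of a head-to-head comparison on a single split. It assumes X_train, X_val are NumPy feature arrays and y_train, y_val are binary label arrays; compare_on_split and the hyperparameters are illustrative choices, not part of the lesson's reference implementation.

import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

def compare_on_split(X_train, y_train, X_val, y_val, epochs=50, lr=1e-3):
    """Train the MLP and XGBoost on identical data; report validation AUC for each."""
    # --- MLP (full-batch training for brevity) ---
    mlp = SignalClassifier(n_features=X_train.shape[1])
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    X_t = torch.tensor(X_train, dtype=torch.float32)
    y_t = torch.tensor(y_train, dtype=torch.float32)
    mlp.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(mlp(X_t), y_t)
        loss.backward()
        opt.step()
    mlp.eval()
    with torch.no_grad():
        mlp_pred = mlp(torch.tensor(X_val, dtype=torch.float32)).numpy()

    # --- XGBoost on the exact same split ---
    xgb = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
    xgb.fit(X_train, y_train)
    xgb_pred = xgb.predict_proba(X_val)[:, 1]

    return {
        "mlp_auc": roc_auc_score(y_val, mlp_pred),
        "xgb_auc": roc_auc_score(y_val, xgb_pred),
    }

The point of keeping both models inside one function is that they see exactly the same training rows and are scored on exactly the same validation rows, so any AUC gap reflects the model family rather than the split.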
Knowledge Check
Q1. For a signal detection task with 10,000 labeled trades and 38 tabular features, which model is most likely to perform best?
Assignment
Train both an MLP and XGBoost on the same dataset with the same walk-forward split. Compare AUC, accuracy, and profit factor. Document which model wins and speculate on why. This exercise builds calibration about when neural nets help.
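If you want a starting point for the walk-forward plumbing, here is a hedged sketch assuming time-ordered arrays X and y. The window sizes and the compare_on_split helper from the earlier sketch are illustrative, not requirements of the assignment.

import numpy as np

def walk_forward_windows(n_samples, train_size=5000, val_size=1000, step=1000):
    """Yield (train_idx, val_idx) index arrays for rolling walk-forward windows."""
    start = 0
    while start + train_size + val_size <= n_samples:
        train_idx = np.arange(start, start + train_size)
        val_idx = np.arange(start + train_size, start + train_size + val_size)
        yield train_idx, val_idx
        start += step

# Run the MLP-vs-XGBoost comparison on every window and tally who wins:
# results = [compare_on_split(X[tr], y[tr], X[va], y[va])
#            for tr, va in walk_forward_windows(len(X))]
# mlp_wins = sum(r["mlp_auc"] > r["xgb_auc"] for r in results)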