
PyTorch for Trading

Dynamic deep learning framework for custom architectures.

Advanced · Machine Learning

Installation

$ pip install torch torchvision
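
To confirm the install and check GPU support:

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"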

Key Features

Dynamic Computation

Define-by-run for flexible model architecture.

Research-Friendly

The de facto standard framework for ML research.

Autograd

Automatic differentiation for backprop; a quick demo follows this list.

TorchScript

Compile models for production deployment; an export sketch follows the model definition below.
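
A minimal autograd demo (not trading-specific): PyTorch records operations as they run, so a single backward() call yields gradients.

Python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 2 * x    # graph is built on the fly (define-by-run)
y.backward()          # dy/dx = 3x^2 + 2
print(x.grad)         # tensor(14.)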

Code Examples

Setup and Data Preparation

Prepare data using PyTorch tensors and DataLoaders

Python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import numpy as np
import yfinance as yf

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Download data and normalize prices to zero mean, unit variance
df = yf.download('EURUSD=X', start='2020-01-01', end='2024-01-01')
prices = df['Close'].to_numpy().ravel()  # ravel in case yfinance returns an (N, 1) column
normalized = (prices - np.mean(prices)) / np.std(prices)

class TimeSeriesDataset(Dataset):
    def __init__(self, data, seq_length):
        self.data = torch.FloatTensor(data)
        self.seq_length = seq_length

    def __len__(self):
        return len(self.data) - self.seq_length

    def __getitem__(self, idx):
        x = self.data[idx:idx + self.seq_length]   # window of past prices
        y = self.data[idx + self.seq_length]       # next price to predict
        return x.unsqueeze(-1), y                  # (seq_length, 1) input for the LSTM

dataset = TimeSeriesDataset(normalized, seq_length=60)
train_loader = DataLoader(dataset, batch_size=32, shuffle=True)
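
A quick sanity check on one batch (shapes follow from seq_length=60 and batch_size=32 above):

Python
X, y = next(iter(train_loader))
print(X.shape, y.shape)  # torch.Size([32, 60, 1]) torch.Size([32])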

LSTM Model Definition

Build a custom LSTM network

Python
class LSTMPredictor(nn.Module):
    def __init__(self, input_size=1, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, dropout=0.2)
        self.fc = nn.Sequential(
            nn.Linear(hidden_size, 32),
            nn.ReLU(),
            nn.Linear(32, 1)
        )

    def forward(self, x):
        out, _ = self.lstm(x)                       # out: (batch, seq_length, hidden_size)
        return self.fc(out[:, -1, :]).squeeze(-1)   # predict from the last time step

model = LSTMPredictor().to(device)
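
Tying into the TorchScript feature above, a minimal sketch of exporting this model by tracing (input shape assumes seq_length=60; the filename is hypothetical):

Python
model.eval()                                 # freeze dropout before tracing
example = torch.randn(1, 60, 1).to(device)  # dummy (batch, seq, features) input
traced = torch.jit.trace(model, example)    # record the forward pass as TorchScript
traced.save('lstm_predictor.ts.pt')         # reload anywhere with torch.jit.load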

Training Loop

Custom training with loss tracking

Python
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(100):
    model.train()
    epoch_loss = 0.0
    for X, y in train_loader:
        X, y = X.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    if epoch % 10 == 0:
        print(f"Epoch {epoch}: Loss={epoch_loss / len(train_loader):.6f}")

Best Practices

Use GPU When Available

Check CUDA availability and move both the model and input tensors to the GPU, as in the setup example.

Gradient Clipping

Use torch.nn.utils.clip_grad_norm_ to keep RNN gradients from exploding; see the sketch below.
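
A minimal sketch of where clipping fits in the training loop above (max_norm=1.0 is a common starting point):

Python
# Inside the inner loop from the training example:
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # cap the gradient norm before the update
optimizer.step()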

Remember model.eval()

Call it before inference so dropout and batch norm switch to evaluation behavior.

Detach from Graph

Wrap inference in torch.no_grad() to skip autograd bookkeeping; a combined example of both practices follows.
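
A sketch combining the last two practices, reusing the model and dataset from the examples (the de-normalization reuses prices from the setup block):

Python
model.eval()                                  # switch dropout to evaluation mode
with torch.no_grad():                         # skip gradient tracking
    X, _ = dataset[len(dataset) - 1]          # most recent 60-step window
    pred = model(X.unsqueeze(0).to(device))   # add a batch dimension
    price = pred.item() * np.std(prices) + np.mean(prices)  # undo normalization
    print(f"Next-step estimate: {price:.5f}")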