From 54% to a Staggering 4648%: Catapulting Cryptocurrency Trading with CatBoost Classifier, Machine Learning Model at Its Best

--

Unleashing the Power of CatBoost: Supercharging Crypto Trading Returns

Introduction:

In the realm of algorithmic trading, the pursuit of optimizing strategies for maximum returns is a ceaseless journey. Our expedition through various stages of development, from crafting bespoke algorithms to integrating machine learning models, reflects this relentless quest for improvement. In this article, we delve into our latest breakthrough — a significant leap from a modest 54% profit to an astounding 4648% gain using the CatBoostClassifier on our Bitcoin trading algorithm. We invite you to join us as we unveil the meticulous process behind this remarkable achievement and explore the transformative power of machine learning in algorithmic trading.

We used Bitcoin priced in USDT on the 15-minute candle time frame, from January 1st 2021 to October 22nd 2023: a total of 1022 days of data, with more than 97,000 rows and 190+ features, feeding a classification model that predicts long and short positions.

Our story is one of relentless innovation, fueled by a burning desire to unlock the full potential of machine learning in the pursuit of profit. In this article, we invite you to join us as we unravel the exciting tale of our transformation from humble beginnings to groundbreaking success.

CatBoost Classifier for a crypto trading algo (source — Google Images)

Our Algorithmic Trading Vs/+ Machine Learning Journey So Far

Stage 1:

We developed a crypto algorithmic strategy that produced huge profits when run on multiple crypto assets (138+), with returns of 8787%+ over a span of almost 3 years.

“The 8787%+ ROI Algo Strategy Unveiled for Crypto Futures! Revolutionized With Famous RSI, MACD, Bollinger Bands, ADX, EMA” — Link

We then ran it live in dry-run mode for 7 days and shared the details in another article.

“Freqtrade Revealed: 7-Day Journey in Algorithmic Trading for Crypto Futures Market” — Link

After successful backtest results and forward testing (live trading in dry-run mode), we set out to improve the odds of making more profit: lower stop-losses, better odds of winning, a reduced risk factor, and other important refinements.

Stage 2:

We then developed a strategy on its own, without the freqtrade setup (forgoing the trailing stop-loss, parallel multi-asset execution, and higher-level risk management that freqtrade, a free open-source platform, provides), tested it in the market, optimized it using hyperparameters, and obtained positive profits from the strategy.

“How I achieved 3000+% Profit in Backtesting for Various Algorithmic Trading Bots and how you can do the same for your Trading Strategies — Using Python Code” — Link

Stage 3:

Since we had tested our strategy on only one asset (BTC/USDT in the crypto market), we wanted to segregate the collective assets we had used earlier for the freqtrade strategy into different clusters based on their volatility. With positions managed according to each coin's volatility, it becomes easier to trade the more volatile assets without hitting huge stop-losses on the others.

We used K-Nearest Neighbors (KNN) to identify different clusters among the 138 crypto assets we use in our freqtrade strategy, which gave us 8000%+ profits during backtesting.

“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -1 (K-Nearest Neighbors)” — Link

Stage 4:

Next, we introduced an unsupervised machine learning model — the Hidden Markov Model (HMM) — to identify trends in the market, trade only during profitable trends, and avoid sudden pumps, dumps, and negative trends in the market. The article below unravels the same.

“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -2 (Hidden Markov Model — HMM)” — Link

Stage 5:

I worked on using XGBoost Classifier to identify long and short trades using our old signal. Before using it, we ensured that the signal algorithm we had previously developed was hyper-optimized. Additionally, we introduced different stop-loss and take-profit parameters for this setup, causing the target values to change accordingly. We also adjusted the parameters used for obtaining profitable trades based on the stop-loss and take-profit values. Later, we tested the basic XGBClassifier setup and then enhanced the results by adding re-sampling methods. Our target classes, which include 0’s (neutral), 1’s (for long trades), and 2’s (for short trades), were imbalanced due to the trade execution timing. To address this imbalance, we employed re-sampling methods and performed hyper-optimization of the classifier model. Subsequently, we evaluated if the model performed better with other classifier models such as SVC, CatBoost, and LightGBM, in combination with LSTM and XGBoost. Finally, we concluded by analyzing the results and determining feature importance parameters to identify the most productive features.

“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -3 (XGBoost Classifier , LGBM Classifier, CatBoost Classifier, SVC, LSTM with XGB and Multi level Hyper-optimization)” — Link

Stage 6 (Present):

In this article, I will utilize the CatBoostClassifier along with resampling and sample weights initially. I’ll incorporate multiple time frame indicators related to volume, momentum, trend, and volatility. After running the model, I’ll perform ensembling to enhance its performance. Ultimately, I’ll showcase the remarkable results obtained from the analysis of the overall feature data, including a significant increase in profit from 54% to over 4600% during backtesting. Additionally, I’ll highlight the impressive recall, precision, accuracy, and F1 score metrics, all exceeding 80% for each of the three classes (0 for neutral, 1 for long, and 2 for short trades).

get entire code and profitable algos @ https://patreon.com/pppicasso

The Code Explanation

# Remove Future Warnings
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

# General
import numpy as np

# Data Management
import pandas as pd

# Machine Learning
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression

# Ensemble
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.ensemble import StackingClassifier
from sklearn.ensemble import VotingClassifier

# Sampling Methods
from imblearn.over_sampling import ADASYN

# Scaling
from sklearn.preprocessing import MinMaxScaler

# Binary Classification Specific Metrics
from sklearn.metrics import RocCurveDisplay as plot_roc_curve

# General Metrics
from sklearn.metrics import (accuracy_score, precision_score, confusion_matrix,
                             classification_report, roc_curve, roc_auc_score,
                             ConfusionMatrixDisplay)

# Reporting
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
from xgboost import plot_tree

# Backtesting
from backtesting import Backtest
from backtesting import Strategy

# Hyperopt
from hyperopt import fmin, tpe, hp

# Model persistence (needed for the joblib.dump/joblib.load calls later)
import joblib

from pandas_datareader.data import DataReader

import json
from datetime import datetime
import talib as ta
import ccxt

Remove Future Warnings:

  • This code suppresses future warnings to keep the output cleaner.

Importing Libraries:

  • numpy and pandas are imported for data manipulation.
  • CatBoostClassifier, train_test_split, RandomizedSearchCV, cross_val_score, RepeatedStratifiedKFold, LogisticRegression, RandomForestClassifier, GradientBoostingClassifier, StackingClassifier, VotingClassifier are imported for machine learning tasks.
  • ADASYN is imported for oversampling.
  • MinMaxScaler is imported for scaling.
  • Various metrics functions like accuracy_score, precision_score, confusion_matrix, etc., are imported for model evaluation.
  • matplotlib is imported for visualization.
  • plot_tree is imported from xgboost for plotting trees.
  • Backtesting related modules are imported from the backtesting library.
  • fmin, tpe, and hp are imported from hyperopt for hyperparameter optimization.
  • Additional libraries such as DataReader, json, datetime, ta (TA-Lib), ccxt, and joblib (used later to save and load the trained model) are also imported.

Reporting Configuration:

  • Matplotlib’s configuration parameters (rcParams) are imported to customize the appearance of plots.

Context:

  • Each library and module is imported with a specific purpose, such as data manipulation, machine learning, evaluation, visualization, backtesting, hyperparameter optimization, etc.
  • These libraries and modules will be used throughout the code for various tasks like data preprocessing, model training, evaluation, optimization, and visualization.
# Define the path to your JSON file
file_path = "../BTC_USDT_USDT-15m-futures.json"

# Open the file and read the data
with open(file_path, "r") as f:
    data = json.load(f)

# Check the data structure
# print(data)  # Should be a list of OHLCV rows (a list of lists)

# When using heavy data, open the notebook with:
# jupyter notebook --NotebookApp.iopub_data_rate_limit=100000000

df = pd.DataFrame(data)

# Name the OHLCV columns (each row is [timestamp, open, high, low, close, volume])
df.rename(columns={0: "Date", 1: "Open", 2: "High", 3: "Low", 4: "Adj Close", 5: "Volume"}, inplace=True)

# Convert millisecond timestamps to datetime objects
df["Date"] = pd.to_datetime(df['Date'], unit='ms')

df.set_index("Date", inplace=True)

# Format the date index
df.index = df.index.strftime("%m-%d-%Y %H:%M")
df['Close'] = df['Adj Close']

# print(df.dropna(), df.describe(), df.info())

data = df

data

Here’s the step-by-step explanation of the code:

File Path Definition:

  • The variable file_path is defined to store the path leading to the JSON file containing the financial data.

Reading JSON Data:

  • Using the open() function, the code reads the JSON file in read mode ("r"), and the json.load() function then retrieves the data from the file, storing it in the variable data.

Conversion to DataFrame:

  • The JSON data is transformed into a pandas DataFrame named df, facilitating easier data manipulation and analysis.

Column Renaming:

  • Column names in df are adjusted to conform to the standard OHLCV (Open, High, Low, Close, Volume) format typically used in financial data analysis.

Timestamp Conversion:

  • The timestamps in the “Date” column of df are converted into datetime objects via the pd.to_datetime() function.

Index Setting:

  • The “Date” column is designated as the index of the DataFrame, establishing a time series data structure.

Date Index Formatting:

  • The date index is formatted to the specified format (“%m-%d-%Y %H:%M”) using the strftime() function.

Close Price Adjustment:

  • A “Close” column is created as a copy of the “Adj Close” column, to match the terminology commonly used in financial analysis.

Data Return:

  • The processed DataFrame is assigned back to data, encapsulating the financial OHLCV data from the JSON file, ready for further analysis.
# Assuming you have a DataFrame named 'df' with columns 'Open', 'High', 'Low', 'Close', 'Adj Close', and 'Volume'
target_prediction_number = 2
time_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]
name_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]

df = data.copy()
new_columns = []
# Pair each period with its column-name suffix (a fully nested loop would
# recompute every named column with every period and keep only the last one)
for period, nperiod in zip(time_periods, name_periods):
    df[f'ATR_{nperiod}'] = ta.ATR(df['High'], df['Low'], df['Close'], timeperiod=period)
    df[f'EMA_{nperiod}'] = ta.EMA(df['Close'], timeperiod=period)
    df[f'RSI_{nperiod}'] = ta.RSI(df['Close'], timeperiod=period)
    df[f'VWAP_{nperiod}'] = ta.SUM(df['Volume'] * (df['High'] + df['Low'] + df['Close']) / 3, timeperiod=period) / ta.SUM(df['Volume'], timeperiod=period)
    df[f'ROC_{nperiod}'] = ta.ROC(df['Close'], timeperiod=period)
    df[f'KC_upper_{nperiod}'] = ta.EMA(df['High'], timeperiod=period)
    df[f'KC_middle_{nperiod}'] = ta.EMA(df['Low'], timeperiod=period)
    df[f'Donchian_upper_{nperiod}'] = ta.MAX(df['High'], timeperiod=period)
    df[f'Donchian_lower_{nperiod}'] = ta.MIN(df['Low'], timeperiod=period)
    macd, macd_signal, _ = ta.MACD(df['Close'], fastperiod=(period + 12), slowperiod=(period + 26), signalperiod=(period + 9))
    df[f'MACD_{nperiod}'] = macd
    df[f'MACD_signal_{nperiod}'] = macd_signal
    bb_upper, bb_middle, bb_lower = ta.BBANDS(df['Close'], timeperiod=period, nbdevup=2, nbdevdn=2)
    df[f'BB_upper_{nperiod}'] = bb_upper
    df[f'BB_middle_{nperiod}'] = bb_middle
    df[f'BB_lower_{nperiod}'] = bb_lower
    df[f'EWO_{nperiod}'] = ta.SMA(df['Close'], timeperiod=(period + 5)) - ta.SMA(df['Close'], timeperiod=(period + 35))

df["Returns"] = (df["Adj Close"] / df["Adj Close"].shift(target_prediction_number)) - 1
df["Range"] = (df["High"] / df["Low"]) - 1
df["Volatility"] = df['Returns'].rolling(window=target_prediction_number).std()

# Volume-Based Indicators
df['OBV'] = ta.OBV(df['Close'], df['Volume'])
df['ADL'] = ta.AD(df['High'], df['Low'], df['Close'], df['Volume'])

# Momentum-Based Indicators
df['Stoch_Oscillator'] = ta.STOCH(df['High'], df['Low'], df['Close'])[0]

# Single-period variants (ATR, BBANDS, KC, Donchian, MA, EMA, EWO) are
# already covered by the per-period loop above

# Trend-Based Indicators
df['PSAR'] = ta.SAR(df['High'], df['Low'], acceleration=0.02, maximum=0.2)

# Displaying the calculated indicators
print(df.tail())

df.dropna(inplace=True)
print("Length: ", len(df))
df

get entire code and profitable algos @ https://patreon.com/pppicasso

Variable Initialization:

  • target_prediction_number is set to 2; it is used as the shift and rolling window for the Returns and Volatility columns.
  • time_periods and name_periods are lists defining the periods for calculating technical indicators and their corresponding names.

DataFrame Copy:

  • The DataFrame df is copied from the original data, ensuring the original data remains intact.

Indicator Calculation Loop:

  • A loop pairs each period in time_periods with its matching name in name_periods (the two lists are identical), computing each technical indicator once per period.
  • Technical indicators include Average True Range (ATR), Exponential Moving Average (EMA), Relative Strength Index (RSI), Volume Weighted Average Price (VWAP), Rate of Change (ROC), Keltner Channels (KC), Donchian Channels, Moving Average Convergence Divergence (MACD), Bollinger Bands (BB), and the Elliott Wave Oscillator (EWO).
  • We can experiment further by adding other indicators TA-Lib provides, or custom indicators, and test them as features; see the sketch after this list.
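
For instance, here is a hedged sketch of that experiment: ADX and CCI (both available in TA-Lib, but not part of the original feature set) could be appended inside the same loop; the column names below are hypothetical.

# Illustrative only: extra TA-Lib indicators appended as features
for period, nperiod in zip(time_periods, name_periods):
    df[f'ADX_{nperiod}'] = ta.ADX(df['High'], df['Low'], df['Close'], timeperiod=period)
    df[f'CCI_{nperiod}'] = ta.CCI(df['High'], df['Low'], df['Close'], timeperiod=period)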

Returns, Range, and Volatility:

  • Returns, Range, and Volatility columns are computed, capturing different aspects of recent price movement (the actual Target label is constructed in the next section).

Volume-Based Indicators:

  • On-Balance Volume (OBV) and Accumulation/Distribution Line (ADL) are calculated using volume-based formulas.

Momentum-Based Indicators:

  • Stochastic Oscillator is computed using price data.

Volatility-Based Indicators:

  • ATR, Bollinger Bands, Keltner Channels, and Donchian Channels are computed per period inside the loop, as is the Elliott Wave Oscillator (EWO).

Trend-Based Indicators:

  • Parabolic Stop and Reverse (PSAR) points are determined.

Data Display and Cleaning:

  • The calculated indicators are printed for the last few rows of the DataFrame.
  • Any rows with missing values are dropped from the DataFrame.

Final DataFrame:

  • The cleaned DataFrame, with all calculated indicator and feature columns, is displayed.

Data Preprocessing — Setting Up the “Target” Value for Future Predictions

# Target, flexible way
pipdiff_percentage = 0.005  # 0.5% of the asset's price for TP
SLTPRatio = 0.25  # pipdiff / SLTPRatio gives SL (0.005 / 0.25 = 2%)

def mytarget(barsupfront, df1):
    length = len(df1)
    high = list(df1['High'])
    low = list(df1['Low'])
    close = list(df1['Close'])
    open_ = list(df1['Open'])  # Renamed 'open' to 'open_' to avoid shadowing Python's built-in
    trendcat = [None] * length
    for line in range(0, length - barsupfront - 2):
        valueOpenLow = 0
        valueOpenHigh = 0
        for i in range(1, barsupfront + 2):
            value1 = open_[line + 1] - low[line + i]
            value2 = open_[line + 1] - high[line + i]
            valueOpenLow = max(value1, valueOpenLow)
            valueOpenHigh = min(value2, valueOpenHigh)
            if (valueOpenLow >= close[line + 1] * pipdiff_percentage) and (
                    -valueOpenHigh <= close[line + 1] * pipdiff_percentage / SLTPRatio):
                trendcat[line] = 2  # downtrend (short)
                break
            elif (valueOpenLow <= close[line + 1] * pipdiff_percentage / SLTPRatio) and (
                    -valueOpenHigh >= close[line + 1] * pipdiff_percentage):
                trendcat[line] = 1  # uptrend (long)
                break
            else:
                trendcat[line] = 0  # no clear trend
    return trendcat

# !!! Pitfall: one category (0) occurs with high frequency
df['Target'] = mytarget(2, df)
# df.tail(20)
df.replace([np.inf, -np.inf], np.nan, inplace=True)
df.dropna(axis=0, inplace=True)

# Convert columns to integer type (note: this casts every feature column, not just 'Target')
df = df.astype(int)
# df['Target'] = df['Target'].astype(int)
df['Target'].hist()

count_of_twos_target = df['Target'].value_counts().get(2, 0)
count_of_zeros_target = df['Target'].value_counts().get(0, 0)
count_of_ones_target = df['Target'].value_counts().get(1, 0)
percent_of_zeros_over_ones_and_twos = (100 - (count_of_zeros_target / (count_of_zeros_target + count_of_ones_target + count_of_twos_target)) * 100)
print(f' count_of_zeros = {count_of_zeros_target}\n count_of_twos_target = {count_of_twos_target}\n count_of_ones_target={count_of_ones_target}\n percent_of_zeros_over_ones_and_twos = {round(percent_of_zeros_over_ones_and_twos, 2)}%')

Here’s the breakdown of the code provided:

Variable Initialization:

  • pipdiff_percentage is set to 0.005, representing 0.5% of the asset's price for Take Profit (TP).
  • SLTPRatio is set to 0.25, the ratio of TP to Stop Loss (SL): SL = pipdiff_percentage / SLTPRatio.
  • That means the stop loss is four times the take profit: 0.005 / 0.25 = 0.02, i.e., a 2% stop loss.

Function Definition: mytarget

  • The function mytarget calculates the target category based on the provided number of bars upfront and DataFrame df1.
  • It iterates over each row in the DataFrame, considering a window of barsupfront bars ahead.
  • For each row, it compares the next bar’s open price against the highs and lows over the window.
  • Based on the calculated differences and the predefined pipdiff percentage and SLTPRatio, it assigns a target category:
  • 2: Downtrend
  • 1: Uptrend
  • 0: No clear trend

Applying the Function:

  • The function mytarget is applied to the DataFrame df, with barsupfront set to 2.
  • Any rows containing infinite or NaN values are dropped from the DataFrame.
  • All columns in the DataFrame are then converted to integer type (note that this also truncates the float-valued features).

Analysis:

  • The distribution of target categories is visualized using a histogram.
  • The counts of each target category are computed.
  • The share of ones and twos (the uptrend and downtrend labels) relative to all labels is calculated and displayed.

This code segment labels each bar according to the predefined TP/SL criteria and provides insight into the distribution of those labels within the dataset. For example, with BTC at $30,000, a bar is labeled 1 (long) only if price rises at least $150 (0.5%) above the next open without first dropping more than $600 (2%) below it.

scaler = MinMaxScaler(feature_range=(0, 1))

df_model = df.copy()
# Split into Learning (X) and Target (y) Data
X = df_model.iloc[:, :-1]
y = df_model.iloc[:, -1]

# Note: fitting the scaler on the full dataset lets information from the test
# window leak into training; fitting on the training window alone is stricter
X_scale = scaler.fit_transform(X)

# print(X_scale, y)

# Split the data into training and testing sets
# (as we are working on a time series, a random split won't work, so the
# train_test_split call below is replaced with a chronological split)
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

train_size = int(len(X) * 0.7)  # 70% train, 30% test
X_train, X_test = X_scale[:train_size], X_scale[train_size:]
y_train, y_test = y[:train_size], y[train_size:]

# Define the CatBoostClassifier with parameters
classifier = CatBoostClassifier(
    iterations=800,                   # Increase iterations for better convergence
    learning_rate=0.2,                # Experiment with the learning rate
    depth=6,                          # Adjust depth of trees
    l2_leaf_reg=4,                    # Regularization parameter
    border_count=80,                  # Number of splits for numerical features
    random_strength=0.5,              # Amount of randomness when scoring splits
    # scale_pos_weight=scale_pos_weight,  # Balance positive and negative weights
    leaf_estimation_iterations=8,     # Number of gradient steps for leaf value calculation
    leaf_estimation_method='Newton',  # Method for leaf value calculation
    class_weights=[1, 2, 2],          # Give more weight to classes 1 and 2
    verbose=0                         # No output during training
)

# Resample the training data using ADASYN
adasyn = ADASYN()
X_train_resampled, y_train_resampled = adasyn.fit_resample(X_train, y_train)

# Train the classifier on the resampled data
classifier.fit(X_train_resampled, y_train_resampled)

# Predict on the test set
y_pred = classifier.predict(X_test)

# Evaluate the model
print("Classification Report:")
print(classification_report(y_test, y_pred))
print("Confusion Matrix:")
print(confusion_matrix(y_test, y_pred))

Here’s a breakdown of the code provided:

Data Preprocessing:

  • The MinMaxScaler is initialized to scale features to a range between 0 and 1.
  • A copy of the original DataFrame df is created as df_model.
  • The features (X) are selected from df_model excluding the last column, and the target variable (y) is selected as the last column.

Feature Scaling:

  • The features (X) are scaled using scaler.fit_transform(X) so that all features share the same 0–1 range. (Note that fitting the scaler on the full dataset before splitting lets test-window information leak into training; fitting on the training window alone is stricter.)

Data Splitting:

  • The scaled features and target are split chronologically rather than with a random train_test_split, preserving the time-series order.
  • The training set (X_train, y_train) contains the first 70% of the data, while the testing set (X_test, y_test) contains the final 30%.

Model Definition:

  • The CatBoostClassifier is defined with specified parameters, including the number of iterations, learning rate, tree depth, regularization parameters, and class weights.
  • The verbose parameter is set to 0 to suppress output during training.

Resampling:

  • The training data is resampled using ADASYN (adasyn.fit_resample(X_train, y_train)) to address class imbalance.

Model Training:

  • The classifier is trained on the resampled training data (classifier.fit(X_train_resampled, y_train_resampled)).

Prediction:

  • The trained model is used to make predictions on the test set (y_pred = classifier.predict(X_test)).

Model Evaluation:

  • The classification report and confusion matrix are printed to evaluate the model’s performance on the test set. The classification report includes metrics such as precision, recall, F1-score, and support for each class, while the confusion matrix provides a summary of correct and incorrect predictions.

This code segment demonstrates the process of preprocessing data, training a CatBoostClassifier model, and evaluating its performance on a test set, including handling class imbalance through resampling.
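
Stage 6 also mentioned sample weights alongside resampling. As a hedged sketch of that route, class-balanced weights from scikit-learn can be passed to CatBoost's sample_weight argument instead of oversampling with ADASYN; the parameter values below are illustrative assumptions, not the article's exact configuration.

from sklearn.utils.class_weight import compute_sample_weight

# Weight each training row inversely to its class frequency
weights = compute_sample_weight(class_weight='balanced', y=y_train)

weighted_clf = CatBoostClassifier(iterations=800, learning_rate=0.2,
                                  depth=6, verbose=0)
weighted_clf.fit(X_train, y_train, sample_weight=weights)
print(classification_report(y_test, weighted_clf.predict(X_test)))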

Here’s an explanation of each parameter used in the CatBoostClassifier:

iterations:

  • This parameter determines the number of trees (boosting iterations) to build. Increasing iterations may improve model performance, but it also increases training time and the risk of overfitting.

learning_rate:

  • Learning rate controls the step size during gradient descent. It determines how quickly the model adapts to the errors. Lower values make the model learn slower but might lead to better convergence and generalization.

depth:

  • Depth specifies the depth of each tree in the ensemble. A higher depth allows the model to capture more complex relationships in the data but may lead to overfitting.

l2_leaf_reg:

  • L2 regularization coefficient applied to leaf weights. It helps to prevent overfitting by penalizing large weights in the model.

border_count:

  • This parameter controls the accuracy of splits in numerical features. Increasing it can lead to more accurate splits but also increases training time.

random_strength:

  • Random strength is a regularization parameter that controls the degree of randomness in selecting splits during tree construction. Higher values introduce more randomness, which can help in reducing overfitting.

leaf_estimation_iterations:

  • Number of gradient steps to build a new leaf. Increasing this parameter may improve the quality of each leaf but also increases training time.

leaf_estimation_method:

  • Method used to estimate the values of leaves. ‘Newton’ is a more accurate method but requires more computation.

class_weights:

  • Specifies the weights for different classes. It’s useful for handling class imbalance by giving more weight to minority classes.

verbose:

  • Controls the verbosity of the training process. Setting it to 0 suppresses training output, while higher values provide more detailed information about the training process.

These parameters collectively influence the model’s behavior and performance during training. Adjusting them requires balancing between model complexity, training time, and generalization ability to achieve the best results for a specific dataset.
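
The imports at the top already include RandomizedSearchCV, which suits this kind of tuning. Below is a minimal sketch of searching over these CatBoost parameters; the candidate grids, the macro-F1 scoring choice, and the 3-split setup are illustrative assumptions, not the article's tuned configuration.

from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

# Illustrative candidate grids (assumptions, not the tuned values)
param_distributions = {
    'iterations': [400, 600, 800],
    'learning_rate': [0.05, 0.1, 0.2],
    'depth': [4, 6, 8],
    'l2_leaf_reg': [2, 4, 6],
    'border_count': [64, 80, 128],
}

search = RandomizedSearchCV(
    CatBoostClassifier(verbose=0),
    param_distributions=param_distributions,
    n_iter=20,                       # Number of random parameter combinations to try
    scoring='f1_macro',              # Macro-averaged F1 treats all 3 classes equally
    cv=TimeSeriesSplit(n_splits=3),  # Respects the temporal order of the candles
    random_state=42,
)
# Search on the un-resampled training window so synthetic ADASYN rows
# do not leak across the time-series folds
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)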

Classification report for the test data and for the entire dataset, with the confusion matrix plotted

From the above output, we can see accuracy of around 50%+ on the test data, and the precision for the 1 and 2 classes is low compared with the overall precision on the entire dataset. There is scope for improving the results by adding an ensemble method.

Ensemble Method on the CatBoost Classifier

Ensemble with CatBoostClassifier

get entire code and profitable algos @ https://patreon.com/pppicasso

Ensemble Methods:

Ensemble methods combine multiple individual models to improve predictive performance and robustness.

Voting Classifier:

  • The Voting Classifier combines the predictions from multiple individual classifiers and selects the class with the most votes (hard voting) or averages the probabilities (soft voting). It can be applied to any type of base classifiers and often leads to better performance than individual classifiers.

Soft Voting:

  • Soft voting takes the average of predicted probabilities across all classifiers and selects the class with the highest probability. It works well when the classifiers provide calibrated probability estimates.

Advantages of Ensemble Methods:

  • Improved Accuracy: Ensemble methods often yield higher accuracy than individual models by leveraging the strengths of multiple classifiers.
  • Robustness: Ensembles are less sensitive to overfitting and noise in the data compared to single models, making them more robust and reliable.
  • Handling Time Series Data: Ensemble methods can effectively handle time series data by capturing complex temporal patterns and trends. They can detect subtle changes in the data and make accurate predictions even with noisy and dynamic datasets.

Individual Classifiers:

Random Forest Classifier (RF):

  • Random Forest is an ensemble learning method that constructs a multitude of decision trees during training. It randomly selects subsets of features and data points for each tree and then aggregates their predictions to make the final decision. RF is robust to overfitting and performs well on a variety of datasets.

Gradient Boosting Classifier (GB):

  • Gradient Boosting builds an ensemble of decision trees sequentially, where each tree corrects the errors made by the previous ones. It trains trees in a stage-wise manner, optimizing a differentiable loss function. GB is known for its high predictive accuracy and is less prone to overfitting compared to other methods.

CatBoost Classifier (CatBoost):

  • CatBoost is a gradient boosting library specifically designed for categorical features. It employs an efficient implementation of gradient boosting with novel techniques to handle categorical data seamlessly. CatBoost automatically handles missing values and provides robust performance without extensive hyperparameter tuning.

Summary:

Ensemble methods like Voting Classifier combine the predictions of multiple individual classifiers (such as Random Forest, Gradient Boosting, and CatBoost) to achieve better overall performance. These methods are particularly useful for improving accuracy, handling categorical features, and enhancing the robustness of predictive models, especially in time series data analysis.
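
The exact ensemble code is available via the Patreon link; a minimal sketch of a soft-voting ensemble along the lines described above might look like the following. The estimator settings are assumptions, and the name voting_classifier_1 simply matches the save/load snippet later in the article.

# Hedged sketch of a soft-voting ensemble; settings are illustrative
voting_classifier_1 = VotingClassifier(
    estimators=[
        ('rf', RandomForestClassifier(n_estimators=200, random_state=42)),
        ('gb', GradientBoostingClassifier(random_state=42)),
        ('cat', classifier),  # the CatBoostClassifier configured earlier
    ],
    voting='soft',  # average predicted class probabilities across the models
)

voting_classifier_1.fit(X_train_resampled, y_train_resampled)
y_pred_ensemble = voting_classifier_1.predict(X_test)
print(classification_report(y_test, y_pred_ensemble))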

From the classification report, the precision for the 1 and 2 classes has improved a lot on the test data. We still need to check whether performance increased overall, based on accuracy, recall, F1 score, and precision. Let’s check.

Crazy Improvement in overall data after ensemble

get entire code and profitable algos @ https://patreon.com/pppicasso

From the results, it is clear that overall performance has drastically improved in recall, precision, accuracy, and F1 score for all 3 classes.

The true-positive and false-negative figures in the results are impressive.

The whole process took a long time to run but produced amazing results.

Let’s test this by backtesting the entire dataset using the model’s predicted values.

Backtesting the Entire Dataset with Predicted Values from the Model

We save the model’s predicted values into the dataframe and then run the backtest on that dataframe to check the performance of the bot.

We kept a 10% stop-loss and a 2.5% take profit for the backtest.
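
The full backtest code is behind the Patreon link; as a rough sketch, the 10% stop-loss / 2.5% take-profit rules could be wired into the backtesting library imported earlier roughly as follows. The 'Pred' column holding the model's 0/1/2 predictions is a hypothetical name, and the DataFrame must keep a real DatetimeIndex (skip the strftime() formatting step above if running this).

# Minimal sketch, not the article's exact backtest
class MLSignalStrategy(Strategy):
    take_profit = 0.025  # 2.5% take profit, as stated above
    stop_loss = 0.10     # 10% stop-loss, as stated above

    def init(self):
        pass  # predictions are precomputed in the 'Pred' column

    def next(self):
        price = self.data.Close[-1]
        signal = self.data.Pred[-1]
        if not self.position:
            if signal == 1:    # long signal
                self.buy(sl=price * (1 - self.stop_loss),
                         tp=price * (1 + self.take_profit))
            elif signal == 2:  # short signal
                self.sell(sl=price * (1 + self.stop_loss),
                          tp=price * (1 - self.take_profit))

bt = Backtest(df, MLSignalStrategy, cash=100_000, commission=0.0005)
stats = bt.run()
print(stats)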

Backtest results for Bitcoin 15m data, January 1st 2021 to October 22nd 2023

get entire code and profitable algos @ https://patreon.com/pppicasso

The overall return over 1022 days (January 1st 2021 to October 22nd 2023) is an amazing 4648%, with a win rate of 81.17%.

Save this model and run it on other crypto assets to check how they perform with a trained model (make sure to apply the same scaling before running the model on the data).

# Save the trained model to a file
joblib.dump(voting_classifier_1, 'voting_classifier_1.pkl')

# Later, when you want to retest with the entire dataset:
# Load the saved model from the file
voting_classifier_1_loaded = joblib.load('voting_classifier_1.pkl')
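
To reuse the saved model on another asset, the features must be engineered and scaled the same way as during training. A hedged sketch of that step, where new_df is a hypothetical DataFrame prepared with the identical feature pipeline (target column excluded from X):

# Hedged sketch: applying the saved ensemble to another asset's features.
# 'new_df' is hypothetical and must contain the same engineered feature
# columns, in the same order, as the training data.
X_new = new_df.iloc[:, :-1]

# Rescale the new asset's features to the 0-1 range before predicting
X_new_scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X_new)

new_df['Pred'] = voting_classifier_1_loaded.predict(X_new_scaled)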

Conclusion:

Our journey through the stages of algorithmic trading and machine learning epitomizes the relentless pursuit of excellence in the quest for superior trading performance. From pioneering strategies to cutting-edge machine learning models, each stage has contributed to our evolution as traders and innovators. As we celebrate our latest milestone with the CatBoostClassifier, we remain steadfast in our commitment to pushing the boundaries of what is possible in algorithmic trading.

With each breakthrough, we inch closer to our ultimate goal — harnessing the full potential of technology to unlock unprecedented returns in the dynamic world of financial markets.

  1. Exceptional Returns: The ensemble method, comprising classifiers like Random Forest, Gradient Boosting, and CatBoost, has delivered outstanding returns during the backtesting period. With a stop loss of 10% and take profit of 2.5%, the strategy yielded an impressive return of 4648% over a span of 1022 days.
  2. High Win Rate: The strategy exhibited a high winning rate of 81.17%, indicating its effectiveness in capturing profitable trading opportunities while minimizing losses.
  3. CatBoost Performance: CatBoost Classifier’s superior performance after ensemble can be attributed to its ability to handle categorical features effectively, robustness to noise, and automatic handling of missing values. Its adaptive nature and efficient gradient boosting implementation contributed to the overall success of the ensemble.
  4. Scaling for Re-usability: Scaling the model using techniques like Min-Max scaling can facilitate its reuse on other crypto assets. By scaling the features to a common range, the model can adapt to varying levels of volatility, volume, and momentum across different assets. However, thorough due diligence, including backtesting, is essential before deploying the model on new assets to ensure its effectiveness in different market conditions.
  5. Future Plans: Looking ahead, the focus will be on building a proprietary Binance-based futures API key for automated trading in the futures market. Leveraging predictive models like the ensemble developed in this study, the goal is to execute both long and short trades with precision and efficiency, further optimizing trading strategies for future success.

Final Conclusion:

The journey from developing and refining predictive models to achieving remarkable returns in backtesting has been both challenging and rewarding. With the demonstrated success of the ensemble method and CatBoost Classifier, there is a strong foundation for future endeavors in automated trading. By continuing to refine strategies, leverage advanced machine learning techniques, and adapt to evolving market dynamics, the goal of sustained profitability and success in the crypto futures market remains within reach.

Suggestions for Learning:

Books:

  • “Python for Finance” by Yves Hilpisch
  • “Machine Learning Yearning” by Andrew Ng
  • “Deep Learning” by Ian Goodfellow

Courses:

  • Coursera’s “Machine Learning” by Andrew Ng
  • Udacity’s “Deep Learning Nanodegree”

Resources:

  • Kaggle for real-world datasets and competitions.
  • Towards Data Science on Medium for insightful articles.

Financial Analysis:

  • “Quantitative Financial Analytics with Python” on edX.
  • “Financial Markets” on Coursera by Yale University.

Programming Practice:

  • LeetCode and HackerRank for general programming challenges.
  • GitHub repositories with open-source finance and machine learning projects.

These resources will provide a comprehensive foundation for understanding the technical aspects of algo trading and the application of Python in finance. Additionally, participating in online forums and communities such as Stack Overflow, GitHub, and Reddit’s r/algotrading can offer practical insights and peer support.

Thank you, Readers.

I hope you have found this article on algorithmic strategy informative and helpful. As a creator, I am dedicated to providing valuable insights and analysis on cryptocurrency, the stock market, and other asset management.

If you have enjoyed this article and would like to support my ongoing efforts, I would be honored to have you as a member of my Patreon community. As a member, you will have access to exclusive content, early access to new analysis, and the opportunity to be a part of shaping the direction of my research.

Membership starts at just $10, and you can choose to contribute on a bi-monthly basis. Your support will help me to continue to produce high-quality content and bring you the latest insights on financial analytics.

Patreon https://patreon.com/pppicasso

Regards,

Puranam Pradeep Picasso

LinkedIn https://www.linkedin.com/in/puranampradeeppicasso/

Patreon https://patreon.com/pppicasso

Facebook https://www.facebook.com/puranam.p.picasso/

Twitter https://twitter.com/picasso_999


Puranam Pradeep Picasso - ImbueDesk Profile

Algorithmic Trader, AI/ML & Crypto Enthusiast, Certified Blockchain Architect, Certified Lean Six Sigma Green Belt, Certified SCRUM Master and Entrepreneur