Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -2 (Hidden Markov Model — HMM)
How useful is a Machine Learning Model for trading? A practical approach
Introduction:
Embarking on the intersection of finance and artificial intelligence, we delve into the realm of Hidden Markov Models (HMMs) — a powerful tool capable of revealing hidden patterns and dynamics within financial markets. In this exploration, we not only demystify the essence of HMMs but also undertake a fascinating journey of application in the dynamic landscape of financial analysis.
Hidden Markov Models, rooted in probability theory, provide a framework for understanding systems with hidden states. In finance, these hidden states could represent various market regimes — bull, bear, or ranging. By discerning these states, HMMs offer a nuanced perspective, enabling traders to adapt strategies based on current market conditions.
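To make the idea concrete, here is a minimal toy sketch (synthetic data, not this article's pipeline) that fits a 2-state Gaussian HMM on simulated returns and decodes the hidden regime path, mirroring the pyhhmm API used later in this article:
import numpy as np
from pyhhmm.gaussian import GaussianHMM
rng = np.random.default_rng(42)
calm = rng.normal(0.0005, 0.002, size=(500, 1))   # low-volatility, drifting-up regime
wild = rng.normal(-0.0005, 0.010, size=(500, 1))  # high-volatility, drifting-down regime
returns = np.vstack([calm, wild])                 # one long observation sequence
model = GaussianHMM(n_states=2, n_emissions=1, covariance_type='full')
model.train([returns])                            # fit the HMM on the sequence
states = model.predict([returns])[0]              # decode the most likely hidden state per observation
print(states[:10], states[-10:])                  # the two halves should map to different states
In a trading context, this decoded state sequence is what lets us label each candle with a regime and act only in the regimes we consider favourable.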
Now, our quest takes a concrete turn as we apply HMMs to enhance algorithmic trading strategies. The focal point of this analysis revolves around a meticulous comparison between a strategy fortified with Hidden Markov Models and a Hyper-Optimized-only strategy. The chosen battleground for this comparison is the Bitcoin market, utilizing a 15-minute time frame spanning from January 1st, 2021, to October 22nd, 2023 — the very data canvas where we previously tested the Hyper-Optimized strategy.
The objective is clear: evaluate the performance of the HMM-enhanced strategy against the Hyper-Optimized-only strategy and a simple Buy and Hold approach during the specified period. This trifecta of comparisons aims to unravel whether integrating Hidden Markov Models into algorithmic trading strategies can outpace conventional approaches and if it stands superior to the Hyper-Optimized strategy.
Join us in this journey as we unravel the potential of HMMs in reshaping the landscape of algorithmic trading and answer the pivotal question — does Hidden Markov Model hold the key to unlocking greater success in the volatile realms of cryptocurrency trading? The answers await as we navigate through the data and decipher the intricate dance of market dynamics.
Our Algorithmic Trading Vs/+ Machine Learning Journey so far?
Stage 1:
We developed a crypto algorithmic strategy that produced large backtested profits when run across multiple crypto assets (138+), with returns of 8787%+ over a span of roughly 3 years.
“The 8787%+ ROI Algo Strategy Unveiled for Crypto Futures! Revolutionized With Famous RSI, MACD, Bollinger Bands, ADX, EMA” — Link
We then ran the same strategy live in dry-run mode for 7 days and shared the details in another article.
“Freqtrade Revealed: 7-Day Journey in Algorithmic Trading for Crypto Futures Market” — Link
After successful backtest results and forward testing (live trading in dry-run mode), we set out to improve the odds of making more profit (lower stop-losses, a higher chance of winning, reduced risk, and other important refinements).
Stage 2:
We then developed a standalone strategy without the Freqtrade setup (foregoing the trailing stop-loss, parallel multi-asset execution, and the richer risk-management features Freqtrade provides for free as an open-source platform), tested it on the market, optimized it with hyperparameters, and ended up with positive profits from the strategy.
“How I achieved 3000+% Profit in Backtesting for Various Algorithmic Trading Bots and how you can do the same for your Trading Strategies — Using Python Code” — Link
Stage 3:
Since we had tested our strategy on only one asset (BTC/USDT in the crypto market), we wanted to know whether we could take the full set of assets used for the earlier Freqtrade strategy and segregate them into clusters based on volatility. Trading certain volatile assets becomes easier, and we avoid hitting large stop-losses on others, if the strategy is applied according to each coin's volatility.
We used K-Nearest Neighbors (KNN) to identify clusters among the 138 crypto assets used in our Freqtrade strategy, which gave us 8000+% profits during backtesting.
“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -1 (K-Nearest Neighbors)” — Link
Stage 4:
Now, we introduce an unsupervised machine learning model, the Hidden Markov Model (HMM), to identify market trends so that we trade only during profitable regimes and avoid sudden pumps, dumps, and negative trends. The explanation below walks through exactly that. Let's dive in.
The Code Explanation
Step 1: Importing Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pyhhmm.gaussian import GaussianHMM
from pandas_datareader.data import DataReader
import json
from datetime import datetime
import talib as ta
import ccxt
- pandas and numpy for data manipulation.
- matplotlib.pyplot for plotting graphs.
- GaussianHMM from pyhhmm for the Hidden Markov Model (HMM) implementation.
- DataReader from pandas_datareader.data for fetching financial data.
- json for JSON handling.
- datetime for working with dates.
- talib (imported as ta) for technical analysis functions.
- ccxt for cryptocurrency exchange connectivity.
Step 2: Data Extraction, Segregation and Pre-processing
# Data Extraction
# start_date = "2017-01-1"
# end_date = "2022-06-1"
# symbol = "SPY"
# data = DataReader(name=symbol, data_source='yahoo', start=start_date, end=end_date)
# data = data[["Open", "High", "Low", "Adj Close"]]
# Define the path to your JSON file
file_path = "../BTC_USDT_USDT-15m-futures.json"
# Open the file and read the data
with open(file_path, "r") as f:
    data = json.load(f)
# Check the data structure
print(data)  # Should be a list of OHLCV rows: [timestamp, open, high, low, close, volume]
# jupyter notebook --NotebookApp.iopub_data_rate_limit=100000000
df = pd.DataFrame(data)
# Extract the OHLC data (adjust column names as needed)
# ohlc_data = df[["date","open", "high", "low", "close", "volume"]]
df.rename(columns={0: "Date", 1: "Open", 2: "High",3: "Low", 4: "Adj Close", 5: "Volume"}, inplace=True)
# Convert timestamps to datetime objects
df["Date"] = pd.to_datetime(df['Date'] / 1000, unit='s')
df.set_index("Date", inplace=True)
# Format the date index
df.index = df.index.strftime("%m-%d-%Y %H:%M")
# print(df.dropna(), df.describe(), df.info())
data = df
data
This code essentially reads BTC/USDT futures data from a JSON file, processes it into a Pandas DataFrame, and formats the DataFrame to represent OHLCV data.
You can use any data provider of your choice to fetch the data and then rename the columns to match the names above; that makes it much easier to replicate all of the following steps.
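For example, since ccxt is already imported, a hedged sketch of pulling 15-minute BTC/USDT candles straight from an exchange and renaming the columns to match the ones above could look like this (the exchange choice and candle limit are assumptions, not part of the original pipeline):
exchange = ccxt.binance()
ohlcv = exchange.fetch_ohlcv('BTC/USDT', timeframe='15m', limit=1000)  # rows: [ts, open, high, low, close, volume]
df = pd.DataFrame(ohlcv)
df.rename(columns={0: "Date", 1: "Open", 2: "High", 3: "Low", 4: "Adj Close", 5: "Volume"}, inplace=True)
df["Date"] = pd.to_datetime(df["Date"], unit="ms")
df.set_index("Date", inplace=True)
df.index = df.index.strftime("%m-%d-%Y %H:%M")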
# Add Returns and Range
df = data.copy()
df["Returns"] = (df["Adj Close"] / df["Adj Close"].shift(1)) - 1
df["Range"] = (df["High"] / df["Low"]) - 1
df["Volatility"] = df['Returns'].rolling(window=14).std()
df.dropna(inplace=True)
print("Length: ", len(df))
df
Returns: the percentage return of the “Adj Close” price from one 15-minute candle to the next.
df["Returns"] = (df["Adj Close"] / df["Adj Close"].shift(1)) - 1
Range: the percentage range of each candle, calculated from its “High” and “Low” prices.
df["Range"] = (df["High"] / df["Low"]) - 1
Volatility: the rolling standard deviation of the “Returns” column over a 14-candle window, a simple measure of price volatility.
df["Volatility"] = df['Returns'].rolling(window=14).std()
The dropna method then removes any rows with missing values. The final DataFrame is displayed, and its length is printed.
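As a quick illustrative calculation (made-up numbers, not from the dataset): if one candle closes at 40,000 and the next at 40,200, then Returns = 40200 / 40000 - 1 = 0.005 (0.5%); if that candle's High is 40,400 and its Low is 40,000, then Range = 40400 / 40000 - 1 = 0.01 (1%). Volatility is simply the standard deviation of the last 14 such Returns values.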
Step 3: Introducing Strategy
# Add Moving Average
df["MA_12"] = df["Adj Close"].rolling(window=12).mean()
df["MA_21"] = df["Adj Close"].rolling(window=21).mean()
def trade_signal(dataframe=df, rsi_tp=19, bb_tp=16, vol_long=42, vol_short=29):
    # Compute indicators
    dataframe['RSI'] = ta.RSI(dataframe['Adj Close'], timeperiod=rsi_tp)
    dataframe['upper_band'], dataframe['middle_band'], dataframe['lower_band'] = ta.BBANDS(dataframe['Adj Close'], timeperiod=bb_tp)
    dataframe['macd'], dataframe['signal'], _ = ta.MACD(dataframe['Adj Close'])
    # Long entry conditions
    conditions_long = ((dataframe['RSI'] > 50) &
                       (dataframe['Adj Close'] > dataframe['middle_band']) &
                       (dataframe['Adj Close'] < dataframe['upper_band']) &
                       (dataframe['macd'] > dataframe['signal']) &
                       ((dataframe['High'] - dataframe['Adj Close']) < (dataframe['Adj Close'] - dataframe['Open'])) &
                       (dataframe['Adj Close'] > dataframe['Open']) &
                       (dataframe['Volume'] > dataframe['Volume'].rolling(window=vol_long).mean()))
    # Short entry conditions
    conditions_short = ((dataframe['RSI'] < 50) &
                        (dataframe['Adj Close'] < dataframe['middle_band']) &
                        (dataframe['Adj Close'] > dataframe['lower_band']) &
                        (dataframe['macd'] < dataframe['signal']) &
                        ((dataframe['Adj Close'] - dataframe['Low']) < (dataframe['Open'] - dataframe['Adj Close'])) &
                        (dataframe['Adj Close'] < dataframe['Open']) &
                        (dataframe['Volume'] > dataframe['Volume'].rolling(window=vol_short).mean()))
    # 1 = long signal, -1 = short signal, 0 = no signal
    dataframe['trend'] = 0
    dataframe.loc[conditions_long, 'trend'] = 1
    dataframe.loc[conditions_short, 'trend'] = -1
    dataframe.dropna(inplace=True)
    return dataframe
# ho_15m_pf = {'bb_tp': 16, 'leverage': 5, 'rsi_tp': 19,
# 'stop_loss': 0.16945844874195432, 'take_profit': 0.266730306293752,
# 'vol_long': 42, 'vol_short': 29}
# trading_signal = trade_signal(dataframe=df, rsi_tp=19, bb_tp=16, vol_long=42, vol_short=29)
df['trade_signal'] = trade_signal(dataframe=df, rsi_tp=19, bb_tp=16, vol_long=42, vol_short=29)['trend']
df.info()
# Check for inf or nan values
inf_mask = np.isinf(df) | np.isnan(df)
# Remove rows containing inf or nan values
df_cleaned = df[~inf_mask.any(axis=1)]
# Remove columns containing inf or nan values
df = df_cleaned.loc[:, ~inf_mask.any(axis=0)]
df.dropna(inplace=True)
df.info()
Moving Averages:
- Two moving averages, “MA_12” and “MA_21”, are calculated over rolling windows of 12 and 21 candles, respectively.
df["MA_12"] = df["Adj Close"].rolling(window=12).mean()
df["MA_21"] = df["Adj Close"].rolling(window=21).mean()
Trade Signal Function:
- A function named trade_signal is defined, which takes the DataFrame as input and calculates technical indicators such as RSI, Bollinger Bands, and MACD.
- Trading conditions are defined for both long and short positions based on the computed indicators.
- The ‘trend’ column is created in the DataFrame, where 1 represents a long signal, -1 represents a short signal, and 0 represents no signal.
- The function returns the modified DataFrame.
df['trade_signal'] = trade_signal(dataframe=df, rsi_tp=19, bb_tp=16, vol_long=42, vol_short=29)['trend']
Data Cleaning:
The code checks for and removes rows and columns containing NaN or inf values from the DataFrame.
inf_mask = np.isinf(df) | np.isnan(df)
df_cleaned = df[~inf_mask.any(axis=1)]
df = df_cleaned.loc[:, ~inf_mask.any(axis=0)]
df.dropna(inplace=True)
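A more compact alternative sketch (an assumption on my part, not the author's code) would drop only the offending rows, whereas the original also drops whole columns that contain such values:
df = df.replace([np.inf, -np.inf], np.nan).dropna()  # convert inf to NaN, then drop incomplete rows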
Step 4: Data Splitting and Rounding
import math
# int_data_length is 40% of the DataFrame length (used as the training split)
int_data_length = int(len(df) * 0.4)
# Specify the desired multiple
multiple = 1000 # You can change this to your desired multiple
# Round up to the nearest multiple
rounded_length = math.ceil(int_data_length / multiple) * multiple
# Print the result
print(rounded_length)
# int_data_length = int(len(df)* 0.8)
X_train = df[["Returns", "Range", "Volatility"]].iloc[:rounded_length]
X_test = df[["Returns", "Range", "Volatility"]].iloc[rounded_length:]
X = df[["Returns", "Range", "Volatility"]]
save_df = df.iloc[rounded_length:]
print("Train Length: ", len(X_train))
print("Test Length: ", len(X_test))
print("X_train From: ", X_train.head(1).index.item())
print("X_train To: ", X_train.tail(1).index.item())
print("X_test From: ", X_test.head(1).index.item())
print("X_test To: ", X_test.tail(1).index.item())
This code segment splits the data after rounding the split point up to the nearest multiple. Here’s a breakdown:
Rounding Length:
- A specified fraction (40%) of the DataFrame ‘df’ length is calculated and rounded up to the nearest multiple (specified as 1000).
int_data_length = int(len(df) * 0.4)
rounded_length = math.ceil(int_data_length / multiple) * multiple
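For instance (an illustrative number, not the actual dataset length), if len(df) were 96,123, then int(96123 * 0.4) = 38,449, and math.ceil(38449 / 1000) * 1000 rounds that up to 39,000, which becomes the training cut-off.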
Data Splitting:
- The DataFrame is split into training (X_train) and testing (X_test) sets based on the rounded length.
- The original DataFrame ‘X’ is also defined for reference.
- ‘save_df’ contains the remaining portion of the DataFrame after the split.
X_train = df[["Returns", "Range", "Volatility"]].iloc[:rounded_length]
X_test = df[["Returns", "Range", "Volatility"]].iloc[rounded_length:]
X = df[["Returns", "Range", "Volatility"]]
save_df = df.iloc[rounded_length:]
Print Information:
- Lengths and index information for the training and testing sets are printed.
print("Train Length: ", len(X_train))
print("Test Length: ", len(X_test))
print("X_train From: ", X_train.head(1).index.item())
print("X_train To: ", X_train.tail(1).index.item())
print("X_test From: ", X_test.head(1).index.item())
print("X_test To: ", X_test.tail(1).index.item())
This code prepares the data for training and testing the model, using a rounded split length for consistency.
Step 5: Training the HMM Model and Predicting on the Test Data
# Train HMM
model = GaussianHMM(n_states=4, covariance_type='full', n_emissions=3)
# model.train([np.array(X_train_cleaned.values)])
model.train([np.array(X_train.values)])
model.predict([X_train.values])[0][:10]
# Make Prediction on Test Data
df_main = save_df.copy()
df_main.drop(columns=["High", "Low"], inplace=True)
hmm_results = model.predict([X_test.values])[0]
df_main["HMM"] = hmm_results
df_main
Initialize HMM Model:
- A Gaussian Hidden Markov Model (HMM) is initialized with 4 states, full covariance type, and 3 emissions.
model = GaussianHMM(n_states=4, covariance_type='full', n_emissions=3)
Train HMM Model:
- The model is trained using the training data (X_train.values).
model.train([np.array(X_train.values)])
Generate Predictions:
- Predictions are made on the training data to preview the first 10 decoded hidden states.
model.predict([X_train.values])[0][:10]
Make Predictions on Test Data:
- The trained model is used to predict the hidden states sequence for the test data (X_test.values).
- The results are added to the main DataFrame (df_main) under the "HMM" column.
hmm_results = model.predict([X_test.values])[0]
df_main["HMM"] = hmm_results
This code applies a Gaussian Hidden Markov Model to the provided financial data, capturing hidden states and assigning them to the “HMM” column in the DataFrame.
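Because the state labels themselves carry no intrinsic meaning, it helps to inspect what each decoded state looks like before deciding which ones to treat as favourable. A small hedged sketch (not part of the original code) that summarises the per-state return profile on the test data:
# Average return, volatility and frequency of each decoded hidden state
state_stats = df_main.groupby("HMM")["Returns"].agg(["mean", "std", "count"])
print(state_stats)
States with positive mean returns and moderate volatility are natural candidates for the favourable_states list used in the next step.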
Step 6: TESTING THE MARKET WITH TEST DATA
# Add MA Signals
df_main.loc[df_main["trend"] > 0, "MA_Signal"] = 1
df_main.loc[df_main["trend"] <= 0, "MA_Signal"] = 0
# Add HMM Signals
favourable_states = [0,1]
hmm_values = df_main["HMM"].values
hmm_values = [1 if x in favourable_states else 0 for x in hmm_values]
df_main["HMM_Signal"] = hmm_values
# Add Combined Signal
df_main["Main_Signal"] = 0
df_main.loc[(df_main["MA_Signal"] == 1) & (df_main["HMM_Signal"] == 1), "Main_Signal"] = 1
df_main["Main_Signal"] = df_main["Main_Signal"].shift(1)
df_main["MA_Signal"] = df_main["MA_Signal"].shift(1)
# Hold-On Returns
df_main["lrets_holdon"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["holdon_prod"] = df_main["lrets_holdon"].cumsum()
df_main["holdon_prod_exp"] = np.exp(df_main["holdon_prod"]) - 1
# Benchmark Returns
# df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1)) * df_main["MA_Signal"]
df_main["bench_prod"] = df_main["lrets_bench"].cumsum()
df_main["bench_prod_exp"] = np.exp(df_main["bench_prod"]) - 1
# Strategy Returns
df_main["lrets_strat"] = np.log(df_main["Open"].shift(-1) / df_main["Open"]) * df_main["Main_Signal"]
df_main["lrets_prod"] = df_main["lrets_strat"].cumsum()
df_main["strat_prod_exp"] = np.exp(df_main["lrets_prod"]) - 1
# Review Results Table
df_main.dropna(inplace=True)
df_main.tail()
# Sharpe Ratio Function
def sharpe_ratio(returns_series):
    N = 365
    NSQRT = np.sqrt(N)
    rf = 0.01 / NSQRT
    mean = returns_series.mean() * N
    sigma = returns_series.std() * NSQRT
    sharpe_ratio = round((mean - rf) / sigma, 2)
    return sharpe_ratio
# Metrics
bench_rets = round(df_main["bench_prod_exp"].values[-1] * 100, 1)
strat_rets = round(df_main["strat_prod_exp"].values[-1] * 100, 1)
holdon_rets = round(df_main["holdon_prod_exp"].values[-1] * 100, 1)
bench_sharpe = sharpe_ratio(df_main["lrets_bench"].values)
strat_sharpe = sharpe_ratio(df_main["lrets_strat"].values)
holdon_sharpe = sharpe_ratio(df_main["lrets_holdon"].values)
# Print Metrics
print("TESTING THE MARKET WITH TEST DATA")
print("---- ---- ---- ---- ---- ----")
print(f"Returns Hold-on: {holdon_rets}%")
print(f"Returns Benchmark: {bench_rets}%")
print(f"Returns Strategy: {strat_rets}%")
print("---- ---- ---- ---- ---- ----")
print(f"Sharpe Hold-on: {holdon_sharpe}")
print(f"Sharpe Benchmark: {bench_sharpe}")
print(f"Sharpe Strategy: {strat_sharpe}")
print("---- ---- ---- ---- ---- ----")
print("Returns start Date From: ", df_main.head(1).index.item())
print("Returns End Date Upto: ", df_main.tail(1).index.item())
# Assuming X_train and X_test are your DataFrames
start_date = pd.to_datetime(df_main.head(1).index.item())
end_date = pd.to_datetime(df_main.tail(1).index.item())
# Calculate the difference
days_difference = (end_date - start_date).days
# Print or use the result as needed
print(f"Total number of days: {days_difference}")
print("---- ---- ---- ---- ---- ----")
print(f"Total Daily Average Returns for Strategy: {round(((strat_rets/days_difference)),3)}%")
print(f"Total Monthly Average Returns for Strategy: {round(((strat_rets/days_difference)*12),3)}%")
print(f"Total yearly Average Returns for Strategy: {round(((strat_rets/days_difference)*365),3)}%")
Add MA (Trade_signal) Signals:
- The code assigns signals (1 for a long signal, 0 for no long position) based on the trade_signal (MA) strategy’s ‘trend’ column.
df_main.loc[df_main["trend"] > 0, "MA_Signal"] = 1
df_main.loc[df_main["trend"] <= 0, "MA_Signal"] = 0
Add HMM Signals:
- Signals are assigned based on Hidden Markov Model (HMM) predictions, considering only favorable states (0 and 1).
favourable_states = [0, 1]
hmm_values = [1 if x in favourable_states else 0 for x in df_main["HMM"].values]
df_main["HMM_Signal"] = hmm_values
Add Combined Signal:
- A combined signal is set to 1 only when both the MA and HMM signals are 1. Both signals are then shifted by one candle, so a signal formed on one candle is only acted on at the next candle, avoiding look-ahead bias.
df_main["Main_Signal"] = 0
df_main.loc[(df_main["MA_Signal"] == 1) & (df_main["HMM_Signal"] == 1), "Main_Signal"] = 1
df_main["Main_Signal"] = df_main["Main_Signal"].shift(1)
df_main["MA_Signal"] = df_main["MA_Signal"].shift(1)
Calculate Returns:
- Log returns are calculated for holding, benchmark, and strategy.
df_main["lrets_holdon"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1)) * df_main["MA_Signal"]
df_main["lrets_strat"] = np.log(df_main["Open"].shift(-1) / df_main["Open"]) * df_main["Main_Signal"]
Calculate Performance Metrics:
- Sharpe ratios and returns are computed for hold-on, benchmark, and strategy.
def sharpe_ratio(returns_series): # Sharpe Ratio Function
# ...
bench_rets = round(df_main["bench_prod_exp"].values[-1] * 100, 1)
strat_rets = round(df_main["strat_prod_exp"].values[-1] * 100, 1)
holdon_rets = round(df_main["holdon_prod_exp"].values[-1] * 100, 1)
Print Metrics:
- The final metrics are printed, including returns, Sharpe ratios, and other relevant information.
print("TESTING THE MARKET WITH TEST DATA")
# ... (printing other metrics)
This code evaluates the performance of a trading strategy based on combined signals from Moving Averages and Hidden Markov Models, providing metrics such as returns and Sharpe ratios.
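One caveat on the Sharpe calculation above: it annualizes with N = 365, which effectively treats every row as a day, while the rows here are 15-minute candles. A variant annualization (an assumption on my part, not the author's convention, and it would change the reported Sharpe values) would scale by the number of 15-minute periods in a year:
def sharpe_ratio_15m(returns_series, rf_annual=0.01):
    periods_per_year = 365 * 24 * 4                            # 15-minute candles per year
    mean = returns_series.mean() * periods_per_year            # annualized mean return
    sigma = returns_series.std() * np.sqrt(periods_per_year)   # annualized volatility
    return round((mean - rf_annual) / sigma, 2)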
You can find the whole code here — https://patreon.com/pppicasso
Overall Interpretation:
- The combined strategy outperformed both the hold-on and MA strategies in terms of total returns and risk-adjusted returns (Sharpe ratio).
- Positive Sharpe ratios indicate that the strategy is generating returns with better risk-adjusted performance.
- The strategy seems to have provided a more favorable outcome compared to simply holding or following the MA strategy alone during the specified period.
Remember, past performance may not be indicative of future results, and it’s crucial to consider various factors before making trading decisions.
Step 7: Plot Equity Curves for Test Data
# Plot Equity Curves
fig, ax = plt.subplots(figsize=(30, 15))
# Plot Hold-on Strategy
ax.plot(df_main.index, df_main["holdon_prod_exp"] * 100, color="red", label="Hold-on Strategy")
# Plot Benchmark Strategy
ax.plot(df_main.index, df_main["bench_prod_exp"] * 100, color="blue", label="Benchmark Strategy")
# Plot Combined Strategy
ax.plot(df_main.index, df_main["strat_prod_exp"] * 100, color="green", label="Combined Strategy")
# Label the x-axis and y-axis
ax.set_xlabel('Time')
ax.set_ylabel('Profit %')
# Add a legend
ax.legend()
# Show the plot
plt.show()
Step 8: Saving the Model and Whole Data for Future Use
import joblib
joblib.dump(model, 'hmm_model.joblib')
# Rebuild df_main from the full DataFrame (train + test)
df_main = df.copy()
df_main.drop(columns=["High", "Low"], inplace=True)
# Load the trained HMM model
loaded_model = joblib.load('hmm_model.joblib')
hmm_results = loaded_model.predict([X.values])[0]
df_main["HMM"] = hmm_results
dataset = df_main.copy()
dataset.to_csv('./data/dataset.csv', index=False)
dataset
- The trained HMM model is saved using joblib with the filename ‘hmm_model.joblib’. In the future, we can load the saved model directly for testing or live trading elsewhere.
- The dataset, including the HMM results, is saved to a CSV file named ‘dataset.csv’. It contains the entire Train + Test data.
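As a hedged sketch of that reuse (the feature columns and window length here are assumptions, not prescribed by the article), the saved model can be applied to freshly computed features like this:
loaded_model = joblib.load('hmm_model.joblib')
latest_features = df[["Returns", "Range", "Volatility"]].tail(500)   # most recent candles
latest_states = loaded_model.predict([latest_features.values])[0]    # decoded regimes
print(latest_states[-5:])                                            # regime of the newest candles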
Testing the Strategy During the Up-Trend, the Down-Trend, and the Entire Market Cycle
Step 9: Saving the Data for the Up-Trend, Down-Trend, and Sideways Market
max_adj_close_idx = df_main['Adj Close'].idxmax()
max_adj_close_idx
# df_main.loc['03-28-2022 19:15']
# start_date = '01-01-2021 00:00:00'
df_filtered_1 = df_main.loc[:max_adj_close_idx].copy()
df_filtered_2 = df_main.loc[max_adj_close_idx:].copy()
min_adj_close_idx_df_filtered_1 = df_filtered_1['Adj Close'].idxmin()
min_adj_close_idx_df_filtered_1
# df_filtered_1.loc['02-24-2022 05:30']
# start_date = min_adj_close_idx_2
dataset1 = df_filtered_1.loc[min_adj_close_idx_df_filtered_1:max_adj_close_idx].copy()
# Save the datasets to separate CSV files
dataset1.to_csv('./data/dataset1.csv', index=False)
# Set "Date" as the index again
# dataset1.set_index('Date', inplace=True)
dataset1
min_adj_close_idx_df_filtered_2 = df_filtered_2['Adj Close'].idxmin()
dataset2 = df_filtered_2.loc[max_adj_close_idx:min_adj_close_idx_df_filtered_2].copy()
dataset2.to_csv('./data/dataset2.csv', index=False)
dataset2
# Find the mean and standard deviation of "Adj Close"
mean_adj_close = df_main['Adj Close'].mean()
std_dev_adj_close = df_main['Adj Close'].std()
# Create a mask for filtering the data within 1.5 times standard deviation from the mean
mask = (df_main['Adj Close'] >= mean_adj_close - 1.5 * std_dev_adj_close) & (df_main['Adj Close'] <= mean_adj_close + 1.5 * std_dev_adj_close)
# Select all candles within the band (used here as the sideways-market dataset)
longest_continuous_dataset = df_main[mask].copy()
longest_continuous_dataset.to_csv('./data/longest_continuous_dataset.csv', index=False)
longest_continuous_dataset
Finding the Date of Maximum “Adj Close”:
- The code identifies the date with the maximum “Adj Close” value using the idxmax() function.
Splitting the Data before and after the Maximum “Adj Close” Date:
- The dataframe is split into two parts: data before the date of the maximum “Adj Close” and data from that date onwards.
Finding the Date of Minimum “Adj Close” in the First Sub-dataset:
- For the first sub-dataset (before the maximum “Adj Close” date), the code finds the date with the minimum “Adj Close” value.
Creating Dataset 1 (For the Up-Trend):
- A dataset (dataset1) is created by selecting data from the minimum “Adj Close” date in the first sub-dataset to the date of the maximum “Adj Close”.
Saving Dataset 1 to CSV:
- The first dataset is saved to a CSV file named ‘dataset1.csv’ for further analysis or reference.
Finding the Date of Minimum “Adj Close” in the Second Sub-dataset:
- For the second sub-dataset (from the maximum “Adj Close” date onwards), the code finds the date with the minimum “Adj Close” value.
Creating Dataset 2 (For the Down-Trend):
- Another dataset (dataset2) is created by selecting data from the maximum “Adj Close” date to the date of the minimum “Adj Close” in the second sub-dataset.
Saving Dataset 2 to CSV:
- The second dataset is saved to a CSV file named ‘dataset2.csv’ for further analysis or reference.
Filtering Data Within 1.5 Times Standard Deviation from Mean:
- The code calculates the mean and standard deviation of the “Adj Close” values for the entire dataframe.
- A mask is created to filter the data within 1.5 times the standard deviation from the mean.
Finding the Dataset within the Criteria (For the Sideways Market):
- The code extracts all candles that fall within the criteria defined by the mask; this serves as the sideways-market dataset.
- This dataset is saved to a CSV file named ‘longest_continuous_dataset.csv’ for further analysis or reference.
Printing Results:
- The resulting datasets (dataset1, dataset2, and longest_continuous_dataset) are printed for examination.
Step 10: Testing DURING UP-TREND OF THE MARKET
# Make Prediction on Test Data
df_main = dataset1.copy()
df_main
# Add MA Signals
df_main.loc[df_main["trend"] > 0, "MA_Signal"] = 1
df_main.loc[df_main["trend"] <= 0, "MA_Signal"] = 0
# Add HMM Signals
favourable_states = [0,1,2,3]
hmm_values = df_main["HMM"].values
hmm_values = [1 if x in favourable_states else 0 for x in hmm_values]
df_main["HMM_Signal"] = hmm_values
# Add Combined Signal
df_main["Main_Signal"] = 0
df_main.loc[(df_main["MA_Signal"] == 1) & (df_main["HMM_Signal"] == 1), "Main_Signal"] = 1
df_main["Main_Signal"] = df_main["Main_Signal"].shift(1)
df_main["MA_Signal"] = df_main["MA_Signal"].shift(1)
# Hold-On Returns
df_main["lrets_holdon"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["holdon_prod"] = df_main["lrets_holdon"].cumsum()
df_main["holdon_prod_exp"] = np.exp(df_main["holdon_prod"]) - 1
# Benchmark Returns
# df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1)) * df_main["MA_Signal"]
df_main["bench_prod"] = df_main["lrets_bench"].cumsum()
df_main["bench_prod_exp"] = np.exp(df_main["bench_prod"]) - 1
# Strategy Returns
df_main["lrets_strat"] = np.log(df_main["Open"].shift(-1) / df_main["Open"]) * df_main["Main_Signal"]
df_main["lrets_prod"] = df_main["lrets_strat"].cumsum()
df_main["strat_prod_exp"] = np.exp(df_main["lrets_prod"]) - 1
# Review Results Table
df_main.dropna(inplace=True)
df_main.tail()
# Sharpe Ratio Function
def sharpe_ratio(returns_series):
    N = 365
    NSQRT = np.sqrt(N)
    rf = 0.01 / NSQRT
    mean = returns_series.mean() * N
    sigma = returns_series.std() * NSQRT
    sharpe_ratio = round((mean - rf) / sigma, 2)
    return sharpe_ratio
# Metrics
bench_rets = round(df_main["bench_prod_exp"].values[-1] * 100, 1)
strat_rets = round(df_main["strat_prod_exp"].values[-1] * 100, 1)
holdon_rets = round(df_main["holdon_prod_exp"].values[-1] * 100, 1)
bench_sharpe = sharpe_ratio(df_main["lrets_bench"].values)
strat_sharpe = sharpe_ratio(df_main["lrets_strat"].values)
holdon_sharpe = sharpe_ratio(df_main["lrets_holdon"].values)
# Print Metrics
print("DURING UP-TREND OF THE MARKET")
print("---- ---- ---- ---- ---- ----")
print(f"Returns Hold-on (If you only entered at Lowest price of the asset during UP-Trend): {holdon_rets}%")
print(f"Returns Benchmark: {bench_rets}%")
print(f"Returns Strategy: {strat_rets}%")
print("---- ---- ---- ---- ---- ----")
print(f"Sharpe Hold-on: {holdon_sharpe}")
print(f"Sharpe Benchmark: {bench_sharpe}")
print(f"Sharpe Strategy: {strat_sharpe}")
print("---- ---- ---- ---- ---- ----")
print("Returns start Date From: ", dataset1.head(1).index.item())
print("Returns End Date Upto: ", dataset1.tail(1).index.item())
# Assuming X_train and X_test are your DataFrames
start_date = pd.to_datetime(dataset1.head(1).index.item())
end_date = pd.to_datetime(dataset1.tail(1).index.item())
# Calculate the difference
days_difference = (end_date - start_date).days
# Print or use the result as needed
print(f"Total number of days: {days_difference}")
print("---- ---- ---- ---- ---- ----")
print(f"Total Daily Average Returns for Strategy: {round(((strat_rets/days_difference)),3)}%")
print(f"Total Monthly Average Returns for Strategy: {round(((strat_rets/days_difference)*12),3)}%")
print(f"Total yearly Average Returns for Strategy: {round(((strat_rets/days_difference)*365),3)}%")
The code description is the same as in Step 6, except that it uses dataset1.
During the up-trend of the market, the following results were obtained for dataset1:
Returns Hold-on (If you only entered at the Lowest price of the asset during UP-Trend):
- 135.4%
- This metric represents the hypothetical returns if an investor had entered the market at the lowest price during the upward trend and held the position until the end of the dataset.
Returns Benchmark:
- 35.3%
- This is the returns achieved by a benchmark strategy.
Returns Strategy:
- 35.3%
- This is the returns achieved by the implemented strategy.
Sharpe Ratio Hold-on:
- 0.1
- The Sharpe ratio measures the risk-adjusted performance of an investment. A higher Sharpe ratio indicates better risk-adjusted returns.
Sharpe Ratio Benchmark:
- 0.18
- The Sharpe ratio for the benchmark strategy.
Sharpe Ratio Strategy:
- 0.18
- The Sharpe ratio for the implemented strategy.
Returns Start Date From:
- 01–01–2021 18:30
- The starting date of the analyzed period.
Returns End Date Upto:
- 11–10–2021 14:00
- The ending date of the analyzed period.
Total Number of Days:
- 312
- The total number of days considered in the dataset.
Total Daily Average Returns for Strategy:
- 0.113%
- The average daily returns for the implemented strategy.
Total Monthly Average Returns for Strategy:
- 1.358%
- The average monthly returns for the implemented strategy.
Total Yearly Average Returns for Strategy:
- 41.296%
- The average yearly returns for the implemented strategy.
These results provide insights into the performance of the strategy during the up-trend market conditions, comparing it with a benchmark strategy and a hold-on approach. The Sharpe ratios also indicate the risk-adjusted performance of each strategy.
Although Hold-on shows a large profit (and only if you had bought at the lowest price of the cycle), its Sharpe ratio is still lower than the benchmark's or the strategy's. That is because of the number of days it stays fully exposed, which increases the risk of losing money whenever the market swings in the negative direction.
The identical returns for the benchmark and the strategy show that, during the up-trend, both perform almost the same. We still have to check how they perform during the down-trend.
Step 11: Plotting the Results During the Up-Trend Market
# Plot Equity Curves
fig, ax = plt.subplots(figsize=(30, 15))
# Plot Hold-on Strategy
ax.plot(df_main.index, df_main["holdon_prod_exp"] * 100, color="red", label="Hold-on Strategy")
# Plot Benchmark Strategy
ax.plot(df_main.index, df_main["bench_prod_exp"] * 100, color="blue", label="Benchmark Strategy")
# Plot Combined Strategy
ax.plot(df_main.index, df_main["strat_prod_exp"] * 100, color="green", label="Combined Strategy")
# Label the x-axis and y-axis
ax.set_xlabel('Time')
ax.set_ylabel('Profit %')
# Add a legend
ax.legend()
# Show the plot
plt.show()
The plot above makes it clear that, although the hold-on approach ended with a 135%+ profit, it also fell from over 120% back to roughly 0% before climbing back above 135%, which shows how risky it is to simply hold the coin.
Step 12: Test DURING DOWN-TREND OF THE MARKET
# Make Prediction on Test Data
df_main = dataset2.copy()
df_main
# Add MA Signals
df_main.loc[df_main["trend"] > 0, "MA_Signal"] = 1
df_main.loc[df_main["trend"] <= 0, "MA_Signal"] = 0
# Add HMM Signals
favourable_states = [1]
hmm_values = df_main["HMM"].values
hmm_values = [1 if x in favourable_states else 0 for x in hmm_values]
df_main["HMM_Signal"] = hmm_values
# Add Combined Signal
df_main["Main_Signal"] = 0
df_main.loc[(df_main["MA_Signal"] == 1) & (df_main["HMM_Signal"] == 1), "Main_Signal"] = 1
df_main["Main_Signal"] = df_main["Main_Signal"].shift(1)
df_main["MA_Signal"] = df_main["MA_Signal"].shift(1)
# Hold-On Returns
df_main["lrets_holdon"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["holdon_prod"] = df_main["lrets_holdon"].cumsum()
df_main["holdon_prod_exp"] = np.exp(df_main["holdon_prod"]) - 1
# Benchmark Returns
# df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1)) * df_main["MA_Signal"]
df_main["bench_prod"] = df_main["lrets_bench"].cumsum()
df_main["bench_prod_exp"] = np.exp(df_main["bench_prod"]) - 1
# Strategy Returns
df_main["lrets_strat"] = np.log(df_main["Open"].shift(-1) / df_main["Open"]) * df_main["Main_Signal"]
df_main["lrets_prod"] = df_main["lrets_strat"].cumsum()
df_main["strat_prod_exp"] = np.exp(df_main["lrets_prod"]) - 1
# Review Results Table
df_main.dropna(inplace=True)
df_main.tail()
# Sharpe Ratio Function
def sharpe_ratio(returns_series):
    N = 365
    NSQRT = np.sqrt(N)
    rf = 0.01 / NSQRT
    mean = returns_series.mean() * N
    sigma = returns_series.std() * NSQRT
    sharpe_ratio = round((mean - rf) / sigma, 2)
    return sharpe_ratio
# Metrics
bench_rets = round(df_main["bench_prod_exp"].values[-1] * 100, 1)
strat_rets = round(df_main["strat_prod_exp"].values[-1] * 100, 1)
holdon_rets = round(df_main["holdon_prod_exp"].values[-1] * 100, 1)
bench_sharpe = sharpe_ratio(df_main["lrets_bench"].values)
strat_sharpe = sharpe_ratio(df_main["lrets_strat"].values)
holdon_sharpe = sharpe_ratio(df_main["lrets_holdon"].values)
# Print Metrics
print("DURING DOWN-TREND OF THE MARKET")
print("---- ---- ---- ---- ---- ----")
print(f"Returns Hold-on (If you only entered at Lowest price of the asset during DOWN-Trend): {holdon_rets}%")
print(f"Returns Benchmark: {bench_rets}%")
print(f"Returns Strategy: {strat_rets}%")
print("---- ---- ---- ---- ---- ----")
print(f"Sharpe Hold-on: {holdon_sharpe}")
print(f"Sharpe Benchmark: {bench_sharpe}")
print(f"Sharpe Strategy: {strat_sharpe}")
print("---- ---- ---- ---- ---- ----")
print("Returns start Date From: ", dataset2.head(1).index.item())
print("Returns End Date Upto: ", dataset2.tail(1).index.item())
# Assuming X_train and X_test are your DataFrames
start_date = pd.to_datetime(dataset2.head(1).index.item())
end_date = pd.to_datetime(dataset2.tail(1).index.item())
# Calculate the difference
days_difference = (end_date - start_date).days
# Print or use the result as needed
print(f"Total number of days: {days_difference}")
print("---- ---- ---- ---- ---- ----")
print(f"Total Daily Average Returns for Strategy: {round(((strat_rets/days_difference)),3)}%")
print(f"Total Monthly Average Returns for Strategy: {round(((strat_rets/days_difference)*12),3)}%")
print(f"Total yearly Average Returns for Strategy: {round(((strat_rets/days_difference)*365),3)}%")
The code description is the same as in Step 6; the only difference is that we use dataset2, which covers the down-trend market data.
Results during Down-Trend:
Hold-On Returns (During Down-Trend):
- A significant loss of -77.1% if one entered at the start of the down-trend (the cycle high) and held the position to the end.
Benchmark Returns:
- The benchmark strategy achieved marginal returns of 3.3%, indicating a small positive performance during the analyzed down-trend period.
Strategy Returns:
- The implemented trading strategy generated returns of 13.2%, demonstrating its ability to capture positive returns even in a down-trend market.
Sharpe Ratios:
- The Sharpe ratios indicate the risk-adjusted performance.
- Sharpe Ratio for Hold-On: -0.22
- Sharpe Ratio for Benchmark: -0.02
- Sharpe Ratio for Strategy: 0.09
- The strategy’s positive Sharpe ratio suggests a better risk-adjusted performance compared to the hold-on approach.
Total Duration:
- The analysis covered a total of 376 days during the down-trend.
Average Daily Returns for Strategy:
- The strategy yielded an average daily return of 0.035%.
Average Monthly Returns for Strategy:
- Monthly returns averaged at 0.421%.
Average Yearly Returns for Strategy:
- The strategy showed an average yearly return of 12.814%.
Conclusion:
Performance Evaluation:
- The strategy demonstrated resilience during the down-trend, outperforming both the hold-on approach and the benchmark in terms of returns.
Risk-Adjusted Performance:
- The positive Sharpe ratio for the strategy indicates a favorable risk-adjusted performance, suggesting that the strategy achieved returns while managing risk effectively.
Market Conditions Impact:
- The results highlight the strategy’s adaptability to different market conditions, showcasing its potential to generate positive returns even in challenging down-trend phases.
Overall Assessment:
- The strategy’s ability to provide positive returns during both up-trend and down-trend periods suggests its potential as a robust and versatile trading approach.
Step 13: Plotting the Results During the Down-Trend Market
# Plot Equity Curves
fig, ax = plt.subplots(figsize=(30, 15))
# Plot Hold-on Strategy
ax.plot(df_main.index, df_main["holdon_prod_exp"] * 100, color="red", label="Hold-on Strategy")
# Plot Benchmark Strategy
ax.plot(df_main.index, df_main["bench_prod_exp"] * 100, color="blue", label="Benchmark Strategy")
# Plot Combined Strategy
ax.plot(df_main.index, df_main["strat_prod_exp"] * 100, color="green", label="Combined Strategy")
# Label the x-axis and y-axis
ax.set_xlabel('Time')
ax.set_ylabel('Profit %')
# Add a legend
ax.legend()
# Show the plot
plt.show()
The strategy’s ability to provide positive returns during both up-trend and down-trend periods suggests its potential as a robust and versatile trading approach.
These findings contribute to a comprehensive understanding of the strategy’s performance across different market scenarios, enhancing its potential as a viable investment approach.
Step 14: Testing DURING THE WHOLE MARKET PERIOD (Including Sideways)
# Make Prediction on Test Data
df_main = dataset.copy()
df_main
# Add MA Signals
df_main.loc[df_main["trend"] > 0, "MA_Signal"] = 1
df_main.loc[df_main["trend"] <= 0, "MA_Signal"] = 0
# Add HMM Signals
favourable_states = [0,1,3]
hmm_values = df_main["HMM"].values
hmm_values = [1 if x in favourable_states else 0 for x in hmm_values]
df_main["HMM_Signal"] = hmm_values
# Add Combined Signal
df_main["Main_Signal"] = 0
df_main.loc[(df_main["MA_Signal"] == 1) & (df_main["HMM_Signal"] == 1), "Main_Signal"] = 1
df_main["Main_Signal"] = df_main["Main_Signal"].shift(1)
df_main["MA_Signal"] = df_main["MA_Signal"].shift(1)
# Hold-On Returns
df_main["lrets_holdon"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["holdon_prod"] = df_main["lrets_holdon"].cumsum()
df_main["holdon_prod_exp"] = np.exp(df_main["holdon_prod"]) - 1
# Benchmark Returns
# df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1))
df_main["lrets_bench"] = np.log(df_main["Adj Close"] / df_main["Adj Close"].shift(1)) * df_main["MA_Signal"]
df_main["bench_prod"] = df_main["lrets_bench"].cumsum()
df_main["bench_prod_exp"] = np.exp(df_main["bench_prod"]) - 1
# Strategy Returns
df_main["lrets_strat"] = np.log(df_main["Open"].shift(-1) / df_main["Open"]) * df_main["Main_Signal"]
df_main["lrets_prod"] = df_main["lrets_strat"].cumsum()
df_main["strat_prod_exp"] = np.exp(df_main["lrets_prod"]) - 1
# Review Results Table
df_main.dropna(inplace=True)
df_main.tail()
# Sharpe Ratio Function
def sharpe_ratio(returns_series):
    N = 365
    NSQRT = np.sqrt(N)
    rf = 0.01 / NSQRT
    mean = returns_series.mean() * N
    sigma = returns_series.std() * NSQRT
    sharpe_ratio = round((mean - rf) / sigma, 2)
    return sharpe_ratio
# Metrics
bench_rets = round(df_main["bench_prod_exp"].values[-1] * 100, 1)
strat_rets = round(df_main["strat_prod_exp"].values[-1] * 100, 1)
holdon_rets = round(df_main["holdon_prod_exp"].values[-1] * 100, 1)
bench_sharpe = sharpe_ratio(df_main["lrets_bench"].values)
strat_sharpe = sharpe_ratio(df_main["lrets_strat"].values)
holdon_sharpe = sharpe_ratio(df_main["lrets_holdon"].values)
# Print Metrics
print("DURING THE WHOLE TESTING MARKET PERIOD")
print("---- ---- ---- ---- ---- ----")
print(f"Returns Hold-on (If you only entered at Start Day price of the asset during WHOLE MARKET CYCLE): {holdon_rets}%")
print(f"Returns Benchmark: {bench_rets}%")
print(f"Returns Strategy: {strat_rets}%")
print("---- ---- ---- ---- ---- ----")
print(f"Sharpe Hold-on: {holdon_sharpe}")
print(f"Sharpe Benchmark: {bench_sharpe}")
print(f"Sharpe Strategy: {strat_sharpe}")
print("---- ---- ---- ---- ---- ----")
print("Returns start Date From: ", dataset.head(1).index.item())
print("Returns End Date Upto: ", dataset.tail(1).index.item())
# Assuming X_train and X_test are your DataFrames
start_date = pd.to_datetime(dataset.head(1).index.item())
end_date = pd.to_datetime(dataset.tail(1).index.item())
# Calculate the difference
days_difference = (end_date - start_date).days
# Print or use the result as needed
print(f"Total number of days: {days_difference}")
print("---- ---- ---- ---- ---- ----")
print(f"Total Daily Average Returns for Strategy: {round(((strat_rets/days_difference)),3)}%")
print(f"Total Monthly Average Returns for Strategy: {round(((strat_rets/days_difference)*12),3)}%")
print(f"Total yearly Average Returns for Strategy: {round(((strat_rets/days_difference)*365),3)}%")
The description is the same as in Step 6; here we use the dataset that contains the whole market cycle. Although I also wrote code to shortlist a sideways dataset within 1.5 standard deviations (STD) of the mean of the whole data, it produced roughly the same number of days as the full market period, probably because the values sit close to one another. So we drew the same conclusion for both from our findings (it likely needs a finer division; next time we should try 1 STD instead of 1.5 STD).
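One reason the 1.5 STD filter returned essentially the whole period is that df_main[mask] keeps every candle inside the band, not just one unbroken stretch. A hedged sketch (using the mask and df_main as they were in Step 9; an interpretation of "longest continuous dataset", not the original code) for isolating the single longest contiguous run inside the band:
in_band = mask.astype(int)
run_id = (in_band != in_band.shift()).cumsum()            # new id each time the mask flips
runs = df_main[mask].groupby(run_id[mask])                # contiguous in-band segments
longest_run = max(runs, key=lambda kv: len(kv[1]))[1]     # keep only the longest segment
print(len(longest_run), longest_run.index[0], longest_run.index[-1])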
Overall Results Summary:
Hold-On Returns (Throughout the Testing Period):
- Gained a modest 1.7% return if one entered the market at the start and held the position.
Benchmark Returns:
- The benchmark strategy achieved substantial returns of 59.1%.
Strategy Returns:
- The implemented trading strategy closely mirrored the benchmark with returns of 59.2%, indicating its effectiveness in capturing market trends.
Sharpe Ratios:
- The Sharpe ratios provide insights into risk-adjusted performance.
- Sharpe Ratio for Hold-On: -0.01
- Sharpe Ratio for Benchmark: 0.09
- Sharpe Ratio for Strategy: 0.09
- The strategy’s positive Sharpe ratio signifies a favorable risk-adjusted performance compared to the hold-on approach.
Total Duration:
- The analysis covered a total of 1024 days throughout the testing period.
Average Daily Returns for Strategy:
- The strategy exhibited an average daily return of 0.058%.
Average Monthly Returns for Strategy:
- Monthly returns averaged at 0.694%.
Average Yearly Returns for Strategy:
- The strategy demonstrated an average yearly return of 21.102%.
Conclusion:
Performance Overview:
- The trading strategy showcased strong performance, closely matching the benchmark returns throughout the entire testing period.
Risk-Adjusted Performance:
- The positive Sharpe ratio indicates that the strategy achieved returns while effectively managing risk, outperforming the hold-on approach in terms of risk-adjusted performance.
Market Adaptability:
- The strategy’s ability to consistently generate positive returns suggests its adaptability to varying market conditions, contributing to its robustness.
Investment Consideration:
- The strategy’s competitive performance in comparison to the benchmark positions it as a potential investment approach. Further fine-tuning and optimization may enhance its effectiveness.
Overall Assessment:
- The strategy’s ability to deliver positive and competitive returns across the entire testing period underscores its potential as a viable and resilient trading approach. Further analysis and refinement can contribute to optimizing its performance and applicability in different market scenarios.
Step 15: Plotting the Results for the Whole Market Cycle
# Plot Equity Curves
fig, ax = plt.subplots(figsize=(30, 15))
# Plot Hold-on Strategy
ax.plot(df_main.index, df_main["holdon_prod_exp"] * 100, color="red", label="Hold-on Strategy")
# Plot Benchmark Strategy
ax.plot(df_main.index, df_main["bench_prod_exp"] * 100, color="blue", label="Benchmark Strategy")
# Plot Combined Strategy
ax.plot(df_main.index, df_main["strat_prod_exp"] * 100, color="green", label="Combined Strategy")
# Label the x-axis and y-axis
ax.set_xlabel('Time')
ax.set_ylabel('Profit %')
# Add a legend
ax.legend()
# Show the plot
plt.show()
You can find the whole code here — https://patreon.com/pppicasso
Conclusion of HMM Code Integration and Overall Strategy Evolution:
1. Initial Strategy Development:
Freqtrade Strategy:
- Started with a traditional algorithmic trading strategy using Freqtrade.
- Conducted backtesting and forward testing to evaluate performance.
- “The 8787%+ ROI Algo Strategy Unveiled for Crypto Futures! Revolutionized With Famous RSI, MACD, Bollinger Bands, ADX, EMA” — Link .
- We ran live trading in dry-run mode for 7 days and shared the details in another article.
- “Freqtrade Revealed: 7-Day Journey in Algorithmic Trading for Crypto Futures Market” — Link
- You can find the whole code here — https://patreon.com/pppicasso
2. Introduction of ML Techniques:
Algorithmic Strategies:
- Implemented individual strategies based on RSI, MACD, and Bollinger Bands.
- Conducted hyperparameter optimization to enhance strategy performance.
- “How I achieved 3000+% Profit in Backtesting for Various Algorithmic Trading Bots and how you can do the same for your Trading Strategies — Using Python Code” — Link
Clustering with KNN:
- Utilized K-Nearest Neighbors (KNN) to cluster similar-volatility crypto assets.
- Improved asset selection for specific strategies.
- “Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -1 (K-Nearest Neighbors)” — Link
3. Trend Identification with HMM:
- Hidden Markov Model (HMM):
- Integrated HMM to identify trending markets favorable for the strategy.
- Analyzed results during up-trends and down-trends.
4. Future Plans:
Supervised Learning with XGBoost:
- Plan to develop and test the strategy using XGBoost for supervised learning.
Deep Learning Techniques:
- Experimenting with RNN, CNN for further strategy enhancement.
Reinforcement Learning:
- Aspire to integrate reinforcement learning for automated trading strategy development, and to develop a FreqAI strategy.
5. Overall Assessment:
Adaptability and Robustness:
- The strategy evolution showcases adaptability and robustness.
Multi-Technique Integration:
- Combining traditional algorithms with ML and deep learning techniques enhances strategy versatility.
Continuous Improvement:
- Ongoing exploration of advanced techniques for continuous improvement.
This framework, once established, becomes a valuable tool in any trader’s arsenal, providing a methodical approach to enhancing trading performance.
Suggestions for Learning:
Books:
- “Python for Finance” by Yves Hilpisch
- “Machine Learning Yearning” by Andrew Ng
- “Deep Learning” by Ian Goodfellow
Courses:
- Coursera’s “Machine Learning” by Andrew Ng
- Udacity’s “Deep Learning Nanodegree”
Resources:
- Kaggle for real-world datasets and competitions.
- Towards Data Science on Medium for insightful articles.
Financial Analysis:
- “Quantitative Financial Analytics with Python” on edX.
- “Financial Markets” on Coursera by Yale University.
Programming Practice:
- LeetCode and HackerRank for general programming challenges.
- GitHub repositories with open-source finance and machine learning projects.
These resources will provide a comprehensive foundation for understanding the technical aspects of algo trading and the application of Python in finance. Additionally, participating in online forums and communities such as Stack Overflow, GitHub, and Reddit’s r/algotrading can offer practical insights and peer support.
Thank you, Readers.
I hope you have found this article on algorithmic strategy informative and helpful. As a creator, I am dedicated to providing valuable insights and analysis on cryptocurrency, the stock market, and other asset management topics.
If you have enjoyed this article and would like to support my ongoing efforts, I would be honored to have you as a member of my Patreon community. As a member, you will have access to exclusive content, early access to new analysis, and the opportunity to be a part of shaping the direction of my research.
Membership starts at just $10, and you can choose to contribute on a bi-monthly basis. Your support will help me to continue to produce high-quality content and bring you the latest insights on financial analytics.
Patreon — https://patreon.com/pppicasso
Regards,
Puranam Pradeep Picasso
Linkedin — https://www.linkedin.com/in/puranampradeeppicasso/
Patreon — https://patreon.com/pppicasso
Facebook — https://www.facebook.com/puranam.p.picasso/
Twitter — https://twitter.com/picasso_999