A Kalman Filter based High Frequency Trading Strategy
Teammates
Norman Yeo - nyeo2@illinois.edu (Team Leader)
- Norman is a graduating Bachelor's Student at the University of Illinois at Urbana-Champaign in the Engineering Department majoring in Computer Science. Norman is graduating in May 2023. Norman is interested in the areas of low latency computing and cybersecurity.
King Chak Ho - kch6@illinois.edu
- King is a graduating Master's student at the University of Illinois at Urbana-Champaign in the Mathematics Department majoring in Applied Mathematics. King is graduating in Dec 2022. King received his B.S. in Mathematics with a Specialization in Computing from the University of California, Los Angeles. King has broad interests and exposure in areas of both mathematics and computer science, such as probability, machine learning, and algorithms.
- Feel free to reach me at kch6@illinois.edu or my LinkedIn profile: https://www.linkedin.com/in/king-chak-ho/
Yipu Jin - Yipujin2@illinois.edu
- Yipu is a graduating Master's student at the University of Illinois at Urbana-Champaign in the Industrial & Enterprise Systems Engineering Department majoring in Financial Engineering. Yipu is graduating in Dec. 2022.
- Yipu has experience in quantitative research and data analysis: https://www.linkedin.com/in/yipujin/
Yuhao Wang - yuhaow8@illinois.edu
- Yuhao is a graduating Master's student at the University of Illinois at Urbana-Champaign in the Industrial & Enterprise Systems Engineering Department majoring in Financial Engineering. He will be graduating in Dec. 2022.
- He has experience in mortgage research and trading strategy design; feel free to reach him at yuhaow8@illinois.edu.
Project Description:
This is the final report for the semester-long project for "FIN556 - Algorithmic Market Microstructure" under Prof. David Lariviere.
We break our project down into four sections: market data, Kalman Filter research, strategy development, and analysis.
-
Market Data:
The intraday market data source for this project is IEX historical data.
-
Kalman Filter Research:
In the early phase of the project, we learned about and researched different applications of the Kalman Filter in finance, and then we used Python and the pykalman package to build a Kalman Filter using the IEX intraday data. We then visualized the results of the Kalman Filter and tried to develop a trading strategy from those results. Once we developed the strategy logic, we implemented the trading strategy in Strategy Studio.
-
Strategy Development:
We used Strategy Studio to develop and backtest our trading strategy. Strategy Studio is proprietary software from RCM-X used for trading strategy development and testing. We wrote our trading strategy by implementing Strategy Studio's Strategy interface. Then, we backtested our strategy in Strategy Studio using the IEX historical DEEP data feed. The backtest results, which included a profit and loss csv file, were used to evaluate the strategy's performance.
-
Analysis:
We wrote a Python script to interpret the backtest csv file generated by Strategy Studio. We analyzed the strategy performance by generating visualizations and evaluating different metrics such as cumulative return percentage, Sharpe ratio, Sortino ratio, and maximum drawdown.
Technologies:
Languages
Python
- Python was used for strategy research before implementation in C++. It is convenient for handling market data inputs and can provide visualizations that clearly show the backtest results.
C++
- C++ was used in Strategy Studio for the data parser and for backtesting. It is a high-performance language and is ideal for strategy implementation.
Databases
IEX daily data
- IEX daily data was used to backtest our strategy. Since it is intraday data, we could try to detect more trading signals. We used Strategy Studio to capture intraday trading signals and implement the strategy.
Automated testing frameworks
Gitlab
- Gitlab was used for version control so that we could track changes to the project.
Virtual Box
- It was used to set up a virtual machine so that we could run Strategy Studio.
Strategy Studio
- It is C++-based software for strategy development. We used it to implement the strategy and obtain the backtest results.
Packages
This part will list packages that we used in Python.
- numpy
- pandas
- matplotlib
- plotly
- pykalman
Numpy and Pandas were used for matrix calculations and data manipulation because of their high efficiency. Matplotlib and Plotly were used to generate trading signal plots and backtest result plots. Pykalman was used to generate predictions of stock prices; the inputs were bid and ask prices, and the filter parameters needed to be set.
Components:
Git Repo Layout:
├── backtest-analysis
│ ├── backtest-analysis.ipynb
│ ├── backtest-data
│ │ ├── ...
│ └── fig
│ └── pnl-percentage-graph.jpg
├── backtest.sh
├── images
│ └── OldKF
│ ├── ...
├── kalman_research
│ ├── kalman_order_book.ipynb
│ ├── KF_optimized.ipynb
│ ├── kalman-crossover-strategy.ipynb
│ └── data
│ └── 20211105_book_updates.csv
├── parse.sh
├── README.md
└── strategy
├── kalman.cpp
├── kalman.hpp
├── Kalman.so
├── kalmanstrategy.cpp
├── kalmanstrategy.h
└── Makefile
Instructions for using the project:
To run our compiled demo, follow these steps:
- Download and install VirtualBox and Vagrant.
- Clone the professor's Vagrant setup (https://gitlab.engr.illinois.edu/shared_code/strategystudioubuntu2004).
- `cd strategystudioubuntu2004` and clone this repo.
- Using the recursively cloned `iexdownloadparser` repo, download and parse market data for the desired backtesting period and symbols. Details on using the IEX downloader/parser can be found in the README.md under the project root directory.
- Boot up the Vagrant machine, `cd /vagrant/fin556/strategy`, and run `make`.
- Run `../backtest.sh` to run the backtest. Modify the script for the desired backtesting period and symbols.
- Use the results stored in `~/ss/bt/backtesting-results-cra` and the `backtest-analysis/backtest-analysis.ipynb` notebook to analyze the results.
Detailed Description
A General Kalman Filter Concept:
The Kalman filter keeps track of the estimated state of the system and the variance or uncertainty of that estimate. Its real power is its ability to estimate system parameters that cannot be measured or observed accurately. In this case, we have a lot of hidden parameters in modeling a stock's price movement.
A Kalman Filter can be separated into two parts: predict and update.
Predict:
Predicted (a priori) state estimate. The process model defines the evolution of the state from time k−1 to time k as:
\hat{\boldsymbol{x}}_{k|k-1} = \boldsymbol{F}_{k}\,\hat{\boldsymbol{x}}_{k-1|k-1}
where F is the state transition matrix applied to the previous state vector.
The process model is paired with the measurement model that describes the relationship between the state and the measurement at the current time step k:
\boldsymbol{z}_{k} = \boldsymbol{H}_{k}\,\boldsymbol{x}_{k} + \boldsymbol{v}_{k}
Predicted (a priori) estimate covariance:
\boldsymbol{P}_{k|k-1} = \boldsymbol{F}_{k}\,\boldsymbol{P}_{k-1|k-1}\,\boldsymbol{F}_{k}^{T} + \boldsymbol{Q}_{k}
Update:
Measurement pre-fit residual:
\tilde{\boldsymbol{y}}_{k} = \boldsymbol{z}_{k} - \boldsymbol{H}_{k}\,\hat{\boldsymbol{x}}_{k|k-1}
Innovation (or pre-fit residual) covariance:
\boldsymbol{S}_{k} = \boldsymbol{H}_{k}\,\boldsymbol{P}_{k|k-1}\,\boldsymbol{H}_{k}^{T} + \boldsymbol{R}_{k}
Optimal Kalman gain:
\boldsymbol{K}_{k} = \boldsymbol{P}_{k|k-1}\,\boldsymbol{H}_{k}^{T}\,\boldsymbol{S}_{k}^{-1}
Updated (a posteriori) state estimate:
\hat{\boldsymbol{x}}_{k|k} = \hat{\boldsymbol{x}}_{k|k-1} + \boldsymbol{K}_{k}\,\tilde{\boldsymbol{y}}_{k}
Updated (a posteriori) estimate covariance:
\boldsymbol{P}_{k|k} = (\boldsymbol{I} - \boldsymbol{K}_{k}\boldsymbol{H}_{k})\,\boldsymbol{P}_{k|k-1}
Measurement post-fit residual:
\tilde{\boldsymbol{y}}_{k|k} = \boldsymbol{z}_{k} - \boldsymbol{H}_{k}\,\hat{\boldsymbol{x}}_{k|k}
Source: https://en.wikipedia.org/wiki/Kalman_filter
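For reference, below is a minimal NumPy sketch of the predict/update recursion above. It is illustrative only and is not the code used in our Strategy Studio implementation.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Predict (a priori) state estimate and covariance."""
    x_pred = F @ x                       # x_{k|k-1} = F x_{k-1|k-1}
    P_pred = F @ P @ F.T + Q             # P_{k|k-1} = F P F^T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update (a posteriori) state estimate and covariance with measurement z."""
    y = z - H @ x_pred                   # measurement pre-fit residual
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # optimal Kalman gain
    x_new = x_pred + K @ y               # updated state estimate
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred  # updated covariance
    return x_new, P_new
```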
Initial Kalman Filter Strategy and the Python Backtest Result
In our initial strategy, we used AAPL data on 2021/11/05 to do the backtest. We collected the price and size of the first three bid and ask levels from the IEX intraday data. Then we calculated the average bid and ask prices (weighted by order size) as our input for the Kalman Filter. We first used the pykalman package in Python to test the initial strategy. The transition matrices and covariance matrices were set to identity matrices. We fed the average bid and ask prices into pykalman to get the state mean at each time step. Then we calculated the average of the Kalman bid price and ask price as the final prediction for the stock price. The averaged predicted prices are stored in the KF_MID column of the dataframe. If we look at the KF_MID prices and the original bid and ask prices, we can see some overlaps which may indicate trading signals.
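A minimal sketch of that setup with pykalman is shown below; the input file and the AVG_BID/AVG_ASK column names are illustrative placeholders for the size-weighted averages described above.

```python
import numpy as np
import pandas as pd
from pykalman import KalmanFilter

# Illustrative input: size-weighted average bid/ask prices built from the
# first three book levels (file and column names are placeholders).
df = pd.read_csv("avg_bid_ask.csv")
observations = df[["AVG_BID", "AVG_ASK"]].values

kf = KalmanFilter(
    transition_matrices=np.eye(2),    # identity transition matrix
    observation_matrices=np.eye(2),   # identity observation matrix
    transition_covariance=np.eye(2),  # identity covariance matrices
    observation_covariance=np.eye(2),
    initial_state_mean=observations[0],
)

state_means, state_covs = kf.filter(observations)

# KF_MID: the average of the filtered bid and ask estimates at each timestamp.
df["KF_MID"] = state_means.mean(axis=1)
```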
Then we came up with the initial trading strategy, which is based on comparing KF_MID with the current best bid and ask prices. Since it takes some time for the Kalman Filter to converge, we started backtesting from index 40 of the data. Specifically, our initial trading strategy is the following:
- If KF_MID is larger than the current best ask, we go long.
- If KF_MID is smaller than the current best bid, we go short.
We also assumed trading costs of 0.2% and checked whether there would be any return. For the long-only strategy, we observed over 200 signals within one afternoon, and the net value growth was 8.9% by the end of the trading day. The graph below shows the net value growth.
In order to improve the prediction accuracy, we then tried another implementation of Kalman Filter.
An Updated method of the initial Kalman Filter Strategy
In order to calibrate the Kalman Filter, we updated our algorithm and matrices. In this updated version of the Kalman Filter, we use 6 state variables in the state vector \boldsymbol{x} (the first three bid price levels and the first three ask price levels).
For the measurement, we use 3 measurement variables in the measurement vector \boldsymbol{z}.
We use the first 500 order book updates to calculate some of the parameters for this Kalman Filter:
- \Delta t: the average time interval over the first 500 points
- \mu: the average price for a certain price level (e.g. Bid Price 1) over the first 500 points
- \sigma: the standard deviation for a certain price level (e.g. Bid Price 1) over the first 500 points
- covariance: the covariance between price levels (e.g. Bid Price 1 and Bid Price 2) over the first 500 points (a sketch of this warm-up computation is shown below)
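Below is a pandas sketch of this warm-up computation. The file path comes from the repository layout, but the column names are illustrative assumptions about the book update format.

```python
import pandas as pd

book = pd.read_csv("kalman_research/data/20211105_book_updates.csv")
warmup = book.head(500)

# Illustrative column names; the real file layout may differ.
price_cols = ["BID_PRICE_1", "BID_PRICE_2", "BID_PRICE_3",
              "ASK_PRICE_1", "ASK_PRICE_2", "ASK_PRICE_3"]

dt    = warmup["TIMESTAMP"].diff().mean()  # average time interval (assumes a numeric timestamp column)
mu    = warmup[price_cols].mean()          # average price for each level
sigma = warmup[price_cols].std()           # standard deviation for each level
cov   = warmup[price_cols].cov()           # 6x6 covariance matrix across the levels
```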
We also plotted the bid/ask spread over time for visualization purposes.
The new state extrapolation equation, predicting from time n to time n+1, can be written as:
\hat{\boldsymbol{x}}_{n+1,n} = \boldsymbol{F}\,\hat{\boldsymbol{x}}_{n,n}
The system state \boldsymbol{x}_{n} contains the six bid and ask price levels, so in matrix form the extrapolated state for time n+1 is described by this system of equations, where the transition matrix \boldsymbol{F} is built under the assumption that each price level follows a Geometric Brownian Motion.
The estimate uncertainty in matrix form is the covariance matrix P generated from the first 500 data points:
\boldsymbol{P}=\left[ \begin{matrix} \sigma_{x_{b1}}^{2} & \sigma_{x_{b1}x_{b2}}^{2} & \sigma_{x_{b1}x_{b3}}^{2} & \sigma_{x_{b1}x_{a1}}^{2} & \sigma_{x_{b1}x_{a2}}^{2} & \sigma_{x_{b1}x_{a3}}^{2} \\ \sigma_{x_{b2}x_{b1}}^{2} & \sigma_{x_{b2}}^{2} & \sigma_{x_{b2}x_{b3}}^{2} & \sigma_{x_{b2}x_{a1}}^{2} & \sigma_{x_{b2}x_{a2}}^{2} & \sigma_{x_{b2}x_{a3}}^{2} \\ \sigma_{x_{b3}x_{b1}}^{2} & \sigma_{x_{b3}x_{b2}}^{2} & \sigma_{x_{b3}}^{2} & \sigma_{x_{b3}x_{a1}}^{2} & \sigma_{x_{b3}x_{a2}}^{2} & \sigma_{x_{b3}x_{a3}}^{2} \\ \sigma_{x_{a1}x_{b1}}^{2} & \sigma_{x_{a1}x_{b2}}^{2} & \sigma_{x_{a1}x_{b3}}^{2} & \sigma_{x_{a1}}^{2} & \sigma_{x_{a1}x_{a2}}^{2} & \sigma_{x_{a1}x_{a3}}^{2} \\ \sigma_{x_{a2}x_{b1}}^{2} & \sigma_{x_{a2}x_{b2}}^{2} & \sigma_{x_{a2}x_{b3}}^{2} & \sigma_{x_{a2}x_{a1}}^{2} & \sigma_{x_{a2}}^{2} & \sigma_{x_{a2}x_{a3}}^{2} \\ \sigma_{x_{a3}x_{b1}}^{2} & \sigma_{x_{a3}x_{b2}}^{2} & \sigma_{x_{a3}x_{b3}}^{2} & \sigma_{x_{a3}x_{a1}}^{2} & \sigma_{x_{a3}x_{a2}}^{2} & \sigma_{x_{a3}}^{2} \\ \end{matrix} \right]
We assume Q is an identity matrix.
The observation matrix H is a 3×6 matrix that maps the six state price levels to the three measurement variables.
The measurement covariance matrix R is a covariance matrix obtained from the first 500 points of the measurements.
This strategy essentially means that we are using the first 500 points to predict the true ask, bid, and mid prices. For the Kalman Filter prediction, we got a very implausible result: the predicted mid price goes to $60,000 while the real price is around $150. We believe there are several reasons behind this issue:
- The biggest issue is that we are using a linear model (the Kalman Filter) to predict a non-linear process (the stock price). Transforming non-linear data into a linear system causes information loss. To solve this issue, we would need better F and H matrices to approximate the stocks.
- Another issue is that we are assuming that the first 500 points can represent the next 500, which is not a very reasonable assumption in high frequency trading. A solution could be to use some of the hidden parameters, such as \mu and \sigma, as state variables in our Kalman Filter.
- It is also possible that, since we are monitoring 6 price levels, the Geometric Brownian Motion assumption does not hold. We could add some boundaries or clean up the data to see whether the result improves.
Overall, this research did not show a good result, so we proposed another strategy that is implemented in Strategy Studio.
Strategy Studio Implementation and Result:
The initial Kalman Filter strategy mentioned above depends on bid and ask data. When we tried to implement it in Strategy Studio, we discovered in the last two weeks that the OnQuote callback did not work and only the OnTrade callback worked. As a result, we had to develop another strategy that works with trade data only.
We relied on the fact that the Kalman Filter is a more accurate smoothing and prediction algorithm than a moving average: it is adaptive, accounts for estimation errors, and adjusts its predictions using the information learned in the previous step.
To show that the Kalman Filter is more adaptive than a moving average, in the figure below we used Yahoo Finance 1-minute Adjusted Close data to fit a Kalman Filter (orange line). We can see that the Kalman Filter line follows the candlesticks more closely than the 5-period Simple Moving Average (SMA) line (blue line). We also noticed that there are some crossovers between those two lines, which may indicate trading signals.
Then we came up with the final strategy, which is based on the assumption that the Kalman Filter prediction is a leading indicator relative to a Simple Moving Average. In this final strategy, we fed the two-dimensional IEX trade data (trade price and trade size) into the Kalman Filter, and it outputs the predicted trade price and trade size at every timestamp. On the other hand, we took the last 100 trade prices to calculate a simple moving average (SMA) value. The trading rules are the following (a research-style sketch of the crossover logic follows the rules):
- If Kalman Filter value crosses above the SMA value, then we will buy with quantity equals to the predicted trade size.
- If the Kalman Filter value dips below the SMA value, then we will sell the entire position.
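Below is a research-style Python sketch of this crossover logic. The actual strategy is implemented in C++ in strategy/kalmanstrategy.cpp; the input file and the PRICE/SIZE column names here are illustrative.

```python
import numpy as np
import pandas as pd
from pykalman import KalmanFilter

# Illustrative input: IEX trade prints with trade price and trade size.
trades = pd.read_csv("trades.csv")       # placeholder file name
obs = trades[["PRICE", "SIZE"]].values   # placeholder column names

kf = KalmanFilter(transition_matrices=np.eye(2),
                  observation_matrices=np.eye(2),
                  initial_state_mean=obs[0])
state_means, _ = kf.filter(obs)

kf_price = pd.Series(state_means[:, 0])
kf_size = pd.Series(state_means[:, 1])
sma_100 = trades["PRICE"].rolling(100).mean()

position = 0
for t in range(100, len(trades)):
    crossed_up = kf_price.iloc[t] > sma_100.iloc[t] and kf_price.iloc[t - 1] <= sma_100.iloc[t - 1]
    crossed_down = kf_price.iloc[t] < sma_100.iloc[t] and kf_price.iloc[t - 1] >= sma_100.iloc[t - 1]
    if crossed_up and position == 0:
        position = max(int(kf_size.iloc[t]), 1)  # buy the predicted trade size
    elif crossed_down and position > 0:
        position = 0                             # sell the entire position
```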
The Strategy Studio Backtest Result Analysis
Due to limited hard drive storage, we were only able to backtest our strategy from 2022-Mar-02 to 2022-Mar-08. After Strategy Studio finishes backtesting, it outputs three files: fills, orders, and PnL (Profit and Loss). We focus mainly on the PnL csv file, which has three columns: Strategy Name, Time, and Cumulative PnL.
We measure the strategy performance by the following metrics (a sketch of how they are computed follows the list):
- Cumulative PnL Percentage
- Sharpe Ratio (which measures the performance of an investment by taking risk into account)
- Sortino Ratio (A return/ risk measure that only penalizes downside volatility)
Although the Sharpe Ratio is the most widely used return/risk measure, it penalizes upside volatility (which means that a huge return at some data point can lower the Sharpe Ratio). The Sortino Ratio is a similar metric, but it only penalizes downside volatility. So, we can evaluate our strategy better by looking at both metrics.
- Maximum Drawdown
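Below is a minimal pandas sketch of how these metrics can be computed from the PnL csv. The file name and the $1,000,000 capital base are illustrative assumptions, and the sqrt(N) scaling used for the Sharpe and Sortino ratios depends on the sampling frequency of the PnL series.

```python
import numpy as np
import pandas as pd

# The PnL csv exported by Strategy Studio has the columns
# "Strategy Name", "Time" and "Cumulative PnL".
pnl = pd.read_csv("backtest-analysis/backtest-data/pnl.csv", parse_dates=["Time"])
capital = 1_000_000.0                          # assumed starting capital

equity = capital + pnl["Cumulative PnL"]       # equity curve
returns = equity.pct_change().dropna()         # per-observation returns

cumulative_pct = (equity.iloc[-1] / capital - 1) * 100
sharpe = returns.mean() / returns.std() * np.sqrt(len(returns))
downside = returns[returns < 0]
sortino = returns.mean() / downside.std() * np.sqrt(len(returns))
max_drawdown = ((equity / equity.cummax()) - 1).min() * 100

print(cumulative_pct, sharpe, sortino, max_drawdown)
```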
Here is the Strategy Studio Backtest Result of our strategy:
Analysis:
For the backtesting, we only traded the following stocks: DIA, AAPL, and TSLA, from 2022-Mar-02 to 2022-Mar-08. From the figures above, the strategy loses about 2.72%, has a 7.15% max drawdown, and has negative Sharpe and Sortino ratios. There are a few possible reasons for the poor performance of the strategy. Firstly, we did not implement a stop-loss feature, so losses can get out of control and result in a large drawdown. Secondly, the 100-trade simple moving average and the matrices in the Kalman Filter may not work well with each other, so some parameter tuning may be required to achieve the best result. Lastly, we are only backtesting with a few days of data and only three stocks, so the result may not fully reflect the actual performance of the strategy. In general, we are not confident in our result, and we believe the performance can be greatly improved by the following: perform hyperparameter tuning, implement a stop-loss, and backtest with more historical data and more stocks.
Postmortem Summary:
Norman Yeo
-
What did you specifically do individually for this project?
- I worked on most things technology related. This included setting up the environment, Vagrant, StrategyStudio Setup, creating backtesting scripts and git workflow.
- I worked on the C++ side of the project, including writing the entire strategy in C++. I worked with King to alter our strategy based on StrategyStudio's limitations.
-
What did you learn as a result of doing your project?
- I learned how to work with Strategy Studio and how to do research and backtest strategies in Python as well.
- I also learned the basics of managing and setting up a technical project.
-
If you had a time machine and could go back to the beginning, what would you have done differently?
- I would have set up the development environment for strategy studio much earlier to figure out the various quirks. For example, the only quote data we were able to get was trade executions.
- I would have improved my development environment to reduce the time needed to compile and backtest. This would have resulted in a shorter feedback loop and overall more time put into development. For example, I would have set up a proper IDE and more modular backtest scripts that I didn't have to update every time I wanted to run.
-
If you were to continue working on this project, what would you continue to do to improve it, how, and why?
- I would get the datasource running for quote data. I believe this is an issue not with the parser but with the StrategyStudio settings. This would allow us to write better informed strategies.
-
What advice do you offer to future students taking this course and working on their semester long project (besides “start earlier”... everyone ALWAYS says that). Providing detailed thoughtful advice to future students will be weighed heavily in evaluating your responses.
- Have a log of what is discussed every week during meetings
- Have each team member report what they did for the week, what they are going to do, and any potential blockers that other team members need to settle
King Chak Ho
-
What did you specifically do individually for this project?
-
I worked on the Kalman Filter research part in Python. I created the first Kalman Filter notebook that reads the IEX book update data, performs feature engineering on the different Bid and Ask Price levels, creates a simple Kalman Filter model from those data, and plots visualizations of the predicted Kalman Price with the average Bid and Ask Price.
-
Once we realized in the last two weeks that the Strategy Studio OnQuote callbacks did not work, I quickly suggested the crossover strategy that depends only on the OnTrade callbacks, and made a simple notebook that visualizes the interactions of both the simple moving average and the Kalman Filter. Also, I worked with Norman to discuss the details of the strategy: the logic behind it, its buying and selling rules, and the position size of each order.
-
For documentation, I wrote the Project Description, Components, and the Strategy Studio Implementation and Result parts.
-
-
What did you learn as a result of doing your project?
-
I learned a lot about the Kalman Filter by reading various research papers and textbooks about it. Most Kalman Filter research papers and textbooks are full of esoteric mathematical formulas, and as a result I learned a lot about the mathematics of the Kalman Filter algorithm. Moreover, I discovered a great book called "Kalman Filter Made Easy", which mainly focuses on the applications of the Kalman Filter, and it taught me how to implement and tune a Kalman Filter that suits a specific application.
-
I also learned a lot using git and gitlab as a tool to track and manage changes of the project.
-
-
If you had a time machine and could go back to the beginning, what would you have done differently?
- I would focus on developing the trading strategy using only the Strategy Studio OnTrade functions, because we discovered in the last two weeks that the Strategy Studio OnQuote and OnBar functions did not work, and we had to modify our strategy accordingly to work with the OnTrade function.
- I would spend more time reading more papers about the applications of Kalman Filter rather than trying to understand all of the mathematics derivations of Kalman Filter because it has been proven to work over the past 50 years.
- I would buy an external hard drive to download the IEX data as soon as I could, so that we could have a much longer-term and more accurate backtest result.
-
If you were to continue working on this project, what would you continue to do to improve it, how, and why?
- I would continue to experiment with different parameters of the strategy, which includes testing different simple or exponential moving averages, tuning different covariance matrices in the Kalman Filter implementation, and implementing a stop loss in the strategy.
- I would backtest the strategy using more historical data possibly more than one year of data.
-
What advice do you offer to future students taking this course and working on their semester long project (besides “start earlier”... everyone ALWAYS says that). Providing detailed thoughtful advice to future students will be weighed heavily in evaluating your responses.
- It is very important to have a clear and detailed task for every team member every week, since a vague task description can slow down the overall progress of the team project. Also, each team member should report their progress every week, which can facilitate the coordination of the project.
- If you are building a trading strategy, I would suggest allocating more time to implementing the strategy itself rather than spending too much time researching the best trading ideas. Failing fast and moving on is the nature of developing trading strategies.
Yipu Jin
-
What did you specifically do individually for this project?
-
I mostly worked on the Kalman Filter research part in Python. We were originally using a simple version of the Kalman Filter with identity matrices, and I added a 500 ms time delay to our prediction to mimic the time delay of orders.
-
I optimized our Kalman Filter logic by proposing a new Kalman Filter algorithm that takes the depth of the order book into consideration. With this new logic for setting up the matrices and parameters, I was able to approximately reduce a non-linear price movement to a linear system. I was also able to find a reasonable linear system setup for the new Kalman Filter, and I documented the new matrices in detail.
-
-
What did you learn as a result of doing your project?
-
I have learned a lot about Kalman Filter. Kalman Filter is a great linear system algorithm to estimate system parameters that can not be measured or observed with accuracy. From this project, I learned how to set up a Kalman Filter system and the mathematical meaning behind it.
-
I also learned that the most important thing in developing a strategy is its basic foundation and logic. For this Kalman Filter prediction algorithm, we should better understand the fundamentals behind it, such as why it is or is not good for a stock prediction strategy, and what are some good examples in finance where a Kalman Filter can be utilized.
-
I have also learned to utilize a lot of technology stacks such as Git, and high frequency trading frameworks such as Linux Operating System and Strategy Studio.
-
-
If you had a time machine and could go back to the beginning, what would you have done differently?
-
Validation. I would have tried to validate the fundamentals of our strategy. We spent a lot of time trying to optimize it instead of validating that the algorithm could indeed work in the market. For example, the Kalman Filter is not a great algorithm for non-linear systems. In this case, we could have used an Extended Kalman Filter to address the information loss caused by the non-linearity.
-
A backtest framework. Our research part is limited because of the lack of a good high frequency backtest framework. I would have written a more rigorous backtest system to more accurately reflect our strategy's behaviour in the market.
-
Better teamwork. I would have tried to set up a timeline for this strategy. There were multiple technical difficulties that our team encountered, such as acquiring the data for our trading strategy. Therefore, having a pre-determined schedule would really help in this case.
-
-
If you were to continue working on this project, what would you continue to do to improve it, how, and why?
-
If given more time, I would try to optimize the Kalman Filter further. Right now the biggest issue is that we are using a linear model (the Kalman Filter) to predict a non-linear process (the stock price). Transforming non-linear data into a linear system causes information loss. I would research better F and H matrices to approximate the stock's transitions. I would also explore a non-linear version of the Kalman Filter, the Extended Kalman Filter, to see if it can solve the information loss issue.
-
If I were to continue working on this project, I would like to revisit the assumption that the first 500 points can represent the next 500. It is not a very reasonable assumption to make in high frequency trading. A solution could be to use some of the hidden parameters, such as \mu and \sigma, as state variables in our Kalman Filter.
-
Another piece of future work would be to improve the stock price model. Right now I am simply assuming that all 6 prices follow GBM, which is an unreasonable assumption because of the correlations between these price levels. I would add some boundaries or clean up the data a bit to see if it is possible to improve it.
-
-
What advice do you offer to future students taking this course and working on their semester long project (besides “start earlier”... everyone ALWAYS says that). Providing detailed thoughtful advice to future students will be weighed heavily in evaluating your responses.
-
Explore more options and think thoroughly. A lot of times you get halfway through your project and then realize either that it is not going to work or that it is going to take too much time. If you spend time validating that an idea will work, the whole process and workflow will be smoother. For example, in a Kalman Filter project I would suggest spending time setting up the most basic matrices, such as the transition matrix and the observation matrix, and validating why a given setup is good or bad before diving into parameter tuning.
-
Meet multiple times every week and write the contribution reports. It is really helpful if every team member is on the same track and contributing. A project at this scale is difficult to coordinate otherwise.
-
Yuhao Wang
-
What did you specifically do individually for this project?
- For this project, I mainly worked on three tasks. The first was to backtest in Python as a reference for the later C++ implementation. I wrote the return calculation using a pandas dataframe and also made some net value plots with matplotlib to show the backtest results. The second task was to try to improve the accuracy of the Kalman prediction; I tried some matrix implementations to see if there were better results. The third was to try to change the strategy based on the data problem that we encountered. For the documentation, I wrote the Technology and Old Strategy Backtest parts.
-
What did you learn as a result of doing your project?
- First thing is that I learned about the strategy development process through this project. When we have an idea, we can start with a very simple strategy to test on. Then we can add more complicated things to try to improve the strategy performance. The second thing is that I learned about the calculation of risk measures and how Kalman Filter works. Lastly, I learned about how to use Gitlab to see what changes have been done to the project branches. This would be very useful for version control.
-
If you had a time machine and could go back to the beginning, what would you have done differently?
- When we discussed something in a meeting, we should have written it down in the Discord chat so that we would not forget what we had decided to do after the meeting. I also wish I had had more time to do more research on correlation matrices and to be more responsive to my teammates.
-
If you were to continue working on this project, what would you continue to do to improve it, how, and why?
-
I would like to try more methods to determine better correlation and noise matrices. The correlations should vary with time; the current matrices are predetermined and do not give a good estimate of future prices.
-
Also I may look into the noise distribution. It may not be gaussian distributed and we will need to find some other distributions to improve the accuracy.
-
-
What advice do you offer to future students taking this course and working on their semester long project (besides “start earlier”... everyone ALWAYS says that). Providing detailed thoughtful advice to future students will be weighed heavily in evaluating your responses.
-
Try to meet twice a week: once for assigning tasks and once for checking progress. Make sure that after each meeting everyone knows their to-do list; it is best to take notes in Discord during the meeting.
-
If you’ve found useful materials, don’t hesitate to put it on Discord. If you have trouble reading those, the team could be very helpful.
-