src.MMAR package
Submodules
src.MMAR.MMAR module
- class src.MMAR.MMAR.MMAR(price: Series, seed: int = 42, volume: Series | None = None, silent: bool = False)[source]
Bases:
object
- property H: float
The Hurst exponent
- Returns:
the Hurst exponent
- Return type:
float
- static adf_test(timeseries: Series | ndarray, conf_level: float = 0.05) bool [source]
Augmented Dickey-Fuller test. Wrapper around statsmodels.tsa.stattools.adfuller.
- Parameters:
timeseries (pd.Series | np.ndarray) – the series to analyze
conf_level (float, optional) – confidence level for p-value. Defaults to 0.05.
- Returns:
True if the series is strictly stationary
- Return type:
bool
- property alpha_min: float
Alpha min
The minimum allowable value for alpha
- Returns:
alpha min
- Return type:
float
- check_autocorrelation(conf_level: float = 0.05, lags: int = 10) None [source]
Test series for autocorrelation
- Parameters:
conf_level (float, optional) – confidence level. Defaults to 0.05.
lags (int, optional) – number of lags to consider. Defaults to 10.
- check_normality(conf_level: float = 0.05) None [source]
Test distribution for Normality assumption
- Parameters:
conf_level (float, optional) – confidence level. Defaults to 0.05.
- check_stationarity(conf_level: float = 0.05) None [source]
Check if the series is stationary
- Parameters:
conf_level (float, optional) – confidence level. Defaults to 0.05.
- See [statsmodels](https://www.statsmodels.org/dev/examples/notebooks/generated/stationarity_detrending_adf_kpss.html):
Case 1: Both tests conclude that the series is not stationary - The series is not stationary
Case 2: Both tests conclude that the series is stationary - The series is stationary
Case 3: KPSS indicates stationarity and ADF indicates non-stationarity - The series is trend stationary. Trend needs to be removed to make series strict stationary. The detrended series is checked for stationarity.
Case 4: KPSS indicates non-stationarity and ADF indicates stationarity - The series is difference stationary. Differencing is to be used to make series stationary. The differenced series is checked for stationarity.
- Returns:
None
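The four cases above can be sketched as a small decision function. The name `classify_stationarity` is hypothetical; it only combines the two booleans documented for `adf_test` and `kpss_test`:

```python
def classify_stationarity(adf_stationary: bool, kpss_stationary: bool) -> str:
    """Combine the ADF and KPSS outcomes into the four statsmodels cases.

    adf_stationary  -- ADF rejected its unit-root null
    kpss_stationary -- KPSS failed to reject its stationarity null
    """
    if adf_stationary and kpss_stationary:
        return "stationary"                         # Case 2
    if not adf_stationary and not kpss_stationary:
        return "not stationary"                     # Case 1
    if kpss_stationary:                             # KPSS yes, ADF no
        return "trend stationary: detrend"          # Case 3
    return "difference stationary: difference"      # Case 4

print(classify_stationarity(False, True))  # trend stationary: detrend
```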
- static compute_p(mul: ndarray[float, Any], S0: float, n: int = 30, num_sim: int = 10000, seed: int = 1968) ndarray[float, Any] [source]
Compute geometric MMAR using Numba
- Parameters:
mul (np.ndarray[float]) – the per-step multipliers \(\sqrt{b^{k} \cdot \theta_{k}(t)} \cdot \sigma\) (see get_MMAR_MC)
S0 (float) – initial price
n (int, optional) – number of steps. Defaults to 30.
num_sim (int, optional) – number of simulations. Defaults to 10_000.
seed (int, optional) – seed. Defaults to 1968.
\[S(t) = S_{0} e^{B_{H}[\theta(t)]}\]
- Returns:
the simulated MMAR series
- Return type:
np.ndarray[float]
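A NumPy-only sketch of such a simulation, without the Numba acceleration. The increment model (Gaussian shocks scaled by `mul`, cumulatively summed in log space) is an assumption based on the formulas documented here:

```python
import numpy as np

def compute_p(mul, S0, n=30, num_sim=10_000, seed=1968):
    """Geometric MMAR sketch: increments mul[t] * Z with Z ~ N(0, 1),
    cumulatively summed and exponentiated, so S(t) = S0 * exp(cumsum)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((num_sim, n))       # one row per simulated path
    log_paths = np.cumsum(mul[:n] * z, axis=1)  # log-price increments
    return S0 * np.exp(log_paths)

paths = compute_p(np.full(30, 0.01), S0=100.0, num_sim=500)
print(paths.shape)  # (500, 30)
```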
- static divisors(n: int) ndarray[int, Any] [source]
Compute divisors
- Parameters:
n (int) – number to compute divisors
Note
The choice for the scale factor is arbitrary and different scales will give different outcomes. Using the divisors is time agnostic*. An alternative might be to choose a different range of scales according to the time horizon of the series. E.g., for daily data: [1, 5, 10, 15, 21, 63, 126, 252], that is daily, weekly, bi-weekly, three-weekly, month, quarter, half-year, year.
- Returns:
divisors
- Return type:
np.ndarray[int]
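A straightforward sketch of the divisor computation (trial division; the package's implementation may differ):

```python
import numpy as np

def divisors(n: int) -> np.ndarray:
    """Return all divisors of n in increasing order."""
    return np.array([d for d in range(1, n + 1) if n % d == 0], dtype=int)

print(divisors(12))  # [ 1  2  3  4  6 12]
```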
- get_MMAR_MC(S0: float, n: int = 30, num_sim: int = 10000, seed: int = 1968) ndarray[float, Any] [source]
Monte Carlo simulation according to the MMAR model
- Parameters:
S0 (float) – initial price
n (int, optional) – number of steps. Defaults to 30.
num_sim (int, optional) – number of simulations. Defaults to 10_000.
seed (int, optional) – seed. Defaults to 1968.
\[ \begin{align}\begin{aligned}X(t,1) = \underbrace {\sqrt {b^k \cdot \theta_{k}(t)}}_{\sigma(t)} \cdot \sigma \cdot [B_{H}(t) -B_{H}(t-1)]\\\text{mul } = \sqrt {b^k \cdot \theta_{k}(t)} \cdot \sigma\end{aligned}\end{align} \]
Note
When the length of the simulation n differs from the length of Theta, we must decide which values to use. Here we use the last values theta[-n:], but a random selection might be preferable, e.g. np.random.choice(theta, size=n, replace=False).
- Returns:
simulated data
- Return type:
np.ndarray[float, Any]
- get_alpha_min() float [source]
Compute alpha zero, the smallest alpha for which the multifractal spectrum is defined.
\[\alpha_{0} = \frac {\tau(1.0)-\tau(0.9999)}{q(1.0)-q(0.9999)}\]
- Returns:
alpha zero
- Return type:
float
- get_hurst() float [source]
Compute Hurst exponent
\[H = \frac {1}{q} \quad \textrm{where } \tau(q) = 0\]
- Returns:
Hurst exponent
- Return type:
float
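Given discrete estimates of the scaling function, the root of τ(q) = 0 can be located by interpolation. The helper name `hurst_from_tau` is hypothetical; the sketch is checked against the monofractal Brownian case τ(q) = q/2 − 1, for which H = 1/2:

```python
import numpy as np

def hurst_from_tau(qs, taus):
    """Solve tau(q) = 0 by linear interpolation and return H = 1/q."""
    # np.interp needs increasing x, so interpolate q as a function of tau
    order = np.argsort(taus)
    q_star = np.interp(0.0, np.asarray(taus)[order], np.asarray(qs)[order])
    return 1.0 / q_star

qs = np.linspace(0.1, 5, 50)
taus = qs / 2 - 1          # Brownian motion: tau(q) = q/2 - 1, zero at q = 2
print(hurst_from_tau(qs, taus))  # ≈ 0.5
```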
- get_params() tuple[ndarray[float, Any], float] [source]
Compute main parameters for the MMAR model
- Returns:
Theta values, returns volatility
- Return type:
tuple[np.ndarray[float], float]
- get_scaling() tuple[ndarray[float, Any], ndarray[float, Any], ndarray[float, Any]] [source]
Compute scaling function
- Returns:
Taus, Cs, qs
- Return type:
tuple[np.ndarray[float], np.ndarray[float], np.ndarray[float]]
- static kpss_test(timeseries: Series | ndarray, conf_level: float = 0.05) bool [source]
Kwiatkowski-Phillips-Schmidt-Shin test for stationarity. Wrapper around statsmodels.tsa.stattools.kpss.
- Parameters:
timeseries (pd.Series | np.ndarray) – the series to analyze
conf_level (float, optional) – confidence level for p-value. Defaults to 0.05.
- Returns:
True if the series is NOT trend stationary
- Return type:
bool
- legendre(alpha: float) float [source]
Compute Legendre transformation
- Parameters:
alpha (float) – estimation point
- Returns:
transformed value
- Return type:
float
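On a discrete (q, τ(q)) grid the Legendre transform reduces to a minimum. A standalone sketch (the method itself presumably evaluates τ via `tauf`; passing the grid explicitly is an assumption of this example):

```python
import numpy as np

def legendre(alpha, qs, taus):
    """Discrete Legendre transform: f(alpha) = min over q of q*alpha - tau(q)."""
    return float(np.min(qs * alpha - taus))

# Sanity check with the monofractal case tau(q) = H*q - 1, H = 1/2:
# q*alpha - tau(q) = q*(alpha - 1/2) + 1, which at alpha = 1/2 is 1 for every q.
qs = np.linspace(0.1, 5, 50)
taus = 0.5 * qs - 1
print(legendre(0.5, qs, taus))  # 1.0
```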
- property m: float
M of alpha
The first exponent at which the multifractal spectrum equals 1
\[m_{\alpha}\]
Note
It’s also called \(\alpha_{0}\)
- Returns:
m of alpha
- Return type:
float
- property mu: float
Mu of alpha
\[ \begin{align}\begin{aligned}\mu_{\alpha} = \frac {m_{\alpha}}{H}\\\textrm{where } f_{\theta}(\mu_{\alpha}) = 1\end{aligned}\end{align} \]
Note
It’s also called \(\lambda\)
- Returns:
Mu
- Return type:
float
- plot_alpha_theoretical() None [source]
Plot canonical f(alpha)
\[f_X(\alpha) = 1 - \frac {(\alpha - m_{\alpha})^2} {4 \cdot H \cdot (m_{\alpha}-H)}\]
- Returns:
None
- property q: ndarray[float, Any]
q values (exponents)
- Returns:
qs
- Return type:
np.ndarray[float, Any]
- property sigma: float
Sigma of alpha
\[\sigma_{\alpha} = \sqrt {\frac {2 (\mu_{\alpha}-1)} {\log(b)}}\]
- Returns:
Sigma of alpha
- Return type:
float
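The cascade parameters above chain together: μ_α follows from m_α and H, and σ_α from μ_α and the cascade base b. A worked example with hypothetical values for all three inputs:

```python
import math

H = 0.55        # hypothetical Hurst exponent
m_alpha = 0.60  # hypothetical most-probable Holder exponent (alpha_0)
b = 2           # base of the multiplicative cascade

mu_alpha = m_alpha / H                                      # mu ≈ 1.091
sigma_alpha = math.sqrt(2 * (mu_alpha - 1) / math.log(b))   # sigma ≈ 0.512

print(mu_alpha, sigma_alpha)
```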
- property sigma_ret: float
Instantaneous volatility of returns
- Returns:
volatility of returns
- Return type:
float
- property tau: ndarray[float, Any]
Tau values. The values of the scaling function
\[\tau(q)\]
- Returns:
Taus
- Return type:
np.ndarray[float, Any]
- tauf(x: float) float [source]
Return tau(x) via interpolation
- Parameters:
x (float) – value for which compute tau
\[\tau(x)\]
- Returns:
tau(x)
- Return type:
float
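Linear interpolation with `np.interp` is one plausible way to implement this; the sample (q, τ) values below are hypothetical:

```python
import numpy as np

def tauf(x, qs, taus):
    """Interpolate tau at x from discrete scaling-function estimates."""
    return float(np.interp(x, qs, taus))

qs = np.array([0.5, 1.0, 1.5, 2.0])
taus = np.array([-0.75, -0.5, -0.25, 0.0])
print(tauf(1.25, qs, taus))  # -0.375
```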
- property theta: ndarray[float, Any]
Theta values
- Returns:
Thetas
- Return type:
np.ndarray[float, Any]
- src.MMAR.MMAR.normal(loc=0.0, scale=1.0, size=None)
Draw random samples from a normal (Gaussian) distribution.
The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [2], is often called the bell curve because of its characteristic shape (see the example below).
The normal distributions occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [2].
Note
New code should use the numpy.random.Generator.normal method of a numpy.random.Generator instance instead; see the Quick start section of the NumPy random documentation.
- Parameters:
loc (float or array_like of floats) – Mean (“centre”) of the distribution.
scale (float or array_like of floats) – Standard deviation (spread or “width”) of the distribution. Must be non-negative.
size (int or tuple of ints, optional) – Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn.
- Returns:
out – Drawn samples from the parameterized normal distribution.
- Return type:
ndarray or scalar
See also
scipy.stats.norm
probability density function, distribution or cumulative density function, etc.
random.Generator.normal
which should be used for new code.
Notes
The probability density for the Gaussian distribution is
\[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\]
where \(\mu\) is the mean and \(\sigma\) the standard deviation. The square of the standard deviation, \(\sigma^2\), is called the variance.
The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \(x + \sigma\) and \(x - \sigma\) [2]). This implies that normal is more likely to return samples lying close to the mean, rather than those far away.
Examples
Draw samples from the distribution:
>>> mu, sigma = 0, 0.1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
Verify the mean and the variance:
>>> abs(mu - np.mean(s))
0.0 # may vary
>>> abs(sigma - np.std(s, ddof=1))
0.1 # may vary
Display the histogram of the samples, along with the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
...          np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
...          linewidth=2, color='r')
>>> plt.show()
Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5:
>>> np.random.normal(3, 2.5, size=(2, 4))
array([[-4.49401501,  4.00950034, -1.81814867,  7.29718677],  # random
       [ 0.39924804,  4.68456316,  4.99394529,  4.84057254]]) # random