For this package, we have written methods to estimate regression trees and random forests that minimize the spectral objective:
\[\hat{f} = \text{argmin}_{f' \in \mathcal{F}} \frac{||Q(\mathbf{Y} - f'(\mathbf{X}))||_2^2}{n}\]

The package is currently written entirely in R (R Core Team 2024) and gets quite slow for larger sample sizes. A faster C++ version might follow in the future, but for now, there are a few ways to speed up the computations if you apply the methods to larger data sets.
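As a minimal sketch of what this objective computes (not part of the package API), the helper below evaluates the spectral loss for given predictions; the name spectral_loss is hypothetical, and Q is assumed to be the \(n \times n\) spectral transformation matrix.

# hypothetical helper illustrating the objective above
# Q: assumed n x n spectral transformation, Y: response, Y_hat: predictions f'(X)
spectral_loss <- function(Q, Y, Y_hat) {
  r <- Q %*% (Y - Y_hat)  # spectrally transformed residuals
  sum(r^2) / length(Y)    # squared l2-norm divided by n
}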
Some speedup can be achieved by taking advantage of modern hardware.
When estimating an SDForest, the most obvious way to speed up the computation is to fit the individual trees in parallel on different cores. Parallel computing is supported on both Unix and Windows. Depending on how your system is set up, some linear algebra libraries might already run in parallel; in that case, the speed improvement from running on more than one core might not be that large. Be aware of potential RAM overflows.
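As a sketch, and assuming SDForest exposes an mc.cores argument to set the number of cores (check ?SDForest for the exact interface), fitting the forest in parallel might look like this:

# fit the individual trees on 4 cores in parallel
# (mc.cores is an assumption here; consult ?SDForest)
fit <- SDForest(x = X, y = Y, mc.cores = 4)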
In a few places, approximations perform almost as well as running the whole procedure. Reasonable split points to divide the space \(\mathbb{R}^p\) are, in principle, all values between the observed ones. In practice, with many observations, the number of potential splits grows too large. We therefore evaluate at most max_candidates of the potential splits, choosing them according to the quantiles of the potential split points.
# approximation of candidate splits
fit <- SDForest(x = X, y = Y, max_candidates = 100)
tree <- SDTree(x = X, y = Y, max_candidates = 50)

If we have many observations, we can reduce computing time by sampling only max_size observations for each tree instead of a full bootstrap sample of size \(n\). This can dramatically reduce computing time but could also decrease performance.
# draw at most 500 samples from the data for each tree
fit <- SDForest(x = X, y = Y, max_size = 500)