CrunchDAO
CrunchDAO is a research team of data scientists leveraging the power of collective intelligence and Web3 to produce and sell predictive financial insights.
core-proposal tokenomics
The proposal details the Crunch Foundation's strategic shift towards an innovative tokenomics framework, using the Burn and Mint Equilibrium (BME) model to create a proportional and sustainable revenue-sharing mechanism and to foster new AI markets across our community.
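As a rough illustration of how a Burn-and-Mint Equilibrium couples fiat-denominated usage to token supply, the sketch below simulates the two flows: users burn tokens worth the fee they pay, while a fixed amount is minted each period as contributor rewards. All parameter names and values are illustrative assumptions, not Crunch Foundation specifics.

```python
# Minimal, hypothetical Burn-and-Mint Equilibrium (BME) loop.
# Values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class BMEState:
    circulating_supply: float  # tokens in circulation
    token_price_usd: float     # assumed market price of one token

def bme_step(state: BMEState,
             usage_fee_usd: float,    # demand side: fees denominated in fiat
             mint_per_period: float   # supply side: fixed issuance per period
             ) -> BMEState:
    """One BME period: users burn tokens worth the fiat fee,
    and a fixed amount is minted and paid out to contributors."""
    burned = usage_fee_usd / state.token_price_usd   # tokens burned to pay for usage
    minted = mint_per_period                          # tokens minted as rewards
    new_supply = state.circulating_supply - burned + minted
    # At equilibrium, burn == mint; sustained excess usage shrinks supply
    # (deflationary pressure), excess issuance expands it.
    return BMEState(circulating_supply=new_supply,
                    token_price_usd=state.token_price_usd)

state = BMEState(circulating_supply=1_000_000.0, token_price_usd=2.0)
for _ in range(12):
    state = bme_step(state, usage_fee_usd=50_000.0, mint_per_period=20_000.0)
print(f"Supply after 12 periods: {state.circulating_supply:,.0f}")
```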
machine learning quantitative finance kernel methods
CrunchDAO's machine-learning-enabled ensemble framework builds on top of traditional econometric risk models and requires several data-preparation steps, which we discuss: feature orthogonalization, standardization, model order reduction, and data obfuscation. We then discuss how, in the context of ensemble learning and bagging in particular, combining a variety of orthogonal models yields more accurate estimates of expectations. Moreover, the statistics of the set of predictions can be used to infer a measure of risk in the portfolio management process, and we discuss how to integrate this measure into modern portfolio theory. We briefly discuss the necessary relation between these design choices and the ergodic hypothesis on financial data.
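A minimal sketch of the pipeline named above, on synthetic data: orthogonalize features against known risk factors, standardize, reduce model order with PCA, then bag weak learners and read the cross-model dispersion of predictions as a risk proxy. Shapes, estimators, and hyperparameters are illustrative assumptions, not the production framework.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))   # raw features
F = rng.normal(size=(500, 5))    # traditional econometric risk factors
y = rng.normal(size=500)         # synthetic target returns

# 1) Feature orthogonalization: regress features on the risk factors, keep residuals.
beta, *_ = np.linalg.lstsq(F, X, rcond=None)
X_orth = X - F @ beta

# 2) Standardization and 3) model order reduction.
X_std = StandardScaler().fit_transform(X_orth)
X_red = PCA(n_components=10).fit_transform(X_std)

# 4) Bagging: each estimator is fit on a bootstrap sample; averaging the
#    ensemble gives a lower-variance estimate of the expected return.
bag = BaggingRegressor(estimator=DecisionTreeRegressor(max_depth=3),
                       n_estimators=50, random_state=0).fit(X_red, y)
per_model = np.stack([est.predict(X_red) for est in bag.estimators_])
mean_pred = per_model.mean(axis=0)   # ensemble estimate of the expectation
risk_proxy = per_model.std(axis=0)   # prediction dispersion as a risk measure

print(mean_pred[:3], risk_proxy[:3])
```

In a mean-variance (modern portfolio theory) setting, the ensemble mean would play the role of the expected-return vector and the dispersion could enter the risk term, though the exact integration used by the framework is not specified here.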
CrunchDAO serves as a secure intermediary, enabling data scientists and quants to keep control of their models while powering hedge funds and financial institutions. CrunchDAO uses a meritocratic, reward-driven tournament model to produce, collect, and sell machine learning models. Our protocol speeds up the research process while avoiding overfitting, and from this pool our clients can select unbiased models to power their products and services. The CrunchDAO tournament brings decentralized scientific innovation (DeSci) to the world of quantitative research. Thousands of data scientists and PhDs compete to accurately predict the stock market, earning rewards based on how their predictions perform on live financial data.
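As a purely hypothetical illustration of performance-based tournament rewards, the sketch below scores each participant's predictions against realized returns and splits a reward pool in proportion to positive scores. The rank-correlation scoring rule and pool mechanics are assumptions for illustration, not the actual CrunchDAO payout formula.

```python
import numpy as np
from scipy.stats import spearmanr

def payouts(submissions: dict, realized_returns: np.ndarray,
            reward_pool: float) -> dict:
    """Split a reward pool in proportion to each submitter's
    (clipped-at-zero) rank correlation with realized returns."""
    scores = {}
    for name, pred in submissions.items():
        rho, _ = spearmanr(pred, realized_returns)
        scores[name] = max(rho, 0.0)
    total = sum(scores.values()) or 1.0
    return {name: reward_pool * s / total for name, s in scores.items()}

rng = np.random.default_rng(1)
truth = rng.normal(size=100)
subs = {"alice": truth + rng.normal(scale=0.5, size=100),  # skilled submitter
        "bob": rng.normal(size=100)}                       # pure noise
print(payouts(subs, truth, reward_pool=1_000.0))
```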
desci machine-learning ipfs aerospace
Accurate prediction of the thermospheric density field has recently been gaining a lot of attention, due to an outstanding increase in space operations in Low-Earth Orbit in the context of NewSpace. In order to model such high-dimensional systems, Reduced-Order Models (ROMs) have been developed from existing physics-based density models. In general, the data-driven reconstruction of dynamical systems consists of two steps: compression and prediction. In this paper, we focus on the compression step and assess state-of-the-art order reduction methodologies such as autoencoders and linear and nonlinear Principal Component Analysis (PCA). Kernel-based PCA, a nonlinear variant of PCA, outperforms neural networks in representing the model in low-dimensional space in terms of accuracy, computational time, and energy consumption, both for the Lorenz system, a chaotic dynamical system developed in the context of atmospheric modeling, and for the thermospheric density.
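A minimal sketch of the compression step on the Lorenz test system mentioned above, comparing linear PCA with kernel PCA by reconstruction error. The kernel choice, gamma, number of components, and integration settings are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Build a snapshot matrix from one long trajectory of the Lorenz system.
sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0, 50, 5000))
X = StandardScaler().fit_transform(sol.y.T)   # snapshots x state variables

# Linear PCA: compress to 2 modes, then reconstruct.
pca = PCA(n_components=2).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))
err_pca = np.linalg.norm(X - X_pca) / np.linalg.norm(X)

# Kernel PCA (RBF): fit_inverse_transform learns a pre-image map so that a
# comparable reconstruction error can be computed.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5,
                 fit_inverse_transform=True).fit(X)
X_kpca = kpca.inverse_transform(kpca.transform(X))
err_kpca = np.linalg.norm(X - X_kpca) / np.linalg.norm(X)

print(f"relative reconstruction error: PCA={err_pca:.3f}, kernel PCA={err_kpca:.3f}")
```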