diff --git a/v2/.ipynb_checkpoints/Aragon_Conviction_Voting_Model-checkpoint.ipynb b/v2/.ipynb_checkpoints/Aragon_Conviction_Voting_Model-checkpoint.ipynb
index 99841fe..50523c6 100644
--- a/v2/.ipynb_checkpoints/Aragon_Conviction_Voting_Model-checkpoint.ipynb
+++ b/v2/.ipynb_checkpoints/Aragon_Conviction_Voting_Model-checkpoint.ipynb
@@ -60,8 +60,8 @@
     "![](https://i.imgur.com/euAei5R.png)\n",
     "\n",
     "### Understanding Alpha\n",
-    "https://www.desmos.com/calculator/x9uc6w72lm\n",
-    "https://www.desmos.com/calculator/0lmtia9jql\n",
+    "* https://www.desmos.com/calculator/x9uc6w72lm\n",
+    "* https://www.desmos.com/calculator/0lmtia9jql\n",
     "\n",
     "\n",
     "## Converting Signals to Discrete Decisions\n",
@@ -164,6 +164,18 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
+    "## cadCAD Overview\n",
+    "\n",
+    "In the cadCAD simulation [methodology](https://community.cadcad.org/t/differential-specification-syntax-key/31), we operate on four layers: **Policies, Mechanisms, States**, and **Metrics**. Information flows do not have explicit feedback loops unless noted. **Policies** determine the inputs into the system dynamics, and can come from user input, observations of the exogenous environment, or algorithms. **Mechanisms** are functions that take the policy decisions and update the States to reflect the policy-level changes. **States** are variables that represent the system quantities at a given point in time, and **Metrics** are computed from state variables to assess the health of the system. Metrics can often be thought of as KPIs, or Key Performance Indicators.\n",
+    "\n",
+    "At a more granular level, to set up a model, there are system conventions and configurations that must be [followed](https://community.cadcad.org/t/introduction-to-simulation-configurations/34).\n",
+    "\n",
+    "cadCAD modeling is analogous to a machine learning pipeline, which normally consists of multiple steps when training and running a deployed model: preprocessing (segregating continuous from categorical features, transforming or imputing data), then instantiating, training, and running a model with specified hyperparameters. In cadCAD, states roughly translate into features, and pipelines with built-in logic direct traffic between different mechanisms, much as a preprocessing pipeline routes data through scaling and imputation steps. Accuracy scores, ROC curves, etc. are analogous to the metrics that can be configured on a cadCAD model, specifying how well a given model is meeting its objectives. cadCAD's parameter-sweeping capability can be thought of as a grid search: a way to find optimal hyperparameters for a system by running through alternative scenarios. The A/B-style testing cadCAD enables is used the same way machine learning models are A/B tested, providing an out-of-the-box, side-by-side comparison of multiple models. Drawing on the field of systems identification, dynamical systems models can also \"learn online\" by providing a feedback loop to generative system mechanisms.\n",
+    "\n",
+    "\n",
+    "## Differential Specification \n",
+    "![](images/Aragon_v2.png)\n",
+    "\n",
     "## Schema of the states \n",
     "The model consists of a temporal in-memory graph database called *network*, containing nodes of type **Participant** and type **Proposal**. Participants will have *holdings* and *sentiment*, and Proposals will have *funds_required*, *status* (candidate or active), and *conviction*. The model has three kinds of edges:\n",
     "* (Participant, Participant): we labeled this edge type \"influencer\", and it contains information about how the preferences and sentiment of one participant influence another \n",
@@ -305,19 +317,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Simulation\n",
-    "\n",
-    "## cadCAD Overview\n",
-    "\n",
-    "In the cadCAD simulation [methodology](https://community.cadcad.org/t/differential-specification-syntax-key/31), we operate on four layers: **Policies, Mechanisms, States**, and **Metrics**. Information flows do not have explicit feedback loop unless noted. **Policies** determine the inputs into the system dynamics, and can come from user input, observations from the exogenous environment, or algorithms. **Mechanisms** are functions that take the policy decisions and update the States to reflect the policy level changes. **States** are variables that represent the system quantities at the given point in time, and **Metrics** are computed from state variables to assess the health of the system. Metrics can often be thought of as KPIs, or Key Performance Indicators. \n",
-    "\n",
-    "At a more granular level, to setup a model, there are system conventions and configurations that must be [followed.](https://community.cadcad.org/t/introduction-to-simulation-configurations/34)\n",
-    "\n",
-    "The way to think of cadCAD modeling is analogous to machine learning pipelines which normally consist of multiple steps when training and running a deployed model. There is preprocessing, which includes segregating features between continuous and categorical, transforming or imputing data, and then instantiating, training, and running a machine learning model with specified hyperparameters. cadCAD modeling can be thought of in the same way as states, roughly translating into features, are fed into pipelines that have built-in logic to direct traffic between different mechanisms, such as scaling and imputation. Accuracy scores, ROC, etc. are analogous to the metrics that can be configured on a cadCAD model, specifying how well a given model is doing in meeting its objectives. The parameter sweeping capability of cadCAD can be thought of as a grid search, or way to find the optimal hyperparameters for a system by running through alternative scenarios. A/B style testing that cadCAD enables is used in the same way machine learning models are A/B tested, except out of the box, in providing a side by side comparison of muliple different models to compare and contrast performance. Utilizing the field of Systems Identification, dynamical systems models can be used to \"online learn\" by providing a feedback loop to generative system mechanisms. \n",
\n", - "\n", - "\n", - "## Differential Specification \n", - "![](images/Aragon_v2.png)" + "# Simulation" ] }, { diff --git a/v2/Aragon_Conviction_Voting_Model.ipynb b/v2/Aragon_Conviction_Voting_Model.ipynb index 99841fe..50523c6 100644 --- a/v2/Aragon_Conviction_Voting_Model.ipynb +++ b/v2/Aragon_Conviction_Voting_Model.ipynb @@ -60,8 +60,8 @@ "![](https://i.imgur.com/euAei5R.png)\n", "\n", "### Understanding Alpha\n", - "https://www.desmos.com/calculator/x9uc6w72lm\n", - "https://www.desmos.com/calculator/0lmtia9jql\n", + "* https://www.desmos.com/calculator/x9uc6w72lm\n", + "* https://www.desmos.com/calculator/0lmtia9jql\n", "\n", "\n", "## Converting Signals to Discrete Decisions\n", @@ -164,6 +164,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "## cadCAD Overview\n", + "\n", + "In the cadCAD simulation [methodology](https://community.cadcad.org/t/differential-specification-syntax-key/31), we operate on four layers: **Policies, Mechanisms, States**, and **Metrics**. Information flows do not have explicit feedback loop unless noted. **Policies** determine the inputs into the system dynamics, and can come from user input, observations from the exogenous environment, or algorithms. **Mechanisms** are functions that take the policy decisions and update the States to reflect the policy level changes. **States** are variables that represent the system quantities at the given point in time, and **Metrics** are computed from state variables to assess the health of the system. Metrics can often be thought of as KPIs, or Key Performance Indicators. \n", + "\n", + "At a more granular level, to setup a model, there are system conventions and configurations that must be [followed.](https://community.cadcad.org/t/introduction-to-simulation-configurations/34)\n", + "\n", + "The way to think of cadCAD modeling is analogous to machine learning pipelines which normally consist of multiple steps when training and running a deployed model. There is preprocessing, which includes segregating features between continuous and categorical, transforming or imputing data, and then instantiating, training, and running a machine learning model with specified hyperparameters. cadCAD modeling can be thought of in the same way as states, roughly translating into features, are fed into pipelines that have built-in logic to direct traffic between different mechanisms, such as scaling and imputation. Accuracy scores, ROC, etc. are analogous to the metrics that can be configured on a cadCAD model, specifying how well a given model is doing in meeting its objectives. The parameter sweeping capability of cadCAD can be thought of as a grid search, or way to find the optimal hyperparameters for a system by running through alternative scenarios. A/B style testing that cadCAD enables is used in the same way machine learning models are A/B tested, except out of the box, in providing a side by side comparison of muliple different models to compare and contrast performance. Utilizing the field of Systems Identification, dynamical systems models can be used to \"online learn\" by providing a feedback loop to generative system mechanisms. \n", + "\n", + "\n", + "## Differential Specification \n", + "![](images/Aragon_v2.png)\n", + "\n", "## Schema of the states \n", "The model consists of a temporal in memory graph database called *network* containing nodes of type **Participant** and type **Proposal**. 
Participants will have *holdings* and *sentiment* and Proposals will have *funds_required, status*(candidate or active), *conviction* Tthe model as three kinds of edges:\n", "* (Participant, participant), we labeled this edge type \"influencer\" and it contains information about how the preferences and sentiment of one participant influence another \n", @@ -305,19 +317,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Simulation\n", - "\n", - "## cadCAD Overview\n", - "\n", - "In the cadCAD simulation [methodology](https://community.cadcad.org/t/differential-specification-syntax-key/31), we operate on four layers: **Policies, Mechanisms, States**, and **Metrics**. Information flows do not have explicit feedback loop unless noted. **Policies** determine the inputs into the system dynamics, and can come from user input, observations from the exogenous environment, or algorithms. **Mechanisms** are functions that take the policy decisions and update the States to reflect the policy level changes. **States** are variables that represent the system quantities at the given point in time, and **Metrics** are computed from state variables to assess the health of the system. Metrics can often be thought of as KPIs, or Key Performance Indicators. \n", - "\n", - "At a more granular level, to setup a model, there are system conventions and configurations that must be [followed.](https://community.cadcad.org/t/introduction-to-simulation-configurations/34)\n", - "\n", - "The way to think of cadCAD modeling is analogous to machine learning pipelines which normally consist of multiple steps when training and running a deployed model. There is preprocessing, which includes segregating features between continuous and categorical, transforming or imputing data, and then instantiating, training, and running a machine learning model with specified hyperparameters. cadCAD modeling can be thought of in the same way as states, roughly translating into features, are fed into pipelines that have built-in logic to direct traffic between different mechanisms, such as scaling and imputation. Accuracy scores, ROC, etc. are analogous to the metrics that can be configured on a cadCAD model, specifying how well a given model is doing in meeting its objectives. The parameter sweeping capability of cadCAD can be thought of as a grid search, or way to find the optimal hyperparameters for a system by running through alternative scenarios. A/B style testing that cadCAD enables is used in the same way machine learning models are A/B tested, except out of the box, in providing a side by side comparison of muliple different models to compare and contrast performance. Utilizing the field of Systems Identification, dynamical systems models can be used to \"online learn\" by providing a feedback loop to generative system mechanisms. \n", - "\n", - "\n", - "## Differential Specification \n", - "![](images/Aragon_v2.png)" + "# Simulation" ] }, {
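To make the alpha parameter behind the Desmos links above concrete, here is a minimal sketch (illustrative code, not from the notebook) of the discrete-time conviction update those plots explore, in which a proposal's accumulated conviction decays by a factor alpha each step and grows with the tokens staked on it:

```python
def simulate_conviction(stakes, alpha=0.9):
    """Accumulate conviction: y[t] = alpha * y[t-1] + x[t], where x[t]
    is the stake on a proposal at step t and 0 <= alpha < 1 sets how
    slowly accumulated support decays."""
    conviction, trajectory = 0.0, []
    for staked in stakes:
        conviction = alpha * conviction + staked
        trajectory.append(conviction)
    return trajectory

# A constant stake x converges to the fixed point x / (1 - alpha):
# with alpha = 0.9, a sustained stake of 100 approaches conviction 1000.
print(simulate_conviction([100.0] * 80, alpha=0.9)[-1])
```

Higher alpha means longer memory: support must be sustained to move conviction, which is what lets the mechanism convert a continuous preference signal into the discrete pass/fail decisions made against a threshold.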
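The Policies/Mechanisms/States layering in the cadCAD Overview maps directly onto cadCAD's function conventions. Below is a minimal sketch of that wiring; the names `funding_policy`, `beta`, and `funds` are hypothetical, not this model's actual variables:

```python
# Policy: reads the previous state and returns a decision (no state writes).
def funding_policy(params, substep, state_history, previous_state):
    released = params['beta'] * previous_state['funds']
    return {'released': released}

# Mechanism (state-update function): applies the aggregated policy input
# to exactly one state variable and returns its new value.
def update_funds(params, substep, state_history, previous_state, policy_input):
    return 'funds', previous_state['funds'] - policy_input['released']

# Policies and mechanisms are wired together in a partial state update block.
partial_state_update_blocks = [{
    'policies': {'funding': funding_policy},
    'variables': {'funds': update_funds},
}]
```

Policies compute decisions from the previous state, mechanisms apply them to individual state variables, and metrics are computed afterwards over the saved state trajectory.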
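The schema of the states describes a property graph. A toy instantiation, assuming networkx as the in-memory graph container and using the node attributes named in the schema; the `support` and `conflict` labels for the second and third edge types are assumptions for illustration:

```python
import networkx as nx

network = nx.DiGraph()

# Participant nodes carry holdings and sentiment.
network.add_node('p0', type='participant', holdings=150.0, sentiment=0.6)
network.add_node('p1', type='participant', holdings=80.0, sentiment=0.4)

# Proposal nodes carry funds_required, status, and conviction.
network.add_node('r0', type='proposal', funds_required=500.0,
                 status='candidate', conviction=0.0)
network.add_node('r1', type='proposal', funds_required=300.0,
                 status='candidate', conviction=0.0)

# The three edge kinds from the schema:
network.add_edge('p0', 'p1', type='influencer')            # (Participant, Participant)
network.add_edge('p0', 'r0', type='support', tokens=25.0)  # (Participant, Proposal); label assumed
network.add_edge('r0', 'r1', type='conflict')              # (Proposal, Proposal); label assumed
```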