Compare commits

...

344 Commits

Author SHA1 Message Date
Markus Buhatem Koch f5ca0857f3
Merge pull request #76 from BlockScience/staging
Update README.md
2019-10-02 15:22:25 -03:00
Markus Buhatem Koch d9ff2997a1
Update README.md
`System Model Configuration` is now the base MD in the documentation folder
2019-10-02 15:22:02 -03:00
Joshua E. Jodesty ddabdbb381
Merge pull request #75 from BlockScience/staging
Returning Build From Source and Made documentation prominent
2019-09-26 15:05:48 -04:00
Joshua E. Jodesty 17316277ff
Merge pull request #74 from JEJodesty/staging
Staging
2019-09-26 15:02:15 -04:00
Joshua E. Jodesty c326f3c8c0 docs 2019-09-26 14:57:00 -04:00
Joshua E. Jodesty 422b0fb671 documentation & build from source is important for contributors 2019-09-26 09:16:20 -04:00
Joshua E. Jodesty 319f74a89c documentation is important for contributors . . . 2019-09-24 20:22:01 -04:00
Joshua E. Jodesty 154a653c7f
Merge pull request #73 from BlockScience/staging
deploying 0.3.1
2019-09-21 11:05:42 -04:00
Markus Buhatem Koch 3683aa30dc
Merge pull request #72 from markusbkoch/master
Update README
2019-09-19 22:46:42 -03:00
Markus Buhatem Koch 837aad310e
Update README.md 2019-09-19 22:33:54 -03:00
Markus Buhatem Koch 4e5dca0cf9
Rename Simulation_Configuration.md to README.md 2019-09-19 22:33:06 -03:00
Markus Buhatem Koch 4fe3419b23
Update README.md 2019-09-19 22:32:40 -03:00
Joshua E. Jodesty 7c870e584b
Merge pull request #71 from JEJodesty/staging
Merging dist containing tp.clear()
2019-09-16 23:21:21 -04:00
Joshua E. Jodesty b515dd3fdb new dist 2019-09-16 23:18:10 -04:00
Joshua E. Jodesty 5048976f71
Merge pull request #69 from markusbkoch/markusbkoch-patch-1
merge setup.py update
2019-09-16 23:08:50 -04:00
Markus Buhatem Koch 4e1f730c27
update version number 2019-09-11 17:20:33 -03:00
Markus Buhatem Koch 9f96821b89
add dependencies to setup.py
so that pypi installs dependencies automatically (https://packaging.python.org/discussions/install-requires-vs-requirements/)
2019-09-11 17:10:20 -03:00
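Commit 9f96821 above moves dependency declarations into `setup.py` so that pip resolves them automatically on install. A minimal sketch of what such a change involves, shown as the keyword arguments a `setup.py` would pass to `setuptools.setup()` (the dependency names below are assumptions, not cadCAD's actual list):

```python
# Illustrative sketch only: the kwargs a setup.py passes to
# setuptools.setup(). Declaring install_requires is what lets pip
# resolve dependencies automatically when installing from PyPI;
# requirements.txt is not consulted for packages installed that way.
# Dependency names here are assumptions, not cadCAD's real list.
SETUP_KWARGS = {
    "name": "cadCAD",
    "packages": ["cadCAD"],
    "install_requires": ["pandas", "pathos"],
}
```

The linked packaging.python.org discussion draws exactly this distinction: `install_requires` declares abstract runtime dependencies, while `requirements.txt` pins a concrete environment.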
Markus Buhatem Koch 460b1ff67c
Merge pull request #1 from BlockScience/master
merge
2019-09-11 22:03:02 +02:00
Joshua E. Jodesty b8fa090222
Merge pull request #68 from BlockScience/staging
contribution draft rename
2019-09-07 20:57:38 -04:00
Joshua E. Jodesty 01285e6320
Merge pull request #67 from JEJodesty/master
contribution draft rename
2019-09-07 20:56:48 -04:00
Joshua E. Jodesty d3ef3d23f5 contribution draft 2019-09-07 20:55:48 -04:00
Joshua E. Jodesty ee8b3de331
Merge pull request #66 from BlockScience/staging
contributing.md
2019-09-07 20:53:30 -04:00
Joshua E. Jodesty ce3eacd971
Merge pull request #65 from JEJodesty/master
contributing.md
2019-09-07 20:52:17 -04:00
Joshua E. Jodesty 86e683b268 contribution draft 2019-09-07 20:50:48 -04:00
Joshua E. Jodesty a2346046f3 contribution draft 2019-09-07 20:48:51 -04:00
Joshua E. Jodesty 0619764aef
Merge pull request #64 from BlockScience/staging
Staging
2019-09-07 19:55:33 -04:00
Joshua E. Jodesty faae27f21e
Merge pull request #63 from JEJodesty/master
Open Source cadCAD 0.3.0!!!!!!
2019-09-07 19:54:38 -04:00
Joshua E. Jodesty dd872c3878 open sourced 0.3.0 2019-09-07 19:50:48 -04:00
Joshua E. Jodesty 130f85f0ef
Merge pull request #62 from JEJodesty/master
Hell
2019-09-07 19:31:42 -04:00
Joshua E. Jodesty bc4ab3113d behind gates of hell pt. 1 2019-09-07 19:30:06 -04:00
Joshua E. Jodesty c57e2d9840 behind gates of hell 2019-09-07 19:28:44 -04:00
Joshua E. Jodesty 81d666ce3e hell gates pt. 7 2019-09-07 19:27:20 -04:00
Joshua E. Jodesty f00b14d52e
Merge pull request #61 from BlockScience/staging
Update Description
2019-09-07 19:07:28 -04:00
Joshua E. Jodesty 5d0b1c4aec
Merge pull request #60 from JEJodesty/staging
Update Description
2019-09-07 19:05:47 -04:00
Joshua E. Jodesty 8e76f3323b update desc seperate 2019-09-07 18:59:41 -04:00
Joshua E. Jodesty 99495d08dc ascii art 2019-09-05 19:06:02 -04:00
Joshua E. Jodesty 9a12b5d0d6
Merge pull request #58 from JEJodesty/staging
Staging
2019-09-05 16:18:18 -04:00
Joshua E. Jodesty 429d2c9e0f hell gates pt.6 2019-09-05 16:15:53 -04:00
Joshua E. Jodesty f249814aa9 hell gates pt.5 2019-09-05 16:14:43 -04:00
Joshua E. Jodesty 4f491bc8c9 updated authors 2019-09-05 11:02:13 -04:00
Joshua E. Jodesty 56c38dfd44 Merge branch 'staging' of https://github.com/BlockScience/cadCAD into staging 2019-09-04 16:55:15 -04:00
Joshua E. Jodesty 411975913c misc 2019-09-04 16:41:20 -04:00
Markus Buhatem Koch f931945eaf
Merge pull request #57 from BlockScience/staging
add link to readme
2019-08-28 09:13:56 -03:00
Markus d260754dc1 add link to readme 2019-08-28 09:13:33 -03:00
Markus Buhatem Koch d56b5c1c5f
Merge pull request #56 from BlockScience/staging
Tutorial part 6
2019-08-28 09:12:18 -03:00
Markus 792c62c213 Tutorial part 6 2019-08-28 09:11:37 -03:00
Joshua E. Jodesty 4f58a169c5
Merge pull request #55 from BlockScience/staging
restart threads
2019-08-27 15:28:54 -04:00
Joshua E. Jodesty db7de4fe4f restart threads 2019-08-27 15:28:10 -04:00
Joshua E. Jodesty 8768819790 restart threads 2019-08-27 14:46:16 -04:00
Joshua E. Jodesty de9a708d43
Merge pull request #54 from BlockScience/staging
Open Sourcing cadCad Pt. 2
2019-08-22 19:57:37 -04:00
Joshua E. Jodesty 6489a75f1e opp cp 2019-08-22 19:49:09 -04:00
Joshua E. Jodesty ac6e6eebda kudos xd pt 2 2019-08-22 18:55:44 -04:00
Joshua E. Jodesty 342f3a519c added liscence and authors 2019-08-22 18:33:17 -04:00
Joshua E. Jodesty fc655d3741 added liscence and authors 2019-08-22 18:32:40 -04:00
Joshua E. Jodesty f9996163d0
Merge pull request #53 from BlockScience/staging
Open Sourcing cadCAD!!!!
2019-08-22 18:29:47 -04:00
Joshua E. Jodesty 3c584a05bd added dist, chaned repo name in setup 2019-08-22 18:27:57 -04:00
Joshua E. Jodesty 50c830db38 added dist, chaned repo name in setup 2019-08-22 18:27:31 -04:00
Joshua E. Jodesty 4b381f81d7 added python version 2019-08-22 18:24:01 -04:00
Joshua E. Jodesty 0d74ec5285 added python version 2019-08-22 18:22:46 -04:00
Joshua E. Jodesty 8ad580e0fb
Merge pull request #52 from BlockScience/tutorials
tutorials from cadCAD-Tutorials repo + fixed links
2019-08-22 17:36:45 -04:00
Joshua E. Jodesty 4faba2a37c added Liscense + Authors 2019-08-22 17:22:46 -04:00
Joshua E. Jodesty fe9c5f6caa Merge branch 'staging' of https://github.com/BlockScience/DiffyQ-SimCAD into staging 2019-08-22 17:19:39 -04:00
Joshua E. Jodesty 4dc06f581d added Liscense + Authors 2019-08-22 17:18:56 -04:00
Joshua E. Jodesty f5d5b28292 added Liscense + Authors 2019-08-22 17:17:51 -04:00
Joshua E. Jodesty 063e56dc76 added Liscense + Authors 2019-08-22 17:16:45 -04:00
Joshua E. Jodesty 2bfd37fecd added Liscense + Authors 2019-08-22 17:13:05 -04:00
Joshua E. Jodesty d7e6c1ba0d update readme 2019-08-22 16:18:03 -04:00
Joshua E. Jodesty 104da824a2 Merge branch 'staging' of https://github.com/BlockScience/DiffyQ-SimCAD into staging 2019-08-22 16:07:36 -04:00
Markus Buhatem Koch ef8e23481a
Merge branch 'staging' into tutorials 2019-08-22 15:28:02 -03:00
Joshua E. Jodesty 2a37eb5c02 fixing likns 2019-08-22 14:26:03 -04:00
Markus Buhatem Koch 7b428ddb81
relative link 2019-08-22 15:06:17 -03:00
Joshua E. Jodesty 7d0a14efbf added docs from tutorial 2019-08-22 12:52:32 -04:00
Markus 9ac9e238bb tutorials from cadCAD-Tutorials repo + fixed links 2019-08-21 21:08:23 -03:00
Joshua E. Jodesty 747ec36e50 os refactor pt 1 2019-08-21 14:27:31 -04:00
Joshua E. Jodesty 67c46cfe09 diverged docs 2019-08-21 14:16:31 -04:00
Joshua E. Jodesty 9399c6b728 improved readme 2019-07-30 12:53:25 -04:00
Joshua E. Jodesty 176593ae0f included execution in readme 2019-07-30 12:41:13 -04:00
Joshua E. Jodesty 715e6f9a74 pre refactor upload 2019-07-30 11:17:49 -04:00
Joshua E. Jodesty c55e433920 docs pending review 2019-07-19 10:59:05 -04:00
Joshua E. Jodesty bfdc7d0ad3
Update README.md 2019-06-12 18:21:35 -04:00
Joshua E. Jodesty d7fe3331f8 added output 2019-06-07 10:45:43 -04:00
Joshua E. Jodesty 964e3f7bc1 test partially done 2019-06-07 10:40:45 -04:00
Joshua E. Jodesty fe8d9a1eac
Merge pull request #51 from BlockScience/staging
ver. 0.2.4 - overwrite master
2019-05-31 17:42:54 -04:00
Joshua E. Jodesty 5877d20fc6 Merge branch 'master' into staging 2019-05-31 17:38:51 -04:00
Joshua E. Jodesty 45b684f2bb
Merge pull request #50 from BlockScience/udc
ver 0.2.4 overwrite staging
2019-05-31 17:26:14 -04:00
Joshua E. Jodesty d892d74e31 ver. 0.2.4 2019-05-31 17:24:36 -04:00
Joshua E. Jodesty 16aa71664d Merge branch 'staging' into udc 2019-05-31 17:15:53 -04:00
Joshua E. Jodesty 3019715d83 ver. 0.2.4 2019-05-31 17:10:28 -04:00
Joshua E. Jodesty 0a0d85c257 ver. 0.2.4 2019-05-31 16:57:30 -04:00
Joshua E. Jodesty 4870f2db92 pre push mad stuff 2019-05-31 16:26:58 -04:00
Joshua E. Jodesty f224df3ed4 param sweep patch 2019-05-31 11:25:54 -04:00
Joshua E. Jodesty 9f181e6b3f dist added? 2019-05-21 12:44:29 -04:00
Joshua E. Jodesty 7f28bae21a changed proc_trigger to var_trigger in sweep test 2019-05-16 16:33:53 -04:00
Joshua E. Jodesty 2acf33d1f3 fixed bug2: re-included deepcopy 2019-05-16 13:05:30 -04:00
Joshua E. Jodesty b020d9e23f fixed bug: re-included deepcopy 2019-05-16 12:56:28 -04:00
Joshua E. Jodesty 2de989db0a fefactored env_process 2019-05-16 12:51:56 -04:00
Joshua E. Jodesty 1c89d28ab5 add new ver 2019-05-11 12:28:31 -04:00
Joshua E. Jodesty 01c5945724 deepcopy gate 2019-05-09 16:16:07 -04:00
Joshua E. Jodesty 71264c1c8f udo dirty publish to 'side branch' 2019-05-09 13:30:47 -04:00
Joshua E. Jodesty 3c91040401 udo refactor 2019-05-02 14:10:58 -04:00
Joshua E. Jodesty 30e1c336e6 feature example refactoring pt. 1 2019-04-25 11:02:27 -04:00
Joshua E. Jodesty 9dbb866bd0 agent perception 2019-04-16 20:16:40 -04:00
Joshua E. Jodesty c4863a838d udc w/ policies 2019-04-05 13:38:55 -04:00
Joshua E. Jodesty 875f370c5e json udc working - meets spec 2019-04-03 15:33:38 -04:00
Joshua E. Jodesty a57e9d5ea3 json udc working but not spec 2019-04-03 10:59:45 -04:00
Joshua E. Jodesty 30127989c9
Merge pull request #49 from BlockScience/staging
Staging
2019-03-29 11:01:38 -04:00
Joshua E. Jodesty 2387dc071b cadCAD==0.2.1 clean2 2019-03-29 10:59:55 -04:00
Joshua E. Jodesty c05cb7ad05 cadCAD==0.2.1 clean2 2019-03-29 10:57:43 -04:00
Joshua E. Jodesty c9ecf54d0d cadCAD==0.2.1 clean 2019-03-29 10:36:33 -04:00
Joshua E. Jodesty b2b466493b
Merge pull request #48 from BlockScience/staging
Staging
2019-03-29 09:30:45 -04:00
Joshua E. Jodesty 295968b71f cadCAD==0.2.1 2019-03-29 09:29:41 -04:00
Joshua E. Jodesty d56d60d7a3 hydra 2019-03-29 09:10:31 -04:00
Joshua E. Jodesty ac44a7bee8 checkpoint 2019-03-18 11:42:37 -04:00
Joshua E. Jodesty b3b0356a8f moved config_sim 2019-03-06 15:11:25 -05:00
Joshua E. Jodesty 3cf3f45c08 udc workarround 2019-03-06 14:50:46 -05:00
Joshua E. Jodesty 5f2d0801ca multithreaded runs for uat 2019-03-04 09:12:06 -05:00
Joshua E. Jodesty e37601ae22 parallelized runs 2019-03-01 17:30:37 -05:00
Joshua E. Jodesty d56e843fcc type savety skipe ended 2019-03-01 11:34:50 -05:00
Joshua E. Jodesty 7fc2e6503c trying to fix cadCAD/engine/__init__.py 2019-02-27 14:43:22 -05:00
Joshua E. Jodesty 9e9f7be17e wrapping up type stuff 2019-02-27 11:36:10 -05:00
Joshua E. Jodesty cb6acce3d9 type annotations: simulation.py 2019-02-26 11:49:00 -05:00
Joshua E. Jodesty 9d9e33b766 type annotations: simulation.py 2019-02-26 11:47:41 -05:00
Joshua E. Jodesty b0934b70aa
Merge pull request #47 from BlockScience/staging
readme update
2019-02-21 15:35:30 -05:00
Joshua E. Jodesty 2fb0dcf754 readme update 2019-02-21 15:34:27 -05:00
Joshua E. Jodesty 26d4a2d398
Merge pull request #46 from BlockScience/staging
readme update
2019-02-21 15:32:38 -05:00
Joshua E. Jodesty fe1960797e readme update 2019-02-21 15:32:11 -05:00
Joshua E. Jodesty c0e7f821a2
Merge pull request #45 from BlockScience/staging
Installation Update
2019-02-21 15:31:21 -05:00
Joshua E. Jodesty d932332fcc readme update 2019-02-21 15:30:23 -05:00
Joshua E. Jodesty 516b77d693 readme update 2019-02-21 15:29:36 -05:00
Joshua E. Jodesty 9c848b5cb9 readme update 2019-02-21 15:28:20 -05:00
Joshua E. Jodesty 398565ecff
Merge pull request #44 from BlockScience/staging
Rebase Staging
2019-02-21 15:10:54 -05:00
Joshua E. Jodesty a2453e8adf rebased master 2019-02-21 15:09:19 -05:00
Joshua E. Jodesty 92559494d3 fixed typo 2019-02-21 15:07:16 -05:00
Joshua E. Jodesty dbf8e11d0b
Merge pull request #43 from BlockScience/staging
not seeing multi examples in readme?
2019-02-21 14:49:52 -05:00
Joshua E. Jodesty ca81e4c2e2 added gemfury install option 2019-02-21 14:48:24 -05:00
Joshua E. Jodesty 6b064707fc added gemfury install option 2019-02-21 14:45:56 -05:00
Joshua E. Jodesty 03b53b59af
Merge pull request #42 from BlockScience/staging
added gemfury install option
2019-02-21 14:43:33 -05:00
Joshua E. Jodesty 80e51f6c8c added gemfury install option 2019-02-21 14:40:23 -05:00
Joshua E. Jodesty 7f4f6ddd77 local merge for renaming 2019-02-20 12:02:24 -05:00
Joshua E. Jodesty 7fb764056f renameing hell pt. 4 2019-02-20 11:51:55 -05:00
Joshua E. Jodesty d9002d4950 rename hell pt. 3 2019-02-18 16:26:12 -05:00
Joshua E. Jodesty 2b9ab7cd46 rename hell pt. 3 2019-02-18 16:24:56 -05:00
Joshua E. Jodesty 1862416b86 clean staging 2019-02-18 15:31:39 -05:00
Joshua E. Jodesty 19feab55e0
Merge pull request #38 from BlockScience/refactor_terminology
Refactor terminology
2019-02-18 15:15:42 -05:00
Joshua E. Jodesty e30388ff6b e-courage: modularizing backwards compatability 2019-02-18 14:20:59 -05:00
Joshua E. Jodesty 5863188617 e-courage: modularizing backwards compatability 2019-02-18 14:12:06 -05:00
Joshua E. Jodesty ef9d73a32c e-courage 2 2019-02-18 14:00:39 -05:00
Joshua E. Jodesty 0c234e2f00 ecourage 2019-02-18 13:10:23 -05:00
Joshua E. Jodesty d04e2bb7e4 changed jupyter file 2019-02-18 12:32:47 -05:00
Joshua E. Jodesty fe730c3e6c changed jupyter file 2019-02-18 12:30:36 -05:00
Joshua E. Jodesty e00605c073 Merge branch 'param-sweep-multi-proc' of https://github.com/BlockScience/DiffyQ-SimCAD into param-sweep-multi-proc 2019-02-18 12:22:50 -05:00
Joshua E. Jodesty 59ba3d9f21 update state update 2019-02-18 12:22:22 -05:00
Markus 69dfaf391a update trigger 2019-02-18 13:39:26 -03:00
Markus 0895019991 fix bug demoed at commit 129b11f 2019-02-18 10:32:20 -03:00
Markus 129b11fa4c bug when partial_state_update_blocks dict is empty 2019-02-18 10:30:47 -03:00
Markus 2c4b775d86 tests 2019-02-18 10:28:51 -03:00
Markus ffd90b9ecd bug fix
to match the renamed argument in Configuration constructor
2019-02-18 10:17:42 -03:00
Markus 00f5d53888 renaming some user-facing terms 2019-02-15 14:33:55 -02:00
Joshua E. Jodesty 9697ed488a
Merge pull request #37 from BlockScience/param-sweep-multi-proc
renamed b_identity to p_identitiy
2019-02-15 09:51:11 -05:00
Joshua E. Jodesty e2d68a0587 renamed b_identity to p_identitiy 2019-02-15 09:49:52 -05:00
Joshua E. Jodesty b910c38ad9 removed *.egg-info 2019-02-15 09:43:33 -05:00
Markus 47cfc12560 misc bug fixes 2019-02-15 09:11:58 -02:00
Markus Buhatem Koch ed2ccf5421
Merge branch 'staging' into refactor/terminology 2019-02-15 08:56:34 -02:00
Joshua E. Jodesty 1988558fd4
Merge pull request #36 from BlockScience/param-sweep-multi-proc
rename hell
2019-02-14 20:08:19 -05:00
Joshua E. Jodesty 50e4a38df7 rename hell 2019-02-14 20:05:46 -05:00
Markus 2d4b7b612c renaming SimCAD to cadCAD 2019-02-14 22:35:28 -02:00
Joshua E. Jodesty 11f394cd8f
Merge pull request #34 from BlockScience/revert-32-param-sweep-multi-proc
Revert "Renaming PR hack"
2019-02-14 16:07:43 -05:00
Joshua E. Jodesty 2310d5042c
Revert "Renaming PR hack" 2019-02-14 16:06:31 -05:00
Joshua E. Jodesty dfb9c433b1
Merge pull request #33 from BlockScience/param-sweep-multi-proc
renaming PR Hack
2019-02-14 16:03:22 -05:00
Joshua E. Jodesty 7cecd7d534
Merge pull request #32 from BlockScience/param-sweep-multi-proc
Renaming PR hack
2019-02-14 15:44:47 -05:00
Joshua E. Jodesty fcda21d513 renaming 2019-02-14 15:38:57 -05:00
Joshua E. Jodesty 76fb452508 renaming 2019-02-14 15:27:51 -05:00
Joshua E. Jodesty ef4c0c1968
Merge pull request #31 from BlockScience/param-sweep-multi-proc
backwards merge
2019-02-14 10:35:17 -05:00
Joshua E. Jodesty 06367f0573 backwards merge 2019-02-14 10:34:05 -05:00
Joshua E. Jodesty ed2f31cffc merge heaven: working 2019-02-14 10:29:45 -05:00
Joshua E. Jodesty 8d56cf2939 restore test2 2019-02-14 10:18:31 -05:00
Joshua E. Jodesty 11d7ba7cf1 restore test 2019-02-14 10:17:28 -05:00
Joshua E. Jodesty 2989dc2554 merge hell 2019-02-14 10:16:24 -05:00
Joshua E. Jodesty 6064647e4c
Merge pull request #30 from BlockScience/param-sweep-multi-proc
Param sweep multi proc
2019-02-14 09:36:05 -05:00
Joshua E. Jodesty df4b7ce747 rename config: sweep_config 2019-02-14 09:33:10 -05:00
Joshua E. Jodesty 20d7d620b7 misc 2019-02-14 09:30:33 -05:00
Joshua E. Jodesty e73113754f misc 2019-02-13 16:50:04 -05:00
Joshua E. Jodesty e78bfa3c8a param sweep full spec: identity handling 2019-02-13 16:04:31 -05:00
Joshua E. Jodesty 368fcaa13d param sweep full spec working pre-release 2019-02-13 08:18:33 -05:00
Joshua E. Jodesty 522d6dd343 param sweep full spec working pre-release 2019-02-13 08:16:58 -05:00
Joshua E. Jodesty ddc67531bd param sweep full spec working 2019-02-13 00:38:15 -05:00
Joshua E. Jodesty cccb491f2c temp parama sweep 2019-02-12 21:13:08 -05:00
Markus eaf2f4d291 change the env_proc trigger from `timestamp` to `timestep`
This achieves the same results without requiring `timestamp` to be a mandatory state variable
2019-02-12 15:59:43 -02:00
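Commit eaf2f4d2 above switches the environment-process trigger from a `timestamp` state variable to the integer `timestep`, so `timestamp` no longer has to exist in the state. A hedged sketch of how such a trigger can work (function and key names are assumptions, not cadCAD's actual API):

```python
# Sketch of an environment-process trigger keyed on the integer
# timestep rather than a wall-clock timestamp state variable.
# Names are illustrative, not cadCAD's actual API.
def proc_trigger(trigger_step, update_f):
    """Apply update_f only at the given timestep; identity otherwise."""
    def trigger(step, value):
        return update_f(value) if step == trigger_step else value
    return trigger

env_processes = {
    # double the 'price' variable at timestep 3, pass through otherwise
    "price": proc_trigger(3, lambda v: v * 2),
}
```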
Markus 011e322706 comment print statement
it confuses the reader of the analysis notebook, as it looks like an output
2019-02-12 14:19:47 -02:00
Markus 36512142fb (time_step, mech_step) -> (timestep, substep) 2019-02-12 14:17:51 -02:00
Markus ccdf7ba80d Revert "rename mech_step > sub_timestep"
This reverts commit f7955b78fd.
2019-02-11 17:04:18 -02:00
Markus f7955b78fd rename mech_step > sub_timestep
open to other suggestions
2019-02-11 10:28:09 -02:00
Markus 893f1d280a comment 2019-02-11 10:12:07 -02:00
Markus 53c8764563 support list of mechsteps instead of dict 2019-02-11 10:01:36 -02:00
Markus Buhatem Koch ef7b42a39a
Merge pull request #28 from BlockScience/fix/mechsteps_ordering
support list of mechsteps instead of dict
2019-02-11 09:44:49 -02:00
Markus b19819bd7d rename keys in partial state update blocks 2019-02-11 09:42:01 -02:00
Markus 25aa912c2b rename arguments in the Configuration constructor 2019-02-11 09:10:55 -02:00
Markus Buhatem Koch a4c04ee20c
Merge pull request #27 from BlockScience/refactor/optional_configs
Refactor/optional configs
2019-02-11 08:35:16 -02:00
Joshua E. Jodesty 45f8fffe83 funct middleware 2019-02-08 13:14:10 -05:00
Joshua E. Jodesty 2d752176eb middleware working 2019-02-06 20:37:10 -05:00
Joshua E. Jodesty c58f2d65a6 middle ground pt. 3 2019-02-05 20:00:39 -05:00
Joshua E. Jodesty 6b4ed2dfce fixed import 2019-02-05 11:51:32 -05:00
Joshua E. Jodesty f9b3b1ea18 middleware middleground 2019-02-05 11:48:57 -05:00
Joshua E. Jodesty 52fbac381c fixed typo in sim doc 2019-02-04 20:15:20 -05:00
Joshua E. Jodesty 17362884dc middleware pt 2 2019-02-04 20:13:28 -05:00
Joshua E. Jodesty 20a8bd3026 middleware 2019-02-04 16:40:06 -05:00
Joshua E. Jodesty 3719ead0b1 env-proc sweep: Not producing multiple dicts 2019-02-01 12:43:42 -05:00
Joshua E. Jodesty eaf9cf21ff Not producing multiple dicts 2019-02-01 12:42:42 -05:00
Joshua E. Jodesty 5729ffc0ed param-sweep-multi-proc 2019-01-31 09:27:54 -05:00
Joshua E. Jodesty a9c97467ae pivot2 2019-01-30 16:04:51 -05:00
Joshua E. Jodesty 9e277d3cf0 pivot 2019-01-30 15:04:56 -05:00
Joshua E. Jodesty cd729bf0a1 ongoing 2019-01-28 16:27:16 -05:00
Joshua E. Jodesty e06cb00536 Merge 'staging's' core architecture into 'param-sweep-multi-proc' 2019-01-28 13:45:29 -05:00
Markus fae948d885 make some configuration arguments optional
Commit 2065287 would break configuration files that relied on the order of the arguments
2019-01-25 14:45:03 -02:00
Markus 0e5daaf723 Revert "make some arguments of the constructor optional"
This reverts commit 2065287a5b.
2019-01-25 13:50:22 -02:00
Joshua E. Jodesty 20977436ec unid'ed sim.py change 2019-01-23 12:57:56 -05:00
Markus 2065287a5b make some arguments of the constructor optional
SimCAD can run without seed, exogenous_states and env_processes. Making those arguments optional in the configuration constructor reduces the overhead in the user's configuration file.
2019-01-17 10:22:25 -02:00
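Commit 2065287 above makes `seed`, `exogenous_states` and `env_processes` optional in the `Configuration` constructor to cut boilerplate from user configs. A minimal sketch of the pattern (the parameter list is an assumption, not cadCAD's actual signature):

```python
# Sketch only: not cadCAD's actual Configuration signature.
# Keyword defaults let a minimal user config omit seed,
# exogenous_states and env_processes entirely.
class Configuration:
    def __init__(self, initial_state, partial_state_update_blocks,
                 seed=None, exogenous_states=None, env_processes=None):
        self.initial_state = initial_state
        self.partial_state_update_blocks = partial_state_update_blocks
        self.seed = seed or {}
        self.exogenous_states = exogenous_states or {}
        self.env_processes = env_processes or {}

# a minimal configuration no longer needs empty placeholders
config = Configuration({"x": 0}, [])
```

As the later commit fae948d8 notes, reordering existing parameters while doing this can silently break configs that pass arguments positionally, so defaults should be appended rather than reshuffled.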
Markus e81801c4cb support list of mechsteps instead of dict
Keeping support to dictionaries so it doesn't break existing configuration files. Issuing warning so that everyone is aware.
2019-01-17 09:39:21 -02:00
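Commit e81801c4 above accepts partial state update blocks ("mechsteps") as an ordered list while keeping the legacy dict form behind a warning. A sketch of that normalization (the helper name is assumed, not cadCAD's actual code):

```python
import warnings

# Assumed helper, not cadCAD's actual code: accept blocks as either
# a list (explicit ordering) or the legacy dict form, and warn on
# the latter so users know to migrate.
def normalize_blocks(blocks):
    if isinstance(blocks, dict):
        warnings.warn(
            "dict-style partial state update blocks are deprecated; "
            "pass a list to guarantee ordering",
            DeprecationWarning,
        )
        return list(blocks.values())
    return list(blocks)
```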
Joshua E. Jodesty b2ae2ded30 updated readme 2019-01-15 12:56:33 -05:00
Joshua E. Jodesty 421dc7f184 updated readme 2019-01-15 12:51:24 -05:00
Joshua E. Jodesty 02d848e617 updated readme 2019-01-15 12:50:06 -05:00
Joshua E. Jodesty ea56c55049 removed tar 2019-01-14 14:08:33 -05:00
Joshua E. Jodesty d8e911adc0 added tar 2019-01-14 13:00:03 -05:00
Joshua E. Jodesty 311219ca70 update readme 2019-01-11 14:22:03 -05:00
Joshua E. Jodesty be15871fe9
Merge pull request #21 from BlockScience/staging
Beta Release Cleanup
2019-01-11 12:29:55 -05:00
Joshua E. Jodesty 5ed273450f beta alignment: example_run 2019-01-11 12:05:57 -05:00
Joshua E. Jodesty 06fd76d096 new whl for beta 2019-01-11 11:52:45 -05:00
Joshua E. Jodesty a7d79c6806 update whl 2019-01-11 11:42:24 -05:00
Joshua E. Jodesty 18aa5e6da6 update whl 2019-01-11 11:40:59 -05:00
Joshua E. Jodesty c1da72c1d2
Merge pull request #20 from BlockScience/staging
SimCAD Beta Pre-Release
2019-01-11 11:28:26 -05:00
Joshua E. Jodesty 0116dc49d4 resolve local merge conflict for beta release 2019-01-11 11:26:04 -05:00
Joshua E. Jodesty c6f5e5cce2 update readme 2019-01-11 11:12:31 -05:00
Joshua E. Jodesty 9a7af89691 update 2019-01-11 11:00:47 -05:00
Joshua E. Jodesty 06de968a60 added licenses 2019-01-11 09:52:24 -05:00
Joshua E. Jodesty 460bbbacd7 moved licenses 2019-01-11 09:47:27 -05:00
Joshua E. Jodesty 796bf023ec added dist 2019-01-10 21:41:46 -05:00
Joshua E. Jodesty 19503e3d32 resolved ModuleNotFoundError 2019-01-10 21:38:38 -05:00
Joshua E. Jodesty 141680e3a1 absolut path issue 2019-01-10 14:02:16 -05:00
Joshua E. Jodesty f9f945c20f init 2019-01-10 13:44:55 -05:00
Joshua E. Jodesty ae25a9ff04 update setup 2019-01-10 10:12:15 -05:00
Joshua E. Jodesty 609e40ac40 master cleanup 2 2019-01-09 18:45:35 -05:00
Joshua E. Jodesty 45530ae91f Merge branch 'master' of https://github.com/BlockScience/DiffyQ-SimCAD 2019-01-09 18:43:48 -05:00
Joshua E. Jodesty 43e8b8cfab master cleanup 2019-01-09 18:43:45 -05:00
Joshua E. Jodesty df57071821 test 2019-01-09 18:28:33 -05:00
Joshua E. Jodesty 16fc324773 removed LICENSE.txt 2019-01-08 18:59:27 -05:00
Joshua E. Jodesty 73c6d21f12 check comments 2019-01-08 11:29:09 -05:00
Joshua E. Jodesty b3b50d0189
Merge pull request #18 from BlockScience/staging
Staging
2019-01-08 09:35:36 -04:00
Joshua E. Jodesty 7325980159
Merge pull request #17 from BlockScience/az-staging
Az staging
2019-01-08 09:35:15 -04:00
Joshua E. Jodesty 0eeed616e0 put back notebooks/test.ipynb 2019-01-07 20:42:42 -05:00
Joshua E. Jodesty c8634c5331 put back notebooks/test.ipynb 2019-01-07 20:41:35 -05:00
Joshua E. Jodesty 54a06a671b put back notebooks/test.ipynb 2019-01-07 20:39:42 -05:00
Joshua E. Jodesty 061e60e98c LICENSE.txt 2019-01-07 20:35:23 -05:00
Joshua E. Jodesty fe7c5a53fc blah 2019-01-07 20:31:05 -05:00
Joshua E. Jodesty 6bd54bd9d8 LICENSE.txt 2019-01-07 20:27:30 -05:00
Joshua E. Jodesty 80ec1a36b9 new readme 2019-01-07 18:48:07 -05:00
Joshua E. Jodesty cdcc207871 pre clean for az pt. 2.5 2019-01-07 18:15:15 -05:00
Joshua E. Jodesty 81a200fe9c prepack az pt. 2 2019-01-07 18:07:58 -05:00
Joshua E. Jodesty d39bca6700 pre clean for az pt. 1 2019-01-07 17:55:45 -05:00
Joshua E. Jodesty 1b52a4bebf 50% done 2019-01-07 17:40:09 -05:00
Joshua E. Jodesty ab3a9e370d
Initial commit 2019-01-07 13:50:34 -05:00
Joshua E. Jodesty 4f9e320109 refactored 2018-12-16 10:26:52 -05:00
Joshua E. Jodesty 2bb378fbf2 param-sweep: cadillac pt. 1 2018-12-16 00:47:42 -05:00
Joshua E. Jodesty 9201f9f20e param-sweep work 2018-12-15 23:08:47 -05:00
Joshua E. Jodesty 40f24f0909 added simulation documentation 2018-12-14 00:09:55 -05:00
Joshua E. Jodesty b394f2be46 merge issue ? 2018-12-14 00:06:04 -05:00
Joshua E. Jodesty a5623cc621 merge issue ? 2018-12-13 23:57:39 -05:00
Joshua E. Jodesty b7f6d284a7 demonstation 2018-12-13 23:44:35 -05:00
Joshua E. Jodesty 7285449242
Merge pull request #14 from BlockScience/documentation
add simcad documentation
2018-12-13 16:18:38 -05:00
zixuanzh f0f7456a76 move to Simulation.md 2018-12-13 16:05:12 -05:00
Markus 427a6a93cc Update notebooks/test.ipynb 2018-12-11 15:54:03 -02:00
Markus 8a04f670b3 Update notebooks/test.ipynb 2018-12-11 15:50:42 -02:00
Joshua E. Jodesty e2752161c3 misc. 2018-12-11 10:25:25 -05:00
Joshua E. Jodesty d7a25176ec investigating len(configs) in block_pipeline 2018-12-11 01:10:16 -05:00
Joshua E. Jodesty a0266641f7 multithreading mech_pipeline again, block_pipeline later 2018-12-11 01:07:23 -05:00
Joshua E. Jodesty 181b7cf986 multithreaded mech_pipeline in progress 2018-12-10 23:47:08 -05:00
Joshua E. Jodesty f55124fbb0 multithreaded mech_step, mech_pipeline in progress 2018-12-10 22:54:29 -05:00
Joshua E. Jodesty 980bba081a add tensor field to output 2018-12-10 10:06:01 -05:00
zixuanzh 0014700208 simcad documentation 2018-12-06 11:25:26 -05:00
Joshua E. Jodesty 42e93f501e gitignore 2018-12-05 17:20:55 -05:00
Joshua E. Jodesty 588d62331a readme 2018-12-05 17:18:05 -05:00
Joshua E. Jodesty e6c25fea95 dantes-inferno2 Pt.2 2018-12-04 21:00:18 -05:00
Joshua E. Jodesty 27ed2c9031 Revert "Correctly print tensor field"
This reverts commit 7efc2be2f1.
2018-12-04 20:44:22 -05:00
Joshua E. Jodesty a0160d7606 stuff 2018-12-04 20:44:19 -05:00
Joshua E. Jodesty 3554968c68 trying type checking 2018-12-04 20:08:01 -05:00
Joshua E. Jodesty 3508c58e3a trying type checking 2018-12-04 20:07:12 -05:00
Joshua E. Jodesty 7efc2be2f1 Correctly print tensor field 2018-12-04 16:39:53 -05:00
Joshua E. Jodesty e3179a6e8e
Merge pull request #13 from BlockScience/jj-dev
Bug Fix: Can't use environments without proc_trigger
2018-12-04 15:28:22 -05:00
Joshua E. Jodesty 4234bc03fd Bug: Can't use environments without proc_trigger 2018-12-04 14:47:41 -05:00
Joshua E. Jodesty a3bbf12325 Bug: Can't use environments without proc_trigger 2018-12-04 14:46:08 -05:00
Joshua E. Jodesty 932b158672 Include all param names in config & execution examples 2018-12-03 14:34:45 -05:00
Joshua E. Jodesty a1d83f0a28 trying to force merge to master 2018-12-03 12:56:48 -05:00
Joshua E. Jodesty 930ce5d17a update readme 2018-12-03 12:46:47 -05:00
Joshua E. Jodesty fb0c90124b rename 'sandbox' to 'simulation' 2018-12-03 12:32:22 -05:00
Joshua E. Jodesty 2599cc424a update readme, refactoring continued pt.3 barlin 2018-12-03 12:03:50 -05:00
Matt Barlin cb09b213cb Merge branch 'staging' of https://github.com/BlockScience/DiffyQ-SimCAD into staging 2018-12-03 11:50:58 -05:00
Matt Barlin 7a84af853f Barlin Cleanup 2018-12-03 11:50:49 -05:00
Joshua E. Jodesty 54bccfe915 update readme, refactoring continued pt.2 barlin 2018-12-03 11:48:40 -05:00
Joshua E. Jodesty dd0b50faf8 update readme, refactoring continued 2018-12-03 11:20:20 -05:00
Joshua E. Jodesty 9139e148fa Barlin 12/03/18 2018-12-03 11:16:54 -05:00
Joshua E. Jodesty 8e699b199d Hand-Off Refactor Pt. 1 2018-11-30 16:53:51 -05:00
Joshua E. Jodesty d60411b7b4 timedelta input 2018-11-30 13:37:18 -05:00
Joshua E. Jodesty 21f1155ae7 BUG: run value for Genesis State always last run for large datasets 2018-11-27 14:26:22 -05:00
Joshua E. Jodesty 4cc180b9d4 behavior id bug pt. 2 2018-11-26 15:53:17 -05:00
Joshua E. Jodesty d46e9ad255 behavior id bug 2018-11-26 15:04:15 -05:00
Markus cfbbb73e31 test notebook update and folder cleanup 2018-11-26 14:46:20 -02:00
Joshua E. Jodesty fc733b5283 reset to Genesis value between runs 2018-11-21 13:50:09 -05:00
Joshua E. Jodesty d7c423b8be reset to Genesis value between runs 2018-11-21 13:45:00 -05:00
Joshua E. Jodesty 7a28a9095a Execution Ctx pt.1 2018-11-19 22:19:57 -05:00
Joshua E. Jodesty 3cdf7689cd update readme 2018-11-18 14:34:27 -05:00
Joshua E. Jodesty e6a66f332c update readme 2018-11-18 14:31:42 -05:00
Joshua E. Jodesty 5fabb373e3 update readme 2018-11-18 14:28:28 -05:00
Joshua E. Jodesty 13d8d26ed4 Reafactor Pt. 6: changed project structure / pip install 2018-11-17 11:37:29 -05:00
Joshua E. Jodesty f1a9694470 updated readme, yay2 2018-11-15 20:24:08 -05:00
Joshua E. Jodesty 993a4e490b need license 2018-11-15 18:55:49 -05:00
Joshua E. Jodesty edc7882458 updated readme, yay 2018-11-15 18:53:55 -05:00
Joshua E. Jodesty 7ecd4e6c86 Reafactor Pt. 5: Improved Runtime Env / ux Pt.3 2018-11-15 18:48:39 -05:00
Joshua E. Jodesty cee90b564c Reafactor Pt. 4: Improved Runtime Env / ux Pt.2 2018-11-15 18:08:22 -05:00
Joshua E. Jodesty d29658ecbe Reafactor Pt. 3: Improved Runtime Env / ui Pt.1 2018-11-15 17:35:49 -05:00
Joshua E. Jodesty c420dce00d Reafactor Pt. 2 2018-11-15 16:22:47 -05:00
Joshua E. Jodesty 29ca7ac177 Reafactor Pt. 1 2018-11-15 16:22:21 -05:00
Joshua E. Jodesty 311c867000 Reafactor Pt. 1 2018-11-15 13:02:16 -05:00
Joshua E. Jodesty 7d96a78907 Dirty Parallelize Simulations 2018-11-14 16:07:09 -05:00
Joshua E. Jodesty 026e799c74 Parallelize Simulations 2018-11-14 14:58:55 -05:00
Joshua E. Jodesty 7fb728da6a rm --cached Pipfile & Pipfile.lock 2018-11-13 15:30:31 -05:00
Joshua E. Jodesty b5ef8b991c merge new readme 2018-11-13 15:28:01 -05:00
Joshua E. Jodesty f30778873e .gitignore Pipfile & Pipfile.lock + etc. 2018-11-13 15:25:45 -05:00
Markus Buhatem Koch f0ada7b820
Update README.md 2018-11-13 18:16:15 -02:00
Joshua E. Jodesty ba2a6484f9 Allow for missing keys in dictionaries of combined behaviors 2018-11-13 15:12:23 -05:00
Joshua E. Jodesty 0e276006a1
Merge pull request #7 from BlockScience/mk-missing-keys
Allow for missing keys in dictionaries of combined behaviors
2018-11-13 14:44:22 -05:00
Joshua E. Jodesty afc48e72b7 Fill output NANs with previous value 2018-11-13 14:13:00 -05:00
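The fix in commit afc48e7 above, filling output NaNs with the previous value, corresponds to a forward fill over the results table. A minimal pandas illustration (the column name is illustrative):

```python
import pandas as pd

# Forward-fill NaNs in a results column with the previous row's
# value, as the commit above describes. Column name is illustrative.
df = pd.DataFrame({"x": [1.0, None, 3.0]})
filled = df.ffill()
```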
Markus Buhatem Koch cc9d85e384
fix duplicates 2018-11-13 16:10:03 -02:00
Markus Buhatem Koch 284b10e8f0
support missing keys in combined behaviors 2018-11-13 16:05:24 -02:00
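Commit 284b10e8 above lets combined behavior (policy) outputs tolerate behaviors that omit some keys. A sketch of that aggregation, under the assumption that values are summed per key:

```python
from collections import defaultdict

# Assumed semantics: sum behavior outputs per key, treating a
# missing key in any one behavior's dict as contributing nothing.
def combine_behaviors(outputs):
    combined = defaultdict(int)
    for out in outputs:
        for key, value in out.items():
            combined[key] += value
    return dict(combined)
```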
Joshua E. Jodesty f6f6e7b3fd Enable: behavior output to be a map of behavior function to value pt.2 2018-11-13 08:24:05 -05:00
Joshua E. Jodesty 2d4d20eb5e Enable: behavior output to be a map of behavior function to value 2018-11-12 10:09:11 -05:00
Joshua E. Jodesty 15c1d2fa8c bug: Cannot run with single state update 2018-11-08 17:42:18 -05:00
Joshua E. Jodesty 84963ae8c1 save problems with pycharm 2018-11-08 16:14:25 -05:00
Joshua E. Jodesty ee5f912908 exo_proc per ts decorator & refactor mech_step 2018-11-08 10:18:47 -05:00
Joshua E. Jodesty 098fc04f3d bug fix: not displaying all mech steps 2018-11-06 15:04:23 -05:00
Joshua E. Jodesty 1d820486ae can accept a mechanism config with at least a single state 2018-11-06 09:22:21 -05:00
Joshua E. Jodesty f2bf23cc37 identity fix pt.2 (behaviors) 2018-11-06 08:22:40 -05:00
Joshua E. Jodesty 16fe59c106 identity fix pt.1 2018-10-30 08:51:26 +01:00
Joshua E. Jodesty b4f4bc8b96 vacay cleanup 2018-10-27 14:39:34 +02:00
Joshua E. Jodesty bc2a1da70c
Merge pull request #3 from BlockScience/staging
Staging
2018-09-24 16:11:30 -04:00
Markus Buhatem Koch afbd3a6073
Merge pull request #2 from BlockScience/structure
install common packages
2018-09-17 13:39:00 -03:00
141 changed files with 17051 additions and 5249 deletions

.gitignore

@@ -1,8 +1,30 @@
.ipynb_checkpoints/*
.idea
jupyter notebook
.ipynb_checkpoints
.DS_Store
.idea
engine/__pycache__
engine/.ipynb_checkpoints
notebooks/.ipynb_checkpoints
ui/__pycache__
SimCAD.egg-info
.pytest_cache/
notebooks
*.egg-info
__pycache__
Pipfile
Pipfile.lock
results
.mypy_cache
*.csv
simulations/.ipynb_checkpoints
simulations/validation/config3.py
cadCAD.egg-info
build
cadCAD.egg-info
testing/example.py
testing/example2.py
testing/multi_config_test.py
testing/udo.py
testing/udo_test.py
Simulation.md
monkeytype.sqlite3


@ -1,6 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="$PROJECT_DIR$" vcs="Git" />
</component>
</project>

29
AUTHORS.txt Normal file

@ -0,0 +1,29 @@
Authors
=======
cadCAD was originally implemented by Joshua E. Jodesty and designed by Michael Zargham, Markus B. Koch, and
Matthew V. Barlin from 2018 to 2019.
Project Maintainers:
- Joshua E. Jodesty <joshua@block.science, joshua.jodesty@gmail.com>
- Markus B. Koch <markus@block.science>
Contributors:
- Joshua E. Jodesty
- Markus B. Koch
- Matthew V. Barlin
- Michael Zargham
- Zixuan Zhang
- Charles Rice
We'd also like to thank:
- Andrew Clark
- Nikhil Jamdade
- Nick Hirannet
- Jonathan Gabler
- Chris Frazier
- Harry Goodnight
- Charlie Hoppes

21
CONTRIBUTING.md Normal file

@ -0,0 +1,21 @@
# Contributing to cadCAD (Draft)
:+1::tada: First off, thanks for taking the time to contribute! :tada::+1:
The following is a set of guidelines for contributing to cadCAD. These are mostly guidelines, not rules.
Use your best judgment, and feel free to propose changes to this document in a pull request.
### Pull Requests:
Pull request (PR) flow is denoted with "->".
General Template:
fork/branch -> BlockScience/staging
Contributing a new feature:
fork/feature -> BlockScience/staging
Contributing to an existing feature:
fork/feature -> BlockScience/feature
Thanks! :heart:

21
LICENSE.txt Normal file

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2018-2019 BlockScience
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

17
Pipfile

@ -1,17 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
ipykernel = "*"
pandas = "*"
jupyter = "*"
scipy = "*"
matplotlib = "*"
seaborn = "*"
[dev-packages]
[requires]
python_version = "3.6"

550
Pipfile.lock generated

@ -1,550 +0,0 @@
{
"_meta": {
"hash": {
"sha256": "76927ed95e0de5668160ccfcd430fdb91ab2b1597b3e55e14a89fb3855f05757"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.6"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple",
"verify_ssl": true
}
]
},
"default": {
"appnope": {
"hashes": [
"sha256:5b26757dc6f79a3b7dc9fab95359328d5747fcb2409d331ea66d0272b90ab2a0",
"sha256:8b995ffe925347a2138d7ac0fe77155e4311a0ea6d6da4f5128fe4b3cbe5ed71"
],
"markers": "sys_platform == 'darwin'",
"version": "==0.1.0"
},
"backcall": {
"hashes": [
"sha256:38ecd85be2c1e78f77fd91700c76e14667dc21e2713b63876c0eb901196e01e4",
"sha256:bbbf4b1e5cd2bdb08f915895b51081c041bac22394fdfcfdfbe9f14b77c08bf2"
],
"version": "==0.1.0"
},
"bleach": {
"hashes": [
"sha256:0ee95f6167129859c5dce9b1ca291ebdb5d8cd7e382ca0e237dfd0dad63f63d8",
"sha256:24754b9a7d530bf30ce7cbc805bc6cce785660b4a10ff3a43633728438c105ab"
],
"version": "==2.1.4"
},
"cycler": {
"hashes": [
"sha256:1d8a5ae1ff6c5cf9b93e8811e581232ad8920aeec647c37316ceac982b08cb2d",
"sha256:cd7b2d1018258d7247a71425e9f26463dfb444d411c39569972f4ce586b0c9d8"
],
"version": "==0.10.0"
},
"decorator": {
"hashes": [
"sha256:2c51dff8ef3c447388fe5e4453d24a2bf128d3a4c32af3fabef1f01c6851ab82",
"sha256:c39efa13fbdeb4506c476c9b3babf6a718da943dab7811c206005a4a956c080c"
],
"version": "==4.3.0"
},
"defusedxml": {
"hashes": [
"sha256:24d7f2f94f7f3cb6061acb215685e5125fbcdc40a857eff9de22518820b0a4f4",
"sha256:702a91ade2968a82beb0db1e0766a6a273f33d4616a6ce8cde475d8e09853b20"
],
"version": "==0.5.0"
},
"entrypoints": {
"hashes": [
"sha256:10ad569bb245e7e2ba425285b9fa3e8178a0dc92fc53b1e1c553805e15a8825b",
"sha256:d2d587dde06f99545fb13a383d2cd336a8ff1f359c5839ce3a64c917d10c029f"
],
"markers": "python_version >= '2.7'",
"version": "==0.2.3"
},
"html5lib": {
"hashes": [
"sha256:20b159aa3badc9d5ee8f5c647e5efd02ed2a66ab8d354930bd9ff139fc1dc0a3",
"sha256:66cb0dcfdbbc4f9c3ba1a63fdb511ffdbd4f513b2b6d81b80cd26ce6b3fb3736"
],
"version": "==1.0.1"
},
"ipykernel": {
"hashes": [
"sha256:00d88b7e628e4e893359119b894451611214bce09776a3bf8248fe42cb48ada6",
"sha256:a706b975376efef98b70e10cd167ab9506cf08a689d689a3c7daf344c15040f6",
"sha256:c5a498c70f7765c34f3397cf943b069057f5bef4e0218e4cfbb733e9f38fa5fa"
],
"index": "pypi",
"version": "==4.9.0"
},
"ipython": {
"hashes": [
"sha256:007dcd929c14631f83daff35df0147ea51d1af420da303fd078343878bd5fb62",
"sha256:b0f2ef9eada4a68ef63ee10b6dde4f35c840035c50fd24265f8052c98947d5a4"
],
"markers": "python_version >= '3.3'",
"version": "==6.5.0"
},
"ipython-genutils": {
"hashes": [
"sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8",
"sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"
],
"version": "==0.2.0"
},
"ipywidgets": {
"hashes": [
"sha256:0f2b5cde9f272cb49d52f3f0889fdd1a7ae1e74f37b48dac35a83152780d2b7b",
"sha256:a3e224f430163f767047ab9a042fc55adbcab0c24bbe6cf9f306c4f89fdf0ba3"
],
"version": "==7.4.2"
},
"jedi": {
"hashes": [
"sha256:b409ed0f6913a701ed474a614a3bb46e6953639033e31f769ca7581da5bd1ec1",
"sha256:c254b135fb39ad76e78d4d8f92765ebc9bf92cbc76f49e97ade1d5f5121e1f6f"
],
"version": "==0.12.1"
},
"jinja2": {
"hashes": [
"sha256:74c935a1b8bb9a3947c50a54766a969d4846290e1e788ea44c1392163723c3bd",
"sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4"
],
"version": "==2.10"
},
"jsonschema": {
"hashes": [
"sha256:000e68abd33c972a5248544925a0cae7d1125f9bf6c58280d37546b946769a08",
"sha256:6ff5f3180870836cae40f06fa10419f557208175f13ad7bc26caa77beb1f6e02"
],
"version": "==2.6.0"
},
"jupyter": {
"hashes": [
"sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7",
"sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78",
"sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"
],
"index": "pypi",
"version": "==1.0.0"
},
"jupyter-client": {
"hashes": [
"sha256:27befcf0446b01e29853014d6a902dd101ad7d7f94e2252b1adca17c3466b761",
"sha256:59e6d791e22a8002ad0e80b78c6fd6deecab4f9e1b1aa1a22f4213de271b29ea"
],
"version": "==5.2.3"
},
"jupyter-console": {
"hashes": [
"sha256:3f928b817fc82cda95e431eb4c2b5eb21be5c483c2b43f424761a966bb808094",
"sha256:545dedd3aaaa355148093c5609f0229aeb121b4852995c2accfa64fe3e0e55cd"
],
"version": "==5.2.0"
},
"jupyter-core": {
"hashes": [
"sha256:927d713ffa616ea11972534411544589976b2493fc7e09ad946e010aa7eb9970",
"sha256:ba70754aa680300306c699790128f6fbd8c306ee5927976cbe48adacf240c0b7"
],
"version": "==4.4.0"
},
"kiwisolver": {
"hashes": [
"sha256:0ee4ed8b3ae8f5f712b0aa9ebd2858b5b232f1b9a96b0943dceb34df2a223bc3",
"sha256:0f7f532f3c94e99545a29f4c3f05637f4d2713e7fd91b4dd8abfc18340b86cd5",
"sha256:1a078f5dd7e99317098f0e0d490257fd0349d79363e8c923d5bb76428f318421",
"sha256:1aa0b55a0eb1bd3fa82e704f44fb8f16e26702af1a073cc5030eea399e617b56",
"sha256:2874060b91e131ceeff00574b7c2140749c9355817a4ed498e82a4ffa308ecbc",
"sha256:379d97783ba8d2934d52221c833407f20ca287b36d949b4bba6c75274bcf6363",
"sha256:3b791ddf2aefc56382aadc26ea5b352e86a2921e4e85c31c1f770f527eb06ce4",
"sha256:4329008a167fac233e398e8a600d1b91539dc33c5a3eadee84c0d4b04d4494fa",
"sha256:45813e0873bbb679334a161b28cb9606d9665e70561fd6caa8863e279b5e464b",
"sha256:53a5b27e6b5717bdc0125338a822605084054c80f382051fb945d2c0e6899a20",
"sha256:574f24b9805cb1c72d02b9f7749aa0cc0b81aa82571be5201aa1453190390ae5",
"sha256:66f82819ff47fa67a11540da96966fb9245504b7f496034f534b81cacf333861",
"sha256:79e5fe3ccd5144ae80777e12973027bd2f4f5e3ae8eb286cabe787bed9780138",
"sha256:83410258eb886f3456714eea4d4304db3a1fc8624623fc3f38a487ab36c0f653",
"sha256:8b6a7b596ce1d2a6d93c3562f1178ebd3b7bb445b3b0dd33b09f9255e312a965",
"sha256:9576cb63897fbfa69df60f994082c3f4b8e6adb49cccb60efb2a80a208e6f996",
"sha256:95a25d9f3449046ecbe9065be8f8380c03c56081bc5d41fe0fb964aaa30b2195",
"sha256:a424f048bebc4476620e77f3e4d1f282920cef9bc376ba16d0b8fe97eec87cde",
"sha256:aaec1cfd94f4f3e9a25e144d5b0ed1eb8a9596ec36d7318a504d813412563a85",
"sha256:acb673eecbae089ea3be3dcf75bfe45fc8d4dcdc951e27d8691887963cf421c7",
"sha256:b15bc8d2c2848a4a7c04f76c9b3dc3561e95d4dabc6b4f24bfabe5fd81a0b14f",
"sha256:b1c240d565e977d80c0083404c01e4d59c5772c977fae2c483f100567f50847b",
"sha256:c595693de998461bcd49b8d20568c8870b3209b8ea323b2a7b0ea86d85864694",
"sha256:ce3be5d520b4d2c3e5eeb4cd2ef62b9b9ab8ac6b6fedbaa0e39cdb6f50644278",
"sha256:e0f910f84b35c36a3513b96d816e6442ae138862257ae18a0019d2fc67b041dc",
"sha256:ea36e19ac0a483eea239320aef0bd40702404ff8c7e42179a2d9d36c5afcb55c",
"sha256:efabbcd4f406b532206b8801058c8bab9e79645b9880329253ae3322b7b02cd5",
"sha256:f923406e6b32c86309261b8195e24e18b6a8801df0cfc7814ac44017bfcb3939"
],
"markers": "python_version != '3.1.*' and python_version >= '2.7' and python_version != '3.0.*' and python_version != '3.2.*' and python_version != '3.3.*'",
"version": "==1.0.1"
},
"markupsafe": {
"hashes": [
"sha256:a6be69091dac236ea9c6bc7d012beab42010fa914c459791d627dad4910eb665"
],
"version": "==1.0"
},
"matplotlib": {
"hashes": [
"sha256:0ba8e3ec1b0feddc6b068fe70dc38dcf2917e301ad8d2b3f848c14ad463a4157",
"sha256:10a48e33e64dbd95f0776ba162f379c5cc55301c2d155506e79ce0c26b52f2ce",
"sha256:1376535fe731adbba55ab9e48896de226b7e89dbb55390c5fbd8f7161b7ae3be",
"sha256:16f0f8ba22df1e2c9f06c87088de45742322fde282a93b5c744c0f969cf7932e",
"sha256:1c6c999f2212858021329537f8e0f98f3f29086ec3683511dd1ecec84409f51d",
"sha256:2316dc177fc7b3d8848b49365498de0c385b4c9bba511edddd24c34fbe3d37a4",
"sha256:3398bfb533482bf21974cecf28224dd23784ad4e4848be582903f7a2436ec12e",
"sha256:3477cb1e1061b34210acc43d20050be8444478ff50b8adfac5fe2b45fc97df01",
"sha256:3cc06333b8264428d02231804e2e726b902e9161dc16f573183dee6cb7ef621f",
"sha256:4259ea7cb2c238355ee13275eddd261d869cefbdeb18a65f35459589d6d17def",
"sha256:4addcf93234b6122f530f90f485fd3d00d158911fbc1ed24db3fa66cd49fe565",
"sha256:50c0e24bcbce9c54346f4a2f4e97b0ed111f0413ac3fe9954061ae1c8aa7021f",
"sha256:62ed7597d9e54db6e133420d779c642503c25eba390e1178d85dfb2ba0d05948",
"sha256:69f6d51e41a17f6a5f70c56bb10b8ded9f299609204495a7fa2782a3a755ffc5",
"sha256:6d232e49b74e3d2db22c63c25a9a0166d965e87e2b057f795487f1f244b61d9d",
"sha256:7355bf757ecacd5f0ac9dd9523c8e1a1103faadf8d33c22664178e17533f8ce5",
"sha256:886b1045c5105631f10c1cbc999f910e44d33af3e9c7efd68c2123efc06ab636",
"sha256:9e1f353edd7fc7e5e9101abd5bc0201946f77a1b59e0da49095086c03db856ed",
"sha256:b3a343dfcbe296dbe0f26c731beee72a792ff948407e6979524298ae7bc3234e",
"sha256:d93675af09ca497a25f4f8d62f3313cf0f21e45427a87487049fe84898b99909",
"sha256:e2409ef9d37804dfb566f39c962e6ed70f281ff516b8131b3e6b4e6442711ff1",
"sha256:f8b653b0f89938ba72e92ab080c2f3aa24c1b72e2f61add22880cd1b9a6e3cdd"
],
"index": "pypi",
"version": "==2.2.3"
},
"mistune": {
"hashes": [
"sha256:b4c512ce2fc99e5a62eb95a4aba4b73e5f90264115c40b70a21e1f7d4e0eac91",
"sha256:bc10c33bfdcaa4e749b779f62f60d6e12f8215c46a292d05e486b869ae306619"
],
"version": "==0.8.3"
},
"nbconvert": {
"hashes": [
"sha256:08d21cf4203fabafd0d09bbd63f06131b411db8ebeede34b0fd4be4548351779",
"sha256:a8a2749f972592aa9250db975304af6b7337f32337e523a2c995cc9e12c07807"
],
"version": "==5.4.0"
},
"nbformat": {
"hashes": [
"sha256:b9a0dbdbd45bb034f4f8893cafd6f652ea08c8c1674ba83f2dc55d3955743b0b",
"sha256:f7494ef0df60766b7cabe0a3651556345a963b74dbc16bc7c18479041170d402"
],
"version": "==4.4.0"
},
"notebook": {
"hashes": [
"sha256:66dd59e76e755584ae9450eb015c39f55d4bb1d8ec68f2c694d2b3cba7bf5c7e",
"sha256:e2c8e931cc19db4f8c63e6a396efbc13a228b2cb5b2919df011b946f28239a08"
],
"version": "==5.6.0"
},
"numpy": {
"hashes": [
"sha256:1c362ad12dd09a43b348bb28dd2295dd9cdf77f41f0f45965e04ba97f525b864",
"sha256:2156a06bd407918df4ac0122df6497a9c137432118f585e5b17d543e593d1587",
"sha256:24e4149c38489b51fc774b1e1faa9103e82f73344d7a00ba66f6845ab4769f3f",
"sha256:340ec1697d9bb3a9c464028af7a54245298502e91178bddb4c37626d36e197b7",
"sha256:35db8d419345caa4eeaa65cd63f34a15208acd87530a30f0bc25fc84f55c8c80",
"sha256:361370e9b7f5e44c41eee29f2bb5cb3b755abb4b038bce6d6cbe08db7ff9cb74",
"sha256:36e8dcd1813ca92ce7e4299120cee6c03adad33d89b54862c1b1a100443ac399",
"sha256:378378973546ecc1dfaf9e24c160d683dd04df871ecd2dcc86ce658ca20f92c0",
"sha256:419e6faee16097124ee627ed31572c7e80a1070efa25260b78097cca240e219a",
"sha256:4287104c24e6a09b9b418761a1e7b1bbde65105f110690ca46a23600a3c606b8",
"sha256:549f3e9778b148a47f4fb4682955ed88057eb627c9fe5467f33507c536deda9d",
"sha256:5e359e9c531075220785603e5966eef20ccae9b3b6b8a06fdfb66c084361ce92",
"sha256:5ee7f3dbbdba0da75dec7e94bd7a2b10fe57a83e1b38e678200a6ad8e7b14fdc",
"sha256:62d55e96ec7b117d3d5e618c15efcf769e70a6effaee5842857b64fb4883887a",
"sha256:719b6789acb2bc86ea9b33a701d7c43dc2fc56d95107fd3c5b0a8230164d4dfb",
"sha256:7a70f2b60d48828cba94a54a8776b61a9c2657a803d47f5785f8062e3a9c7c55",
"sha256:7b9e37f194f8bcdca8e9e6af92e2cbad79e360542effc2dd6b98d63955d8d8a3",
"sha256:83b8fc18261b70f45bece2d392537c93dc81eb6c539a16c9ac994c47fc79f09a",
"sha256:9473ad28375710ab18378e72b59422399b27e957e9339c413bf00793b4b12df0",
"sha256:95b085b253080e5d09f7826f5e27dce067bae813a132023a77b739614a29de6e",
"sha256:98b86c62c08c2e5dc98a9c856d4a95329d11b1c6058cb9b5191d5ea6891acd09",
"sha256:a3bd01d6d3ed3d7c06d7f9979ba5d68281f15383fafd53b81aa44b9191047cf8",
"sha256:c81a6afc1d2531a9ada50b58f8c36197f8418ef3d0611d4c1d7af93fdcda764f",
"sha256:ce75ed495a746e3e78cfa22a77096b3bff2eda995616cb7a542047f233091268",
"sha256:dae8618c0bcbfcf6cf91350f8abcdd84158323711566a8c5892b5c7f832af76f",
"sha256:df0b02c6705c5d1c25cc35c7b5d6b6f9b3b30833f9d178843397ae55ecc2eebb",
"sha256:e3660744cda0d94b90141cdd0db9308b958a372cfeee8d7188fdf5ad9108ea82",
"sha256:f2362d0ca3e16c37782c1054d7972b8ad2729169567e3f0f4e5dd3cdf85f188e"
],
"markers": "python_version != '3.3.*' and python_version != '3.1.*' and python_version != '3.0.*' and python_version >= '2.7' and python_version != '3.2.*'",
"version": "==1.15.1"
},
"pandas": {
"hashes": [
"sha256:11975fad9edbdb55f1a560d96f91830e83e29bed6ad5ebf506abda09818eaf60",
"sha256:12e13d127ca1b585dd6f6840d3fe3fa6e46c36a6afe2dbc5cb0b57032c902e31",
"sha256:1c87fcb201e1e06f66e23a61a5fea9eeebfe7204a66d99df24600e3f05168051",
"sha256:242e9900de758e137304ad4b5663c2eff0d798c2c3b891250bd0bd97144579da",
"sha256:26c903d0ae1542890cb9abadb4adcb18f356b14c2df46e4ff657ae640e3ac9e7",
"sha256:2e1e88f9d3e5f107b65b59cd29f141995597b035d17cc5537e58142038942e1a",
"sha256:31b7a48b344c14691a8e92765d4023f88902ba3e96e2e4d0364d3453cdfd50db",
"sha256:4fd07a932b4352f8a8973761ab4e84f965bf81cc750fb38e04f01088ab901cb8",
"sha256:5b24ca47acf69222e82530e89111dd9d14f9b970ab2cd3a1c2c78f0c4fbba4f4",
"sha256:647b3b916cc8f6aeba240c8171be3ab799c3c1b2ea179a3be0bd2712c4237553",
"sha256:66b060946046ca27c0e03e9bec9bba3e0b918bafff84c425ca2cc2e157ce121e",
"sha256:6efa9fa6e1434141df8872d0fa4226fc301b17aacf37429193f9d70b426ea28f",
"sha256:be4715c9d8367e51dbe6bc6d05e205b1ae234f0dc5465931014aa1c4af44c1ba",
"sha256:bea90da782d8e945fccfc958585210d23de374fa9294a9481ed2abcef637ebfc",
"sha256:d318d77ab96f66a59e792a481e2701fba879e1a453aefeebdb17444fe204d1ed",
"sha256:d785fc08d6f4207437e900ffead930a61e634c5e4f980ba6d3dc03c9581748c7",
"sha256:de9559287c4fe8da56e8c3878d2374abc19d1ba2b807bfa7553e912a8e5ba87c",
"sha256:f4f98b190bb918ac0bc0e3dd2ab74ff3573da9f43106f6dba6385406912ec00f",
"sha256:f71f1a7e2d03758f6e957896ed696254e2bc83110ddbc6942018f1a232dd9dad",
"sha256:fb944c8f0b0ab5c1f7846c686bc4cdf8cde7224655c12edcd59d5212cd57bec0"
],
"index": "pypi",
"version": "==0.23.4"
},
"pandocfilters": {
"hashes": [
"sha256:b3dd70e169bb5449e6bc6ff96aea89c5eea8c5f6ab5e207fc2f521a2cf4a0da9"
],
"version": "==1.4.2"
},
"parso": {
"hashes": [
"sha256:35704a43a3c113cce4de228ddb39aab374b8004f4f2407d070b6a2ca784ce8a2",
"sha256:895c63e93b94ac1e1690f5fdd40b65f07c8171e3e53cbd7793b5b96c0e0a7f24"
],
"version": "==0.3.1"
},
"pexpect": {
"hashes": [
"sha256:2a8e88259839571d1251d278476f3eec5db26deb73a70be5ed5dc5435e418aba",
"sha256:3fbd41d4caf27fa4a377bfd16fef87271099463e6fa73e92a52f92dfee5d425b"
],
"markers": "sys_platform != 'win32'",
"version": "==4.6.0"
},
"pickleshare": {
"hashes": [
"sha256:84a9257227dfdd6fe1b4be1319096c20eb85ff1e82c7932f36efccfe1b09737b",
"sha256:c9a2541f25aeabc070f12f452e1f2a8eae2abd51e1cd19e8430402bdf4c1d8b5"
],
"version": "==0.7.4"
},
"prometheus-client": {
"hashes": [
"sha256:17bc24c09431644f7c65d7bce9f4237252308070b6395d6d8e87767afe867e24"
],
"version": "==0.3.1"
},
"prompt-toolkit": {
"hashes": [
"sha256:1df952620eccb399c53ebb359cc7d9a8d3a9538cb34c5a1344bdbeb29fbcc381",
"sha256:3f473ae040ddaa52b52f97f6b4a493cfa9f5920c255a12dc56a7d34397a398a4",
"sha256:858588f1983ca497f1cf4ffde01d978a3ea02b01c8a26a8bbc5cd2e66d816917"
],
"version": "==1.0.15"
},
"ptyprocess": {
"hashes": [
"sha256:923f299cc5ad920c68f2bc0bc98b75b9f838b93b599941a6b63ddbc2476394c0",
"sha256:d7cc528d76e76342423ca640335bd3633420dc1366f258cb31d05e865ef5ca1f"
],
"markers": "os_name != 'nt'",
"version": "==0.6.0"
},
"pygments": {
"hashes": [
"sha256:78f3f434bcc5d6ee09020f92ba487f95ba50f1e3ef83ae96b9d5ffa1bab25c5d",
"sha256:dbae1046def0efb574852fab9e90209b23f556367b5a320c0bcb871c77c3e8cc"
],
"version": "==2.2.0"
},
"pyparsing": {
"hashes": [
"sha256:0832bcf47acd283788593e7a0f542407bd9550a55a8a8435214a1960e04bcb04",
"sha256:fee43f17a9c4087e7ed1605bd6df994c6173c1e977d7ade7b651292fab2bd010"
],
"version": "==2.2.0"
},
"python-dateutil": {
"hashes": [
"sha256:1adb80e7a782c12e52ef9a8182bebeb73f1d7e24e374397af06fb4956c8dc5c0",
"sha256:e27001de32f627c22380a688bcc43ce83504a7bc5da472209b4c70f02829f0b8"
],
"version": "==2.7.3"
},
"pytz": {
"hashes": [
"sha256:a061aa0a9e06881eb8b3b2b43f05b9439d6583c206d0a6c340ff72a7b6669053",
"sha256:ffb9ef1de172603304d9d2819af6f5ece76f2e85ec10692a524dd876e72bf277"
],
"version": "==2018.5"
},
"pyzmq": {
"hashes": [
"sha256:25a0715c8f69cf72f67cfe5a68a3f3ed391c67c063d2257bec0fe7fc2c7f08f8",
"sha256:2bab63759632c6b9e0d5bf19cc63c3b01df267d660e0abcf230cf0afaa966349",
"sha256:30ab49d99b24bf0908ebe1cdfa421720bfab6f93174e4883075b7ff38cc555ba",
"sha256:32c7ca9fc547a91e3c26fc6080b6982e46e79819e706eb414dd78f635a65d946",
"sha256:41219ae72b3cc86d97557fe5b1ef5d1adc1057292ec597b50050874a970a39cf",
"sha256:4b8c48a9a13cea8f1f16622f9bd46127108af14cd26150461e3eab71e0de3e46",
"sha256:55724997b4a929c0d01b43c95051318e26ddbae23565018e138ae2dc60187e59",
"sha256:65f0a4afae59d4fc0aad54a917ab599162613a761b760ba167d66cc646ac3786",
"sha256:6f88591a8b246f5c285ee6ce5c1bf4f6bd8464b7f090b1333a446b6240a68d40",
"sha256:75022a4c60dcd8765bb9ca32f6de75a0ec83b0d96e0309dc479f4c7b21f26cb7",
"sha256:76ea493bfab18dcb090d825f3662b5612e2def73dffc196d51a5194b0294a81d",
"sha256:7b60c045b80709e4e3c085bab9b691e71761b44c2b42dbb047b8b498e7bc16b3",
"sha256:8e6af2f736734aef8ed6f278f9f552ec7f37b1a6b98e59b887484a840757f67d",
"sha256:9ac2298e486524331e26390eac14e4627effd3f8e001d4266ed9d8f1d2d31cce",
"sha256:9ba650f493a9bc1f24feca1d90fce0e5dd41088a252ac9840131dfbdbf3815ca",
"sha256:a02a4a385e394e46012dc83d2e8fd6523f039bb52997c1c34a2e0dd49ed839c1",
"sha256:a3ceee84114d9f5711fa0f4db9c652af0e4636c89eabc9b7f03a3882569dd1ed",
"sha256:a72b82ac1910f2cf61a49139f4974f994984475f771b0faa730839607eeedddf",
"sha256:ab136ac51027e7c484c53138a0fab4a8a51e80d05162eb7b1585583bcfdbad27",
"sha256:c095b224300bcac61e6c445e27f9046981b1ac20d891b2f1714da89d34c637c8",
"sha256:c5cc52d16c06dc2521340d69adda78a8e1031705924e103c0eb8fc8af861d810",
"sha256:d612e9833a89e8177f8c1dc68d7b4ff98d3186cd331acd616b01bbdab67d3a7b",
"sha256:e828376a23c66c6fe90dcea24b4b72cd774f555a6ee94081670872918df87a19",
"sha256:e9767c7ab2eb552796440168d5c6e23a99ecaade08dda16266d43ad461730192",
"sha256:ebf8b800d42d217e4710d1582b0c8bff20cdcb4faad7c7213e52644034300924"
],
"markers": "python_version != '3.1*' and python_version >= '2.7' and python_version != '3.2*' and python_version != '3.0*'",
"version": "==17.1.2"
},
"qtconsole": {
"hashes": [
"sha256:298431d376d71a02eb1a04fe6e72dd4beb82b83423d58b17d532e0af838e62fa",
"sha256:7870b19e6a6b0ab3acc09ee65463c0ca7568b3a01a6902d7c4e1ed2c4fc4e176"
],
"version": "==4.4.1"
},
"scipy": {
"hashes": [
"sha256:0611ee97296265af4a21164a5323f8c1b4e8e15c582d3dfa7610825900136bb7",
"sha256:08237eda23fd8e4e54838258b124f1cd141379a5f281b0a234ca99b38918c07a",
"sha256:0e645dbfc03f279e1946cf07c9c754c2a1859cb4a41c5f70b25f6b3a586b6dbd",
"sha256:0e9bb7efe5f051ea7212555b290e784b82f21ffd0f655405ac4f87e288b730b3",
"sha256:108c16640849e5827e7d51023efb3bd79244098c3f21e4897a1007720cb7ce37",
"sha256:340ef70f5b0f4e2b4b43c8c8061165911bc6b2ad16f8de85d9774545e2c47463",
"sha256:3ad73dfc6f82e494195144bd3a129c7241e761179b7cb5c07b9a0ede99c686f3",
"sha256:3b243c77a822cd034dad53058d7c2abf80062aa6f4a32e9799c95d6391558631",
"sha256:404a00314e85eca9d46b80929571b938e97a143b4f2ddc2b2b3c91a4c4ead9c5",
"sha256:423b3ff76957d29d1cce1bc0d62ebaf9a3fdfaf62344e3fdec14619bb7b5ad3a",
"sha256:42d9149a2fff7affdd352d157fa5717033767857c11bd55aa4a519a44343dfef",
"sha256:625f25a6b7d795e8830cb70439453c9f163e6870e710ec99eba5722775b318f3",
"sha256:698c6409da58686f2df3d6f815491fd5b4c2de6817a45379517c92366eea208f",
"sha256:729f8f8363d32cebcb946de278324ab43d28096f36593be6281ca1ee86ce6559",
"sha256:8190770146a4c8ed5d330d5b5ad1c76251c63349d25c96b3094875b930c44692",
"sha256:878352408424dffaa695ffedf2f9f92844e116686923ed9aa8626fc30d32cfd1",
"sha256:8b984f0821577d889f3c7ca8445564175fb4ac7c7f9659b7c60bef95b2b70e76",
"sha256:8f841bbc21d3dad2111a94c490fb0a591b8612ffea86b8e5571746ae76a3deac",
"sha256:c22b27371b3866c92796e5d7907e914f0e58a36d3222c5d436ddd3f0e354227a",
"sha256:d0cdd5658b49a722783b8b4f61a6f1f9c75042d0e29a30ccb6cacc9b25f6d9e2",
"sha256:d40dc7f494b06dcee0d303e51a00451b2da6119acbeaccf8369f2d29e28917ac",
"sha256:d8491d4784aceb1f100ddb8e31239c54e4afab8d607928a9f7ef2469ec35ae01",
"sha256:dfc5080c38dde3f43d8fbb9c0539a7839683475226cf83e4b24363b227dfe552",
"sha256:e24e22c8d98d3c704bb3410bce9b69e122a8de487ad3dbfe9985d154e5c03a40",
"sha256:e7a01e53163818d56eabddcafdc2090e9daba178aad05516b20c6591c4811020",
"sha256:ee677635393414930541a096fc8e61634304bb0153e4e02b75685b11eba14cae",
"sha256:f0521af1b722265d824d6ad055acfe9bd3341765735c44b5a4d0069e189a0f40",
"sha256:f25c281f12c0da726c6ed00535ca5d1622ec755c30a3f8eafef26cf43fede694"
],
"index": "pypi",
"version": "==1.1.0"
},
"seaborn": {
"hashes": [
"sha256:42e627b24e849c2d3bbfd059e00005f6afbc4a76e4895baf44ae23fe8a4b09a5",
"sha256:76c83f794ca320fb6b23a7c6192d5e185a5fcf4758966a0c0a54baee46d41e2f"
],
"index": "pypi",
"version": "==0.9.0"
},
"send2trash": {
"hashes": [
"sha256:60001cc07d707fe247c94f74ca6ac0d3255aabcb930529690897ca2a39db28b2",
"sha256:f1691922577b6fa12821234aeb57599d887c4900b9ca537948d2dac34aea888b"
],
"version": "==1.5.0"
},
"simplegeneric": {
"hashes": [
"sha256:dc972e06094b9af5b855b3df4a646395e43d1c9d0d39ed345b7393560d0b9173"
],
"version": "==0.8.1"
},
"six": {
"hashes": [
"sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9",
"sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb"
],
"version": "==1.11.0"
},
"terminado": {
"hashes": [
"sha256:55abf9ade563b8f9be1f34e4233c7b7bde726059947a593322e8a553cc4c067a",
"sha256:65011551baff97f5414c67018e908110693143cfbaeb16831b743fe7cad8b927"
],
"version": "==0.8.1"
},
"testpath": {
"hashes": [
"sha256:039fa6a6c9fd3488f8336d23aebbfead5fa602c4a47d49d83845f55a595ec1b4",
"sha256:0d5337839c788da5900df70f8e01015aec141aa3fe7936cb0d0a2953f7ac7609"
],
"version": "==0.3.1"
},
"tornado": {
"hashes": [
"sha256:0662d28b1ca9f67108c7e3b77afabfb9c7e87bde174fbda78186ecedc2499a9d",
"sha256:4e5158d97583502a7e2739951553cbd88a72076f152b4b11b64b9a10c4c49409",
"sha256:732e836008c708de2e89a31cb2fa6c0e5a70cb60492bee6f1ea1047500feaf7f",
"sha256:8154ec22c450df4e06b35f131adc4f2f3a12ec85981a203301d310abf580500f",
"sha256:8e9d728c4579682e837c92fdd98036bd5cdefa1da2aaf6acf26947e6dd0c01c5",
"sha256:d4b3e5329f572f055b587efc57d29bd051589fb5a43ec8898c77a47ec2fa2bbb",
"sha256:e5f2585afccbff22390cddac29849df463b252b711aa2ce7c5f3f342a5b3b444"
],
"markers": "python_version >= '2.7' and python_version != '3.0.*' and python_version != '3.1.*' and python_version != '3.3.*' and python_version != '3.2.*'",
"version": "==5.1.1"
},
"traitlets": {
"hashes": [
"sha256:9c4bd2d267b7153df9152698efb1050a5d84982d3384a37b2c1f7723ba3e7835",
"sha256:c6cb5e6f57c5a9bdaa40fa71ce7b4af30298fbab9ece9815b5d995ab6217c7d9"
],
"version": "==4.3.2"
},
"wcwidth": {
"hashes": [
"sha256:3df37372226d6e63e1b1e1eda15c594bca98a22d33a23832a90998faa96bc65e",
"sha256:f4ebe71925af7b40a864553f761ed559b43544f8f71746c2d756c7fe788ade7c"
],
"version": "==0.1.7"
},
"webencodings": {
"hashes": [
"sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78",
"sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"
],
"version": "==0.5.1"
},
"widgetsnbextension": {
"hashes": [
"sha256:14b2c65f9940c9a7d3b70adbe713dbd38b5ec69724eebaba034d1036cf3d4740",
"sha256:fa618be8435447a017fd1bf2c7ae922d0428056cfc7449f7a8641edf76b48265"
],
"version": "==3.4.2"
}
},
"develop": {}
}


@ -1,63 +1,50 @@
# SimCad
```
__________ ____
________ __ _____/ ____/ | / __ \
/ ___/ __` / __ / / / /| | / / / /
/ /__/ /_/ / /_/ / /___/ ___ |/ /_/ /
\___/\__,_/\__,_/\____/_/ |_/_____/
by BlockScience
======================================
Complex Adaptive Dynamics
o i e
m d s
p e i
u d g
t n
e
r
```
***cadCAD*** is a Python package that assists in the processes of designing, testing and validating complex systems through simulation, with support for Monte Carlo methods, A/B testing and parameter sweeping.
**Dependencies:**
# Getting Started
## 1. Installation:
Requires [Python 3](https://www.python.org/downloads/)
**Option A: Install Using [pip](https://pypi.org/project/cadCAD/)**
```bash
pip install pipenv fn tabulate
pip3 install cadCAD
```
**Project:**
Example Run File:
`/DiffyQ-SimCAD/test.py`
**User Interface: Simulation Configuration**
Configurations:
```bash
/DiffyQ-SimCAD/ui/config.py
```
**Option B: Build From Source**
```bash
pip3 install -r requirements.txt
python3 setup.py sdist bdist_wheel
pip3 install dist/*.whl
```
**Build Tool & Package Import:**
Step 1. Build & Install Package locally:
```bash
pip install .
pip install -e .
```
* [Package Creation Tutorial](https://python-packaging.readthedocs.io/en/latest/minimal.html)
Step 2. Import Package & Run:
```python
from engine import run
run.main()
```
**Warning**:
**Do not** publish this package / software to **any** software repository **except** [DiffyQ-SimCAD's staging branch](https://github.com/BlockScience/DiffyQ-SimCAD/tree/staging) or a **fork** of it.
**Jupyter Setup:**
Step 1. Create Virtual Environment:
```bash
cd DiffyQ-SimCAD
pipenv run python -m ipykernel install --user --name DiffyQ-SimCAD --display-name "DiffyQ-SimCAD Env"
```
Step 2. Run Jupyter Notebook:
```bash
pipenv run jupyter notebook
```
Step 3. Notebook Management:
Notebook Directory:
`/DiffyQ-SimCAD/notebooks/`
Note:
Notebooks should run on the `DiffyQ-SimCAD Env` kernel.
## 2. Learn the basics
**Tutorials:** available both as [Jupyter Notebooks](tutorials)
and [videos](https://www.youtube.com/watch?v=uJEiYHRWA9g&list=PLmWm8ksQq4YKtdRV-SoinhV6LbQMgX1we)
Familiarize yourself with some system modelling concepts and cadCAD terminology.
## 3. Documentation:
* [System Model Configuration](documentation)
* [System Simulation Execution](documentation/Simulation_Execution.md)
* [Policy Aggregation](documentation/Policy_Aggregation.md)
* [System Model Parameter Sweep](documentation/System_Model_Parameter_Sweep.md)
## 4. Connect
Find other cadCAD users at our [Discourse](https://community.cadcad.org/). We are a small but rapidly growing community.

15
ascii_art.txt Normal file

@ -0,0 +1,15 @@
Complex Adaptive Dynamics
o i e
m d s
p e i
u d g
t n
e
r
__________ ____
________ __ _____/ ____/ | / __ \
/ ___/ __` / __ / / / /| | / / / /
/ /__/ /_/ / /_/ / /___/ ___ |/ /_/ /
\___/\__,_/\__,_/\____/_/ |_/_____/
by BlockScience

2
cadCAD/__init__.py Normal file

@ -0,0 +1,2 @@
name = "cadCAD"
configs = []
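The two module attributes above act as a global registry: `append_configs` (in `cadCAD/configuration/__init__.py`, further down in this changeset) builds one `Configuration` per sim config and appends it to the shared `configs` list. A minimal pure-Python sketch of that registry pattern, with hypothetical simplified classes rather than the real cadCAD API:

```python
# Simplified sketch of cadCAD's module-level registry pattern (hypothetical,
# trimmed-down stand-ins for the real Configuration / append_configs).
configs = []  # shared, module-level list, as in cadCAD/__init__.py

class Configuration:
    def __init__(self, sim_config, initial_state):
        self.sim_config = sim_config
        self.initial_state = initial_state

def append_configs(sim_configs, initial_state):
    # a single dict is promoted to a one-element list, mirroring the real code
    if isinstance(sim_configs, dict):
        sim_configs = [sim_configs]
    for sim_config in sim_configs:
        configs.append(Configuration(sim_config, initial_state))

append_configs({'T': 10, 'N': 2}, {'x': 0})
len(configs)  # the registry now holds one Configuration
```

The executor later iterates over this list, which is why repeated calls to `append_configs` accumulate rather than replace configurations.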


@ -0,0 +1,137 @@
from typing import Dict, Callable, List, Tuple
from functools import reduce
import pandas as pd
from pandas.core.frame import DataFrame
from cadCAD import configs
from cadCAD.utils import key_filter
from cadCAD.configuration.utils import exo_update_per_ts
from cadCAD.configuration.utils.policyAggregation import dict_elemwise_sum
from cadCAD.configuration.utils.depreciationHandler import sanitize_partial_state_updates, sanitize_config
class Configuration(object):
def __init__(self, sim_config={}, initial_state={}, seeds={}, env_processes={},
exogenous_states={}, partial_state_update_blocks={}, policy_ops=[lambda a, b: a + b],
**kwargs) -> None:
# print(exogenous_states)
self.sim_config = sim_config
self.initial_state = initial_state
self.seeds = seeds
self.env_processes = env_processes
self.exogenous_states = exogenous_states
self.partial_state_updates = partial_state_update_blocks
self.policy_ops = policy_ops
self.kwargs = kwargs
sanitize_config(self)
def append_configs(sim_configs={}, initial_state={}, seeds={}, raw_exogenous_states={}, env_processes={},
partial_state_update_blocks={}, policy_ops=[lambda a, b: a + b], _exo_update_per_ts: bool = True) -> None:
if _exo_update_per_ts is True:
exogenous_states = exo_update_per_ts(raw_exogenous_states)
else:
exogenous_states = raw_exogenous_states
if isinstance(sim_configs, dict):
sim_configs = [sim_configs]
for sim_config in sim_configs:
config = Configuration(
sim_config=sim_config,
initial_state=initial_state,
seeds=seeds,
exogenous_states=exogenous_states,
env_processes=env_processes,
partial_state_update_blocks=partial_state_update_blocks,
policy_ops=policy_ops
)
print(sim_configs)
#for each sim config create new config
configs.append(config)
class Identity:
def __init__(self, policy_id: Dict[str, int] = {'identity': 0}) -> None:
self.beh_id_return_val = policy_id
def p_identity(self, var_dict, sub_step, sL, s):
return self.beh_id_return_val
def policy_identity(self, k: str) -> Callable:
return self.p_identity
def no_state_identity(self, var_dict, sub_step, sL, s, _input):
return None
def state_identity(self, k: str) -> Callable:
return lambda var_dict, sub_step, sL, s, _input: (k, s[k])
def apply_identity_funcs(self, identity: Callable, df: DataFrame, cols: List[str]) -> List[DataFrame]:
def fillna_with_id_func(identity, df, col):
return df[[col]].fillna(value=identity(col))
return list(map(lambda col: fillna_with_id_func(identity, df, col), cols))
class Processor:
def __init__(self, id: Identity = Identity()) -> None:
self.id = id
self.p_identity = id.p_identity
self.policy_identity = id.policy_identity
self.no_state_identity = id.no_state_identity
self.state_identity = id.state_identity
self.apply_identity_funcs = id.apply_identity_funcs
def create_matrix_field(self, partial_state_updates, key: str) -> DataFrame:
if key == 'variables':
identity = self.state_identity
elif key == 'policies':
identity = self.policy_identity
df = pd.DataFrame(key_filter(partial_state_updates, key))
col_list = self.apply_identity_funcs(identity, df, list(df.columns))
if len(col_list) != 0:
return reduce((lambda x, y: pd.concat([x, y], axis=1)), col_list)
else:
return pd.DataFrame({'empty': []})
def generate_config(self, initial_state, partial_state_updates, exo_proc
) -> List[Tuple[List[Callable], List[Callable]]]:
def no_update_handler(bdf, sdf):
if (bdf.empty == False) and (sdf.empty == True):
bdf_values = bdf.values.tolist()
sdf_values = [[self.no_state_identity] * len(bdf_values) for m in range(len(partial_state_updates))]
return sdf_values, bdf_values
elif (bdf.empty == True) and (sdf.empty == False):
sdf_values = sdf.values.tolist()
bdf_values = [[self.p_identity] * len(sdf_values) for m in range(len(partial_state_updates))]
return sdf_values, bdf_values
else:
sdf_values = sdf.values.tolist()
bdf_values = bdf.values.tolist()
return sdf_values, bdf_values
def only_ep_handler(state_dict):
sdf_functions = [
lambda var_dict, sub_step, sL, s, _input: (k, v) for k, v in zip(state_dict.keys(), state_dict.values())
]
sdf_values = [sdf_functions]
bdf_values = [[self.p_identity] * len(sdf_values)]
return sdf_values, bdf_values
if len(partial_state_updates) != 0:
# backwards compatibility # ToDo: Move this
partial_state_updates = sanitize_partial_state_updates(partial_state_updates)
bdf = self.create_matrix_field(partial_state_updates, 'policies')
sdf = self.create_matrix_field(partial_state_updates, 'variables')
sdf_values, bdf_values = no_update_handler(bdf, sdf)
zipped_list = list(zip(sdf_values, bdf_values))
else:
sdf_values, bdf_values = only_ep_handler(initial_state)
zipped_list = list(zip(sdf_values, bdf_values))
return list(map(lambda x: (x[0] + exo_proc, x[1]), zipped_list))

from datetime import datetime, timedelta
from copy import deepcopy
from functools import reduce
from fn.func import curried
from funcy import curry
import pandas as pd
from cadCAD.configuration.utils.depreciationHandler import sanitize_partial_state_updates
from cadCAD.utils import dict_filter, contains_type, flatten_tabulated_dict, tabulate_dict
class TensorFieldReport:
def __init__(self, config_proc):
self.config_proc = config_proc
# ToDo: backwards compatibility
def create_tensor_field(self, partial_state_updates, exo_proc, keys = ['policies', 'variables']):
partial_state_updates = sanitize_partial_state_updates(partial_state_updates) # Temporary
dfs = [self.config_proc.create_matrix_field(partial_state_updates, k) for k in keys]
df = pd.concat(dfs, axis=1)
for es, i in zip(exo_proc, range(len(exo_proc))):
df['es' + str(i + 1)] = es
df['m'] = df.index + 1
return df
def state_update(y, x):
return lambda var_dict, sub_step, sL, s, _input: (y, x)
def bound_norm_random(rng, low, high):
res = rng.normal((high+low)/2, (high-low)/6)
if res < low or res > high:
res = bound_norm_random(rng, low, high)
# return Decimal(res)
return float(res)
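`bound_norm_random` resamples until a normal draw lands inside `[low, high]`. A minimal stand-alone sketch, assuming any object with a numpy-style `normal(mean, std)` method works as `rng` (the `StdlibRNG` shim below is hypothetical, built on the standard library for illustration):

```python
import random

# Copy of bound_norm_random as shown above: draw from a normal centered on the
# interval midpoint (std = range/6) and re-draw until the sample is in bounds.
def bound_norm_random(rng, low, high):
    res = rng.normal((high + low) / 2, (high - low) / 6)
    if res < low or res > high:
        res = bound_norm_random(rng, low, high)
    return float(res)

class StdlibRNG:
    """Hypothetical shim exposing a numpy-style normal(mean, std) call."""
    def __init__(self, seed):
        self._r = random.Random(seed)
    def normal(self, mean, std):
        return self._r.gauss(mean, std)

sample = bound_norm_random(StdlibRNG(42), 0.0, 10.0)
print(0.0 <= sample <= 10.0)
```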
@curried
def env_proc_trigger(timestep, f, time):
if time == timestep:
return f
else:
return lambda x: x
tstep_delta = timedelta(days=0, minutes=0, seconds=30)
def time_step(dt_str, dt_format='%Y-%m-%d %H:%M:%S', _timedelta = tstep_delta):
# print(dt_str)
dt = datetime.strptime(dt_str, dt_format)
t = dt + _timedelta
return t.strftime(dt_format)
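`time_step` is string-in/string-out: it parses a timestamp, adds a timedelta (30 seconds by default), and re-serializes it in the same format. A quick check of that behavior:

```python
from datetime import datetime, timedelta

tstep_delta = timedelta(days=0, minutes=0, seconds=30)

# Copy of time_step as shown above: parse a timestamp string, add a timedelta,
# and return it re-serialized in the same format.
def time_step(dt_str, dt_format='%Y-%m-%d %H:%M:%S', _timedelta=tstep_delta):
    dt = datetime.strptime(dt_str, dt_format)
    return (dt + _timedelta).strftime(dt_format)

print(time_step('2019-01-01 00:00:00'))                                   # 2019-01-01 00:00:30
print(time_step('2019-01-01 00:00:00', _timedelta=timedelta(minutes=5)))  # 2019-01-01 00:05:00
```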
ep_t_delta = timedelta(days=0, minutes=0, seconds=1)
def ep_time_step(s_condition, dt_str, fromat_str='%Y-%m-%d %H:%M:%S', _timedelta = ep_t_delta):
# print(dt_str)
if s_condition:
return time_step(dt_str, fromat_str, _timedelta)
else:
return dt_str
def partial_state_sweep_filter(state_field, partial_state_updates):
partial_state_dict = dict([(k, v[state_field]) for k, v in partial_state_updates.items()])
return dict([
(k, dict_filter(v, lambda v: isinstance(v, list))) for k, v in partial_state_dict.items()
if contains_type(list(v.values()), list)
])
def state_sweep_filter(raw_exogenous_states):
return dict([(k, v) for k, v in raw_exogenous_states.items() if isinstance(v, list)])
@curried
def sweep_partial_states(_type, in_config):
configs = []
# filtered_mech_states
filtered_partial_states = partial_state_sweep_filter(_type, in_config.partial_state_updates)
if len(filtered_partial_states) > 0:
for partial_state, state_dict in filtered_partial_states.items():
for state, state_funcs in state_dict.items():
for f in state_funcs:
config = deepcopy(in_config)
config.partial_state_updates[partial_state][_type][state] = f
configs.append(config)
del config
else:
configs = [in_config]
return configs
@curried
def sweep_states(state_type, states, in_config):
configs = []
filtered_states = state_sweep_filter(states)
if len(filtered_states) > 0:
for state, state_funcs in filtered_states.items():
for f in state_funcs:
config = deepcopy(in_config)
exploded_states = deepcopy(states)
exploded_states[state] = f
if state_type == 'exogenous':
config.exogenous_states = exploded_states
elif state_type == 'environmental':
config.env_processes = exploded_states
configs.append(config)
del config, exploded_states
else:
configs = [in_config]
return configs
def exo_update_per_ts(ep):
@curried
def ep_decorator(f, y, var_dict, sub_step, sL, s, _input):
if s['substep'] + 1 == 1:
return f(var_dict, sub_step, sL, s, _input)
else:
return y, s[y]
return {es: ep_decorator(f, es) for es, f in ep.items()}
def trigger_condition(s, pre_conditions, cond_opp):
condition_bools = [s[field] in precondition_values for field, precondition_values in pre_conditions.items()]
return reduce(cond_opp, condition_bools)
def apply_state_condition(pre_conditions, cond_opp, y, f, _g, step, sL, s, _input):
if trigger_condition(s, pre_conditions, cond_opp):
return f(_g, step, sL, s, _input)
else:
return y, s[y]
def var_trigger(y, f, pre_conditions, cond_op):
return lambda _g, step, sL, s, _input: apply_state_condition(pre_conditions, cond_op, y, f, _g, step, sL, s, _input)
def var_substep_trigger(substeps):
def trigger(end_substep, y, f):
pre_conditions = {'substep': substeps}
cond_opp = lambda a, b: a and b
return var_trigger(y, f, pre_conditions, cond_opp)
return lambda y, f: curry(trigger)(substeps)(y)(f)
def env_trigger(end_substep):
def trigger(end_substep, trigger_field, trigger_vals, funct_list):
def env_update(state_dict, sweep_dict, target_value):
state_dict_copy = deepcopy(state_dict)
# Use substep to simulate current sysMetrics
if state_dict_copy['substep'] == end_substep:
state_dict_copy['timestep'] = state_dict_copy['timestep'] + 1
if state_dict_copy[trigger_field] in trigger_vals:
for g in funct_list:
target_value = g(sweep_dict, target_value)
del state_dict_copy
return target_value
return env_update
return lambda trigger_field, trigger_vals, funct_list: \
curry(trigger)(end_substep)(trigger_field)(trigger_vals)(funct_list)
def config_sim(d):
def process_variables(d):
return flatten_tabulated_dict(tabulate_dict(d))
if "M" in d:
return [{"N": d["N"], "T": d["T"], "M": M} for M in process_variables(d["M"])]
else:
d["M"] = [{}]
return d
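When `"M"` is present, `config_sim` expands the parameter dict of lists into one sim config per parameter set via `tabulate_dict`/`flatten_tabulated_dict` from `cadCAD.utils`. A self-contained sketch with minimal inlined copies of those helpers (the padding length here is written as `max_len - len(vl)` so it also holds for value lists longer than one element):

```python
# Minimal inlined versions of the cadCAD.utils helpers used by config_sim.
def get_max_dict_val_len(d):
    return len(max(d.values(), key=len))

def tabulate_dict(d):
    # Pad shorter value lists by repeating their last element
    max_len = get_max_dict_val_len(d)
    return {k: vl if len(vl) == max_len else vl + [vl[-1]] * (max_len - len(vl))
            for k, vl in d.items()}

def flatten_tabulated_dict(d):
    # Zip equal-length value lists into one dict per index
    max_len = get_max_dict_val_len(d)
    dl = [{} for _ in range(max_len)]
    for k, vl in d.items():
        for i, v in enumerate(vl):
            dl[i][k] = v
    return dl

def config_sim(d):
    def process_variables(d):
        return flatten_tabulated_dict(tabulate_dict(d))
    if "M" in d:
        return [{"N": d["N"], "T": d["T"], "M": M} for M in process_variables(d["M"])]
    else:
        d["M"] = [{}]
        return d

sweep = config_sim({"N": 1, "T": range(2), "M": {"alpha": [0.1, 0.2], "beta": [5]}})
print([c["M"] for c in sweep])  # [{'alpha': 0.1, 'beta': 5}, {'alpha': 0.2, 'beta': 5}]
```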
def psub_list(psu_block, psu_steps):
return [psu_block[psu] for psu in psu_steps]
def psub(policies, state_updates):
return {
'policies': policies,
'states': state_updates
}
def genereate_psubs(policy_grid, states_grid, policies, state_updates):
PSUBS = []
for policy_ids, state_list in zip(policy_grid, states_grid):
filtered_policies = {k: v for (k, v) in policies.items() if k in policy_ids}
filtered_state_updates = {k: v for (k, v) in state_updates.items() if k in state_list}
PSUBS.append(psub(filtered_policies, filtered_state_updates))
return PSUBS
def access_block(state_history, target_field, psu_block_offset, exculsion_list=[]):
exculsion_list += [target_field]
def filter_history(key_list, sH):
filter = lambda key_list: \
lambda d: {k: v for k, v in d.items() if k not in key_list}
return list(map(filter(key_list), sH))
if psu_block_offset < -1:
if len(state_history) >= abs(psu_block_offset):
return filter_history(exculsion_list, state_history[psu_block_offset])
else:
return []
elif psu_block_offset == -1:
return filter_history(exculsion_list, state_history[psu_block_offset])
else:
return []
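`access_block` can be exercised on a small hypothetical history. Note that it appends `target_field` to `exculsion_list` in place (a mutable default argument), so passing a fresh list per call is safest:

```python
# Copy of access_block as shown above.
def access_block(state_history, target_field, psu_block_offset, exculsion_list=[]):
    exculsion_list += [target_field]
    def filter_history(key_list, sH):
        filter = lambda key_list: \
            lambda d: {k: v for k, v in d.items() if k not in key_list}
        return list(map(filter(key_list), sH))
    if psu_block_offset < -1:
        if len(state_history) >= abs(psu_block_offset):
            return filter_history(exculsion_list, state_history[psu_block_offset])
        else:
            return []
    elif psu_block_offset == -1:
        return filter_history(exculsion_list, state_history[psu_block_offset])
    else:
        return []

# Hypothetical two-timestep history: each inner list is one PSUB's worth of states
sH = [
    [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
    [{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1},
     {'x': 2, 'run': 1, 'substep': 2, 'timestep': 1}],
]
print(access_block(sH, 'x', -1, ['run']))  # [{'substep': 1, 'timestep': 1}, {'substep': 2, 'timestep': 1}]
print(access_block(sH, 'x', 0, []))        # [] -- non-negative offsets return an empty list
```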

from copy import deepcopy
def sanitize_config(config):
for key, value in config.kwargs.items():
if key == 'state_dict':
config.initial_state = value
elif key == 'seed':
config.seeds = value
elif key == 'mechanisms':
config.partial_state_updates = value
if config.initial_state == {}:
raise Exception('The initial conditions of the system have not been set')
def sanitize_partial_state_updates(partial_state_updates):
new_partial_state_updates = deepcopy(partial_state_updates)
def rename_keys(d):
if 'behaviors' in d:
d['policies'] = d.pop('behaviors')
if 'states' in d:
d['variables'] = d.pop('states')
if isinstance(new_partial_state_updates, list):
for v in new_partial_state_updates:
rename_keys(v)
elif isinstance(new_partial_state_updates, dict):
for k, v in new_partial_state_updates.items():
rename_keys(v)
del partial_state_updates
return new_partial_state_updates

from fn.op import foldr
from fn.func import curried
def get_base_value(x):
if isinstance(x, str):
return ''
elif isinstance(x, int):
return 0
elif isinstance(x, list):
return []
else:
return 0
def policy_to_dict(v):
return dict(list(zip(map(lambda n: 'p' + str(n + 1), list(range(len(v)))), v)))
add = lambda a, b: a + b
@curried
def foldr_dict_vals(f, d):
return foldr(f)(list(d.values()))
def sum_dict_values():
return foldr_dict_vals(add)
@curried
def dict_op(f, d1, d2):
def set_base_value(target_dict, source_dict, key):
if key not in target_dict:
return get_base_value(source_dict[key])
else:
return target_dict[key]
key_set = set(list(d1.keys()) + list(d2.keys()))
return {k: f(set_base_value(d1, d2, k), set_base_value(d2, d1, k)) for k in key_set}
def dict_elemwise_sum():
return dict_op(add)
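`dict_op` applies a binary function element-wise over the union of two dicts' keys, filling missing keys with a type-appropriate base value. A stdlib sketch (the source uses `fn.func.curried` and `fn.op.foldr`; plain functions are used here instead):

```python
# Copy of get_base_value as shown above: identity element chosen by type.
def get_base_value(x):
    if isinstance(x, str):
        return ''
    elif isinstance(x, list):
        return []
    else:
        return 0

add = lambda a, b: a + b

def dict_op(f, d1, d2):
    # Apply f element-wise over the union of keys, filling gaps with a base value
    def set_base_value(target_dict, source_dict, key):
        if key not in target_dict:
            return get_base_value(source_dict[key])
        return target_dict[key]
    key_set = set(list(d1.keys()) + list(d2.keys()))
    return {k: f(set_base_value(d1, d2, k), set_base_value(d2, d1, k)) for k in key_set}

print(dict_op(add, {'a': 1, 'b': [1]}, {'a': 2, 'c': 'x'}))
# {'a': 3, 'b': [1], 'c': 'x'}  (key order may vary)
```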

from collections import namedtuple
from inspect import getmembers, ismethod
from pandas.core.frame import DataFrame
from cadCAD.utils import SilentDF
def val_switch(v):
if isinstance(v, DataFrame) is True:
return SilentDF(v)
else:
return v
class udcView(object):
def __init__(self, d, masked_members):
self.__dict__ = d
self.masked_members = masked_members
def __repr__(self):
members = {}
variables = {
k: val_switch(v) for k, v in self.__dict__.items()
if str(type(v)) != "<class 'method'>" and k not in self.masked_members # and isinstance(v, DataFrame) is not True
}
members['methods'] = [k for k, v in self.__dict__.items() if str(type(v)) == "<class 'method'>"]
members.update(variables)
return f"{members}"
class udcBroker(object):
def __init__(self, obj, function_filter=['__init__']):
d = {}
funcs = dict(getmembers(obj, ismethod))
filtered_functions = {k: v for k, v in funcs.items() if k not in function_filter}
d['obj'] = obj
# d.update(deepcopy(vars(obj))) # somehow is enough
d.update(vars(obj)) # somehow is enough
d.update(filtered_functions)
self.members_dict = d
def get_members(self):
return self.members_dict
def get_view(self, masked_members):
return udcView(self.members_dict, masked_members)
def get_namedtuple(self):
return namedtuple("Hydra", self.members_dict.keys())(*self.members_dict.values())
def UDO(udo, masked_members=['obj']):
return udcBroker(udo).get_view(masked_members)
def udoPipe(obj_view):
return UDO(obj_view.obj, obj_view.masked_members)

cadCAD/engine/__init__.py
from typing import Callable, Dict, List, Any, Tuple
from pathos.multiprocessing import ProcessingPool as PPool
from pandas.core.frame import DataFrame
from cadCAD.utils import flatten
from cadCAD.configuration import Configuration, Processor
from cadCAD.configuration.utils import TensorFieldReport
from cadCAD.engine.simulation import Executor as SimExecutor
VarDictType = Dict[str, List[Any]]
StatesListsType = List[Dict[str, Any]]
ConfigsType = List[Tuple[List[Callable], List[Callable]]]
EnvProcessesType = Dict[str, Callable]
class ExecutionMode:
single_proc = 'single_proc'
multi_proc = 'multi_proc'
def single_proc_exec(
simulation_execs: List[Callable],
var_dict_list: List[VarDictType],
states_lists: List[StatesListsType],
configs_structs: List[ConfigsType],
env_processes_list: List[EnvProcessesType],
Ts: List[range],
Ns: List[int]
):
l = [simulation_execs, states_lists, configs_structs, env_processes_list, Ts, Ns]
simulation_exec, states_list, config, env_processes, T, N = list(map(lambda x: x.pop(), l))
result = simulation_exec(var_dict_list, states_list, config, env_processes, T, N)
return flatten(result)
def parallelize_simulations(
simulation_execs: List[Callable],
var_dict_list: List[VarDictType],
states_lists: List[StatesListsType],
configs_structs: List[ConfigsType],
env_processes_list: List[EnvProcessesType],
Ts: List[range],
Ns: List[int]
):
l = list(zip(simulation_execs, var_dict_list, states_lists, configs_structs, env_processes_list, Ts, Ns))
with PPool(len(configs_structs)) as p:
results = p.map(lambda t: t[0](t[1], t[2], t[3], t[4], t[5], t[6]), l)
return results
class ExecutionContext:
def __init__(self, context: str = ExecutionMode.multi_proc) -> None:
self.name = context
self.method = None
if context == 'single_proc':
self.method = single_proc_exec
elif context == 'multi_proc':
self.method = parallelize_simulations
class Executor:
def __init__(self, exec_context: ExecutionContext, configs: List[Configuration]) -> None:
self.SimExecutor = SimExecutor
self.exec_method = exec_context.method
self.exec_context = exec_context.name
self.configs = configs
def execute(self) -> Tuple[List[Dict[str, Any]], DataFrame]:
config_proc = Processor()
create_tensor_field = TensorFieldReport(config_proc).create_tensor_field
print(r'''
__________ ____
________ __ _____/ ____/ | / __ \
/ ___/ __` / __ / / / /| | / / / /
/ /__/ /_/ / /_/ / /___/ ___ |/ /_/ /
\___/\__,_/\__,_/\____/_/ |_/_____/
by BlockScience
''')
print(f'Execution Mode: {self.exec_context + ": " + str(self.configs)}')
print(f'Configurations: {self.configs}')
var_dict_list, states_lists, Ts, Ns, eps, configs_structs, env_processes_list, partial_state_updates, simulation_execs = \
[], [], [], [], [], [], [], [], []
config_idx = 0
for x in self.configs:
Ts.append(x.sim_config['T'])
Ns.append(x.sim_config['N'])
var_dict_list.append(x.sim_config['M'])
states_lists.append([x.initial_state])
eps.append(list(x.exogenous_states.values()))
configs_structs.append(config_proc.generate_config(x.initial_state, x.partial_state_updates, eps[config_idx]))
# print(env_processes_list)
env_processes_list.append(x.env_processes)
partial_state_updates.append(x.partial_state_updates)
simulation_execs.append(SimExecutor(x.policy_ops).simulation)
config_idx += 1
final_result = None
if self.exec_context == ExecutionMode.single_proc:
tensor_field = create_tensor_field(partial_state_updates.pop(), eps.pop())
result = self.exec_method(simulation_execs, var_dict_list, states_lists, configs_structs, env_processes_list, Ts, Ns)
final_result = result, tensor_field
elif self.exec_context == ExecutionMode.multi_proc:
# if len(self.configs) > 1:
simulations = self.exec_method(simulation_execs, var_dict_list, states_lists, configs_structs, env_processes_list, Ts, Ns)
results = []
for result, partial_state_updates, ep in list(zip(simulations, partial_state_updates, eps)):
results.append((flatten(result), create_tensor_field(partial_state_updates, ep)))
final_result = results
return final_result

cadCAD/engine/simulation.py
from typing import Any, Callable, Dict, List, Tuple
from pathos.pools import ThreadPool as TPool
from copy import deepcopy
from functools import reduce
from cadCAD.engine.utils import engine_exception
from cadCAD.utils import flatten
id_exception: Callable = engine_exception(KeyError, KeyError, None)
class Executor:
def __init__(
self,
policy_ops: List[Callable],
policy_update_exception: Callable = id_exception,
state_update_exception: Callable = id_exception
) -> None:
self.policy_ops = policy_ops
self.state_update_exception = state_update_exception
self.policy_update_exception = policy_update_exception
def get_policy_input(
self,
sweep_dict: Dict[str, List[Any]],
sub_step: int,
sL: List[Dict[str, Any]],
s: Dict[str, Any],
funcs: List[Callable]
) -> Dict[str, Any]:
ops = self.policy_ops
def get_col_results(sweep_dict, sub_step, sL, s, funcs):
return list(map(lambda f: f(sweep_dict, sub_step, sL, s), funcs))
def compose(init_reduction_funct, funct_list, val_list):
result, i = None, 0
composition = lambda x: [reduce(init_reduction_funct, x)] + funct_list
for g in composition(val_list):
if i == 0:
result = g
i = 1
else:
result = g(result)
return result
col_results = get_col_results(sweep_dict, sub_step, sL, s, funcs)
key_set = list(set(list(reduce(lambda a, b: a + b, list(map(lambda x: list(x.keys()), col_results))))))
new_dict = {k: [] for k in key_set}
for d in col_results:
for k in d.keys():
new_dict[k].append(d[k])
ops_head, *ops_tail = ops
return {
k: compose(
init_reduction_funct=ops_head, # func executed on value list
funct_list=ops_tail,
val_list=val_list
) for k, val_list in new_dict.items()
}
def apply_env_proc(
self,
sweep_dict,
env_processes: Dict[str, Callable],
state_dict: Dict[str, Any],
) -> Dict[str, Any]:
def env_composition(target_field, state_dict, target_value):
function_type = type(lambda x: x)
env_update = env_processes[target_field]
if isinstance(env_update, list):
for f in env_update:
target_value = f(sweep_dict, target_value)
elif isinstance(env_update, function_type):
target_value = env_update(state_dict, sweep_dict, target_value)
else:
target_value = env_update
return target_value
filtered_state_dict = {k: v for k, v in state_dict.items() if k in env_processes.keys()}
env_proc_dict = {
target_field: env_composition(target_field, state_dict, target_value)
for target_field, target_value in filtered_state_dict.items()
}
for k, v in env_proc_dict.items():
state_dict[k] = v
return state_dict
# mech_step
def partial_state_update(
self,
sweep_dict: Dict[str, List[Any]],
sub_step: int,
sL: Any,
sH: Any,
state_funcs: List[Callable],
policy_funcs: List[Callable],
env_processes: Dict[str, Callable],
time_step: int,
run: int
) -> List[Dict[str, Any]]:
last_in_obj: Dict[str, Any] = deepcopy(sL[-1])
_input: Dict[str, Any] = self.policy_update_exception(
self.get_policy_input(sweep_dict, sub_step, sH, last_in_obj, policy_funcs)
)
def generate_record(state_funcs):
for f in state_funcs:
yield self.state_update_exception(f(sweep_dict, sub_step, sH, last_in_obj, _input))
def transfer_missing_fields(source, destination):
for k in source:
if k not in destination:
destination[k] = source[k]
del source # last_in_obj
return destination
last_in_copy: Dict[str, Any] = transfer_missing_fields(last_in_obj, dict(generate_record(state_funcs)))
last_in_copy: Dict[str, Any] = self.apply_env_proc(sweep_dict, env_processes, last_in_copy)
last_in_copy['substep'], last_in_copy['timestep'], last_in_copy['run'] = sub_step, time_step, run
sL.append(last_in_copy)
del last_in_copy
return sL
# mech_pipeline - state_update_block
def state_update_pipeline(
self,
sweep_dict: Dict[str, List[Any]],
simulation_list, #states_list: List[Dict[str, Any]],
configs: List[Tuple[List[Callable], List[Callable]]],
env_processes: Dict[str, Callable],
time_step: int,
run: int
) -> List[Dict[str, Any]]:
sub_step = 0
states_list_copy: List[Dict[str, Any]] = deepcopy(simulation_list[-1])
genesis_states: Dict[str, Any] = states_list_copy[-1]
if len(states_list_copy) == 1:
genesis_states['substep'] = sub_step
# genesis_states['timestep'] = 0
# else:
# genesis_states['timestep'] = time_step
del states_list_copy
states_list: List[Dict[str, Any]] = [genesis_states]
sub_step += 1
for [s_conf, p_conf] in configs: # tensor field
states_list: List[Dict[str, Any]] = self.partial_state_update(
sweep_dict, sub_step, states_list, simulation_list, s_conf, p_conf, env_processes, time_step, run
)
sub_step += 1
time_step += 1
return states_list
# state_update_pipeline
def run_pipeline(
self,
sweep_dict: Dict[str, List[Any]],
states_list: List[Dict[str, Any]],
configs: List[Tuple[List[Callable], List[Callable]]],
env_processes: Dict[str, Callable],
time_seq: range,
run: int
) -> List[List[Dict[str, Any]]]:
time_seq: List[int] = [x + 1 for x in time_seq]
simulation_list: List[List[Dict[str, Any]]] = [states_list]
for time_step in time_seq:
pipe_run: List[Dict[str, Any]] = self.state_update_pipeline(
sweep_dict, simulation_list, configs, env_processes, time_step, run
)
_, *pipe_run = pipe_run
simulation_list.append(pipe_run)
return simulation_list
def simulation(
self,
sweep_dict: Dict[str, List[Any]],
states_list: List[Dict[str, Any]],
configs: List[Tuple[List[Callable], List[Callable]]],
env_processes: Dict[str, Callable],
time_seq: range,
runs: int
) -> List[List[Dict[str, Any]]]:
def execute_run(sweep_dict, states_list, configs, env_processes, time_seq, run) -> List[Dict[str, Any]]:
run += 1
def generate_init_sys_metrics(genesis_states_list):
for d in genesis_states_list:
d['run'], d['substep'], d['timestep'] = run, 0, 0
yield d
states_list_copy: List[Dict[str, Any]] = list(generate_init_sys_metrics(deepcopy(states_list)))
first_timestep_per_run: List[Dict[str, Any]] = self.run_pipeline(
sweep_dict, states_list_copy, configs, env_processes, time_seq, run
)
del states_list_copy
return first_timestep_per_run
tp = TPool(runs)
pipe_run: List[List[Dict[str, Any]]] = flatten(
tp.map(
lambda run: execute_run(sweep_dict, states_list, configs, env_processes, time_seq, run),
list(range(runs))
)
)
tp.clear()
return pipe_run

cadCAD/engine/utils.py
from datetime import datetime
from fn.func import curried
def datetime_range(start, end, delta, dt_format='%Y-%m-%d %H:%M:%S'):
reverse_head = end
[start, end] = [datetime.strptime(x, dt_format) for x in [start, end]]
def _datetime_range(start, end, delta):
current = start
while current < end:
yield current
current += delta
reverse_tail = [dt.strftime(dt_format) for dt in _datetime_range(start, end, delta)]
return reverse_tail + [reverse_head]
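`datetime_range` yields formatted timestamps over a half-open range and then appends the original `end` string as the final element:

```python
from datetime import datetime, timedelta

# Copy of datetime_range as shown above.
def datetime_range(start, end, delta, dt_format='%Y-%m-%d %H:%M:%S'):
    reverse_head = end
    [start, end] = [datetime.strptime(x, dt_format) for x in [start, end]]
    def _datetime_range(start, end, delta):
        current = start
        while current < end:
            yield current
            current += delta
    reverse_tail = [dt.strftime(dt_format) for dt in _datetime_range(start, end, delta)]
    return reverse_tail + [reverse_head]

print(datetime_range('2019-01-01 00:00:00', '2019-01-01 00:02:00', timedelta(minutes=1)))
# ['2019-01-01 00:00:00', '2019-01-01 00:01:00', '2019-01-01 00:02:00']
```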
def last_index(l):
return len(l)-1
def retrieve_state(l, offset):
return l[last_index(l) + offset + 1]
@curried
def engine_exception(ErrorType, error_message, exception_function, try_function):
try:
return try_function
except ErrorType:
print(error_message)
return exception_function
@curried
def fit_param(param, x):
return x + param

cadCAD/utils/__init__.py
from functools import reduce
from typing import Dict, List
from collections import defaultdict, Counter
from itertools import product
import warnings
from pandas import DataFrame
class SilentDF(DataFrame):
def __repr__(self):
return str(hex(id(DataFrame))) #"pandas.core.frame.DataFrame"
def append_dict(dict, new_dict):
dict.update(new_dict)
return dict
class IndexCounter:
def __init__(self):
self.i = 0
def __call__(self):
self.i += 1
return self.i
def compose(*functions):
return reduce(lambda f, g: lambda x: f(g(x)), functions, lambda x: x)
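`compose` performs right-to-left function composition, so `compose(f, g)(x) == f(g(x))`:

```python
from functools import reduce

# Copy of compose as shown above.
def compose(*functions):
    return reduce(lambda f, g: lambda x: f(g(x)), functions, lambda x: x)

inc = lambda x: x + 1
dbl = lambda x: x * 2
print(compose(inc, dbl)(3))  # 7, i.e. inc(dbl(3))
print(compose(dbl, inc)(3))  # 8, i.e. dbl(inc(3))
```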
def pipe(x):
return x
def print_pipe(x):
print(x)
return x
def flattenDict(l):
def tupalize(k, vs):
l = []
if isinstance(vs, list):
for v in vs:
l.append((k, v))
else:
l.append((k, vs))
return l
flat_list = [tupalize(k, vs) for k, vs in l.items()]
flat_dict = [dict(items) for items in product(*flat_list)]
return flat_dict
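`flattenDict` expands a dict whose values may be lists into the cartesian product of per-key choices (the inner accumulator is renamed here to avoid shadowing the argument):

```python
from itertools import product

# Sketch of flattenDict as shown above: cartesian product of per-key choices.
def flattenDict(l):
    def tupalize(k, vs):
        out = []
        if isinstance(vs, list):
            for v in vs:
                out.append((k, v))
        else:
            out.append((k, vs))
        return out
    flat_list = [tupalize(k, vs) for k, vs in l.items()]
    return [dict(items) for items in product(*flat_list)]

print(flattenDict({'a': [1, 2], 'b': 'x'}))
# [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'x'}]
```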
def flatten(l):
if isinstance(l, list):
return [item for sublist in l for item in sublist]
elif isinstance(l, dict):
return flattenDict(l)
def flatMap(f, collection):
return flatten(list(map(f, collection)))
def dict_filter(dictionary, condition):
return dict([(k, v) for k, v in dictionary.items() if condition(v)])
def get_max_dict_val_len(g: Dict[str, List[int]]) -> int:
return len(max(g.values(), key=len))
def tabulate_dict(d: Dict[str, List[int]]) -> Dict[str, List[int]]:
max_len = get_max_dict_val_len(d)
_d = {}
for k, vl in d.items():
if len(vl) != max_len:
_d[k] = vl + list([vl[-1]] * (max_len-1))
else:
_d[k] = vl
return _d
def flatten_tabulated_dict(d: Dict[str, List[int]]) -> List[Dict[str, int]]:
max_len = get_max_dict_val_len(d)
dl = [{} for i in range(max_len)]
for k, vl in d.items():
for v, i in zip(vl, list(range(len(vl)))):
dl[i][k] = v
return dl
def contains_type(_collection, type):
return any(isinstance(x, type) for x in _collection)
def drop_right(l, n):
return l[:len(l) - n]
def key_filter(l, keyname):
if (type(l) == list):
return [v[keyname] for v in l]
# Keeping support to dictionaries for backwards compatibility
# Should be removed in the future
warnings.warn(
"The use of a dictionary to describe Partial State Update Blocks will be deprecated. Use a list instead.",
FutureWarning)
return [v[keyname] for k, v in l.items()]
def groupByKey(l):
d = defaultdict(list)
for key, value in l:
d[key].append(value)
return list(dict(d).items()).pop()
# @curried
def rename(new_name, f):
f.__name__ = new_name
return f
def curry_pot(f, *argv):
sweep_ind = f.__name__[0:5] == 'sweep'
arg_len = len(argv)
if sweep_ind is True and arg_len == 4:
return f(argv[0])(argv[1])(argv[2])(argv[3])
elif sweep_ind is False and arg_len == 4:
return f(argv[0], argv[1], argv[2], argv[3])
elif sweep_ind is True and arg_len == 3:
return f(argv[0])(argv[1])(argv[2])
elif sweep_ind is False and arg_len == 3:
return f(argv[0], argv[1], argv[2])
else:
raise TypeError('curry_pot() needs 3 or 4 positional arguments')

from funcy import curry
from cadCAD.configuration.utils import ep_time_step, time_step
def increment(y, incr_by):
return lambda _g, step, sL, s, _input: (y, s[y] + incr_by)
def track(y):
return lambda _g, step, sL, s, _input: (y, s[y].x)
def simple_state_update(y, x):
return lambda _g, step, sH, s, _input: (y, x)
def simple_policy_update(y):
return lambda _g, step, sH, s: y
def update_timestamp(y, timedelta, format):
return lambda _g, step, sL, s, _input: (
y,
ep_time_step(s, dt_str=s[y], fromat_str=format, _timedelta=timedelta)
)
def apply(f, y: str, incr_by: int):
return lambda _g, step, sL, s, _input: (y, curry(f)(s[y])(incr_by))
def add(y: str, incr_by):
return apply(lambda a, b: a + b, y, incr_by)
def increment_state_by_int(y: str, incr_by: int):
return lambda _g, step, sL, s, _input: (y, s[y] + incr_by)
def s(y, x):
return lambda _g, step, sH, s, _input: (y, x)
def time_model(y, substeps, time_delta, ts_format='%Y-%m-%d %H:%M:%S'):
def apply_incriment_condition(s):
if s['substep'] == 0 or s['substep'] == substeps:
return y, time_step(dt_str=s[y], dt_format=ts_format, _timedelta=time_delta)
else:
return y, s[y]
return lambda _g, step, sL, s, _input: apply_incriment_condition(s)

dist/cadCAD-0.3.1-py3-none-any.whl (binary file not shown)
dist/cadCAD-0.3.1.tar.gz (binary file not shown)
Historical State Access
==
#### Motivation
The current state (values of state variables) is accessed through the `s` list. When the user requires previous state variable values, they may be accessed through the state history list, `sH`. Accessing the state history should be implemented without creating unintended feedback loops on the current state.
The 3rd parameter of state and policy update functions (labeled `sH`, of type `List[List[dict]]`) provides access to past Partial State Update Blocks (PSUBs) given a negative offset. `access_block` is used to access past PSUBs (`List[dict]`) from `sH`. For example, an offset of `-2` denotes the second-to-last PSUB.
#### Exclusion List
Create a list of states to exclude from the reported PSUB.
```python
exclusion_list = [
'nonexistent', 'last_x', '2nd_to_last_x', '3rd_to_last_x', '4th_to_last_x'
]
```
##### Example Policy Updates
###### Last partial state update
```python
def last_update(_params, substep, sH, s):
return {"last_x": access_block(
state_history=sH,
target_field="last_x", # Add a field to the exclusion list
psu_block_offset=-1,
exculsion_list=exclusion_list
)
}
```
* Note: Although having `target_field` add itself to the exclusion list may seem redundant, it is useful when the exclusion list is empty while `target_field` is assigned to a state or a policy key.
##### Define State Updates
###### 2nd to last partial state update
```python
def second2last_update(_params, substep, sH, s):
return {"2nd_to_last_x": access_block(sH, "2nd_to_last_x", -2, exclusion_list)}
```
###### 3rd to last partial state update
```python
def third_to_last_x(_params, substep, sH, s, _input):
return '3rd_to_last_x', access_block(sH, "3rd_to_last_x", -3, exclusion_list)
```
###### 4th to last partial state update
```python
def fourth_to_last_x(_params, substep, sH, s, _input):
return '4th_to_last_x', access_block(sH, "4th_to_last_x", -4, exclusion_list)
```
###### Non-existent partial state update
* A `psu_block_offset >= 0` does not exist; `access_block` returns an empty list
```python
def nonexistent(_params, substep, sH, s, _input):
return 'nonexistent', access_block(sH, "nonexistent", 0, exclusion_list)
```
#### [Example Simulation:](examples/historical_state_access.py)
#### Example Output:
###### State History
```
+----+-------+-----------+------------+-----+
| | run | substep | timestep | x |
|----+-------+-----------+------------+-----|
| 0 | 1 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 | 1 |
| 2 | 1 | 2 | 1 | 2 |
| 3 | 1 | 3 | 1 | 3 |
| 4 | 1 | 1 | 2 | 4 |
| 5 | 1 | 2 | 2 | 5 |
| 6 | 1 | 3 | 2 | 6 |
| 7 | 1 | 1 | 3 | 7 |
| 8 | 1 | 2 | 3 | 8 |
| 9 | 1 | 3 | 3 | 9 |
+----+-------+-----------+------------+-----+
```
###### Accessed State History:
Example: `last_x`
```
+----+-----------------------------------------------------------------------------------------------------------------------------------------------------+
| | last_x |
|----+-----------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | [] |
| 1 | [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}] |
| 2 | [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}] |
| 3 | [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}] |
| 4 | [{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1}, {'x': 2, 'run': 1, 'substep': 2, 'timestep': 1}, {'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}] |
| 5 | [{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1}, {'x': 2, 'run': 1, 'substep': 2, 'timestep': 1}, {'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}] |
| 6 | [{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1}, {'x': 2, 'run': 1, 'substep': 2, 'timestep': 1}, {'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}] |
| 7 | [{'x': 4, 'run': 1, 'substep': 1, 'timestep': 2}, {'x': 5, 'run': 1, 'substep': 2, 'timestep': 2}, {'x': 6, 'run': 1, 'substep': 3, 'timestep': 2}] |
| 8 | [{'x': 4, 'run': 1, 'substep': 1, 'timestep': 2}, {'x': 5, 'run': 1, 'substep': 2, 'timestep': 2}, {'x': 6, 'run': 1, 'substep': 3, 'timestep': 2}] |
| 9 | [{'x': 4, 'run': 1, 'substep': 1, 'timestep': 2}, {'x': 5, 'run': 1, 'substep': 2, 'timestep': 2}, {'x': 6, 'run': 1, 'substep': 3, 'timestep': 2}] |
+----+-----------------------------------------------------------------------------------------------------------------------------------------------------+
```

Policy Aggregation
==
For each Partial State Update, multiple policy dictionaries are aggregated into a single dictionary that is passed to
all state update functions, using an initial reduction function and optional subsequent map functions.
#### Aggregate Function Composition:
```python
# Reduce Function
add = lambda a, b: a + b # Used to add policy values of the same key
# Map Function
mult_by_2 = lambda y: y * 2 # Used to multiply all policy values by 2
policy_ops=[add, mult_by_2]
```
##### Example Policy Updates per Partial State Update (PSU)
```python
def p1_psu1(_params, step, sH, s):
return {'policy1': 1}
def p2_psu1(_params, step, sH, s):
return {'policy2': 2}
```
* `add` not applicable: the policy dictionaries share no keys
* `mult_by_2` applied to all policies
* Result: `{'policy1': 2, 'policy2': 4}`
```python
def p1_psu2(_params, step, sH, s):
return {'policy1': 2, 'policy2': 2}
def p2_psu2(_params, step, sH, s):
return {'policy1': 2, 'policy2': 2}
```
* `add` applicable: the policy dictionaries have duplicate keys
* `mult_by_2` applied to all policies
* Result: `{'policy1': 8, 'policy2': 8}`
```python
def p1_psu3(_params, step, sH, s):
return {'policy1': 1, 'policy2': 2, 'policy3': 3}
def p2_psu3(_params, step, sH, s):
return {'policy1': 1, 'policy2': 2, 'policy3': 3}
```
* `add` applicable: the policy dictionaries have duplicate keys
* `mult_by_2` applied to all policies
* Result: `{'policy1': 4, 'policy2': 8, 'policy3': 12}`
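The three cases above can be sketched as a reduce-then-map over the per-key value lists. A minimal stand-alone sketch (`aggregate_policies` is a hypothetical name for illustration, not the cadCAD internal):

```python
from functools import reduce

def aggregate_policies(policy_dicts, ops):
    # Collect the values returned for each policy key across all policy functions
    collected = {}
    for d in policy_dicts:
        for k, v in d.items():
            collected.setdefault(k, []).append(v)
    head, *tail = ops
    # Reduce each value list with the first op, then apply the remaining ops in order
    return {k: reduce(lambda acc, f: f(acc), tail, reduce(head, vals))
            for k, vals in collected.items()}

add = lambda a, b: a + b
mult_by_2 = lambda y: y * 2

print(aggregate_policies([{'policy1': 1}, {'policy2': 2}], [add, mult_by_2]))
# {'policy1': 2, 'policy2': 4}
print(aggregate_policies([{'policy1': 2, 'policy2': 2}] * 2, [add, mult_by_2]))
# {'policy1': 8, 'policy2': 8}
```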
#### Aggregate Policies using functions
```python
from cadCAD.configuration import append_configs
append_configs(
    sim_configs=???,
    initial_state=???,
    partial_state_update_blocks=???,
    policy_ops=[add, mult_by_2] # Default: [lambda a, b: a + b]
)
```
#### Example
##### * [System Model Configuration](examples/policy_aggregation.py)
##### * Simulation Results:
```
+----+---------------------------------------------+-------+------+-----------+------------+
| | policies | run | s1 | substep | timestep |
|----+---------------------------------------------+-------+------+-----------+------------|
| 0 | {} | 1 | 0 | 0 | 0 |
| 1 | {'policy1': 2, 'policy2': 4} | 1 | 1 | 1 | 1 |
| 2 | {'policy1': 8, 'policy2': 8} | 1 | 2 | 2 | 1 |
| 3 | {'policy3': 12, 'policy1': 4, 'policy2': 8} | 1 | 3 | 3 | 1 |
| 4 | {'policy1': 2, 'policy2': 4} | 1 | 4 | 1 | 2 |
| 5 | {'policy1': 8, 'policy2': 8} | 1 | 5 | 2 | 2 |
| 6 | {'policy3': 12, 'policy1': 4, 'policy2': 8} | 1 | 6 | 3 | 2 |
| 7 | {'policy1': 2, 'policy2': 4} | 1 | 7 | 1 | 3 |
| 8 | {'policy1': 8, 'policy2': 8} | 1 | 8 | 2 | 3 |
| 9 | {'policy3': 12, 'policy1': 4, 'policy2': 8} | 1 | 9 | 3 | 3 |
+----+---------------------------------------------+-------+------+-----------+------------+
```

documentation/README.md
Simulation Configuration
==
## Introduction
Given a **Simulation Configuration**, cadCAD produces datasets that represent the evolution of the state of a system
over [discrete time](https://en.wikipedia.org/wiki/Discrete_time_and_continuous_time#Discrete_time). The state of the
system is described by a set of [State Variables](#State-Variables). The dynamic of the system is described by
[Policy Functions](#Policy-Functions) and [State Update Functions](#State-Update-Functions), which are evaluated by
cadCAD according to the definitions set by the user in [Partial State Update Blocks](#Partial-State-Update-Blocks).
A Simulation Configuration is comprised of a [System Model](#System-Model) and a set of
[Simulation Properties](#Simulation-Properties).
The `append_configs` function stores a **Simulation Configuration** to be [Executed](/JS4Q9oayQASihxHBJzz4Ug) by cadCAD:
```python
from cadCAD.configuration import append_configs
append_configs(
    initial_state = ..., # System Model
    partial_state_update_blocks = ..., # System Model
    policy_ops = ..., # System Model
    sim_configs = ... # Simulation Properties
)
```
Parameters:
* **initial_state** : _dict_ - [State Variables](#State-Variables) and their initial values
* **partial_state_update_blocks** : List[dict[dict]] - List of [Partial State Update Blocks](#Partial-State-Update-Blocks)
* **policy_ops** : List[functions] - See [Policy Aggregation](/63k2ncjITuqOPCUHzK7Viw)
* **sim_configs** - See [System Model Parameter Sweep](/4oJ_GT6zRWW8AO3yMhFKrg)
## Simulation Properties
Simulation properties are passed to `append_configs` in the `sim_configs` parameter. To construct this parameter, we
use the `config_sim` function in `cadCAD.configuration.utils`
```python
from cadCAD.configuration.utils import config_sim
c = config_sim({
    "N": ...,
    "T": range(...),
    "M": ...
})
append_configs(
    ...
    sim_configs = c # Simulation Properties
)
```
### T - Simulation Length
Computer simulations run in discrete time:
>Discrete time views values of variables as occurring at distinct, separate "points in time", or equivalently as being
unchanged throughout each non-zero region of time ("time period")—that is, time is viewed as a discrete variable. (...)
This view of time corresponds to a digital clock that gives a fixed reading of 10:37 for a while, and then jumps to a
new fixed reading of 10:38, etc.
([source: Wikipedia](https://en.wikipedia.org/wiki/Discrete_time_and_continuous_time#Discrete_time))
As is common in simulation tools, cadCAD refers to each discrete unit of time as a **timestep**. cadCAD increments a
"time counter" and, at each step, updates the state variables according to the equations that describe the system.
The main simulation property that the user must set when creating a Simulation Configuration is the number of timesteps
in the simulation. In other words, how long they want to simulate the system that has been modeled.
### N - Number of Runs
cadCAD facilitates running multiple simulations of the same system sequentially, reporting the results of all those
runs in a single dataset. This is especially helpful for running
[Monte Carlo Simulations](../tutorials/robot-marbles-part-4/robot-marbles-part-4.ipynb).
### M - Parameters of the System
Parameters of the system, passed to the state update functions and the policy functions in the `params` parameter are
defined here. See [System Model Parameter Sweep](/4oJ_GT6zRWW8AO3yMhFKrg) for more information.
## System Model
The System Model describes the system that will be simulated in cadCAD. It is comprised of a set of
[State Variables](#State-Variables) and the [State Update Functions](#State-Update-Functions) that determine the
evolution of the state of the system over time. [Policy Functions](#Policy-Functions) (representations of user policies
or internal system control policies) may also be part of a System Model.
### State Variables
>A state variable is one of the set of variables that are used to describe the mathematical "state" of a dynamical
system. Intuitively, the state of a system describes enough about the system to determine its future behaviour in the
absence of any external forces affecting the system. ([source: Wikipedia](https://en.wikipedia.org/wiki/State_variable))
cadCAD can handle state variables of any Python data type, including custom classes. It is up to the user of cadCAD to
determine the state variables needed to **sufficiently and accurately** describe the system they are interested in.
State Variables are passed to `append_configs` along with their initial values, as a Python `dict` where the `dict_keys`
are the names of the variables and the `dict_values` are their initial values.
```python
from cadCAD.configuration import append_configs
genesis_states = {
    'state_variable_1': 0,
    'state_variable_2': 0,
    'state_variable_3': 1.5,
    'timestamp': '2019-01-01 00:00:00'
}
append_configs(
    initial_state = genesis_states,
    ...
)
```
### State Update Functions
State Update Functions represent equations according to which the state variables change over time. Each state update
function must return a tuple containing a string with the name of the state variable being updated and its new value.
Each state update function can only modify a single state variable. The general structure of a state update function is:
```python
def state_update_function_A(_params, substep, sH, s, _input):
    ...
    return 'state_variable_name', new_value
```
Parameters:
* **_params** : _dict_ - [System parameters](/4oJ_GT6zRWW8AO3yMhFKrg)
* **substep** : _int_ - Current [substep](#Substep)
* **sH** : _list[list[dict]]_ - Historical values of all state variables for the simulation. See
[Historical State Access](/smiyQTnATtC9xPwvF8KbBQ) for details
* **s** : _dict_ - Current state of the system, where the `dict_keys` are the names of the state variables and the
`dict_values` are their current values.
* **_input** : _dict_ - Aggregation of the signals of all policy functions in the current
[Partial State Update Block](#Partial-State-Update-Block)
Return:
* _tuple_ containing a string with the name of the state variable being updated and its new value.
State update functions should not modify any of the parameters passed to them, as those are mutable Python objects that
cadCAD relies on to run the simulation according to the specifications.
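As a concrete, self-contained illustration of this contract (the state variable and signal names below are hypothetical, not from any cadCAD example):

```python
def update_box_A(_params, substep, sH, s, _input):
    # Read the current value from s and the aggregated policy signal from _input,
    # then return the (variable name, new value) tuple
    new_value = s['box_A'] + _input.get('add_to_A', 0)
    return 'box_A', new_value

update_box_A({}, 1, [], {'box_A': 10}, {'add_to_A': 5})
# ('box_A', 15)
```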
### Policy Functions
A Policy Function computes one or more signals to be passed to [State Update Functions](#State-Update-Functions)
(via the _\_input_ parameter). Read
[this article](../tutorials/robot-marbles-part-2/robot-marbles-part-2.ipynb)
for details on why and when to use policy functions.
<!-- We would then expand the tutorials with these kind of concepts
#### Policies
Policies consist of the potential action made available through mechanisms. The action taken is expected to be the
result of a conditional determination of the past state.
While executed the same, the modeller can approach policies dependent on the availability of a mechanism to a population.
- ***Control Policy***
When the controlling or deploying entity has the ability to act in order to affect some aspect of the system, this is a
control policy.
- ***User Policy*** model agent behaviors in reaction to state variables and exogenous variables. The resulted user
action will become an input to PSUs. Note that user behaviors should not directly update value of state variables.
The action taken, as well as the potential to act, through a mechanism is a behavior. -->
The general structure of a policy function is:
```python
def policy_function_1(_params, substep, sH, s):
    ...
    return {'signal_1': value_1, ..., 'signal_N': value_N}
```
Parameters:
* **_params** : _dict_ - [System parameters](/4oJ_GT6zRWW8AO3yMhFKrg)
* **substep** : _int_ - Current [substep](#Substep)
* **sH** : _list[list[dict]]_ - Historical values of all state variables for the simulation. See
[Historical State Access](/smiyQTnATtC9xPwvF8KbBQ) for details
* **s** : _dict_ - Current state of the system, where the `dict_keys` are the names of the state variables and the
`dict_values` are their current values.
Return:
* _dict_ of signals to be passed to the state update functions in the same
[Partial State Update Block](#Partial-State-Update-Blocks)
Policy functions should not modify any of the parameters passed to them, as those are mutable Python objects that cadCAD
relies on to run the simulation according to the specifications.
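A concrete, self-contained illustration of the policy function contract (the state variable and signal names are hypothetical): a policy reads the current state and returns signals, without updating any state variable itself.

```python
def deposit_policy(_params, substep, sH, s):
    # Signal an action based on the current state; the state update
    # functions, not this policy, apply the change
    return {'add_to_A': 1 if s['box_A'] < 10 else 0}

deposit_policy({}, 1, [], {'box_A': 3})
# {'add_to_A': 1}
```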
At each [Partial State Update Block](#Partial-State-Update-Blocks) (PSUB), the `dicts` returned by all policy functions
within that PSUB are aggregated into a single `dict` using an initial reduction function
(a key-wise operation, default: `dic1['keyA'] + dic2['keyA']`) and optional subsequent map functions. The resulting
aggregated `dict` is then passed as the `_input` parameter to the state update functions in that PSUB. For more
information on how to modify the aggregation method, see [Policy Aggregation](/63k2ncjITuqOPCUHzK7Viw).
### Partial State Update Blocks
A **Partial State Update Block** (PSUB) is a set of State Update Functions and Policy Functions such that State Update
Functions in the set are independent from each other and Policies in the set are independent from each other and from
the State Update Functions in the set. In other words, if a state variable is updated in a PSUB, its new value cannot
impact the State Update Functions and Policy Functions in that PSUB - only those in the next PSUB.
![](https://i.imgur.com/9rlX9TG.png)
Partial State Update Blocks are passed to `append_configs` as a List of Python `dicts` where the `dict_keys` are named
`"policies"` and `"variables"` and the values are also Python `dicts` where the keys are the names of the policy and
state update functions and the values are the functions.
```python
PSUBs = [
    {
        "policies": {
            "b_1": policy_function_1,
            ...
            "b_J": policy_function_J
        },
        "variables": {
            "s_1": state_update_function_1,
            ...
            "s_K": state_update_function_K
        }
    }, # PSUB_1
    {...}, # PSUB_2
    ...
    {...} # PSUB_M
]
append_configs(
    ...
    partial_state_update_blocks = PSUBs,
    ...
)
```
#### Substep
At each timestep, cadCAD iterates over the `partial_state_update_blocks` list. For each Partial State Update Block,
cadCAD returns a record containing the state of the system at the end of that PSUB. We refer to that subdivision of a
timestep as a `substep`.
## Result Dataset
cadCAD returns a dataset containing the evolution of the state variables defined by the user over time, with three `int`
indexes:
* `run` - id of the [run](#N-Number-of-Runs)
* `timestep` - discrete unit of time (the total number of timesteps is defined by the user in the
[T Simulation Parameter](#T-Simulation-Length))
* `substep` - subdivision of a timestep (the number of [substeps](#Substep) is the same as the number of Partial State
Update Blocks)
Therefore, the total number of records in the resulting dataset is `N` x `T` x `len(partial_state_update_blocks)`.
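As a quick sanity check, the record count follows directly from the simulation properties (the property values below are hypothetical, chosen only for illustration):

```python
# Hypothetical simulation properties
N = 2                   # number of runs
T = range(100)          # simulation length: 100 timesteps
num_psubs = 3           # len(partial_state_update_blocks)

# One record per (run, timestep, substep)
total_records = N * len(T) * num_psubs
# 600
```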
#### [System Simulation Execution](Simulation_Execution.md)

Simulation Execution
==
System simulations are executed by the execution engine's executor (`cadCAD.engine.Executor`), given one or more System
Model Configurations. There are multiple Execution Modes and Execution Contexts.
### Steps:
1. #### *Choose Execution Mode*:
* ##### Simulation Execution Modes:
`cadCAD` executes a process per System Model Configuration and a thread per System Simulation.
##### Class: `cadCAD.engine.ExecutionMode`
##### Attributes:
* **Single Process:** A single process Execution Mode for a single System Model Configuration (Example:
`cadCAD.engine.ExecutionMode().single_proc`).
* **Multi-Process:** Multiple process Execution Mode for System Model Simulations which executes on a thread per
given System Model Configuration (Example: `cadCAD.engine.ExecutionMode().multi_proc`).
2. #### *Create Execution Context using Execution Mode:*
```python
from cadCAD.engine import ExecutionMode, ExecutionContext
exec_mode = ExecutionMode()
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
```
3. #### *Create Simulation Executor*
```python
from cadCAD.engine import Executor
from cadCAD import configs
simulation = Executor(exec_context=single_proc_ctx, configs=configs)
```
4. #### *Execute Simulation: Produce System Event Dataset*
A simulation execution produces a System Event Dataset along with the Tensor Field that was applied to the initial states to create it.
```python
import pandas as pd
raw_system_events, tensor_field = simulation.execute()
# Simulation Result Types:
# raw_system_events: List[dict]
# tensor_field: pd.DataFrame
# Result System Events DataFrame
simulation_result = pd.DataFrame(raw_system_events)
```
##### Example Tensor Field
```
+----+-----+--------------------------------+--------------------------------+
| | m | b1 | s1 |
|----+-----+--------------------------------+--------------------------------|
| 0 | 1 | <function p1m1 at 0x10c458ea0> | <function s1m1 at 0x10c464510> |
| 1 | 2 | <function p1m2 at 0x10c464048> | <function s1m2 at 0x10c464620> |
| 2 | 3 | <function p1m3 at 0x10c464400> | <function s1m3 at 0x10c464730> |
+----+-----+--------------------------------+--------------------------------+
```
##### Example Result: System Events DataFrame
```
+----+-------+------------+-----------+------+-----------+
| | run | timestep | substep | s1 | s2 |
|----+-------+------------+-----------+------+-----------|
| 0 | 1 | 0 | 0 | 0 | 0.0 |
| 1 | 1 | 1 | 1 | 1 | 4 |
| 2 | 1 | 1 | 2 | 2 | 6 |
| 3 | 1 | 1 | 3 | 3 | [ 30 300] |
| 4 | 2 | 0 | 0 | 0 | 0.0 |
| 5 | 2 | 1 | 1 | 1 | 4 |
| 6 | 2 | 1 | 2 | 2 | 6 |
| 7 | 2 | 1 | 3 | 3 | [ 30 300] |
+----+-------+------------+-----------+------+-----------+
```
### Execution Examples:
##### Single Simulation Execution (Single Process Execution)
Example System Model Configurations:
* [System Model A](examples/sys_model_A.py): `/documentation/examples/sys_model_A.py`
* [System Model B](examples/sys_model_B.py): `/documentation/examples/sys_model_B.py`
Example Simulation Executions:
* [System Model A](examples/sys_model_A_exec.py): `/documentation/examples/sys_model_A_exec.py`
* [System Model B](examples/sys_model_B_exec.py): `/documentation/examples/sys_model_B_exec.py`
```python
import pandas as pd
from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from documentation.examples import sys_model_A
from cadCAD import configs
exec_mode = ExecutionMode()
# Single Process Execution using a Single System Model Configuration:
# sys_model_A
sys_model_A = [configs[0]] # sys_model_A
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
sys_model_A_simulation = Executor(exec_context=single_proc_ctx, configs=sys_model_A)
sys_model_A_raw_result, sys_model_A_tensor_field = sys_model_A_simulation.execute()
sys_model_A_result = pd.DataFrame(sys_model_A_raw_result)
print()
print("Tensor Field: sys_model_A")
print(tabulate(sys_model_A_tensor_field, headers='keys', tablefmt='psql'))
print("Result: System Events DataFrame")
print(tabulate(sys_model_A_result, headers='keys', tablefmt='psql'))
print()
```
##### Multiple Simulation Execution
* ##### *Multi Process Execution*
[Example Simulation Execution:](examples/sys_model_AB_exec.py) `/documentation/examples/sys_model_AB_exec.py`
Example System Model Configurations:
* [System Model A](examples/sys_model_A.py): `/documentation/examples/sys_model_A.py`
* [System Model B](examples/sys_model_B.py): `/documentation/examples/sys_model_B.py`
```python
import pandas as pd
from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from documentation.examples import sys_model_A, sys_model_B
from cadCAD import configs
exec_mode = ExecutionMode()
# # Multiple Processes Execution using Multiple System Model Configurations:
# # sys_model_A & sys_model_B
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
sys_model_AB_simulation = Executor(exec_context=multi_proc_ctx, configs=configs)
i = 0
config_names = ['sys_model_A', 'sys_model_B']
for sys_model_AB_raw_result, sys_model_AB_tensor_field in sys_model_AB_simulation.execute():
    sys_model_AB_result = pd.DataFrame(sys_model_AB_raw_result)
    print()
    print(f"Tensor Field: {config_names[i]}")
    print(tabulate(sys_model_AB_tensor_field, headers='keys', tablefmt='psql'))
    print("Result: System Events DataFrame:")
    print(tabulate(sys_model_AB_result, headers='keys', tablefmt='psql'))
    print()
    i += 1
```
* ##### [*System Model Parameter Sweep*](System_Model_Parameter_Sweep.md)
[Example:](examples/param_sweep.py) `/documentation/examples/param_sweep.py`
```python
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from documentation.examples import param_sweep
from cadCAD import configs
exec_mode = ExecutionMode()
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
for raw_result, tensor_field in run.execute():
    result = pd.DataFrame(raw_result)
    print()
    print("Tensor Field:")
    print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
    print("Output:")
    print(tabulate(result, headers='keys', tablefmt='psql'))
    print()
```

System Model Parameter Sweep
==
Parametrizing a System Model configuration produces multiple configurations, one per swept parameter set.
##### Set Parameters
```python
params = {
    'alpha': [1],
    'beta': [2, 5],
    'gamma': [3, 4],
    'omega': [7]
}
```
The parameters above produce 2 simulations.
* Simulation 1:
* `alpha = 1`
* `beta = 2`
* `gamma = 3`
* `omega = 7`
* Simulation 2:
* `alpha = 1`
* `beta = 5`
* `gamma = 4`
* `omega = 7`
Alternatively, all parameters can be set to a single value each, which results in a single simulation.
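The pairing above can be sketched as follows. The index-wise pairing, with length-1 lists broadcast across all simulations, is an assumption mirrored from the example, not cadCAD's internal code:

```python
params = {
    'alpha': [1],
    'beta': [2, 5],
    'gamma': [3, 4],
    'omega': [7],
}

# One simulation per index; length-1 parameter lists are broadcast
n_sims = max(len(v) for v in params.values())
sweeps = [
    {k: (v[i] if len(v) > 1 else v[0]) for k, v in params.items()}
    for i in range(n_sims)
]
# sweeps[0] == {'alpha': 1, 'beta': 2, 'gamma': 3, 'omega': 7}
# sweeps[1] == {'alpha': 1, 'beta': 5, 'gamma': 4, 'omega': 7}
```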
##### Example State Updates
Previous state:
`s['state'] = 0`
```python
def state_update(_params, step, sH, s, _input):
    y = 'state'
    x = s['state'] + _params['alpha'] + _params['gamma']
    return y, x
```
* Updated State:
* Simulation 1: `y = 4 = 0 + 1 + 3`
* Simulation 2: `y = 5 = 0 + 1 + 4`
##### Example Policy Updates
```python
# Policies per Mechanism
def policies(_params, step, sH, s):
    return {'beta': _params['beta'], 'gamma': _params['gamma']}
```
* Simulation 1: `{'beta': 2, 'gamma': 3}`
* Simulation 2: `{'beta': 5, 'gamma': 4}`
##### Configure Simulation
```python
from cadCAD.configuration.utils import config_sim
g = {
    'alpha': [1],
    'beta': [2, 5],
    'gamma': [3, 4],
    'omega': [7]
}
sim_config = config_sim(
    {
        "N": 2,
        "T": range(5),
        "M": g,
    }
)
```
#### Example
##### * [System Model Configuration](examples/param_sweep.py)

from pprint import pprint
import pandas as pd
from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from documentation.examples import sys_model_A, sys_model_B
from cadCAD import configs
exec_mode = ExecutionMode()
# Single Process Execution using a Single System Model Configuration:
# sys_model_A
sys_model_A = [configs[0]]
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
sys_model_A_simulation = Executor(exec_context=single_proc_ctx, configs=sys_model_A)
sys_model_A_raw_result, sys_model_A_tensor_field = sys_model_A_simulation.execute()
sys_model_A_result = pd.DataFrame(sys_model_A_raw_result)
print()
print("Tensor Field: sys_model_A")
print(tabulate(sys_model_A_tensor_field, headers='keys', tablefmt='psql'))
print("Result: System Events DataFrame")
print(tabulate(sys_model_A_result, headers='keys', tablefmt='psql'))
print()
# # Multiple Processes Execution using Multiple System Model Configurations:
# # sys_model_A & sys_model_B
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
sys_model_AB_simulation = Executor(exec_context=multi_proc_ctx, configs=configs)
i = 0
config_names = ['sys_model_A', 'sys_model_B']
for sys_model_AB_raw_result, sys_model_AB_tensor_field in sys_model_AB_simulation.execute():
    print()
    pprint(sys_model_AB_raw_result)
    # sys_model_AB_result = pd.DataFrame(sys_model_AB_raw_result)
    print()
    print(f"Tensor Field: {config_names[i]}")
    print(tabulate(sys_model_AB_tensor_field, headers='keys', tablefmt='psql'))
    # print("Result: System Events DataFrame:")
    # print(tabulate(sys_model_AB_result, headers='keys', tablefmt='psql'))
    # print()
    i += 1

import pandas as pd
from tabulate import tabulate
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim, access_block
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD import configs
policies, variables = {}, {}
exclusion_list = ['nonexsistant', 'last_x', '2nd_to_last_x', '3rd_to_last_x', '4th_to_last_x']
# Policies per Mechanism
# state_history, target_field, psu_block_offset, exculsion_list
def last_update(_g, substep, sH, s):
    return {"last_x": access_block(
        state_history=sH,
        target_field="last_x",
        psu_block_offset=-1,
        exculsion_list=exclusion_list
    )}
policies["last_x"] = last_update
def second2last_update(_g, substep, sH, s):
    return {"2nd_to_last_x": access_block(sH, "2nd_to_last_x", -2, exclusion_list)}
policies["2nd_to_last_x"] = second2last_update
# Internal States per Mechanism
# WARNING: DO NOT delete elements from sH
def add(y, x):
    return lambda _g, substep, sH, s, _input: (y, s[y] + x)
variables['x'] = add('x', 1)
# last_partial_state_update_block
def nonexsistant(_g, substep, sH, s, _input):
    return 'nonexsistant', access_block(sH, "nonexsistant", 0, exclusion_list)
variables['nonexsistant'] = nonexsistant
# last_partial_state_update_block
def last_x(_g, substep, sH, s, _input):
    return 'last_x', _input["last_x"]
variables['last_x'] = last_x
# 2nd to last partial state update block
def second_to_last_x(_g, substep, sH, s, _input):
    return '2nd_to_last_x', _input["2nd_to_last_x"]
variables['2nd_to_last_x'] = second_to_last_x
# 3rd to last partial state update block
def third_to_last_x(_g, substep, sH, s, _input):
    return '3rd_to_last_x', access_block(sH, "3rd_to_last_x", -3, exclusion_list)
variables['3rd_to_last_x'] = third_to_last_x
# 4th to last partial state update block
def fourth_to_last_x(_g, substep, sH, s, _input):
    return '4th_to_last_x', access_block(sH, "4th_to_last_x", -4, exclusion_list)
variables['4th_to_last_x'] = fourth_to_last_x
genesis_states = {
'x': 0,
'nonexsistant': [],
'last_x': [],
'2nd_to_last_x': [],
'3rd_to_last_x': [],
'4th_to_last_x': []
}
PSUB = {
"policies": policies,
"variables": variables
}
psubs = {
"PSUB1": PSUB,
"PSUB2": PSUB,
"PSUB3": PSUB
}
sim_config = config_sim(
{
"N": 1,
"T": range(3),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
partial_state_update_blocks=psubs
)
exec_mode = ExecutionMode()
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=configs)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
cols = ['run','substep','timestep','x','nonexsistant','last_x','2nd_to_last_x','3rd_to_last_x','4th_to_last_x']
result = result[cols]
print()
print("Tensor Field:")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

import pprint
from typing import Dict, List
import pandas as pd
from tabulate import tabulate
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import env_trigger, var_substep_trigger, config_sim, psub_list
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD import configs
pp = pprint.PrettyPrinter(indent=4)
def some_function(x):
    return x
g: Dict[str, List[int]] = {
'alpha': [1],
'beta': [2, 5],
'gamma': [3, 4],
'omega': [some_function]
}
psu_steps = ['1', '2', '3']
system_substeps = len(psu_steps)
var_timestep_trigger = var_substep_trigger([0, system_substeps])
env_timestep_trigger = env_trigger(system_substeps)
env_process = {}
# Policies
def gamma(_params, step, sH, s):
    return {'gamma': _params['gamma']}
def omega(_params, step, sH, s):
    return {'omega': _params['omega'](7)}
# Internal States
def alpha(_params, step, sH, s, _input):
    return 'alpha', _params['alpha']
def alpha_plus_gamma(_params, step, sH, s, _input):
    return 'alpha_plus_gamma', _params['alpha'] + _params['gamma']
def beta(_params, step, sH, s, _input):
    return 'beta', _params['beta']
def policies(_params, step, sH, s, _input):
    return 'policies', _input
def sweeped(_params, step, sH, s, _input):
    return 'sweeped', {'beta': _params['beta'], 'gamma': _params['gamma']}
genesis_states = {
'alpha_plus_gamma': 0,
'alpha': 0,
'beta': 0,
'policies': {},
'sweeped': {}
}
env_process['sweeped'] = env_timestep_trigger(trigger_field='timestep', trigger_vals=[5], funct_list=[lambda _g, x: _g['beta']])
sim_config = config_sim(
{
"N": 2,
"T": range(5),
"M": g,
}
)
psu_block = {k: {"policies": {}, "variables": {}} for k in psu_steps}
for m in psu_steps:
    psu_block[m]['policies']['gamma'] = gamma
    psu_block[m]['policies']['omega'] = omega
    psu_block[m]["variables"]['alpha'] = alpha
    psu_block[m]["variables"]['alpha_plus_gamma'] = alpha_plus_gamma
    psu_block[m]["variables"]['beta'] = beta
    psu_block[m]['variables']['policies'] = policies
    psu_block[m]["variables"]['sweeped'] = var_timestep_trigger(y='sweeped', f=sweeped)
psubs = psub_list(psu_block, psu_steps)
print()
pp.pprint(psu_block)
print()
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
env_processes=env_process,
partial_state_update_blocks=psubs
)
exec_mode = ExecutionMode()
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
for raw_result, tensor_field in run.execute():
    result = pd.DataFrame(raw_result)
    print()
    print("Tensor Field:")
    print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
    print("Output:")
    print(tabulate(result, headers='keys', tablefmt='psql'))
    print()

import pandas as pd
from tabulate import tabulate
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD import configs
# Policies per Mechanism
def p1m1(_g, step, sH, s):
    return {'policy1': 1}
def p2m1(_g, step, sH, s):
    return {'policy2': 2}
def p1m2(_g, step, sH, s):
    return {'policy1': 2, 'policy2': 2}
def p2m2(_g, step, sH, s):
    return {'policy1': 2, 'policy2': 2}
def p1m3(_g, step, sH, s):
    return {'policy1': 1, 'policy2': 2, 'policy3': 3}
def p2m3(_g, step, sH, s):
    return {'policy1': 1, 'policy2': 2, 'policy3': 3}
# Internal States per Mechanism
def add(y, x):
    return lambda _g, step, sH, s, _input: (y, s[y] + x)
def policies(_g, step, sH, s, _input):
    y = 'policies'
    x = _input
    return (y, x)
# Genesis States
genesis_states = {
'policies': {},
's1': 0
}
variables = {
's1': add('s1', 1),
"policies": policies
}
psubs = {
"m1": {
"policies": {
"p1": p1m1,
"p2": p2m1
},
"variables": variables
},
"m2": {
"policies": {
"p1": p1m2,
"p2": p2m2
},
"variables": variables
},
"m3": {
"policies": {
"p1": p1m3,
"p2": p2m3
},
"variables": variables
}
}
sim_config = config_sim(
{
"N": 1,
"T": range(3),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
partial_state_update_blocks=psubs,
policy_ops=[lambda a, b: a + b, lambda y: y * 2] # Default: lambda a, b: a + b
)
exec_mode = ExecutionMode()
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=configs)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
print()
print("Tensor Field:")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

import numpy as np
from datetime import timedelta
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import bound_norm_random, config_sim, time_step, env_trigger
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(4)
}
# Policies per Mechanism
def p1m1(_g, step, sH, s):
    return {'param1': 1}
def p2m1(_g, step, sH, s):
    return {'param1': 1, 'param2': 4}
def p1m2(_g, step, sH, s):
    return {'param1': 'a', 'param2': 2}
def p2m2(_g, step, sH, s):
    return {'param1': 'b', 'param2': 4}
def p1m3(_g, step, sH, s):
    return {'param1': ['c'], 'param2': np.array([10, 100])}
def p2m3(_g, step, sH, s):
    return {'param1': ['d'], 'param2': np.array([20, 200])}
# Internal States per Mechanism
def s1m1(_g, step, sH, s, _input):
    y = 's1'
    x = s['s1'] + 1
    return (y, x)
def s2m1(_g, step, sH, s, _input):
    y = 's2'
    x = _input['param2']
    return (y, x)
def s1m2(_g, step, sH, s, _input):
    y = 's1'
    x = s['s1'] + 1
    return (y, x)
def s2m2(_g, step, sH, s, _input):
    y = 's2'
    x = _input['param2']
    return (y, x)
def s1m3(_g, step, sH, s, _input):
    y = 's1'
    x = s['s1'] + 1
    return (y, x)
def s2m3(_g, step, sH, s, _input):
    y = 's2'
    x = _input['param2']
    return (y, x)
def policies(_g, step, sH, s, _input):
    y = 'policies'
    x = _input
    return (y, x)
# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es3(_g, step, sH, s, _input):
    y = 's3'
    x = s['s3'] * bound_norm_random(seeds['a'], proc_one_coef_A, proc_one_coef_B)
    return (y, x)
def es4(_g, step, sH, s, _input):
    y = 's4'
    x = s['s4'] * bound_norm_random(seeds['b'], proc_one_coef_A, proc_one_coef_B)
    return (y, x)
def update_timestamp(_g, step, sH, s, _input):
    y = 'timestamp'
    return y, time_step(dt_str=s[y], dt_format='%Y-%m-%d %H:%M:%S', _timedelta=timedelta(days=0, minutes=0, seconds=1))
# Genesis States
genesis_states = {
's1': 0.0,
's2': 0.0,
's3': 1.0,
's4': 1.0,
'timestamp': '2018-10-01 15:16:24'
}
# Environment Process
# TODO: Deprecation warning for env_proc_trigger convention
trigger_timestamps = ['2018-10-01 15:16:25', '2018-10-01 15:16:27', '2018-10-01 15:16:29']
env_processes = {
"s3": [lambda _g, x: 5],
"s4": env_trigger(3)(trigger_field='timestamp', trigger_vals=trigger_timestamps, funct_list=[lambda _g, x: 10])
}
psubs = [
{
"policies": {
"b1": p1m1,
"b2": p2m1
},
"variables": {
"s1": s1m1,
"s2": s2m1,
"s3": es3,
"s4": es4,
"timestamp": update_timestamp
}
},
{
"policies": {
"b1": p1m2,
"b2": p2m2
},
"variables": {
"s1": s1m2,
"s2": s2m2,
# "s3": es3p1,
# "s4": es4p2,
}
},
{
"policies": {
"b1": p1m3,
"b2": p2m3
},
"variables": {
"s1": s1m3,
"s2": s2m3,
# "s3": es3p1,
# "s4": es4p2,
}
}
]
sim_config = config_sim(
{
"N": 2,
"T": range(1),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
env_processes=env_processes,
partial_state_update_blocks=psubs,
policy_ops=[lambda a, b: a + b]
)

@@ -0,0 +1,24 @@
import pandas as pd
from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from documentation.examples import sys_model_A, sys_model_B
from cadCAD import configs

exec_mode = ExecutionMode()

# Multiple Processes Execution using Multiple System Model Configurations:
# sys_model_A & sys_model_B
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
sys_model_AB_simulation = Executor(exec_context=multi_proc_ctx, configs=configs)

i = 0
config_names = ['sys_model_A', 'sys_model_B']
for sys_model_AB_raw_result, sys_model_AB_tensor_field in sys_model_AB_simulation.execute():
    sys_model_AB_result = pd.DataFrame(sys_model_AB_raw_result)
    print()
    print(f"Tensor Field: {config_names[i]}")
    print(tabulate(sys_model_AB_tensor_field, headers='keys', tablefmt='psql'))
    print("Result: System Events DataFrame:")
    print(tabulate(sys_model_AB_result, headers='keys', tablefmt='psql'))
    print()
    i += 1

@@ -0,0 +1,22 @@
import pandas as pd
from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from documentation.examples import sys_model_A
from cadCAD import configs
exec_mode = ExecutionMode()
# Single Process Execution using a Single System Model Configuration:
# sys_model_A
sys_model_A = [configs[0]] # sys_model_A
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
sys_model_A_simulation = Executor(exec_context=single_proc_ctx, configs=sys_model_A)
sys_model_A_raw_result, sys_model_A_tensor_field = sys_model_A_simulation.execute()
sys_model_A_result = pd.DataFrame(sys_model_A_raw_result)
print()
print("Tensor Field: sys_model_A")
print(tabulate(sys_model_A_tensor_field, headers='keys', tablefmt='psql'))
print("Result: System Events DataFrame")
print(tabulate(sys_model_A_result, headers='keys', tablefmt='psql'))
print()

@@ -0,0 +1,147 @@
import numpy as np
from datetime import timedelta
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import bound_norm_random, config_sim, env_trigger, time_step

seeds = {
    'z': np.random.RandomState(1),
    'a': np.random.RandomState(2),
    'b': np.random.RandomState(3),
    'c': np.random.RandomState(3)
}

# Policies per Mechanism
def p1m1(_g, step, sH, s):
    return {'param1': 1}
def p2m1(_g, step, sH, s):
    return {'param2': 4}
def p1m2(_g, step, sH, s):
    return {'param1': 'a', 'param2': 2}
def p2m2(_g, step, sH, s):
    return {'param1': 'b', 'param2': 4}
def p1m3(_g, step, sH, s):
    return {'param1': ['c'], 'param2': np.array([10, 100])}
def p2m3(_g, step, sH, s):
    return {'param1': ['d'], 'param2': np.array([20, 200])}

# Internal States per Mechanism
def s1m1(_g, step, sH, s, _input):
    y = 's1'
    x = _input['param1']
    return (y, x)
def s2m1(_g, step, sH, s, _input):
    y = 's2'
    x = _input['param2']
    return (y, x)
def s1m2(_g, step, sH, s, _input):
    y = 's1'
    x = _input['param1']
    return (y, x)
def s2m2(_g, step, sH, s, _input):
    y = 's2'
    x = _input['param2']
    return (y, x)
def s1m3(_g, step, sH, s, _input):
    y = 's1'
    x = _input['param1']
    return (y, x)
def s2m3(_g, step, sH, s, _input):
    y = 's2'
    x = _input['param2']
    return (y, x)

# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es3(_g, step, sH, s, _input):
    y = 's3'
    x = s['s3'] * bound_norm_random(seeds['a'], proc_one_coef_A, proc_one_coef_B)
    return (y, x)
def es4(_g, step, sH, s, _input):
    y = 's4'
    x = s['s4'] * bound_norm_random(seeds['b'], proc_one_coef_A, proc_one_coef_B)
    return (y, x)
def update_timestamp(_g, step, sH, s, _input):
    y = 'timestamp'
    return y, time_step(dt_str=s[y], dt_format='%Y-%m-%d %H:%M:%S', _timedelta=timedelta(days=0, minutes=0, seconds=1))

# Genesis States
genesis_states = {
    's1': 0,
    's2': 0,
    's3': 1,
    's4': 1,
    'timestamp': '2018-10-01 15:16:24'
}

# Environment Process
# TODO: Deprecation warning for the env_proc_trigger convention
trigger_timestamps = ['2018-10-01 15:16:25', '2018-10-01 15:16:27', '2018-10-01 15:16:29']
env_processes = {
    "s3": [lambda _g, x: 5],
    "s4": env_trigger(3)(trigger_field='timestamp', trigger_vals=trigger_timestamps, funct_list=[lambda _g, x: 10])
}

psubs = [
    {
        "policies": {
            "b1": p1m1,
            # "b2": p2m1
        },
        "states": {
            "s1": s1m1,
            # "s2": s2m1,
            "s3": es3,
            "s4": es4,
            "timestamp": update_timestamp
        }
    },
    {
        "policies": {
            "b1": p1m2,
            # "b2": p2m2
        },
        "states": {
            "s1": s1m2,
            # "s2": s2m2
        }
    },
    {
        "policies": {
            "b1": p1m3,
            "b2": p2m3
        },
        "states": {
            "s1": s1m3,
            "s2": s2m3
        }
    }
]

sim_config = config_sim(
    {
        "N": 2,
        "T": range(5),
    }
)

append_configs(
    sim_configs=sim_config,
    initial_state=genesis_states,
    env_processes=env_processes,
    partial_state_update_blocks=psubs
)

@@ -0,0 +1,23 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in this exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from documentation.examples import sys_model_B
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains sys_model_B
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
print()
print("Tensor Field: sys_model_B")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

@@ -1,21 +0,0 @@
from ui.config import state_dict, mechanisms, exogenous_states, env_processes, sim_config
from engine.configProcessor import generate_config
from engine.mechanismExecutor import simulation
from engine.utils import flatten
# from tabulate import tabulate
import pandas as pd

def main():
    states_list = [state_dict]
    configs = generate_config(mechanisms, exogenous_states)
    # p = pipeline(states_list, configs, env_processes, range(10))
    N = sim_config['N']
    r = range(5)
    # Dimensions: N x r x mechs
    s = simulation(states_list, configs, env_processes, r, N)
    result = pd.DataFrame(flatten(s))
    print('Test')
    # print(tabulate(result, headers='keys', tablefmt='psql'))
    # remove print and tabulate functions, so it returns a dataframe
    return result

@@ -1,13 +0,0 @@
# if beh list empty, repeat 0 x n_states in list
def generate_config(mechanisms, exogenous_states):
    es_funcs = [exogenous_states[state] for state in list(exogenous_states.keys())]
    config = list(
        map(
            lambda m: (
                list(mechanisms[m]["states"].values()) + es_funcs,
                list(mechanisms[m]["behaviors"].values())
            ),
            list(mechanisms.keys())
        )
    )
    return config

@@ -1,83 +0,0 @@
from copy import deepcopy
from fn import op, _

def getColResults(step, sL, s, funcs):
    return list(map(lambda f: f(step, sL, s), funcs))

def getBehaviorInput(step, sL, s, funcs):
    return op.foldr(_ + _)(getColResults(step, sL, s, funcs))

def apply_env_proc(env_processes, state_dict, step):
    for state in state_dict.keys():
        if state in list(env_processes.keys()):
            state_dict[state] = env_processes[state](step)(state_dict[state])

def mech_step(m_step, sL, state_funcs, behavior_funcs, env_processes, t_step):
    in_copy, mutatable_copy, out_copy = deepcopy(sL), deepcopy(sL), deepcopy(sL)
    last_in_obj, last_mut_obj = in_copy[-1], mutatable_copy[-1]
    _input = getBehaviorInput(m_step, sL, last_in_obj, behavior_funcs)
    # OLD: no bueno! Mutation Bad
    # for f in state_funcs:
    #     f(m_step, sL, last_mut_obj, _input)
    # New
    last_mut_obj = dict([f(m_step, sL, last_mut_obj, _input) for f in state_funcs])
    apply_env_proc(env_processes, last_mut_obj, last_mut_obj['timestamp'])
    last_mut_obj["mech_step"], last_mut_obj["time_step"] = m_step, t_step
    out_copy.append(last_mut_obj)
    del last_in_obj, last_mut_obj, in_copy, mutatable_copy
    return out_copy

def block_gen(states_list, configs, env_processes, t_step):
    m_step = 0
    states_list_copy = deepcopy(states_list)
    genesis_states = states_list_copy[-1]
    genesis_states['mech_step'], genesis_states['time_step'] = m_step, t_step
    states_list = [genesis_states]
    m_step += 1
    for config in configs:
        s_conf, b_conf = config[0], config[1]
        states_list = mech_step(m_step, states_list, s_conf, b_conf, env_processes, t_step)
        m_step += 1
    t_step += 1
    return states_list

def pipeline(states_list, configs, env_processes, time_seq):
    time_seq = [x + 1 for x in time_seq]
    simulation_list = [states_list]
    for time_step in time_seq:
        pipeline_run = block_gen(simulation_list[-1], configs, env_processes, time_step)
        head, *pipeline_run = pipeline_run
        simulation_list.append(pipeline_run)
    return simulation_list

def simulation(states_list, configs, env_processes, time_seq, runs):
    pipeline_run = []
    for run in range(runs):
        if run == 0:
            head, *tail = pipeline(states_list, configs, env_processes, time_seq)
            head[-1]['mech_step'], head[-1]['time_step'] = 0, 0
            simulation_list = [head] + tail
            pipeline_run += simulation_list
        else:
            transient_states_list = [pipeline_run[-1][-1]]
            head, *tail = pipeline(transient_states_list, configs, env_processes, time_seq)
            pipeline_run += tail
    return pipeline_run

@@ -1,21 +0,0 @@
from ui.config import state_dict, mechanisms, exogenous_states, env_processes, sim_config
from engine.configProcessor import generate_config
from engine.mechanismExecutor import simulation
from engine.utils import flatten
# from tabulate import tabulate
import pandas as pd

def main():
    states_list = [state_dict]
    configs = generate_config(mechanisms, exogenous_states)
    # p = pipeline(states_list, configs, env_processes, range(10))
    N = sim_config['N']
    r = range(5)
    # Dimensions: N x r x mechs
    s = simulation(states_list, configs, env_processes, r, N)
    result = pd.DataFrame(flatten(s))
    print('Test')
    # print(tabulate(result, headers='keys', tablefmt='psql'))
    # remove print and tabulate functions, so it returns a dataframe
    return result

@@ -1,57 +0,0 @@
from datetime import datetime, timedelta
from decimal import Decimal
from functools import partial

flatten = lambda l: [item for sublist in l for item in sublist]

def flatmap(f, items):
    return list(map(f, items))

def datetime_range(start, end, delta, dt_format='%Y-%m-%d %H:%M:%S'):
    reverse_head = end
    [start, end] = [datetime.strptime(x, dt_format) for x in [start, end]]
    def _datetime_range(start, end, delta):
        current = start
        while current < end:
            yield current
            current += delta
    reverse_tail = [dt.strftime(dt_format) for dt in _datetime_range(start, end, delta)]
    return reverse_tail + [reverse_head]

def last_index(l):
    return len(l) - 1

def retrieve_state(l, offset):
    return l[last_index(l) + offset + 1]

def bound_norm_random(rng, low, high):
    # Add RNG Seed
    res = rng.normal((high + low) / 2, (high - low) / 6)
    if res < low or res > high:
        res = bound_norm_random(rng, low, high)
    return Decimal(res)

def env_proc(trigger_step, update_f):
    def env_step_trigger(trigger_step, update_f, step):
        if step == trigger_step:
            return update_f
        else:
            return lambda x: x
    return partial(env_step_trigger, trigger_step, update_f)

# TODO: accept a timedelta instead of timedelta params
def time_step(dt_str, dt_format='%Y-%m-%d %H:%M:%S', days=0, minutes=0, seconds=30):
    dt = datetime.strptime(dt_str, dt_format)
    t = dt + timedelta(days=days, minutes=minutes, seconds=seconds)
    return t.strftime(dt_format)

# TODO: accept a timedelta instead of timedelta params
def ep_time_step(s, dt_str, format_str='%Y-%m-%d %H:%M:%S', days=0, minutes=0, seconds=1):
    if s['mech_step'] == 0:
        return time_step(dt_str, format_str, days, minutes, seconds)
    else:
        return dt_str

File diff suppressed because it is too large

@@ -1,42 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"assert pd.__version__ == '0.23.4'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -1,55 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"ename": "ModuleNotFoundError",
"evalue": "No module named 'ui'",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mModuleNotFoundError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32m<ipython-input-1-a6e895c51fc0>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[1;32mfrom\u001b[0m \u001b[0mengine\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mrun\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2\u001b[0m \u001b[0mrun\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmain\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\engine\\run.py\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[1;32mfrom\u001b[0m \u001b[0mui\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mconfig\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mstate_dict\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mmechanisms\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mexogenous_states\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0menv_processes\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0msim_config\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0mengine\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mconfigProcessor\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mgenerate_config\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 3\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0mengine\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmechanismExecutor\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0msimulation\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 4\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0mengine\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mutils\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mflatten\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 5\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;31mModuleNotFoundError\u001b[0m: No module named 'ui'"
]
}
],
"source": [
"from engine import run\n",
"run.main()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}

File diff suppressed because it is too large

@@ -1,877 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 491,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from scipy.stats import poisson\n",
"import numpy as np\n",
"import math\n",
"import seaborn as sns\n",
"import matplotlib as mpl\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# UTILS"
]
},
{
"cell_type": "code",
"execution_count": 492,
"metadata": {},
"outputs": [],
"source": [
"def bound_norm_random(low, high):\n",
" res = np.random.normal((high+low)/2,(high-low)/6)\n",
" if (res<low or res>high):\n",
" res = bound_norm_random(low, high)\n",
" return res"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TYPE/PHASE"
]
},
{
"cell_type": "code",
"execution_count": 493,
"metadata": {},
"outputs": [],
"source": [
"EXPERIMENT_TYPES = ['1 off run', 'Monte Carlo', 'Monte Carlo Parameter Sweep', 'Monte Carlo Pairwise']\n",
"\n",
"experiment_type = EXPERIMENT_TYPES[1]\n",
"monte_carlo_runs = 100\n",
"\n",
"#correct number of runs if inconsistent with experiment type\n",
"if (experiment_type == EXPERIMENT_TYPES[0]):\n",
" monte_carlo_runs = 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TIMESCALE"
]
},
{
"cell_type": "code",
"execution_count": 494,
"metadata": {},
"outputs": [],
"source": [
"SECOND = 1\n",
"MINUTE = 60*SECOND\n",
"HOUR = 60*MINUTE\n",
"DAY = 24*HOUR\n",
"DURATION_OF_A_STEP = 1*DAY\n",
"\n",
"experiment_steps = 1000\n",
"experiment_duration = experiment_steps * DURATION_OF_A_STEP\n",
"time_array = np.arange(0,experiment_steps*DURATION_OF_A_STEP,DURATION_OF_A_STEP)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MECHANISMS (dimension)"
]
},
{
"cell_type": "code",
"execution_count": 495,
"metadata": {},
"outputs": [],
"source": [
"mechanisms_names = ['mech_one', 'mech_two', 'mech_three']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# STATES (dimension)"
]
},
{
"cell_type": "code",
"execution_count": 496,
"metadata": {},
"outputs": [],
"source": [
"states_names = ['a', 'b', 'c']\n",
"states_data = [[np.zeros(experiment_steps)]*len(states_names)]*monte_carlo_runs\n",
"states_data = np.zeros((monte_carlo_runs, experiment_steps, len(states_names)), dtype=int)\n",
"# states_data is a 3-dimensional array - montecarlo, time, states\n",
"# montecarlo[time[states]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initial Conditions"
]
},
{
"cell_type": "code",
"execution_count": 497,
"metadata": {},
"outputs": [],
"source": [
"states_0 = {\n",
" 'a': 0,\n",
" 'b': 0,\n",
" 'c': 300\n",
"}\n",
"# an initial condition must be set for every state\n",
"assert np.array([k in states_0 for k in states_names]).all(), 'Error: The initial condition of one or more states is unknown'\n",
"\n",
"# copy initial condition to the states dataset\n",
"for i in range(len(states_names)):\n",
" states_data[:,0,i] = states_0[states_names[i]]"
]
},
{
"cell_type": "code",
"execution_count": 498,
"metadata": {},
"outputs": [],
"source": [
"T_0 = 0\n",
"time_array = T_0 + time_array"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mechanisms Coef (params)"
]
},
{
"cell_type": "code",
"execution_count": 499,
"metadata": {},
"outputs": [],
"source": [
"mech_one_coef_A = 0.05"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MECHANISMS EQUATIONS (func)"
]
},
{
"cell_type": "code",
"execution_count": 500,
"metadata": {},
"outputs": [],
"source": [
"# state/mechanism matrix\n",
"def mech_one(_states_data, _time_array, _run, _step, args):\n",
"# print('mech 1')\n",
" _states_data[_run, _step, states_names.index('a')] += (1-mech_one_coef_A)*args[0]\n",
" _states_data[_run, _step, states_names.index('b')] += mech_one_coef_A*args[0]\n",
" return _states_data\n",
"\n",
"def mech_two(_states_data, _time_array, _run, _step, args):\n",
"# print('mech 2')\n",
" _states_data[_run, _step, states_names.index('a')] -= args[0]\n",
" return _states_data\n",
"\n",
"def mech_three(_states_data, _time_array, _run, _step, args):\n",
"# print('mech 3')\n",
" _states_data[_run, _step, states_names.index('b')] -= args[0]\n",
" return _states_data\n",
"\n",
"def mech_four(_states_data, _time_array, _run, _step):\n",
"# print('mech 4')\n",
" _states_data[_run, _step, states_names.index('a')] = _states_data[_run, _step-1, states_names.index('a')]\n",
" _states_data[_run, _step, states_names.index('b')] = _states_data[_run, _step-1, states_names.index('b')] \n",
" return _states_data\n",
"\n",
"mechanisms = [eval(m) for m in mechanisms_names]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Behavioral Model Coef (params) "
]
},
{
"cell_type": "code",
"execution_count": 501,
"metadata": {},
"outputs": [],
"source": [
"behavior_one_coef_A = 0.01\n",
"behavior_one_coef_B = -0.01\n",
"\n",
"behavior_two_coef_A = 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BEHAVIORAL MODEL (func)"
]
},
{
"cell_type": "code",
"execution_count": 502,
"metadata": {},
"outputs": [],
"source": [
"behaviors_names = ['behavior_one', 'behavior_two']\n",
"def behavior_one(_states_data, _time_array, _run, _step): \n",
" c_var = ( _states_data[_run, _step, states_names.index('c')]\n",
" - _states_data[_run, _step-1, states_names.index('c')] )\n",
" c_var_perc = c_var / _states_data[_run, _step-1, states_names.index('c')]\n",
" \n",
" if (c_var_perc > behavior_one_coef_A):\n",
" return mech_one(_states_data, _time_array, _run, _step, [c_var])\n",
" elif (c_var_perc < behavior_one_coef_B):\n",
" return mech_two(_states_data, _time_array, _run, _step, [-c_var])\n",
" return _states_data\n",
"\n",
"def behavior_two(_states_data, _time_array, _run, _step):\n",
" b_balance = _states_data[_run, _step-1, states_names.index('b')]\n",
" if (b_balance > behavior_two_coef_A):\n",
" return mech_three(_states_data, _time_array, _run, _step, [b_balance])\n",
" return _states_data\n",
"\n",
"behaviors = [eval(b) for b in behaviors_names] "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ENVIRONMENTAL PROCESS (dimension)"
]
},
{
"cell_type": "code",
"execution_count": 503,
"metadata": {},
"outputs": [],
"source": [
"env_proc_names = ['proc_one']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stochastic Process Coef (params)"
]
},
{
"cell_type": "code",
"execution_count": 504,
"metadata": {},
"outputs": [],
"source": [
"proc_one_coef_A = 0.7\n",
"proc_one_coef_B = 1.3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ENVIRONMENTAL PROCESS (func)"
]
},
{
"cell_type": "code",
"execution_count": 505,
"metadata": {},
"outputs": [],
"source": [
"def proc_one(_states_data, _time_array, _run, _step):\n",
" _states_data[_run, _step, states_names.index('a')] = _states_data[_run, _step-1, states_names.index('a')]\n",
" _states_data[_run, _step, states_names.index('b')] = _states_data[_run, _step-1, states_names.index('b')] \n",
" _states_data[_run, _step, states_names.index('c')] = ( _states_data[_run, _step-1, states_names.index('c')]\n",
" * bound_norm_random(proc_one_coef_A, proc_one_coef_B) )\n",
" return _states_data\n",
"\n",
"env_proc = [eval(p) for p in env_proc_names]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ENGINE"
]
},
{
"cell_type": "code",
"execution_count": 506,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/markusbkoch/.local/share/virtualenvs/DiffyQ-SimCAD-4_qpgnP9/lib/python3.6/site-packages/ipykernel_launcher.py:5: RuntimeWarning: invalid value encountered in long_scalars\n",
" \"\"\"\n"
]
}
],
"source": [
"for i in range(monte_carlo_runs):\n",
" for t in range(1,experiment_steps):\n",
" for p in env_proc:\n",
" states_data = p(_states_data=states_data,\n",
" _time_array=time_array, \n",
" _run=i, \n",
" _step=t)\n",
" for b in behaviors:\n",
" states_data = b(_states_data=states_data,\n",
" _time_array=time_array, \n",
" _run=i, \n",
" _step=t) #behaviors have access to exogenous data @ step-1, not @ step"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DATA COLLECTION"
]
},
{
"cell_type": "code",
"execution_count": 507,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th></th>\n",
" <th>a</th>\n",
" <th>b</th>\n",
" <th>c</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th rowspan=\"30\" valign=\"top\">0</th>\n",
" <th>0</th>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>300</td>\n",
" </tr>\n",
" <tr>\n",
" <th>86400</th>\n",
" <td>-7</td>\n",
" <td>0</td>\n",
" <td>293</td>\n",
" </tr>\n",
" <tr>\n",
" <th>172800</th>\n",
" <td>-20</td>\n",
" <td>0</td>\n",
" <td>280</td>\n",
" </tr>\n",
" <tr>\n",
" <th>259200</th>\n",
" <td>-86</td>\n",
" <td>0</td>\n",
" <td>214</td>\n",
" </tr>\n",
" <tr>\n",
" <th>345600</th>\n",
" <td>-67</td>\n",
" <td>1</td>\n",
" <td>234</td>\n",
" </tr>\n",
" <tr>\n",
" <th>432000</th>\n",
" <td>-67</td>\n",
" <td>1</td>\n",
" <td>233</td>\n",
" </tr>\n",
" <tr>\n",
" <th>518400</th>\n",
" <td>-91</td>\n",
" <td>1</td>\n",
" <td>209</td>\n",
" </tr>\n",
" <tr>\n",
" <th>604800</th>\n",
" <td>-99</td>\n",
" <td>1</td>\n",
" <td>201</td>\n",
" </tr>\n",
" <tr>\n",
" <th>691200</th>\n",
" <td>-93</td>\n",
" <td>1</td>\n",
" <td>207</td>\n",
" </tr>\n",
" <tr>\n",
" <th>777600</th>\n",
" <td>-104</td>\n",
" <td>1</td>\n",
" <td>196</td>\n",
" </tr>\n",
" <tr>\n",
" <th>864000</th>\n",
" <td>-101</td>\n",
" <td>1</td>\n",
" <td>199</td>\n",
" </tr>\n",
" <tr>\n",
" <th>950400</th>\n",
" <td>-86</td>\n",
" <td>1</td>\n",
" <td>214</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1036800</th>\n",
" <td>-92</td>\n",
" <td>1</td>\n",
" <td>208</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1123200</th>\n",
" <td>-92</td>\n",
" <td>1</td>\n",
" <td>210</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1209600</th>\n",
" <td>-99</td>\n",
" <td>1</td>\n",
" <td>203</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1296000</th>\n",
" <td>-122</td>\n",
" <td>1</td>\n",
" <td>180</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1382400</th>\n",
" <td>-135</td>\n",
" <td>1</td>\n",
" <td>167</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1468800</th>\n",
" <td>-161</td>\n",
" <td>1</td>\n",
" <td>141</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1555200</th>\n",
" <td>-161</td>\n",
" <td>1</td>\n",
" <td>141</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1641600</th>\n",
" <td>-187</td>\n",
" <td>1</td>\n",
" <td>115</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1728000</th>\n",
" <td>-195</td>\n",
" <td>1</td>\n",
" <td>107</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1814400</th>\n",
" <td>-201</td>\n",
" <td>1</td>\n",
" <td>101</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1900800</th>\n",
" <td>-189</td>\n",
" <td>1</td>\n",
" <td>113</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1987200</th>\n",
" <td>-189</td>\n",
" <td>1</td>\n",
" <td>112</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2073600</th>\n",
" <td>-189</td>\n",
" <td>1</td>\n",
" <td>111</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2160000</th>\n",
" <td>-191</td>\n",
" <td>1</td>\n",
" <td>109</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2246400</th>\n",
" <td>-200</td>\n",
" <td>1</td>\n",
" <td>100</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2332800</th>\n",
" <td>-206</td>\n",
" <td>1</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2419200</th>\n",
" <td>-206</td>\n",
" <td>1</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2505600</th>\n",
" <td>-218</td>\n",
" <td>1</td>\n",
" <td>82</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th rowspan=\"30\" valign=\"top\">99</th>\n",
" <th>83808000</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>83894400</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>83980800</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84067200</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84153600</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84240000</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84326400</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84412800</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84499200</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84585600</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84672000</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84758400</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84844800</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>84931200</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85017600</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85104000</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85190400</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85276800</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85363200</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85449600</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85536000</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85622400</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85708800</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85795200</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85881600</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>85968000</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>86054400</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>86140800</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>86227200</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>86313600</th>\n",
" <td>-386</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>100000 rows × 3 columns</p>\n",
"</div>"
],
"text/plain": [
" a b c\n",
"0 0 0 0 300\n",
" 86400 -7 0 293\n",
" 172800 -20 0 280\n",
" 259200 -86 0 214\n",
" 345600 -67 1 234\n",
" 432000 -67 1 233\n",
" 518400 -91 1 209\n",
" 604800 -99 1 201\n",
" 691200 -93 1 207\n",
" 777600 -104 1 196\n",
" 864000 -101 1 199\n",
" 950400 -86 1 214\n",
" 1036800 -92 1 208\n",
" 1123200 -92 1 210\n",
" 1209600 -99 1 203\n",
" 1296000 -122 1 180\n",
" 1382400 -135 1 167\n",
" 1468800 -161 1 141\n",
" 1555200 -161 1 141\n",
" 1641600 -187 1 115\n",
" 1728000 -195 1 107\n",
" 1814400 -201 1 101\n",
" 1900800 -189 1 113\n",
" 1987200 -189 1 112\n",
" 2073600 -189 1 111\n",
" 2160000 -191 1 109\n",
" 2246400 -200 1 100\n",
" 2332800 -206 1 94\n",
" 2419200 -206 1 94\n",
" 2505600 -218 1 82\n",
"... ... .. ...\n",
"99 83808000 -386 0 0\n",
" 83894400 -386 0 0\n",
" 83980800 -386 0 0\n",
" 84067200 -386 0 0\n",
" 84153600 -386 0 0\n",
" 84240000 -386 0 0\n",
" 84326400 -386 0 0\n",
" 84412800 -386 0 0\n",
" 84499200 -386 0 0\n",
" 84585600 -386 0 0\n",
" 84672000 -386 0 0\n",
" 84758400 -386 0 0\n",
" 84844800 -386 0 0\n",
" 84931200 -386 0 0\n",
" 85017600 -386 0 0\n",
" 85104000 -386 0 0\n",
" 85190400 -386 0 0\n",
" 85276800 -386 0 0\n",
" 85363200 -386 0 0\n",
" 85449600 -386 0 0\n",
" 85536000 -386 0 0\n",
" 85622400 -386 0 0\n",
" 85708800 -386 0 0\n",
" 85795200 -386 0 0\n",
" 85881600 -386 0 0\n",
" 85968000 -386 0 0\n",
" 86054400 -386 0 0\n",
" 86140800 -386 0 0\n",
" 86227200 -386 0 0\n",
" 86313600 -386 0 0\n",
"\n",
"[100000 rows x 3 columns]"
]
},
"execution_count": 507,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data = pd.DataFrame(states_data[0], \n",
" index=[[0]*experiment_steps, time_array], \n",
" columns=states_names)\n",
"for i in range(1,monte_carlo_runs):\n",
" b = pd.DataFrame(states_data[i],\n",
" index=[[i]*experiment_steps, time_array], \n",
" columns=states_names)\n",
" data = data.append(b)\n",
"data"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "DiffyQ-SimCAD Env",
"language": "python",
"name": "diffyq-simcad"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,35 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"assert pd.__version__ == '0.23.4'"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,77 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"scrolled": false
},
"outputs": [
{
"ename": "ImportError",
"evalue": "cannot import name 'run'",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mImportError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32m<ipython-input-5-a6e895c51fc0>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[1;32mfrom\u001b[0m \u001b[0mengine\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mrun\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2\u001b[0m \u001b[0mrun\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmain\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;31mImportError\u001b[0m: cannot import name 'run'"
]
}
],
"source": [
"from engine import run\n",
"run.main()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

6
requirements.txt Normal file
View File

@@ -0,0 +1,6 @@
pandas
wheel
pathos
fn
tabulate
funcy

View File

@@ -1,11 +1,38 @@
from setuptools import setup
from setuptools import setup, find_packages
setup(name='SimCAD',
version='0.1',
description='Sim-Cad Enigne',
url='https://github.com/BlockScience/DiffyQ-SimCAD',
long_description = """
cadCAD (complex adaptive systems computer-aided design) is a python based, unified modeling framework for stochastic
dynamical systems and differential games for research, validation, and Computer Aided Design of economic systems created
by BlockScience. It is capable of modeling systems at all levels of abstraction from Agent Based Modeling (ABM) to
System Dynamics (SD), and enabling smooth integration of computational social science simulations with empirical data
science workflows.
An economic system is treated as a state-based model and defined through a set of endogenous and exogenous state
variables which are updated through mechanisms and environmental processes, respectively. Behavioral models, which may
be deterministic or stochastic, provide the evolution of the system within the action space of the mechanisms.
Mathematical formulations of these economic games treat agent utility as derived from the state rather than direct from
an action, creating a rich, dynamic modeling framework. Simulations may be run with a range of initial conditions and
parameters for states, behaviors, mechanisms, and environmental processes to understand and visualize network behavior
under various conditions. Support for A/B testing policies, Monte Carlo analysis, and other common numerical methods is
provided.
"""
setup(name='cadCAD',
version='0.3.1',
description="cadCAD: a differential games based simulation software package for research, validation, and \
Computer Aided Design of economic systems",
long_description=long_description,
url='https://github.com/BlockScience/cadCAD',
author='Joshua E. Jodesty',
author_email='joshua@block.science',
license='MIT',
packages=['engine'],
zip_safe=False)
author_email='joshua@block.science, joshua.jodesty@gmail.com',
license='LICENSE.txt',
packages=find_packages(),
install_requires=[
"pandas",
"wheel",
"pathos",
"fn",
"tabulate",
"funcy"
]
)
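The `long_description` added to setup.py above describes cadCAD's approach: a state-based model whose state variables are advanced by policies and state update functions. A minimal framework-free sketch of that loop (plain Python for illustration — not cadCAD's actual API; `policy`, `update_s1`, and `run` are hypothetical names):

```python
# Sketch of the state-based simulation loop described in the long_description:
# a policy proposes signals from the current state, a state update function
# applies them, and the loop records the trajectory.

def policy(params, state):
    # propose an action based on the current state
    return {'delta': 1}

def update_s1(params, state, signals):
    # apply the aggregated policy signals to state variable s1
    return 's1', state['s1'] + signals['delta']

def run(initial_state, timesteps):
    history, state = [dict(initial_state)], dict(initial_state)
    for t in range(timesteps):
        signals = policy({}, state)
        key, value = update_s1({}, state, signals)
        state = {**state, key: value, 'timestep': t + 1}
        history.append(state)
    return history

trajectory = run({'s1': 0, 'timestep': 0}, timesteps=5)
# final state: {'s1': 5, 'timestep': 5}
```

cadCAD generalizes this pattern with partial state update blocks, Monte Carlo runs, and parameter sweeps, as the configuration files below show.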

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,27 @@
ds1,ds2,ds3,run,substep,timestep
0,0,1,1,0,0
1,40,5,1,1,1
2,40,5,1,2,1
3,40,5,1,3,1
4,40,5,1,1,2
5,40,5,1,2,2
6,40,5,1,3,2
7,40,5,1,1,3
8,40,5,1,2,3
9,40,5,1,3,3
10,40,5,1,1,4
11,40,5,1,2,4
12,40,5,1,3,4
0,0,1,2,0,0
1,40,5,2,1,1
2,40,5,2,2,1
3,40,5,2,3,1
4,40,5,2,1,2
5,40,5,2,2,2
6,40,5,2,3,2
7,40,5,2,1,3
8,40,5,2,2,3
9,40,5,2,3,3
10,40,5,2,1,4
11,40,5,2,2,4
12,40,5,2,3,4

View File

@@ -0,0 +1,24 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
# from simulations.validation import config1_test_pipe
# from simulations.validation import config1
from simulations.validation import write_simulation
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, _ = run.main()
result = pd.DataFrame(raw_result)
result.to_csv('/Users/jjodesty/Projects/DiffyQ-SimCAD/simulations/external_data/output.csv', index=False)
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

View File

@@ -0,0 +1,24 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.validation import sweep_config
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Concurrent Execution")
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
i = 0
config_names = ['sweep_config_A', 'sweep_config_B']
for raw_result, tensor_field in run.execute():
result = pd.DataFrame(raw_result)
print()
print("Tensor Field: " + config_names[i])
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()
i += 1

View File

@@ -0,0 +1,160 @@
import numpy as np
from datetime import timedelta
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import bound_norm_random, config_sim, time_step, env_trigger
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(4)
}
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'param1': 1}
def p2m1(_g, step, sL, s):
return {'param1': 1, 'param2': 4}
def p1m2(_g, step, sL, s):
return {'param1': 'a', 'param2': 2}
def p2m2(_g, step, sL, s):
return {'param1': 'b', 'param2': 4}
def p1m3(_g, step, sL, s):
return {'param1': ['c'], 'param2': np.array([10, 100])}
def p2m3(_g, step, sL, s):
return {'param1': ['d'], 'param2': np.array([20, 200])}
# Internal States per Mechanism
def s1m1(_g, step, sL, s, _input):
y = 's1'
x = s['s1'] + 1
return (y, x)
def s2m1(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def s1m2(_g, step, sL, s, _input):
y = 's1'
x = s['s1'] + 1
return (y, x)
def s2m2(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def s1m3(_g, step, sL, s, _input):
y = 's1'
x = s['s1'] + 1
return (y, x)
def s2m3(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def policies(_g, step, sL, s, _input):
y = 'policies'
x = _input
return (y, x)
# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es3(_g, step, sL, s, _input):
y = 's3'
x = s['s3'] * bound_norm_random(seeds['a'], proc_one_coef_A, proc_one_coef_B)
return (y, x)
def es4(_g, step, sL, s, _input):
y = 's4'
x = s['s4'] * bound_norm_random(seeds['b'], proc_one_coef_A, proc_one_coef_B)
return (y, x)
def update_timestamp(_g, step, sL, s, _input):
y = 'timestamp'
return y, time_step(dt_str=s[y], dt_format='%Y-%m-%d %H:%M:%S', _timedelta=timedelta(days=0, minutes=0, seconds=1))
# Genesis States
genesis_states = {
's1': 0.0,
's2': 0.0,
's3': 1.0,
's4': 1.0,
'timestamp': '2018-10-01 15:16:24'
}
# Environment Process
# ToDo: Deprecation Warning for env_proc_trigger convention
trigger_timestamps = ['2018-10-01 15:16:25', '2018-10-01 15:16:27', '2018-10-01 15:16:29']
env_processes = {
"s3": [lambda _g, x: 5],
"s4": env_trigger(3)(trigger_field='timestamp', trigger_vals=trigger_timestamps, funct_list=[lambda _g, x: 10])
}
partial_state_update_block = [
{
"policies": {
"b1": p1m1,
"b2": p2m1
},
"variables": {
"s1": s1m1,
"s2": s2m1,
"s3": es3,
"s4": es4,
"timestamp": update_timestamp
}
},
{
"policies": {
"b1": p1m2,
"b2": p2m2
},
"variables": {
"s1": s1m2,
"s2": s2m2,
# "s3": es3p1,
# "s4": es4p2,
}
},
{
"policies": {
"b1": p1m3,
"b2": p2m3
},
"variables": {
"s1": s1m3,
"s2": s2m3,
# "s3": es3p1,
# "s4": es4p2,
}
}
]
sim_config = config_sim(
{
"N": 1,
# "N": 5,
"T": range(5),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
env_processes=env_processes,
partial_state_update_blocks=partial_state_update_block,
policy_ops=[lambda a, b: a + b]
)

View File

@@ -0,0 +1,147 @@
import numpy as np
from datetime import timedelta
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import bound_norm_random, config_sim, env_trigger, time_step
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(3)
}
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'param1': 1}
def p2m1(_g, step, sL, s):
return {'param2': 4}
def p1m2(_g, step, sL, s):
return {'param1': 'a', 'param2': 2}
def p2m2(_g, step, sL, s):
return {'param1': 'b', 'param2': 4}
def p1m3(_g, step, sL, s):
return {'param1': ['c'], 'param2': np.array([10, 100])}
def p2m3(_g, step, sL, s):
return {'param1': ['d'], 'param2': np.array([20, 200])}
# Internal States per Mechanism
def s1m1(_g, step, sL, s, _input):
y = 's1'
x = _input['param1']
return (y, x)
def s2m1(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def s1m2(_g, step, sL, s, _input):
y = 's1'
x = _input['param1']
return (y, x)
def s2m2(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def s1m3(_g, step, sL, s, _input):
y = 's1'
x = _input['param1']
return (y, x)
def s2m3(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es3(_g, step, sL, s, _input):
y = 's3'
x = s['s3'] * bound_norm_random(seeds['a'], proc_one_coef_A, proc_one_coef_B)
return (y, x)
def es4(_g, step, sL, s, _input):
y = 's4'
x = s['s4'] * bound_norm_random(seeds['b'], proc_one_coef_A, proc_one_coef_B)
return (y, x)
def update_timestamp(_g, step, sL, s, _input):
y = 'timestamp'
return y, time_step(dt_str=s[y], dt_format='%Y-%m-%d %H:%M:%S', _timedelta=timedelta(days=0, minutes=0, seconds=1))
# Genesis States
genesis_states = {
's1': 0,
's2': 0,
's3': 1,
's4': 1,
'timestamp': '2018-10-01 15:16:24'
}
# Environment Process
# ToDo: Deprecation Warning for env_proc_trigger convention
trigger_timestamps = ['2018-10-01 15:16:25', '2018-10-01 15:16:27', '2018-10-01 15:16:29']
env_processes = {
"s3": [lambda _g, x: 5],
"s4": env_trigger(3)(trigger_field='timestamp', trigger_vals=trigger_timestamps, funct_list=[lambda _g, x: 10])
}
partial_state_update_block = {
"m1": {
"policies": {
"b1": p1m1,
# "b2": p2m1
},
"states": {
"s1": s1m1,
# "s2": s2m1
"s3": es3,
"s4": es4,
"timestamp": update_timestamp
}
},
"m2": {
"policies": {
"b1": p1m2,
# "b2": p2m2
},
"states": {
"s1": s1m2,
# "s2": s2m2
}
},
"m3": {
"policies": {
"b1": p1m3,
"b2": p2m3
},
"states": {
"s1": s1m3,
"s2": s2m3
}
}
}
sim_config = config_sim(
{
"N": 2,
"T": range(5),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
env_processes=env_processes,
partial_state_update_blocks=partial_state_update_block
)

View File

@@ -0,0 +1,67 @@
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim
import pandas as pd
from cadCAD.utils import SilentDF
df = SilentDF(pd.read_csv('/Users/jjodesty/Projects/DiffyQ-SimCAD/simulations/external_data/output.csv'))
def query(s, df):
return df[
(df['run'] == s['run']) & (df['substep'] == s['substep']) & (df['timestep'] == s['timestep'])
].drop(columns=['run', 'substep', "timestep"])
def p1(_g, substep, sL, s):
result_dict = query(s, df).to_dict()
del result_dict["ds3"]
return {k: list(v.values()).pop() for k, v in result_dict.items()}
def p2(_g, substep, sL, s):
result_dict = query(s, df).to_dict()
del result_dict["ds1"], result_dict["ds2"]
return {k: list(v.values()).pop() for k, v in result_dict.items()}
# ToDo: SilentDF(df) won't work
#integrate_ext_dataset
def integrate_ext_dataset(_g, step, sL, s, _input):
result_dict = query(s, df).to_dict()
return 'external_data', {k: list(v.values()).pop() for k, v in result_dict.items()}
def increment(y, incr_by):
return lambda _g, step, sL, s, _input: (y, s[y] + incr_by)
increment = increment('increment', 1)
def view_policies(_g, step, sL, s, _input):
return 'policies', _input
external_data = {'ds1': None, 'ds2': None, 'ds3': None}
state_dict = {
'increment': 0,
'external_data': external_data,
'policies': external_data
}
policies = {"p1": p1, "p2": p2}
states = {'increment': increment, 'external_data': integrate_ext_dataset, 'policies': view_policies}
PSUB = {'policies': policies, 'states': states}
# Note: M1 & M2 need behaviors
partial_state_update_blocks = {
'PSUB1': PSUB,
'PSUB2': PSUB,
'PSUB3': PSUB
}
sim_config = config_sim({
"N": 2,
"T": range(4)
})
append_configs(
sim_configs=sim_config,
initial_state=state_dict,
partial_state_update_blocks=partial_state_update_blocks,
policy_ops=[lambda a, b: {**a, **b}]
)

View File

@@ -0,0 +1,91 @@
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim, access_block
policies, variables = {}, {}
exclusion_list = ['nonexsistant', 'last_x', '2nd_to_last_x', '3rd_to_last_x', '4th_to_last_x']
# Policies per Mechanism
# WARNING: DO NOT delete elements from sH
# state_history, target_field, psu_block_offset, exculsion_list
def last_update(_g, substep, sH, s):
return {"last_x": access_block(
state_history=sH,
target_field="last_x",
psu_block_offset=-1,
exculsion_list=exclusion_list
)
}
policies["last_x"] = last_update
def second2last_update(_g, substep, sH, s):
return {"2nd_to_last_x": access_block(sH, "2nd_to_last_x", -2, exclusion_list)}
policies["2nd_to_last_x"] = second2last_update
# Internal States per Mechanism
# WARNING: DO NOT delete elements from sH
def add(y, x):
return lambda _g, substep, sH, s, _input: (y, s[y] + x)
variables['x'] = add('x', 1)
# last_partial_state_update_block
def nonexsistant(_g, substep, sH, s, _input):
return 'nonexsistant', access_block(sH, "nonexsistant", 0, exclusion_list)
variables['nonexsistant'] = nonexsistant
# last_partial_state_update_block
def last_x(_g, substep, sH, s, _input):
return 'last_x', _input["last_x"]
variables['last_x'] = last_x
# 2nd to last partial state update block
def second_to_last_x(_g, substep, sH, s, _input):
return '2nd_to_last_x', _input["2nd_to_last_x"]
variables['2nd_to_last_x'] = second_to_last_x
# 3rd to last partial state update block
def third_to_last_x(_g, substep, sH, s, _input):
return '3rd_to_last_x', access_block(sH, "3rd_to_last_x", -3, exclusion_list)
variables['3rd_to_last_x'] = third_to_last_x
# 4th to last partial state update block
def fourth_to_last_x(_g, substep, sH, s, _input):
return '4th_to_last_x', access_block(sH, "4th_to_last_x", -4, exclusion_list)
variables['4th_to_last_x'] = fourth_to_last_x
genesis_states = {
'x': 0,
'nonexsistant': [],
'last_x': [],
'2nd_to_last_x': [],
'3rd_to_last_x': [],
'4th_to_last_x': []
}
PSUB = {
"policies": policies,
"variables": variables
}
partial_state_update_block = {
"PSUB1": PSUB,
"PSUB2": PSUB,
"PSUB3": PSUB
}
sim_config = config_sim(
{
"N": 1,
"T": range(3),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
partial_state_update_blocks=partial_state_update_block
)

View File

@@ -0,0 +1,83 @@
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'policy1': 1}
def p2m1(_g, step, sL, s):
return {'policy2': 2}
def p1m2(_g, step, sL, s):
return {'policy1': 2, 'policy2': 2}
def p2m2(_g, step, sL, s):
return {'policy1': 2, 'policy2': 2}
def p1m3(_g, step, sL, s):
return {'policy1': 1, 'policy2': 2, 'policy3': 3}
def p2m3(_g, step, sL, s):
return {'policy1': 1, 'policy2': 2, 'policy3': 3}
# Internal States per Mechanism
def add(y, x):
return lambda _g, step, sH, s, _input: (y, s[y] + x)
def policies(_g, step, sH, s, _input):
y = 'policies'
x = _input
return (y, x)
# Genesis States
genesis_states = {
'policies': {},
's1': 0
}
variables = {
's1': add('s1', 1),
"policies": policies
}
partial_state_update_block = {
"m1": {
"policies": {
"p1": p1m1,
"p2": p2m1
},
"variables": variables
},
"m2": {
"policies": {
"p1": p1m2,
"p2": p2m2
},
"variables": variables
},
"m3": {
"policies": {
"p1": p1m3,
"p2": p2m3
},
"variables": variables
}
}
sim_config = config_sim(
{
"N": 1,
"T": range(3),
}
)
# Aggregation == Reduce Map / Reduce Map Aggregation
# using env functions (include in reg test using / for env proc)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
partial_state_update_blocks=partial_state_update_block,
# ToDo: subsequent functions should include policy dict for access to each policy (i.e. shouldn't be a map)
policy_ops=[lambda a, b: a + b, lambda y: y * 2]  # Default: lambda a, b: a + b; ToDo: reduction function requires high-level explanation
)

View File

@@ -0,0 +1,159 @@
import numpy as np
from datetime import timedelta
import pprint
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import env_trigger, var_substep_trigger, config_sim, time_step, psub_list
from typing import Dict, List
pp = pprint.PrettyPrinter(indent=4)
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(3)
}
# Optional
g: Dict[str, List[int]] = {
'alpha': [1],
# 'beta': [2],
# 'gamma': [3],
'beta': [2, 5],
'gamma': [3, 4],
'omega': [7]
}
psu_steps = ['m1', 'm2', 'm3']
system_substeps = len(psu_steps)
var_timestep_trigger = var_substep_trigger([0, system_substeps])
env_timestep_trigger = env_trigger(system_substeps)
env_process = {}
psu_block = {k: {"policies": {}, "variables": {}} for k in psu_steps}
# ['s1', 's2', 's3', 's4']
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'param1': 1}
psu_block['m1']['policies']['p1'] = p1m1
def p2m1(_g, step, sL, s):
return {'param2': 4}
psu_block['m1']['policies']['p2'] = p2m1
def p1m2(_g, step, sL, s):
return {'param1': 'a', 'param2': _g['beta']}
psu_block['m2']['policies']['p1'] = p1m2
def p2m2(_g, step, sL, s):
return {'param1': 'b', 'param2': 0}
psu_block['m2']['policies']['p2'] = p2m2
def p1m3(_g, step, sL, s):
return {'param1': np.array([10, 100])}
psu_block['m3']['policies']['p1'] = p1m3
def p2m3(_g, step, sL, s):
return {'param1': np.array([20, 200])}
psu_block['m3']['policies']['p2'] = p2m3
# Internal States per Mechanism
def s1m1(_g, step, sL, s, _input):
return 's1', 0
psu_block['m1']["variables"]['s1'] = s1m1
def s2m1(_g, step, sL, s, _input):
return 's2', _g['beta']
psu_block['m1']["variables"]['s2'] = s2m1
def s1m2(_g, step, sL, s, _input):
return 's1', _input['param2']
psu_block['m2']["variables"]['s1'] = s1m2
def s2m2(_g, step, sL, s, _input):
return 's2', _input['param2']
psu_block['m2']["variables"]['s2'] = s2m2
def s1m3(_g, step, sL, s, _input):
return 's1', 0
psu_block['m3']["variables"]['s1'] = s1m3
def s2m3(_g, step, sL, s, _input):
return 's2', 0
psu_block['m3']["variables"]['s2'] = s2m3
# Exogenous States
def update_timestamp(_g, step, sL, s, _input):
y = 'timestamp'
return y, time_step(dt_str=s[y], dt_format='%Y-%m-%d %H:%M:%S', _timedelta=timedelta(days=0, minutes=0, seconds=1))
for m in ['m1','m2','m3']:
# psu_block[m]["variables"]['timestamp'] = update_timestamp
psu_block[m]["variables"]['timestamp'] = var_timestep_trigger(y='timestamp', f=update_timestamp)
# psu_block[m]["variables"]['timestamp'] = var_trigger(
# y='timestamp', f=update_timestamp, pre_conditions={'substep': [0, system_substeps]}, cond_op=lambda a, b: a and b
# )
proc_one_coef = 0.7
def es3(_g, step, sL, s, _input):
return 's3', s['s3'] + proc_one_coef
# use `timestep_trigger` to update every ts
for m in ['m1','m2','m3']:
psu_block[m]["variables"]['s3'] = var_timestep_trigger(y='s3', f=es3)
def es4(_g, step, sL, s, _input):
return 's4', s['s4'] + _g['gamma']
for m in ['m1','m2','m3']:
psu_block[m]["variables"]['s4'] = var_timestep_trigger(y='s4', f=es4)
# ToDo: The number of values entered in sweep should be the # of config objs created,
# not dependent on the # of times the sweep is applied
# sweep exo_state func and point to exo-state in every other function
# param sweep on genesis states
# Genesis States
genesis_states = {
's1': 0.0,
's2': 0.0,
's3': 1.0,
's4': 1.0,
'timestamp': '2018-10-01 15:16:24'
}
# Environment Process
# ToDo: Validate - make env proc trigger field agnostic
env_process["s3"] = [lambda _g, x: _g['beta'], lambda _g, x: x + 1]
env_process["s4"] = env_timestep_trigger(trigger_field='timestep', trigger_vals=[5], funct_list=[lambda _g, x: _g['beta']])
# config_sim Necessary
sim_config = config_sim(
{
"N": 2,
"T": range(5),
"M": g, # Optional
}
)
# New Convention
partial_state_update_blocks = psub_list(psu_block, psu_steps)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
seeds=seeds,
env_processes=env_process,
partial_state_update_blocks=partial_state_update_blocks
)
print()
print("Partial State Update Blocks:")
pp.pprint(partial_state_update_blocks)
print()
print()
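The `sweep_config.py` file above passes `"M": g` into `config_sim`, where `g` maps parameter names to lists of values to sweep. A minimal sketch of how such a sweep dict can expand into per-config parameter sets, zipped by position (`expand_sweep` is a hypothetical helper for illustration, not cadCAD's actual implementation):

```python
# Expand a sweep dict like {'alpha': [1], 'beta': [2, 5], 'gamma': [3, 4]}
# into one parameter set per sweep position; single-value lists are
# broadcast across all positions.
def expand_sweep(params):
    lengths = {len(v) for v in params.values() if len(v) > 1}
    n = lengths.pop() if lengths else 1
    return [
        {k: (v[i] if len(v) > 1 else v[0]) for k, v in params.items()}
        for i in range(n)
    ]

configs = expand_sweep({'alpha': [1], 'beta': [2, 5], 'gamma': [3, 4]})
# → [{'alpha': 1, 'beta': 2, 'gamma': 3}, {'alpha': 1, 'beta': 5, 'gamma': 4}]
```

This matches the ToDo in the file: the number of values in the sweep lists determines how many config objects are created.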

View File

@@ -0,0 +1,36 @@
import unittest
import pandas as pd
# from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import policy_aggregation
from cadCAD import configs
exec_mode = ExecutionMode()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
class TestStringMethods(unittest.TestCase):
def __init__(self, result: pd.DataFrame, tensor_field: pd.DataFrame) -> None:
self.result = result
self.tensor_field = tensor_field
def test_upper(self):
self.assertEqual('foo'.upper(), 'FOO')
def test_isupper(self):
self.assertTrue('FOO'.isupper())
self.assertFalse('Foo'.isupper())
def test_split(self):
s = 'hello world'
self.assertEqual(s.split(), ['hello', 'world'])
# check that s.split fails when the separator is not a string
with self.assertRaises(TypeError):
s.split(2)
if __name__ == '__main__':
unittest.main()

View File

@@ -0,0 +1,183 @@
from copy import deepcopy
import pandas as pd
from fn.func import curried
from datetime import timedelta
import pprint as pp
from cadCAD.utils import SilentDF #, val_switch
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import time_step, config_sim, var_trigger, var_substep_trigger, env_trigger, psub_list
from cadCAD.configuration.utils.userDefinedObject import udoPipe, UDO
DF = SilentDF(pd.read_csv('/Users/jjodesty/Projects/DiffyQ-SimCAD/simulations/external_data/output.csv'))
class udoExample(object):
def __init__(self, x, dataset=None):
self.x = x
self.mem_id = str(hex(id(self)))
self.ds = dataset # for setting ds initially or querying
self.perception = {}
def anon(self, f):
return f(self)
def updateX(self):
self.x += 1
return self
def updateDS(self):
self.ds.iloc[0,0] -= 10
# pp.pprint(self.ds)
return self
def perceive(self, s):
self.perception = self.ds[
(self.ds['run'] == s['run']) & (self.ds['substep'] == s['substep']) & (self.ds['timestep'] == s['timestep'])
].drop(columns=['run', 'substep']).to_dict()
return self
def read(self, ds_uri):
self.ds = SilentDF(pd.read_csv(ds_uri))
return self
def write(self, ds_uri):
self.ds.to_csv(ds_uri)
# ToDo: Generic update function
pass
state_udo = UDO(udo=udoExample(0, DF), masked_members=['obj', 'perception'])
policy_udoA = UDO(udo=udoExample(0, DF), masked_members=['obj', 'perception'])
policy_udoB = UDO(udo=udoExample(0, DF), masked_members=['obj', 'perception'])
sim_config = config_sim({
"N": 2,
"T": range(4)
})
# ToDo: DataFrame Column order
state_dict = {
'increment': 0,
'state_udo': state_udo, 'state_udo_tracker': 0,
'state_udo_perception_tracker': {"ds1": None, "ds2": None, "ds3": None, "timestep": None},
'udo_policies': {'udo_A': policy_udoA, 'udo_B': policy_udoB},
'udo_policy_tracker': (0, 0),
'timestamp': '2019-01-01 00:00:00'
}
psu_steps = ['m1', 'm2', 'm3']
system_substeps = len(psu_steps)
var_timestep_trigger = var_substep_trigger([0, system_substeps])
env_timestep_trigger = env_trigger(system_substeps)
psu_block = {k: {"policies": {}, "variables": {}} for k in psu_steps}
def udo_policyA(_g, step, sL, s):
s['udo_policies']['udo_A'].updateX()
return {'udo_A': udoPipe(s['udo_policies']['udo_A'])}
# policies['a'] = udo_policyA
for m in psu_steps:
psu_block[m]['policies']['a'] = udo_policyA
def udo_policyB(_g, step, sL, s):
s['udo_policies']['udo_B'].updateX()
return {'udo_B': udoPipe(s['udo_policies']['udo_B'])}
# policies['b'] = udo_policyB
for m in psu_steps:
psu_block[m]['policies']['b'] = udo_policyB
# policies = {"p1": udo_policyA, "p2": udo_policyB}
# policies = {"A": udo_policyA, "B": udo_policyB}
def add(y: str, added_val):
return lambda _g, step, sL, s, _input: (y, s[y] + added_val)
# state_updates['increment'] = add('increment', 1)
for m in psu_steps:
psu_block[m]["variables"]['increment'] = add('increment', 1)
@curried
def perceive(s, self):
self.perception = self.ds[
(self.ds['run'] == s['run']) & (self.ds['substep'] == s['substep']) & (self.ds['timestep'] == s['timestep'])
].drop(columns=['run', 'substep']).to_dict()
return self
def state_udo_update(_g, step, sL, s, _input):
y = 'state_udo'
# s['hydra_state'].updateX().anon(perceive(s))
s['state_udo'].updateX().perceive(s).updateDS()
x = udoPipe(s['state_udo'])
return y, x
for m in psu_steps:
psu_block[m]["variables"]['state_udo'] = state_udo_update
def track(destination, source):
return lambda _g, step, sL, s, _input: (destination, s[source].x)
state_udo_tracker = track('state_udo_tracker', 'state_udo')
for m in psu_steps:
psu_block[m]["variables"]['state_udo_tracker'] = state_udo_tracker
def track_state_udo_perception(destination, source):
def id(past_perception):
if len(past_perception) == 0:
return state_dict['state_udo_perception_tracker']
else:
return past_perception
return lambda _g, step, sL, s, _input: (destination, id(s[source].perception))
state_udo_perception_tracker = track_state_udo_perception('state_udo_perception_tracker', 'state_udo')
for m in psu_steps:
psu_block[m]["variables"]['state_udo_perception_tracker'] = state_udo_perception_tracker
def view_udo_policy(_g, step, sL, s, _input):
return 'udo_policies', _input
for m in psu_steps:
psu_block[m]["variables"]['udo_policies'] = view_udo_policy
def track_udo_policy(destination, source):
def val_switch(v):
if isinstance(v, pd.DataFrame) is True or isinstance(v, SilentDF) is True:
return SilentDF(v)
else:
return v.x
return lambda _g, step, sL, s, _input: (destination, tuple(val_switch(v) for _, v in s[source].items()))
udo_policy_tracker = track_udo_policy('udo_policy_tracker', 'udo_policies')
for m in psu_steps:
psu_block[m]["variables"]['udo_policy_tracker'] = udo_policy_tracker
def update_timestamp(_g, step, sL, s, _input):
y = 'timestamp'
return y, time_step(dt_str=s[y], dt_format='%Y-%m-%d %H:%M:%S', _timedelta=timedelta(days=0, minutes=0, seconds=1))
for m in psu_steps:
psu_block[m]["variables"]['timestamp'] = var_timestep_trigger(y='timestamp', f=update_timestamp)
# psu_block[m]["variables"]['timestamp'] = var_trigger(
# y='timestamp', f=update_timestamp,
# pre_conditions={'substep': [0, system_substeps]}, cond_op=lambda a, b: a and b
# )
# psu_block[m]["variables"]['timestamp'] = update_timestamp
# ToDo: Bug without specifying parameters
# New Convention
partial_state_update_blocks = psub_list(psu_block, psu_steps)
append_configs(
sim_configs=sim_config,
initial_state=state_dict,
partial_state_update_blocks=partial_state_update_blocks
)
print()
print("State Updates:")
pp.pprint(partial_state_update_blocks)
print()

View File

@@ -0,0 +1,169 @@
import pandas as pd
import pprint as pp
from fn.func import curried
from datetime import timedelta
from cadCAD.utils import SilentDF #, val_switch
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import time_step, config_sim
from cadCAD.configuration.utils.userDefinedObject import udoPipe, UDO
DF = SilentDF(pd.read_csv('/Users/jjodesty/Projects/DiffyQ-SimCAD/simulations/external_data/output.csv'))
class udoExample(object):
def __init__(self, x, dataset=None):
self.x = x
self.mem_id = str(hex(id(self)))
self.ds = dataset # for setting ds initially or querying
self.perception = {}
def anon(self, f):
return f(self)
def updateX(self):
self.x += 1
return self
def perceive(self, s):
self.perception = self.ds[
(self.ds['run'] == s['run']) & (self.ds['substep'] == s['substep']) & (self.ds['timestep'] == s['timestep'])
].drop(columns=['run', 'substep']).to_dict()
return self
def read(self, ds_uri):
self.ds = SilentDF(pd.read_csv(ds_uri))
return self
def write(self, ds_uri):
self.ds.to_csv(ds_uri)
# ToDo: Generic update function
pass
# can be accessed after an update within the same substep and timestep
state_udo = UDO(udo=udoExample(0, DF), masked_members=['obj', 'perception'])
policy_udoA = UDO(udo=udoExample(0, DF), masked_members=['obj', 'perception'])
policy_udoB = UDO(udo=udoExample(0, DF), masked_members=['obj', 'perception'])
def udo_policyA(_g, step, sL, s):
s['udo_policies']['udo_A'].updateX()
return {'udo_A': udoPipe(s['udo_policies']['udo_A'])}
def udo_policyB(_g, step, sL, s):
s['udo_policies']['udo_B'].updateX()
return {'udo_B': udoPipe(s['udo_policies']['udo_B'])}
policies = {"p1": udo_policyA, "p2": udo_policyB}
# ToDo: DataFrame Column order
state_dict = {
'increment': 0,
'state_udo': state_udo, 'state_udo_tracker_a': 0, 'state_udo_tracker_b': 0,
'state_udo_perception_tracker': {"ds1": None, "ds2": None, "ds3": None, "timestep": None},
'udo_policies': {'udo_A': policy_udoA, 'udo_B': policy_udoB},
'udo_policy_tracker_a': (0, 0), 'udo_policy_tracker_b': (0, 0),
'timestamp': '2019-01-01 00:00:00'
}
@curried
def perceive(s, self):
self.perception = self.ds[
(self.ds['run'] == s['run']) & (self.ds['substep'] == s['substep']) & (self.ds['timestep'] == s['timestep'])
].drop(columns=['run', 'substep']).to_dict()
return self
def view_udo_policy(_g, step, sL, s, _input):
return 'udo_policies', _input
def state_udo_update(_g, step, sL, s, _input):
y = 'state_udo'
# s['hydra_state'].updateX().anon(perceive(s))
s['state_udo'].updateX().perceive(s)
x = udoPipe(s['state_udo'])
return y, x
def increment(y, incr_by):
return lambda _g, step, sL, s, _input: (y, s[y] + incr_by)
def track(destination, source):
return lambda _g, step, sL, s, _input: (destination, s[source].x)
def track_udo_policy(destination, source):
def val_switch(v):
if isinstance(v, (pd.DataFrame, SilentDF)):
return SilentDF(v)
else:
return v.x
return lambda _g, step, sL, s, _input: (destination, tuple(val_switch(v) for _, v in s[source].items()))
def track_state_udo_perception(destination, source):
def default_if_empty(past_perception):
if len(past_perception) == 0:
return state_dict['state_udo_perception_tracker']
else:
return past_perception
return lambda _g, step, sL, s, _input: (destination, default_if_empty(s[source].perception))
def time_model(y, substeps, time_delta, ts_format='%Y-%m-%d %H:%M:%S'):
def apply_increment_condition(s):
if s['substep'] == 0 or s['substep'] == substeps:
return y, time_step(dt_str=s[y], dt_format=ts_format, _timedelta=time_delta)
else:
return y, s[y]
return lambda _g, step, sL, s, _input: apply_increment_condition(s)
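`time_model` advances the timestamp only on the first and last substeps, delegating the actual increment to cadCAD's `time_step` helper. A standard-library sketch of the same behavior (`advance_timestamp` below is a stand-in for `time_step`, not its actual implementation):

```python
from datetime import datetime, timedelta

TS_FORMAT = '%Y-%m-%d %H:%M:%S'

def advance_timestamp(dt_str, delta, fmt=TS_FORMAT):
    # Parse the timestamp string, add the timedelta, re-serialize.
    return (datetime.strptime(dt_str, fmt) + delta).strftime(fmt)

def next_timestamp(state, substeps, delta):
    # Mirrors the substep gate in `time_model`: only the first and last
    # substeps advance the clock; intermediate substeps leave it unchanged.
    if state['substep'] == 0 or state['substep'] == substeps:
        return advance_timestamp(state['timestamp'], delta)
    return state['timestamp']
```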
states = {
'increment': increment('increment', 1),
'state_udo_tracker_a': track('state_udo_tracker_a', 'state_udo'),
'state_udo': state_udo_update,
'state_udo_perception_tracker': track_state_udo_perception('state_udo_perception_tracker', 'state_udo'),
'state_udo_tracker_b': track('state_udo_tracker_b', 'state_udo'),
'udo_policy_tracker_a': track_udo_policy('udo_policy_tracker_a', 'udo_policies'),
'udo_policies': view_udo_policy,
'udo_policy_tracker_b': track_udo_policy('udo_policy_tracker_b', 'udo_policies')
}
substeps = 3
update_timestamp = time_model(
'timestamp',
substeps=substeps,
time_delta=timedelta(days=0, minutes=0, seconds=1),
ts_format='%Y-%m-%d %H:%M:%S'
)
states['timestamp'] = update_timestamp
PSUB = {
'policies': policies,
'states': states
}
# ToDo: mechanisms M1 & M2 still need behaviors
partial_state_update_blocks = [PSUB] * substeps
# pp.pprint(partial_state_update_blocks)
sim_config = config_sim({
"N": 2,
"T": range(4)
})
# ToDo: Bug without specifying parameters
append_configs(
sim_configs=sim_config,
initial_state=state_dict,
seeds={},
raw_exogenous_states={},
env_processes={},
partial_state_update_blocks=partial_state_update_blocks,
# policy_ops=[lambda a, b: {**a, **b}]
)
print()
print("State Updates:")
pp.pprint(partial_state_update_blocks)
print()
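The closure generators above (`increment`, `track`, `track_udo_policy`, ...) all follow the same convention: each returns a five-argument state-update function that produces a `(key, new_value)` pair. A minimal sketch of that convention, applied by hand to a plain state dict with dummy arguments in place of cadCAD's:

```python
def increment(y, incr_by):
    # Generates a cadCAD-style state-update function: given the previous
    # state dict `s`, it returns the (state key, new value) pair.
    return lambda _g, step, sL, s, _input: (y, s[y] + incr_by)

# Applying one update by hand to a plain state dict:
suf = increment('increment', 1)
key, value = suf(None, 1, [], {'increment': 41}, {})
```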

View File

@ -0,0 +1,25 @@
import pandas as pd
from typing import List
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import config1
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
print()
print("Tensor Field: config1")
# print(raw_result)
print(tabulate(tensor_field[['m', 'b1', 's1', 's2']], headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

View File

@ -0,0 +1,23 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import config2
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config2
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
print()
print("Tensor Field: config2")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

View File

@ -0,0 +1,26 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import external_dataset
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
result = pd.concat([result, result['external_data'].apply(pd.Series)], axis=1)[
['run', 'substep', 'timestep', 'increment', 'external_data', 'policies', 'ds1', 'ds2', 'ds3', ]
]
print()
print("Tensor Field: config1")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

View File

@ -0,0 +1,27 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import historical_state_access
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
# cols = ['run','substep','timestep','x','nonexsistant','last_x','2nd_to_last_x','3rd_to_last_x','4th_to_last_x']
cols = ['last_x']
result = result[cols]
print()
print("Tensor Field: config1")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

View File

@ -0,0 +1,25 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import config1, config2
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Concurrent Execution")
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
# print(configs)
i = 0
config_names = ['config1', 'config2']
for raw_result, tensor_field in run.execute():
result = pd.DataFrame(raw_result)
print()
print(f"Tensor Field: {config_names[i]}")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()
i += 1

View File

@ -0,0 +1,26 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import sweep_config
from cadCAD import configs
# pprint(configs)
exec_mode = ExecutionMode()
print("Simulation Execution: Concurrent Execution")
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
i = 0
config_names = ['sweep_config_A', 'sweep_config_B']
for raw_result, tensor_field in run.execute():
result = pd.DataFrame(raw_result)
print()
print("Tensor Field: " + config_names[i])
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()
i += 1

View File

@ -0,0 +1,25 @@
from pprint import pprint
import pandas as pd
from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import policy_aggregation
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
print()
print("Tensor Field: config1")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()

View File

@ -0,0 +1,48 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import udo
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=configs)
# cols = configs[0].initial_state.keys()
cols = [
'increment',
'state_udo_tracker', 'state_udo', 'state_udo_perception_tracker',
'udo_policies', 'udo_policy_tracker',
'timestamp'
]
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)[['run', 'substep', 'timestep'] + cols]
# result = pd.concat([result.drop(['c'], axis=1), result['c'].apply(pd.Series)], axis=1)
# print(list(result['c']))
# print(tabulate(result['c'].apply(pd.Series), headers='keys', tablefmt='psql'))
# print(result.iloc[8,:]['state_udo'].ds)
# ctypes.cast(id(v['state_udo']['mem_id']), ctypes.py_object).value
print()
print("Tensor Field: config1")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()
print(result.info(verbose=True))

View File

@ -0,0 +1,44 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import udo_inter_substep_update
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
# cols = configs[0].initial_state.keys()
cols = [
'increment',
'state_udo_tracker_a', 'state_udo', 'state_udo_perception_tracker', 'state_udo_tracker_b',
'udo_policy_tracker_a', 'udo_policies', 'udo_policy_tracker_b',
'timestamp'
]
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)[['run', 'substep', 'timestep'] + cols]
# result = pd.concat([result.drop(['c'], axis=1), result['c'].apply(pd.Series)], axis=1)
# print(list(result['c']))
# print(tabulate(result['c'].apply(pd.Series), headers='keys', tablefmt='psql'))
print()
print("Tensor Field: config1")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()
print(result.info(verbose=True))

View File

@ -0,0 +1,166 @@
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD.configuration import Configuration
from cadCAD.configuration.utils.userDefinedObject import udoPipe, UDO
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pprint as pp
T = 50 #iterations in our simulation
n = 3 #number of boxes in our network
m = 2 #for a Barabasi-Albert graph the number of edges is (n-m)*m
G = nx.barabasi_albert_graph(n, m)
k = len(G.edges)
# class udoExample(object):
# def __init__(self, G):
# self.G = G
# self.mem_id = str(hex(id(self)))
g = UDO(udo=G)
print()
# print(g.edges)
# print(G.edges)
# pp.pprint(f"{type(g)}: {g}")
# next
balls = np.zeros(n,)
for node in g.nodes:
rv = np.random.randint(1,25)
g.nodes[node]['initial_balls'] = rv
balls[node] = rv
# pp.pprint(balls)
# next
scale=100
nx.draw_kamada_kawai(G, node_size=balls*scale,labels=nx.get_node_attributes(G,'initial_balls'))
# next
initial_conditions = {'balls':balls, 'network':G}
print(initial_conditions)
# next
def update_balls(params, step, sL, s, _input):
delta_balls = _input['delta']
new_balls = s['balls']
for e in G.edges:
move_ball = delta_balls[e]
src = e[0]
dst = e[1]
if (new_balls[src] >= move_ball) and (new_balls[dst] >= -move_ball):
new_balls[src] = new_balls[src] - move_ball
new_balls[dst] = new_balls[dst] + move_ball
key = 'balls'
value = new_balls
return (key, value)
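`update_balls` conserves the total number of balls: each transfer subtracts from the source exactly what it adds to the destination, and is skipped if either endpoint would go negative. A standalone sketch of that invariant with plain dicts (the edge list and delta values are illustrative, not taken from the simulation):

```python
def apply_deltas(balls, deltas):
    # balls: {node: count}; deltas: {(src, dst): signed transfer along edge}
    new = dict(balls)
    for (src, dst), move in deltas.items():
        # Skip transfers that would drive either endpoint negative.
        if new[src] >= move and new[dst] >= -move:
            new[src] -= move
            new[dst] += move
    return new

before = {0: 5, 1: 2, 2: 7}
after = apply_deltas(before, {(0, 1): 2, (1, 2): -1})
```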
def update_network(params, step, sL, s, _input):
new_nodes = _input['nodes']
new_edges = _input['edges']
new_balls = _input['quantity']
graph = s['network']
for node in new_nodes:
graph.add_node(node)
graph.nodes[node]['initial_balls'] = new_balls[node]
graph.nodes[node]['strat'] = _input['node_strats'][node]
for edge in new_edges:
graph.add_edge(edge[0], edge[1])
graph.edges[edge]['strat'] = _input['edge_strats'][edge]
key = 'network'
value = graph
return (key, value)
def update_network_balls(params, step, sL, s, _input):
new_nodes = _input['nodes']
new_balls = _input['quantity']
balls = np.zeros(len(s['balls']) + len(new_nodes))
for node in s['network'].nodes:
balls[node] = s['balls'][node]
for node in new_nodes:
balls[node] = new_balls[node]
key = 'balls'
value = balls
return (key, value)
# next
def greedy_robot(src_balls, dst_balls):
# robot wishes to accumulate balls at its source
# takes half of its neighbors balls
if src_balls < dst_balls:
delta = -np.floor(dst_balls / 2)
else:
delta = 0
return delta
def fair_robot(src_balls, dst_balls):
# robot follows the simple balancing rule
delta = np.sign(src_balls - dst_balls)
return delta
def giving_robot(src_balls, dst_balls):
# robot wishes to give away balls one at a time
if src_balls > 0:
delta = 1
else:
delta = 0
return delta
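The three strategies above each map a (source, destination) ball count to a signed transfer. A stdlib-only restatement (using `math` and an integer sign expression in place of NumPy), with the same decision rules:

```python
import math

def greedy(src, dst):
    # Take half of the neighbour's balls when it holds more than we do.
    return -math.floor(dst / 2) if src < dst else 0

def fair(src, dst):
    # Move one ball toward the poorer endpoint: sign(src - dst).
    return (src > dst) - (src < dst)

def giving(src, dst):
    # Give away one ball whenever we have any.
    return 1 if src > 0 else 0
```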
# next
robot_strategies = [greedy_robot,fair_robot, giving_robot]
for node in G.nodes:
nstrats = len(robot_strategies)
rv = np.random.randint(0,nstrats)
G.nodes[node]['strat'] = robot_strategies[rv]
for e in G.edges:
owner_node = e[0]
G.edges[e]['strat'] = G.nodes[owner_node]['strat']
# next
def robotic_network(params, step, sL, s):
graph = s['network']
delta_balls = {}
for e in graph.edges:
src = e[0]
src_balls = s['balls'][src]
dst = e[1]
dst_balls = s['balls'][dst]
# transfer balls according to specific robot strat
strat = graph.edges[e]['strat']
delta_balls[e] = strat(src_balls, dst_balls)
return_dict = {'nodes': [], 'edges': {}, 'quantity': {}, 'node_strats': {}, 'edge_strats': {}, 'delta': delta_balls}
return (return_dict)

View File

@ -0,0 +1,221 @@
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils.userDefinedObject import udoPipe, UDO
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
T = 50 #iterations in our simulation
n = 3 #number of boxes in our network
m = 2 #for a Barabasi-Albert graph the number of edges is (n-m)*m
G = nx.barabasi_albert_graph(n, m)
k = len(G.edges)
balls = np.zeros(n,)
for node in G.nodes:
rv = np.random.randint(1,25)
G.nodes[node]['initial_balls'] = rv
balls[node] = rv
scale=100
nx.draw_kamada_kawai(G, node_size=balls*scale,labels=nx.get_node_attributes(G,'initial_balls'))
def greedy_robot(src_balls, dst_balls):
# robot wishes to accumulate balls at its source
# takes half of its neighbors balls
if src_balls < dst_balls:
return -np.floor(dst_balls / 2)
else:
return 0
def fair_robot(src_balls, dst_balls):
# robot follows the simple balancing rule
return np.sign(src_balls - dst_balls)
def giving_robot(src_balls, dst_balls):
# robot wishes to give away balls one at a time
if src_balls > 0:
return 1
else:
return 0
robot_strategies = [greedy_robot,fair_robot, giving_robot]
for node in G.nodes:
nstrats = len(robot_strategies)
rv = np.random.randint(0,nstrats)
G.nodes[node]['strat'] = robot_strategies[rv]
for e in G.edges:
owner_node = e[0]
G.edges[e]['strat'] = G.nodes[owner_node]['strat']
default_policy = {'nodes': [], 'edges': {}, 'quantity': {}, 'node_strats': {}, 'edge_strats': {}, 'delta': {}}
class robot(object):
def __init__(self, graph, balls, internal_policy=default_policy):
self.mem_id = str(hex(id(self)))
self.internal_policy = internal_policy
self.graph = graph
self.balls = balls
def robotic_network(self, graph, balls): # move balls
self.graph, self.balls = graph, balls
delta_balls = {}
for e in self.graph.edges:
src = e[0]
src_balls = self.balls[src]
dst = e[1]
dst_balls = self.balls[dst]
# transfer balls according to specific robot strat
strat = self.graph.edges[e]['strat']
delta_balls[e] = strat(src_balls, dst_balls)
self.internal_policy = {'nodes': [], 'edges': {}, 'quantity': {}, 'node_strats': {}, 'edge_strats': {}, 'delta': delta_balls}
return self
def agent_arrival(self, graph, balls): # add node
self.graph, self.balls = graph, balls
node = len(self.graph.nodes)
edge_list = self.graph.edges
# choose m random edges without replacement
# new = np.random.choice(edge_list, m)
new = [0, 1] # tester
nodes = [node]
edges = [(node, new_node) for new_node in new]
initial_balls = {node: np.random.randint(1, 25)}
rv = np.random.randint(0, nstrats)
node_strat = {node: robot_strategies[rv]}
edge_strats = {e: robot_strategies[rv] for e in edges}
self.internal_policy = {'nodes': nodes,
'edges': edges,
'quantity': initial_balls,
'node_strats': node_strat,
'edge_strats': edge_strats,
'delta': np.zeros(node + 1)
}
return self
robot_udo = UDO(udo=robot(G, balls), masked_members=['obj'])
initial_conditions = {'balls': balls, 'network': G, 'robot': robot_udo}
def update_balls(params, step, sL, s, _input):
delta_balls = _input['robot'].internal_policy['delta']
new_balls = s['balls']
for e in G.edges:
move_ball = delta_balls[e]
src = e[0]
dst = e[1]
if (new_balls[src] >= move_ball) and (new_balls[dst] >= -move_ball):
new_balls[src] = new_balls[src] - move_ball
new_balls[dst] = new_balls[dst] + move_ball
key = 'balls'
value = new_balls
return (key, value)
def update_network(params, step, sL, s, _input):
new_nodes = _input['robot'].internal_policy['nodes']
new_edges = _input['robot'].internal_policy['edges']
new_balls = _input['robot'].internal_policy['quantity']
graph = s['network']
for node in new_nodes:
graph.add_node(node)
graph.nodes[node]['initial_balls'] = new_balls[node]
graph.nodes[node]['strat'] = _input['robot'].internal_policy['node_strats'][node]
for edge in new_edges:
graph.add_edge(edge[0], edge[1])
graph.edges[edge]['strat'] = _input['robot'].internal_policy['edge_strats'][edge]
key = 'network'
value = graph
return (key, value)
def update_network_balls(params, step, sL, s, _input):
new_nodes = _input['robot'].internal_policy['nodes']
new_balls = _input['robot'].internal_policy['quantity']
balls = np.zeros(len(s['balls']) + len(new_nodes))
for node in s['network'].nodes:
balls[node] = s['balls'][node]
for node in new_nodes:
balls[node] = new_balls[node]
key = 'balls'
value = balls
return (key, value)
def robotic_network(params, step, sL, s):
s['robot'].robotic_network(s['network'], s['balls'])
return {'robot': udoPipe(s['robot'])}
def agent_arrival(params, step, sL, s):
s['robot'].agent_arrival(s['network'], s['balls'])
return {'robot': udoPipe(s['robot'])}
def get_robot(params, step, sL, s, _input):
return 'robot', _input['robot']
partial_state_update_blocks = [
{
'policies': {
# The following policy functions will be evaluated and their returns will be passed to the state update functions
'p1': robotic_network
},
'variables': { # The following state variables will be updated simultaneously
'balls': update_balls,
'robot': get_robot
}
},
{
'policies': {
# The following policy functions will be evaluated and their returns will be passed to the state update functions
'p1': agent_arrival
},
'variables': { # The following state variables will be updated simultaneously
'network': update_network,
'balls': update_network_balls,
'robot': get_robot
}
}
]
simulation_parameters = {
'T': range(T),
'N': 1,
'M': {}
}
append_configs(
sim_configs=simulation_parameters, #dict containing state update functions
initial_state=initial_conditions, #dict containing variable names and initial values
partial_state_update_blocks= partial_state_update_blocks #, #dict containing state update functions
# policy_ops=[lambda a, b: {**a, **b}]
)
# config = Configuration(initial_state=initial_conditions, #dict containing variable names and initial values
# partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
# sim_config=simulation_parameters #dict containing simulation parameters
# )

View File

@ -0,0 +1,25 @@
import pandas as pd
from tabulate import tabulate
# The following imports NEED to be in the exact order
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD import configs
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
print()
print("Tensor Field: config1")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print("Output:")
print(tabulate(result, headers='keys', tablefmt='psql'))
print(result[['network']])
print()
print(result[['network', 'substep']])

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,139 @@
from decimal import Decimal
import numpy as np
from datetime import timedelta
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import env_proc_trigger, bound_norm_random, ep_time_step, config_sim
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(3)
}
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'param1': 1}
def p2m1(_g, step, sL, s):
return {'param2': 4}
def p1m2(_g, step, sL, s):
return {'param1': 'a', 'param2': 2}
def p2m2(_g, step, sL, s):
return {'param1': 'b', 'param2': 4}
def p1m3(_g, step, sL, s):
return {'param1': ['c'], 'param2': np.array([10, 100])}
def p2m3(_g, step, sL, s):
return {'param1': ['d'], 'param2': np.array([20, 200])}
# Internal States per Mechanism
def s1m1(_g, step, sL, s, _input):
y = 's1'
x = _input['param1']
return (y, x)
def s2m1(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def s1m2(_g, step, sL, s, _input):
y = 's1'
x = _input['param1']
return (y, x)
def s2m2(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def s1m3(_g, step, sL, s, _input):
y = 's1'
x = _input['param1']
return (y, x)
def s2m3(_g, step, sL, s, _input):
y = 's2'
x = _input['param2']
return (y, x)
def s1m4(_g, step, sL, s, _input):
y = 's1'
x = [1]
return (y, x)
# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es3p1(_g, step, sL, s, _input):
y = 's3'
x = s['s3'] * bound_norm_random(seeds['a'], proc_one_coef_A, proc_one_coef_B)
return (y, x)
def es4p2(_g, step, sL, s, _input):
y = 's4'
x = s['s4'] * bound_norm_random(seeds['b'], proc_one_coef_A, proc_one_coef_B)
return (y, x)
ts_format = '%Y-%m-%d %H:%M:%S'
t_delta = timedelta(days=0, minutes=0, seconds=1)
def es5p2(_g, step, sL, s, _input):
y = 'timestamp'
x = ep_time_step(s, dt_str=s['timestamp'], fromat_str=ts_format, _timedelta=t_delta)
return (y, x)
# Environment States
def env_a(x):
return 5
def env_b(x):
return 10
# Genesis States
genesis_states = {
's1': Decimal(0.0),
's2': Decimal(0.0),
's3': Decimal(1.0),
's4': Decimal(1.0),
'timestamp': '2018-10-01 15:16:24'
}
raw_exogenous_states = {
"s3": es3p1,
"s4": es4p2,
"timestamp": es5p2
}
env_processes = {
"s3": env_a,
"s4": env_proc_trigger('2018-10-01 15:16:25', env_b)
}
partial_state_update_block = [
]
sim_config = config_sim(
{
"N": 2,
"T": range(5),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
seeds={},
raw_exogenous_states={},
env_processes={},
partial_state_update_blocks=partial_state_update_block
)

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,150 @@
import networkx as nx
from scipy.stats import expon, gamma
import numpy as np
import matplotlib.pyplot as plt
#helper functions
def get_nodes_by_type(g, node_type_selection):
return [node for node in g.nodes if g.nodes[node]['type']== node_type_selection ]
def get_edges_by_type(g, edge_type_selection):
return [edge for edge in g.edges if g.edges[edge]['type']== edge_type_selection ]
def total_funds_given_total_supply(total_supply):
#can put any bonding curve invariant here for initialization!
total_funds = total_supply
return total_funds
#maximum share of funds a proposal can take
default_beta = .2 #later we should set this to be param so we can sweep it
# tuning param for the trigger function
default_rho = .001
def trigger_threshold(requested, funds, supply, beta = default_beta, rho = default_rho):
share = requested/funds
if share < beta:
return rho*supply/(beta-share)**2
else:
return np.inf
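With the defaults beta = 0.2 and rho = 0.001, the trigger grows as the requested share of funds approaches beta and becomes infinite at or above it. A stdlib restatement of the formula with illustrative values:

```python
import math

BETA, RHO = 0.2, 0.001

def trigger_threshold(requested, funds, supply, beta=BETA, rho=RHO):
    share = requested / funds
    if share < beta:
        # Conviction required scales with supply and blows up as share -> beta.
        return rho * supply / (beta - share) ** 2
    return math.inf

small = trigger_threshold(requested=10, funds=100, supply=1000)    # share 0.1
blocked = trigger_threshold(requested=25, funds=100, supply=1000)  # share 0.25 >= beta
```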
def initialize_network(n, m, funds_func=total_funds_given_total_supply, trigger_func=trigger_threshold):
network = nx.DiGraph()
for i in range(n):
network.add_node(i)
network.nodes[i]['type']="participant"
h_rv = expon.rvs(loc=0.0, scale=1000)
network.nodes[i]['holdings'] = h_rv
s_rv = np.random.rand()
network.nodes[i]['sentiment'] = s_rv
participants = get_nodes_by_type(network, 'participant')
initial_supply = np.sum([ network.nodes[i]['holdings'] for i in participants])
initial_funds = funds_func(initial_supply)
#generate initial proposals
for ind in range(m):
j = n+ind
network.add_node(j)
network.nodes[j]['type']="proposal"
network.nodes[j]['conviction']=0
network.nodes[j]['status']='candidate'
network.nodes[j]['age']=0
r_rv = gamma.rvs(3,loc=0.001, scale=10000)
network.nodes[j]['funds_requested'] = r_rv
network.nodes[j]['trigger']= trigger_threshold(r_rv, initial_funds, initial_supply)
for i in range(n):
network.add_edge(i, j)
rv = np.random.rand()
a_rv = 1-4*(1-rv)*rv #polarized distribution
network.edges[(i, j)]['affinity'] = a_rv
network.edges[(i,j)]['tokens'] = 0
network.edges[(i, j)]['conviction'] = 0
proposals = get_nodes_by_type(network, 'proposal')
total_requested = np.sum([ network.nodes[i]['funds_requested'] for i in proposals])
return network, initial_funds, initial_supply, total_requested
def trigger_sweep(field, trigger_func,xmax=.2,default_alpha=.5):
if field == 'token_supply':
alpha = default_alpha
share_of_funds = np.arange(.001,xmax,.001)
total_supply = np.arange(0,10**9, 10**6)
demo_data_XY = np.outer(share_of_funds,total_supply)
demo_data_Z0=np.empty(demo_data_XY.shape)
demo_data_Z1=np.empty(demo_data_XY.shape)
demo_data_Z2=np.empty(demo_data_XY.shape)
demo_data_Z3=np.empty(demo_data_XY.shape)
for sof_ind in range(len(share_of_funds)):
sof = share_of_funds[sof_ind]
for ts_ind in range(len(total_supply)):
ts = total_supply[ts_ind]
tc = ts /(1-alpha)
trigger = trigger_func(sof, 1, ts)
demo_data_Z0[sof_ind,ts_ind] = np.log10(trigger)
demo_data_Z1[sof_ind,ts_ind] = trigger
demo_data_Z2[sof_ind,ts_ind] = trigger/tc #share of maximum possible conviction
demo_data_Z3[sof_ind,ts_ind] = np.log10(trigger/tc)
return {'log10_trigger':demo_data_Z0,
'trigger':demo_data_Z1,
'share_of_max_conv': demo_data_Z2,
'log10_share_of_max_conv':demo_data_Z3,
'total_supply':total_supply,
'share_of_funds':share_of_funds}
elif field == 'alpha':
alpha = np.arange(.5,1,.01)
share_of_funds = np.arange(.001,xmax,.001)
total_supply = 10**9
demo_data_XY = np.outer(share_of_funds,alpha)
demo_data_Z4=np.empty(demo_data_XY.shape)
demo_data_Z5=np.empty(demo_data_XY.shape)
demo_data_Z6=np.empty(demo_data_XY.shape)
demo_data_Z7=np.empty(demo_data_XY.shape)
for sof_ind in range(len(share_of_funds)):
sof = share_of_funds[sof_ind]
for a_ind in range(len(alpha)):
ts = total_supply
a = alpha[a_ind]
tc = ts /(1-a)
trigger = trigger_func(sof, 1, ts)
demo_data_Z4[sof_ind,a_ind] = np.log10(trigger)
demo_data_Z5[sof_ind,a_ind] = trigger
demo_data_Z6[sof_ind,a_ind] = trigger/tc #share of maximum possible conviction
demo_data_Z7[sof_ind,a_ind] = np.log10(trigger/tc)
return {'log10_trigger':demo_data_Z4,
'trigger':demo_data_Z5,
'share_of_max_conv': demo_data_Z6,
'log10_share_of_max_conv':demo_data_Z7,
'alpha':alpha,
'share_of_funds':share_of_funds}
else:
raise ValueError('invalid field: ' + str(field))
def trigger_plotter(share_of_funds,Z, color_label,y, ylabel,cmap='jet'):
dims = (10, 5)
fig, ax = plt.subplots(figsize=dims)
cf = plt.contourf(share_of_funds, y, Z.T, 100, cmap=cmap)
cbar=plt.colorbar(cf)
plt.axis([share_of_funds[0], share_of_funds[-1], y[0], y[-1]])
#ax.set_xscale('log')
plt.ylabel(ylabel)
plt.xlabel('Share of Funds Requested')
plt.title('Trigger Function Map')
cbar.ax.set_ylabel(color_label)

View File

@ -0,0 +1,548 @@
import numpy as np
from cadCAD.configuration.utils import config_sim
from simulations.validation.conviction_helpers import *
#import networkx as nx
from scipy.stats import expon, gamma
#functions for partial state update block 1
#Driving processes: arrival of participants, proposals and funds
##-----------------------------------------
def gen_new_participant(network, new_participant_holdings):
i = len([node for node in network.nodes])
network.add_node(i)
network.nodes[i]['type']="participant"
s_rv = np.random.rand()
network.nodes[i]['sentiment'] = s_rv
network.nodes[i]['holdings']=new_participant_holdings
for j in get_nodes_by_type(network, 'proposal'):
network.add_edge(i, j)
rv = np.random.rand()
a_rv = 1-4*(1-rv)*rv #polarized distribution
network.edges[(i, j)]['affinity'] = a_rv
network.edges[(i,j)]['tokens'] = a_rv*network.nodes[i]['holdings']
network.edges[(i, j)]['conviction'] = 0
return network
scale_factor = 1000
def gen_new_proposal(network, funds, supply, total_funds, trigger_func):
j = len([node for node in network.nodes])
network.add_node(j)
network.nodes[j]['type']="proposal"
network.nodes[j]['conviction']=0
network.nodes[j]['status']='candidate'
network.nodes[j]['age']=0
rescale = scale_factor*funds/total_funds
r_rv = gamma.rvs(3,loc=0.001, scale=rescale)
network.nodes[j]['funds_requested'] = r_rv
network.nodes[j]['trigger']= trigger_func(r_rv, funds, supply)
participants = get_nodes_by_type(network, 'participant')
proposing_participant = np.random.choice(participants)
for i in participants:
network.add_edge(i, j)
if i==proposing_participant:
network.edges[(i, j)]['affinity']=1
else:
rv = np.random.rand()
a_rv = 1-4*(1-rv)*rv #polarized distribution
network.edges[(i, j)]['affinity'] = a_rv
network.edges[(i, j)]['conviction'] = 0
network.edges[(i,j)]['tokens'] = 0
return network
def driving_process(params, step, sL, s):
#placeholder plumbing for random processes
arrival_rate = 10/s['sentiment']
rv1 = np.random.rand()
new_participant = bool(rv1<1/arrival_rate)
if new_participant:
h_rv = expon.rvs(loc=0.0, scale=1000)
new_participant_holdings = h_rv
else:
new_participant_holdings = 0
network = s['network']
affinities = [network.edges[e]['affinity'] for e in network.edges ]
median_affinity = np.median(affinities)
proposals = get_nodes_by_type(network, 'proposal')
fund_requests = [network.nodes[j]['funds_requested'] for j in proposals if network.nodes[j]['status']=='candidate' ]
funds = s['funds']
total_funds_requested = np.sum(fund_requests)
proposal_rate = 10/median_affinity * total_funds_requested/funds
rv2 = np.random.rand()
new_proposal = bool(rv2<1/proposal_rate)
sentiment = s['sentiment']
funds = s['funds']
scale_factor = 1+4000*sentiment**2
#this shouldn't happen but expon is throwing domain errors
if scale_factor > 1:
funds_arrival = expon.rvs(loc = 0, scale = scale_factor )
else:
funds_arrival = 0
return({'new_participant':new_participant,
'new_participant_holdings':new_participant_holdings,
'new_proposal':new_proposal,
'funds_arrival':funds_arrival})
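`driving_process` draws a new participant each timestep with probability `1/arrival_rate = sentiment/10`, so arrivals speed up as sentiment rises. A standalone frequency check, assuming a mid-range sentiment of 0.5 (illustration only):

```python
import numpy as np

def participant_arrives(sentiment, rng):
    # Mirrors driving_process: per-step arrival probability is sentiment/10
    arrival_rate = 10 / sentiment
    return rng.random() < 1 / arrival_rate

rng = np.random.default_rng(0)
arrivals = sum(participant_arrives(0.5, rng) for _ in range(100_000))
frequency = arrivals / 100_000  # should sit near 0.5/10 = 0.05
```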
#Mechanisms for updating the state based on driving processes
##---
def update_network(params, step, sL, s, _input):
network = s['network']
funds = s['funds']
supply = s['supply']
trigger_func = params['trigger_func']
new_participant = _input['new_participant'] #T/F
new_proposal = _input['new_proposal'] #T/F
if new_participant:
new_participant_holdings = _input['new_participant_holdings']
network = gen_new_participant(network, new_participant_holdings)
if new_proposal:
        network = gen_new_proposal(network, funds, supply, trigger_func)
#update age of the existing proposals
proposals = get_nodes_by_type(network, 'proposal')
for j in proposals:
network.nodes[j]['age'] = network.nodes[j]['age']+1
if network.nodes[j]['status'] == 'candidate':
requested = network.nodes[j]['funds_requested']
network.nodes[j]['trigger'] = trigger_func(requested, funds, supply)
else:
network.nodes[j]['trigger'] = np.nan
key = 'network'
value = network
return (key, value)
def increment_funds(params, step, sL, s, _input):
funds = s['funds']
funds_arrival = _input['funds_arrival']
#increment funds
funds = funds + funds_arrival
key = 'funds'
value = funds
return (key, value)
def increment_supply(params, step, sL, s, _input):
supply = s['supply']
supply_arrival = _input['new_participant_holdings']
    #increment supply
supply = supply + supply_arrival
key = 'supply'
value = supply
return (key, value)
#functions for partial state update block 2
#Driving processes: completion of previously funded proposals
##-----------------------------------------
def check_progress(params, step, sL, s):
network = s['network']
proposals = get_nodes_by_type(network, 'proposal')
completed = []
for j in proposals:
if network.nodes[j]['status'] == 'active':
grant_size = network.nodes[j]['funds_requested']
base_completion_rate=params['base_completion_rate']
likelihood = 1.0/(base_completion_rate+np.log(grant_size))
if np.random.rand() < likelihood:
completed.append(j)
return({'completed':completed})
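`check_progress` gives each active proposal a per-step completion probability of `1/(base_completion_rate + log(grant_size))`, so larger grants take longer on average; under this geometric model the expected time to completion is the reciprocal. A standalone sketch using the default `base_completion_rate` of 10 from the params below:

```python
import numpy as np

def completion_likelihood(grant_size, base_completion_rate=10):
    # Per-timestep completion probability used in check_progress
    return 1.0 / (base_completion_rate + np.log(grant_size))

# A grant of e**2 completes with probability 1/12 per step
# (~12 steps to completion in expectation); bigger grants are slower.
p_small = completion_likelihood(np.e ** 2)
p_large = completion_likelihood(np.e ** 4)
```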
#Mechanisms for updating the state based on check progress
##---
def complete_proposal(params, step, sL, s, _input):
network = s['network']
participants = get_nodes_by_type(network, 'participant')
completed = _input['completed']
for j in completed:
network.nodes[j]['status']='completed'
for i in participants:
force = network.edges[(i,j)]['affinity']
            sentiment = network.nodes[i]['sentiment']
            network.nodes[i]['sentiment'] = get_sentimental(sentiment, force, decay=0)
key = 'network'
value = network
return (key, value)
def update_sentiment_on_completion(params, step, sL, s, _input):
network = s['network']
proposals = get_nodes_by_type(network, 'proposal')
completed = _input['completed']
grants_outstanding = np.sum([network.nodes[j]['funds_requested'] for j in proposals if network.nodes[j]['status']=='active'])
grants_completed = np.sum([network.nodes[j]['funds_requested'] for j in completed])
sentiment = s['sentiment']
force = grants_completed/grants_outstanding
mu = params['sentiment_decay']
if (force >=0) and (force <=1):
sentiment = get_sentimental(sentiment, force, mu)
else:
sentiment = get_sentimental(sentiment, 0, mu)
key = 'sentiment'
value = sentiment
return (key, value)
def get_sentimental(sentiment, force, decay=0):
mu = decay
sentiment = sentiment*(1-mu) + force
if sentiment > 1:
sentiment = 1
return sentiment
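`get_sentimental` is a decayed accumulator clipped at 1: `sentiment <- sentiment*(1 - mu) + force`. A standalone copy with a worked check (illustration only):

```python
def get_sentimental(sentiment, force, decay=0):
    # Decay the old sentiment by mu, add the incoming force, cap at 1
    mu = decay
    sentiment = sentiment * (1 - mu) + force
    return min(sentiment, 1)

# 0.5 decayed by mu=0.1 plus a force of 0.2 -> 0.5*0.9 + 0.2 = 0.65
x = get_sentimental(0.5, 0.2, decay=0.1)
# A large force saturates at the cap of 1
y = get_sentimental(0.9, 0.5)
```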
#functions for partial state update block 3
#Decision processes: trigger function policy
##-----------------------------------------
def trigger_function(params, step, sL, s):
network = s['network']
funds = s['funds']
supply = s['supply']
proposals = get_nodes_by_type(network, 'proposal')
tmin = params['tmin']
accepted = []
triggers = {}
for j in proposals:
if network.nodes[j]['status'] == 'candidate':
requested = network.nodes[j]['funds_requested']
age = network.nodes[j]['age']
threshold = trigger_threshold(requested, funds, supply)
if age > tmin:
conviction = network.nodes[j]['conviction']
if conviction >threshold:
accepted.append(j)
else:
threshold = np.nan
triggers[j] = threshold
return({'accepted':accepted, 'triggers':triggers})
def decrement_funds(params, step, sL, s, _input):
funds = s['funds']
network = s['network']
accepted = _input['accepted']
#decrement funds
for j in accepted:
funds = funds - network.nodes[j]['funds_requested']
key = 'funds'
value = funds
return (key, value)
def update_proposals(params, step, sL, s, _input):
network = s['network']
accepted = _input['accepted']
triggers = _input['triggers']
participants = get_nodes_by_type(network, 'participant')
    proposals = get_nodes_by_type(network, 'proposal')
sensitivity = params['sensitivity']
for j in proposals:
network.nodes[j]['trigger'] = triggers[j]
#bookkeeping conviction and participant sentiment
for j in accepted:
network.nodes[j]['status']='active'
network.nodes[j]['conviction']=np.nan
#change status to active
for i in participants:
#operating on edge = (i,j)
#reset tokens assigned to other candidates
network.edges[(i,j)]['tokens']=0
network.edges[(i,j)]['conviction'] = np.nan
#update participants sentiments (positive or negative)
affinities = [network.edges[(i,p)]['affinity'] for p in proposals if not(p in accepted)]
if len(affinities)>1:
max_affinity = np.max(affinities)
force = network.edges[(i,j)]['affinity']-sensitivity*max_affinity
else:
force = 0
            #force compares affinity for the accepted proposal against the best remaining candidate
            network.nodes[i]['sentiment'] = get_sentimental(network.nodes[i]['sentiment'], force, decay=0)
key = 'network'
value = network
return (key, value)
def update_sentiment_on_release(params, step, sL, s, _input):
network = s['network']
proposals = get_nodes_by_type(network, 'proposal')
accepted = _input['accepted']
proposals_outstanding = np.sum([network.nodes[j]['funds_requested'] for j in proposals if network.nodes[j]['status']=='candidate'])
proposals_accepted = np.sum([network.nodes[j]['funds_requested'] for j in accepted])
sentiment = s['sentiment']
force = proposals_accepted/proposals_outstanding
    if (force >=0) and (force <=1):
        sentiment = get_sentimental(sentiment, force, decay=0)
    else:
        sentiment = get_sentimental(sentiment, 0, decay=0)
key = 'sentiment'
value = sentiment
return (key, value)
def participants_decisions(params, step, sL, s):
network = s['network']
participants = get_nodes_by_type(network, 'participant')
proposals = get_nodes_by_type(network, 'proposal')
candidates = [j for j in proposals if network.nodes[j]['status']=='candidate']
sensitivity = params['sensitivity']
gain = .01
delta_holdings={}
proposals_supported ={}
for i in participants:
force = network.nodes[i]['sentiment']-sensitivity
delta_holdings[i] = network.nodes[i]['holdings']*gain*force
support = []
for j in candidates:
affinity = network.edges[(i, j)]['affinity']
cutoff = sensitivity*np.max([network.edges[(i,p)]['affinity'] for p in candidates])
if cutoff <.5:
cutoff = .5
if affinity > cutoff:
support.append(j)
proposals_supported[i] = support
return({'delta_holdings':delta_holdings, 'proposals_supported':proposals_supported})
def update_tokens(params, step, sL, s, _input):
network = s['network']
delta_holdings = _input['delta_holdings']
proposals = get_nodes_by_type(network, 'proposal')
proposals_supported = _input['proposals_supported']
participants = get_nodes_by_type(network, 'participant')
alpha = params['alpha']
for i in participants:
network.nodes[i]['holdings'] = network.nodes[i]['holdings']+delta_holdings[i]
supported = proposals_supported[i]
total_affinity = np.sum([ network.edges[(i, j)]['affinity'] for j in supported])
for j in proposals:
if j in supported:
normalized_affinity = network.edges[(i, j)]['affinity']/total_affinity
network.edges[(i, j)]['tokens'] = normalized_affinity*network.nodes[i]['holdings']
else:
network.edges[(i, j)]['tokens'] = 0
prior_conviction = network.edges[(i, j)]['conviction']
current_tokens = network.edges[(i, j)]['tokens']
network.edges[(i, j)]['conviction'] =current_tokens+alpha*prior_conviction
for j in proposals:
network.nodes[j]['conviction'] = np.sum([ network.edges[(i, j)]['conviction'] for i in participants])
key = 'network'
value = network
return (key, value)
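`update_tokens` accumulates conviction per edge as `conviction <- tokens + alpha*conviction`. Under constant staking this geometric series converges to `tokens/(1 - alpha)`, so `alpha` sets how much "memory" conviction carries. A standalone sketch of the recurrence (illustration only):

```python
# Conviction recurrence from update_tokens: c_t = tokens + alpha * c_{t-1}.
# With constant tokens it converges to the fixed point tokens / (1 - alpha).
def converge_conviction(tokens, alpha, steps=300):
    conviction = 0.0
    for _ in range(steps):
        conviction = tokens + alpha * conviction
    return conviction

steady = converge_conviction(tokens=100.0, alpha=0.5)  # fixed point 100/(1-0.5)
```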
def update_supply(params, step, sL, s, _input):
supply = s['supply']
delta_holdings = _input['delta_holdings']
delta_supply = np.sum([v for v in delta_holdings.values()])
supply = supply + delta_supply
key = 'supply'
value = supply
return (key, value)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The Partial State Update Blocks
partial_state_update_blocks = [
{
'policies': {
#new proposals or new participants
'random': driving_process
},
'variables': {
'network': update_network,
'funds':increment_funds,
'supply':increment_supply
}
},
{
'policies': {
'completion': check_progress #see if any of the funded proposals completes
},
'variables': { # The following state variables will be updated simultaneously
            'sentiment': update_sentiment_on_completion, #decay drags sentiment down; completions bump it up
'network': complete_proposal #book-keeping
}
},
{
'policies': {
'release': trigger_function #check each proposal to see if it passes
},
'variables': { # The following state variables will be updated simultaneously
'funds': decrement_funds, #funds expended
'sentiment': update_sentiment_on_release, #releasing funds can bump sentiment
'network': update_proposals #reset convictions, and participants sentiments
#update based on affinities
}
},
{
'policies': {
'participants_act': participants_decisions, #high sentiment, high affinity =>buy
#low sentiment, low affinities => burn
#assign tokens to top affinities
},
'variables': {
'supply': update_supply,
            'network': update_tokens #update everyone's holdings
#and their conviction for each proposal
}
}
]
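Each block above pairs policies with state update functions. A simplified sketch of how one block executes (not the real cadCAD engine): policies run first, their output dicts are merged into `_input`, and then every state update function in the block receives that same `_input`:

```python
# Simplified sketch of one partial state update block, cadCAD-style.
# (cadCAD itself aggregates overlapping policy outputs rather than
# overwriting them; dict.update here is a simplification.)
def run_block(block, params, step, sL, s):
    _input = {}
    for policy in block['policies'].values():
        _input.update(policy(params, step, sL, s))
    new_state = dict(s)
    for updater in block['variables'].values():
        key, value = updater(params, step, sL, s, _input)
        new_state[key] = value
    return new_state
```

For example, a toy block whose policy emits `{'delta': 2}` and whose updater adds it to `x` turns `{'x': 1}` into `{'x': 3}`.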
n= 25 #initial participants
m= 3 #initial proposals
initial_sentiment = .5
network, initial_funds, initial_supply, total_requested = initialize_network(n,m,total_funds_given_total_supply,trigger_threshold)
initial_conditions = {'network':network,
'supply': initial_supply,
'funds':initial_funds,
'sentiment': initial_sentiment}
#power of 1 token forever
# conviction_capacity = [2]
# alpha = [1-1/cc for cc in conviction_capacity]
# print(alpha)
params={
'sensitivity': [.75],
'tmin': [7], #unit days; minimum periods passed before a proposal can pass
'sentiment_decay': [.001], #termed mu in the state update function
'alpha': [0.5, 0.9],
'base_completion_rate': [10],
'trigger_func': [trigger_threshold]
}
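In the `M` dict above each key maps to a list; `alpha` carries two values, so the sweep yields two parameter configurations. A rough sketch of the expansion (not cadCAD's actual implementation; single-value lists are broadcast, and `trigger_func` is omitted here since it is a function reference):

```python
# Sketch of a cadCAD-style 'M' sweep expansion: the longest list sets the
# number of configurations, single-value lists are broadcast to all of them.
params = {
    'sensitivity': [.75],
    'tmin': [7],
    'sentiment_decay': [.001],
    'alpha': [0.5, 0.9],
    'base_completion_rate': [10],
}
n_configs = max(len(v) for v in params.values())
configs = [
    {k: (v[i] if len(v) > 1 else v[0]) for k, v in params.items()}
    for i in range(n_configs)
]
```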
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Settings of general simulation parameters, unrelated to the system itself
# `T` is a range with the number of discrete units of time the simulation will run for;
# `N` is the number of times the simulation will be run (Monte Carlo runs)
time_periods_per_run = 250
monte_carlo_runs = 1
simulation_parameters = config_sim({
'T': range(time_periods_per_run),
'N': monte_carlo_runs,
'M': params
})
from cadCAD.configuration import append_configs
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
append_configs(
initial_state=initial_conditions, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_configs=simulation_parameters #dict containing simulation parameters
)
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD import configs
exec_mode = ExecutionMode()
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
raw_result, tensor = run.execute()
# exec_mode = ExecutionMode()
# exec_context = ExecutionContext(context=exec_mode.multi_proc)
# # run = Executor(exec_context=exec_context, configs=configs)
# executor = Executor(exec_context, configs) # Pass the configuration object inside an array
# raw_result, tensor = executor.execute() # The `main()` method returns a tuple; its first elements contains the raw results

View File

@ -0,0 +1,555 @@
from pprint import pprint
import numpy as np
from tabulate import tabulate
from cadCAD.configuration.utils import config_sim
from simulations.validation.conviction_helpers import *
#import networkx as nx
from scipy.stats import expon, gamma
#functions for partial state update block 1
#Driving processes: arrival of participants, proposals and funds
##-----------------------------------------
def gen_new_participant(network, new_participant_holdings):
i = len([node for node in network.nodes])
network.add_node(i)
network.nodes[i]['type']="participant"
s_rv = np.random.rand()
network.nodes[i]['sentiment'] = s_rv
network.nodes[i]['holdings']=new_participant_holdings
for j in get_nodes_by_type(network, 'proposal'):
network.add_edge(i, j)
rv = np.random.rand()
a_rv = 1-4*(1-rv)*rv #polarized distribution
network.edges[(i, j)]['affinity'] = a_rv
network.edges[(i,j)]['tokens'] = a_rv*network.nodes[i]['holdings']
network.edges[(i, j)]['conviction'] = 0
return network
scale_factor = 1000
def gen_new_proposal(network, funds, supply, trigger_func):
j = len([node for node in network.nodes])
network.add_node(j)
network.nodes[j]['type']="proposal"
network.nodes[j]['conviction']=0
network.nodes[j]['status']='candidate'
network.nodes[j]['age']=0
rescale = scale_factor*funds
r_rv = gamma.rvs(3,loc=0.001, scale=rescale)
    network.nodes[j]['funds_requested'] = r_rv
network.nodes[j]['trigger']= trigger_func(r_rv, funds, supply)
participants = get_nodes_by_type(network, 'participant')
proposing_participant = np.random.choice(participants)
for i in participants:
network.add_edge(i, j)
if i==proposing_participant:
network.edges[(i, j)]['affinity']=1
else:
rv = np.random.rand()
a_rv = 1-4*(1-rv)*rv #polarized distribution
network.edges[(i, j)]['affinity'] = a_rv
network.edges[(i, j)]['conviction'] = 0
network.edges[(i,j)]['tokens'] = 0
return network
def driving_process(params, step, sL, s):
#placeholder plumbing for random processes
arrival_rate = 10/s['sentiment']
rv1 = np.random.rand()
new_participant = bool(rv1<1/arrival_rate)
if new_participant:
h_rv = expon.rvs(loc=0.0, scale=1000)
new_participant_holdings = h_rv
else:
new_participant_holdings = 0
network = s['network']
affinities = [network.edges[e]['affinity'] for e in network.edges ]
median_affinity = np.median(affinities)
proposals = get_nodes_by_type(network, 'proposal')
fund_requests = [network.nodes[j]['funds_requested'] for j in proposals if network.nodes[j]['status']=='candidate' ]
funds = s['funds']
total_funds_requested = np.sum(fund_requests)
proposal_rate = 10/median_affinity * total_funds_requested/funds
rv2 = np.random.rand()
new_proposal = bool(rv2<1/proposal_rate)
sentiment = s['sentiment']
funds = s['funds']
scale_factor = 1+4000*sentiment**2
#this shouldn't happen but expon is throwing domain errors
if scale_factor > 1:
funds_arrival = expon.rvs(loc = 0, scale = scale_factor )
else:
funds_arrival = 0
return({'new_participant':new_participant,
'new_participant_holdings':new_participant_holdings,
'new_proposal':new_proposal,
'funds_arrival':funds_arrival})
#Mechanisms for updating the state based on driving processes
##---
def update_network(params, step, sL, s, _input):
network = s['network']
funds = s['funds']
supply = s['supply']
trigger_func = params['trigger_func']
new_participant = _input['new_participant'] #T/F
new_proposal = _input['new_proposal'] #T/F
if new_participant:
new_participant_holdings = _input['new_participant_holdings']
network = gen_new_participant(network, new_participant_holdings)
if new_proposal:
network= gen_new_proposal(network,funds,supply,trigger_func )
#update age of the existing proposals
proposals = get_nodes_by_type(network, 'proposal')
for j in proposals:
network.nodes[j]['age'] = network.nodes[j]['age']+1
if network.nodes[j]['status'] == 'candidate':
requested = network.nodes[j]['funds_requested']
network.nodes[j]['trigger'] = trigger_func(requested, funds, supply)
else:
network.nodes[j]['trigger'] = np.nan
key = 'network'
value = network
return (key, value)
def increment_funds(params, step, sL, s, _input):
funds = s['funds']
funds_arrival = _input['funds_arrival']
#increment funds
funds = funds + funds_arrival
key = 'funds'
value = funds
return (key, value)
def increment_supply(params, step, sL, s, _input):
supply = s['supply']
supply_arrival = _input['new_participant_holdings']
    #increment supply
supply = supply + supply_arrival
key = 'supply'
value = supply
return (key, value)
#functions for partial state update block 2
#Driving processes: completion of previously funded proposals
##-----------------------------------------
def check_progress(params, step, sL, s):
network = s['network']
proposals = get_nodes_by_type(network, 'proposal')
completed = []
for j in proposals:
if network.nodes[j]['status'] == 'active':
grant_size = network.nodes[j]['funds_requested']
base_completion_rate=params['base_completion_rate']
likelihood = 1.0/(base_completion_rate+np.log(grant_size))
if np.random.rand() < likelihood:
completed.append(j)
return({'completed':completed})
#Mechanisms for updating the state based on check progress
##---
def complete_proposal(params, step, sL, s, _input):
network = s['network']
participants = get_nodes_by_type(network, 'participant')
completed = _input['completed']
for j in completed:
network.nodes[j]['status']='completed'
for i in participants:
force = network.edges[(i,j)]['affinity']
            sentiment = network.nodes[i]['sentiment']
            network.nodes[i]['sentiment'] = get_sentimental(sentiment, force, decay=0)
key = 'network'
value = network
return (key, value)
def update_sentiment_on_completion(params, step, sL, s, _input):
network = s['network']
proposals = get_nodes_by_type(network, 'proposal')
completed = _input['completed']
grants_outstanding = np.sum([network.nodes[j]['funds_requested'] for j in proposals if network.nodes[j]['status']=='active'])
grants_completed = np.sum([network.nodes[j]['funds_requested'] for j in completed])
sentiment = s['sentiment']
force = grants_completed/grants_outstanding
mu = params['sentiment_decay']
if (force >=0) and (force <=1):
sentiment = get_sentimental(sentiment, force, mu)
else:
sentiment = get_sentimental(sentiment, 0, mu)
key = 'sentiment'
value = sentiment
return (key, value)
def get_sentimental(sentiment, force, decay=0):
mu = decay
sentiment = sentiment*(1-mu) + force
if sentiment > 1:
sentiment = 1
return sentiment
#functions for partial state update block 3
#Decision processes: trigger function policy
##-----------------------------------------
def trigger_function(params, step, sL, s):
network = s['network']
funds = s['funds']
supply = s['supply']
proposals = get_nodes_by_type(network, 'proposal')
tmin = params['tmin']
accepted = []
triggers = {}
for j in proposals:
if network.nodes[j]['status'] == 'candidate':
requested = network.nodes[j]['funds_requested']
age = network.nodes[j]['age']
threshold = trigger_threshold(requested, funds, supply)
if age > tmin:
conviction = network.nodes[j]['conviction']
if conviction >threshold:
accepted.append(j)
else:
threshold = np.nan
triggers[j] = threshold
return({'accepted':accepted, 'triggers':triggers})
def decrement_funds(params, step, sL, s, _input):
funds = s['funds']
network = s['network']
accepted = _input['accepted']
#decrement funds
for j in accepted:
funds = funds - network.nodes[j]['funds_requested']
key = 'funds'
value = funds
return (key, value)
def update_proposals(params, step, sL, s, _input):
network = s['network']
accepted = _input['accepted']
triggers = _input['triggers']
participants = get_nodes_by_type(network, 'participant')
    proposals = get_nodes_by_type(network, 'proposal')
sensitivity = params['sensitivity']
for j in proposals:
network.nodes[j]['trigger'] = triggers[j]
#bookkeeping conviction and participant sentiment
for j in accepted:
network.nodes[j]['status']='active'
network.nodes[j]['conviction']=np.nan
#change status to active
for i in participants:
#operating on edge = (i,j)
#reset tokens assigned to other candidates
network.edges[(i,j)]['tokens']=0
network.edges[(i,j)]['conviction'] = np.nan
#update participants sentiments (positive or negative)
affinities = [network.edges[(i,p)]['affinity'] for p in proposals if not(p in accepted)]
if len(affinities)>1:
max_affinity = np.max(affinities)
force = network.edges[(i,j)]['affinity']-sensitivity*max_affinity
else:
force = 0
            #force compares affinity for the accepted proposal against the best remaining candidate
            network.nodes[i]['sentiment'] = get_sentimental(network.nodes[i]['sentiment'], force, decay=0)
key = 'network'
value = network
return (key, value)
def update_sentiment_on_release(params, step, sL, s, _input):
network = s['network']
proposals = get_nodes_by_type(network, 'proposal')
accepted = _input['accepted']
proposals_outstanding = np.sum([network.nodes[j]['funds_requested'] for j in proposals if network.nodes[j]['status']=='candidate'])
proposals_accepted = np.sum([network.nodes[j]['funds_requested'] for j in accepted])
sentiment = s['sentiment']
force = proposals_accepted/proposals_outstanding
    if (force >=0) and (force <=1):
        sentiment = get_sentimental(sentiment, force, decay=0)
    else:
        sentiment = get_sentimental(sentiment, 0, decay=0)
key = 'sentiment'
value = sentiment
return (key, value)
def participants_decisions(params, step, sL, s):
network = s['network']
participants = get_nodes_by_type(network, 'participant')
proposals = get_nodes_by_type(network, 'proposal')
candidates = [j for j in proposals if network.nodes[j]['status']=='candidate']
sensitivity = params['sensitivity']
gain = .01
delta_holdings={}
proposals_supported ={}
for i in participants:
force = network.nodes[i]['sentiment']-sensitivity
delta_holdings[i] = network.nodes[i]['holdings']*gain*force
support = []
for j in candidates:
affinity = network.edges[(i, j)]['affinity']
cutoff = sensitivity*np.max([network.edges[(i,p)]['affinity'] for p in candidates])
if cutoff <.5:
cutoff = .5
if affinity > cutoff:
support.append(j)
proposals_supported[i] = support
return({'delta_holdings':delta_holdings, 'proposals_supported':proposals_supported})
def update_tokens(params, step, sL, s, _input):
network = s['network']
delta_holdings = _input['delta_holdings']
proposals = get_nodes_by_type(network, 'proposal')
proposals_supported = _input['proposals_supported']
participants = get_nodes_by_type(network, 'participant')
alpha = params['alpha']
for i in participants:
network.nodes[i]['holdings'] = network.nodes[i]['holdings']+delta_holdings[i]
supported = proposals_supported[i]
total_affinity = np.sum([ network.edges[(i, j)]['affinity'] for j in supported])
for j in proposals:
if j in supported:
normalized_affinity = network.edges[(i, j)]['affinity']/total_affinity
network.edges[(i, j)]['tokens'] = normalized_affinity*network.nodes[i]['holdings']
else:
network.edges[(i, j)]['tokens'] = 0
prior_conviction = network.edges[(i, j)]['conviction']
current_tokens = network.edges[(i, j)]['tokens']
network.edges[(i, j)]['conviction'] =current_tokens+alpha*prior_conviction
for j in proposals:
network.nodes[j]['conviction'] = np.sum([ network.edges[(i, j)]['conviction'] for i in participants])
key = 'network'
value = network
return (key, value)
def update_supply(params, step, sL, s, _input):
supply = s['supply']
delta_holdings = _input['delta_holdings']
delta_supply = np.sum([v for v in delta_holdings.values()])
supply = supply + delta_supply
key = 'supply'
value = supply
return (key, value)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The Partial State Update Blocks
partial_state_update_blocks = [
{
'policies': {
#new proposals or new participants
'random': driving_process
},
'variables': {
'network': update_network,
'funds':increment_funds,
'supply':increment_supply
}
},
{
'policies': {
'completion': check_progress #see if any of the funded proposals completes
},
'variables': { # The following state variables will be updated simultaneously
            'sentiment': update_sentiment_on_completion, #decay drags sentiment down; completions bump it up
'network': complete_proposal #book-keeping
}
},
{
'policies': {
'release': trigger_function #check each proposal to see if it passes
},
'variables': { # The following state variables will be updated simultaneously
'funds': decrement_funds, #funds expended
'sentiment': update_sentiment_on_release, #releasing funds can bump sentiment
'network': update_proposals #reset convictions, and participants sentiments
#update based on affinities
}
},
{
'policies': {
'participants_act': participants_decisions, #high sentiment, high affinity =>buy
#low sentiment, low affinities => burn
#assign tokens to top affinities
},
'variables': {
'supply': update_supply,
            'network': update_tokens #update everyone's holdings
#and their conviction for each proposal
}
}
]
n= 25 #initial participants
m= 3 #initial proposals
initial_sentiment = .5
network, initial_funds, initial_supply, total_requested = initialize_network(n,m,total_funds_given_total_supply,trigger_threshold)
initial_conditions = {'network':network,
'supply': initial_supply,
'funds':initial_funds,
'sentiment': initial_sentiment}
#power of 1 token forever
# conviction_capacity = [2]
# alpha = [1-1/cc for cc in conviction_capacity]
# print(alpha)
params={
'sensitivity': [.75],
'tmin': [7], #unit days; minimum periods passed before a proposal can pass
'sentiment_decay': [.001], #termed mu in the state update function
'alpha': [0.5],
'base_completion_rate': [10],
'trigger_func': [trigger_threshold]
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Settings of general simulation parameters, unrelated to the system itself
# `T` is a range with the number of discrete units of time the simulation will run for;
# `N` is the number of times the simulation will be run (Monte Carlo runs)
time_periods_per_run = 250
monte_carlo_runs = 1
simulation_parameters = config_sim({
'T': range(time_periods_per_run),
'N': monte_carlo_runs,
'M': params
})
from cadCAD.configuration import append_configs
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
append_configs(
initial_state=initial_conditions, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_configs=simulation_parameters #dict containing simulation parameters
)
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD import configs
import pandas as pd
exec_mode = ExecutionMode()
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
i = 0
for raw_result, tensor_field in run.execute():
result = pd.DataFrame(raw_result)
print()
print(f"Tensor Field: {type(tensor_field)}")
print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
print(f"Output: {type(result)}")
print(tabulate(result, headers='keys', tablefmt='psql'))
print()
i += 1

View File

@ -0,0 +1,763 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exogenous Example\n",
"## Authored by BlockScience, MV Barlin\n",
"### Updated July-10-2019 \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Key assumptions and space:\n",
"1. Implementation of System Model in cell 2\n",
"2. Timestep = day\n",
"3. Launch simulation, without intervention from changing governance policies"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Library Imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Image\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib as mpl\n",
"import matplotlib.pyplot as plt\n",
"import seaborn as sns\n",
"import math\n",
"#from tabulate import tabulate\n",
"from scipy import stats\n",
"sns.set_style('whitegrid')\n",
"from decimal import Decimal\n",
"from datetime import timedelta\n",
"\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## cadCAD Setup\n",
"#### ----------------cadCAD LIBRARY IMPORTS------------------------"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from cadCAD.engine import ExecutionMode, ExecutionContext, Executor\n",
"#from simulations.validation import sweep_config\n",
"from cadCAD import configs\n",
"from cadCAD.configuration import append_configs\n",
"from cadCAD.configuration.utils import proc_trigger, ep_time_step, config_sim"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"#from cadCAD.configuration.utils.parameterSweep import config_sim"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from typing import Dict, List"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ----------------Random State Seed-----------------------------"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"seed = {\n",
"# 'z': np.random.RandomState(1)\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Timestamp"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"ts_format = '%Y-%m-%d %H:%M:%S'\n",
"t_delta = timedelta(days=0, minutes=0, seconds=1)\n",
"def set_time(_g, step, sL, s, _input):\n",
" y = 'timestamp'\n",
" x = ep_time_step(s, dt_str=s['timestamp'], fromat_str=ts_format, _timedelta=t_delta)\n",
" return (y, x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ASSUMED PARAMETERS"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### PRICE LIST"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# dai_xns_conversion = 1.0 # Assumed for static conversion 'PUBLISHED PRICE LIST' DEPRECATED"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Initial Condition State Variables"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"del_stake_pct = 2\n",
"\n",
"starting_xns = float(10**10) # initial supply of xns tokens\n",
"starting_broker_xns = float(1 * 10**8) # initial holding of xns token by broker app\n",
"starting_broker_fiat = float(1 * 10**5) # initial holding of fiat by broker app\n",
"starting_broker_stable = float(1 * 10**6) # initial holding of stable token by broker app\n",
"starting_deposit_acct = float(100) # initial deposit locked for first month of resources TBD: make function of resource*price\n",
"starting_entrance = float(1 * 10**4) # TBD: make function of entrance fee % * cost * # of initial apps\n",
"starting_app_usage = float(10) # initial fees from app usage \n",
"starting_platform = float(100) # initial platform fees \n",
"starting_resource_fees = float(10) # initial resource fees usage paid by apps \n",
"starting_app_subsidy = float(0.25* 10**9) # initial application subsidy pool\n",
"starting_stake = float(4 * 10**7)\n",
"starting_stake_pool = starting_stake + ((3*10**7)*(del_stake_pct)) # initial staked pool + ((3*10**7)*(del_stake_pct))\n",
"\n",
"#starting_block_reward = float(0) # initial block reward MOVED ABOVE TO POLICY\n",
"starting_capacity_subsidy = float(7.5 * 10**7) # initial capacity subsidy pool\n",
"starting_delegate_holdings = 0.15 * starting_xns\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Initial Condition Composite State Variables"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# subsidy limit is 30% of the 10B supply\n",
"starting_treasury = float(5.5 * 10**9) \n",
"starting_app_income = float(0) # initial income to application\n",
"starting_resource_income = float(0) # initial income to application\n",
"starting_delegate_income = float(0) # initial income to delegate"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Initial Condition Exogoneous State Variables "
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"starting_xns_fiat = float(0.01) # initial xns per fiat signal\n",
"starting_fiat_ext = float(1) # initial xns per fiat signal\n",
"starting_stable_ext = float(1) # initial stable signal"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exogenous Price Updates"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"def delta_price(mean,sd):\n",
" '''Returns normal random variable generated by first two central moments of price change of input ticker'''\n",
" rv = np.random.normal(mean, sd)\n",
" return rv"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"\n",
"def xns_ext_update(_g, step, sL, s, _input):\n",
" key = 'XNS_fiat_external'\n",
" \n",
" value = s['XNS_fiat_external'] * (1 + delta_price(0.000000, 0.005))\n",
" \n",
" return key, value"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From Currency Analysis of DAI-USD pair \n",
"May-09-2018 through June-10-2019 \n",
"Datasource: BitFinex \n",
"Analysis of daily return percentage performed by BlockScience"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"DAI_mean = 0.0000719\n",
"DAI_sd = 0.006716"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The daily return is computed as: \n",
"$$ r = \\frac{Price_n - Price_{n-1}}{Price_{n-1}} $$ \n",
"Thus, the modelled current price can be as: \n",
"$$ Price_n = Price_{n-1} * r + Price_{n-1} $$"
]
},
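{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch (added for clarity; not part of the original model run):\n",
"# the update rule above, Price_n = Price_{n-1} * (1 + r), applied once\n",
"# with an assumed prior price of 1.0 and the DAI mean daily return.\n",
"1.0 * (1 + DAI_mean) # modeled current price after one day"
]
},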
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"\n",
"def stable_update(_g, step, sL, s, _input):\n",
" key = 'stable_external'\n",
" \n",
" value = s['stable_external'] * (1 + delta_price(DAI_mean, DAI_sd))\n",
" return key, value\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Assumed Parameters"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"apps_deployed = 1 # Make part of test- application deployment model\n",
"\n",
"starting_deposit_acct = float(100) # inital deposit locked for first month of resources TBD: make function of resource*price\n",
"\n",
"app_resource_fee_constant = 10**1 # in STABLE, assumed per day per total nodes \n",
"platform_fee_constant = 10 # in XNS\n",
"# ^^^^^^^^^^^^ MAKE A PERCENTAGE OR FLAT FEE as PART of TESTING"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1000"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\n",
"alpha = 100 # Fee Rate\n",
"beta = 0.10 # FIXED Too high because multiplied by constant and resource fees\n",
"app_platform = alpha * platform_fee_constant\n",
"app_platform"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"10.0"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"beta_out =beta*100\n",
"beta_out"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.15"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"starting_capacity_subsidy / (5 * 10**7) / 10"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"\n",
"weight = 0.95 # 0.95 internal weight 5% friction from external markets\n",
"\n",
"def xns_int_update(_g, step, sL, s, _input):\n",
" key = 'XNS_fiat_internal'\n",
"\n",
" internal = s['XNS_fiat_internal'] * weight\n",
" external = s['XNS_fiat_external'] * (1 - weight)\n",
" value = internal + external\n",
" \n",
" return key, value"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### CONFIGURATION DICTIONARY"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"time_step_count = 3652 # days = 10 years\n",
"run_count = 1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Genesis States"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"#----------STATE VARIABLE Genesis DICTIONARY---------------------------\n",
"genesis_states = {\n",
" 'XNS_fiat_external' : starting_xns_fiat,\n",
" 'XNS_fiat_internal' : starting_xns_fiat,\n",
" # 'fiat_external' : starting_fiat_ext,\n",
" 'stable_external' : starting_stable_ext,\n",
" 'timestamp': '2018-10-01 15:16:24', #es5\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"#--------------EXOGENOUS STATE MECHANISM DICTIONARY--------------------\n",
"exogenous_states = {\n",
" 'XNS_fiat_external' : xns_ext_update,\n",
"# 'fiat_external' : starting_fiat_ext,\n",
" 'stable_external' : stable_update,\n",
" \"timestamp\": set_time,\n",
" }\n",
"\n",
"#--------------ENVIRONMENTAL PROCESS DICTIONARY------------------------\n",
"env_processes = {\n",
"# \"Poisson\": env_proc_id\n",
"}\n",
"#----------------------SIMULATION RUN SETUP----------------------------\n",
"sim_config = config_sim(\n",
" {\n",
" \"N\": run_count,\n",
" \"T\": range(time_step_count)\n",
"# \"M\": g # for parameter sweep\n",
"}\n",
")\n",
"#----------------------MECHANISM AND BEHAVIOR DICTIONARY---------------\n",
"partial_state_update_block = {\n",
" \"price\": { \n",
" \"policies\": { \n",
" },\n",
" \"variables\": {\n",
" 'XNS_fiat_internal' : xns_int_update\n",
"# 'app_income' : app_earn,\n",
" }\n",
" },\n",
"}\n",
"\n",
"append_configs(\n",
" sim_configs=sim_config,\n",
" initial_state=genesis_states,\n",
" seeds=seed,\n",
" raw_exogenous_states= exogenous_states,\n",
" env_processes=env_processes,\n",
" partial_state_update_blocks=partial_state_update_block\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Running cadCAD"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Simulation Execution: Single Configuration\n",
"\n",
"single_proc: [<cadCAD.configuration.Configuration object at 0x0000024B3B37AF60>]\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\mbarl\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\cadCAD\\utils\\__init__.py:89: FutureWarning: The use of a dictionary to describe Partial State Update Blocks will be deprecated. Use a list instead.\n",
" FutureWarning)\n"
]
}
],
"source": [
"exec_mode = ExecutionMode()\n",
"\n",
"print(\"Simulation Execution: Single Configuration\")\n",
"print()\n",
"first_config = configs # only contains config1\n",
"single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)\n",
"run1 = Executor(exec_context=single_proc_ctx, configs=first_config)\n",
"run1_raw_result, tensor_field = run1.main()\n",
"result = pd.DataFrame(run1_raw_result)\n",
"# print()\n",
"# print(\"Tensor Field: config1\")\n",
"# print(tabulate(tensor_field, headers='keys', tablefmt='psql'))\n",
"# print(\"Output:\")\n",
"# print(tabulate(result, headers='keys', tablefmt='psql'))\n",
"# print()"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"df = result"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>XNS_fiat_external</th>\n",
" <th>XNS_fiat_internal</th>\n",
" <th>run</th>\n",
" <th>stable_external</th>\n",
" <th>substep</th>\n",
" <th>timestamp</th>\n",
" <th>timestep</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>0.010000</td>\n",
" <td>0.010000</td>\n",
" <td>1</td>\n",
" <td>1.000000</td>\n",
" <td>0</td>\n",
" <td>2018-10-01 15:16:24</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>0.009944</td>\n",
" <td>0.010000</td>\n",
" <td>1</td>\n",
" <td>1.000172</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:25</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>0.009889</td>\n",
" <td>0.009997</td>\n",
" <td>1</td>\n",
" <td>1.003516</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:26</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>0.009848</td>\n",
" <td>0.009992</td>\n",
" <td>1</td>\n",
" <td>0.990655</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:27</td>\n",
" <td>3</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>0.009814</td>\n",
" <td>0.009985</td>\n",
" <td>1</td>\n",
" <td>1.001346</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:28</td>\n",
" <td>4</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>0.009798</td>\n",
" <td>0.009976</td>\n",
" <td>1</td>\n",
" <td>1.002495</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:29</td>\n",
" <td>5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>0.009706</td>\n",
" <td>0.009967</td>\n",
" <td>1</td>\n",
" <td>0.994911</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:30</td>\n",
" <td>6</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>0.009625</td>\n",
" <td>0.009954</td>\n",
" <td>1</td>\n",
" <td>0.998919</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:31</td>\n",
" <td>7</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>0.009632</td>\n",
" <td>0.009938</td>\n",
" <td>1</td>\n",
" <td>0.995047</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:32</td>\n",
" <td>8</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>0.009648</td>\n",
" <td>0.009922</td>\n",
" <td>1</td>\n",
" <td>0.980786</td>\n",
" <td>1</td>\n",
" <td>2018-10-01 15:16:33</td>\n",
" <td>9</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" XNS_fiat_external XNS_fiat_internal run stable_external substep \\\n",
"0 0.010000 0.010000 1 1.000000 0 \n",
"1 0.009944 0.010000 1 1.000172 1 \n",
"2 0.009889 0.009997 1 1.003516 1 \n",
"3 0.009848 0.009992 1 0.990655 1 \n",
"4 0.009814 0.009985 1 1.001346 1 \n",
"5 0.009798 0.009976 1 1.002495 1 \n",
"6 0.009706 0.009967 1 0.994911 1 \n",
"7 0.009625 0.009954 1 0.998919 1 \n",
"8 0.009632 0.009938 1 0.995047 1 \n",
"9 0.009648 0.009922 1 0.980786 1 \n",
"\n",
" timestamp timestep \n",
"0 2018-10-01 15:16:24 0 \n",
"1 2018-10-01 15:16:25 1 \n",
"2 2018-10-01 15:16:26 2 \n",
"3 2018-10-01 15:16:27 3 \n",
"4 2018-10-01 15:16:28 4 \n",
"5 2018-10-01 15:16:29 5 \n",
"6 2018-10-01 15:16:30 6 \n",
"7 2018-10-01 15:16:31 7 \n",
"8 2018-10-01 15:16:32 8 \n",
"9 2018-10-01 15:16:33 9 "
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df.head(10)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@ -0,0 +1,183 @@
from decimal import Decimal
import numpy as np
from datetime import timedelta
import pprint
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import env_proc_trigger, ep_time_step, config_sim
from typing import Dict, List
# from cadCAD.utils.sys_config import exo, exo_check
pp = pprint.PrettyPrinter(indent=4)
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(3)
}
# Optional
g: Dict[str, List[int]] = {
'alpha': [1],
'beta': [2, 5],
'gamma': [3, 4],
'omega': [7]
}
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'param1': 1}
def p2m1(_g, step, sL, s):
return {'param2': 4}
def p1m2(_g, step, sL, s):
return {'param1': 'a', 'param2': _g['beta']}
def p2m2(_g, step, sL, s):
return {'param1': 'b', 'param2': 0}
def p1m3(_g, step, sL, s):
return {'param1': np.array([10, 100])}
def p2m3(_g, step, sL, s):
return {'param1': np.array([20, 200])}
# Internal States per Mechanism
def s1m1(_g, step, sL, s, _input):
return 's1', 0
def s2m1(_g, step, sL, s, _input):
return 's2', _g['beta']
def s1m2(_g, step, sL, s, _input):
return 's1', _input['param2']
def s2m2(_g, step, sL, s, _input):
return 's2', _input['param2']
def s1m3(_g, step, sL, s, _input):
return 's1', 0
def s2m3(_g, step, sL, s, _input):
return 's2', 0
# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es3p1(_g, step, sL, s, _input):
return 's3', _g['gamma']
# @curried
def es4p2(_g, step, sL, s, _input):
return 's4', _g['gamma']
ts_format = '%Y-%m-%d %H:%M:%S'
t_delta = timedelta(days=0, minutes=0, seconds=1)
def es5p2(_g, step, sL, s, _input):
y = 'timestep'
x = ep_time_step(s, dt_str=s['timestep'], fromat_str=ts_format, _timedelta=t_delta)
return (y, x)
# Environment States
# @curried
# def env_a(param, x):
# return x + param
def env_a(x):
return x
def env_b(x):
return 10
# Genesis States
genesis_states = {
's1': Decimal(0.0),
's2': Decimal(0.0),
's3': Decimal(1.0),
's4': Decimal(1.0),
# 'timestep': '2018-10-01 15:16:24'
}
# remove `exo_update_per_ts` to update every ts
raw_exogenous_states = {
"s3": es3p1,
"s4": es4p2,
# "timestep": es5p2
}
# ToDo: make env proc trigger field agnostic
# ToDo: input json into function renaming __name__
triggered_env_b = env_proc_trigger(1, env_b)
env_processes = {
"s3": env_a, #sweep(beta, env_a),
"s4": triggered_env_b #rename('parameterized', triggered_env_b) #sweep(beta, triggered_env_b)
}
# parameterized_env_processes = parameterize_states(env_processes)
#
# pp.pprint(parameterized_env_processes)
# exit()
# ToDo: The number of values entered in sweep should be the # of config objs created,
# not dependent on the # of times the sweep is applied
# sweep exo_state func and point to exo-state in every other function
# param sweep on genesis states
partial_state_update_block = {
"m1": {
"policies": {
"b1": p1m1,
"b2": p2m1
},
"variables": {
"s1": s1m1,
"s2": s2m1
}
},
"m2": {
"policies": {
"b1": p1m2,
"b2": p2m2,
},
"variables": {
"s1": s1m2,
"s2": s2m2
}
},
"m3": {
"policies": {
"b1": p1m3,
"b2": p2m3
},
"variables": {
"s1": s1m3,
"s2": s2m3
}
}
}
# config_sim Necessary
sim_config = config_sim(
{
"N": 2,
"T": range(5),
"M": g # Optional
}
)
# New Convention
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
seeds=seeds,
raw_exogenous_states={}, #raw_exogenous_states,
env_processes={}, #env_processes,
partial_state_update_blocks=partial_state_update_block
)


@ -0,0 +1,181 @@
from decimal import Decimal
import numpy as np
from datetime import timedelta
import pprint
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import env_proc_trigger, ep_time_step, config_sim
from typing import Dict, List
pp = pprint.PrettyPrinter(indent=4)
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(3)
}
# Optional
g: Dict[str, List[int]] = {
'alpha': [1],
'beta': [2, 5],
'gamma': [3, 4],
'omega': [7]
}
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'param1': 1}
def p2m1(_g, step, sL, s):
return {'param2': 4}
def p1m2(_g, step, sL, s):
return {'param1': 'a', 'param2': _g['beta']}
def p2m2(_g, step, sL, s):
return {'param1': 'b', 'param2': 0}
def p1m3(_g, step, sL, s):
return {'param1': np.array([10, 100])}
def p2m3(_g, step, sL, s):
return {'param1': np.array([20, 200])}
# Internal States per Mechanism
def s1m1(_g, step, sL, s, _input):
return 's1', 0
def s2m1(_g, step, sL, s, _input):
return 's2', _g['beta']
def s1m2(_g, step, sL, s, _input):
return 's1', _input['param2']
def s2m2(_g, step, sL, s, _input):
return 's2', _input['param2']
def s1m3(_g, step, sL, s, _input):
return 's1', 0
def s2m3(_g, step, sL, s, _input):
return 's2', 0
# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es3p1(_g, step, sL, s, _input):
return 's3', _g['gamma']
# @curried
def es4p2(_g, step, sL, s, _input):
return 's4', _g['gamma']
ts_format = '%Y-%m-%d %H:%M:%S'
t_delta = timedelta(days=0, minutes=0, seconds=1)
def es5p2(_g, step, sL, s, _input):
y = 'timestep'
x = ep_time_step(s, dt_str=s['timestep'], fromat_str=ts_format, _timedelta=t_delta)
return (y, x)
# Environment States
# @curried
# def env_a(param, x):
# return x + param
def env_a(x):
return x
def env_b(x):
return 10
# Genesis States
genesis_states = {
's1': Decimal(0.0),
's2': Decimal(0.0),
's3': Decimal(1.0),
's4': Decimal(1.0),
# 'timestep': '2018-10-01 15:16:24'
}
# remove `exo_update_per_ts` to update every ts
raw_exogenous_states = {
"s3": es3p1,
"s4": es4p2,
# "timestep": es5p2
}
# ToDo: make env proc trigger field agnostic
# ToDo: input json into function renaming __name__
triggered_env_b = env_proc_trigger(1, env_b)
env_processes = {
"s3": env_a, #sweep(beta, env_a),
"s4": triggered_env_b #rename('parameterized', triggered_env_b) #sweep(beta, triggered_env_b)
}
# parameterized_env_processes = parameterize_states(env_processes)
#
# pp.pprint(parameterized_env_processes)
# exit()
# ToDo: The number of values entered in sweep should be the # of config objs created,
# not dependent on the # of times the sweep is applied
# sweep exo_state func and point to exo-state in every other function
# param sweep on genesis states
partial_state_update_block = {
"m1": {
"policies": {
"b1": p1m1,
"b2": p2m1
},
"variables": {
"s1": s1m1,
"s2": s2m1
}
},
"m2": {
"policies": {
"b1": p1m2,
"b2": p2m2,
},
"variables": {
"s1": s1m2,
"s2": s2m2
}
},
"m3": {
"policies": {
"b1": p1m3,
"b2": p2m3
},
"variables": {
"s1": s1m3,
"s2": s2m3
}
}
}
# config_sim Necessary
sim_config = config_sim(
{
"N": 2,
"T": range(5),
"M": g # Optional
}
)
# New Convention
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
seeds=seeds,
raw_exogenous_states=raw_exogenous_states,
env_processes=env_processes,
partial_state_update_blocks=partial_state_update_block
)


@ -0,0 +1,118 @@
from decimal import Decimal
import numpy as np
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import bound_norm_random, config_sim
seeds = {
'z': np.random.RandomState(1),
'a': np.random.RandomState(2),
'b': np.random.RandomState(3),
'c': np.random.RandomState(3)
}
# Policies per Mechanism
def p1(_g, step, sL, s):
return {'param1': 10}
def p2(_g, step, sL, s):
return {'param1': 10, 'param2': 40}
# Internal States per Mechanism
def s1(_g, step, sL, s, _input):
y = 'ds1'
x = s['ds1'] + 1
return (y, x)
def s2(_g, step, sL, s, _input):
y = 'ds2'
x = _input['param2']
return (y, x)
# Exogenous States
proc_one_coef_A = 0.7
proc_one_coef_B = 1.3
def es(_g, step, sL, s, _input):
y = 'ds3'
x = s['ds3'] * bound_norm_random(seeds['a'], proc_one_coef_A, proc_one_coef_B)
return (y, x)
# Environment States
def env_a(x):
return 5
def env_b(x):
return 10
# Genesis States
genesis_states = {
'ds1': Decimal(0.0),
'ds2': Decimal(0.0),
'ds3': Decimal(1.0)
}
raw_exogenous_states = {
"ds3": es
}
env_processes = {
"ds3": env_a
}
partial_state_update_block = {
"m1": {
"policies": {
"p1": p1,
"p2": p2
},
"variables": {
"ds1": s1,
"ds2": s2
}
},
"m2": {
"policies": {
"p1": p1,
"p2": p2
},
"variables": {
"ds1": s1,
"ds2": s2
}
},
"m3": {
"policies": {
"p1": p1,
"p2": p2
},
"variables": {
"ds1": s1,
"ds2": s2
}
}
}
sim_config = config_sim(
{
"N": 2,
"T": range(4),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
seeds=seeds,
raw_exogenous_states=raw_exogenous_states,
env_processes=env_processes,
partial_state_update_blocks=partial_state_update_block,
policy_ops=[lambda a, b: a + b]
)


@ -1,4 +0,0 @@
from engine import run
from tabulate import tabulate
result = run.main()
print(tabulate(result, headers='keys', tablefmt='psql'))

testing/__init__.py Normal file

testing/generic_test.py Normal file

@ -0,0 +1,57 @@
import unittest
from parameterized import parameterized
from functools import reduce
def generate_assertions_df(df, expected_results, target_cols, evaluations):
test_names = []
for eval_f in evaluations:
def wrapped_eval(a, b):
try:
return eval_f(a, b)
except KeyError:
return True
test_name = f"{eval_f.__name__}_test"
test_names.append(test_name)
df[test_name] = df.apply(
lambda x: wrapped_eval(
x.filter(items=target_cols).to_dict(),
expected_results[(x['run'], x['timestep'], x['substep'])]
),
axis=1
)
return df, test_names
def make_generic_test(params):
class TestSequence(unittest.TestCase):
def generic_test(self, tested_df, expected_results, test_name):
erroneous = tested_df[(tested_df[test_name] == False)]
# print(tabulate(tested_df, headers='keys', tablefmt='psql'))
if erroneous.empty is False: # Or Entire df IS NOT erroneous
for index, row in erroneous.iterrows():
expected = expected_results[(row['run'], row['timestep'], row['substep'])]
unexpected = {f"invalid_{k}": expected[k] for k in expected if k in row and expected[k] != row[k]}
for key in unexpected.keys():
erroneous[key] = None
erroneous.at[index, key] = unexpected[key]
# etc.
# ToDo: Condition that will change false to true
self.assertTrue(reduce(lambda a, b: a and b, tested_df[test_name]))
@parameterized.expand(params)
def test_validation(self, name, result_df, expected_results, target_cols, evaluations):
# alt for (*) Exec Debug mode
tested_df, test_names = generate_assertions_df(result_df, expected_results, target_cols, evaluations)
for test_name in test_names:
self.generic_test(tested_df, expected_results, test_name)
return TestSequence


@ -0,0 +1,66 @@
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim
import pandas as pd
from cadCAD.utils import SilentDF
df = SilentDF(pd.read_csv('/DiffyQ-SimCAD/simulations/external_data/output.csv'))
def query(s, df):
return df[
(df['run'] == s['run']) & (df['substep'] == s['substep']) & (df['timestep'] == s['timestep'])
].drop(columns=['run', 'substep', "timestep"])
def p1(_g, substep, sL, s):
result_dict = query(s, df).to_dict()
del result_dict["ds3"]
return {k: list(v.values()).pop() for k, v in result_dict.items()}
def p2(_g, substep, sL, s):
result_dict = query(s, df).to_dict()
del result_dict["ds1"], result_dict["ds2"]
return {k: list(v.values()).pop() for k, v in result_dict.items()}
# integrate_ext_dataset
def integrate_ext_dataset(_g, step, sL, s, _input):
result_dict = query(s, df).to_dict()
return 'external_data', {k: list(v.values()).pop() for k, v in result_dict.items()}
def increment(y, incr_by):
return lambda _g, step, sL, s, _input: (y, s[y] + incr_by)
increment = increment('increment', 1)
def view_policies(_g, step, sL, s, _input):
return 'policies', _input
external_data = {'ds1': None, 'ds2': None, 'ds3': None}
state_dict = {
'increment': 0,
'external_data': external_data,
'policies': external_data
}
policies = {"p1": p1, "p2": p2}
states = {'increment': increment, 'external_data': integrate_ext_dataset, 'policies': view_policies}
PSUB = {'policies': policies, 'states': states}
# M1 & M2 need behaviors
partial_state_update_blocks = {
'PSUB1': PSUB,
'PSUB2': PSUB,
'PSUB3': PSUB
}
sim_config = config_sim({
"N": 2,
"T": range(4)
})
append_configs(
sim_configs=sim_config,
initial_state=state_dict,
partial_state_update_blocks=partial_state_update_blocks,
policy_ops=[lambda a, b: {**a, **b}]
)


@ -0,0 +1,93 @@
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim, access_block
policies, variables = {}, {}
exclusion_list = ['nonexsistant', 'last_x', '2nd_to_last_x', '3rd_to_last_x', '4th_to_last_x']
# Policies per Mechanism
# WARNING: DO NOT delete elements from sH
# state_history, target_field, psu_block_offset, exculsion_list
def last_update(_g, substep, sH, s):
return {"last_x": access_block(
state_history=sH,
target_field="last_x",
psu_block_offset=-1,
exculsion_list=exclusion_list
)
}
policies["last_x"] = last_update
def second2last_update(_g, substep, sH, s):
return {"2nd_to_last_x": access_block(sH, "2nd_to_last_x", -2, exclusion_list)}
policies["2nd_to_last_x"] = second2last_update
# Internal States per Mechanism
# WARNING: DO NOT delete elements from sH
def add(y, x):
return lambda _g, substep, sH, s, _input: (y, s[y] + x)
variables['x'] = add('x', 1)
# last_partial_state_update_block
def nonexsistant(_g, substep, sH, s, _input):
return 'nonexsistant', access_block(sH, "nonexsistant", 0, exclusion_list)
variables['nonexsistant'] = nonexsistant
# last_partial_state_update_block
def last_x(_g, substep, sH, s, _input):
return 'last_x', _input["last_x"]
variables['last_x'] = last_x
# 2nd to last partial state update block
def second_to_last_x(_g, substep, sH, s, _input):
return '2nd_to_last_x', _input["2nd_to_last_x"]
variables['2nd_to_last_x'] = second_to_last_x
# 3rd to last partial state update block
def third_to_last_x(_g, substep, sH, s, _input):
return '3rd_to_last_x', access_block(sH, "3rd_to_last_x", -3, exclusion_list)
variables['3rd_to_last_x'] = third_to_last_x
# 4th to last partial state update block
def fourth_to_last_x(_g, substep, sH, s, _input):
return '4th_to_last_x', access_block(sH, "4th_to_last_x", -4, exclusion_list)
variables['4th_to_last_x'] = fourth_to_last_x
genesis_states = {
'x': 0,
'nonexsistant': [],
'last_x': [],
'2nd_to_last_x': [],
'3rd_to_last_x': [],
'4th_to_last_x': []
}
PSUB = {
"policies": policies,
"variables": variables
}
partial_state_update_block = {
"PSUB1": PSUB,
"PSUB2": PSUB,
"PSUB3": PSUB
}
sim_config = config_sim(
{
"N": 1,
"T": range(3),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
partial_state_update_blocks=partial_state_update_block
)


@ -0,0 +1,98 @@
import pprint
from typing import Dict, List
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import env_trigger, var_substep_trigger, config_sim, psub_list
pp = pprint.PrettyPrinter(indent=4)
def some_function(x):
return x
# Optional
# dict must contain lists of 2 distinct lengths
g: Dict[str, List[int]] = {
'alpha': [1],
'beta': [2, some_function],
'gamma': [3, 4],
'omega': [7]
}
psu_steps = ['m1', 'm2', 'm3']
system_substeps = len(psu_steps)
var_timestep_trigger = var_substep_trigger([0, system_substeps])
env_timestep_trigger = env_trigger(system_substeps)
env_process = {}
# ['s1', 's2', 's3', 's4']
# Policies per Mechanism
def gamma(_g, step, sL, s):
return {'gamma': _g['gamma']}
def omega(_g, step, sL, s):
return {'omega': _g['omega']}
# Internal States per Mechanism
def alpha(_g, step, sL, s, _input):
return 'alpha', _g['alpha']
def beta(_g, step, sL, s, _input):
return 'beta', _g['beta']
def policies(_g, step, sL, s, _input):
return 'policies', _input
def sweeped(_g, step, sL, s, _input):
return 'sweeped', {'beta': _g['beta'], 'gamma': _g['gamma']}
psu_block = {k: {"policies": {}, "variables": {}} for k in psu_steps}
for m in psu_steps:
psu_block[m]['policies']['gamma'] = gamma
psu_block[m]['policies']['omega'] = omega
psu_block[m]["variables"]['alpha'] = alpha
psu_block[m]["variables"]['beta'] = beta
psu_block[m]['variables']['policies'] = policies
psu_block[m]["variables"]['sweeped'] = var_timestep_trigger(y='sweeped', f=sweeped)
# Genesis States
genesis_states = {
'alpha': 0,
'beta': 0,
'policies': {},
'sweeped': {}
}
# Environment Process
env_process['sweeped'] = env_timestep_trigger(trigger_field='timestep', trigger_vals=[5], funct_list=[lambda _g, x: _g['beta']])
sim_config = config_sim(
{
"N": 2,
"T": range(5),
"M": g, # Optional
}
)
# New Convention
partial_state_update_blocks = psub_list(psu_block, psu_steps)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
env_processes=env_process,
partial_state_update_blocks=partial_state_update_blocks
)
print()
print("Policie State Update Block:")
pp.pprint(partial_state_update_blocks)
print()
print()


@ -0,0 +1,83 @@
from cadCAD.configuration import append_configs
from cadCAD.configuration.utils import config_sim
# Policies per Mechanism
def p1m1(_g, step, sL, s):
return {'policy1': 1}
def p2m1(_g, step, sL, s):
return {'policy2': 2}
def p1m2(_g, step, sL, s):
return {'policy1': 2, 'policy2': 2}
def p2m2(_g, step, sL, s):
return {'policy1': 2, 'policy2': 2}
def p1m3(_g, step, sL, s):
return {'policy1': 1, 'policy2': 2, 'policy3': 3}
def p2m3(_g, step, sL, s):
return {'policy1': 1, 'policy2': 2, 'policy3': 3}
# Internal States per Mechanism
def add(y, x):
return lambda _g, step, sH, s, _input: (y, s[y] + x)
def policies(_g, step, sH, s, _input):
y = 'policies'
x = _input
return (y, x)
# Genesis States
genesis_states = {
'policies': {},
's1': 0
}
variables = {
's1': add('s1', 1),
"policies": policies
}
partial_state_update_block = {
"m1": {
"policies": {
"p1": p1m1,
"p2": p2m1
},
"variables": variables
},
"m2": {
"policies": {
"p1": p1m2,
"p2": p2m2
},
"variables": variables
},
"m3": {
"policies": {
"p1": p1m3,
"p2": p2m3
},
"variables": variables
}
}
sim_config = config_sim(
{
"N": 1,
"T": range(3),
}
)
append_configs(
sim_configs=sim_config,
initial_state=genesis_states,
partial_state_update_blocks=partial_state_update_block,
policy_ops=[lambda a, b: a + b, lambda y: y * 2] # Default: lambda a, b: a + b
)


@ -0,0 +1,110 @@
import unittest
import pandas as pd
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from simulations.regression_tests import external_dataset
from cadCAD import configs
from testing.generic_test import make_generic_test
exec_mode = ExecutionMode()
print("Simulation Execution: Single Configuration")
print()
first_config = configs
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
def get_expected_results(run):
return {
(run, 0, 0): {
'external_data': {'ds1': None, 'ds2': None, 'ds3': None},
'increment': 0,
'policies': {'ds1': None, 'ds2': None, 'ds3': None}
},
(run, 1, 1): {
'external_data': {'ds1': 0, 'ds2': 0, 'ds3': 1},
'increment': 1,
'policies': {'ds1': 0, 'ds2': 0, 'ds3': 1}
},
(run, 1, 2): {
'external_data': {'ds1': 1, 'ds2': 40, 'ds3': 5},
'increment': 2,
'policies': {'ds1': 1, 'ds2': 40, 'ds3': 5}
},
(run, 1, 3): {
'external_data': {'ds1': 2, 'ds2': 40, 'ds3': 5},
'increment': 3,
'policies': {'ds1': 2, 'ds2': 40, 'ds3': 5}
},
(run, 2, 1): {
'external_data': {'ds1': 3, 'ds2': 40, 'ds3': 5},
'increment': 4,
'policies': {'ds1': 3, 'ds2': 40, 'ds3': 5}
},
(run, 2, 2): {
'external_data': {'ds1': 4, 'ds2': 40, 'ds3': 5},
'increment': 5,
'policies': {'ds1': 4, 'ds2': 40, 'ds3': 5}
},
(run, 2, 3): {
'external_data': {'ds1': 5, 'ds2': 40, 'ds3': 5},
'increment': 6,
'policies': {'ds1': 5, 'ds2': 40, 'ds3': 5}
},
(run, 3, 1): {
'external_data': {'ds1': 6, 'ds2': 40, 'ds3': 5},
'increment': 7,
'policies': {'ds1': 6, 'ds2': 40, 'ds3': 5}
},
(run, 3, 2): {
'external_data': {'ds1': 7, 'ds2': 40, 'ds3': 5},
'increment': 8,
'policies': {'ds1': 7, 'ds2': 40, 'ds3': 5}
},
(run, 3, 3): {
'external_data': {'ds1': 8, 'ds2': 40, 'ds3': 5},
'increment': 9,
'policies': {'ds1': 8, 'ds2': 40, 'ds3': 5}
},
(run, 4, 1): {
'external_data': {'ds1': 9, 'ds2': 40, 'ds3': 5},
'increment': 10,
'policies': {'ds1': 9, 'ds2': 40, 'ds3': 5}
},
(run, 4, 2): {
'external_data': {'ds1': 10, 'ds2': 40, 'ds3': 5},
'increment': 11,
'policies': {'ds1': 10, 'ds2': 40, 'ds3': 5}
},
(run, 4, 3): {
'external_data': {'ds1': 11, 'ds2': 40, 'ds3': 5},
'increment': 12,
'policies': {'ds1': 11, 'ds2': 40, 'ds3': 5}
}
}
expected_results = {}
expected_results_1 = get_expected_results(1)
expected_results_2 = get_expected_results(2)
expected_results.update(expected_results_1)
expected_results.update(expected_results_2)
def row(a, b):
return a == b
params = [["external_dataset", result, expected_results, ['increment', 'external_data', 'policies'], [row]]]
class GenericTest(make_generic_test(params)):
pass
if __name__ == '__main__':
unittest.main()
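The generic test above compares each row of the results DataFrame against an expected record keyed by `(run, timestep, substep)`. A minimal, self-contained sketch of that comparison pattern (the frame and expected values here are hypothetical, not cadCAD output):

```python
import pandas as pd

# Hypothetical results frame; in the real test this comes from run.execute().
result = pd.DataFrame([
    {'run': 1, 'timestep': 1, 'substep': 1, 'increment': 1},
    {'run': 1, 'timestep': 1, 'substep': 2, 'increment': 2},
])

# Expected values keyed by (run, timestep, substep), as in the test above.
expected = {
    (1, 1, 1): {'increment': 1},
    (1, 1, 2): {'increment': 2},
}

def row(a, b):
    return a == b

# Collect any (key, column) pairs where the result disagrees with expectations.
mismatches = []
for _, r in result.iterrows():
    key = (r['run'], r['timestep'], r['substep'])
    for col, want in expected[key].items():
        if not row(r[col], want):
            mismatches.append((key, col))
```

An empty `mismatches` list means every row matched; `make_generic_test` wraps this kind of check into `unittest` test cases.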

View File

@@ -0,0 +1,124 @@
import unittest
import pandas as pd
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from testing.generic_test import make_generic_test
from testing.system_models import historical_state_access
from cadCAD import configs
exec_mode = ExecutionMode()
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=configs)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
expected_results = {
(1, 0, 0): {'x': 0, 'nonexsistant': [], 'last_x': [], '2nd_to_last_x': [], '3rd_to_last_x': [], '4th_to_last_x': []},
(1, 1, 1): {'x': 1,
'nonexsistant': [],
'last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'2nd_to_last_x': [],
'3rd_to_last_x': [],
'4th_to_last_x': []},
(1, 1, 2): {'x': 2,
'nonexsistant': [],
'last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'2nd_to_last_x': [],
'3rd_to_last_x': [],
'4th_to_last_x': []},
(1, 1, 3): {'x': 3,
'nonexsistant': [],
'last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'2nd_to_last_x': [],
'3rd_to_last_x': [],
'4th_to_last_x': []},
(1, 2, 1): {'x': 4,
'nonexsistant': [],
'last_x': [
{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1},
{'x': 2, 'run': 1, 'substep': 2, 'timestep': 1},
{'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}
],
'2nd_to_last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'3rd_to_last_x': [],
'4th_to_last_x': []},
(1, 2, 2): {'x': 5,
'nonexsistant': [],
'last_x': [
{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1},
{'x': 2, 'run': 1, 'substep': 2, 'timestep': 1},
{'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}
],
'2nd_to_last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'3rd_to_last_x': [],
'4th_to_last_x': []},
(1, 2, 3): {'x': 6,
'nonexsistant': [],
'last_x': [
{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1},
{'x': 2, 'run': 1, 'substep': 2, 'timestep': 1},
{'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}
],
'2nd_to_last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'3rd_to_last_x': [],
'4th_to_last_x': []},
(1, 3, 1): {'x': 7,
'nonexsistant': [],
'last_x': [
{'x': 4, 'run': 1, 'substep': 1, 'timestep': 2},
{'x': 5, 'run': 1, 'substep': 2, 'timestep': 2},
{'x': 6, 'run': 1, 'substep': 3, 'timestep': 2}
],
'2nd_to_last_x': [
{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1},
{'x': 2, 'run': 1, 'substep': 2, 'timestep': 1},
{'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}
],
'3rd_to_last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'4th_to_last_x': []},
(1, 3, 2): {'x': 8,
'nonexsistant': [],
'last_x': [
{'x': 4, 'run': 1, 'substep': 1, 'timestep': 2},
{'x': 5, 'run': 1, 'substep': 2, 'timestep': 2},
{'x': 6, 'run': 1, 'substep': 3, 'timestep': 2}
],
'2nd_to_last_x': [
{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1},
{'x': 2, 'run': 1, 'substep': 2, 'timestep': 1},
{'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}
],
'3rd_to_last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'4th_to_last_x': []},
(1, 3, 3): {'x': 9,
'nonexsistant': [],
'last_x': [
{'x': 4, 'run': 1, 'substep': 1, 'timestep': 2},
{'x': 5, 'run': 1, 'substep': 2, 'timestep': 2},
{'x': 6, 'run': 1, 'substep': 3, 'timestep': 2}
],
'2nd_to_last_x': [
{'x': 1, 'run': 1, 'substep': 1, 'timestep': 1},
{'x': 2, 'run': 1, 'substep': 2, 'timestep': 1},
{'x': 3, 'run': 1, 'substep': 3, 'timestep': 1}
],
'3rd_to_last_x': [{'x': 0, 'run': 1, 'substep': 0, 'timestep': 0}],
'4th_to_last_x': []}
}
def row(a, b):
return a == b
params = [
["historical_state_access", result, expected_results,
['x', 'nonexsistant', 'last_x', '2nd_to_last_x', '3rd_to_last_x', '4th_to_last_x'], [row]]
]
class GenericTest(make_generic_test(params)):
pass
if __name__ == '__main__':
unittest.main()
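The expected results above show the shape of historical state access: `last_x` holds all substep records from the previous timestep, `2nd_to_last_x` from the timestep before that, and so on, with empty lists where no history exists yet. A plain-Python sketch of that lookup (the `history` records and `states_at` helper are hypothetical, not cadCAD's API):

```python
# Hypothetical state history: one record per (timestep, substep), mirroring
# the last_x / 2nd_to_last_x structures in the expected results above.
history = [
    {'x': 0, 'substep': 0, 'timestep': 0},
    {'x': 1, 'substep': 1, 'timestep': 1},
    {'x': 2, 'substep': 2, 'timestep': 1},
    {'x': 3, 'substep': 3, 'timestep': 1},
]

def states_at(history, current_timestep, offset):
    # offset=1 -> last timestep, offset=2 -> second to last, ...
    # Returns [] when the target timestep predates the simulation start.
    target = current_timestep - offset
    return [s for s in history if s['timestep'] == target]

last_x = states_at(history, 2, 1)           # all substeps of timestep 1
second_to_last = states_at(history, 2, 2)   # the genesis record at timestep 0
too_far_back = states_at(history, 2, 5)     # no such history: empty list
```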

View File

@@ -0,0 +1,73 @@
import unittest
import pandas as pd
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from testing.system_models import param_sweep
from cadCAD import configs
from testing.generic_test import make_generic_test
from testing.system_models.param_sweep import some_function
exec_mode = ExecutionMode()
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
def get_expected_results(run, beta, gamma):
return {
(run, 0, 0): {'policies': {}, 'sweeped': {}, 'alpha': 0, 'beta': 0},
(run, 1, 1): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 1, 2): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 1, 3): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 2, 1): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 2, 2): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 2, 3): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 3, 1): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 3, 2): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 3, 3): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 4, 1): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 4, 2): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 4, 3): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': {'beta': beta, 'gamma': gamma}, 'alpha': 1, 'beta': beta},
(run, 5, 1): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': beta, 'alpha': 1, 'beta': beta},
(run, 5, 2): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': beta, 'alpha': 1, 'beta': beta},
(run, 5, 3): {'policies': {'gamma': gamma, 'omega': 7}, 'sweeped': beta, 'alpha': 1, 'beta': beta}
}
expected_results_1 = {}
expected_results_1a = get_expected_results(1, 2, 3)
expected_results_1b = get_expected_results(2, 2, 3)
expected_results_1.update(expected_results_1a)
expected_results_1.update(expected_results_1b)
expected_results_2 = {}
expected_results_2a = get_expected_results(1, some_function, 4)
expected_results_2b = get_expected_results(2, some_function, 4)
expected_results_2.update(expected_results_2a)
expected_results_2.update(expected_results_2b)
expected_results = [expected_results_1, expected_results_2]
config_names = ['sweep_config_A', 'sweep_config_B']
def row(a, b):
return a == b
def create_test_params(feature, fields):
    # One expected-results dict per sweep configuration, in execution order.
    for i, (raw_result, _) in enumerate(run.execute()):
        yield [feature, pd.DataFrame(raw_result), expected_results[i], fields, [row]]
params = list(create_test_params("param_sweep", ['alpha', 'beta', 'policies', 'sweeped']))
class GenericTest(make_generic_test(params)):
pass
if __name__ == '__main__':
unittest.main()
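The sweep test produces one configuration per position in the swept parameter lists: `expected_results_1` corresponds to `beta=2, gamma=3` and `expected_results_2` to the second pair. A sketch of that zip-style pairing over parameter lists (the `sweep` dict and values are hypothetical; this is not cadCAD's configuration API):

```python
# Hypothetical sweep: parameter lists of equal length are paired off
# position by position, yielding one simulation config per index.
sweep = {'beta': [2, 5], 'gamma': [3, 4]}

configs = [dict(zip(sweep.keys(), values)) for values in zip(*sweep.values())]
```

Each generated config is then run independently, which is why the test iterates over `run.execute()` and checks one expected-results dict per configuration.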

View File

@@ -0,0 +1,43 @@
import unittest
import pandas as pd
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from testing.generic_test import make_generic_test
from testing.system_models import policy_aggregation
from cadCAD import configs
exec_mode = ExecutionMode()
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=configs)
raw_result, tensor_field = run.execute()
result = pd.DataFrame(raw_result)
expected_results = {
(1, 0, 0): {'policies': {}, 's1': 0},
(1, 1, 1): {'policies': {'policy1': 2, 'policy2': 4}, 's1': 1},
(1, 1, 2): {'policies': {'policy1': 8, 'policy2': 8}, 's1': 2},
(1, 1, 3): {'policies': {'policy1': 4, 'policy2': 8, 'policy3': 12}, 's1': 3},
(1, 2, 1): {'policies': {'policy1': 2, 'policy2': 4}, 's1': 4},
(1, 2, 2): {'policies': {'policy1': 8, 'policy2': 8}, 's1': 5},
(1, 2, 3): {'policies': {'policy1': 4, 'policy2': 8, 'policy3': 12}, 's1': 6},
(1, 3, 1): {'policies': {'policy1': 2, 'policy2': 4}, 's1': 7},
(1, 3, 2): {'policies': {'policy1': 8, 'policy2': 8}, 's1': 8},
(1, 3, 3): {'policies': {'policy1': 4, 'policy2': 8, 'policy3': 12}, 's1': 9}
}
def row(a, b):
return a == b
params = [["policy_aggregation", result, expected_results, ['policies', 's1'], [row]]]
class GenericTest(make_generic_test(params)):
pass
if __name__ == '__main__':
unittest.main()
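Policy aggregation combines the dicts returned by several policy functions in the same substep into the single `policies` dict checked above. A minimal sketch of key-wise aggregation with a summing operator (the `signals` values and `add_signals` helper are hypothetical, not cadCAD's aggregation internals):

```python
from functools import reduce

# Hypothetical per-substep outputs from three policy functions.
signals = [{'policy1': 2, 'policy2': 4}, {'policy1': 2}, {'policy2': 4}]

def add_signals(a, b):
    # Sum values key-wise; a key missing from one dict contributes 0.
    return {k: a.get(k, 0) + b.get(k, 0) for k in a.keys() | b.keys()}

aggregated = reduce(add_signals, signals)
```

State update functions then read the aggregated dict rather than any individual policy function's output.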

testing/utils.py Normal file
View File

@@ -0,0 +1,21 @@
#
# def record_generator(row, cols):
# return {col: row[col] for col in cols}
def gen_metric_row(row, cols):
return ((row['run'], row['timestep'], row['substep']), {col: row[col] for col in cols})
# def gen_metric_row(row):
# return ((row['run'], row['timestep'], row['substep']), {'s1': row['s1'], 'policies': row['policies']})
# def gen_metric_row(row):
# return {
# 'run': row['run'],
# 'timestep': row['timestep'],
# 'substep': row['substep'],
# 's1': row['s1'],
# 'policies': row['policies']
# }
def gen_metric_dict(df, cols):
return dict([gen_metric_row(row, cols) for index, row in df.iterrows()])
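The two helpers above turn a results DataFrame into the `(run, timestep, substep)`-keyed dict that the generic tests compare against. They can be exercised without cadCAD; a small usage sketch with a hypothetical two-row frame:

```python
import pandas as pd

def gen_metric_row(row, cols):
    # Key each record by its (run, timestep, substep) coordinates.
    return ((row['run'], row['timestep'], row['substep']), {col: row[col] for col in cols})

def gen_metric_dict(df, cols):
    return dict(gen_metric_row(row, cols) for _, row in df.iterrows())

# Hypothetical simulation output: one row per (run, timestep, substep).
df = pd.DataFrame([
    {'run': 1, 'timestep': 0, 'substep': 0, 's1': 0, 'policies': {}},
    {'run': 1, 'timestep': 1, 'substep': 1, 's1': 1, 'policies': {'policy1': 2}},
])

metrics = gen_metric_dict(df, ['s1', 'policies'])
```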

Some files were not shown because too many files have changed in this diff.