diff --git a/VRC_PonderRNN.ipynb b/VRC_PonderRNN.ipynb index 8dc13b0..98a30aa 100644 --- a/VRC_PonderRNN.ipynb +++ b/VRC_PonderRNN.ipynb @@ -3,19 +3,23 @@ { "cell_type": "markdown", "source": [ - "Building on [PonderNet](https://arxiv.org/abs/2107.05407), this notebook implements a neural alternative of the [Variable Rate Coding](https://doi.org/10.32470/CCN.2019.1397-0) model to produce human-like responses.\n", + "## Intro\n", + "\n", + "In the context of behavioral data, we are interested in simultaneously modeling speed and accuracy. Yet, most advanced machine learning techniques cannot capture this duality of decision-making data.\n", + "\n", + "\n", + "Building on [PonderNet](https://arxiv.org/abs/2107.05407) and [Variable Rate Coding](https://doi.org/10.32470/CCN.2019.1397-0), this notebook implements a neural model that captures the speed and accuracy of human-like responses.\n", "\n", "Given stimulus symbols as inputs, the model produces two outputs:\n", "\n", "- Response symbol, which, in comparison with the input stimuli, can be used to measure accuracy.\n", - "- Remaining entropy (to be contrasted against a decision threshold and ultimateely halt the process).\n", + "- Halting probability ($\\lambda_n$), i.e., the probability of halting at step $n$ given that the process has not halted earlier.\n", "\n", - "Under the hood, the model uses a RNN along with multiple Poisson processes to...\n", + "Under the hood, the model iterates over an ICOM-like component to reach a halting point in time. 
Unlike in DDM and ICOM models, all the parameters and outcomes of the current model *seem* cognitively interpretable.\n", + "### Additional resources\n", "\n", - "## Resources\n", - "\n", - "- [Network model](https://drive.google.com/file/d/16eiUUwKGWfh9pu9VUxzlx046hQNHV0Qe/view?usp=sharinghttps://drive.google.com/file/d/16eiUUwKGWfh9pu9VUxzlx046hQNHV0Qe/view?usp=sharing)\n" + "- [ICOM network model](https://drive.google.com/file/d/16eiUUwKGWfh9pu9VUxzlx046hQNHV0Qe/view?usp=sharing)\n" ], "metadata": {} }, @@ -369,9 +373,33 @@ }, { "cell_type": "code", - "execution_count": null, - "source": [], - "outputs": [], + "execution_count": 45, + "source": [ + "# example code to decode a stimulus into multiple sequences (one per channel)\n", + "\n", + "import torch\n", + "from torch import nn\n", + "\n", + "n_inputs = 7\n", + "max_timestep = 10\n", + "n_channels = 5\n", + "\n", + "X = torch.nn.functional.one_hot(torch.tensor(4), num_classes=n_inputs).type(torch.float)  # one-hot stimulus\n", + "\n", + "decode = nn.Linear(n_inputs, n_channels * max_timestep)  # project stimulus to a channel-by-time grid\n", + "out = decode(X).reshape((n_channels, max_timestep))\n", + "\n", + "print(out.shape)" ], + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "torch.Size([5, 10])\n" + ] + } + ], "metadata": {} } ],