What is it about?

Building models of real-world systems and predicting their behavior is a powerful way to test our understanding of those systems. However, measuring how well a simulation describes the system is generally not a simple task. In this paper, we tackle this task for neural network simulations and present concepts, methods, and implementations for comparing network activities. The goal of such a comparison is to evaluate the accuracy of the models. With this work, we show how to formalize this process and make it more reproducible, as illustrated by the sketch below.
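
To make the idea of comparing network activities concrete, here is a minimal sketch, not the specific method of the paper: it tests whether two simulated networks produce the same distribution of single-neuron firing rates, using a standard two-sample statistical test. The spike data, network sizes, and significance threshold are hypothetical placeholders.

```python
# Minimal sketch: compare two network activities via a summary statistic,
# here the distribution of single-neuron firing rates. Illustrative only;
# the spike data and the significance level are hypothetical.
import numpy as np
from scipy import stats

def firing_rates(spike_trains, duration):
    """Mean firing rate (spikes/s) of each neuron over the recording."""
    return np.array([len(st) / duration for st in spike_trains])

# Hypothetical spike trains: one list of spike-time arrays per network.
rng = np.random.default_rng(0)
duration = 10.0  # seconds
network_a = [np.sort(rng.uniform(0, duration, rng.poisson(50)))
             for _ in range(100)]
network_b = [np.sort(rng.uniform(0, duration, rng.poisson(55)))
             for _ in range(100)]

rates_a = firing_rates(network_a, duration)
rates_b = firing_rates(network_b, duration)

# Two-sample Kolmogorov-Smirnov test: do the rate distributions differ?
statistic, p_value = stats.ks_2samp(rates_a, rates_b)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
if p_value < 0.05:  # hypothetical significance threshold
    print("Rate distributions differ significantly.")
else:
    print("No significant difference in rate distributions.")
```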

Why is it important?

Collaborative work is essential to tackle big questions such as "How does the brain work?". Key components that facilitate collaborative work are the formalization, standardization, and comparability of software tools, data formats, and workflows, together with a rigorous quantitative assessment of progress. This work contributes to this effort by building on existing resources and community projects and by adding a tool that makes neural network models and their evaluation more reproducible.

Read the Original

This page is a summary of: "Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data," Frontiers in Neuroinformatics, December 2018.
DOI: 10.3389/fninf.2018.00090.
You can read the full text via the DOI above.
