Appendix A: Versatility of the neuron model

In our neuron model, depending on the decay coefficients

Neural Information Processing Systems

The SRM-based back-propagation can be summarized using the relationship between the potentials as follows.

Hyper-parameters used for loss landscape estimation (Section 3.4) and random spike-train matching: some of the hyper-parameters were not mentioned in the paper.

Table A1: Hyper-parameters used for loss landscape estimation (Section 3.4) and random spike-train matching




Review for NeurIPS paper: Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks

Neural Information Processing Systems

Weaknesses: More detailed discussion of the main weaknesses of this work: (P1, lack of novelty): The authors' main argument is that activation-based and timing-based methods have their respective pros and cons, so combining them via a weighted sum of the two (in terms of the intermediate derivative partial_L/partial_V that both methods compute) will retain the best of both worlds. While this is a reasonable assumption, the idea lacks a fundamentally new contribution. Timing and activation are just two facets of the same spiking phenomenon. On what basis can the derivatives with respect to timing and activation be added together? I don't see an appropriate unifying mathematical treatment here.
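The weighted-sum rule the review describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the mixing coefficient `alpha`, and the example gradient values are all hypothetical, and the paper's actual combination of the two dL/dV terms may be more elaborate.

```python
import numpy as np

def combined_grad_V(grad_V_activation, grad_V_timing, alpha=0.5):
    """Blend the intermediate derivative dL/dV from the activation-based
    path with the one from the timing-based path (names illustrative)."""
    return alpha * grad_V_activation + (1.0 - alpha) * grad_V_timing

# Hypothetical per-neuron gradients from each method:
grad_act = np.array([0.2, -0.1, 0.4])   # activation-based (surrogate-gradient) path
grad_time = np.array([0.0, -0.3, 0.1])  # timing-based path
print(combined_grad_V(grad_act, grad_time, alpha=0.5))
```

The review's question remains valid for this sketch too: the blend is well-defined numerically, but nothing here justifies why the two derivatives live in the same space and may be summed.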


Why you should not use (f)lex, yacc and bison - Federico Tomassetti - Software Architect


In the field of parsing, Lex and Yacc, as well as their respective successors flex and GNU Bison, have a sort of venerable status. And you could still use them today. But you should not do that. In this article I will explain why they have problems and show you some alternatives. Lex and Yacc were the first popular and efficient lexer and parser generators; flex and Bison were the first widespread open-source versions compatible with the original software. Each of these tools has more than 30 years of history, which is an achievement in itself. For some people they are still the first software that comes to mind when talking about parsing. So, why should you avoid them? Well, we found a few reasons based on our experience developing parsers for our clients. For example, we had to work with existing lexers in flex and found it difficult to add modern features, like Unicode support, or to make the lexer re-entrant (i.e., usable in many threads). With Bison our clients had trouble organizing large codebases, and we found it difficult to improve the efficiency of a parser without rewriting a large part of the grammar. The short version is that there are tools that are more flexible and productive, like ANTLR.
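The re-entrancy point above is worth making concrete. Classic lex/flex-generated scanners historically kept their state in globals, which is what makes them awkward to use from multiple threads. A minimal sketch of the alternative, assuming nothing beyond the Python standard library (the token names and grammar are invented for illustration): a lexer whose state is entirely local, so the same function is safe to call concurrently.

```python
import re

# Illustrative token set; any real grammar would differ.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
# One master regex with a named group per token kind.
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    """Return (kind, value) pairs. All state is local to the call,
    so the lexer is re-entrant and usable from many threads at once."""
    tokens = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("x = 42 + y"))
# -> [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```

Modern generators such as ANTLR produce lexers structured this way by default (an object per scan, no shared mutable state), which is one reason the article recommends them over flex.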