Is Deeper Better only when Shallow is Good?

Eran Malach, Shai Shalev-Shwartz

Neural Information Processing Systems

While current works account for the importance of depth for the expressive power of neural-networks, it remains an open question whether these benefits are exploited during a gradient-based optimization process.




We thank the reviewers for their overall positive feedback

Neural Information Processing Systems

The results are shown in Figure 1. We follow a proof similar to the one given in the original submission.


Reviews: Is Deeper Better only when Shallow is Good?

Neural Information Processing Systems

This paper investigates the effect of depth on expressivity and learnability, given a distribution generated by an iterated function system. In particular, the authors show that shallow networks need an exponential number of neurons to realize a fractal distribution, while deep networks only require a number of neurons that is linear in the depth of the fractal distribution. The results are interesting and could shed some light on the theoretical understanding of deep learning. The reviewers have therefore shown their support for this paper, even though it studies a mathematically narrow case whose practical value is not yet clear. The impact of the work would be greatly improved if the authors could extend their study to more general cases.


Is Deeper Better only when Shallow is Good?

Malach, Eran, Shalev-Shwartz, Shai

arXiv.org Machine Learning

Understanding the power of depth in feed-forward neural networks is an ongoing challenge in the field of deep learning theory. While current works account for the importance of depth for the expressive power of neural-networks, it remains an open question whether these benefits are exploited during a gradient-based optimization process. In this work we explore the relation between expressivity properties of deep networks and the ability to train them efficiently using gradient-based algorithms. We give a depth separation argument for distributions with fractal structure, showing that they can be expressed efficiently by deep networks, but not with shallow ones. These distributions have a natural coarse-to-fine structure, and we show that the balance between the coarse and fine details has a crucial effect on whether the optimization process is likely to succeed. We prove that when the distribution is concentrated on the fine details, gradient-based algorithms are likely to fail. Using this result we prove that, at least in some distributions, the success of learning deep networks depends on whether the distribution can be well approximated by shallower networks, and we conjecture that this property holds in general.
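The fractal distributions in the abstract above are generated by an iterated function system (IFS), whose coarse-to-fine structure can be made concrete with a small sampler. The sketch below is illustrative only — the specific maps (a Cantor-style IFS on [0, 1]) and the function name `sample_ifs` are assumptions for the example, not the construction used in the paper:

```python
import random

def sample_ifs(depth, maps=None, rng=random):
    """Sample a point from a 1-D iterated function system, truncated
    at the given depth.

    The default maps yield a Cantor-style distribution: each level
    sends the point into the left or right third of the unit interval,
    chosen uniformly at random.
    """
    if maps is None:
        maps = [lambda x: x / 3.0,            # left third
                lambda x: x / 3.0 + 2.0 / 3.0]  # right third
    x = 0.5  # any starting point in [0, 1] works
    for _ in range(depth):
        x = rng.choice(maps)(x)
    return x

# Each extra level adds one finer scale of detail: depth-1 samples land
# in [0, 1/3] or [2/3, 1], depth-2 samples in one of four ninth-length
# intervals, and so on.
points = [sample_ifs(depth=5) for _ in range(1000)]
```

The recursive structure is what drives the depth-separation argument: a network whose depth grows linearly with the IFS depth can peel off one scale per layer, whereas a shallow network must represent all the fine-scale intervals at once.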


The Fractal Nature of the Semantic Web

Berners-Lee, Tim (Massachusetts Institute of Technology) | Kagal, Lalana (Massachusetts Institute of Technology)

AI Magazine

In the past, many knowledge representation systems failed because they were too monolithic and didn’t scale well, whereas other systems failed to have an impact because they were small and isolated. Along with this trade-off in size, there is also a constant tension between the cost involved in building a larger community that can interoperate through common terms and the cost of the lack of interoperability. The semantic web offers a good compromise between these approaches as it achieves wide-scale communication and interoperability using finite effort and cost. The semantic web is a set of standards for knowledge representation and exchange that is aimed at providing interoperability across applications and organizations. We believe that the gathering success of this technology is not derived from the particular choice of syntax or of logic. Its main contribution is in recognizing and supporting the fractal patterns of scalable web systems. These systems will be composed of many overlapping communities of all sizes, ranging from one individual to the entire population, that have internal (but not global) consistency. The information in these systems, including documents and messages, will contain some terms that are understood and accepted globally, some that are understood within certain communities, and some that are understood locally within the system. The amount of interoperability between interacting agents (software or human) will depend on how many communities they have in common and how many ontologies (groups of consistent and related terms) they share. In this article we discuss why fractal patterns are an appropriate model for web systems and how semantic web technologies can be used to design scalable and interoperable systems.