Supplementary Material: On Numerosity of Deep Neural Networks
1 Generalization study on object density
Here we add the generalization results of the Nu-Net when the object density lies outside the distribution of the training set. Specifically, we run the Nu-Net on test images that are the same as the training images but have 50% greater variation in object density. The 85% estimation interval length for each input number is shown in Figure 1. It can be seen that, for numbers 1, 2 and 4, the 85% estimation interval length is 1, meaning that the Nu-Net performs very well on small numbers, i.e., on the task of subitizing.
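The metric above can be made concrete with a small sketch (a plausible reading of the metric, not the authors' code): for many test images that all contain the same true number of objects, take the network's predicted numbers and find the length of the shortest integer interval that covers 85% of them. A length of 1 means 85% of predictions agree on a single value.

```python
import numpy as np

def interval_length_85(predictions):
    """Length of the shortest interval covering 85% of predicted numbers.

    `predictions` holds the integer numbers the network outputs for many
    test images that all contain the same true number of objects.
    """
    preds = np.sort(np.asarray(predictions))
    n = len(preds)
    k = int(np.ceil(0.85 * n))  # how many predictions the interval must cover
    # slide a window of k consecutive sorted predictions, keep the tightest
    widths = preds[k - 1:] - preds[:n - k + 1]
    return int(widths.min()) + 1  # inclusive interval length in integers

# A model that almost always answers "2" for 2-object images
preds_for_two = [2] * 90 + [3] * 10
print(interval_length_85(preds_for_two))  # -> 1
```

Under this reading, a value of 1 for numbers 1, 2 and 4 says the network's answer is essentially a point estimate for those numerosities, even under the density shift.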
Review for NeurIPS paper: On Numerosity of Deep Neural Networks
This paper demonstrates that an analysis relied upon in a previous paper (Nasr et al., 2019) to identify number-sensitive units in a neural network trained for object recognition is flawed, and that indeed the same network with randomly initialized weights also has a large number of number-sensitive units. Moreover, the number of units detected depends strongly on the sample size of the statistical test, with larger sample sizes detecting no number-sensitive units. The paper additionally performs some analyses on a network trained specifically to predict number. The reviewers generally felt that the demonstration of Nasr et al.'s flawed analysis was important, with R2 arguing that the work is "imperative to publish" and R1 and R3 finding the experiments in the first part of the paper convincing. However, R1, R3, and R4 all had concerns with the second part of the paper, in which it is claimed that a network trained to classify number (Nu-Net) can learn to subitize. I feel that the results in the first part of the paper are sufficiently impactful that the paper should be accepted.
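The sample-size critique can be illustrated with a toy simulation (a hypothetical sketch, not the paper's or Nasr et al.'s actual analysis; the unit counts and sample sizes below are made up): when thousands of units are each tested with a per-unit ANOVA at a fixed significance level, a small sample of pure-noise responses from an untrained network still flags roughly alpha × n_units units as "number selective" purely by chance.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

n_units = 5000   # "neurons" in an untrained network (assumed scale)
n_numbers = 4    # distinct numerosities shown
n_images = 8     # images per numerosity (a small sample)
alpha = 0.01

# Pure-noise responses: no unit actually encodes number.
responses = rng.normal(size=(n_units, n_numbers, n_images))

flagged = 0
for unit in responses:
    # one-way ANOVA across numerosity groups, per unit
    _, p = f_oneway(*unit)
    flagged += p < alpha

print(flagged)  # roughly alpha * n_units, i.e. about 50, by chance alone
```

With a small number of neurons this false-positive count would be negligible, but against thousands of units it yields a sizeable population of spuriously "number-sensitive" units, matching the review's point that the detected units depend on the statistics of the test rather than on training.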
Rethinking the Unpretentious U-net for Medical Ultrasound Image Segmentation
Chen, Gongping, Li, Lei, Zhang, JianXun, Dai, Yu
Breast tumor segmentation is one of the key steps that helps us characterize and localize tumor regions. However, variable tumor morphology, blurred boundaries, and similar intensity distributions make accurate segmentation of breast tumors challenging. Recently, many U-net variants have been proposed and widely used for breast tumor segmentation. However, these architectures suffer from two limitations: (1) they ignore the characterization ability of the benchmark networks, and (2) they introduce extra complex operations, which increases the difficulty of understanding and reproducing the network. To alleviate these challenges, this paper proposes a simple yet powerful nested U-net (NU-net) for accurate segmentation of breast tumors. The key idea is to utilize U-Nets with different depths and shared weights to achieve robust characterization of breast tumors. NU-net mainly has the following advantages: (1) it improves network adaptability and robustness to breast tumors at different scales, (2) it is easy to reproduce and execute, and (3) the extra operations increase network parameters without significantly increasing computational cost. Extensive experimental comparisons with twelve state-of-the-art segmentation methods on three public breast ultrasound datasets demonstrate that NU-net has more competitive segmentation performance on breast tumors. Furthermore, the robustness of NU-net is further illustrated on the segmentation of renal ultrasound images. The source code is publicly available at https://github.com/CGPzy/NU-net.
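The key idea of the abstract, U-Nets of different depths sharing the same weights, can be sketched loosely as follows (a minimal toy interpretation, not the authors' released architecture; all layer names and sizes are invented). One shared encoder/decoder conv pair is reused recursively, so U-Nets of depth 1 through max_depth reuse the same parameters, and their outputs are averaged:

```python
import torch
import torch.nn as nn

class TinySharedUNet(nn.Module):
    """Toy sketch of nested U-Nets with shared weights (not the NU-net code)."""

    def __init__(self, ch=8, max_depth=3):
        super().__init__()
        self.max_depth = max_depth
        self.inp = nn.Conv2d(1, ch, 3, padding=1)
        # one shared conv per stage, reused by every sub-U-Net and every depth
        self.enc = nn.Conv2d(ch, ch, 3, padding=1)
        self.dec = nn.Conv2d(ch * 2, ch, 3, padding=1)
        self.out = nn.Conv2d(ch, 1, 1)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def unet(self, x, depth):
        if depth == 0:
            return torch.relu(self.enc(x))
        skip = torch.relu(self.enc(x))
        deep = self.unet(self.pool(skip), depth - 1)
        return torch.relu(self.dec(torch.cat([skip, self.up(deep)], dim=1)))

    def forward(self, x):
        x = torch.relu(self.inp(x))
        # U-Nets of depth 1..max_depth all share the weights defined above
        masks = [self.out(self.unet(x, d)) for d in range(1, self.max_depth + 1)]
        return torch.stack(masks).mean(0)

x = torch.randn(1, 1, 32, 32)
print(TinySharedUNet()(x).shape)  # one mask per input pixel: (1, 1, 32, 32)
```

Because the sub-U-Nets share weights, the ensemble over depths adds robustness to tumor scale without multiplying the parameter count, which is the trade-off the abstract highlights.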
On Numerosity of Deep Neural Networks
Recently, a provocative claim was published that number sense spontaneously emerges in a deep neural network trained merely for visual object recognition. If true, this has far-reaching significance for the fields of machine learning and cognitive science alike. In this paper, we prove the above claim to be unfortunately incorrect. The statistical analysis supporting the claim is flawed in that the sample set used to identify number-aware neurons is too small compared to the huge number of neurons in the object recognition network. By this flawed analysis one could mistakenly identify number-sensing neurons in any randomly initialized deep neural network that has not been trained at all. With the above critique we ask: what if a deep convolutional neural network is carefully trained for numerosity? Our findings are mixed. Even after being trained with number-depicting images, the deep learning approach still has difficulty acquiring the abstract concept of number, a cognitive task that preschoolers perform with ease. On the other hand, we do find some encouraging evidence suggesting that deep neural networks are more robust to distribution shift for small numbers than for large numbers.