NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction - Supplementary Material - A Derivation for Computing Opacity α_i
Next consider the case where [t_i, t_{i+1}] lies in a range [t_l, t_r] over which the camera ray is exiting the surface, i.e. the signed distance function f is increasing along p(t) over [t_l, t_r]. Then we have -(∇f(p(t)) · v) < 0 in [t_i, t_{i+1}], so according to Eqn. 1 we have ρ(t) = 0. Therefore, by Eqn. 12 of the paper, α_i = 1 - exp(-∫_{t_i}^{t_{i+1}} ρ(t) dt) = 1 - exp(0) = 0.

Recall that our S-density field φ_s(f(x)) is defined using the logistic density function φ_s(x) = s e^{-sx} / (1 + e^{-sx})², which is the derivative of the sigmoid function Φ_s(x) = (1 + e^{-sx})^{-1}, i.e. φ_s(x) = Φ_s'(x). As a first-order approximation of the signed distance function f, suppose that locally the surface is tangentially approximated by a sufficiently small planar patch with its outward unit normal vector denoted as n. Now suppose p(t*) is a point on the surface S, that is, f(p(t*)) = 0; we then examine the value of dw/dt at t = t*. The signed distance function f is modeled by an MLP that consists of 8 hidden layers with a hidden size of 256.
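To make the derivation concrete, here is a minimal NumPy sketch of the discrete opacity this argument leads to (Eqn. 13 of the paper), where the max(·, 0) clamp realizes the exiting case above; the scale s and the sample SDF values are made up for illustration.

```python
import numpy as np

def sigmoid(x, s):
    # Phi_s(x) = (1 + e^{-s x})^{-1}, the CDF whose derivative is the
    # logistic density phi_s(x) = s e^{-s x} / (1 + e^{-s x})^2.
    return 1.0 / (1.0 + np.exp(-s * x))

def neus_alpha(sdf, s=64.0):
    # sdf[i] = f(p(t_i)): signed distance sampled at points along the ray.
    # Discrete opacity (Eqn. 13 of the NeuS paper):
    #   alpha_i = max((Phi_s(f_i) - Phi_s(f_{i+1})) / Phi_s(f_i), 0)
    # Where the ray is exiting the surface, f is increasing, so
    # Phi_s(f_i) < Phi_s(f_{i+1}), rho(t) = 0, and hence alpha_i = 0,
    # which is exactly what the clamp below enforces.
    cdf = sigmoid(sdf, s)
    alpha = (cdf[:-1] - cdf[1:]) / np.clip(cdf[:-1], 1e-6, None)
    return np.clip(alpha, 0.0, None)

# Toy ray that enters (f decreasing through 0) and then exits (f increasing):
f = np.array([0.30, 0.15, 0.02, -0.10, -0.02, 0.12, 0.25])
print(neus_alpha(f))  # positive alphas while entering, zeros while exiting
```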
Explaining vague language
Why is language vague? Vagueness may be explained and rationalized if it can be shown that vague language is more useful to speaker and hearer than precise language. In a well-known paper, Lipman proposes a game-theoretic account of vagueness in terms of mixed strategies that leads to a puzzle: vagueness cannot be strictly better than precision at equilibrium. More recently, Égré, Spector, Mortier and Verheyen have put forward a Bayesian account of vagueness establishing that using vague words can be strictly more informative than using precise words. This paper compares the two results and explains why they are not in contradiction. Lipman's definition of vagueness relies exclusively on a property of signaling strategies, without making any assumptions about the lexicon, whereas Égré et al.'s involves a layer of semantic content. We argue that the semantic account is needed, and that it is more adequate and explanatory of vagueness.
- North America > Canada > Ontario > Toronto (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Information Technology > Game Theory (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.93)
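As a toy illustration of the semantic layer the abstract argues for, below is a small sketch of a Bayesian listener updating on a vague word whose lexical meaning is a threshold that is itself uncertain. The heights, thresholds, and priors are invented; the sketch shows the mechanics of this kind of account, not Égré et al.'s informativity result, which rests on further assumptions not modeled here.

```python
import numpy as np

# Toy Bayesian listener for a vague scalar adjective such as "tall".
# Semantic layer: "tall" is true of height h iff h > theta, but the
# threshold theta is itself uncertain. All numbers here are made up.
heights = np.arange(150, 201, 5)          # candidate heights (cm)
prior   = np.ones_like(heights, float)
prior  /= prior.sum()                     # uniform prior over heights

thetas  = np.array([165, 170, 175, 180])  # possible thresholds for "tall"
p_theta = np.full(len(thetas), 1 / len(thetas))

# Likelihood of the utterance: P("tall" | h) = P(theta < h)
lik = np.array([(p_theta * (thetas < h)).sum() for h in heights])

posterior = prior * lik
posterior /= posterior.sum()              # Bayesian update after hearing "tall"
for h, p in zip(heights, posterior):
    print(f"{h} cm: {p:.3f}")
```

Note how the vague word induces a graded posterior rather than a sharp cut: this graded update is the kind of semantic content that a purely strategy-based definition of vagueness leaves out.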
Set2Graph: Learning Graphs From Sets
Hadar Serviansky, Nimrod Segol, Jonathan Shlomi, Kyle Cranmer, Eilam Gross, Haggai Maron, Yaron Lipman
Many problems in machine learning (ML) can be cast as learning functions from sets to graphs, or more generally to hypergraphs; in short, Set2Graph functions. Examples include clustering, learning vertex and edge features on graphs, and learning triplet data in a collection. Current neural network models that approximate Set2Graph functions come from two main ML sub-fields: equivariant learning and similarity learning. Equivariant models are in general computationally challenging or even infeasible, while similarity learning models can be shown to have limited expressive power. In this paper we suggest a family of neural network models for learning Set2Graph functions that is both practical and of maximal expressive power (universal), that is, it can approximate arbitrary continuous Set2Graph functions over compact sets. Testing our models on different machine learning tasks, including an application to particle physics, we find that they compare favorably to existing baselines.
- North America > United States > New York (0.04)
- Asia > Middle East > Israel (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
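For a sense of what a set-to-graph function looks like as a network, here is a minimal NumPy sketch of a set-to-edge (k = 2) forward pass following the broad pattern the abstract alludes to: a per-element encoder, a broadcast to all ordered pairs, and a pairwise MLP producing an n × n edge matrix. The layer sizes, the mean-pooled global context, and the random untrained weights are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Tiny two-layer MLP with ReLU, applied along the last axis.
    return np.maximum(x @ w1 + b1, 0) @ w2 + b2

def set2edge(X, d_hidden=16):
    # 1. Encode each set element; mixing in a pooled summary keeps the
    #    encoder permutation-equivariant.
    # 2. Broadcast the embeddings to all ordered pairs (i, j).
    # 3. Score each pair with an MLP -> an n x n matrix of edge logits.
    # Weights are random: this shows the data flow, not a trained model.
    n, d_in = X.shape
    w1 = rng.normal(size=(d_in, d_hidden)); b1 = np.zeros(d_hidden)
    w2 = rng.normal(size=(d_hidden, d_hidden)); b2 = np.zeros(d_hidden)
    H = mlp(X, w1, b1, w2, b2)                # per-element embeddings
    H = H + H.mean(axis=0, keepdims=True)     # permutation-invariant context
    pairs = np.concatenate(
        [np.repeat(H, n, 0), np.tile(H, (n, 1))], axis=1)  # all (i, j) pairs
    w3 = rng.normal(size=(2 * d_hidden, d_hidden)); b3 = np.zeros(d_hidden)
    w4 = rng.normal(size=(d_hidden, 1)); b4 = np.zeros(1)
    return mlp(pairs, w3, b3, w4, b4).reshape(n, n)  # edge scores

X = rng.normal(size=(5, 3))   # a set of 5 elements with 3 features each
print(set2edge(X).shape)      # (5, 5): one score per ordered pair
```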