GAN You See Me? Enhanced Data Reconstruction Attacks against Split Inference

Ziang Li, Mengda Yang

Neural Information Processing Systems

To overcome these challenges, we propose a GAN-based LAtent Space Search attack (GLASS) that harnesses abundant prior knowledge from public data using advanced StyleGAN technologies. Additionally, we introduce GLASS++ to enhance reconstruction stability.
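The core idea of a latent-space search attack can be illustrated with a toy sketch: the attacker optimizes a latent code so that the generator's output, passed through the client-side model, matches the intercepted features. Everything below is an assumption for illustration — linear stand-ins replace the StyleGAN generator and the client model, and plain gradient descent replaces the paper's search procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models): a linear "generator"
# mapping latent codes to images, and a linear client-side model whose
# intermediate features the attacker intercepts during split inference.
G = rng.normal(size=(16, 4))   # generator: latent (4,) -> image (16,)
F = rng.normal(size=(8, 16))   # client model: image (16,) -> features (8,)
A = F @ G                      # composed map from latent to features

z_true = rng.normal(size=4)
target_feat = A @ z_true       # features intercepted by the attacker

def feature_loss(z):
    """Squared distance between generated features and intercepted ones."""
    return float(np.sum((A @ z - target_feat) ** 2))

# Latent-space search: gradient descent on the feature-matching loss, with a
# step size derived from the spectral norm of A to guarantee convergence.
lr = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
z = np.zeros(4)
for _ in range(5000):
    z -= lr * 2.0 * A.T @ (A @ z - target_feat)

reconstruction = G @ z         # attacker's reconstruction of the input
```

In the real attack the generator is a pretrained StyleGAN, which is what injects the prior knowledge from public data: the search is confined to images the generator can produce.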



Cascaded Dilated Dense Network with Two-step Data Consistency for MRI Reconstruction

Hao Zheng, Faming Fang, Guixu Zhang

Neural Information Processing Systems

Compressed Sensing MRI (CS-MRI) aims at reconstructing de-aliased images from sub-Nyquist-sampled k-space data to accelerate MR imaging. Inspired by recent deep learning methods, we propose a Cascaded Dilated Dense Network (CDDN) for MRI reconstruction.
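A data-consistency step is the standard glue between cascades in networks of this kind: the FFT of the network output is corrected by re-inserting the actually measured k-space samples at the sampled locations. The sketch below shows that generic operation, not the paper's exact two-step variant; the sizes, mask, and "network output" are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumptions): an 8x8 "image", a random under-sampling mask,
# and a random array standing in for an intermediate network reconstruction.
image = rng.normal(size=(8, 8))
mask = rng.random((8, 8)) < 0.4          # True where k-space was sampled
measured_k = np.fft.fft2(image) * mask   # sub-Nyquist k-space measurements

def data_consistency(recon, measured_k, mask):
    """Replace the reconstruction's k-space values at sampled locations
    with the measured data, then transform back to image space."""
    k = np.fft.fft2(recon)
    k = np.where(mask, measured_k, k)
    return np.fft.ifft2(k)

recon = rng.normal(size=(8, 8))          # stand-in for a network output
dc = data_consistency(recon, measured_k, mask)

# After the step, the k-space of `dc` agrees with the measurements
# everywhere the mask is True.
err = np.abs(np.fft.fft2(dc)[mask] - measured_k[mask]).max()
```

This hard replacement is the simplest form of data consistency; soft (weighted) variants interpolate between the measured and predicted k-space values when the measurements are noisy.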


NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction - Supplementary Material - A Derivation for Computing Opacity αi

Neural Information Processing Systems

Next consider the case where [ti, ti+1] lies in a range [tℓ, tr] over which the camera ray is exiting the surface, i.e. the signed distance function is increasing on p(t) over [tℓ, tr]. Then we have −(∇f(p(t)) · v) < 0 in [ti, ti+1], and according to Eqn. 1 we have ρ(t) = 0. Therefore, by Eqn. 12 of the paper, we have αi = 1 − exp(−∫ ρ(t) dt over [ti, ti+1]) = 1 − exp(0) = 0. Recall that our S-density field φs(f(x)) is defined using the logistic density function φs(x) = s e^(−sx)/(1 + e^(−sx))², which is the derivative of the Sigmoid function Φs(x) = (1 + e^(−sx))^(−1), i.e. φs(x) = Φ's(x). As a first-order approximation of the signed distance function f, suppose that locally the surface is tangentially approximated by a sufficiently small planar patch with its outward unit normal vector denoted as n. Now suppose p(t*) is a point on the surface S, that is, f(p(t*)) = 0. Next we will examine the value of dw/dt(t) at t = t*. The signed distance function f is modeled by an MLP that consists of 8 hidden layers with a hidden size of 256.
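The derivation leans on the identity φs = Φ's stated above. That relation is easy to verify numerically from the two closed forms given in the text; the scale s = 3 and the grid below are arbitrary choices for the check.

```python
import numpy as np

def Phi(x, s):
    """Sigmoid function Φs(x) = (1 + e^(−sx))^(−1)."""
    return 1.0 / (1.0 + np.exp(-s * x))

def phi(x, s):
    """Logistic density φs(x) = s e^(−sx) / (1 + e^(−sx))²."""
    e = np.exp(-s * x)
    return s * e / (1.0 + e) ** 2

# Check φs = Φ's by central finite differences on a grid.
s, h = 3.0, 1e-5
xs = np.linspace(-2.0, 2.0, 101)
numeric = (Phi(xs + h, s) - Phi(xs - h, s)) / (2.0 * h)
max_gap = np.abs(numeric - phi(xs, s)).max()
```

At x = 0 the density peaks at φs(0) = s/4, which is how the scale s controls how sharply the S-density concentrates around the zero level set of f.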


HUMUS-Net: Hybrid Unrolled Multi-Scale Network

Neural Information Processing Systems

The number of cascades in unrolled networks has a fundamental impact on their performance. The results are summarized in Table 3. We observe that ASR boosts the reconstruction quality of E2E-VarNet. Traditional Transformers for NLP receive a sequence of 1D token embeddings. The input to the Transformer encoder is this N × D representation, which we also refer to in the paper as the token representation, as each row in the representation corresponds to a token (in our case an image patch) in the original input.
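The N × D token representation described above is obtained by slicing the image into non-overlapping patches, flattening each one, and projecting the flattened patches linearly. A minimal sketch, with toy sizes and a random projection matrix standing in for the learned embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions, not the paper's): a 16x16 single-channel image
# split into 4x4 patches, so N = (16/4) * (16/4) = 16 tokens.
H = W = 16
P = 4                # patch size
D = 32               # embedding dimension
image = rng.normal(size=(H, W))

# Slice into non-overlapping P x P patches and flatten each to one row.
patches = (image.reshape(H // P, P, W // P, P)
                .swapaxes(1, 2)
                .reshape(-1, P * P))          # shape (N, P*P)

# A linear projection of the flattened patches yields the N x D token
# representation fed to the Transformer encoder; here E is random, whereas
# in practice it is a learned embedding matrix.
E = rng.normal(size=(P * P, D))
tokens = patches @ E                          # shape (N, D)
```

Each row of `tokens` corresponds to exactly one image patch, matching the text's description of rows as tokens.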