Deci Introduces World's Most Advanced Semantic Segmentation Models

#artificialintelligence

Deci, the deep learning company harnessing AI to build better AI, today announced a new set of industry-leading semantic segmentation models, dubbed DeciSeg. Deci's proprietary Automated Neural Architecture Construction (AutoNAC) technology automatically generated semantic segmentation models that significantly outperform the most powerful publicly available models, such as Apple's MobileViT and Google's DeepLab family, delivering more than 2x lower latency along with 3-7% higher accuracy. Semantic segmentation is one of the most widely used computer vision tasks across many business verticals, including automotive, smart cities, healthcare, and consumer applications, and is often required for edge AI applications. However, significant barriers remain to running semantic segmentation models directly on edge devices, such as high latency and models too large to deploy.


Deci's NLP Model Achieves Breakthrough Performance at MLPerf

#artificialintelligence

TEL AVIV, Israel, Sept. 8, 2022 -- Deci, the deep learning company harnessing Artificial Intelligence (AI) to build better AI, announced results for its Natural Language Processing (NLP) inference model submitted to the MLPerf Inference v2.1 benchmark suite under the open submission track. Generated by Deci's Automated Neural Architecture Construction (AutoNAC) technology, the NLP model, dubbed DeciBERT-Large, ran on Dell-PowerEdge-R7525-2 hardware using the AMD EPYC™ 7773X processor. The resulting model outperformed the throughput of the BERT-Large baseline by 6.46x while achieving a 1% boost in accuracy. The model was submitted under the offline scenario in MLPerf's open division in the BERT 99.9 category. The goal was to maximize throughput while keeping accuracy within a 0.1% margin of the baseline F1 score of 90.874.


New Tool Moves AI from the Backend to the Edge

#artificialintelligence

Artificial Intelligence is moving from the backend to the frontend, in part thanks to emerging solutions designed to optimize how AI runs. The market is still young, but Deci is one of the emerging challengers in this space. The Tel Aviv-based company aims to bring AI to "the real world." The New Stack asked co-founder Yonatan Geifman to explain what that meant. "A lot of AI is currently in the lab, for example on Kaggle on some experimentation phase, and we are trying to help people to get from the lab to the real world," Geifman said.


Deci deep-learning platform aims to ease AI application development

#artificialintelligence

Deci, a deep-learning software maker whose AI templates are designed to help create AI-based applications, today launched v2.0 of its development platform, which it claims speeds the way for developers to build, optimize and deploy computer vision models. The terms "speed" and "AI application development" are rarely used in the same sentence, but with this platform, resulting AI models can be more swiftly prepared to run on any hardware and environment, including cloud, edge and mobile, with accuracy and high runtime performance, Deci CEO and co-founder Yonatan Geifman said in a media advisory. This is because much of the grunt work has been eliminated by the series of DeciNet templates made available in the v2.0 platform. Using Deci, the company says, AI developers can achieve improved inference performance and efficiency, enabling effective deployments on resource-constrained edge devices, maximizing hardware use, and reducing training and inference costs.


New Electronics - Breakthrough deep learning performance on a CPU

#artificialintelligence

Deci's proprietary Automated Neural Architecture Construction (AutoNAC) technology automatically generated the new image classification models, which outperform all published models and deliver more than a 2x improvement in runtime, coupled with improved accuracy, compared to the most powerful models publicly available, such as Google's EfficientNets. While GPUs have traditionally been used to run convolutional neural networks (CNNs), CPUs are a much cheaper alternative. Although it is possible to run deep learning inference on CPUs, they are significantly less powerful than GPUs; as a result, deep learning models typically run 3-10x slower on a CPU than on a GPU. DeciNets significantly close that performance gap, so tasks that previously could not be carried out on a CPU because they were too resource-intensive are now possible, and those tasks will also see a marked performance improvement.
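The CPU-versus-GPU gap described above comes down to an empirical latency measurement. Below is a minimal sketch of how such a runtime comparison can be timed in plain Python; the function names and workloads are stand-ins invented for illustration, not Deci's actual benchmark harness or models:

```python
import time

def measure_latency(fn, runs=50, warmup=5):
    """Median wall-clock latency of fn() in seconds over repeated runs."""
    for _ in range(warmup):  # warmup iterations to stabilize caches
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]  # median is robust to noise spikes

# Hypothetical stand-ins: a heavier baseline model vs. a lighter
# architecture-optimized model, both running on the CPU.
def baseline_model():
    sum(i * i for i in range(200_000))

def optimized_model():
    sum(i * i for i in range(100_000))

baseline = measure_latency(baseline_model)
optimized = measure_latency(optimized_model)
speedup = baseline / optimized
print(f"speedup: {speedup:.2f}x")
```

Reporting the median of many runs, after warmup, is the usual way to quote an "Nx improvement in runtime" figure like the ones in the article.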


Deep End-to-end Causal Inference

Geffner, Tomas, Antoran, Javier, Foster, Adam, Gong, Wenbo, Ma, Chao, Kiciman, Emre, Sharma, Amit, Lamb, Angus, Kukla, Martin, Pawlowski, Nick, Allamanis, Miltiadis, Zhang, Cheng

arXiv.org Machine Learning

Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment or policy making. However, research on causal discovery and inference has evolved separately, and the combination of the two domains is not trivial. In this work, we develop Deep End-to-end Causal Inference (DECI), a single flow-based method that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground truth causal graph under mild assumptions. In addition, our method can handle heterogeneous, real-world, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Moreover, the design principle of our method can generalize beyond DECI, providing a general End-to-end Causal Inference (ECI) recipe, which enables different ECI frameworks to be built using existing methods. Our results show the superior performance of DECI when compared to relevant baselines for both causal discovery and (C)ATE estimation in over a thousand experiments on both synthetic datasets and other causal machine learning benchmark datasets.
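As a point of reference for the (C)ATE terminology in the abstract, a naive conditional average treatment effect estimate can be computed as a difference of conditional means, under the strong assumptions that the causal graph is known and there is no unobserved confounding. The toy records and `cate` function below are illustrative only; they are not DECI's flow-based method:

```python
from statistics import mean

# Hypothetical observational records: (covariate x, treatment t, outcome y).
data = [
    (0, 0, 1.0), (0, 0, 1.2), (0, 1, 2.1), (0, 1, 1.9),
    (1, 0, 3.0), (1, 0, 3.2), (1, 1, 5.1), (1, 1, 4.9),
]

def cate(records, x):
    """Naive CATE at covariate value x:
    E[y | t=1, x] - E[y | t=0, x], assuming no unobserved confounding."""
    treated = [y for xi, t, y in records if xi == x and t == 1]
    control = [y for xi, t, y in records if xi == x and t == 0]
    return mean(treated) - mean(control)

print(cate(data, 0))  # estimated effect within the x=0 subgroup
print(cate(data, 1))  # estimated effect within the x=1 subgroup
```

Methods like DECI aim to remove exactly these hand-supplied assumptions by discovering the causal graph from the observational data before estimating such effects.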


Deci snaps up $21M for tech to build better AI models based on available data and compute power – TechCrunch

#artificialintelligence

Building usable models to run AI algorithms requires not just adequate data to train systems, but also the right hardware to subsequently run them. Because the theoretical and the practical are often not the same thing, there is frequently a gap between what data scientists hope to do and what they can practically do. Today, a startup called Deci, which has built a deep learning platform to help bridge that gap by building models that work with the data and hardware available, is announcing some funding after finding strong traction for its products with Fortune 500 tech companies running mass-market, AI-based products built on video and other computer-vision-based services. The Tel Aviv-based startup has picked up a $21 million Series A, money that it will use to continue expanding its product and customer base. Insight Partners is leading the round, with previous backers Square Peg, Emerge and Jibe Ventures, alongside some new backers: Samsung Next, Vintage Investment Partners, and Fort Ross Ventures.


Intel works with Deci to speed up machine learning on its chips

#artificialintelligence

Intel today announced a strategic business and technology collaboration with Deci to optimize machine learning on the former's processors. Deci says that in the coming weeks, it will work with Intel to deploy "innovative AI technologies" to the companies' mutual customers. Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington's Grover fake news detection model cost $25,000 to train in about two weeks.


Intel hooks up with Deci for deep learning

#artificialintelligence

As one of the first companies to participate in the Intel Ignite startup accelerator, Deci will now work with Intel to deploy innovative AI technologies to mutual customers. The collaboration helps enable deep learning inference at scale on Intel CPUs, reducing costs and latency and enabling new applications of deep learning inference. New deep learning tasks can be performed in real time on edge devices, and companies running large-scale inference scenarios can dramatically cut cloud or datacenter costs simply by changing the inference hardware from a GPU to an Intel CPU. "By optimizing the AI models that run on Intel's hardware, Deci enables customers to get even more speed and will allow for cost-effective and more general deep learning use cases on Intel CPUs," says Deci CEO and co-founder Yonatan Geifman. Deci and Intel's collaboration began with MLPerf, where, on several Intel CPUs, Deci's AutoNAC (Automated Neural Architecture Construction) technology accelerated the inference speed of the well-known ResNet-50 neural network, reducing the submitted models' latency by a factor of up to 11.8x and increasing throughput by up to 11x.


Global Big Data Conference

#artificialintelligence

Deep learning startup Deci today announced that it raised $9.1 million in a seed funding round led by Israel-based Emerge. According to a spokesperson, the company plans to devote the proceeds to customer acquisition efforts as it expands its Tel Aviv workforce.