Interview with Yezi Liu: Trustworthy and efficient machine learning
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Yezi Liu is working on trustworthy and efficient machine learning. We asked her about her research to date, what she has found particularly interesting, her plans for future work, and what it was that inspired her to study AI.
Tell us a bit about your PhD - where are you studying, and what is the topic of your research?
My research focuses on trustworthy machine learning, with particular emphasis on graph neural networks as well as trustworthy and efficient large language models.
A hybrid framework for effective and efficient machine unlearning
Li, Mingxin, Yu, Yizhen, Wang, Ning, Wang, Zhigang, Wang, Xiaodong, Qu, Haipeng, Xu, Jia, Su, Shen, Yin, Zhichao
Recently, machine unlearning (MU) has been proposed to remove the imprints of revoked samples from already trained model parameters, addressing users' privacy concerns. Apart from retraining from scratch, which is expensive at runtime, two research lines exist: exact MU and approximate MU, with different trade-offs between accuracy and efficiency. In this paper, we present a novel hybrid strategy built on top of both to achieve overall success. It performs the unlearning operation at an acceptable computational cost while improving accuracy as much as possible. Specifically, it selects a suitable unlearning technique by estimating the retraining workload caused by revocations. If the workload is lightweight, it performs retraining to derive model parameters consistent with the exact ones retrained from scratch. Otherwise, it outputs the unlearned model by directly modifying the current parameters, for better efficiency. To improve accuracy in the latter case, we propose an optimized version that amends the output model with a lightweight runtime penalty. We particularly study the boundary between the two approaches in our framework to adaptively make the selection. Extensive experiments on real datasets validate that our proposals improve unlearning efficiency by 1.5$\times$ to 8$\times$ while achieving comparable accuracy.
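The hybrid selection the abstract describes can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: the "model" is just the mean of the training samples so both branches can be shown concretely, and the workload estimate, threshold, and update rules are all illustrative assumptions.

```python
# Toy sketch of a hybrid exact/approximate unlearning selector.
# The model here is simply the mean of the training samples; the cost
# proxy and threshold are illustrative assumptions, not the paper's.

def retrain_from_scratch(samples):
    # Exact branch: rebuild the model from the retained data only.
    return sum(samples) / len(samples)

def approximate_update(model, samples, revoked):
    # Approximate branch: edit the current parameters directly by
    # removing the revoked samples' contribution from the running mean.
    n = len(samples)
    removed = sum(samples[i] for i in revoked)
    return (model * n - removed) / (n - len(revoked))

def hybrid_unlearn(model, samples, revoked, threshold=0.2):
    # Estimate the retraining workload as the fraction of data revoked,
    # then choose the branch: exact retraining when the revocation is
    # cheap, direct parameter modification otherwise.
    cost = len(revoked) / len(samples)
    if cost <= threshold:
        kept = [x for i, x in enumerate(samples) if i not in revoked]
        return retrain_from_scratch(kept)
    return approximate_update(model, samples, revoked)

data = [1.0, 2.0, 3.0, 4.0, 5.0]
model = sum(data) / len(data)            # "trained" model: mean = 3.0
print(hybrid_unlearn(model, data, {0}))  # light workload -> exact retrain
```

For a single-sample revocation the cost falls under the threshold and the exact branch fires; revoking two of the five samples tips the estimate over the threshold and the approximate branch edits the parameters in place instead.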
Comet AI nabs $4.5M for more efficient machine learning model management
As we get further along in the new way of working, the new normal if you will, finding more efficient ways to do just about everything is becoming paramount for companies looking at buying new software services. To that end, Comet AI announced a $4.5 million investment today as it tries to build a more efficient machine learning platform. The money came from existing investors Trilogy Equity Partners, Two Sigma Ventures and Founder's Co-op. Today's investment comes on top of an earlier $2.3 million seed. "We provide a self-hosted and cloud-based meta machine learning platform, and we work with data science AI engineering teams to manage their work to try and explain and optimize their experiments and models," company co-founder and CEO Gideon Mendels told TechCrunch.
Will automation and AI give us four-day weekends – or simply leave us without jobs?
In 1900 many people worked in dreadful conditions, doing repetitive and tedious jobs. The streets were full of horses and carts. Life expectancy for someone born that year was just 41. Wind forward to 1962 and working conditions had greatly improved. The streets were full of cars and trucks and the jet age had begun. Life expectancy had nearly doubled, to 71.
More efficient machine learning could upend the AI paradigm
In January, Google launched a new service called Cloud AutoML, which can automate some tricky aspects of designing machine-learning software. While working on this project, the company's researchers sometimes needed to run as many as 800 graphics chips in unison to train their powerful algorithms. Unlike humans, who can recognize coffee cups from seeing one or two examples, AI networks based on simulated neurons need to see tens of thousands of examples in order to identify an object. Imagine trying to learn to recognize every item in your environment that way, and you begin to understand why AI software requires so much computing power. If researchers could design neural networks that could be trained to do certain tasks using only a handful of examples, it would "upend the whole paradigm," Charles Bergan, vice president of engineering at Qualcomm, told the crowd at MIT Technology Review's EmTech China conference earlier this week.
Phones don't need an NPU to benefit from machine learning
Neural networks and machine learning are some of this year's biggest buzzwords in the world of smartphone processors. Huawei's HiSilicon Kirin 970, Apple's A11 Bionic, and the image processing unit (IPU) inside the Google Pixel 2 all boast dedicated hardware support for this emerging technology. The trend so far has suggested that machine learning requires a dedicated piece of hardware, like a Neural Processing Unit (NPU), IPU, or "Neural Engine", as Apple would call it. However, the reality is these are all just fancy words for custom digital signal processors (DSPs): hardware specialized in performing complex mathematical functions quickly. Today's latest custom silicon has been specifically optimized around machine learning and neural network operations, the most common of which include dot-product math and matrix multiplication.
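To see why dot products and matrix multiplies dominate, note that a single dense neural-network layer is exactly that: each output neuron is one weight row dotted with the input vector, plus a bias. The toy sketch below (with arbitrary made-up numbers) shows the computation those accelerators are built to speed up.

```python
# A dense layer is a matrix multiply plus a bias: each output neuron
# is the dot product of one weight row with the input vector.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dense_layer(weights, bias, inputs):
    # One dot product per output neuron -- this inner loop is what
    # NPUs, IPUs, and DSPs are specialized to accelerate.
    return [dot(row, inputs) + b for row, b in zip(weights, bias)]

W = [[0.5, -1.0], [2.0, 0.25]]   # 2x2 weight matrix
b = [0.5, -0.5]                  # bias per output neuron
x = [1.0, 2.0]                   # input vector
print(dense_layer(W, b, x))      # -> [-1.0, 2.0]
```

A real network stacks many such layers with nonlinearities in between, so inference cost is overwhelmingly these multiply-accumulate operations, which is why generic DSP-style hardware handles them well.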