Collaborating Authors

 dery



Deep Model Reassembly

Neural Information Processing Systems

In this paper, we explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both hardware resource and performance constraints. The ambitious nature of DeRy inevitably imposes significant challenges, including, in the first place, the feasibility of its solution. We strive to showcase that, through the dedicated paradigm proposed in this paper, DeRy can be made not only possible but practically efficient. Specifically, we conduct the partitions of all pre-trained networks jointly via a cover set optimization, and derive a number of equivalence sets, within each of which the network blocks are treated as functionally equivalent and hence interchangeable. The equivalence sets learned in this way, in turn, enable picking and assembling blocks to customize networks subject to certain constraints, which is achieved by solving an integer program backed by a training-free proxy that estimates task performance. The reassembled models deliver gratifying performance while satisfying the user-specified constraints. We demonstrate that on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning, which can be further elevated to 83.2% with end-to-end fine-tuning. Our code is available at https://github.com/Adamdad/DeRy.
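The block-selection step the abstract describes — pick one block from each equivalence set to maximize a training-free proxy score under a hardware budget — can be illustrated with a toy sketch. All block names, parameter counts, and proxy scores below are hypothetical, and exhaustive search stands in for the integer-program solver the paper refers to:

```python
from itertools import product

# Hypothetical equivalence sets: each holds interchangeable blocks,
# annotated with a parameter count and a training-free proxy score.
equivalence_sets = [
    [{"name": "A1", "params": 5, "score": 0.7}, {"name": "A2", "params": 9, "score": 0.9}],
    [{"name": "B1", "params": 4, "score": 0.6}, {"name": "B2", "params": 7, "score": 0.8}],
    [{"name": "C1", "params": 3, "score": 0.5}, {"name": "C2", "params": 6, "score": 0.9}],
]

def reassemble(budget):
    """Choose one block per equivalence set, maximizing the summed proxy
    score subject to a total parameter budget (exhaustive search)."""
    best, best_score = None, float("-inf")
    for combo in product(*equivalence_sets):
        params = sum(b["params"] for b in combo)
        score = sum(b["score"] for b in combo)
        if params <= budget and score > best_score:
            best, best_score = [b["name"] for b in combo], score
    return best, best_score

# Report the best selection under a budget of 20 (arbitrary units).
print(reassemble(budget=20))
```

In practice the search space is far too large for enumeration, which is why DeRy casts the selection as an integer program; the proxy score replaces expensive training-based evaluation of each candidate network.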



Deep Model Reassembly

Yang, Xingyi, Zhou, Daquan, Liu, Songhua, Ye, Jingwen, Wang, Xinchao

arXiv.org Artificial Intelligence

In this paper, we explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both hardware resource and performance constraints. The ambitious nature of DeRy inevitably imposes significant challenges, including, in the first place, the feasibility of its solution. We strive to showcase that, through the dedicated paradigm proposed in this paper, DeRy can be made not only possible but practically efficient. Specifically, we conduct the partitions of all pre-trained networks jointly via a cover set optimization, and derive a number of equivalence sets, within each of which the network blocks are treated as functionally equivalent and hence interchangeable. The equivalence sets learned in this way, in turn, enable picking and assembling blocks to customize networks subject to certain constraints, which is achieved by solving an integer program backed by a training-free proxy that estimates task performance. The reassembled models deliver gratifying performance while satisfying the user-specified constraints. We demonstrate that on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning, which can be further elevated to 83.2% with end-to-end training. Our code is available at https://github.com/Adamdad/DeRy


How to Fight Employee Burnout: Let AI Automate Dreaded HR and IT Tasks

#artificialintelligence

We've all had those days where getting the simplest thing fixed, or even a basic question answered, takes hours. It might be anything from a mysterious glitch in your desktop computer, to a query about what's covered by your employer's health insurance plan. Whatever your dilemma, resolving it takes a seemingly endless exchange of emails and voicemails that not only distracts you from your real work, but wrecks your mood, too. If it seems like you're slogging through more of those frustrating days lately than ever before, you're not imagining it. As work keeps getting more complex, fast-paced, and demanding, time vampires have a maddening way of multiplying.



The Morning After: Motorola's foldable phone plans

Engadget

Add one more name to the folding phone fight card: Motorola. Also, we reviewed Sony's new Aibo and the 2019 Acura RDX. Finally, the rise of the robots includes some autonomous gear from FedEx and Boeing. Here's an idea for a name: MPx. Motorola is making a foldable phone, too. Motorola VP of Global Product Dan Dery told Engadget: "We started to work on foldables a long time ago... and we have been doing a lot of iteration." According to Dery, Motorola has "no intention of coming later than everybody else in the market," and considering the upcoming launch dates for the Samsung Galaxy Fold (in April) and Huawei's Mate X (in mid-2019), it seems safe to assume we're looking at a Motorola launch by summer.


Motorola's Alexa mod is just the start of an important AI plan

Engadget

Motorola might have lured people to its MWC press conference with the promise of new phones, but the real talking point came toward the end of the event. After hyping a pair of mid-range devices and some fun Moto Mod concepts, the company confirmed that it's working with Amazon to bring Alexa to Moto phones. While the first steps of Motorola's Alexa partnership are now well known, it's the stuff that Motorola later told Engadget about its plans that seems most exciting. Let's start from the beginning. Our first taste of this Amazon/Motorola mash-up will come in the form of an Alexa-powered Moto Mod.