Multi-step learning and underlying structure in statistical models

Neural Information Processing Systems

In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations "more and more suited" to the final learning task. A related principle arises in transfer learning, where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semi-supervised learning (SSL) with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al., 2008; Urner et al., 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a "compatibility function" to link the concept class and the unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter's framework, by defining a learning problem generatively as a joint statistical model on $X \times Y$.
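The two-step SSL pattern the abstract describes can be sketched in a toy setting. This is a minimal illustration, not taken from the paper: it assumes synthetic 1-D data from two clusters, uses the unlabeled sample only to learn a representation (standardization by the unlabeled mean and standard deviation), and then fits a supervised nearest-centroid threshold on a handful of labeled points in that representation.

```python
# Two-step (semi-supervised) learning sketch on synthetic 1-D data.
# Step 1 (unsupervised): learn a representation from unlabeled data.
# Step 2 (supervised): fit a simple classifier in that representation.
import random
import statistics

random.seed(0)

# Unlabeled sample drawn from a two-cluster mixture on X.
unlabeled = [random.gauss(-2, 0.5) for _ in range(200)] + \
            [random.gauss(+2, 0.5) for _ in range(200)]

# Step 1: the representation is standardization by unlabeled statistics.
mu = statistics.fmean(unlabeled)
sigma = statistics.stdev(unlabeled)

def transform(x):
    return (x - mu) / sigma

# Step 2: a few labeled examples (x, y) suffice to place a threshold
# between the class centroids, expressed in the learned representation.
labeled = [(-2.1, 0), (-1.8, 0), (2.2, 1), (1.9, 1)]
centroid0 = statistics.fmean(transform(x) for x, y in labeled if y == 0)
centroid1 = statistics.fmean(transform(x) for x, y in labeled if y == 1)
threshold = (centroid0 + centroid1) / 2

def predict(x):
    return int(transform(x) > threshold)

print(predict(-2.5), predict(2.5))
```

The point of the sketch is the division of labor: the unsupervised step fixes the representation using only the marginal distribution on $X$, so the supervised step needs very few labeled points. All names here (`transform`, `predict`, the cluster parameters) are illustrative assumptions.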

Scientists create 'Baxter' the robot who can assist the elderly amid a shortage of nurses

Daily Mail - Science & tech

Scientists have created a robot that may be able to help the elderly perform tasks amid a shortage of nurses in the UK. Named Baxter, it has two arms and 3D-printed 'fingers', allowing it to step in when a person is struggling with things such as getting dressed. Artificial intelligence allows the robot to detect when assistance is needed and to learn about its owner's difficulties over time. Once it is ready for use in healthcare settings, it could help free up staff time for other work. There are around 40,000 nurse vacancies in NHS England, a figure expected to double after Brexit.

How AI companies can avoid ethics washing


One of the essential phrases necessary to understand AI in 2019 has to be "ethics washing." Put simply, ethics washing -- also called "ethics theater" -- is the practice of fabricating or exaggerating a company's interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes "AI for good" initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other. Accusations of ethics washing have been lobbed at the biggest AI companies in the world, as well as startups. The most high-profile example this year may have been Google's external AI ethics panel, which devolved into a PR nightmare and was disbanded after about a week.

There's still time to prevent biased AI from taking over the world


Mobile maps route us through traffic, algorithms can now pilot automobiles, virtual assistants help us smoothly toggle between work and life, and smart code is adept at surfacing our next favorite song. But AI could prove dangerous, too. Tesla CEO Elon Musk once warned that biased, unmonitored and unregulated AI could be the "greatest risk we face as a civilization." AI experts, however, are more immediately concerned that automated systems will absorb bias from their human programmers. And once bias is coded into the algorithms that power AI, it will be nearly impossible to remove.

Will Artificial Intelligence (AI) Steal Our Jobs? GetSmarter Blog


As artificial intelligence develops and disrupts more industries, more working professionals are becoming increasingly concerned about its implications for the future of work. According to a Pew Research Center survey completed in 2017, 72% of Americans fear AI technology is capable of replacing jobs, with 25% feeling exceptionally worried. The industries most at risk are predicted to be science, healthcare, security, farming, construction, transport, and banking. While it's speculated AI will take over 1.8 million human jobs by the year 2020, the technology is also expected to create 2.3 million new kinds of jobs, many of which will involve collaboration between humans and AI. Research shows artificial intelligence is capable of performing several tasks better than humans in specific occupations, but it's not capable of performing all tasks required for a job better than humans. In other words, most jobs will be affected by AI, but in such a way that a partnership is formed between humans and machines, a more powerful alliance than either working individually. What will this look like?

Video Friday: Package Delivery by Robot, and More

IEEE Spectrum Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. Using machine-learning and sensory hardware, Alberto Rodriguez, assistant professor of mechanical engineering, and members of MIT's MCube lab have developed a robot that is learning how to play the game Jenga. The technology could be used in robots for manufacturing assembly lines.

Executing Better By Listening To Public Discontent


In the wake of the closure of Apple's autonomous car division (Project Titan) this week, one questions whether Steve Jobs' axiom still holds true. "Some people say, 'Give the customers what they want.' Our job is to figure out what they're going to want before they do," declared Jobs, continuing with an analogy: "I think Henry Ford once said, 'If I'd asked customers what they wanted, they would have told me, a faster horse!'" Titan joins a growing graveyard of autonomous innovations, filled with the tombstones of Baxter, Jibo, Kuri and many broken quadcopters. If anything holds true, it is that not every founder is a Steve Jobs or a Henry Ford, and listening to public sentiment could be a bellwether for success. Adam Jonas of Morgan Stanley announced on January 9, 2019 from the Consumer Electronics Show (CES) floor, "It's official. It's the timing… the telemetry of adoption for L5 cars without safety drivers expected by many investors may be too aggressive by a decade… possibly decades."

Watch an AI robot program itself to, er, pick things up and push them around


Robots normally need to be programmed in order to get them to perform a particular task, but they can be coaxed into writing the instructions themselves with the help of machine learning, according to research published in Science. Engineers at Vicarious AI, a robotics startup based in California, have built what they call a "visual cognitive computer" (VCC), a software platform connected to a camera system and a robot gripper. Given a set of visual cues, the VCC writes a short program of instructions to be followed by the robot so it knows how to move its gripper to do simple tasks. "Humans are good at inferring the concepts conveyed in a pair of images and then applying them in a completely different setting," the paper states. "The human-inferred concepts are at a sufficiently high level to be effortlessly applied in situations that look very different, a capacity so natural that it is used by IKEA and LEGO to make language-independent assembly instructions."

The Year in Robots (2018): Boston Dynamics, Baxter, and More


Depending on your perspective, 2018 either brought us closer to salvation by way of robots, or closer to doom by way of robots: Where some see the end of meaningless work, others see the end of humanity, also meaningless. Whatever your biases toward the machines, this year has been a big one for the field of robotics, which continues to roll around joyously in the convergence of falling prices, better software and hardware, and skyrocketing demand from industry. Given that it's That Time of Year again, we've collected a list of the biggest moments in robotics in 2018, from the continued ascendance of Boston Dynamics' SpotMini quadruped to the rapid rise and fall of the home robot. Taking a quick break from uploading videos of its humanoid robot Atlas doing backflips, Boston Dynamics announced that one of its machines, the four-legged SpotMini, will finally go on sale in 2019. The question now becomes: What do you do with a robot that can fight off stick-wielding humans?

Why Australia is quickly developing a technology-based human rights problem


Artificial intelligence (AI) might be technology's Holy Grail, but Australia's Human Rights Commissioner Edward Santow has warned about the need for responsible innovation and an understanding of the challenges new technology poses for basic human rights. "AI is enabling breakthroughs right now: Healthcare, robotics, and manufacturing; pretty soon we're told AI will bring us everything from the perfect dating algorithm to interstellar travel -- it's easy, in other words, to get carried away, yet we should remember AI is still in its infancy," Santow told the Human Rights & Technology conference in Sydney in July. Santow was launching the Human Rights and Technology Issues Paper, described as the beginning of a major project by the Human Rights Commission to protect the rights of Australians in a new era of technological change. The paper [PDF] poses questions centred on what protections are needed when AI is used in decisions that affect people's basic rights. It also asks what is required from lawmakers, governments, researchers, developers, and tech companies big and small. Pointing to Microsoft's AI Twitter bot Tay, which in March 2016 showed the ugly side of humanity -- at least as present on social media -- Santow said it is a key example of the need to get AI right before it is unleashed on humans.