Asia Pacific Assistive Robotics Association (APARA), a non-profit organization founded to facilitate the adoption and augmentation of Artificial Intelligence (AI) and Robotics, today announced that AIBotics Go-Digital Series 2020, an AI and Robotics event themed around 'Augmenting the Human Potential', will run from August to November 2020, with the aim of promoting the use of AI technology to improve human lives. An international event endorsed and supported by the International Alliance of Robotics Associations (IARA) and a number of global and regional partners, including the University of Oxford, the ASEAN Smart Cities Network, the Japan Science & Technology Agency and the Malaysian Artificial Intelligence and Robotics Association, AIBotics Go-Digital Series 2020 reviews ethical and responsible AI and robotics innovations through webinars and a virtual exhibition running 24 hours a day, seven days a week for four months from August, bringing together renowned industry experts as well as projects and innovative solutions from around the world. To enable a smart, seamless and sustainable digital conferencing experience, APARA is collaborating with Tencent Cloud, the official conferencing solution provider of AIBotics Go-Digital Series 2020, to bring visitors and delegates a series of power-packed webinars and a virtual exhibition through Tencent Cloud Conference (TCC) solutions, which have been widely adopted by local and overseas organizations and enterprises for online and digital business conferences, annual meetings, road shows, lectures and industry forums, among others. "As we adjust to the 'new normal' brought about by the COVID-19 pandemic, AI has also become much more mainstream while allowing gatherings and business meetings to be held amid current circumstances.
We are excited to present AIBotics Go-Digital Series 2020, highlighting how AI and Robotics can truly augment human potential, which is a timely message in light of the virus-related disruptions globally," said Shanlynn Lee, President of APARA.
In a bid to make transformer models even better for real-world applications, researchers from Google, the University of Cambridge, DeepMind and the Alan Turing Institute have proposed a new transformer architecture called "Performer", based on what they call fast attention via orthogonal random features (FAVOR). First proposed in 2017, and believed at the time to be particularly well suited to language understanding tasks, the transformer is a neural network architecture based on a self-attention mechanism. To date, in addition to achieving state-of-the-art (SOTA) performance in Natural Language Processing and Neural Machine Translation tasks, transformer models have also performed well across other machine learning (ML) tasks such as document generation/summarization, time series prediction, image generation, and the analysis of biological sequences. Neural networks usually process language by generating fixed- or variable-length vector-space representations. A transformer, however, performs only a small, constant number of steps: in each step, it applies a self-attention mechanism that can directly model relationships between all words in a sentence, regardless of their respective positions.
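The self-attention step described above can be sketched in a few lines of NumPy. This is an illustrative single-head example of standard attention, not the Performer's FAVOR approximation; all weight matrices here are random placeholders:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every token attends to every other token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    # Pairwise scores between all positions, regardless of distance in the sentence
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 5, 8  # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one updated representation per token
```

Note that the `Q @ K.T` score matrix is quadratic in sequence length; reducing that cost is precisely what FAVOR's random-feature approximation targets.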
The Information Commissioner's Office (ICO) has published an 80-page guidance document for companies and other organisations about using artificial intelligence (AI) in line with data protection principles. The guidance is the culmination of two years' research and consultation by Reuben Binns, an associate professor in the department of Computer Science at the University of Oxford, and the ICO's AI team. The guidance covers what the ICO thinks is "best practice for data protection-compliant AI, as well as how we interpret data protection law as it applies to AI systems that process personal data. The guidance is not a statutory code. It contains advice on how to interpret relevant law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate".
A professor of mathematics at the University of Oxford, doubling as a philosopher of science and religion, John Lennox has some pretty unique insights to put forward when it comes to the future of artificial intelligence. The title of his new book, ambitiously named 2084: Artificial Intelligence and the Future of Humanity, certainly suggests a post-Orwellian vision of dystopia, complete with an algorithmic Big Brother and an army of bio-engineered super-humans. And similar predictions have already been made by other influential academics, too. Yuval Noah Harari, in his bestselling book Homo Deus, for example, anticipates that technological developments will lead to humans enhancing themselves with abilities like eternal life. But far from portraying an Ex-Machina-esque scenario, in which our AI creations would take over the world and fundamentally change human nature, Lennox warns that the dangers of AI are more imminent. "If creating an AI that surpasses humans were to happen, of course it would be a threat," Lennox tells ZDNet. "But there are major dangers long before then, and these dangers are actually happening now. I think it is misleading to tell people about the problems that will come in the future – it's what's happening now that demands an ethical and moral response."
Sometimes it's tempting to think of every technological advancement as the brave first step on new shores, a fresh chance to shape the future rationally. In reality, every new tool enters the same old world with its same unresolved issues. In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind -- the AI lab and sister company to Google -- and the University of Oxford presents a vision to "decolonize" artificial intelligence. The aim is to keep society's ugly prejudices from being reproduced and amplified by today's powerful machine learning systems. The paper, published this month in the journal Philosophy & Technology, has at heart the idea that you have to understand historical context to understand why technology can be biased.
Once again, artificial intelligence shows its potential for sifting through massive amounts of medical test data to deliver actionable results, this time with COVID-19 screening in hospitals and emergency departments. We've written numerous posts about AI applications in medicine, often for diagnostics. For example, we covered AI assisting with autism spectrum disorder diagnosis at the University of California Davis and Google Health's success with AI deep learning to improve breast cancer detection. A group of researchers from Oxford University and Harvard University developed two AI models for COVID-19 early detection using routinely collected data in hospital emergency departments (EDs) and hospital admissions. The research is available as a preprint on medRxiv and bioRxiv.
"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
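The technique described, constraining the model by imitating the desired output and writing its first words yourself, amounts to careful string construction before the text ever reaches a completion API. A minimal sketch follows; the "I rephrased it for him" continuation and the example passage are illustrative assumptions, and the resulting string would be sent to whatever text-completion endpoint one is using:

```python
def build_prompt(passage: str, target_prefix: str) -> str:
    """Constrain a completion model by imitating the desired output format
    and writing the first words of the target output ourselves, so the
    model continues mid-sentence instead of pivoting to another mode."""
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"{passage}"\n\n'
        "I rephrased it for him, in plain language a second grader can understand:\n\n"
        f'"{target_prefix}'  # left open: the model completes from here
    )

prompt = build_prompt(
    "Photosynthesis converts light energy into chemical energy.",
    "Plants use sunlight to",
)
print(prompt.endswith("Plants use sunlight to"))  # True
```

Because the prompt ends mid-sentence with the target prefix, the most likely continuation is the summary itself rather than a story, a Q&A, or some other completion mode.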
Jaguar Land Rover envisions a not-too-distant future where its cars have touchless monitors to "reduce the spread of bacteria." The automaker teamed up with Cambridge University and came up with "touchless touchscreens," or an AI-powered gesture-reading system in the wake of the pandemic. The technology, called "Predictive Touch", responds by guessing where a driver intends to tap a screen without their fingers making contact with the surface. The theory is that it would enable motorists to pay more attention to the road and less attention to buttons and glass monitors while driving. "In the 'new normal' once lockdowns around the world are lifted, a greater emphasis will be placed on safe, clean mobility where personal space and hygiene will carry premiums," the British automaker said in a press release.
A student has designed a handheld 'robotic guide dog' to help support people with visual impairments who are unable to house a real assistance animal. Loughborough University design engineer Anthony Camu was inspired to develop the device by responsive virtual reality gaming controllers. Dubbed 'Theia' -- after the Titan goddess of light in Greek mythology -- the prototype can replicate the key functions of a real guide dog. The voice-activated device can program quick and safe routes to given destinations using real-time online data -- much like a car's satnav -- and onboard sensors. Force feedback delivered through Theia's handle then helps direct the user -- creating a sensation the designers say is similar to the pull of a guide dog's leash.