
How To Make Sure Your Robot Doesn't Drop Your Wine Glass

#artificialintelligence

From microelectronics to mechanics and machine learning, modern robots are a marvel of multiple engineering disciplines. They use sensors, image processing, and reinforcement-learning algorithms to move objects and to navigate around obstacles. However, this breaks down when it comes to handling objects made of glass. Glass is transparent and reflects light non-uniformly, which makes it difficult for the sensors mounted on a robot to gather the depth information needed for even a simple pick-and-place operation. To address this problem, researchers at Google AI, together with Synthesis AI and Columbia University, devised a novel machine-learning algorithm called ClearGrasp that can estimate accurate 3D data for transparent objects from RGB-D images.
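The sensing failure described above is concrete: commodity depth cameras typically report zero or NaN depth where they get no valid return, which is exactly what happens on transparent surfaces. The illustrative sketch below (not ClearGrasp's actual code; the function name and thresholds are hypothetical) shows how such "holes" in a depth image can be located so a learned model could later fill them in:

```python
import numpy as np

def invalid_depth_mask(depth, min_valid=0.1, max_valid=10.0):
    """Boolean mask of pixels where depth (in meters) is missing or out of range.

    Depth sensors commonly emit 0 or NaN on transparent or highly
    reflective surfaces; those pixels are flagged here as holes.
    """
    return ~np.isfinite(depth) | (depth < min_valid) | (depth > max_valid)

# Toy 2x3 depth map: a glass region yields zero depth, one pixel is NaN.
depth = np.array([[1.2, 0.0, 2.5],
                  [1.3, 0.0, np.nan]])
mask = invalid_depth_mask(depth)
# mask flags the two zero pixels and the NaN pixel as holes.
```

A pipeline in this spirit would then pass the RGB image plus the masked depth to a network that predicts plausible depth for the flagged region.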


Google researchers release audit framework to close AI accountability gap

#artificialintelligence

Researchers associated with Google and the Partnership on AI have created a framework to help companies and their engineering teams audit AI systems before deploying them. The framework, intended to add a layer of quality assurance for businesses launching AI, translates into practice the values often espoused in AI ethics principles and tackles an accountability gap the authors say exists in AI today. The work, titled "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing," is one of a handful of outstanding AI ethics research papers accepted for publication as part of the Fairness, Accountability, and Transparency (FAT) conference, which takes place this week in Barcelona, Spain. "The proposed auditing framework is intended to contribute to closing the development and deployment accountability gap of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity," the paper reads. "At a minimum, the internal audit process should enable critical reflections on the potential impact of a system, serving as internal education and training on ethical awareness in addition to leaving what we refer to as a 'transparency trail' of documentation at each step of the development cycle."


15 Artificial Intelligence Books You Should Read

#artificialintelligence

This year, we've seen business leaders and technologists take an increasing interest in artificial intelligence. Many businesses are integrating artificial intelligence into their workflows, and many are disrupting their industries with applications of AI that not only drive change but also attract different types of customers. Yet leaders, technologists, programmers, and workers of every profession often misunderstand artificial intelligence and how it is applied today. Countries like China apply AI in ways that alter the structure of their society.


Reflections on 2019 in Technology Law, and a Peek into 2020 New Media and Technology Law Blog

#artificialintelligence

It is that time of year when we look back to see what tech-law issues took up most of our time this year and look ahead to see what the emerging issues are for 2020. Data presented a wide variety of challenging legal issues in 2019. Data is solidly entrenched as a key asset in our economy, and as a result, the issues around it demanded a significant level of attention. I am not going out on a limb in saying that 2020 and beyond promise many interesting developments in "big data," privacy and data security. Social media platforms experienced an interesting year.


Deep Learning is UnAmerican

#artificialintelligence

The fundamental principle of Deep Learning is that truth, maybe even Truth, lies in a big puddle of unparsed data, and that a sufficiently speedy computer that can zip through and turn that data into other data composed of relationships will eventuate into some reflection of this t/Truth. It seems to make enough sense on the face of it, but nobody has asked the question: Can human-sized truths be extracted from planet-sized datasets? Or: What is the relationship between the dataset and the individual? In the currently fashionable model, the individual does not exist except as, at best, a cluster of datapoints, and then only in and as a relationship. All well and good, maybe even quite metaphysically Buddhist, but since when did we have Buddhist metaphysician computers influencing public thinking?


Do we really need to talk so much about the future of work?

#artificialintelligence

There has never been so much talk about the future of work. We live in an age of technological acceleration, and never has so much changed in so short a time. Technology greatly influences the way we live. Today we can order hot food, just made at our favorite restaurant, and have it delivered to our doorstep. We can hail a private ride through an app.


Reflections on NeurIPS 2019

#artificialintelligence

There is a huge push among the researchers here for accountability. I was presenting a poster on "Objective Mismatch in Model-based Reinforcement Learning" at the Deep RL Workshop, and the crowd was very receptive to the idea that some of our underlying assumptions about how RL works may be flawed. I also happened to be presenting my poster next to a researcher at Google pushing for more reliability metrics for RL algorithms. In other words: how consistent is a method's performance across environments and random seeds when a paper claims a new "state of the art"? This kind of realistic robustness may be the key to making these algorithms more useful in real applications (such as robotics, which I will always bring up as a great interpretable platform for RL).
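The reliability point above can be made concrete with a minimal sketch (the function and the numbers are hypothetical, not from the Google work): instead of reporting only the best run, summarize final returns across seeds with the median and the interquartile range, so dispersion across runs is visible alongside peak performance.

```python
import numpy as np

def seed_robustness_summary(returns_per_seed):
    """Summarize an RL algorithm's final returns across random seeds.

    Reports the median (typical performance), the interquartile range
    (dispersion across runs), and the best seed -- so a lucky run
    cannot masquerade as consistent performance.
    """
    r = np.asarray(returns_per_seed, dtype=float)
    q25, median, q75 = np.percentile(r, [25, 50, 75])
    return {"median": median, "iqr": q75 - q25, "best": r.max()}

# Two hypothetical algorithms with the same best seed but very
# different consistency across 8 seeds:
stable = seed_robustness_summary([90, 92, 91, 89, 93, 90, 92, 95])
unstable = seed_robustness_summary([20, 95, 30, 88, 15, 70, 40, 95])
```

Both algorithms can claim "95" as a headline number; only the seed-level summary reveals that one of them reaches it reliably.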


To secure a safer future for AI, we need the benefit of a female perspective John Naughton

#artificialintelligence

Everybody knows (or should know) by now that machine learning (which is what most current artificial intelligence actually amounts to) is subject to bias. Last week, the New York Times had the idea of asking three prominent experts in the field to talk about the bias problem, in particular the ways that social bias can be reflected and amplified in dangerous ways by the technology to discriminate against, or otherwise damage, certain social groups. At first sight, the resulting article looked like a run-of-the-mill review of what has become a common topic – except for one thing: the three experts were all women. One, Daphne Koller, is a co-founder of the online education company Coursera; another, Olga Russakovsky, is a Princeton professor who is working to reduce bias in ImageNet, the data set that powered the current machine-learning boom; the third, Timnit Gebru, is a research scientist at Google in the company's ethical AI team. Reading the observations of these three women brought to the surface a thought that's been lurking at the back of my mind for years.


Robot Affect: the Amygdala as Bloch Sphere

arXiv.org Artificial Intelligence

In the design of artificially sentient robots, an obstacle always has been that conventional computers cannot really process information in parallel, whereas the human affective system is capable of producing experiences of emotional concurrency (e.g., happy and sad). Another schism that has been in the way is the persistent Cartesian divide between cognition and affect, whereas people easily can reflect on their emotions or have feelings about a thought. As an essentially theoretical exercise, we posit that quantum physics at the basis of neurology explains observations in cognitive emotion psychology from the belief that the construct of reality is partially imagined (Im) in the complex coordinate space C^3. We propose a quantum computational account to mixed states of reflection and affect, while transforming known psychological dimensions into the actual quantum dynamics of electromotive forces. As a precursor to actual simulations, we show examples of possible robot behaviors, using Einstein-Podolsky-Rosen circuits. Keywords: emotion, reflection, modelling, quantum computing
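The "mixed states" of reflection and affect that the abstract invokes rest on the textbook Bloch-sphere picture of a qubit, which is standard quantum computing rather than anything specific to this paper; identifying the basis states with opposed affects (e.g. "happy" and "sad") is the paper's analogy:

```latex
% Standard Bloch-sphere parametrization of a pure qubit state.
|\psi\rangle \;=\; \cos\tfrac{\theta}{2}\,|0\rangle
  \;+\; e^{i\varphi}\sin\tfrac{\theta}{2}\,|1\rangle,
\qquad 0 \le \theta \le \pi,\quad 0 \le \varphi < 2\pi
```

For any \(\theta\) strictly between \(0\) and \(\pi\), the state carries nonzero amplitude on both basis states at once, which is the formal hook for the emotional concurrency (happy *and* sad) that a classical bit cannot express.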