The idea of artificial general intelligence as we know it today starts with a dot-com blowout on Broadway. Twenty years ago, before Shane Legg clicked with neuroscience postgrad Demis Hassabis over a shared fascination with intelligence; before the pair teamed up with Hassabis's childhood friend Mustafa Suleyman, a progressive activist, to spin that fascination into a company called DeepMind; and before Google bought that company for more than half a billion dollars four years later, Legg worked at a startup in New York called Webmind, set up by AI researcher Ben Goertzel. Today the two men represent two very different branches of the future of artificial intelligence, but their roots reach back to common ground. Even for the heady days of the dot-com bubble, Webmind's goals were ambitious. Goertzel wanted to create a digital baby brain and release it onto the internet, where he believed it would grow up to become fully self-aware and far smarter than humans.
More and more, when you apply for a job, ask for a raise, or wait to be assigned your work schedule, AI is choosing your fate. Alarmingly, many job applicants never realize that they are being evaluated by a computer, and they have almost no recourse when the software is biased, makes a mistake, or fails to accommodate a disability. While New York City has taken the important step of trying to address the threat of AI bias, the problem is that the rules pending before the City Council are bad, really bad, and we should listen to the activists speaking out before it's too late. Some advocates are calling for amendments to this legislation, such as expanding the definition of discrimination beyond race and gender, increasing transparency, and covering the use of AI tools in hiring, not just their sale. But many more problems plague the current bill, which is why a ban on the technology is presently preferable to a bill that sounds better than it actually is.
Albert Einstein once said that "wisdom is not a product of schooling, but the lifelong attempt to acquire it." Centuries of human progress have been built on our brains' ability to continually acquire, fine-tune, and transfer knowledge and skills. Such continual learning, however, remains a long-standing challenge in machine learning (ML), where the ongoing acquisition of incrementally available information from non-stationary data often leads to catastrophic forgetting. Gradient-based deep architectures have spurred the development of continual learning in recent years, but continual learning algorithms are often designed and implemented from scratch with different assumptions, settings, and benchmarks, making them difficult to compare, port, or reproduce. Now, a research and development team from ContinualAI, together with researchers from KU Leuven, ByteDance AI Lab, the University of California, New York University, and other institutions, has proposed Avalanche, an end-to-end library for continual learning based on PyTorch.
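Catastrophic forgetting is easy to reproduce in miniature. The sketch below is plain Python, not the Avalanche API; the two tasks, the data, and the hyperparameters are all invented for illustration. A one-parameter model is trained with gradient descent on one task, then training continues on a conflicting task, and the first task's error climbs back up:

```python
# Minimal sketch of catastrophic forgetting (illustrative only):
# a single linear unit y = w*x trained sequentially on two
# conflicting regression tasks with plain gradient descent.

def mse(w, data):
    """Mean squared error of y = w*x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on the MSE of the one-parameter model."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Task A: y = 2x.  Task B: y = -2x (a conflicting, "non-stationary" task).
task_a = [(x, 2.0 * x) for x in (-1.0, 0.5, 1.0)]
task_b = [(x, -2.0 * x) for x in (-1.0, 0.5, 1.0)]

w = 0.0
w = train(w, task_a)
loss_a_before = mse(w, task_a)   # near zero: task A is learned

w = train(w, task_b)             # continue training on task B only
loss_a_after = mse(w, task_a)    # large again: task A is "forgotten"

print(loss_a_before, loss_a_after)
```

Continual-learning methods of the kind Avalanche packages up are, at heart, strategies for keeping `loss_a_after` low while still learning task B.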
Persado's 2021 State of AI and Creativity Survey highlights the growing importance of technology to generate and deliver more predictive, personalized creative that can be directly attributed to business outcomes.

NEW YORK–(BUSINESS WIRE)–#AI–Persado, the leading AI content generation and decisioning platform that unlocks the value of the right words at every customer interaction, today announced the results of a first-of-its-kind survey: 2021 State of AI and Creativity. More than 400 chief marketing officers and senior marketing leaders were asked to provide input on their company's readiness, and on best practices for applying AI to an area of business that receives significant time, energy, and investment: the creative process. The survey found a growing trend among senior marketing leaders to leverage AI and machine learning in new ways to deliver more effective messages to prospects and customers. Key findings of the survey from U.S. respondents include:

"Marketers have been leveraging technology to gain insights and improve performance across their portfolios for many years – applying AI to targeting and segmentation, marketing mix optimization, promotions and discounts, and dynamic pricing," says Amy Heidersbach, Chief Marketing Officer of Persado. "But how to optimize creative at scale has largely remained a blind spot for data-driven, digital-first companies. Now, it's clear that marketing leaders are turning their attention toward creative to unlock new sources of value – replacing human-only guesswork with human-plus-machine certainty."
A robot dog joined the human members of the NYPD's response to a domestic dispute inside a public housing apartment building in Manhattan.

NEW YORK - Now-viral videos show, for lack of a better term, a robot dog joining the human members of the NYPD's response to a domestic dispute inside a NYCHA building in Kips Bay on Monday. "I can't believe what I'm seeing," said Melanie Aucello, president of the 344 E. 28th St. Tenant Association. Aucello shot one of those viral videos on her smartphone and compared the scene she witnessed to something out of a dystopian movie. "It scared me," she said.
The internet is terrified of the New York Police Department's newest "canine" unit: Digidog, a robot dog of the kind the Netflix series "Black Mirror" warned about. After a video of Digidog in action went viral, the internet started comparing it to the season four episode "Metalhead," in which human society has collapsed after being overrun by robot dogs. Some fear that this new invention could eventually turn into something negative. Digidog was first deployed in February, when men were being held hostage in a Bronx apartment and the robot was sent in to determine whether it was safe for the police to enter, the New York Times reported. Boston Dynamics, the creator of Digidog, has explained that these devices won't be used as weapons, but a political art collective has shared examples of how quickly things can go downhill, including the killing of Muslim Americans by drones, according to the Guardian.
Artificial intelligence (AI) technologies hold big promise for the financial services industry, but they also bring risks that must be addressed with the right governance approaches, according to a white paper by a group of academics and executives from the financial services and technology industries, published by Wharton AI for Business. Wharton is the academic partner of the group, which calls itself Artificial Intelligence/Machine Learning Risk & Security, or AIRS. Based in New York City, the AIRS working group was formed in 2019 and includes about 40 academics and industry practitioners. The white paper details the opportunities and challenges financial firms face in implementing AI strategies, and how they could identify, categorize, and mitigate potential risks by designing appropriate governance frameworks. However, AIRS stopped short of making specific recommendations, saying that its paper is meant for discussion purposes.
When Jerrel Gantt was released from prison after three years, he was handed a pamphlet about healthcare and nothing else. He began searching for employment, a deep source of anxiety for him, and secured housing through a ministry in New York City. He later enrolled in school part-time. As he settled into life outside of prison and developed a support system, Gantt began going on dates with people he met on apps like Tinder. The process has not been without challenges: for Gantt, revealing that he was formerly incarcerated usually comes up early in the dating process.
At a Fintech conference in New York put on by Fordham University in the spring of 2017, an AI expert made a bold prediction: Someday there would be a company with a market cap of one trillion dollars. He predicted that this valuation, which at the time seemed incredible, would be based on that firm's extensive use of AI. He was correct in at least one regard: Apple became the world's first trillion-dollar company a little over a year later. Was Apple's staggering valuation due to the power of AI? Are AI and, more broadly, data analytics, the key drivers of business growth? Apple uses data analytics and AI extensively.
Transformer architectures have become the building blocks for many state-of-the-art natural language processing (NLP) models. While transformers are certainly powerful, researchers' understanding of how they actually work remains limited. This is problematic due to the lack of transparency and the possibility of biases being inherited via training data and algorithms, which could cause models to produce unfair or incorrect predictions. In the paper Transformer Visualization via Dictionary Learning: Contextualized Embedding as a Linear Superposition of Transformer Factors, a team including Yann LeCun, with researchers from Facebook AI Research, UC Berkeley, and New York University, leverages dictionary learning techniques to provide detailed visualizations of transformer representations and insights into the semantic structures captured by transformers, such as word-level disambiguation, sentence-level pattern formation, and long-range dependencies. Previous attempts to visualize and analyze this "black box" issue in transformers include direct visualization and, more recently, "probing tasks" designed to interpret transformer models.
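The paper's core idea, representing a contextualized embedding as a sparse linear superposition of dictionary atoms ("transformer factors"), can be sketched in miniature. The toy example below is plain Python: the dictionary, the factor names, and the 4-dimensional "embedding" are all invented for illustration, and greedy matching pursuit stands in for the paper's actual dictionary-learning procedure. It recovers which atoms a vector is built from:

```python
# Illustrative sketch (not the paper's code): decomposing a toy
# "embedding" into a sparse superposition of unit-norm dictionary
# atoms via greedy matching pursuit.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(x, atoms, n_iters=2):
    """Greedily express x as a sparse weighted sum of atoms."""
    residual = list(x)
    coeffs = {}
    for _ in range(n_iters):
        # Pick the atom most correlated with the current residual.
        best = max(atoms, key=lambda name: abs(dot(residual, atoms[name])))
        c = dot(residual, atoms[best])
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# Toy orthonormal dictionary of hypothetical "transformer factors".
atoms = {
    "factor_animal": [1.0, 0.0, 0.0, 0.0],
    "factor_place":  [0.0, 1.0, 0.0, 0.0],
    "factor_tense":  [0.0, 0.0, 1.0, 0.0],
}

# A toy contextualized embedding constructed as 3*animal + 1*tense.
x = [3.0, 0.0, 1.0, 0.0]
coeffs, residual = matching_pursuit(x, atoms)
print(coeffs)  # which factors contribute, and with what weights
```

In the paper, the dictionary is learned from real transformer activations rather than hand-built, and the recovered coefficients are what drive the visualizations of word-sense and structural factors.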