If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Nearly two-thirds of Americans want the U.S. to regulate the development and use of artificial intelligence in the next year or sooner, with half saying that regulation should have begun yesterday, according to a Morning Consult poll. Another 13% say that regulation should start in the next year. "You can thread this together," Austin Carson, founder of new nonprofit group SeedAI and former government relations lead for Nvidia, said in an email. "Half or more Americans want to address all of these things, split pretty evenly along ideological lines." The poll, which SeedAI commissioned, backs up earlier findings that while U.S. adults support investment in the development of AI, they want clear rules around that development.
Every time a dramatic, unforeseen political event happens, there follows a left-field fixation that some out-of-control technology created it. Whenever this fear about big tech comes around we are told that something new, even more toxic, has infiltrated our public discourse, triggering hatred towards politicians and public figures, conspiracy theories about Covid and even major political events like Brexit. The concern over anonymity online becomes a particular worry – as if ending it will somehow, like throwing a blanket at a raging house fire, subdue our fevered state. You may remember that during the summer's onslaught of racist abuse towards black players in the England football team, instead of reckoning with the fact that racism still haunts this country, we busied ourselves with bluster about how "cowards" online would be silenced if we only just demanded they identify themselves. We resort to this explanation, that shadowy social media somehow stimulate our worst impulses, despite there being little evidence that most abuse is from unidentifiable sources.
The changing dynamics of the digital world have led to several privacy challenges for businesses, large and small, placing increasing pressure on them to evolve their processes and strategies. Much of the burden stems from the sheer volume of data present today, which is predicted to balloon to 175 zettabytes (ZB) by 2025. It is simply beyond human capability to effectively process this data and protect privacy without the assistance of privacy-enhancing technologies (PETs). This has led to an explosion of adaptive machine learning (ML) algorithms that can wade through the mountain of data while continuously and efficiently changing their behavior in real time as new data streams are fed into them.
The European Union (EU) has launched the world's first comprehensive legislative package to regulate AI. The Artificial Intelligence Act (AIA), which is currently progressing through the EU legislative process, will establish a risk-based framework for regulating the use of AI anywhere within the EU, including by companies based outside the EU. A limited number of unacceptable AI use cases, such as social scoring by governments, would be completely banned; high-risk use cases would be subjected to prior conformity assessment and wide-ranging new compliance obligations; medium-risk functions would be subject to enhanced transparency rules; and low-risk use cases could largely be pursued without any new obligations under the AIA. By legislating now, the EU hopes to establish a de facto global standard for AI. The EU is certainly well ahead of the US in this area: debate in the US has focused more on the extent to which the US may be falling behind China in military applications of AI, although some think tanks are examining the ethics of AI, and new state privacy laws have tasked regulators with developing standards for transparency and choice.
In brief A man was detained in Japan for selling uncensored pornographic content that he had, in a way, depixelated using machine-learning tools. Masayuki Nakamoto, 43, was said to have made about 11 million yen ($96,000) from peddling over 10,000 processed porn clips, and was formally accused of selling ten hardcore photos for 2,300 yen ($20). Explicit images of genitalia are forbidden in Japan, and as such its porn is partially pixelated. Don't pretend you don't know what we're talking about. Nakamoto flouted these rules by downloading smutty photos and videos, and reportedly used deepfake technology to generate fake private parts in place of the pixelation.
The limitations of the models are summarized in Figure 12. For binary classification on both the Reddit and Twitter data, the models over-fit due to class imbalance, even though this yielded better accuracy. Furthermore, the war-crime examples share very similar text and therefore lack diversity; this is another likely reason our models perform decently despite the dataset being small. Future work on the binary models will focus on more effective ways to clean the texts.
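One common mitigation for the class-imbalance over-fitting noted above is to reweight classes inversely to their frequency during training. The sketch below assumes scikit-learn; the texts and labels are synthetic stand-ins, not the paper's actual Reddit/Twitter data, and this is one standard technique rather than the authors' method.

```python
# Hedged sketch: class weighting for imbalanced binary text
# classification. The tiny corpus here (10% positive) is synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["report of a war crime"] * 5 + ["chat about the weather"] * 45
labels = [1] * 5 + [0] * 45  # rare positive class: 10% of examples

clf = make_pipeline(
    TfidfVectorizer(),
    # class_weight="balanced" scales each class's loss contribution
    # by n_samples / (n_classes * class_count), so the rare class is
    # not drowned out by the majority class.
    LogisticRegression(class_weight="balanced"),
)
clf.fit(texts, labels)
preds = clf.predict(["report of a war crime", "chat about the weather"])
```

With weighting, a classifier is pushed away from the trivial majority-class solution that can make raw accuracy look deceptively good on imbalanced data; metrics such as F1 or recall on the minority class are the more informative check.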
The symposium on Artificial Intelligence – or AI – organized by the Pontifical Council for Culture, in cooperation with the German Embassy to the Holy See, will open in Rome on Thursday. The theme for the gathering is, "The Challenge of Artificial Intelligence for Human Society and the Idea of the Human Person". The aim of the meeting is to promote a better awareness of the profound cultural impact AI is likely to have on human society. The symposium will feature six experts from the fields of neuroscience, philosophy, Catholic theology, human rights law, ethics and electrical engineering. Experts from the Allen Institute for Brain Science, Goethe University, Boston College, and Google will discuss whether AI can reproduce consciousness, the philosophical challenges AI poses, and AI's relationship to religion and what it would mean for Catholic doctrine.
New York City has launched its first artificial intelligence (AI) strategy, with an emphasis on digital ethics. It aims to help the city establish a shared understanding of AI and capitalise on the benefits while managing the risks. The 116-page AI Strategy focuses on how to use AI to better serve residents; building AI know-how within government; modernising data infrastructure; city governance and policy around AI; developing partnerships with external organisations; and promoting equitable access to opportunities. It is the latest in a series of initiatives that aim to make New York City 'future-ready', following on from the IoT Strategy and the Internet Master Plan. "As a global epicentre of innovation, New York City has a key role to play in shaping the future of AI," said New York City Chief Technology Officer, John Paul Farmer.