

Where did I put it? Loss of vital crypto key voids election

New Scientist

Feedback is entertained by the commotion at the International Association for Cryptologic Research's recent elections, where results could not be decrypted after an honest but unfortunate human mistake. The phrase "you couldn't make it up", Feedback feels, is often misunderstood. It doesn't mean there are limits to the imagination, but rather that there are some developments you can't include in a fictional story because people would say "oh come on, that would never happen". The trouble is, those people are wrong, because real life is frequently ridiculous. In the world of codes and ciphers, one of the more important organisations is the International Association for Cryptologic Research, described as "a non-profit organization devoted to supporting the promotion of the science of cryptology". The IACR recently held elections to choose new officers and directors and to tweak its bylaws.


Has GPT-4 really passed the startling threshold of human-level artificial intelligence? Well, it depends

#artificialintelligence

Recent public interest in tools like ChatGPT has raised an old question in the artificial intelligence community: is artificial general intelligence (in this case, AI that performs at human level) achievable? An online preprint this week has added to the hype, suggesting the latest advanced large language model, GPT-4, is in the early stages of artificial general intelligence (AGI), as it's exhibiting "sparks of intelligence". OpenAI, the company behind ChatGPT, has unabashedly declared its pursuit of AGI. Meanwhile, a large number of researchers and public intellectuals have called for an immediate halt to the development of these models, citing "profound risks to society and humanity". These calls to pause AI research are theatrical and unlikely to succeed – the allure of advanced intelligence is too provocative for humans to ignore, and too rewarding for companies to pause.


Human-level AI is a giant risk. Why are we entrusting its development to tech CEOs?

#artificialintelligence

Technology companies are racing to develop human-level artificial intelligence, whose development poses one of the greatest risks to humanity. Last week, John Carmack, a software engineer and video game developer, announced that he has raised $20 million to start Keen Technologies, a company devoted to building fully human-level AI. He is not the only one. There are currently 72 projects around the world focused on developing a human-level AI, also known as an AGI -- meaning an AI that can do any cognitive task at least as well as humans can. Many have raised concerns about the effects that even today's use of artificial intelligence, which is far from human-level, already has on our society.


Google Says It's Closing in on Human-Level Artificial Intelligence

#artificialintelligence

Artificial intelligence researchers are doubling down on the concept that we will see artificial general intelligence (AGI) -- that's AI that can accomplish anything humans can, and probably many things we can't -- within our lifetimes. Responding to a pessimistic op-ed published by TheNextWeb columnist Tristan Greene, Google DeepMind lead researcher Dr. Nando de Freitas boldly declared that "the game is over" and that as we scale AI, so too will we approach AGI. Greene's original column made the relatively mainstream case that, in spite of impressive advances in machine learning over the past few decades, there's no way we're gonna see human-level artificial intelligence within our lifetimes. But it appears that de Freitas, like OpenAI Chief Scientist Ilya Sutskever, believes otherwise. "Solving these scaling challenges is what will deliver AGI," the DeepMind researcher tweeted, later adding that Sutskever "is right" to claim, quite controversially, that some neural networks may already be "slightly conscious."


Google's DeepMind says it is close to achieving 'human-level' artificial intelligence

Daily Mail - Science & tech

DeepMind, a British company owned by Google, may be on the verge of achieving human-level artificial intelligence (AI). Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said 'the game is over' in regards to solving the hardest challenges in the race to achieve artificial general intelligence (AGI). AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training. According to De Freitas, the quest for scientists is now scaling up AI programs, such as with more data and computing power, to create an AGI. Earlier this week, DeepMind unveiled a new AI 'agent' called Gato that can complete 604 different tasks 'across a wide range of environments'. Gato uses a single neural network – a computing system with interconnected nodes that works like nerve cells in the human brain.


Language Acquisition Environment for Human-Level Artificial Intelligence

Park, Deokgun

arXiv.org Artificial Intelligence

Despite recent advances in many application-specific domains, we do not know how to build a human-level artificial intelligence (HLAI). We conjecture that learning from others' experiences through language is the essential characteristic that differentiates human intelligence from the rest. Humans can update the action-value function from a verbal description alone, as if they had experienced the sequences of states, actions, and corresponding rewards first-hand. In this paper, we present our ongoing effort to build an environment that facilitates research into models of this capability. In this environment, there are no explicit definitions of tasks, nor rewards given for accomplishing them. Rather, the models go through the experiences of a human infant, from fetus to 12 months of age. The agent should learn to speak its first words as a human child does. We expect the environment will contribute to research on HLAI.


How Facebook's Yann LeCun is charting a path to human-level artificial intelligence

#artificialintelligence

When Yann LeCun founded the Facebook AI Research (FAIR) lab in 2013, artificial intelligence was entering a boom period that his research helped trigger. Facebook's chief AI scientist had been among a group of computer scientists who retained faith in deep neural networks during an "AI winter" of reduced funding and interest in the field. In 2019, his efforts earned him a share of the Turing Award, together with his friends Yoshua Bengio and Geoffrey Hinton. Today, AI is an essential component of Facebook's vast array of applications, touching everything from Messenger to content moderation. "You take AI out of Facebook, and basically the services crumble," LeCun tells TNW. But fears are now emerging that another winter will soon arrive if AI can't live up to its current hype, particularly around the promise of artificial general intelligence (AGI): the idea that a machine can perform any intellectual task a human can -- and many that they can't.


Artificial intelligence isn't very intelligent and won't be any time soon

#artificialintelligence

Many think we'll see human-level artificial intelligence in the next 10 years. Industry continues to boast smarter tech like personalized assistants or self-driving cars. And in computer science, new and powerful tools embolden researchers to assert that we are nearing the goal in the quest for human-level artificial intelligence. Despite the hype, despite progress, we are far from machines that think like you and me. Last year Google unveiled Duplex -- a Pixel smartphone assistant which can call and make reservations for you.

