US Strike Kills Jihadist Leader In Syria

International Business Times

A US drone strike in northwestern Syria killed a Yemeni leader of a local jihadist group affiliated to Al-Qaeda, the US military and a Syrian war monitor said. The strike, carried out on Monday just before midnight (2100 GMT) on the eastern edge of the city of Idlib, took out a man described as a leader of the Hurras al-Deen group. "Abu Hamzah al Yemeni was travelling alone on a motorcycle at the time of the strike," US Central Command said in a statement, adding that an "initial review indicates no civilian casualties". The US is "highly confident" that the strike, carried out from a drone, killed Abu Hamzah al-Yemeni, a US official with knowledge of the operation told CNN, speaking on condition of anonymity. The Britain-based Syrian Observatory for Human Rights, which relies on a wide network of sources inside Syria, confirmed on Tuesday that Yemeni was killed in the attack, saying it was the second such attempt to neutralise him after a similar strike last year.


Understanding new developments in Convolutional Neural Networks (Deep Learning)

#artificialintelligence

Abstract: Singing techniques are used for expressive vocal performances by employing temporal fluctuations of the timbre, the pitch, and other components of the voice. Their classification is a challenging task, mainly because of two factors: 1) the fluctuations in singing techniques vary widely and are affected by many factors, and 2) existing datasets are imbalanced. To deal with these problems, we developed a novel audio feature learning method based on deformable convolution, with decoupled training of the feature extractor and the classifier using a class-weighted loss function. The experimental results show the following: 1) deformable convolution improves the classification results, particularly when it is applied to the last two convolutional layers, and 2) both re-training the classifier and weighting the cross-entropy loss function by a smoothed inverse frequency enhance the classification performance.

Abstract: Detection of cartilage loss is crucial for the diagnosis of osteo- and rheumatoid arthritis.
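One of the ideas mentioned above, weighting the cross-entropy loss by a smoothed inverse class frequency, can be sketched in a few lines. This is a generic illustration with made-up class counts and an assumed smoothing exponent, not the authors' actual code.

```python
# Minimal sketch: class-weighted cross-entropy with smoothed inverse frequency.
# Class counts and the smoothing exponent are illustrative assumptions.
import torch
import torch.nn as nn

class_counts = torch.tensor([5000., 1200., 300., 60.])  # hypothetical imbalanced classes
beta = 0.5                                               # smoothing exponent (assumed)

# Smoothed inverse frequency: rare classes get larger weights,
# while the smoothing keeps the weights from exploding.
weights = 1.0 / class_counts.pow(beta)
weights = weights / weights.sum() * len(class_counts)    # normalize to mean 1

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 4)             # batch of 8 predictions over 4 classes
targets = torch.randint(0, 4, (8,))    # ground-truth singing-technique labels
loss = criterion(logits, targets)
print(loss.item())
```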


Training a 20-Billion Parameter AI Model on a Single Processor - EETimes

#artificialintelligence

Cerebras has shown off the capabilities of its second-generation wafer-scale engine, announcing that it has set the record for the largest AI model ever trained on a single device. For the first time, a natural language processing network with 20 billion parameters, GPT-NeoX 20B, was trained on a single device. A new type of neural network, the transformer, is taking over. Today, transformers are mainly used for natural language processing (NLP), where their attention mechanism can help spot the relationships between words in a sentence, but they are spreading to other AI applications, including vision. Broadly, the bigger a transformer is, the more accurate it tends to be.
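The attention mechanism referred to here is, at its core, scaled dot-product attention. Below is a generic, minimal sketch of that operation; it is illustrative only and not taken from GPT-NeoX or Cerebras software.

```python
# Generic sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of V's rows; the weights measure
    how strongly each query token attends to each key token."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)                    # self-attention
print(out.shape)   # (4, 8)
```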


GPT-3 Training Programmers for the Present (and the Future)

#artificialintelligence

Last year, I wrote a paper in Spanish about the future of programmers. TL;DR: Instead of manually translating my paper, I decided to rewrite it completely with GPT-3. In the same way, The Guardian asked GPT-3 to write an article when it was in private beta. When I asked it to translate the article, GPT-3 decided the title was not good enough. The current market is looking for programmers to stack bricks (1) using their trendy languages.
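For readers curious what asking GPT-3 to translate looks like in practice, here is a minimal sketch using the OpenAI Completion API that was current when this article appeared; the model name, prompt, and parameters are assumptions for illustration, not the author's actual setup.

```python
# Illustrative sketch of prompting GPT-3 for translation (assumed setup).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Translate the following Spanish paragraph into English:\n\n"
    "El futuro de los programadores depende de cómo usemos estas herramientas.\n\n"
    "English:"
)

response = openai.Completion.create(
    model="text-davinci-002",   # a GPT-3 model available at the time (assumed)
    prompt=prompt,
    max_tokens=200,
    temperature=0.3,            # low temperature for a more faithful translation
)
print(response.choices[0].text.strip())
```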


Nonsense Sentience, Condemning GPT-4chan, DeepFake Bans, CVPR Plagiarism

#artificialintelligence

This week: LaMDA's Sentience is Nonsense, Condemning the deployment of GPT-4chan, Only 12% of companies are 'AI Achievers', EU To Target Big Tech Over Deepfakes, and more! If you are a fan, we'd appreciate your feedback! Feel free to let us know your thoughts via a review on Apple Podcasts, an email to contact@lastweekin.ai, or just a DM on Twitter!


NASA is set to launch its 'CAPSTONE' spacecraft this morning

Daily Mail - Science & tech

NASA is finally set to launch its 'CAPSTONE' spacecraft mission on Tuesday morning, marking an important early stage in its Artemis programme. The spacecraft, which is about the size of a microwave oven and weighs just 55 pounds, will blast off from Māhia Peninsula, New Zealand at 5:55am EDT (10:55am BST). Over six months, it will test the stability of a halo-shaped orbit around the moon before this orbit is used by Lunar Gateway, NASA's planned lunar outpost. Lunar Gateway will serve as a 'staging area' for landing humans on the moon for the first time in 50 years and potentially as a jumping-off point for missions to Mars. The public can watch today's CAPSTONE launch from New Zealand on NASA Live.


AIhub monthly digest: June 2022 – bootstrapped meta-learning, ethical AI, and a song contest

AIHub

Welcome to our June 2022 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we find out about meta-learning, explore the importance of images in communicating about AI, and ponder over who to vote for in the AI Song Contest. In the latest episode of New voices in AI, Oumaima Hajri shares her work and journey in ethical AI. Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, and Satinder Singh won an ICLR 2022 outstanding paper award for their work Bootstrapped meta-learning. We spoke to Sebastian about how the team approached the problem of meta-learning, how their algorithm performs, and plans for future work.


An AI Was Trained To Play Minecraft With 70,000 Hours Of YouTube Videos

#artificialintelligence

OpenAI, the artificial intelligence research organization co-founded by Elon Musk, has trained an AI to play Minecraft almost as well as humans. It only took about 70,000 hours of binging YouTube videos. A blog post detailing the feat reveals that researchers used a technique called "Video PreTraining (VPT)" to train a neural network on how to play Minecraft. This involved gathering a 2,000-hour sample dataset of actual humans playing Minecraft that includes not just the raw video, but also the exact keypresses and mouse movements. From there, the researchers trained an inverse dynamics model (IDM) to predict the action being taken at each step of the videos, using both past and future frames.
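A minimal sketch of the VPT idea follows: an inverse dynamics model pseudo-labels unlabelled video with actions, and a policy is then trained by behavioural cloning on those pseudo-labels. The network sizes, window length, and action space below are illustrative assumptions, not OpenAI's actual architecture.

```python
# Toy sketch of Video PreTraining: IDM pseudo-labels video, policy clones it.
import torch
import torch.nn as nn

N_ACTIONS, WINDOW, FRAME_DIM = 16, 5, 128   # toy sizes, not OpenAI's

# Inverse dynamics model: sees a window of past AND future frames and
# predicts the action taken at the centre frame.
idm = nn.Sequential(nn.Flatten(), nn.Linear(WINDOW * FRAME_DIM, 256),
                    nn.ReLU(), nn.Linear(256, N_ACTIONS))

# Causal policy: sees only the current frame and predicts the action to take.
policy = nn.Sequential(nn.Linear(FRAME_DIM, 256), nn.ReLU(),
                       nn.Linear(256, N_ACTIONS))

# Step 1 (assumed already done): the IDM was trained on the small labelled
# dataset of humans playing with recorded keypresses and mouse movements.

# Step 2: pseudo-label a large unlabelled video with the IDM.
unlabelled_video = torch.randn(1000, FRAME_DIM)           # stand-in for YouTube frames
windows = unlabelled_video.unfold(0, WINDOW, 1)           # sliding windows of frames
windows = windows.permute(0, 2, 1).reshape(-1, WINDOW, FRAME_DIM)
with torch.no_grad():
    pseudo_actions = idm(windows).argmax(dim=-1)

# Step 3: behavioural cloning of the policy on the pseudo-labelled frames.
centre_frames = unlabelled_video[WINDOW // 2 : WINDOW // 2 + len(pseudo_actions)]
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(policy(centre_frames), pseudo_actions)
loss.backward()
opt.step()
print(loss.item())
```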


Machine learning, radiomics differentiates glioma

#artificialintelligence

An automated method based on a machine-learning algorithm and MRI radiomics can differentiate between low-grade and high-grade gliomas, according to research presented at the annual Society for Imaging Informatics in Medicine (SIIM) conference in Kissimmee, FL. After developing a workflow to support it, researchers from Yale School of Medicine created an automated approach that segments gliomas on brain MR exams, performs radiomics analysis, and then predicts if the tumor is high or low grade. In testing, their approach yielded an area under the curve (AUC) of 0.86. "We were able to develop a PACS-based auto-segmentation tool, which was linked to a high- versus low-grade glioma prediction tool," said Sara Merkaj, a postgraduate research fellow. "This algorithm could potentially be incorporated into clinical practice."
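The prediction step can be pictured as a standard radiomics pipeline: feature vectors extracted from the segmented tumour feed a classifier whose performance is summarised by the area under the ROC curve. The sketch below uses synthetic data and a logistic regression as stand-ins for the study's actual features and model.

```python
# Sketch of the classification step on synthetic radiomics-style features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 50))       # 200 patients x 50 radiomics features (synthetic)
y = rng.integers(0, 2, size=200)     # 1 = high-grade glioma, 0 = low-grade (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.2f}")             # the study reported an AUC of 0.86 on real data
```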


Stemming vs Lemmatization in NLP: Must-Know Differences

#artificialintelligence

This article was published as a part of the Data Science Blogathon. In the field of Natural Language Processing (NLP), lemmatization and stemming are text normalization techniques. These techniques are used to prepare words, text, and documents for further processing. Languages such as English and Hindi consist of many words that are derived from one another; a language that contains such derived words is called an inflected language. For instance, the word "historical" is derived from the word "history" and is hence a derived word.
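As a quick illustration of the difference, a stemmer chops suffixes with heuristic rules, while a lemmatizer maps words to their dictionary lemmas. The snippet below uses NLTK's PorterStemmer and WordNetLemmatizer; the example words are chosen for illustration, and the WordNet data must be downloaded first.

```python
# Stemming vs lemmatization with NLTK (requires the WordNet corpus;
# newer NLTK versions may also need nltk.download("omw-1.4")).
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet", quiet=True)

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

words = ["historical", "studies", "corpora", "running"]
print([stemmer.stem(w) for w in words])            # rule-based suffix stripping
print([lemmatizer.lemmatize(w) for w in words])    # dictionary lemmas (default POS: noun)
print(lemmatizer.lemmatize("running", pos="v"))    # "run" when treated as a verb
```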