Collaborating Authors: choi


Samsung bets big on 6G, expects roll out as early as 2028 - CRN - India

#artificialintelligence

South Korean tech giant Samsung has started working on sixth-generation (6G) cellular technology and expects the 6G standard to be completed, and the earliest commercial deployments to appear, as early as 2028. Mass commercialisation of 6G may occur around 2030, Samsung said on Tuesday, adding that both humans and machines will be the main users of 6G. Samsung, which released a white paper entitled "The Next Hyper-Connected Experience for All," said 6G will be characterised by advanced services such as truly immersive extended reality (XR), high-fidelity mobile holograms and digital replicas. The development comes even as the world is still far from realising the full potential of fifth-generation cellular technology, commonly known as 5G. Samsung said its vision for 6G is to bring the next hyper-connected experience to every corner of life.


Samsung expects 6G to launch as early as 2028

ZDNet

Samsung Electronics expects 6G communication to be commercialised as early as 2028 and to go mainstream by 2030, it said in a white paper published on Tuesday. ITU-R, the sector of the International Telecommunication Union (ITU) responsible for radio communication, is expected to begin its work to define a 6G vision next year, Samsung said. The time spent defining and developing technical standards for each successive generation, from 2G to 5G, has shortened, the company said. It took 15 years for 3G to be defined but only eight years for 5G, and this trend of acceleration is expected to continue with 6G, it added. The South Korean tech giant also said its vision for 6G is to bring the next hyper-connected experience to every corner of life.


IBM pulls in Singapore industry partners to trial 5G use cases for manufacturing

ZDNet

IBM has pulled together industry partners in Singapore to trial 5G use cases for the manufacturing sector, focusing on the use of artificial intelligence (AI) and augmented reality (AR) to enhance video analytics and predictive maintenance, among others. As part of the partnership, local telco M1 and South Korean networking vendor Samsung Electronics will develop and test applications that tap 5G as well as other technologies such as AI and the Internet of Things (IoT). Industry regulator Infocomm Media Development Authority (IMDA) will then share these applications and the learnings from the trial with local enterprises and small and midsize businesses (SMBs) in the manufacturing industry, as well as with other participants in the local 5G ecosystem. The trials are scheduled to begin in the second quarter and will be run at IBM Singapore's Centre of Competency for Smart Factory Operating Model. With China-US trade relations still tense, efforts to cut Chinese vendors such as Huawei out of 5G implementations may create separate ecosystems, and consumers could lose out on the benefits of widely adopted global standards, as demonstrated with 4G.


Common Sense Comes to Computers

#artificialintelligence

One evening last October, the artificial intelligence researcher Gary Marcus was amusing himself on his iPhone by making a state-of-the-art neural network look stupid. Marcus' target, a deep learning network called GPT-2, had recently become famous for its uncanny ability to generate plausible-sounding English prose with just a sentence or two of prompting. When journalists at The Guardian fed it text from a report on Brexit, GPT-2 wrote entire newspaper-style paragraphs, complete with convincing political and geographic references. Marcus, a prominent critic of AI hype, gave the neural network a pop quiz: a sentence-completion prompt about lighting kindling in a fireplace. Surely a system smart enough to contribute to The New Yorker would have no trouble completing the sentence with the obvious word, "fire."


On Tractable Representations of Binary Neural Networks

arXiv.org Artificial Intelligence

We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network's behavior. First, we consider the task of verifying the robustness of a neural network, and show how we can compute its expected robustness given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial time algorithm for compiling a neuron. We then provide a case study on a handwritten digits dataset, highlighting how two neural networks trained from the same dataset can have very high accuracies, yet very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
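
The neuron-compilation step is the algorithmic core of the abstract above: a single binary neuron with integer weights can be compiled in pseudo-polynomial time because partial weighted sums, not full input assignments, determine the remaining decision. Below is a minimal sketch of that idea, assuming integer weights and a fixed variable order; it builds an OBDD-like layered diagram and illustrates the general technique, not the paper's algorithm or code (all names are invented):

```python
# Sketch (not the paper's implementation): compile a single binary neuron
# f(x) = [w . x >= threshold] with integer weights into an OBDD-like
# layered diagram. States are (input index, partial sum); merging equal
# partial sums is what makes the construction pseudo-polynomial in the
# weight magnitudes rather than exponential in the number of inputs.
from functools import lru_cache

def compile_neuron(weights, threshold):
    TRUE, FALSE = "T", "F"
    nodes = {}  # (i, partial_sum) -> (var index, lo child, hi child)

    @lru_cache(maxsize=None)
    def build(i, s):
        # Prune to a terminal as soon as the outcome is decided.
        remaining_pos = sum(w for w in weights[i:] if w > 0)
        remaining_neg = sum(w for w in weights[i:] if w < 0)
        if s + remaining_neg >= threshold:
            return TRUE
        if s + remaining_pos < threshold:
            return FALSE
        lo = build(i + 1, s)               # branch x_i = 0
        hi = build(i + 1, s + weights[i])  # branch x_i = 1
        if lo == hi:                       # standard BDD reduction rule
            return lo
        nodes[(i, s)] = (i, lo, hi)
        return (i, s)

    return build(0, 0), nodes

def evaluate(root, nodes, x):
    node = root
    while node not in ("T", "F"):
        i, lo, hi = nodes[node]
        node = hi if x[i] else lo
    return node == "T"

root, nodes = compile_neuron([3, -2, 1, 2], threshold=2)
print(len(nodes), "internal nodes")
print(evaluate(root, nodes, [1, 0, 0, 0]))  # 3 >= 2 -> True
```

Merging states with equal partial sums bounds the diagram's width by the range of reachable sums rather than by 2^n, which is where the pseudo-polynomial bound comes from; a full OBDD construction would additionally merge isomorphic nodes via a unique table.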


VaB-AL: Incorporating Class Imbalance and Difficulty with Variational Bayes for Active Learning

arXiv.org Machine Learning

Active Learning for discriminative models has largely been studied with a focus on individual samples, with less emphasis on how classes are distributed or which classes are hard to deal with. In this work, we show that this is harmful. We propose a method based on Bayes' rule that can naturally incorporate class imbalance into the Active Learning framework. We derive that three terms should be considered together when estimating the probability of a classifier making a mistake for a given sample: i) the probability of mislabelling a class, ii) the likelihood of the data given a predicted class, and iii) the prior probability on the abundance of a predicted class. Implementing these terms requires a generative model and an intractable likelihood estimation, so we train a Variational Auto-Encoder (VAE) for this purpose. To further tie the VAE to the classifier and facilitate VAE training, we use the classifier's deep feature representations as input to the VAE. By considering all three probabilities, especially the data imbalance, we can substantially improve the potential of existing methods under a limited data budget. We show that our method can be applied to classification tasks on multiple different datasets, including a real-world dataset with heavy class imbalance, significantly outperforming the state of the art.
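
Because the three terms combine multiplicatively under Bayes' rule, the acquisition score is compact enough to sketch. The snippet below is a hypothetical reading of the abstract's decomposition, not the authors' released code: the per-class log-likelihoods stand in for VAE ELBOs computed on the classifier's deep features, and every function and parameter name is an illustrative assumption.

```python
# Hedged sketch of the three-term score: for each unlabelled sample x,
# score(x) = sum over classes c of  p(mislabel | c) * p(x | c) * p(c).
# p(x | c) would come from a per-class VAE bound in the paper's setting;
# here it is just an input array. All names are invented for illustration.
import numpy as np

def acquisition_scores(p_mislabel, log_px_given_c, log_prior):
    """Rank unlabelled samples for Active Learning.

    p_mislabel:     (C,)   estimated P(classifier errs | predicted class)
    log_px_given_c: (N, C) per-class log-likelihoods, e.g. VAE ELBOs
    log_prior:      (C,)   log class abundances, log P(c)
    Returns (N,) scores; higher = more likely misclassified,
    so those samples are queried for labels first.
    """
    # log [ p(mislabel | c) * p(x | c) * p(c) ] for every sample/class pair
    log_joint = np.log(p_mislabel) + log_px_given_c + log_prior
    # Marginalise over classes with log-sum-exp for numerical stability.
    m = log_joint.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(log_joint - m).sum(axis=1, keepdims=True))).ravel()

# Toy usage: 3 classes, 4 unlabelled samples.
rng = np.random.default_rng(0)
scores = acquisition_scores(
    p_mislabel=np.array([0.1, 0.3, 0.05]),
    log_px_given_c=rng.normal(size=(4, 3)),
    log_prior=np.log([0.2, 0.7, 0.1]),
)
print(np.argsort(scores)[::-1])  # query order, most error-prone first
```

Marginalising over the predicted classes in log space keeps the score numerically stable when the per-class likelihoods differ by many orders of magnitude.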


AI still doesn't have the common sense to understand human language

#artificialintelligence

Until pretty recently, computers were hopeless at producing sentences that actually made sense. But the field of natural-language processing (NLP) has taken huge strides, and machines can now generate convincing passages with the push of a button. These advances have been driven by deep-learning techniques, which pick out statistical patterns in word usage and argument structure from vast troves of text. But a new paper from the Allen Institute for Artificial Intelligence calls attention to something still missing: machines don't really understand what they're writing (or reading). This is a fundamental challenge in the grand pursuit of generalizable AI, and beyond academia it's relevant for consumers, too.


Cut back on email if you want to fight global warming

The Japan Times

NEW YORK – Everyone has seen the warning at the end of emails saying, "Please consider the environment before printing." But if you care about global warming, you might want to consider not writing so many emails in the first place. More and more, people rely on their electronic mailboxes as a life organizer. Old emails, photos and files from years past sit undisturbed, awaiting your search for a name, a lost address, or maybe a photo of an old boyfriend. The problem is that all those messages require energy to preserve.


Study may explain how infections reduce autism symptoms

#artificialintelligence

For many years, some parents have noticed that their autistic children's behavioral symptoms diminished when they had a fever. This phenomenon has been documented in at least two large-scale studies over the past 15 years, but it was unclear why fever would have such an effect. A new study from MIT and Harvard Medical School sheds light on the cellular mechanisms that may underlie this phenomenon. In a study of mice, the researchers found that in some cases of infection, an immune molecule called IL-17a is released and suppresses a small region of the brain's cortex that has previously been linked to social behavioral deficits in mice. "People have seen this phenomenon before [in people with autism], but it's the kind of story that is hard to believe, which I think stems from the fact that we did not know the mechanism," says Gloria Choi, the Samuel A. Goldblith Career Development Assistant Professor of Applied Biology and an assistant professor of brain and cognitive sciences at MIT. "Now the field, including my lab, is trying hard to show how this works, all the way from the immune cells and molecules to receptors in the brain, and how those interactions lead to behavioral changes."


Developing AI ScaleNet: Enabling Seamless, High-resolution 8K Streaming

#artificialintelligence

Youngo Park (left) and Kwangpyo Choi, from Samsung Research's Visual Technology team

It's official: We've entered the era of 8K TVs. Around the world, sales of 8K displays are steadily increasing, and with TV manufacturers constantly adding more offerings to the mix, the 8K market is expected to continue to grow. There are, however, a few challenges that need to be addressed before viewers around the world will be able to enjoy 8K's stunning visuals in their entirety. First, more 8K content will need to be produced, and second, network connections will need to be capable of supporting the seamless streaming of 8K movies and shows. To address these issues, researchers from Samsung Research, an advanced R&D hub within Samsung Electronics' SET Business, have developed an AI Codec known as AI ScaleNet.