For anyone who has ever misplaced their iPhone, Apple's "Find My" app is a game-changer that borders on pure magic. Sign into the app, tap a button to sound an alarm on your MIA device, and, within seconds, it'll emit a loud noise -- even if your phone is set on silent mode -- that allows you to go find the missing handset. Yeah, it's usually stuck behind your sofa cushions or left facedown on a shelf somewhere. You can think of SArdo, a new drone project created by researchers at Germany's NEC Laboratories Europe GmbH, as Apple's "Find My" app on steroids. The difference is that, while finding your iPhone is usually just a matter of convenience, the technology developed by NEC investigators could be a literal lifesaver.
Sentiance has been named Best Mobile User Insight Platform & Innovation in Data Privacy and Security 2020 by Wealth & Finance International. The Artificial Intelligence Awards by Wealth & Finance International were launched to recognize exemplary performance and innovation among companies in the rapidly evolving AI market. Sentiance uses data science and machine learning to turn smartphone sensor data into rich behavioral insights about customers. These insights help our clients across the insurance, mobility, and commerce industries create innovative and personalized offerings. So what kind of user insights can Sentiance provide?
"I don't use Facebook anymore," she said. I was leading a usability session for the design of a new mobile app when she stunned me with that statement. It was a few years back, when I was a design research lead at IDEO and we were working on a service design project for a telecommunications company. The design concept we were showing her had a feature that was at once innocuous and ubiquitous -- the ability to log in using Facebook. But the young woman, somewhere between 20 and 40, balked at that feature and went on to tell me why she didn't trust the social network anymore. This session was, of course, in the aftermath of the 2016 Presidential election. An election in which a man whom many regarded as a television spectacle at best and a grandiose charlatan at worst had just been elected to our highest office. As of 2020, though, our democracy remains intact.
Huawei Technologies has launched a lab in Singapore to offer mobile developers resources and access to key technologies, including its core kits, artificial intelligence (AI), and augmented reality. The Chinese tech giant also is upping its commitment to deliver more localised apps in Singapore, where it saw a 143% jump in new registered developers last year. Led by its mobile arm Huawei Mobile Services (HMS), the new DigiX Lab is located at its local office in Changi Business Park and is the first such facility in Asia-Pacific, the vendor said in a statement Tuesday. It said the lab would support mobile developers throughout the entire app development cycle, and its resources would be made available online, accessible virtually across the region. Industry regulator Infocomm Media Development Authority has set aside S$40 million (US$29.53 million) to support research and development efforts and drive adoption of 5G, including initiatives focused on key verticals such as urban mobility and maritime.
Most of the world has not yet experienced the benefits of a 5G network, but the geopolitical race for the next big thing in telecommunications technology is already heating up. For companies and governments, the stakes couldn't be higher. The first to develop and patent 6G will be the biggest winners in what some call the next industrial revolution. Though still at least a decade away from becoming reality, 6G -- which could be up to 100 times faster than the peak speed of 5G -- could deliver the kind of technology that's long been the stuff of science fiction, from real-time holograms to flying taxis and internet-connected human bodies and brains. The scrum for 6G is already intensifying even as it remains a theoretical proposition, and underscores how geopolitics is fueling technological rivalries, particularly between the U.S. and China.
The edge of a network, as you may know, is the furthest extent of its reach. A cloud platform is a kind of network overlay that makes multiple network locations part of a single network domain. It stands to reason, then, that an edge cloud is a single addressable, logical network at the furthest extent of a physical network. And an edge cloud on a global scale should be a way to make multiple, remote data centers accessible as a single pool of resources -- of processors, storage, and bandwidth. The combination of 5G and edge computing will unleash new capabilities, from real-time analytics to automation to self-driving cars and trucks.
Edge computing-enhanced Internet of Vehicles (EC-IoV) enables ubiquitous data processing and content sharing among vehicles and terrestrial edge computing (TEC) infrastructures (e.g., 5G base stations and roadside units) with little or no human intervention, and it plays a key role in intelligent transportation systems. However, EC-IoV depends heavily on the connections and interactions between vehicles and TEC infrastructures, and thus breaks down in remote areas where TEC infrastructures are unavailable (e.g., deserts, isolated islands, and disaster-stricken areas). With their ubiquitous connectivity and global coverage, space-air-ground integrated networks (SAGINs) can provide seamless coverage and efficient resource management, and they represent the next frontier for edge computing. In light of this, we first review the state-of-the-art edge computing research for SAGINs in this article. After discussing several existing orbital and aerial edge computing architectures, we propose a framework of edge computing-enabled space-air-ground integrated networks (EC-SAGINs) to support various IoV services for vehicles in remote areas. The main objective of the framework is to minimize task completion time and satellite resource usage. To this end, a pre-classification scheme is presented to reduce the size of the action space, and a deep imitation learning (DIL) driven offloading and caching algorithm is proposed to achieve real-time decision making. Simulation results show the effectiveness of the proposed scheme. Finally, we discuss several technical challenges and future directions.
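The pre-classification idea above, pruning infeasible offloading targets before a policy searches the remaining action space, can be sketched as a toy. This is an illustrative assumption-laden sketch, not the authors' DIL algorithm: the target names, capacities, and the completion-time model are all invented for demonstration.

```python
# Hypothetical sketch of pre-classification for offloading decisions.
# Target names, capacities, and the completion-time model are
# illustrative assumptions, not the authors' DIL algorithm.

def pre_classify(task_size_mb, targets):
    # Keep only targets whose capacity can hold the task at all,
    # shrinking the action space the policy must search.
    return [t for t in targets if t["capacity_mb"] >= task_size_mb]

def completion_time(task_size_mb, target):
    # Toy model: uplink transmit time plus remote compute time.
    return (task_size_mb / target["bandwidth_mbps"]
            + task_size_mb / target["cpu_mb_per_s"])

def choose_offload(task_size_mb, targets):
    feasible = pre_classify(task_size_mb, targets)
    if not feasible:
        return None                      # fall back to local computing
    return min(feasible, key=lambda t: completion_time(task_size_mb, t))

targets = [
    {"name": "LEO-sat", "capacity_mb": 50, "bandwidth_mbps": 20, "cpu_mb_per_s": 100},
    {"name": "UAV", "capacity_mb": 5, "bandwidth_mbps": 50, "cpu_mb_per_s": 40},
]
best = choose_offload(10, targets)       # UAV is pruned; LEO-sat is chosen
```

In the full framework, the learned policy would replace the greedy minimum here; the pruning step alone already reduces how many candidates it must evaluate per decision.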
In frequency-division duplexing systems, the downlink channel state information (CSI) acquisition scheme incurs high training and feedback overheads. In this paper, we propose an uplink-aided downlink channel acquisition framework using deep learning to reduce these overheads. Unlike most existing works, which focus only on the channel estimation or feedback modules, to the best of our knowledge this is the first study that considers the entire downlink CSI acquisition process, including downlink pilot design, channel estimation, and feedback. First, we propose an adaptive pilot design module that exploits the correlation in magnitude between bidirectional channels in the angular domain to improve channel estimation. Next, to avoid the bit allocation problem in the feedback module, we concatenate the complex channel and embed the uplink channel magnitude into the channel reconstruction at the base station. Lastly, we combine the above two modules and compare two popular downlink channel acquisition frameworks. In the former framework, the user equipment estimates the channel and then feeds it back; in the latter, the user equipment directly feeds back the received pilot signals to the base station. Our results reveal that, with the help of the uplink, directly feeding back the pilot signals can save approximately 20% of the feedback bits, which provides a guideline for future research. J. Guo and S. Jin are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, P. R. China. C.-K. Wen is with the Institute of Communications Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan (email: email@example.com). Since the standardization of the fifth generation (5G) communication system has gradually solidified, researchers in the communication community are beginning to turn their attention to 5G evolution and 6G.
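The magnitude-reciprocity idea motivating the framework, that uplink and downlink channel magnitudes are correlated in the angular domain, can be illustrated with a small numpy sketch in which the UE feeds back only quantized downlink phases and the BS reuses the uplink magnitudes it already measured. This is not the paper's deep-learning pipeline; the tap count, the 4-bit phase quantizer, and the assumption of identical magnitudes are all illustrative.

```python
import numpy as np

# Toy illustration (NOT the paper's deep-learning pipeline): assume the
# uplink and downlink magnitudes are identical in the angular domain, so
# the UE feeds back only quantized downlink phases while the BS supplies
# the magnitudes from its own uplink measurement.
rng = np.random.default_rng(0)
n_taps = 8
mag = rng.rayleigh(size=n_taps)              # shared magnitudes (reciprocity assumption)
phase = rng.uniform(0, 2 * np.pi, n_taps)
h_dl = mag * np.exp(1j * phase)              # true downlink channel

# UE side: feed back each phase with 4 bits instead of a full complex value.
bits = 4
step = 2 * np.pi / 2 ** bits
phase_q = np.round(np.angle(h_dl) / step) * step

# BS side: rebuild the channel from uplink magnitudes plus fed-back phases.
h_hat = mag * np.exp(1j * phase_q)

nmse = np.linalg.norm(h_dl - h_hat) ** 2 / np.linalg.norm(h_dl) ** 2
```

Because the phase error per tap is at most half a quantization step, the reconstruction error stays small even though only a few bits per tap cross the feedback link, which is the intuition behind embedding the uplink magnitude at the base station.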
Further advancements, such as massive multiple-input and multiple-output (MIMO) with increased antenna counts, distributed antenna arrangements combined with new network topologies, and additional layers for spatial multiplexing, are expected. A massive MIMO architecture is integral to 5G networks, especially as a key technology for utilizing millimeter waves effectively. In massive MIMO systems, base stations (BSs) are equipped with a large number of antennas to improve spectral and energy efficiencies through relatively simple (linear) processing.
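A minimal sketch of the "relatively simple (linear) processing" mentioned above is zero-forcing detection, where the BS's channel pseudo-inverse strips inter-user interference. The dimensions and QPSK constellation are illustrative assumptions, and noise is omitted so the recovery is exact.

```python
import numpy as np

# Sketch of linear processing at a massive MIMO BS: zero-forcing
# detection with antenna count M far exceeding user count K.
# Dimensions and the QPSK constellation are illustrative assumptions;
# noise is omitted so the pseudo-inverse recovers the symbols exactly.
rng = np.random.default_rng(1)
M, K = 64, 8                                   # BS antennas, single-antenna users
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)  # QPSK
y = H @ x                                      # noiseless uplink receive vector

# Zero-forcing: the pseudo-inverse of H separates the K user streams.
x_hat = np.linalg.pinv(H) @ y
```

With M much larger than K, the columns of H are nearly orthogonal, which is why such simple linear detectors approach optimal performance in the massive MIMO regime.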
We address the packet routing problem in highly dynamic mobile ad-hoc networks (MANETs). In the network routing problem, each router chooses the next hop(s) of each packet so as to deliver it to its destination with lower delay, higher reliability, and less overhead in the network. In this paper, we present a novel framework and routing policies, DeepCQ+ routing, using multi-agent deep reinforcement learning (MADRL), designed to be robust and scalable for MANETs. Unlike other deep reinforcement learning (DRL)-based routing solutions in the literature, our approach enables training over a limited range of network parameters and conditions while achieving realistic routing policies for a much wider range of conditions, including a variable number of nodes, different data flows with varying data rates and source/destination pairs, diverse mobility levels, and other dynamic network topologies. We demonstrate the scalability, robustness, and performance enhancements obtained by DeepCQ+ routing over a recently proposed model-free, non-neural robust and reliable routing technique (i.e., CQ+ routing). DeepCQ+ routing outperforms non-DRL-based CQ+ routing in terms of overhead while maintaining the same goodput rate. Across a wide range of network sizes and mobility conditions, we observed a 10-15% reduction in normalized overhead, indicating that the DeepCQ+ routing policy delivers more packets end-to-end with less overhead. To the best of our knowledge, this is the first successful application of MADRL to the MANET routing problem that simultaneously achieves scalability and robustness under dynamic conditions while outperforming its non-neural counterpart. More importantly, we provide a framework for designing scalable and robust routing policies for any desired network performance metric of interest.
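The CQ+/DeepCQ+ family builds on the classic Q-routing idea: each node keeps an estimated delivery time to each destination via each neighbor, forwards via the lowest estimate, and updates that estimate from observed hop delays. A hypothetical tabular sketch follows; it is not the paper's MADRL policy, and all values are illustrative.

```python
# Hypothetical tabular sketch of the Q-routing idea underlying
# CQ+/DeepCQ+ (NOT the paper's MADRL policy; all numbers illustrative).
# Q[n][d] estimates the delivery time to destination d via neighbor n.

def best_next_hop(Q, dest):
    # Forward via the neighbor with the lowest estimated delivery time.
    return min(Q, key=lambda n: Q[n][dest])

def q_update(Q, n, dest, hop_delay, neighbor_estimate, lr=0.5):
    # Temporal-difference step toward the observed hop delay plus the
    # neighbor's own estimate for the remainder of the path.
    target = hop_delay + neighbor_estimate
    Q[n][dest] += lr * (target - Q[n][dest])

Q = {"A": {"D": 5.0}, "B": {"D": 9.0}}
first_choice = best_next_hop(Q, "D")          # "A" (5.0 < 9.0)
q_update(Q, "A", "D", hop_delay=2.0, neighbor_estimate=8.0)
# Q["A"]["D"] moves halfway toward 10.0, i.e. to 7.5; "A" is still best.
```

DeepCQ+ replaces such per-node tables with a shared neural policy trained across many agents, which is what lets one trained policy generalize across node counts and mobility levels.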
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond. In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks, where IoT devices contend for accessing the shared wireless channel using an adaptive p-persistent slotted Aloha protocol. To enable a capacity-optimal network, a novel formulation of random channel access with NOMA is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity. It is shown that the network optimization objective is high dimensional and mathematically intractable, yet it admits favourable mathematical properties that enable the design of efficient learning-based algorithmic solutions. To this end, two algorithms, i.e., a centralized model-based algorithm and a scalable distributed model-free algorithm, are proposed to optimally tune the transmission probabilities of IoT devices to attain the maximum capacity. The convergence of the proposed algorithms to the optimal solution is further established based on convex optimization and game-theoretic analysis. Extensive simulations demonstrate the merits of the novel formulation and the efficacy of the proposed algorithms.
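As a toy illustration of tuning transmission probabilities for capacity, consider plain slotted Aloha in a single cell, omitting the NOMA gains and multi-cell geometry of the paper: with a common probability p across devices, the geometric mean of the expected throughputs reduces to the individual throughput, and a simple grid search recovers the classic optimum p = 1/N. This is an assumption-laden sketch, not the paper's centralized or distributed algorithm.

```python
# Toy single-cell sketch (NOT the paper's algorithm): each of n devices
# transmits with probability p in a slot, and a transmission succeeds
# only if no other device transmits (plain slotted Aloha; NOMA gains
# and multi-cell interference are omitted).

def per_device_throughput(p, n):
    # Probability this device transmits while all n-1 others stay silent.
    return p * (1 - p) ** (n - 1)

n = 10
best_p = max((i / 1000 for i in range(1, 1000)),
             key=lambda p: per_device_throughput(p, n))
# Classic slotted-Aloha result: the optimum is p = 1/n.
```

The paper's setting is far richer, since NOMA lets some simultaneous transmissions succeed and the geometric-mean objective couples heterogeneous devices across cells, but the same principle applies: each device's probability is tuned against the contention it creates for everyone else.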