 cio




Offline Reinforcement Learning for Mobility Robustness Optimization

Alizadeh, Pegah, Giovanidis, Anastasios, Ramachandra, Pradeepa, Koutsoukis, Vasileios, Arouk, Osama

arXiv.org Artificial Intelligence

In this work we revisit the Mobility Robustness Optimisation (MRO) algorithm and study the possibility of learning the optimal Cell Individual Offset (CIO) tuning using offline Reinforcement Learning. Such methods use collected offline datasets to learn the optimal policy, without further exploration. We adapt and apply a sequence-based method, Decision Transformers, as well as a value-based method, Conservative Q-Learning, to learn the optimal policy for the same target reward as the vanilla rule-based MRO. The same input features related to failures, ping-pongs, and other handover issues are used. Evaluation on realistic New Radio networks with a 3500 MHz carrier frequency, a traffic mix including diverse user service types, and a specific tunable cell pair shows that offline-RL methods outperform rule-based MRO, offering up to 7% improvement. Furthermore, offline RL can be trained for diverse objective functions using the same available dataset, thus offering operational flexibility compared to rule-based methods. Self-Organizing Network (SON) functionalities have become key aspects of modern cellular networks for automation. They make use of collected data to allow the network to self-configure, self-optimize, and self-heal. Mobility Robustness Optimization (MRO) is the related SON feature whose aim is to optimize the configuration of relevant mobility parameters and allow users to experience seamless connectivity.
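To make the rule-based baseline concrete, here is a minimal sketch of how a vanilla MRO-style controller might nudge a cell pair's CIO based on the same handover statistics the abstract mentions (too-late and too-early handover failures, ping-pongs). All function names, thresholds, and step sizes are assumptions for illustration, not the authors' actual algorithm:

```python
# Hypothetical sketch of rule-based MRO-style CIO tuning (names and
# thresholds assumed; not the paper's exact rule).
def tune_cio(cio_db, too_late, too_early, ping_pongs,
             step_db=1.0, cio_min=-6.0, cio_max=6.0):
    """Adjust the Cell Individual Offset (dB) for one cell pair.

    Too-late handover failures suggest handing over earlier (raise CIO);
    too-early failures and ping-pongs suggest handing over later (lower CIO).
    """
    if too_late > too_early + ping_pongs:
        cio_db += step_db
    elif too_early + ping_pongs > too_late:
        cio_db -= step_db
    # Clamp to the configured CIO range.
    return max(cio_min, min(cio_max, cio_db))
```

An offline-RL agent such as Conservative Q-Learning would replace this fixed rule with a policy learned from logged (state, CIO action, reward) tuples, which is what allows retraining for different objectives from the same dataset.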


Former Palantir and Elon Musk Associates Are Taking Over Key Government IT Roles

WIRED

The Trump administration is replacing some of the nation's top tech officials with Silicon Valley talent tied to Elon Musk and companies associated with Peter Thiel. This could make it easier for Musk's so-called Department of Government Efficiency (DOGE) engineers to gain access to sensitive government systems, sources and experts say. Over the last few weeks, several Musk-aligned tech leaders have been installed as the chief information officers, or CIOs, of the Office of Management and Budget (OMB), the Office of Personnel Management (OPM), and the Department of Energy (DOE). CIOs manage an agency's information technology and oversee access to sensitive databases and systems, including classified ones. "Federal agency CIOs have authority over all agency asset management, which includes software used to monitor civil servant laptops and phones," a former Biden official with firsthand knowledge of a CIO's capabilities tells WIRED.


Combining Incomplete Observational and Randomized Data for Heterogeneous Treatment Effects

Yao, Dong, Tang, Caizhi, Cui, Qing, Li, Longfei

arXiv.org Artificial Intelligence

Data from observational studies (OSs) is widely available and readily obtainable yet frequently contains confounding biases. On the other hand, data derived from randomized controlled trials (RCTs) helps to reduce these biases; however, it is expensive to gather, resulting in small randomized samples. For this reason, effectively fusing observational data and randomized data to better estimate heterogeneous treatment effects (HTEs) has gained increasing attention. However, existing methods for integrating observational data with randomized data require complete observational data, meaning that both treated subjects and untreated subjects must be included in OSs. This prerequisite confines the applicability of such methods to very specific situations, given that including all subjects, whether treated or untreated, in observational studies is not consistently achievable. In our paper, we propose a resilient approach to Combine Incomplete Observational data and randomized data for HTE estimation, which we abbreviate as CIO. CIO is capable of estimating HTEs efficiently regardless of the completeness of the observational data, be it full or partial. Concretely, a confounding bias function is first derived using the pseudo-experimental group from OSs, in conjunction with the pseudo-control group from RCTs, via an effect estimation procedure. This function is subsequently utilized as a corrective residual to rectify the observed outcomes of the observational data during HTE estimation, combining the available observational data and all the randomized data. To validate our approach, we have conducted experiments on a synthetic dataset and two semi-synthetic datasets.
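The corrective-residual idea can be illustrated with a deliberately simplified sketch: fit one outcome model on the (possibly incomplete, confounded) observational data, fit the same model on unconfounded randomized data, take their difference as an estimate of the confounding bias function b(x), and subtract it from the observational outcomes before estimating the HTE. Linear models and all function names here are assumptions chosen for clarity; the paper's actual effect-estimation procedure with pseudo groups is more involved:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares linear outcome model with intercept; returns a predictor."""
    Xb = np.c_[np.ones(len(X)), X]                  # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xn: np.c_[np.ones(len(Xn)), Xn] @ w

def debias_os_outcomes(X_os, y_os, X_rct, y_rct):
    """Toy version of the CIO corrective residual (illustrative only)."""
    m_os = fit_linear(X_os, y_os)                   # confounded OS model
    m_rct = fit_linear(X_rct, y_rct)                # unconfounded RCT model
    bias = m_os(X_os) - m_rct(X_os)                 # estimated bias b(x)
    return y_os - bias                              # corrected OS outcomes
```

After this correction, the (larger) observational sample can be pooled with the randomized sample for downstream HTE estimation, which is the operational payoff the abstract describes.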


CIO In The Know – Stormy clouds ahead and the year of rightsizing

#artificialintelligence

This year will likely go down as representing many things, but probably none as significant as the rightsizing of IT. Rightsizing is not a new IT term, nor a new function. As companies are still facing uncertainty about revenue, staffing, and technology direction, rightsizing comes into focus. While rightsizing is an ongoing, normal function in highly mature IT organizations, it bubbles up more as a project in less mature organizations. Downturns are often a catalyst for rightsizing efforts within IT organizations.


3 ways CIOs should drive the future of work

#artificialintelligence

"Who owns and oversees employee experience and the future of work at your organization?" is a question I've been asking CIOs and IT leaders a lot of late. The ensuing conversation usually reveals a telling disconnect that CIOs should remedy for the health of their companies. Most IT leaders pause before responding to this question. Some go on to describe hybrid work plans, which are one aspect of the future of work but not its complete scope. To align on terminology, I share Gartner's definition, "The future of work describes changes in how work will get done over the next decade, influenced by technological, generational, and social shifts," and then ask them to reconsider this greater scope.


Build a Viable IT Architecture for AI and Analytics

#artificialintelligence

I recently visited with the CIO of a Fortune 500 company. He was touting the advances they had made in IT and corporate culture regarding the use of artificial intelligence and analytics, but he had one major concern: How do you fuse AI and analytics into the rest of your transactional line of business IT infrastructure? It hasn't been that way in his enterprise. His IT organization had started its analytics initiative with an internal Hadoop group that was responsible for processing big data internally. Meanwhile other departments in IT supported transactional data processing on an assortment of mainframes and servers in the data center. Regular IT and the Hadoop groups were somewhat siloed from each other because the parallel processing and storage management needs for big data and AI were notably different than what they were for transactional data and processing management.


90% of APAC enterprises plan to deploy AI over the next 12 months: Report

#artificialintelligence

Artificial Intelligence (AI) is expected to go mainstream in the APAC region by the end of 2023, with more than 88% of enterprises in the region already using or planning to use AI or machine learning (ML) applications over the next 12 months, according to an IDC report jointly commissioned by Lenovo and AMD. "AI applications enable CIOs to analyze large volumes of information and create real-time insights to drive customer engagement and customer experience, managing growing complexity of a rapidly expanding geo-dispersed infrastructure for higher levels of resiliency and agility, and securing their IT operations against the backdrop of growing incidence of ransomware and malware attacks," said the IDC report, which was based on a study of over 900 CIOs and IT decision makers across Asia Pacific. The top three business processes in which enterprises are expected to incorporate AI/ML are IT operations, cybersecurity, and customer support and service, the study noted. "Organizations are seeking AI/ML to streamline IT operations, since even today there are organizations that take a long time to allocate a virtual machine to a customer," said Amit Luthra, managing director for India at Lenovo ISG. Given that in the present business environment some workloads run in the public cloud and some in hybrid clouds, it becomes difficult to define operations and ascertain who presides over what data, Luthra pointed out.


How may ChatGPT AI disrupt the NHS?

#artificialintelligence

ChatGPT, the AI-driven chatbot that produces remarkable results from simple queries, has been the sensation of the tech world over the past few months since launching in November. And unless you've been living in a cave without wifi, you are likely to have read a flurry of articles on what impact it may have. Some people believe it marks a technology inflection point and points to the redefining of many knowledge jobs, beginning with lawyers, journalists, marketers, teachers, lecturers, software coders, and possibly even doctors. Others have speculated that it points to a post-Google world, leapfrogging the familiar search paradigm of the past 20 years, or that it will transform personal, business, and productivity tools so that emails, spreadsheets, reports, and even software may all be generated by AI tools. GPT-3, or Generative Pre-trained Transformer 3, from San Francisco start-up OpenAI, is a type of artificial intelligence with an uncanny ability to generate remarkably human-like text from a short query or input text.