Well File:

Information Technology


IBM focuses on shortage of AI talent in IT and security

#artificialintelligence

IBM has been warning about the cybersecurity skills gap for several years now and has recently released a report on the lack of artificial intelligence (AI) skills across Europe. The company said in a Friday email to SC Media that cybersecurity has been experiencing a significant workforce and skills shortage globally, and AI can offer a crucial technology path for helping solve it. "Given that AI skillsets are not yet widespread, embedding AI into existing toolsets that security teams are already using in their daily processes will be key to overcoming this barrier," IBM stated in the email. "AI has great potential to solve some of the biggest challenges facing security teams -- from analyzing the massive amounts of security data that exists to helping resource-strapped security teams prioritize threats that pose the greatest risk, or even recommending and automating parts of the response process." Oliver Tavakoli, CTO at Vectra, said that the potential for machine learning (ML) and AI to materially help with a large set of problems across many industries has created an acute imbalance between the supply of and demand for AI talent.


AI's role is poised to change monumentally in 2022 and beyond – TechCrunch

#artificialintelligence

The latest developments in technology make it clear that we are on the precipice of a monumental shift in how artificial intelligence (AI) is employed in our lives and businesses. First, let me address the misconception that AI is synonymous with algorithms and automation. This misconception exists because of marketing. Think about it: When was the last time you previewed a new SaaS or tech product that wasn't "fueled by" AI? This term is becoming something like "all-natural" on food packaging: ever-present and practically meaningless.


Advanced Data Science with IBM

#artificialintelligence

Apache Spark is the de-facto standard for large-scale data processing. This is the first course of a series of courses towards the IBM Advanced Data Science Specialization. We strongly believe that it is crucial for success to start by learning a scalable data science platform, since memory and CPU constraints are the most limiting factors when it comes to building advanced machine learning models. In this course we teach you the fundamentals of Apache Spark using Python and PySpark. We'll introduce Apache Spark in the first two weeks, and in the last two weeks learn how to apply it to basic exploratory and data pre-processing tasks.


How can India make its technology policy powerful, innovative, and secure?

#artificialintelligence

Can we ever rein in the Big Tech firms to foster indigenous innovation, stimulate balanced growth, and protect national sovereignty? Can we have a balanced set of rules and a clear framework to safeguard the larger public interest? Can we check the weaponisation of the internet with a balanced cybersecurity and secure data governance framework, to make Google (Alphabet), Apple, Facebook (Meta), Amazon, and Microsoft, among others, more responsible and resilient? Look around: Big Tech firms run most of the digital services that are integral and ubiquitous to our lives. Our minds, economy, national security, democracy, and progress are invisibly controlled by a few technology firms.


Data Centers Need to Go Green - And AI Can Help

#artificialintelligence

Climate change is here, and it's set to get much worse, experts say, and as a result many industries have pledged to reduce their carbon footprints in the coming decades. The recent jump in energy prices, due mainly to the war in Ukraine, also emphasizes the need to develop cheap, renewable forms of energy from freely available sources like the sun and wind, as opposed to relying on fossil fuels controlled by nation-states. But going green is easier for some industries than for others, and one area where it is likely to be a significant challenge is data centers, which require huge amounts of electricity to cool off, in some cases, the millions of computers deployed. Growing consumer demand to reduce carbon output, along with rules that regulators are likely to impose in the near future, requires companies that run data centers to take immediate steps to go green. And artificial intelligence, machine learning, neural networks, and other related technologies can help enterprises of all kinds achieve that goal without having to spend huge sums to accomplish it.


Skills and security continue to cloud the promise of cloud-native platforms

ZDNet

Joe McKendrick is an author and independent analyst who tracks the impact of information technology on management and markets. As an independent analyst, he has authored numerous research reports in partnership with Forbes Insights, IDC, and Unisphere Research, a division of Information Today, Inc. The KubeCon and CloudNativeCon events just wrapped up in Europe, and one thing has become clear: the opportunities are outpacing organizations' ability to leverage them. Keith Townsend, who attended the conference, observed in a tweet that "talent and education is the number one challenge. I currently don't see a workable way to migrate thousands of apps without loads of resources." Information technology gets more complex every day, and there is no shortage of demand for monitoring and automation capabilities to build and manage systems. Cloud-native platforms are seen as remedies not only for improved maintenance, monitoring, and automation, but also for modernizing ...


Traditional vs Deep Learning Algorithms in the Telecom Industry -- Cloud Architecture and Algorithm Categorization

#artificialintelligence

The unprecedented growth of mobile devices, applications, and services has placed enormous demands on mobile and wireless networking infrastructure. Rapid research and development of 5G systems has found ways to support growing mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Moreover, inference over heterogeneous mobile data from distributed devices faces challenges due to computational and battery power limitations. ML models deployed at edge servers are therefore constrained to be lightweight, trading off model complexity against accuracy, and techniques such as model compression, pruning, and quantization are widely used.
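To make the quantization technique mentioned above concrete, here is a minimal sketch (our own illustration, not code from the article) of symmetric post-training int8 quantization of a weight matrix in NumPy. This is the kind of transformation that shrinks an edge model's memory footprint at a small accuracy cost:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of float weights to int8."""
    scale = np.abs(weights).max() / 127.0  # map largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale) -> np.ndarray:
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

# A toy weight matrix stands in for one layer of an edge model.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; rounding error stays
# below one quantization step.
print(w.nbytes // q.nbytes)               # 4
print(float(np.max(np.abs(w - w_hat))) < float(scale))  # True
```

Real deployments typically use per-channel scales and calibration data, but the core idea of trading numeric precision for size and speed is the same.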


Introduction to Artificial Intelligence for Beginners - Analytics Vidhya

#artificialintelligence

We have come a long way in the fields of machine learning and deep learning, and interest in AI (artificial intelligence) is now greater than ever; in this article we introduce you to AI. The short and precise answer to "What is artificial intelligence?" depends on the person you are explaining it to. A person with little understanding of the technology will relate it to "robots" and say that AI is a Terminator-like object that can react and think on its own. Ask the same question of an AI expert, and they will say that "it is a set of patterns and algorithms that can generate solutions to everything without being explicitly instructed to do that work".
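The expert's framing above, a system that derives its behavior from data rather than explicit instructions, can be shown with a minimal sketch (our own illustration, not from the article): instead of hard-coding the rule y = 2x + 1, a least-squares fit recovers it from examples alone.

```python
import numpy as np

# Example data generated by a rule the fitting code never sees.
x = np.arange(10, dtype=float)
y = 2 * x + 1

# Least-squares line fit "learns" the rule from the examples.
slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 3), round(intercept, 3))  # 2.0 1.0
```

The program was never told "multiply by two and add one"; it inferred those parameters from the data, which is the essence of learning without explicit instruction.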


SoundWatch

Communications of the ACM

We present SoundWatch, a smartwatch-based deep learning application to sense, classify, and provide feedback about sounds occurring in the environment.


Responsible Data Management

Communications of the ACM

Incorporating ethics and legal compliance into data-driven algorithmic systems has been attracting significant attention from the computing research community, most notably under the umbrella of fair [8] and interpretable [16] machine learning. While important, much of this work has been limited in scope to the "last mile" of data analysis and has disregarded both the system's design, development, and use life cycle (What are we automating and why? Is the system working as intended? Are there any unforeseen consequences post-deployment?) and the data life cycle (Where did the data come from? How long is it valid and appropriate?). In this article, we argue two points. First, the decisions we make during data collection and preparation profoundly impact the robustness, fairness, and interpretability of the systems we build. Second, our responsibility for the operation of these systems does not stop when they are deployed. To make our discussion concrete, consider the use of predictive analytics in hiring. Automated hiring systems are seeing ever broader use and are as varied as the hiring practices themselves, ranging from resume screeners that claim to identify promising applicants, to video and voice analysis tools that facilitate the interview process, and game-based assessments that promise to surface personality traits indicative of future success. Bogen and Rieke [5] describe the hiring process from the employer's point of view as a series of decisions that forms a funnel, with stages corresponding to sourcing, screening, interviewing, and selection. The hiring funnel is an example of an automated decision system -- a data-driven, algorithm-assisted process that culminates in job offers to some candidates and rejections to others. The popularity of automated hiring systems is due in no small part to our collective quest for efficiency.