

Correlation for feature selection

#artificialintelligence

Correlation measures the degree to which two phenomena are related to one another. For example, there is a correlation between summer temperatures and ice cream sales. When one goes up, so does the other. Two variables are positively correlated if a change in one is associated with a change in the other in the same direction, such as the relationship between height and weight. Taller people weigh more (on average); shorter people weigh less. A correlation is negative if a positive change in one variable is associated with a negative change in the other, such as the relationship between exercise and weight.
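To make this concrete for feature selection, here is a minimal sketch (with made-up data and column names) of computing pairwise Pearson correlations with pandas; features strongly correlated with the target are candidates to keep, while features strongly correlated with each other are candidates to drop.

    import pandas as pd

    # Made-up illustrative data: temperature vs. ice cream sales (positive
    # correlation), exercise vs. weight (negative correlation)
    df = pd.DataFrame({
        "temperature":     [20, 24, 28, 32, 36],
        "ice_cream_sales": [110, 150, 210, 260, 320],
        "exercise_hours":  [6, 5, 4, 2, 1],
        "weight":          [62, 68, 74, 81, 90],
    })

    # Pairwise Pearson correlations in [-1, 1]; values near +1 or -1
    # indicate strong positive or negative linear relationships
    print(df.corr(method="pearson"))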


Why high accuracy in classification is not always correct

#artificialintelligence

Classification accuracy is a statistic that describes a classification model's performance by dividing the number of correct predictions by the total number of predictions. It is simple to compute and comprehend, making it the most commonly used statistic for assessing classifier models. But accuracy is not the best metric for evaluating a model in every scenario. In this article, we will discuss the reasons not to rely on accuracy alone as a performance measure. Following are the topics to be covered.
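As a quick illustration of the pitfall (a hypothetical example, not taken from the article): on a heavily imbalanced dataset, a model that always predicts the majority class scores high accuracy while being useless.

    from sklearn.metrics import accuracy_score, f1_score

    # Hypothetical imbalanced test set: 95 negatives, 5 positives
    y_true = [0] * 95 + [1] * 5
    # A degenerate "model" that always predicts the majority class
    y_pred = [0] * 100

    print(accuracy_score(y_true, y_pred))             # 0.95 -- looks excellent
    print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- finds no positives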


The journey of Modernizing TorchVision – Memoirs of a TorchVision developer – 3

#artificialintelligence

It's been a while since I last posted a new entry in the TorchVision memoirs series. Though I've previously shared news on the official PyTorch blog and on Twitter, I thought it would be a good idea to talk more about what happened in the last release of TorchVision (v0.12) and what's coming in the next one (v0.13). My goal is to go beyond providing an overview of new features and instead offer insights into where we want to take the project in the following months. TorchVision v0.12 was a sizable release with a dual focus: a) update our deprecation and model contribution policies to improve transparency and attract more community contributors, and b) double down on our modernization efforts by adding popular new model architectures, datasets, and ML techniques. Key to a successful open-source project is maintaining a healthy, active community that contributes to it and drives it forward.


[NLP] POS Analysis

#artificialintelligence

To allow a machine to understand human language, the components of each sentence must be categorized. One of the basic classification systems is POS (part-of-speech) tagging, natively integrated into the nltk library. These tags give each component of the sentence a grammatical meaning. Let's do a test with a short script, shown below. To run it you need the nltk pip package and the NLTK data downloader.
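A minimal sketch of what such a script might look like (the example sentence is made up; the nltk calls are the standard tokenize-and-tag pipeline):

    import nltk

    # One-time downloads: tokenizer models and the POS tagger
    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    sentence = "The quick brown fox jumps over the lazy dog"
    tokens = nltk.word_tokenize(sentence)

    # Each token is paired with a Penn Treebank tag, e.g. DT, JJ, NN, VBZ
    print(nltk.pos_tag(tokens))
    # [('The', 'DT'), ('quick', 'JJ'), ('brown', 'JJ'), ('fox', 'NN'), ...]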


SoundWatch

Communications of the ACM

We present SoundWatch, a smartwatch-based deep learning application to sense, classify, and provide feedback about sounds occurring in the environment.


Challenges, Experiments, and Computational Solutions in Peer Review

Communications of the ACM

While researchers are trained to do research, there is little training for peer review. Several initiatives and experiments have looked to address this challenge. Recently, the ICML 2020 conference adopted a method to select and then mentor junior reviewers, who would not have been asked to review otherwise, with a motivation of expanding the reviewer pool to address the large volume of submissions.43 An analysis of their reviews revealed that the junior reviewers were more engaged through various stages of the process as compared to conventional reviewers. Moreover, the conference asked meta reviewers to rate all reviews, and 30% of reviews written by junior reviewers received the highest rating from meta reviewers, in contrast to 14% for the main pool. Training reviewers at the beginning of their careers is a good start but may not be enough. There is some evidence8 that the quality of an individual's reviews falls over time, at a slow but steady rate, possibly because of increasing time constraints or in reaction to poor-quality reviews they themselves receive.


Responsible Data Management

Communications of the ACM

Incorporating ethics and legal compliance into data-driven algorithmic systems has been attracting significant attention from the computing research community, most notably under the umbrella of fair8 and interpretable16 machine learning. While important, much of this work has been limited in scope to the "last mile" of data analysis and has disregarded both the system's design, development, and use life cycle (What are we automating and why? Is the system working as intended? Are there any unforeseen consequences post-deployment?) and the data life cycle (Where did the data come from? How long is it valid and appropriate?). In this article, we argue two points. First, the decisions we make during data collection and preparation profoundly impact the robustness, fairness, and interpretability of the systems we build. Second, our responsibility for the operation of these systems does not stop when they are deployed. To make our discussion concrete, consider the use of predictive analytics in hiring. Automated hiring systems are seeing ever broader use and are as varied as the hiring practices themselves, ranging from resume screeners that claim to identify promising applicants, to video and voice analysis tools that facilitate the interview process, to game-based assessments that promise to surface personality traits indicative of future success. Bogen and Rieke5 describe the hiring process from the employer's point of view as a series of decisions that forms a funnel, with stages corresponding to sourcing, screening, interviewing, and selection. The hiring funnel is an example of an automated decision system--a data-driven, algorithm-assisted process that culminates in job offers to some candidates and rejections to others. The popularity of automated hiring systems is due in no small part to our collective quest for efficiency.


Top 10 AI graduate degree programs

#artificialintelligence

Artificial Intelligence (AI) is a fast-growing and evolving field, and data scientists with AI skills are in high demand. The field requires broad training involving principles of computer science, cognitive psychology, and engineering. If you want to grow your data scientist career and capitalize on the demand for the role, you might consider getting a graduate degree in AI. U.S. News & World Report ranks the best AI graduate programs at computer science schools based on surveys sent to academic officials in fall 2021 and early 2022. Here are the top 10 AI graduate programs in the US.


Telecom Churn Prediction using Machine Learning, Python, and GridDB

#artificialintelligence

Customer churn is a key business concept that describes the number of customers who stop doing business with a specific company. The churn rate is then defined as the rate at which a company loses customers in a given time frame. For example, a churn rate of 15% per year means that a company loses 15% of its total customer base every year. Customer churn takes on special importance in the telecommunication sector, given the increasing competition and the appearance of new telecommunication companies. For this reason, the telecom industry expects high churn rates every year.
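A trivial worked example of the churn-rate definition above, using hypothetical numbers that match the 15% figure:

    # Hypothetical numbers illustrating the churn-rate definition
    customers_at_start = 2000
    customers_lost = 300        # customers who left during the year

    churn_rate = customers_lost / customers_at_start
    print(f"Annual churn rate: {churn_rate:.0%}")   # 15%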


Introducing Bitfount

#artificialintelligence

Bitfount is a federated analytics and machine learning platform that makes extracting value from sensitive data easy, fast, private, and secure. For data custodians and data scientists or researchers partnering to achieve better insights from data, Bitfount combines the best of data collaboration design with advanced privacy-preserving capabilities, while playing nicely with all of your existing tools and, crucially, not requiring the transfer of any raw data. Data collaboration today is a painful, messy business. As anyone who has attempted to set up a collaboration around sensitive data will know, the current process is generally very painful and slow. Valuable datasets languish in silos as a result of regulatory or commercial sensitivity concerns, incompatible data management solutions, lengthy contractual processes, or just plain lack of understanding of what data is available for which purposes within an organisation.