Scientific Discovery


How Risk Aversion Is Killing the Spirit of Scientific Discovery

Mother Jones

The Allen Telescope Array, used by Northern California's SETI Institute in its often difficult-to-fund search for extraterrestrial life. Science is built on the boldly curious exploration of the natural world. Astounding leaps of imagination and insight--coupled with a laser-like focus on empiricism and experimentation--have brought forth countless wonders of insight into the workings of the universe we find ourselves in. But the culture that celebrates, supports, and rewards the audacious mental daring that is the hallmark of science is at risk of collapsing under a mountain of cautious, risk-averse, incurious advancement that seeks merely to win grants and peer approval. I've encountered this problem myself.


Data Discovery for ML Engineers / DataScienceCentral.com

#artificialintelligence

Real-world production ML systems consist of two main components: data and code. Data is clearly the leader and is rapidly taking center stage. Data defines the quality of almost any ML-based product, more so than code or any other aspect. In Feature Store as a Foundation for Machine Learning, we discussed how feature stores are an integral part of the machine learning workflow. They improve the ROI of data engineering, reduce cost per model, and accelerate model-to-market by simplifying feature definition and extraction.
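As a rough sketch of the idea (not the API of any particular feature-store product; the FeatureStore class and its methods below are hypothetical), features can be defined once in a central registry and reused everywhere they are needed:

```python
# A minimal, hypothetical feature-store sketch: feature definitions are
# registered once and the same definitions serve both training pipelines
# and online inference. Real systems have far richer APIs.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class FeatureStore:
    """Registry mapping feature names to extraction functions."""
    _features: Dict[str, Callable[[dict], float]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[dict], float]) -> None:
        self._features[name] = fn

    def extract(self, raw_record: dict) -> Dict[str, float]:
        # One set of definitions for offline and online use is what
        # reduces per-model engineering cost.
        return {name: fn(raw_record) for name, fn in self._features.items()}

store = FeatureStore()
store.register("total_spend", lambda r: sum(r["purchases"]))
store.register("n_purchases", lambda r: float(len(r["purchases"])))

print(store.extract({"purchases": [9.99, 4.50, 20.00]}))
# {'total_spend': 34.49, 'n_purchases': 3.0}
```

Because training and serving share one set of definitions, offline and online feature logic cannot drift apart, which is where the claimed cost and time savings come from.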


Understanding Type-I and Type-II errors in hypothesis testing

#artificialintelligence

We can all relate to wondering whether route A will take less time than route B, whether the average return on investment X is higher than on investment Y, or whether movie ABC is better than movie XYZ. In all these cases, we are testing hypotheses we have in our minds. Setting up hypotheses, proving or disproving them using data, and helping businesses make decisions is bread and butter for Data Scientists. Data Scientists often rely on probabilities to understand the likelihood of observing data by chance and use that to draw conclusions about a hypothesis. Hence, there is always a risk of error when drawing conclusions about our assumed hypothesis. This post provides an intuitive yet detailed explanation of the Type-I and Type-II errors that arise during statistical hypothesis testing.
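To make the two error types concrete, here is a small simulation, assuming a one-sided z-test on Gaussian data with known variance (the test, sample size, and effect size are illustrative choices, not from the post):

```python
# Estimate Type-I and Type-II error rates empirically for a one-sided
# z-test of H0: mu = 0, with known sigma = 1 and alpha = 0.05.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, n_trials = 30, 0.05, 10_000
crit = 1.6449  # one-sided z critical value for alpha = 0.05

def rejects(true_mean: float) -> np.ndarray:
    """Run n_trials tests on samples drawn with the given true mean."""
    samples = rng.normal(true_mean, 1.0, size=(n_trials, n))
    z = samples.mean(axis=1) * np.sqrt(n)  # z statistic under sigma = 1
    return z > crit

type_1 = rejects(0.0).mean()      # H0 true: rejecting it is a Type-I error
type_2 = 1 - rejects(0.5).mean()  # H0 false: failing to reject is Type-II
print(f"Type-I rate ~ {type_1:.3f} (target {alpha}), Type-II rate ~ {type_2:.3f}")
```

The Type-I rate lands near the chosen alpha by construction, while the Type-II rate depends on how far the true mean sits from the null value and on the sample size.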


Hypothesis Testing

#artificialintelligence

In statistics, hypothesis testing is a form of inference that uses data to draw conclusions about a population. First, we make an assumption about the population, known as the Null Hypothesis and denoted by H₀. Then we define the Alternate Hypothesis, which is the opposite of what is stated in the Null Hypothesis, denoted by Hₐ. After defining both the Null Hypothesis and Alternate Hypothesis, we perform what is known as a hypothesis test to either reject or fail to reject the Null Hypothesis.
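As a minimal worked example of this procedure, the snippet below runs a one-sample t-test with SciPy on synthetic data; the data, the H₀ value, and the significance level are illustrative:

```python
# One-sample t-test: H0: population mean = 5.0, Ha: population mean != 5.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=5.3, scale=1.0, size=25)  # synthetic observations

t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0 at the 5% level.")
else:
    print("Fail to reject H0 at the 5% level.")
```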


Automating Data Science

Communications of the ACM

Data science covers the full spectrum of deriving insight from data, from initial data gathering and interpretation, via processing and engineering of data, and exploration and modeling, to eventually producing novel insights and decision support systems. Data science can be viewed as overlapping with, or broader in scope than, other data-analytic methodological disciplines such as statistics, machine learning, databases, or visualization [10].

To illustrate the breadth of data science, consider, for example, the problem of recommending items (movies, books, or other products) to customers. While the core of these applications can consist of algorithmic techniques such as matrix factorization, a deployed system will involve a much wider range of technological and human considerations. These range from scalable back-end transaction systems that retrieve customer and product data in real time, through experimental design for evaluating system changes and causal analysis for understanding the effect of interventions, to the human factors and psychology that underlie how customers react to visual information displays and make decisions.

As another example, in areas such as astronomy, particle physics, and climate science, there is a rich tradition of building computational pipelines to support data-driven discovery and hypothesis testing. For instance, geoscientists use monthly global land-cover maps based on satellite imagery at sub-kilometer resolutions to better understand how the Earth's surface is changing over time [50]. These maps are interactive and browsable, and they are the result of a complex data-processing pipeline in which terabytes to petabytes of raw sensor and image data are transformed into databases of automatically detected and annotated objects and information. This type of pipeline involves many steps in which human decisions and insight are critical, such as instrument calibration, removal of outliers, and classification of pixels.

The breadth and complexity of these and many other data science scenarios mean the modern data scientist requires broad knowledge and experience across a multitude of topics. Together with an increasing demand for data analysis skills, this has led to a shortage of trained data scientists with appropriate background and experience, and significant market competition for limited expertise. Considering this bottleneck, it is not surprising there is increasing interest in automating parts, if not all, of the data science process.
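The recommendation example lends itself to a small illustration. Below is a minimal sketch of the matrix-factorization core mentioned above, using alternating gradient steps on a toy ratings matrix; the dimensions and hyperparameters are illustrative, and this is not any deployed system's code:

```python
# Factor a small ratings matrix R into user factors U and item factors V
# so that U @ V.T approximates R on the observed (nonzero) entries.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0], [4, 0, 1], [1, 1, 5]], dtype=float)  # 0 = unobserved
mask = R > 0
k, lr, reg = 2, 0.05, 0.01                        # latent dim, step size, L2 penalty
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(2000):
    E = mask * (R - U @ V.T)        # reconstruction error on observed entries only
    U += lr * (E @ V - reg * U)     # gradient step on user factors
    V += lr * (E.T @ U - reg * V)   # gradient step on item factors

print(np.round(U @ V.T, 2))  # unobserved cells now hold predicted ratings
```

The point of the excerpt stands even in this toy form: the factorization itself is a few lines, while everything around it in a production recommender (serving, experimentation, causal analysis, UI) dominates the engineering effort.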


Science and innovation rely on successful collaboration

#artificialintelligence

It may sound obvious, perhaps even clichéd, but this mantra is something that must be remembered in the ongoing political negotiations over Horizon Europe, which could see Switzerland and the UK excluded from EU research projects. We need more, not fewer, researchers collaborating to solve today's and tomorrow's challenges. Horizon Europe projects will benefit, as they have in the past, from working closely with Swiss and British researchers, who have long played key roles. This is why ETH Zurich, which collaborates with IBM Research on nanotechnology, is leading the Stick to Science campaign, which calls on all three parties – Switzerland, the UK and the EU – to try to resolve the current stalemate and put Swiss and British association agreements in place.


New Paradigm of User Identity

#artificialintelligence

Our AI- and Deep Learning-enabled Multi-modal Biometrics platform guarantees Zero Identity Fraud and establishes trust across the User Lifecycle, while ensuring User Privacy and Military-Grade Data Security.


A Data-Driven Approach to Robust Hypothesis Testing Using Sinkhorn Uncertainty Sets

arXiv.org Machine Learning

Hypothesis testing for small-sample scenarios is a practically important problem. In this paper, we investigate the robust hypothesis testing problem in a data-driven manner, seeking the worst-case detector over distributional uncertainty sets centered, in Sinkhorn distance, around the empirical distribution of the samples. Compared with the Wasserstein robust test, the corresponding least favorable distributions are supported beyond the training samples, which yields a more flexible detector. Various numerical experiments on both synthetic and real datasets validate the competitive performance of our proposed method. As a fundamental problem in statistics, hypothesis testing plays a key role in scientific discovery areas such as anomaly detection and model criticism. The goal of hypothesis testing is to determine which of the given hypotheses is true, within a certain error probability level.
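For context, the Sinkhorn distance used to build the uncertainty sets is the entropically regularized optimal-transport cost, computable by Sinkhorn-Knopp matrix scaling. The sketch below computes it for two toy histograms; it is not the paper's robust detector, and the regularization eps and iteration count are illustrative choices:

```python
# Entropic optimal transport between two discrete distributions via
# Sinkhorn-Knopp alternating marginal scaling.
import numpy as np

def sinkhorn_distance(a, b, C, eps=0.1, n_iter=500):
    """Entropic OT cost between histograms a, b with ground-cost matrix C."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):           # alternate until marginals match
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan with marginals a, b
    return float((P * C).sum())

# Toy example: two 3-bin histograms on the points 0, 1, 2.
x = np.arange(3.0)
C = (x[:, None] - x[None, :]) ** 2    # squared-distance ground cost
a = np.array([0.8, 0.1, 0.1])
b = np.array([0.1, 0.1, 0.8])
print(sinkhorn_distance(a, b, C))     # near the unregularized OT cost of 2.8
```

As eps shrinks, the entropic cost approaches the exact optimal-transport cost, at the price of slower, less stable iterations; the smoothing is what lets least favorable distributions place mass off the training samples.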


The arrogance of Anthony Fauci

FOX News

The Kentucky senator says that a mass return to remote learning is not the proper mitigation strategy for schools across the country. On my very first day in medical school, the dean gave a lecture on serendipity and scientific discovery, highlighting penicillin as the most famous example of a scientist stumbling upon a discovery he hadn't originally intended to make. Serendipity requires an environment with the freedom to think outside the box and to innovate without excessive central control. When science is made rigidly uniform by placing power in 'omniscient men,' the fortuitous finds of individual scientists may be left undiscovered. Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases and chief medical adviser to the president, listens during a meeting with the White House COVID-19 Response Team on the latest developments related to the Omicron variant in the South Court Auditorium in the Eisenhower Executive Office Building on the White House Campus in Washington, Tuesday, Jan. 4, 2022.


10 Interesting Facts on Open Science: Scientific Revolution.

#artificialintelligence

The growth in the number and scale of universities throughout the world, together with the expansion of their research endeavors as a way of enhancing their reputations and attracting both students and sponsors, is driving demand in the lucrative academic publishing sector. Because publishing metrics have become the key indicator of academic achievement, they have also become the primary incentive for career progress. The maxim "publish or perish" has become the norm in many fields. As a result, the rate of scientific publishing has increased exponentially in recent decades, with output approaching 2.5 million papers per year by 2017. The proliferation of so-called "predatory" journals, which provide speedy publication without peer review or meaningful editorial control, is another consequence of this increased demand for publication channels. To counter this climate, Open Science has emerged.