AI in Agriculture. Group: TY-56.

#artificialintelligence

Agriculture is an important economic sector in every country. The global population is growing rapidly, and with it the demand for food. Traditional farming methods are no longer sufficient to meet that demand, so new automation methods are being introduced to close the gap while also creating numerous job opportunities in the sector. Artificial intelligence has emerged as one of the most important technologies in virtually every industry, including education, banking, robotics, and agriculture. It plays a critical role in the agriculture sector and is transforming the industry: AI helps protect agriculture from a variety of threats, including climate change, population growth, labour shortages, and food-safety concerns.


AI Revolution - Transformers and Large Language Models (LLMs)

#artificialintelligence

Part of the challenge of "AI" is that we keep raising the bar on what it means for something to be a machine intelligence. Early machine learning models have been quite successful in terms of real-world impact. Large-scale applications of machine learning today include Google Search and ads targeting, Siri/Alexa, smart routing in mapping applications, self-piloting drones, defense tech like Anduril, and many other areas. Some areas, like self-driving cars, have shown progress but seem to be continuously "a few years" away every few years. Just as all the ideas for smartphones existed in the 1990s but didn't take off until the iPhone launched in 2007, self-driving cars are an inevitable part of the future. In parallel, the machine learning (ML) / artificial intelligence (AI) world has been rocked in the last decade by a series of advancements in voice recognition (hence Alexa) and image recognition (iPhone unlock and the, erm, non-creepy passport controls at airports). Sequential inventions and discoveries include CNNs, RNNs, various forms of deep learning, GANs, and other innovations.


Why Everyday AI Can Outshine Moonshots - WSJ

#artificialintelligence

Nearly 10 years later, Dataiku is helping to operationalize AI across a range of business use cases, from fraud detection and customer churn prevention to predictive maintenance and supply chain optimization. If the final destination is weaving AI capabilities so thoroughly into the fabric of day-to-day work that people forget it's there, enterprises are typically somewhere in the middle of the journey, Douetteau says. To get there, they should look inward. In this "AI From the Front Lines" interview, Douetteau and Romain Fouache, Dataiku's chief revenue officer, speak with Beena Ammanath, executive director of the Deloitte AI Institute, about their vision of AI in the enterprise, the importance of building systemization and trust for AI, and how execution will be more important than innovation in democratizing the technologies. "It's not a technology issue--we can build platforms able to continually process and enhance data and build new AI on top to optimize business processes," Douetteau says.


Computing is Productivity. UtilityNet Changes Computing from Technology To Incentives.

#artificialintelligence

"Computing is the first productive force of the digital economy." On July 29, 2022, at the SenseTime Science and Technology Sub-forum of the First Computing Conference in China, Gao Shanshan, director of Shandong Sino US Digital Media International Cooperation Research Center said that, in the era of AI, computing infrastructure keeps changing such industries as finance, medicine and data center. Therefore, AI computing has become the main increment of digital economy development in various countries, and also the foundation of the era of digital economy. Computing represents a new type of productivity. Who owns the computing of the future digital economy industry development will have the ultimate power to lead the development of the digital economy in digital economy.


Phys. Rev. Research 4, L042038 (2022) - Accelerated motional cooling with deep reinforcement learning

#artificialintelligence

Achieving fast cooling of motional modes is a prerequisite for leveraging such bosonic quanta for high-speed quantum information processing. In this Letter, we address the aspect of reducing the time limit for cooling, below that constrained by the conventional sideband cooling techniques, and propose a scheme to apply deep reinforcement learning (DRL) to achieve this. In particular, we have numerically demonstrated how the scheme can be used effectively to accelerate the dynamic motional cooling of a macroscopic magnonic sphere, and how it can be uniformly extended to more complex systems, for example, a tripartite opto-magno-mechanical system, to obtain cooling of the motional mode below the time bound of coherent cooling. While conventional sideband cooling methods do not work beyond the well-known rotating wave approximation (RWA) regimes, our proposed DRL scheme can be applied uniformly to regimes operating within and beyond the RWA, and thus, this offers a new and complete toolkit for rapid control and generation of macroscopic quantum states for application in quantum technologies. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license.
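
The shape of the control loop is easy to illustrate, even if the physics is not. Below is a minimal sketch, assuming a toy classical damped oscillator as a stand-in for the motional mode and a simple REINFORCE policy-gradient agent; every model, parameter, and reward choice here is a hypothetical illustration, not the Letter's quantum opto-magno-mechanical setup.

```python
# Toy sketch only: REINFORCE learns a piecewise-constant drive that cools
# a classical damped oscillator (a stand-in for a motional mode). All
# names and parameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_SEGMENTS = 10                         # control pulse split into segments
AMPLITUDES = np.linspace(0.0, 1.0, 5)   # discrete drive choices per segment

def episode_energy(actions):
    """Integrate the toy mode under the chosen drive; return final energy."""
    x, p = 1.0, 0.0                     # initially displaced state
    dt, omega, gamma = 0.1, 1.0, 0.05   # toy frequency, intrinsic damping
    for a in actions:
        drive = AMPLITUDES[a]           # extra damping supplied by the drive
        for _ in range(20):             # Euler steps within one segment
            dx = omega * p
            dp = -omega * x - (gamma + drive) * p
            x, p = x + dt * dx, p + dt * dp
    return 0.5 * (x ** 2 + p ** 2)      # residual mode energy

logits = np.zeros((N_SEGMENTS, len(AMPLITUDES)))  # independent softmax policy
baseline, lr = 0.0, 0.5

for step in range(300):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    actions = [rng.choice(len(AMPLITUDES), p=probs[s]) for s in range(N_SEGMENTS)]
    reward = -episode_energy(actions)   # reward = negative residual energy
    advantage = reward - baseline       # running baseline reduces variance
    baseline = 0.9 * baseline + 0.1 * reward
    for s, a in enumerate(actions):     # REINFORCE update per segment
        grad = -probs[s]
        grad[a] += 1.0
        logits[s] += lr * advantage * grad

print("learned drive profile:", AMPLITUDES[logits.argmax(axis=1)])
```

Only the loop carries over: simulate an episode under a candidate pulse sequence, score it by the residual mode energy, and nudge the policy toward lower-energy pulses; the real work replaces the toy integrator with the system's quantum dynamics.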


Machine learning and statistic analysis to predict drug treatment outcome in pediatric epilepsy patients with tuberous sclerosis complex - ScienceDirect

#artificialintelligence

We aimed to investigate the association between multi-modality features and epilepsy drug treatment outcomes and to propose a machine learning model that predicts those outcomes from multi-modality features. This retrospective study consecutively enrolled 103 children with epilepsy and the rare disorder tuberous sclerosis complex (TSC). Multi-modality data were used to characterize risk factors for epilepsy drug treatment outcome in TSC, including clinical data, TSC1 and TSC2 gene test results, magnetic resonance imaging (MRI), computerized tomography (CT), and electroencephalogram (EEG) data. Three common feature selection methods and six common machine learning models were used to find the best combination of feature selection and machine learning model for predicting epilepsy drug treatment outcomes from multi-modality features in clinical TSC applications. Analysis-of-variance-based selection of 35 features, combined with a multilayer perceptron (MLP) model, achieved the best area-under-the-curve (AUC) score of 0.812 (±0.005).
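
As a rough illustration of the winning pipeline (ANOVA-based selection of 35 features feeding an MLP, scored by AUC), here is a minimal scikit-learn sketch. The 35-feature count mirrors the abstract, but the data are synthetic stand-ins and the MLP size is an assumption, not the paper's configuration.

```python
# Sketch of an ANOVA-select + MLP pipeline scored by ROC AUC, on synthetic
# stand-in data; the real study used multi-modality clinical, genetic,
# imaging, and EEG features from 103 patients.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for 103 patients with mixed-modality features (synthetic data).
X, y = make_classification(n_samples=103, n_features=120, n_informative=20,
                           random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=35),        # ANOVA-based feature selection
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)

auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {auc.mean():.3f} (+/- {auc.std():.3f})")
```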


Supervised ensemble classification of Kepler variable stars

#artificialintelligence

Variable star analysis and classification is an important task in the understanding of stellar features and processes. While historically classifications have been done manually by highly skilled experts, the recent and rapid expansion in the quantity and quality of data has demanded new techniques, most notably automatic classification through supervised machine learning. We present an expansion of existing work in the field by analysing variable stars in the Kepler field using an ensemble approach, combining multiple characterization and classification techniques to produce improved classification rates. Classifications for each of the roughly 150 000 stars observed by Kepler are produced, separating the stars into one of 14 variable-star classes. The study of variable stars has provided a wealth of valuable astrophysical information. Intrinsic sources of variation, such as pulsation, provide a physical probe and test of our understanding of stellar atmospheres and interiors.
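
A minimal sketch of the general ensemble idea, assuming scikit-learn and synthetic stand-in features (period, amplitude, and similar light-curve statistics); the member classifiers below are illustrative choices, not the paper's pipeline.

```python
# Illustrative only: a soft-voting ensemble over light-curve summary
# features, in the spirit of the ensemble approach described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: rows = stars, columns = features such as period,
# amplitude, and light-curve statistics; 14 variable-star classes.
X, y = make_classification(n_samples=3000, n_features=25, n_informative=15,
                           n_classes=14, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=2000)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="soft",  # average predicted class probabilities across members
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```

Soft voting lets confident members outweigh uncertain ones, which is one reason combining heterogeneous classifiers can beat any single one.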


How to Democratize AI/ML and Data Science with AI-generated Synthetic Data - KDnuggets

#artificialintelligence

More and more people across organizations are expected to work with data, and to do so safely without breaking or leaking anything. Synthetic data generation is a solution that allows citizen data scientists and auto ML users to quickly and safely create and use business-critical data assets. Letting go of production data is a hard sell for data scientists and engineers privileged enough to have unrestricted access to their companies' most valuable data assets. Old habits are hard to change, but that doesn't mean they shouldn't. More and more companies are creating synthetic data repositories, where curated synthetic data assets replace access to privacy-sensitive, messy, and biased production data. The benefits go beyond democratizing data access: even those with privileged data access are building synthetic data generators into their workflows.
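
As a deliberately crude illustration of the core idea (release samples from a model fitted to the private table, rather than the rows themselves), here is a sketch using a multivariate Gaussian; production synthetic-data generators use far more capable models and add explicit privacy guarantees, and the column names below are invented.

```python
# Minimal sketch: fit a simple statistical model to a "production" table
# and hand out samples from the model instead of the raw rows.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in production table with two numeric columns (synthetic itself).
real = pd.DataFrame({
    "age": rng.normal(45, 12, 500).clip(18, 90),
    "balance": rng.lognormal(8, 1, 500),
})

# Fit a multivariate Gaussian in a transformed space, then sample from it.
logged = np.column_stack([real["age"], np.log(real["balance"])])
mean, cov = logged.mean(axis=0), np.cov(logged, rowvar=False)
samples = rng.multivariate_normal(mean, cov, size=500)

synthetic = pd.DataFrame({
    "age": samples[:, 0].clip(18, 90),
    "balance": np.exp(samples[:, 1]),
})
print(synthetic.describe())  # aggregate statistics track the real table
```

The synthetic table preserves column distributions and correlations well enough for exploration and prototyping, while no row corresponds to a real individual.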


OpenAI debuts ChatGPT and GPT-3.5 series as GPT-4 rumors fly

#artificialintelligence

As GPT-4 rumors fly around NeurIPS 2022 this week in New Orleans (including whispers that details about GPT-4 will be revealed there), OpenAI has managed to make plenty of news in the meantime. On Monday, the company announced a new model in the GPT-3 family of AI-powered large language models, text-davinci-003, part of what it calls the "GPT-3.5 series," which reportedly improves on its predecessors by handling more complex instructions and producing higher-quality, longer-form content. Unlike davinci-002, which uses supervised fine-tuning on human-written demonstrations and highly scored model samples to improve generation quality, davinci-003 is "a true reinforcement learning with human feedback (RLHF) model." Meanwhile, today OpenAI launched an early demo of ChatGPT, another part of the GPT-3.5 series: an interactive, conversational model whose dialogue format "makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."
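
For reference, querying text-davinci-003 through the openai Python library as it looked at the time of this article (pre-1.0 versions of the package) was a single call; the prompt below is an arbitrary example and requires an API key.

```python
# Sketch of a completion request to text-davinci-003 using the openai
# Python library as it existed in late 2022 (pre-1.0 Completion API).
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain reinforcement learning from human feedback in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```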


Ushering in a new era of computing

#artificialintelligence

As a graduate student doing his master's thesis on speech recognition at the MIT AI Lab (now the MIT Computer Science and Artificial Intelligence Laboratory), Dan Huttenlocher worked closely with Professor Victor Zue. Well known for pioneering the development of systems that enable a user to interact with computers using spoken language, Zue traveled frequently to Asia, where much of the early research in speech recognition happened during the 1980s. Huttenlocher occasionally accompanied his professor on these trips, many of which involved interactions with members of the MIT Industrial Liaison Program, as he recalls. "It was a tremendous opportunity," according to Huttenlocher, "and it was a large part of what built my interest in engaging with companies and industry in addition to the academic side of research." Huttenlocher went on to earn his PhD in computer vision at the Institute and has since embarked on a career that encompasses academia, industry, and the philanthropic sector.