
AI for Service: Proactive Assistance with AI Glasses

Wen, Zichen, Wang, Yiyu, Liao, Chenfei, Yang, Boxue, Li, Junxian, Liu, Weifeng, He, Haocong, Feng, Bolong, Liu, Xuyang, Lyu, Yuanhuiyi, Zheng, Xu, Hu, Xuming, Zhang, Linfeng

arXiv.org Artificial Intelligence

In an era where AI is evolving from a passive tool into an active and adaptive companion, we introduce AI for Service (AI4Service), a new paradigm that enables proactive and real-time assistance in daily life. Existing AI services remain largely reactive, responding only to explicit user commands. We argue that a truly intelligent and helpful assistant should be capable of anticipating user needs and taking actions proactively when appropriate. To realize this vision, we propose Alpha-Service, a unified framework that addresses two fundamental challenges: Know When to intervene by detecting service opportunities from egocentric video streams, and Know How to provide both generalized and personalized services. Inspired by the von Neumann computer architecture and based on AI glasses, Alpha-Service consists of five key components: an Input Unit for perception, a Central Processing Unit for task scheduling, an Arithmetic Logic Unit for tool utilization, a Memory Unit for long-term personalization, and an Output Unit for natural human interaction. As an initial exploration, we implement Alpha-Service through a multi-agent system deployed on AI glasses. Case studies, including a real-time Blackjack advisor, a museum tour guide, and a shopping fit assistant, demonstrate its ability to seamlessly perceive the environment, infer user intent, and provide timely and useful assistance without explicit prompts.
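The five von Neumann-inspired units can be sketched as a toy decision loop. Everything below — class names, the trigger keyword, the tool call — is an illustrative assumption for exposition, not the paper's actual implementation:

```python
# Toy sketch of the Alpha-Service pipeline: perceive -> decide when ->
# decide how -> remember -> respond. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class MemoryUnit:
    """Long-term store for personalization (past scenes, preferences)."""
    history: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.history.append(event)


class InputUnit:
    """Perception: turns an egocentric video frame into a scene description."""
    def perceive(self, frame: str) -> str:
        return f"scene: {frame}"


class ArithmeticLogicUnit:
    """Tool utilization: invokes an external tool for a given task."""
    def run_tool(self, task: str) -> str:
        return f"tool result for '{task}'"


class OutputUnit:
    """Natural interaction: renders the response for the user."""
    def respond(self, message: str) -> str:
        return f"assistant: {message}"


class CentralProcessingUnit:
    """Task scheduling: decides *when* to intervene and *how* to serve."""
    def __init__(self) -> None:
        self.inp = InputUnit()
        self.alu = ArithmeticLogicUnit()
        self.mem = MemoryUnit()
        self.out = OutputUnit()

    def step(self, frame: str):
        scene = self.inp.perceive(frame)
        # "Know When": a toy trigger -- intervene only on a recognized
        # service opportunity; otherwise stay silent (no explicit prompt).
        if "blackjack" not in scene:
            return None
        # "Know How": call a tool, then store the scene for personalization.
        advice = self.alu.run_tool("compute optimal blackjack move")
        self.mem.remember(scene)
        return self.out.respond(advice)


cpu = CentralProcessingUnit()
print(cpu.step("a table with a blackjack hand"))  # proactive advice
print(cpu.step("an empty hallway"))               # None: no intervention
```

The point of the sketch is the control flow: the CPU-like scheduler gates intervention ("Know When") before routing to tools and memory ("Know How"), rather than waiting for a user command.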


Developing an aeroponic smart experimental greenhouse for controlling irrigation and plant disease detection using deep learning and IoT

Narimani, Mohammadreza, Hajiahmad, Ali, Moghimi, Ali, Alimardani, Reza, Rafiee, Shahin, Mirzabe, Amir Hossein

arXiv.org Artificial Intelligence

Controlling environmental conditions and monitoring plant status in greenhouses is critical to promptly making appropriate management decisions aimed at promoting crop production. The primary objective of this research study was to develop and test a smart aeroponic greenhouse on an experimental scale where the status of Geranium plants and environmental conditions are continuously monitored through the integration of the internet of things (IoT) and artificial intelligence (AI). An IoT-based platform was developed to control the environmental conditions of plants more efficiently and provide insights to users to make informed management decisions. In addition, we developed an AI-based disease detection framework using VGG-19, InceptionResNetV2, and InceptionV3 algorithms to analyze the images captured periodically after an intentional inoculation. The performance of the AI framework was compared with an expert's evaluation of disease status. Preliminary results showed that the IoT system implemented in the greenhouse environment is able to continuously publish data such as temperature, humidity, water flow, and volume of charge tanks online to users and adjust the controlled parameters to provide an optimal growth environment for the plants. Furthermore, the results of the AI framework demonstrate that the VGG-19 algorithm identified drought stress and rust leaves from healthy leaves with the highest accuracy among the tested algorithms, at 92%.


[100%OFF] Basic Structure Of Computers

#artificialintelligence

This is an introductory course, so please buy it if you are a beginner and want to know more about how a computer works within. Please go through the free preview videos before buying, including the introduction, so that you get an idea of what this course is about. The central processing unit (CPU), input devices, and output devices are the three components that make up the basic structure of a computer system. The central processing unit can itself be separated into two parts: the control unit (CU) and the arithmetic logic unit (ALU). The basic structure of a computer describes a simple concept: data is entered into the central processing unit using input devices such as a keyboard, mouse, joystick, scanner, or secondary storage device. When the central processing unit receives the data from the input devices, it follows a pre-programmed set of instructions, and the result of instruction execution is the output.
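The input → CPU (CU + ALU) → output flow described above can be illustrated with a toy simulation; the tiny instruction set here is invented purely for demonstration:

```python
# Toy model of the basic computer structure: input devices supply data,
# the control unit dispatches instructions, the ALU computes, and the
# final value is the output.

def alu(op, a, b):
    """Arithmetic logic unit: performs the actual computation."""
    return {"add": a + b, "sub": a - b, "mul": a * b}[op]

def control_unit(program, data):
    """Control unit: fetches each instruction and dispatches it to the ALU."""
    acc = 0  # accumulator holding the intermediate result
    for op, operand in program:
        acc = alu(op, acc, data[operand])
    return acc

# "Input devices" provide the data; the CPU follows its pre-programmed
# instructions; the result of execution is the output.
inputs = {"x": 4, "y": 3}
program = [("add", "x"), ("mul", "y")]   # (0 + 4) * 3
print(control_unit(program, inputs))      # 12
```

Separating `control_unit` from `alu` mirrors the CU/ALU split in the text: one part sequences instructions, the other performs arithmetic and logic.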


The GPUs for Deep Learning: NVIDIA vs AWS vs Azure and More

#artificialintelligence

As technology advances and more organizations implement machine learning operations (MLOps), people are looking for ways to speed up processes. This is especially true for organizations with deep learning (DL) workloads, which can take a very long time to run. You can speed up this process by using graphical processing units (GPUs) on-premises or in the cloud. GPUs are processors specially designed for massively parallel computation. They enable parallel processing of tasks and can be optimized to increase performance in artificial intelligence and deep learning workloads.


@Radiology_AI

#artificialintelligence

To develop a convolutional neural network (CNN)–based deformable lung registration algorithm to reduce computation time and assess its potential for lobar air trapping quantification. In this retrospective study, a CNN algorithm was developed to perform deformable registration of lung CT (LungReg) using data on 9118 patients from the COPDGene Study (data collected between 2007 and 2012). Loss function constraints included cross-correlation, displacement field regularization, lobar segmentation overlap, and the Jacobian determinant. LungReg was compared with a standard diffeomorphic registration (SyN) for lobar Dice overlap, percentage voxels with nonpositive Jacobian determinants, and inference runtime using paired t tests. Landmark colocalization error (LCE) across 10 patients was compared using a random effects model.
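The abstract names four loss terms (cross-correlation, displacement field regularization, lobar segmentation overlap, and the Jacobian determinant). A hedged sketch of such a composite loss is below; the exact formulations and weights are assumptions for illustration, not the published LungReg algorithm:

```python
# Sketch of a composite deformable-registration loss with the four kinds of
# terms the abstract mentions. Formulations and weights are assumed.
import numpy as np

def ncc_loss(fixed, warped):
    """Negative normalized cross-correlation (image similarity term)."""
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    return -(f * w).sum() / (np.linalg.norm(f) * np.linalg.norm(w) + 1e-8)

def smoothness_loss(disp):
    """Mean squared spatial gradient of the displacement field (regularizer).
    disp has shape (D, H, W, 3)."""
    grads = np.gradient(disp, axis=(0, 1, 2))
    return sum((g ** 2).mean() for g in grads)

def dice_loss(seg_a, seg_b):
    """1 - Dice overlap between binary lobar segmentations."""
    inter = (seg_a * seg_b).sum()
    return 1 - 2 * inter / (seg_a.sum() + seg_b.sum() + 1e-8)

def jacobian_penalty(disp):
    """Penalize nonpositive Jacobian determinants (folding) of x + disp(x)."""
    # Finite-difference Jacobian of the displacement, J[..., k, j] = d disp_k / d x_j.
    J = np.stack([np.stack(np.gradient(disp[..., k], axis=(0, 1, 2)), axis=-1)
                  for k in range(3)], axis=-2)
    det = np.linalg.det(J + np.eye(3))   # determinant of the full deformation
    return np.clip(-det, 0, None).mean()  # zero wherever det > 0

def total_loss(fixed, warped, disp, seg_f, seg_w,
               w_sim=1.0, w_reg=0.1, w_dice=1.0, w_jac=0.1):
    return (w_sim * ncc_loss(fixed, warped)
            + w_reg * smoothness_loss(disp)
            + w_dice * dice_loss(seg_f, seg_w)
            + w_jac * jacobian_penalty(disp))
```

The Jacobian term is what discourages physically implausible folding, which is why the study reports the percentage of voxels with nonpositive Jacobian determinants as an evaluation metric.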


The Chinese Room Argument: Ray Kurzweil vs. John Searle

#artificialintelligence

" 'When we hear it said that wireless valves think,' [Sir Geoffrey] Jefferson said, 'we may despair of language.' But no cybernetician had said the valves thought, no more than anyone would say that the nerve-cells thought. It was the system as a whole that'thought', in Alan's [Turing] view…" -- Andrew Hodges (from his book Alan Turing: the Enigma). In his rewarding book, How to Create a Mind, Ray Kurzweil tackles John Searle's Chinese room argument. That said, I do find its philosophical sections somewhat naïve. Of course there's no reason why a "world-renowned inventor, thinker and futurist" should also be an accomplished philosopher.


Nvidia Stock Hits Buy Point On Data Center, AI Advancements

#artificialintelligence

Graphics-chip maker Nvidia (NVDA) released a fire hose of news at its online GTC conference, detailing advancements in artificial intelligence, computer graphics, robotics and data centers. Nvidia stock reached a buy point Tuesday following positive reviews of the event. Nvidia also announced that its fiscal first-quarter revenue is tracking above the target it provided in its Feb. 24 earnings release. At the time, the company provided a revenue outlook for its first fiscal quarter, ending May 2, of $5.3 billion, plus or minus 2%. In a news release, Chief Financial Officer Colette Kress said Nvidia is seeing strength across its four market platforms.


Robots Use AI to 'Feel' Pain and Self-Repair – IAM Network

#artificialintelligence

Robots are one step closer to being more like living beings with a new development within the field. Scientists from Nanyang Technological University, Singapore (NTU Singapore) have created an AI system that allows robots to recognize pain and self-repair. The newly developed system relies on AI-enabled sensor nodes, which process 'pain' and then respond to it. This pain is identified when there is pressure brought on by an outside physical force. The other major part of the system is self-repair.


Robot Design Goes Back to Nature

WSJ.com: WSJD - Technology

For millions of years, as animals have evolved to take myriad shapes and forms, they have adapted to solve a variety of physical challenges. Many have overcome obstacles that humans face as well. With the rise of new technologies to measure and analyze their movements, we can now see animals with more clarity and precision than ever before. The research is having a significant impact on robotics, materials science and a range of other fields. Jerry's fellow dogs and a number of other species have flexible spines supported by pliable back muscles and controlled by a network of neurons called the central pattern generator; this combination allows them to turn, twist, run, swim and recover from a trip or misstep without the lag time of waiting for commands from the brain.


AI Feast at Baidu Create 2018: Level 4 Autonomous Bus, Apollo 3.0, DuerOS 3.0

#artificialintelligence

The second annual Baidu AI Developers Conference, officially known as Baidu Create 2018, opened in Beijing today. Baidu unveiled China's first cloud-to-edge AI chip, Kunlun, and many other upgraded versions of Baidu's AI products this morning on the first day of this two-day event. Li Yanhong, known as Robin Li, the founder and CEO of Baidu, introduced Baidu's latest research achievements in the artificial intelligence (AI) field. Started in 2013, the autonomous driving project was mainly led and developed by the Baidu Research Institute. At the 2017 Baidu World Congress in November last year, Robin Li stated that Baidu's Level 4 self-driving bus "Apolong" would be mass-produced by July 2018.