
Embedded computing development kit for artificial intelligence (AI)-based machine vision offered by AAEON


TAIPEI, Taiwan – AAEON Technology, headquartered in Taipei, Taiwan, with offices in Santa Clara, Calif., is introducing the BrainFrame Edge AI Developers Kit (DevKit) for an Intel artificial intelligence (AI) computer to enable system integrators to rapidly create and deploy smart machine vision applications. The BrainFrame Edge AI DevKit helps create solutions such as machine vision-based access control, uniform compliance, manufacturing automation, and video analytics. BrainFrame scales and configures easily, turning a connected camera into a continuously monitoring Smart Vision system. BrainFrame's automatic algorithm fusion and optimization engine uses VisionCapsules, an open-source algorithm packaging format. These self-contained capsules have a negligible memory footprint and include all the code, files, and metadata needed to describe and implement a machine learning algorithm.

Lessons learnt working on applied machine learning research


During my internship at Taipei Medical University, I worked on developing predictive models for Chronic Kidney Disease. The project was an educational experience, with challenges arising from the complexity and large size of the dataset. In this article, I will share the key lessons I learnt that helped me boost my productivity as a beginning researcher. The ideas are quite general and apply to most machine learning workflows. Some are obvious ones that we nevertheless choose to ignore.

Kneron launches its new AI chip to challenge Google and others – TechCrunch


Fresh off a $40 million Series A round, edge AI specialist Kneron today announced the launch of its newest custom chip, the Kneron KL 720 SoC. With funding from the likes of Alibaba, Sequoia, Horizons Ventures, Qualcomm and SparkLabs Taipei (as well as a few undisclosed backers), it's worth taking the company's efforts seriously, and Kneron has no qualms about comparing its chips to those of Intel and Google, for example. It argues that its KL 720 is twice as energy efficient as Intel's latest Movidius chips and four times more efficient than Google's Coral Edge TPU at running the MobileNetV2 image recognition benchmark. Compared to the company's previous generation of chips, this updated version can process 4K still images and video at 1080p resolution. It also features a number of new audio recognition breakthroughs for the company, which Kneron says will allow devices that use its chips to bypass the standard wake words required by other chips and have immediate conversations with the device.

Taiwan's first artificial intelligence park breaks ground


TAIPEI (Taiwan News) -- The construction of Taiwan's first artificial intelligence (A.I.) park was officially launched in New Taipei City's Tucheng District …

TRIPDECODER: Study Travel Time Attributes and Route Preferences of Metro Systems from Smart Card Data Machine Learning

In this paper, we aim to recover the exact routes taken by commuters inside a metro system that are not captured by an Automated Fare Collection (AFC) system and hence remain unknown. We strategically propose two inference tasks to handle the recovery: one to infer the travel time of each travel link that contributes to the total duration of any trip inside a metro network, and the other to infer route preferences based on historical trip records and the link travel times inferred in the first task. Because these two inference tasks are interrelated, most existing works perform them simultaneously. However, our solution TripDecoder adopts a totally different approach. To the best of our knowledge, TripDecoder is the first model that points out and fully utilizes the fact that some trips inside a metro system have only one practical route available. It strategically decouples the two inference tasks by taking only those trip records with a single practical route as input for the first task (travel time inference) and feeding the inferred travel times to the second task as an additional input, which not only improves accuracy but also effectively reduces the complexity of both tasks. Two case studies have been performed on the city-scale real trip records captured by the AFC systems in Singapore and Taipei to compare the accuracy and efficiency of TripDecoder and its competitors. As expected, TripDecoder achieves the best accuracy on both datasets, and it also demonstrates superior efficiency and scalability.
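The decoupling idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual method or data: the tiny network, the trip records, and the least-squares/nearest-duration scoring below are all hypothetical stand-ins, chosen only to show the two-stage structure (link times from single-route trips first, route inference second).

```python
import numpy as np

# Hypothetical toy metro network: each trip is a sequence of travel
# links (indexed 0..3), and the AFC system records only the total
# trip duration (tap-in to tap-out), not the route taken.

# Trips with exactly ONE practical route give clean observations:
# total duration = sum of that route's link travel times.
single_route_trips = [
    ([0, 1], 11.0),      # trip over links 0 and 1 took 11 minutes
    ([1, 2], 13.0),
    ([0, 1, 2], 18.0),
    ([2, 3], 12.0),
]

# Stage 1: infer per-link travel times by least squares, using ONLY
# the single-route trips (the decoupling step described above).
n_links = 4
A = np.zeros((len(single_route_trips), n_links))
b = np.zeros(len(single_route_trips))
for i, (links, duration) in enumerate(single_route_trips):
    A[i, links] = 1.0    # each used link contributes once to the trip
    b[i] = duration
link_times, *_ = np.linalg.lstsq(A, b, rcond=None)

# Stage 2: for a trip with several practical routes, score each
# candidate by how closely its predicted duration (from Stage 1's
# link times) matches the observed duration, and pick the best.
def most_likely_route(candidate_routes, observed_duration):
    errors = [abs(sum(link_times[l] for l in route) - observed_duration)
              for route in candidate_routes]
    return candidate_routes[int(np.argmin(errors))]

# A trip of 17.5 minutes fits the three-link route far better than
# the two-link alternative under the inferred link times.
route = most_likely_route([[0, 1, 2], [0, 3]], 17.5)
```

The paper's second task infers route preference distributions from historical records rather than picking a single nearest-duration route, but the flow of information is the same: Stage 1's output becomes a fixed input to Stage 2, so the two problems never have to be solved jointly.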

Artificial Intelligence Innovation in Taiwan Research Blog


Taiwan is a small island off the coast of China that is roughly one fourth the size of North Carolina. Despite its size, Taiwan has made significant waves in the fields of science and technology. In the 2019 Global Talent Competitiveness Index, Taiwan (listed as Chinese Taipei) ranked first in Asia and 15th globally. However, despite being ahead of many countries in terms of technological innovation, Taiwan was still looking for further ways to improve and support research within the country. Therefore, in 2017 the Taiwan Ministry of Science and Technology (MOST) initiated an AI innovation research program to promote the development of AI technologies and attract top AI professionals to work in Taiwan.

Coronavirus update: Artificial Intelligence warned of the killer epidemic a week before WHO


According to the Polish Economics Institute (PIE), the first coronavirus warnings were issued on December 31 by a Canada-based health monitoring startup. The Canadian company, BlueDot, even correctly predicted the cities outside China where the coronavirus would next appear: Tokyo, Seoul, Taipei and Bangkok. PIE said: "Algorithms using artificial intelligence solutions identified the onset of the coronavirus epidemic a few days earlier than reported in the official information from international organisations such as the WHO or the CDC." BlueDot's AI predicted the spread of the coronavirus by analysing airline data, international news stories and reports of coronavirus infections in animals.

This AI-Powered Cockpit Knows When To Cut Off The Driver


Fully self-driving cars are still a thing of the future. But in today's laboratories, the technology ranges from commonly used cruise control systems to so much automation that humans don't need to get into a car at all. In Taiwan, a startup is developing a driver's cockpit that is comfortable and packed with artificial intelligence features, one that transfers control of the vehicle to the computer whenever the system senses that the human driver is sick, tired, distracted or just sloppy. The three-year-old, Taipei-based Mindtronic AI developed this cockpit, called DMX, last year, complete with luxuries like easy-to-use entertainment for the driver. But what if the driver gets mesmerized by a soccer match?

CyberLink CEO Dr. Jau Huang Shares Insights on Edge Computing and Showcases FaceMe AI-based Facial Recognition Engine at Intel Edge Computing Solution Summit - Business Wire - UrIoTNews


TAIPEI, Taiwan–(BUSINESS WIRE)–CyberLink Corp. (5203.TW), a pioneer of AI and facial recognition technologies, participated in the Intel Edge Computing Solution Summit. The summit brought together leaders from the IoT industry, who shared insights on AI edge computing's latest breakthroughs and the opportunities this technology will bring in the future. Dr. Jau Huang, CyberLink's founder and CEO, was invited to speak about the benefits of edge computing and how it enables precise, fast, affordable and secure AIoT use cases, including facial recognition such as the company's FaceMe AI-based engine. With FaceMe, CyberLink has leveraged edge-based technology and AI to deliver one of the world's most precise, flexible and best-performing facial recognition engines. Compared with cloud-based solutions, edge computing is much cheaper, greatly enhances flexibility and provides real-time response, helping system integrators quickly add new functionality to existing systems and develop new AIoT products.

News Vecow - Wide Temperature Fanless Embedded Computing System, Machine Vision, Video Analytics Surveillance, Intelligent Industrial Automation


New Taipei City, Taiwan, Dec. 10, 2019 - Vecow Co., Ltd., a team of embedded experts, today announced the release of its latest GPC-1000 Series Expandable Dual GPU AI Computing System. Powered by the workstation-grade Intel C246 chipset and running dual NVIDIA Tesla/Quadro/GeForce or AMD Radeon Pro/Radeon graphics, the Vecow GPC-1000 Series delivers high-performance computing that helps reduce latency and improve efficiency in data processing, storage and analysis, making it ideal for robotic control, public surveillance, autonomous vehicles and deep learning applications. The Vecow GPC-1000 Series is powered by a 9th Generation Intel Xeon/Core processor, which offers 37 percent better performance than the previous-generation Intel Kaby Lake platform. To address growing AI applications such as autonomous vehicles, factory automation, public surveillance and traffic vision, which require high-performance computing capability, the Vecow GPC-1000 Series features dual GPUs with a choice of NVIDIA or AMD graphics, bringing dual-GPU power to accelerate AI solution development and deployment. Meanwhile, it supports 9V to 55V power input with 80V surge protection, enabling system integrators to deploy it simply across a wide range of applications.