Designing next generation analog chipsets for AI applications

#artificialintelligence

Researchers at the Indian Institute of Science (IISc) have developed a design framework to build next-generation analog computing chipsets that could be faster and require less power than the digital chips found in most electronic devices. Using their novel design framework, the team has built a prototype of an analog chipset called ARYABHAT-1 (Analog Reconfigurable technologY And Bias-scalable Hardware for AI Tasks). This type of chipset can be especially helpful for Artificial Intelligence (AI)-based applications like object or speech recognition--think Alexa or Siri--or those that require massive parallel computing operations at high speeds. Most electronic devices, particularly those that involve computing, use digital chips because the design process is simple and scalable. "But the advantage of analog is huge. You will get orders of magnitude improvement in power and size," explains Chetan Singh Thakur, assistant professor at the Department of Electronic Systems Engineering (DESE), IISc, whose lab is leading the efforts to develop the analog chipset.


Precision, Accuracy, Scale – And Experience – All Matter With AI

#artificialintelligence

When it comes to building any platform, the hardware is the easiest part and, for many of us, the fun part. But more than anything else, particularly at the beginning of any data processing revolution, it is experience that matters most. Whether to gain it or buy it. With AI being such a hot commodity, many companies that want to figure out how to weave machine learning into their applications are going to have to buy their experience first and cultivate expertise later. This realization is what caused Christopher Ré, an associate professor of computer science at Stanford University and a member of its Stanford AI Lab, Kunle Olukotun, a professor of electrical engineering at Stanford, and Rodrigo Liang, a chip designer who worked at Hewlett-Packard, Sun Microsystems, and Oracle, to co-found SambaNova Systems, one of a handful of AI startups trying to sell complete platforms to customers looking to add AI to their application mix. The company has raised an enormous $1.1 billion in four rounds of venture funding since its founding in 2017, and counts Google Ventures, Intel Capital, BlackRock, Walden International, SoftBank, and others as backers as it attempts to commercialize its DataScale platform and, more importantly, its Dataflow subscription service, which rolls it all up and puts a monthly fee on the stack and the expertise to help use it. SambaNova's customers have been pretty quiet, but Lawrence Livermore National Laboratory and Argonne National Laboratory have installed DataScale platforms and are figuring out how to integrate its AI capabilities into their simulation and modeling applications. Timothy Prickett Morgan: I know we have talked many times before during the rise of the "Niagara" T series of many-threaded Sparc processors, and I had to remind myself of that because I am a dataflow engine, not a storage device, after writing so many stories over more than three decades. I thought it was time to have a chat about what SambaNova is seeing out there in the market, but I didn't immediately make the connection that it was you.


Why Responsible AI Development Needs Cooperation on Safety

#artificialintelligence

We've written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and benefits, technical collaboration, increased transparency, and incentivizing standards. Our analysis shows that industry cooperation on safety will be instrumental in ensuring that AI systems are safe and beneficial, but competitive pressures could lead to a collective action problem, potentially causing AI companies to under-invest in safety. We hope these strategies will encourage greater cooperation on the safe development of AI and lead to better global outcomes from AI. It's important to ensure that it's in the economic interest of companies to build and release AI systems that are safe, secure, and socially beneficial. This is true even if we think AI companies and their employees have an independent desire to do this, since AI systems are more likely to be safe and beneficial if the economic interests of AI companies are not in tension with their desire to build their systems responsibly.


Deep Longevity Granted The First Microbiomic Aging Clock Patent

#artificialintelligence

On June 28, 2022, Deep Longevity was granted a patent for an aging clock to estimate the age of a person based on their gut bacteria. This is the first patent granted for a microbiomic aging clock. The method uses neural networks to interpret gut metagenomic information. Scientists at Deep Longevity plan to use the technology to identify pro-aging bacteria to help scientists develop treatments to promote healthy longevity. Deep Longevity plans to develop commercial products based on the patent in 2023.


Minimize Your Maintenance Downtime with Artificial Intelligence

#artificialintelligence

Frequent instances of enterprise downtime can derail the growth trajectory of your organization. One of the more effective ways to avoid that fate is to incorporate AI in the workplace. Downtime of any kind, whether it is driven by cyber-attacks, malfunctioning devices, erratically working applications or maintenance work, is loss-making for your organization. Unplanned network outages, device breakdown and other events that cause downtime--a loose term used to denote the cumulative "productive company time" lost during repairs--can incur losses of up to US$5 million for organizations, and that figure excludes legal fees, compensation and penalties of any kind. Let's face it, events such as the ones listed above are inevitable for organizations in any sector.


Meta's latest AI can translate 200 languages in real time

Engadget

More than 7,000 languages are currently spoken on this planet and Meta seemingly wants to understand them all. Six months ago, the company launched its ambitious No Language Left Behind (NLLB) project, training AI to translate seamlessly between numerous languages without having to go through English first. On Wednesday, the company announced its first big success, dubbed NLLB-200. It's an AI model that can speak in 200 tongues, including a number of less-widely spoken languages from across Asia and Africa, like Lao and Kamba. According to a Wednesday blog post from the company, NLLB-200 can translate 55 African languages with "high-quality results."
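For readers who want to try the model themselves, Meta released the NLLB-200 checkpoints publicly. A minimal sketch using the Hugging Face transformers library and the distilled 600M checkpoint might look like the following; the model name, FLORES-200 language codes, and API calls here reflect the public release as generally documented, not details from the article:

```python
# Minimal sketch: translating English to Lao with an NLLB-200 checkpoint.
# Assumes the publicly released "facebook/nllb-200-distilled-600M" model on
# Hugging Face and its FLORES-200 language codes (e.g. "lao_Laoo" for Lao).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "More than 7,000 languages are currently spoken on this planet."
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to start with the target-language token so the model
# translates directly into Lao rather than defaulting to another language.
translated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("lao_Laoo"),
    max_length=64,
)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```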


Frontera Group Announces AI Powered Live Streaming Solution

#artificialintelligence

Frontera Group, Inc., a technology-focused strategic acquirer of revenue-generating companies and intellectual property, announced the launch of its Mixie AI 2.0-powered live video broadcasting solution, targeted at automated broadcasting applications. Frontera's recent acquisition of Intellimedia's Mixie suite of solutions has given the company a mix of media technology, learning and training platforms, and event broadcasting technologies that positions it at the forefront of a new wave of immersive and engaging applications. "Our AI-powered broadcasting solutions will keep viewers engaged utilizing automated professional-grade broadcasting and analytic techniques that compete with manual production consisting of multiple cameras and production crews. The result for our broadcast customers will be a significant cost reduction while maintaining professional quality production," said Andrew De Luna, interim CEO of Frontera Group. The Intellimedia Networks acquisition also positions Frontera as a prime provider of technology solutions for broadcast, training, and live-event applications.


Council Post: The Next Step In Digital Transformation Is Software-Defined X

#artificialintelligence

Today's cloud was made possible by virtualization technology, which creates a software-based representation of hardware equipment. Virtual machines, such as those popularized by VMware and the hypervisor technology that manages VM execution, make it possible to run different software on the same machine. This concept is now expanding beyond the cloud to the physical world through the use of software that controls autonomous robots. I call this software-defined X: any physical task (X), from cleaning the floor at an airport terminal to delivering an item from one end of a warehouse to the other, can now be controlled through software. This is really taking "digital transformation" to its logical conclusion.


Random Forest Classifier: Basic Principles and Applications

#artificialintelligence

Predicting customer behavior, consumer demand or stock price fluctuations, identifying fraud, and diagnosing patients -- these are some of the popular applications of the random forest (RF) algorithm. Used for classification and regression tasks, it can significantly enhance the efficiency of business processes and scientific research. This blog post will cover the random forest algorithm, its operating principles, capabilities and limitations, and real-world applications. A random forest is a supervised machine learning algorithm in which the calculations of numerous decision trees are combined to produce one final result. It's popular because it is simple yet effective. Random forest is an ensemble method -- a technique where we take many base-level models and combine them to get improved results.
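As a concrete illustration of that ensemble idea, here is a minimal scikit-learn sketch; the Iris dataset and the hyperparameters are illustrative choices, not details from the post:

```python
# Minimal sketch of a random forest classifier with scikit-learn.
# Dataset and hyperparameters are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Each of the 100 trees is trained on a bootstrap sample of the training data
# with a random subset of features considered at each split; the forest's
# prediction is the majority vote across the trees.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```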


Understanding Domain Specific Languages (CS)

#artificialintelligence

Abstract: Numerical simulations can help solve complex problems. Most of these algorithms are massively parallel and thus good candidates for FPGA acceleration thanks to spatial parallelism. Modern FPGA devices can leverage high-bandwidth memory technologies, but when applications are memory-bound, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. This development process requires hardware design skills that are uncommon among domain experts. In this paper, we propose an automated tool flow from a domain-specific language (DSL) for tensor expressions to generate massively-parallel accelerators on HBM-equipped FPGAs.
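The abstract does not show the DSL itself. As a purely hypothetical illustration of the kind of declarative tensor expression such a flow would start from, here is an einsum-style batched contraction written in plain NumPy; the operation, names, and shapes are invented for illustration and are not taken from the paper:

```python
# Hypothetical illustration of a tensor expression of the kind such DSLs capture.
# A batched matrix multiply written as an einsum contraction; a DSL-to-FPGA flow
# would compile a declarative expression like this into a parallel accelerator
# with its own on-chip buffering and HBM data-movement logic.
import numpy as np

B, M, K, N = 4, 64, 128, 32          # arbitrary tensor dimensions
A = np.random.rand(B, M, K).astype(np.float32)
W = np.random.rand(B, K, N).astype(np.float32)

# "bmk,bkn->bmn": contract over k independently for every batch index b.
C = np.einsum("bmk,bkn->bmn", A, W)
print(C.shape)  # (4, 64, 32)
```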