In recent years, artificial intelligence programs have been prompting changes in computer chip design, and novel chips have in turn made new kinds of neural networks possible. A powerful feedback loop is under way. At the center of that loop sits software that converts neural-network programs to run on novel hardware, and at the center of that sits a recent open-source project gaining momentum: Apache TVM, a compiler that operates differently from conventional compilers, taking trained machine-learning models as input and generating optimized code for a wide range of hardware back ends.
Mirabilis Design has announced the release of VisualSim AI Processor Designer. VisualSim AI accelerates time-to-market of new AI technology, configures high-performance computing systems, eliminates under- and over-design, and provides an interactive reference design for end users to create new applications. VisualSim can be used to evaluate AI processor hardware architectures, partition AI algorithms on a System-on-Chip (SoC), test the AI/ML implementation, and measure the power and performance of an AI processor in automotive, medical, and data-center applications. The Intellectual Property (IP) library in VisualSim AI brings together processor cores, neural networks, accelerators, GPUs, and DSPs. At the system level, VisualSim AI can be integrated with a network model and FPGA boards for full-system verification.

"The best processor configuration depends on the application, price point, and the expected performance. Trying to predict feasibility before building the first prototype requires modeling IP, which is never readily available. The intense competition in the marketplace makes any delay in detecting performance limitations a major detriment to a successful new-product introduction," says Deepak Shankar, Vice President of Technology, Mirabilis Design Inc. "The complex model requires configurable IPs and an integrated simulation environment."

The AI Designer enables an architect to rapidly construct a graphical model using parameterized IP integrated around an interconnect such as a Network-on-Chip, quantum nodes, or in-memory elements. The user can accurately simulate AI workloads and real-life interface traffic. The model can vary task allocation between cores, neural networks, and accelerators; size the system parameters; balance response time against power consumption; and select the scheduler and buffer strategy.
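VisualSim itself is a proprietary graphical environment, but the kind of trade-off it automates can be sketched in a few lines. The following toy design-space exploration is purely illustrative (it is not the VisualSim API, and every workload number and cost model in it is hypothetical): it sweeps core and accelerator counts, estimates latency and power with crude analytical models, and picks the lowest-latency configuration under a power budget.

```python
# Toy design-space exploration sketch (illustrative only; not the VisualSim API).
# All workload numbers and cost models below are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    cores: int          # general-purpose cores
    accelerators: int   # neural-network accelerators

def latency_ms(cfg: Config, core_tasks: int, nn_tasks: int) -> float:
    """Very rough latency model: tasks divide evenly across identical units."""
    core_time = core_tasks * 2.0 / cfg.cores        # assume 2 ms per core task
    nn_time = nn_tasks * 5.0 / cfg.accelerators     # assume 5 ms per NN task
    return max(core_time, nn_time)                  # unit types run in parallel

def power_w(cfg: Config) -> float:
    """Rough power model: 1.5 W per core, 3 W per accelerator."""
    return 1.5 * cfg.cores + 3.0 * cfg.accelerators

def best_config(power_budget_w: float) -> Config:
    """Exhaustively search a small grid of configurations under a power cap."""
    candidates = [Config(c, a) for c in range(1, 9) for a in range(1, 5)]
    feasible = [c for c in candidates if power_w(c) <= power_budget_w]
    return min(feasible, key=lambda c: latency_ms(c, core_tasks=16, nn_tasks=8))

if __name__ == "__main__":
    cfg = best_config(power_budget_w=15.0)
    print(cfg, latency_ms(cfg, 16, 8), power_w(cfg))
```

A real architecture-exploration tool replaces these one-line cost models with cycle-level simulation of interconnects, caches, and schedulers, which is exactly why the vendor stresses configurable IP models and a fast simulator.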
The combination of large model capacity, fast model construction, an extremely fast simulator, and a programmable analytics engine enables users to rapidly arrive at the most suitable architecture. Users can run software on the VisualSim AI architecture to measure response times, power, network throughput, cache hit ratio, and memory bandwidth. VisualSim AI enables chip companies to optimize and validate the SoC system specification, and system companies to select the right SoC for the target application. A number of beta customers have used the platform to design AI SoCs for data-center and automotive applications. Other applications that can use the platform include autonomous driving, radar processing, defense systems, flight avionics, medical instruments, high-performance computing, and infotainment systems.

Availability

VisualSim AI designers have tested the accuracy of the IP blocks for task latency and power consumption across multiple projects. The platform runs on VisualSim version 2140b. Supported operating systems include Windows, Linux, and Mac OS. To learn more, register for a private session in Booth 2441 at the Design Automation Conference 2021 in San Francisco at https://calendly.com/mirabilisdesign/dac or contact Mirabilis Design at firstname.lastname@example.org.
BrainChip Holdings Ltd (ASX: BRN, OTCQX: BCHPY), a leading provider of ultra-low-power, high-performance artificial intelligence technology and the world's first commercial producer of neuromorphic AI chips and IP, today announced that MegaChips, a pioneer in the ASIC industry, has licensed BrainChip's Akida IP to enhance and grow its technology positioning for next-generation, Edge-based AI solutions. A multibillion-dollar global fabless semiconductor company based in Japan, MegaChips provides chip solutions that fulfill requirements such as low power consumption, cost, and time to market, achieving breakthrough functions and performance by fusing its knowledge of large-scale integration (LSI) with application expertise in device development. By partnering with BrainChip, MegaChips can quickly and easily maintain its status as an industry innovator by supplying solutions and applications that leverage the revolutionary Akida technology in markets such as automotive, IoT, cameras, gaming, and industrial robotics. "As a trusted and loyal partner to market leaders, we deliver the technology and expertise they need to ensure products are uniquely designed for their customers and engineered for ultimate performance," said Tetsuo Hikawa, President and CEO of MegaChips. "Working with BrainChip and incorporating their Akida technology into our ASIC solutions service, we are better able to handle the development and support processes needed to design and manufacture integrated circuits and systems on chips that can take advantage of AI at the Edge."
Samsung has said it will build a $17bn (£12.7bn) chip-making plant in Texas. The plant, just outside Austin, would be the South Korean company's biggest US investment and is expected to be operational in the second half of 2024. Samsung had also considered sites in Arizona and New York for the factory, which will be much bigger than its only other US chip plant, also in Austin. Samsung said the new facility would boost production of hi-tech chips used for 5G mobile communications, advanced computing, and artificial intelligence, and would also improve supply-chain resilience. The chip shortage has become a significant business obstacle and a serious US national security concern.
Researchers from Western University, SUNY Buffalo State College, the University of Cincinnati, and the City University of Hong Kong published a new paper in the Journal of Marketing that presents a methodological framework for managers to extract and monitor information about products and their attributes from consumer reviews. Understanding how concrete product attributes form higher-level benefits for consumers can serve various corporate teams. Concrete, or "engineered," attributes refer to technical specifications and product features; in the context of tablet computers, for example, such attributes include RAM, CPU, weight, and screen resolution. Understanding how combinations of these lower-level attributes form higher-level benefits, or "meta-attributes," for consumers, such as Hardware and Connectivity, can provide managers with actionable insights.
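The paper's framework is far more sophisticated, but the core idea of rolling concrete attribute mentions up into meta-attributes can be sketched simply. In the toy example below, the attribute-to-meta-attribute mapping is hypothetical (invented for illustration, not taken from the paper), and mentions are counted with simple word-boundary matching.

```python
# Illustrative sketch: count mentions of concrete "engineered attributes"
# in reviews and aggregate them into higher-level meta-attributes.
# The mapping below is a hypothetical example for tablet reviews.

import re
from collections import Counter

META_ATTRIBUTES = {
    "Hardware": {"ram", "cpu", "processor", "screen resolution"},
    "Portability": {"weight", "battery"},
    "Connectivity": {"wifi", "bluetooth", "5g"},
}

def meta_attribute_counts(reviews: list[str]) -> Counter:
    """Count how often each meta-attribute's concrete attributes are mentioned."""
    counts = Counter()
    for review in reviews:
        text = review.lower()
        for meta, attrs in META_ATTRIBUTES.items():
            for attr in attrs:
                # Word-boundary match so "ram" does not match "program".
                counts[meta] += len(re.findall(rf"\b{re.escape(attr)}\b", text))
    return counts

reviews = [
    "Great CPU and plenty of RAM, but the weight is a problem.",
    "Wifi drops constantly; bluetooth works fine.",
]
print(meta_attribute_counts(reviews))
# Hardware: 2, Connectivity: 2, Portability: 1
```

A production pipeline would replace the hand-built dictionary with learned groupings (e.g., topic models or embeddings over large review corpora), which is the harder problem the paper addresses.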
The semiconductor industry is enjoying renewed growth despite chip shortages plaguing everything from cars to kitchen appliances. But while the chips themselves continue to get faster and smarter, the chip design process hasn't changed much in 20 years. It typically takes two to three years, a large engineering team, and tens or hundreds of millions of dollars to get a chip from idea to fabrication. Now change is coming in the form of artificial intelligence, which has recently demonstrated significant improvements in optimizing layouts for power, area (cost), and performance. Using a reinforcement learning (RL) based approach, similar to the one that beat the world's Go champion back in 2016, Samsung announced that it now has a chip back from its factory that was optimized by the Synopsys DSO.ai platform we discussed in May 2020. As far as we know, this is the industry's first working chip whose layout was designed by AI.
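DSO.ai itself is proprietary, and real layout optimization uses reinforcement learning at enormous scale. As a minimal illustration of the kind of search such tools automate, here is a toy simulated-annealing placer (a deliberately simpler stand-in for RL) that moves cells on a grid to minimize total wirelength. The netlist and all parameters are hypothetical.

```python
# Toy placement optimizer: anneal cell positions on a grid to minimize
# total Manhattan wirelength. Purely illustrative of automated layout
# search; tools like DSO.ai use reinforcement learning at far larger scale.

import math
import random

# Hypothetical netlist: each net connects two cells by name.
NETS = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")]
CELLS = ["a", "b", "c", "d"]
GRID = 4  # 4x4 placement grid

def wirelength(placement: dict) -> int:
    """Sum of Manhattan distances over all nets."""
    total = 0
    for u, v in NETS:
        (x1, y1), (x2, y2) = placement[u], placement[v]
        total += abs(x1 - x2) + abs(y1 - y2)
    return total

def anneal(seed: int = 0, steps: int = 5000) -> dict:
    rng = random.Random(seed)
    spots = rng.sample([(x, y) for x in range(GRID) for y in range(GRID)], len(CELLS))
    placement = dict(zip(CELLS, spots))
    temp = 5.0
    for _ in range(steps):
        # Propose moving one cell to a random empty grid spot.
        cell = rng.choice(CELLS)
        new_spot = (rng.randrange(GRID), rng.randrange(GRID))
        if new_spot in placement.values():
            continue
        old_spot, old_len = placement[cell], wirelength(placement)
        placement[cell] = new_spot
        delta = wirelength(placement) - old_len
        # Accept all improvements; accept uphill moves with probability exp(-delta/T).
        if delta > 0 and rng.random() >= math.exp(-delta / max(temp, 1e-9)):
            placement[cell] = old_spot
        temp *= 0.999  # cool slowly toward greedy descent
    return placement

best = anneal()
print(best, wirelength(best))
```

The real problem is astronomically harder: millions of cells, routing congestion, timing, and power all interact, which is why a learned policy that generalizes across designs is such a significant result.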
SAN DIEGO, Nov. 16, 2021 (GLOBE NEWSWIRE) -- GBT Technologies Inc. (OTC PINK: GTCHD) ("GBT" or the "Company") is developing machine-learning-based software that brings integrated-circuit design, verification, and manufacturing together under one platform, enabling faster design, higher performance, and better silicon yield. Based on its recently patented technology, GBT has started development of a comprehensive software solution to address advanced-nanometer challenges in one design environment. The software platform (internal code name MAGIC II) will address a wide variety of IC design aspects, among them functional verification, geometric design-rule correctness, power management, reliability, and design for manufacturing (DFM). The platform is targeted to support analog, digital, and mixed-signal designs, enabling efficient scalability and process migration. GBT plans to apply its ML technology to ensure fast performance, especially with today's very large ICs in the domains of AI, IoT, and data processing.
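To make one of the listed tasks concrete, here is a minimal sketch of a geometric design-rule check: flagging pairs of layout shapes that sit closer than a minimum-spacing rule allows. This is a generic illustration, not GBT's platform, and the rule value and shapes are hypothetical.

```python
# Minimal geometric DRC sketch: minimum edge-to-edge spacing between
# axis-aligned rectangles on one layout layer. Illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned rectangle in layout units: (x1, y1) lower-left, (x2, y2) upper-right."""
    x1: float
    y1: float
    x2: float
    y2: float

def spacing(a: Rect, b: Rect) -> float:
    """Minimum edge-to-edge distance between two rectangles (0 if they touch or overlap)."""
    dx = max(a.x1 - b.x2, b.x1 - a.x2, 0.0)
    dy = max(a.y1 - b.y2, b.y1 - a.y2, 0.0)
    return (dx * dx + dy * dy) ** 0.5

def check_min_spacing(shapes: list[Rect], min_space: float) -> list[tuple[Rect, Rect]]:
    """Return all pairs of shapes that violate the minimum-spacing rule."""
    violations = []
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            # Touching/overlapping shapes are handled by other rules (e.g., short checks).
            if 0 < spacing(a, b) < min_space:
                violations.append((a, b))
    return violations

layer = [Rect(0, 0, 10, 2), Rect(10.5, 0, 20, 2), Rect(0, 8, 10, 10)]
print(check_min_spacing(layer, min_space=1.0))  # first two rects are only 0.5 apart
```

Production DRC engines use spatial indexes and run thousands of such rules across billions of shapes; the appeal of the platform described above is unifying checks like this with verification, power, and DFM analysis in one environment.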
While past information technology (IT) advances have transformed society, future advances hold great additional promise. For example, we have only just begun to reap the changes from artificial intelligence, especially machine learning, with profound advances expected in medicine, science, education, commerce, and government. Underlying this impact, and all too often forgotten, are the dramatic improvements in the programmable hardware. Hardware improvements deliver performance that unlocks new capabilities. However, unlike in the 1990s and early 2000s, tomorrow's performance aspirations must be achieved with much less help from technology scaling (Moore's Law and Dennard scaling).
History tells us that scientific progress is imperfect. This adds noise to the marketplace of ideas and often means there is inertia in recognizing promising directions of research. In the field of artificial intelligence (AI) research, this article posits that it is tooling that has played a disproportionately large role in deciding which ideas succeed and which fail. What follows is part position paper and part historical review. I introduce the term "hardware lottery" to describe when a research idea wins because it is compatible with available software and hardware, not because the idea is superior to alternative research directions. Choices about software and hardware have often played decisive roles in deciding the winners and losers in early computer science history. These lessons are particularly salient as we move into a new era of closer collaboration between the hardware, software, and machine-learning research communities. After decades of treating hardware, software, and algorithms as separate choices, the catalysts for closer collaboration include changing hardware economics, a "bigger-is-better" race in the size of deep-learning architectures, and the dizzying requirements of deploying machine learning to edge devices. Closer collaboration is centered on a wave of new-generation, "domain-specific" hardware that optimizes for the commercial use cases of deep neural networks. While domain specialization creates important efficiency gains for mainstream research focused on deep neural networks, it arguably makes it even more costly to veer off the beaten path of research ideas.
The website of the analytics company TrendForce has published a forecast identifying 10 main trends expected across various segments of the high-tech industry in 2022. The first is the ongoing development of micro-LED and mini-LED displays. According to analysts, "a significant number of technical bottlenecks in micro-LED development will still persist in 2022," which will keep the cost of producing this type of display "sky-high." Analysts are confident that "more advanced AMOLED technology and under-display cameras will usher in a new phase of the smartphone revolution." Retail prices for foldable models are also expected to fall within the range of conventional flagship models, which should boost sales.