Specialized replicated compute accelerators (RCAs) are multiplied up by placing multiple copies per ASIC, multiple ASICs per server, multiple servers per rack, and multiple racks per datacenter. The server controller can be an FPGA, a microcontroller, or a Xeon processor. Power delivery and cooling systems are customized to the ASIC's needs. If required, DRAM can be placed on the PCB as well. Each ASIC interconnects its RCAs using a customized on-chip network.
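The replication described above compounds multiplicatively at each level of the hierarchy. A minimal sketch, using hypothetical replication factors (the text gives no specific counts), shows how the number of deployed RCAs is computed:

```python
# Hypothetical replication factors at each level of the hierarchy --
# illustrative only; the article does not give specific counts.
rcas_per_asic = 16
asics_per_server = 8
servers_per_rack = 40
racks_per_datacenter = 100

# Total RCAs is the product of the per-level replication factors.
total_rcas = (rcas_per_asic * asics_per_server
              * servers_per_rack * racks_per_datacenter)
print(total_rcas)  # 512000 RCAs in this hypothetical datacenter
```

Because the factors multiply, a modest increase at any single level (say, doubling RCAs per ASIC) doubles the datacenter-wide total.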
From the simple embedded processor in your washing machine to powerful processors in data center servers, most computing today takes place on general-purpose programmable processors, or CPUs. CPUs are attractive because they are easy to program and because large code bases exist for them. The programmability of CPUs stems from their execution of sequences of simple instructions, such as ADD or BRANCH; however, the energy required to fetch and interpret an instruction is 10x to 4000x more than that required to perform a simple operation such as ADD. This high overhead was acceptable when processor performance and efficiency were scaling according to Moore's Law.32 One could simply wait and an existing application would run faster and more efficiently. Our economy has become dependent on these increases in computing performance and efficiency to enable new features and new applications. Today, Moore's Law has largely ended,12 and we must look to alternative architectures with lower overhead, such as domain-specific accelerators, to continue scaling performance and efficiency. There are several ways to realize domain-specific accelerators, as discussed in the sidebar on accelerator options. A domain-specific accelerator is a hardware computing engine that is specialized for a particular domain of applications. Accelerators have been designed for graphics,26 deep learning,16 simulation,2 bioinformatics,49 image processing,38 and many other tasks. Accelerators can offer orders-of-magnitude improvements in performance/cost and performance/W compared to general-purpose computers. For example, our bioinformatics accelerator, Darwin,49 is up to 15,000x faster than a CPU at reference-based, long-read assembly. The performance and efficiency of accelerators are due to a combination of specialized operations, parallelism, efficient memory systems, and reduction of overhead.
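The overhead argument above can be made concrete with a little arithmetic. The sketch below (a simplified model, not from the article) treats instruction fetch/decode as a fixed energy cost in units of one ADD, and shows how a specialized engine that performs many operations per instruction amortizes that cost away:

```python
def energy_per_op(overhead_ratio, ops_per_instruction):
    """Energy per useful operation, in units of one ADD, when each
    instruction carries a fetch/decode overhead of `overhead_ratio` ADDs."""
    return 1 + overhead_ratio / ops_per_instruction

# A scalar CPU: one ADD per instruction, with a hypothetical 100x overhead
# (the article cites a 10x-4000x range).
print(energy_per_op(100, 1))     # 101.0 -- overhead dominates the ADD itself

# A specialized engine amortizing the same overhead over 1000 ops per
# "instruction" (e.g., one wide, specialized operation).
print(energy_per_op(100, 1000))  # 1.1 -- close to the raw cost of the ADD
```

This is one simple way to see why specialized operations and parallelism translate directly into energy efficiency: the fixed interpretation cost is paid once per instruction, not once per operation.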
Domain-specific accelerators7 are becoming more pervasive and more visible because they are one of the few remaining ways to continue improving performance and efficiency now that Moore's Law has ended.22 Most applications require modification to achieve high speedup on domain-specific accelerators, because these applications have been highly tuned to balance the performance of conventional processors with their memory systems.
Apple pioneered the voice revolution in 2011 with the introduction of Siri in its iPhone 4s. Today, you tell your iPhone 11, "Hey Siri, play Bruce Springsteen on Spotify," and it responds, "I can't talk to Spotify, but you can use Apple Music instead," politely displaying options on the screen as shown in the figure here. Or, you tell one of your five Amazon Echo devices at home, "Alexa, add pumpkin pie to my Target shopping list," then "order AA Duracell batteries," and it adds pumpkin pie and Amazon Basics batteries to your Amazon shopping cart, ignoring your request to shop at Target and be loyal to Duracell. You are the consumer, but your choices have been ignored. Or, consider you are a brand manager.
From foreign intervention in free elections to the rise of the American surveillance state, the Internet has transformed the relationship between the public and private sectors, especially democracy's public sphere. The global pandemic only further highlights the extent to which technological innovation is changing how we live, work, and play. What has too often gone unacknowledged is that the same revolution has produced a series of conflicts between our desires as consumers and our duties as citizens. Left unaddressed, the consequence is a moral vacuum that has become a threat to liberal democracy and human values. Surveillance in the Internet Age, whether by governments or companies, often relies on algorithmic searches of big data.
Checks are needed to guide the development of guardrails for ethical and responsible community-engaged computing research. The era of "move fast and break things" can produce false starts, injured communities, and widespread techlash. The tech sector can become more socially conscious and community-engaged by drawing on research from universities, computing researchers, and professionals. For example, smart cities might increase efficiency and improve quality of life, but for whom?10 Research shows how smart city initiatives can harm certain groups through, for example, facial recognition technologies that misidentify, produce ethnic bias and discrimination, or create opportunities for abuse.5 Technology benefits do not always accrue evenly across community members. Ethics rarely keeps pace with technological innovation.
Since the mid-1960s, intellectual property (IP) law specialists have debated whether computers or computer programs can be "authors" whose outputs can be copyrighted.6 The U.S. Congress was so befuddled about this issue in the mid-1970s that it created a special Commission on New Technological Uses of Copyrighted Works (CONTU) to address this and a few other computer-related issues.4 A second burst of interest in AI authorship broke out in the mid-1980s. Congress once again commissioned a study, this time from its Office of Technology Assessment (OTA), to address this and other controversial computer-related issues. OTA did not offer an answer to the question, perhaps in part because at that time, it was a "toy problem" because no commercially significant outputs of AI or other software programs had yet been generated.5
As artificial intelligence (AI) techniques advance, they are beginning to automate tasks that, until recently, only humans could perform--tasks such as translating text from one language to another or making medical diagnoses. It seems only logical to turn that computer power on computers themselves and use AI to automate programming. In fact, computer scientists are working on just that idea, using various AI techniques to develop new methods of automating the writing of code. "The ultimate goal of this is that you would have professional software engineers not actually write code anymore," says Chris Jermaine, a professor of computer science at Rice University in Houston, TX. Instead, the engineer would tell a computer what a piece of software should do, and the AI system would write the code, perhaps stopping along the way to pose questions to the engineer.
"Challenge yourself and reach for the highest bar. If you succeed, keep pushing the boundaries." This is what my friend Hassan Hajji advised when I started my career at IBM Research Tokyo in 2002, and these words have been a guiding force in my career ever since. At IBM, I was challenged to learn as much as possible about the research process in an industrial lab (prototyping ideas, patenting, publishing results), and it dovetailed nicely with my desire to work toward a Ph.D. in systems biology. After receiving my doctorate, which allowed me to enhance my skills in computational and mathematical analysis for understanding complex biological systems, I was ready for a new challenge. I left Japan to work in the U.K. at a small startup, ecrebo, which provides a coupon-issuing system for retailers who seek to attract customers based on their individual purchasing habits. I was responsible for developing a backend server for the coupon system. It had to analyze the contents of a receipt, determine whether it met the conditions for issuing a coupon, and return a response within three seconds, including communication time with the POS system.
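The coupon-issuing flow described above (analyze the receipt, check the issuing conditions, respond within a latency budget) can be sketched roughly as follows. All names, rule shapes, and thresholds here are hypothetical; the article does not describe ecrebo's actual API:

```python
import time

def qualifies(receipt_items, rule):
    """Hypothetical check: does the receipt meet a minimum-spend
    condition in the rule's product category?"""
    spend = sum(item["price"] for item in receipt_items
                if item["category"] == rule["category"])
    return spend >= rule["min_spend"]

def issue_coupon(receipt_items, rule, deadline_s=3.0):
    """Evaluate the rule and return a coupon (or None), checking that
    processing stayed within the end-to-end latency budget."""
    start = time.monotonic()
    coupon = {"discount": rule["discount"]} if qualifies(receipt_items, rule) else None
    elapsed = time.monotonic() - start
    # The POS integration budgeted three seconds end to end; real code
    # would also account for network time to and from the POS terminal.
    assert elapsed < deadline_s
    return coupon

receipt = [{"category": "grocery", "price": 12.5},
           {"category": "fuel", "price": 30.0}]
rule = {"category": "grocery", "min_spend": 10.0, "discount": 0.05}
print(issue_coupon(receipt, rule))  # {'discount': 0.05}
```

In practice, the hard part of such a system is keeping rule evaluation fast enough under load that the three-second budget still holds after network round trips to the POS terminal.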